
A Practical Roadmap to AI-Driven Testing

2026-01-07 05:26:17

The role of a QA Engineer is shifting. We are moving from "finding bugs" to "preventing them," and Artificial Intelligence is the accelerator for this change.

If you are a QA Engineer wondering how to integrate AI into your workflow without getting overwhelmed, this guide is for you. We’ll look at where AI helps immediately and provide a step-by-step roadmap to future-proof your career.

Why AI Matters in QA

Traditional automation (using tools like Cypress or Playwright) is powerful but brittle. Selectors change, tests flake, and maintenance eats up 40% of our time. AI addresses these pain points by introducing:

  • Self-Healing Scripts: Tests that automatically fix broken selectors.
  • Visual AI: Detecting UI bugs that standard assertions miss.
  • Test Generation: Writing boilerplate code instantly.

The Roadmap: From Manual to AI-Augmented QA

Here is a clear, actionable path to adopting AI in your QA journey.

Phase 1: The "Copilot" Era (Start Here)

Goal: Increase speed and reduce repetitive tasks.

  • Prompt Engineering for Test Cases: Stop writing test cases from scratch. Feed your requirements into ChatGPT or Claude and ask for "Negative test scenarios," "Edge cases," and "Gherkin syntax."
  • Code Generation: Use GitHub Copilot or Codeium in your IDE. If you are writing a Cypress test, type the test name and let the AI suggest the logic. It handles the boilerplate so you can focus on the assertions.
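
For example, typing a descriptive test name is often enough for the assistant to scaffold the rest. A minimal sketch of the kind of suggestion to expect; the data-cy selectors and the /login route are hypothetical placeholders for your own app:

// A sketch of the boilerplate an AI assistant typically suggests from the
// test name alone. Selectors and routes are placeholders.
describe('Login form', () => {
  it('shows an error for invalid credentials', () => {
    cy.visit('/login');
    cy.get('[data-cy=email]').type('user@example.com');
    cy.get('[data-cy=password]').type('wrong-password');
    cy.get('[data-cy=submit]').click();
    // The assertion is the part worth writing yourself
    cy.contains('Invalid credentials').should('be.visible');
  });
});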

Phase 2: Intelligent Automation (Next Steps)

Goal: Reduce test flakiness and maintenance.

  • Self-Healing Tools: Investigate tools that use AI to identify elements even when attributes (IDs/Classes) change.
  • Visual Regression: Integrate tools like Applitools or Percy. Unlike standard pixel-matching, AI-powered visual testing ignores minor rendering differences (like anti-aliasing) and focuses on actual layout shifts and missing elements.
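
As a rough idea of what that integration looks like, here is a sketch using Applitools' Cypress bindings (@applitools/eyes-cypress). It assumes the plugin is installed and an APPLITOOLS_API_KEY is configured; the app and test names are placeholders:

// Sketch of an AI-powered visual check with @applitools/eyes-cypress
describe('Dashboard visuals', () => {
  it('matches the visual baseline', () => {
    cy.eyesOpen({ appName: 'My App', testName: 'Dashboard layout' });
    cy.visit('/dashboard');
    // Compares against the stored baseline, ignoring anti-aliasing noise
    cy.eyesCheckWindow('Full dashboard');
    cy.eyesClose();
  });
});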

Phase 3: Predictive & Autonomous QA (Advanced)

Goal: Optimize the pipeline.

  • Predictive Test Selection: Instead of running the entire regression suite for every tiny commit, use AI tools that analyze code changes and select only the relevant tests to run.
  • Autonomous Agents: Experiment with agents that can explore an application without a pre-written script to find crashes and logical errors.
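
To make the predictive-selection idea concrete, here is a toy TypeScript sketch that runs only the Cypress specs whose source files changed in the last commit. Real AI tools learn this mapping from project history; the name-based convention below is a hypothetical stand-in:

// Toy sketch: select specs based on files changed in the last commit.
// The src-to-spec naming convention is hypothetical.
import { execSync } from 'node:child_process';
import { existsSync } from 'node:fs';

const changed = execSync('git diff --name-only HEAD~1', { encoding: 'utf8' })
  .split('\n')
  .filter((f) => f.startsWith('src/') && f.endsWith('.ts'));

// Map src/foo/cart.ts -> cypress/e2e/cart.cy.ts
const specs = changed
  .map((f) => `cypress/e2e/${f.split('/').pop()!.replace('.ts', '.cy.ts')}`)
  .filter((spec) => existsSync(spec));

if (specs.length > 0) {
  execSync(`npx cypress run --spec ${specs.join(',')}`, { stdio: 'inherit' });
} else {
  console.log('No related specs changed; skipping the e2e run.');
}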

Conclusion

AI is not here to replace the QA mindset; it is here to remove the tedious parts of the job. By following this roadmap, you transition from a "Script Writer" to a "Quality Strategist."

Start small today: Open your IDE, turn on an AI assistant, and ask it to refactor your messiest test function.

Stop Pasting Secrets into AI Chat - Use AI-Safe Credentials Instead

2026-01-07 05:22:16

The Awkward Moment Every Developer Knows

You're pair programming with Claude Code, and it asks:

"I need your database credentials to run this migration."

And then you do the thing. You paste your password right into the chat. 🤦

We've all been there. But here's the problem:

  • That secret is now in your conversation history
  • It might end up in logs or training data
  • You can't easily rotate it
  • There's no audit trail

What If AI Could Use Secrets Without Seeing Them?

This is exactly what I built secretctl to solve.

secretctl CLI Demo

Instead of exposing credentials, secretctl injects them as environment variables. The AI assistant can use your secrets to run commands, but never actually sees the plaintext values.

How It Works with Claude Code

Add secretctl as an MCP server in your Claude Code config:

{
  "mcpServers": {
    "secretctl": {
      "command": "secretctl",
      "args": ["mcp-server"],
      "env": {
        "SECRETCTL_PASSWORD": "your-master-password"
      }
    }
  }
}

Now Claude Code can:

✅ List your available secrets
✅ Run commands with injected credentials
✅ See masked values (e.g., ****WXYZ)
❌ Never see actual plaintext values

Example: Database Migration

You: "Run the database migration using my prod-db credentials"

Claude: I'll run the migration with your credentials injected.
        [Executes: secretctl run prod-db -- npm run migrate]

        Migration completed successfully!

The AI executed the command with real credentials, but never saw password123 in the chat.
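
What the migration script itself sees is just ordinary environment variables. A minimal sketch of the consuming side, assuming secretctl exposes the database template's fields under names like DB_HOST (the exact variable names depend on your configuration):

// The secret's fields arrive as plain env vars, injected by secretctl
// for this child process only. Variable names here are assumptions.
import { Client } from 'pg';

const client = new Client({
  host: process.env.DB_HOST,
  port: Number(process.env.DB_PORT ?? 5432),
  database: process.env.DB_DATABASE,
  user: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD, // lives in the process env, never in chat
});

await client.connect();
// ...run migration statements here...
await client.end();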

Why This Matters

Traditional Approach       secretctl Approach
---------------------      -----------------------------
Paste secrets in chat      Secrets injected via env vars
Visible in history         Never exposed to AI
No audit trail             Full audit logging
Hard to rotate             Single source of truth
Risk of leakage            Zero plaintext exposure

Multi-Field Secrets

Real credentials are complex. secretctl supports multi-field secrets with templates:

# Database credentials
secretctl set prod-db --template database
# Stores: host, port, database, username, password

# SSH configurations
secretctl set bastion --template ssh
# Stores: host, port, username, private_key

# API credentials
secretctl set stripe --template api
# Stores: api_key, api_secret, endpoint

Desktop App for Visual Management

Prefer a GUI? secretctl includes a full-featured desktop app:

secretctl Desktop App

  • Visual secret management
  • Sensitive field masking
  • Audit log viewer
  • Cross-platform (macOS, Windows, Linux)

Quick Start

macOS (Homebrew)

brew install forest6511/tap/secretctl

Windows (Scoop)

scoop bucket add secretctl https://github.com/forest6511/scoop-bucket
scoop install secretctl

Linux / Manual

curl -LO https://github.com/forest6511/secretctl/releases/latest/download/secretctl-linux-amd64
chmod +x secretctl-linux-amd64
sudo mv secretctl-linux-amd64 /usr/local/bin/secretctl

Then initialize:

secretctl init
secretctl set MY_API_KEY

Key Features

  • 🔐 Local-first: Your secrets never leave your machine
  • 🤖 AI-Safe Access: MCP integration without plaintext exposure
  • 🛡️ Strong encryption: AES-256-GCM + Argon2id
  • 📝 Audit logging: Track all secret access
  • 📦 Single binary: No dependencies, no server required
  • 🖥️ Desktop App: Full GUI for visual management

Try It Out

GitHub: https://github.com/forest6511/secretctl
Documentation: https://forest6511.github.io/secretctl/

The project is open source (MIT license). Star it if you find it useful!

Have questions or feedback? Drop a comment below or open an issue on GitHub.

Why Most Telecom APIs Fail Before the First Developer Uses Them

2026-01-07 05:12:41

Telecom APIs have never been more visible.

Swagger files are published.
Developer portals are live.
“Open networks” are on every roadmap.

And yet—very few telecom APIs ever make it into real production applications.

Not because developers aren’t interested.
Not because the technology doesn’t exist.

But because most telecom APIs are exposed, not operationalized.

There’s a big difference—and developers feel it immediately.

Exposure Is Not Adoption

From a telecom perspective, exposing an API often feels like the finish line.

The endpoint works.
The documentation loads.
The demo succeeds.

From a developer’s perspective, that’s barely the starting point.

Real adoption only happens when an API behaves like a product, not an interface.

And this is where most telecom APIs quietly fail—before the first serious developer ever commits to using them.

The First Failure: APIs Without a Clear Use Case

Many telecom APIs are published because they can be exposed, not because someone has clearly defined why they should be used.

Developers opening a portal often see things like:

  • Network status endpoints
  • Location or QoS APIs
  • Usage or event hooks

But no answer to the most basic question:

What problem does this solve for me, right now?

Without a concrete use case—payments, identity, messaging workflows, compliance automation—APIs remain technically impressive but commercially irrelevant.

Developers don’t explore APIs for curiosity.
They adopt them to ship features.

The Second Failure: Authentication That Feels Like a Barrier, Not a Gateway

Telecom APIs often inherit enterprise-grade security models that make sense internally—but feel hostile externally.

Common friction points:

  • Long approval cycles just to get credentials
  • Manual key provisioning
  • Static credentials with no clear rotation strategy
  • Limited sandbox access

For developers used to spinning up cloud APIs in minutes, this feels like friction with no payoff.

If the first interaction feels slow or uncertain, most developers simply move on.

Not because the API is bad—but because it’s harder than the alternative.

The Third Failure: No Concept of Lifecycle

Many telecom APIs exist in a strange timeless state.

They’re documented once and then… left alone.

What’s missing:

  • Clear versioning strategy
  • Deprecation timelines
  • Change logs that explain breaking behavior
  • Backward compatibility guarantees

Developers don’t fear change.
They fear unpredictable change.

Without a visible lifecycle, integrating a telecom API feels risky—especially for production systems where outages or billing issues have real consequences.

The Fourth Failure: APIs Without Economics

This is where telecom APIs differ sharply from successful SaaS or fintech platforms.

Often, there’s no clear answer to:

  • How is this API priced?
  • What happens at scale?
  • Are there rate limits tied to business value?
  • Is usage metered transparently?

Developers don’t just need endpoints.
They need predictable economics.

An API that might later trigger unexpected costs, throttling, or commercial renegotiation is an API developers will avoid—even if the technology is solid.

The Fifth Failure: No Feedback Loop

In modern platforms, APIs are observable.

Developers expect:

  • Usage analytics
  • Error transparency
  • Latency visibility
  • Clear failure modes

Many telecom APIs behave like black boxes.

Requests go in.
Responses come out.
But when something breaks, there’s little insight into why.

Without feedback, developers can’t debug, optimize, or trust the integration. And trust—not performance—is what ultimately drives adoption.

The Pattern Behind All These Failures

None of these issues are about networking capability.

They’re about product thinking.

Telecom APIs often come from infrastructure teams whose goal is exposure and compliance. But developers judge APIs by a different standard:

  • Can I understand it quickly?
  • Can I integrate it safely?
  • Can I scale it predictably?
  • Can I explain it to my product team?

When those answers aren’t clear, the API fails long before the first real user shows up.

Where This Is Starting to Change

Some operators and platforms—including teams we work with at TelcoEdge Inc—are beginning to treat APIs not as side artifacts of the network, but as first-class products.

That shift usually includes:

  • Designing APIs around concrete workflows
  • Treating billing, auth, and observability as part of the API—not add-ons
  • Aligning technical exposure with commercial readiness

The technology was never the missing piece.

Execution was.

The Real Question Telecom Needs to Ask

The problem isn’t:

“Why aren’t developers using our APIs?”

The better question is:

“Have we actually built something a developer would bet their product on?”

Until telecom APIs answer that honestly, most of them will continue to fail quietly—before the first line of production code is ever written.

I Tested GLM-4.7 for Two Weeks—Here's What Actually Matters

2026-01-07 05:08:14

Everyone's talking about the new GLM-4.7 benchmarks. 73.8% on SWE-bench. MIT license. 200K context window.

But benchmarks don't tell you what it's like to actually use the thing.

So I spent two weeks building real projects with it—web apps, debugging sessions, UI generation, the works. Here's what I learned that the spec sheet won't tell you.

The Feature That Changes Everything

Most AI coding assistants have a fatal flaw: they forget. Ask them to add authentication to an app you discussed three days ago, and they'll act like they've never heard of your project.

GLM-4.7's "preserved thinking" mechanism actually maintains context across sessions. I tested this by building a full-stack application over multiple days. On day three, when I asked it to add authentication, it referenced architectural decisions from our first conversation.

That never happens with traditional models.

The Real Cost Math

Let me show you what this actually costs:

  • Side project developer: ~$0.74/month
  • 5-person startup: ~$52/month (with caching)
  • Enterprise scale: ~$5,200/month

Compare that to Claude Pro at $20/month per person or enterprise GPT-4 costs of $25,000-35,000/month for similar usage.

The math is honestly ridiculous.

What Actually Works (And What Doesn't)

The good:

  • UI generation that doesn't look like 2010 Bootstrap
  • Multilingual coding that actually understands mixed-language codebases
  • Terminal commands that recover from failures instead of panicking

The reality check:

  • Inference speed is middling (55 tokens/sec)
  • Not quite frontier-level on the hardest reasoning tasks
  • Running locally requires serious GPU hardware

Three Ways to Try It

  1. Easiest: Web interface at chat.z.ai
  2. Best for dev work: Integrate with Claude Code or Cline
  3. Full control: Self-host via Hugging Face + vLLM
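
If you go the self-hosted route, vLLM exposes an OpenAI-compatible HTTP API, so integration is a plain fetch call. A minimal TypeScript sketch, assuming a local server on the default port 8000 and a model id along the lines of zai-org/GLM-4.7 (check the Hugging Face model card for the exact identifier):

// Sketch of calling a self-hosted GLM-4.7 through vLLM's OpenAI-compatible
// endpoint. Base URL and model id are assumptions.
const res = await fetch('http://localhost:8000/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'zai-org/GLM-4.7',
    messages: [{ role: 'user', content: 'Write a binary search in TypeScript.' }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);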

I've tested all three approaches and documented the exact setup process, real-world gotchas, and when each makes sense.

The Bottom Line

GLM-4.7 isn't the most powerful model available. But it might be the most practical for real-world development at scale.

It's the first time an open-source model feels like it was trained for actual work, not demos.

Read the full deep-dive with code examples, benchmarks, and setup guides here: GLM-4.7: The Open-Source LLM That Codes Like a Pro

SitecoreAI Scheduled Tasks - Getting PowerShell Working Right

2026-01-07 05:06:59

In SitecoreAI CMS (FKA XM Cloud), you have the ability to add PowerShell scripts, which is a great way to enhance functionality and run commands that you might have run in other ways in "legacy" Sitecore. You pretty much have the full run of the content and command structures, though you'll have to dig for it sometimes. A good reference can be found at the Sitecore PowerShell site.

This also allows you to set up PowerShell scripts that run as scheduled tasks in Sitecore. For example, I have a script that scans recently updated taxonomy values; if anything changed, it finds the related items connected to those values and publishes them, so the values are updated in Sitecore Search. But I found that after setting up my script and my task, the task would run but the script wouldn't fire.

I didn't see this in the documentation directly, but I did find the cause through an AI search on the topic. To get the task to fire the script, the script item has to live in a specific area of the content tree, and out of the box that structure isn't fully in place. To set it up:

  1. Go to the /sitecore/system/Modules/PowerShell/Script Library/SPE/Maintenance folder.
  2. Create a folder called "Scheduled Tasks" - the name is the important thing here.
  3. Place your script in this folder.
  4. Set up your scheduled task with the PowerShell Scripted Task Schedule insert option and choose your script.

The key is the script location. My enhancement suggestion to Sitecore would be to add the "Scheduled Tasks" folder to their default IAR setup, but you can easily add it to your own. Again, it's the name that matters, not the item template, ID, or the like.

I hope this helps you out in getting your scheduled tasks going!

Building a Secure Demo Banking App [Part 1]

2026-01-07 05:04:30

When building software projects or applications, it is important to be aware of how quickly technology evolves. Many development tools and programming languages ship significant updates roughly every six months, so we need to keep up with newer versions that may introduce new patterns or concepts; otherwise we fall behind.

However, no matter how fast technology changes, the foundational concepts of software development almost always stay the same. Now, with the AI revolution, generic and simple software projects, such as CRUD apps, no longer provide the substantial value that we, as Software Engineers, expect to gain in terms of knowledge and critical thinking. To build something powerful and reliable, a project should follow the best coding and security practices in every phase of the development cycle.

Why the Demo Banking App project?

I decided to build a fintech-related application, for demo purposes, for the following reasons:

  • It deepens my Full Stack skills with technologies widely used in fintech.
  • It represents a substantial portfolio application that can be showcased publicly. Because it covers almost all the ground in terms of planning, coding structure, security, architectural and design patterns, and CI/CD, I call it "The Golden Project". For simplicity, the application will be named "Demo Banking App".
  • I plan to document each important feature and component to reinforce my understanding and adapt my approach as roadblocks arise throughout the build.

Project Goals & Scope

The first version of the "Demo Banking App" will include the features noted below:

  • User registration and authentication
  • User profile
  • Account dashboard
  • Transaction history
  • Payments
  • Notifications
  • Fraud-screening

Technology Stack

While deciding on the stack for this kind of software project, I took into consideration its relevance and usage in real-world fintech apps, my previous experience with some of the technologies and tools, and today's software best practices. Below is the breakdown of the stack:

  • Frontend: React, TypeScript, Tailwind CSS.
  • Backend: Spring Boot
  • Database: PostgreSQL
  • Authentication: OAuth2 / OpenID Connect
  • Observability: Prometheus, Grafana, OpenTelemetry, ElasticSearch

Architecture

The architecture is based on microservices and the Saga pattern. The reason behind this choice is the project's complexity: the design needs to ensure scalability and keep transactions consistent across services, simulating a real-world banking app.
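
To make the Saga idea concrete, here is a language-agnostic sketch in TypeScript (the services themselves will live in Spring Boot; all names are hypothetical). Each step carries a compensating action that undoes it if a later step fails:

// Orchestration-style Saga sketch: completed steps are rolled back via
// their compensating actions when a later step fails.
type SagaStep = {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
};

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.action();
      completed.push(step);
    } catch (err) {
      // Undo completed steps in reverse order, then surface the failure
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err;
    }
  }
}

// Hypothetical payment flow
await runSaga([
  {
    name: 'reserve-funds',
    action: async () => console.log('funds reserved'),
    compensate: async () => console.log('funds released'),
  },
  {
    name: 'record-transaction',
    action: async () => console.log('transaction recorded'),
    compensate: async () => console.log('transaction voided'),
  },
]);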

Demo bank application architecture

Security from the Beginning

Even though it is a banking application for demo purposes, it is crucial to apply the best security practices. For version #1, some of them include (a short sketch follows the list):

  • Using environment variables instead of plain-text values
  • CORS restriction
  • Rate limiting on login routes
  • Input validation
  • Password encryption
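
As a taste of what two of these look like in code, here is a minimal TypeScript sketch: configuration read from environment variables instead of plain-text values, and input validation with the zod library. The schema and field names are illustrative, not part of the project yet:

// Two of the practices above in miniature. Field names and limits are
// illustrative assumptions.
import { z } from 'zod';

// Never hard-code credentials; fail fast if the environment is incomplete
const dbUrl = process.env.DATABASE_URL;
if (!dbUrl) throw new Error('DATABASE_URL is not set');

// Validate untrusted input before it reaches business logic
const paymentSchema = z.object({
  amount: z.number().positive().max(1_000_000),
  currency: z.enum(['EUR', 'USD']),
  recipientIban: z.string().regex(/^[A-Z]{2}\d{2}[A-Z0-9]{11,30}$/),
});

const payment = paymentSchema.parse({
  amount: 250,
  currency: 'EUR',
  recipientIban: 'DE89370400440532013000',
});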

Logo and Colors

Demo bank app logo

I designed the logo of the demo banking app in a simple way to emphasize minimalism while keeping a realistic corporate touch.

The colors, intended to serve as the color scheme for the whole app, were inspired by a German neobank.

What's next!

I will post about the project's progress as important features are completed.

To reach out to me: