The Practical Developer

A constructive and inclusive social network for software developers.

We built an open-source tool that lets you click on UI bugs in the browser and have AI agents fix them automatically

2026-02-26 18:48:19

We kept running into the same problem: we see a bug in the browser, but explaining it to our AI coding agent is painful.

"The third button in the second card, the padding is off, the text is clipped..."

Sound familiar? You see the problem instantly, but translating a visual issue into something an AI agent can act on takes longer than fixing it yourself.

So we built ui-ticket-mcp.

What it does

ui-ticket-mcp is an MCP server that bridges the gap between what you SEE in the browser and what your AI agent sees in code.

You click on the broken element, write a comment like "padding is wrong" or "this text is clipped", and your AI coding agent picks it up with full context:

  • CSS computed styles
  • DOM structure and attributes
  • CSS selectors (unique path to the element)
  • Bounding box and position
  • Accessibility info
  • Sibling elements for context

The agent reads the review, finds the source file, fixes the code, and marks the review as resolved. Full loop — no copy-pasting error descriptions.

How it works

The system has two parts:

1. Review panel (browser side)

A web component (<review-panel>) that you add to your app. It gives you a floating panel where you can:

  • Click any element to annotate it
  • Drag to select multiple elements
  • Write comments with tags (bug, suggestion, question)
  • See numbered markers on reviewed elements (red = open, green = resolved)

2. MCP server (agent side)

A Python MCP server that exposes 10 tools to your AI agent:

  • get_pending_work — all open reviews grouped by page
  • get_reviews — reviews filtered by page
  • get_annotated_reviews — reviews with full element metadata
  • find_source_file — locate the source file for a page
  • add_review / resolve_review — manage reviews
  • and more

The agent calls get_pending_work(), gets the reviews with all the element context, and knows exactly what to fix and where.
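To make this concrete, here is a minimal sketch of that call made directly with the official MCP Python SDK, roughly what an agent does under the hood. This is an illustration only, not code shipped with ui-ticket-mcp.

# Minimal sketch: calling ui-ticket-mcp's tools with the MCP Python SDK,
# the same way an MCP-compatible agent does under the hood.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="uvx",
        args=["ui-ticket-mcp"],
        env={"PROJECT_ROOT": "/path/to/your/project"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # All open reviews, grouped by page.
            pending = await session.call_tool("get_pending_work", {})
            print(pending.content)

asyncio.run(main())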

Setup (2 minutes)

The easy way

Tell your AI agent:

"Add ui-ticket-mcp to the project"

It handles everything — adds the MCP server config and the review panel to your app.

The manual way

Step 1: Add the MCP server

Add this to your .mcp.json:

{
  "mcpServers": {
    "ui-ticket-mcp": {
      "command": "uvx",
      "args": ["ui-ticket-mcp"],
      "env": {
        "PROJECT_ROOT": "/path/to/your/project"
      }
    }
  }
}

No pip install needed — uvx handles it automatically.

Step 2: Add the review panel

Option A — npm:

npm i ui-ticket-panel
import 'ui-ticket-panel';

Option B — CDN script tag (works with any stack):

<script type="module" src="https://unpkg.com/ui-ticket-panel/dist/bundle.js"></script>

That's it. Open your app, click the review panel, start annotating.

What the agent sees

When your agent calls get_annotated_reviews, it gets something like this:

{
  "page_id": "login",
  "text": "Submit button padding is off",
  "tag": "bug",
  "metadata": {
    "tagName": "BUTTON",
    "selector": "form > button.btn-primary",
    "computedStyles": {
      "padding": "4px 8px",
      "fontSize": "14px"
    },
    "boundingBox": { "x": 320, "y": 480, "width": 120, "height": 32 },
    "accessibility": { "role": "button", "name": "Submit" }
  }
}

The agent doesn't have to guess which element you mean. It has the exact selector, styles, and DOM context to locate the source and fix it.
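As a hypothetical illustration (not the package's actual find_source_file logic, and with assumed file extensions), an agent could narrow things down to source files that mention the selector's class:

# Hypothetical sketch: list source files mentioning the reviewed element's
# class ("btn-primary" from the selector above).
from pathlib import Path

def find_candidates(project_root: str, needle: str = "btn-primary"):
    exts = {".tsx", ".jsx", ".vue", ".svelte", ".html", ".css"}
    for path in Path(project_root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            if needle in path.read_text(errors="ignore"):
                yield path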

Works with everything

AI agents: Claude Code, Cursor, Windsurf, or any MCP-compatible agent

Frameworks: React, Angular, Vue, Svelte, plain HTML — the review panel is a framework-agnostic web component

Storage: Reviews are stored in a single SQLite file (.reviews/reviews.db) that you can commit to git and share with your team

What we use it for

  • Quick UI bug fixes during development — click, comment, let the agent fix it
  • Design review sessions — go through the app, annotate everything, then let the agent batch-fix
  • Team collaboration — one person reviews in the browser, the agent resolves the tickets


We'd love feedback. What's missing? What would make this more useful for your workflow?

Vuetify 4 is Live Now

2026-02-26 18:46:37

The wait is over: Vuetify 4 has officially been released. This major version marks a fundamental evolution of the world’s most popular Vue UI library. Based on the January 2026 Update and the newly published V4 Release Notes, this version is built for the modern web, focusing on native CSS power and streamlined performance.

Why was this update needed?

As the web moves toward native CSS features and sophisticated design systems, the framework needed to shed its legacy dependencies. Vuetify 4 addresses long-standing developer pain points:

  • Specificity Wars: No more fighting !important flags in your style overrides.
  • Modern Aesthetics: Moving from the older Material Design standards to a more refined, fluid design language.
  • Bundle Bloat: Transitioning from global resets to surgical, component-level normalization.

Material Design 3 (MD3) Integration

The most visible change is the full-scale adoption of Material Design 3. MD3 brings a more organic feel, better accessibility, and a highly customizable color system.

  • Dynamic Color Support: The framework now handles color transparency natively using CSS color-mix() and the relative color syntax (rgb(from …)).
  • New Elevation System: Vuetify has streamlined its elevation system from 25 levels down to 6 distinct levels (0–5), providing a cleaner and more consistent visual hierarchy.

Core Architectural Shifts

CSS Cascade Layers (@layer)

Vuetify 4 fully embraces CSS Cascade Layers. By using five top-level layers (vuetify-core, vuetify-components, vuetify-utilities, etc.), the framework guarantees that your custom styles take priority. This completely eliminates the need for high-specificity selectors when you want to tweak a component’s look.

System Theme by Default

The default theme has shifted from “light” to “system.” Applications built with V4 will now automatically respect the user’s operating system preferences. This not only improves user experience but also fixes common SSR “flicker” issues where a theme would snap from light to dark on page load.

Performance & Layout Improvements

  • Refined Breakpoints: Breakpoints have been adjusted to better match modern device sizes. For instance, md is now 840px (down from 960px) and xl is now 1545px (down from 1920px).
  • VContainer Optimization: The max-width calculations for v-container now round to the nearest 100px, creating more predictable and stable layouts across different viewports.
  • Flex over Grid: Core components like VBtn and VField have reverted from CSS Grid back to Flexbox to resolve gap-control limitations and improve rendering consistency in complex forms.

Developer Experience (DX) Enhancements

Vuetify 4 makes your code cleaner and your workflow faster:

  • Unwrapped Slot Variables: In VForm, variables like isValid are no longer refs. They are passed as plain values to slots, removing the need for .value in your templates.
  • Unified API: The slot naming for items in VSelect, VCombobox, and VAutocomplete has been standardized to internalItem, matching the pattern used in VDataTable.
  • Sass Variable Cleanup: Redundant Sass variables have been consolidated into more logical groups (e.g., $field-gap), making theme-level adjustments straightforward.

Conclusion

Vuetify 4 is a commitment to long-term stability and modern development standards. By moving toward native CSS features and MD3, it eliminates technical debt and provides a faster, more flexible foundation for any Vue.js project. Whether you are building a simple landing page or a massive enterprise dashboard, V4 is designed to stay out of your way and let you build features faster.

Our Commitment to Vuetify 4

At CodedThemes, we are always working to keep our users on the cutting edge. We are currently updating our flagship products (Berry, Mantis, and Able Pro) to fully support Vuetify 4. Stay tuned for these upcoming releases!

Prompt Driven Development (PDD): A Manifesto Against Comfortable Guessing

2026-02-26 18:44:19

We learned TDD because code lies. Or more precisely: code tells you everything you allow it to tell.
Now we have LLMs — and they will still tell you a coherent story even when the premise is already wrong.
Time for a small process hack that doesn’t feel like a hack.

The Scene

You’re sitting there. Coffee. Tabs. A timeline full of “AI will replace developers.”
You enter a prompt. The model delivers code. The code looks good.
And that is exactly why it’s dangerous.

Because LLMs are polite. They rarely contradict you. They deliver. They fill gaps.
And gaps are not romantic in project management. Gaps are budget.

“You wanted a solution. You got an answer. That is not the same thing.”

What PDD Is (and What It Is Not)

Prompt Driven Development (PDD) treats prompts as verifiable specifications.
Not as wish lists. Not as chat. But as a contract that must be measured against reality.

PDD Is Not

  • not “prompt engineering” (more words, better magic)
  • not “the LLM writes my code” (delegation without liability)
  • not “generate tests and hope”
  • not a workflow for buzzword pitches

PDD Is

  • specification first — as a prompt
  • tests as referees (not decoration)
  • iteration on the prompt until it becomes measurable
  • code only when “done” can be defined

The Loop

Classic (and unfortunately common):
Prompt → Code → Fix → More fixes → “Why is this taking so long?”

PDD reverses the order:

  • Prompt (as specification)
  • Test (as a reality sensor)
  • Prompt (iteration: precision instead of poetry)
  • Code (implementation inside the test cage)

If you cannot measure the system, you do not control it.
You are only telling a story about control.

The Five Theses (The Manifesto)

1) Prompts Are Artifacts

A prompt is not a conversation. It is a document. Versionable. Reviewable. Worth criticizing.
If your prompt disappears in chat history, it effectively does not exist in the project.

2) Unclarity Is the Real Bug

When something goes wrong, it is rarely “just a bug.”
Often it is a fog word.
“Fast.” “Intuitive.” “Simple.” — these words are smoke grenades: no explosion, just more sprint meetings.

3) Tests Validate the Prompt, Not Just the Code

If a test fails, the first question is not “Who broke this?”
It is: “What did we actually demand?”

4) LLMs Are Co-Authors, Not Oracles

The model delivers possibilities. You deliver responsibility.
PDD is the method that separates these roles instead of romantically blending them.

5) First the Contract, Then the Implementation

PDD is a return to something old-fashioned:
a Definition of Done that can be executed.

“If my prompt is not testable, it is not a specification.”

Practice: How a Prompt Becomes Testable

A testable prompt specifies:

  • goal (why does the feature exist?)
  • input/output (what data goes in, what comes out?)
  • constraints (performance, security, offline, KISS/YAGNI)
  • edge cases (how does it fail correctly?)
  • non-goals (what is deliberately not built?)
  • acceptance criteria (measurable: what does ‘done’ mean?)

If you only write “just do it,” you will get “just do it” quality back.
And then you will argue for three days about results instead of two hours about requirements.
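As a concrete illustration, here is what such a contract can look like as executable tests. The slugify function, its module, and the 50-character limit are invented for this example, not taken from a real project.

# Illustrative only: a hypothetical prompt contract turned into tests.
# Prompt excerpt: "slugify(title) returns lowercase words joined by '-',
# output never exceeds 50 characters, and empty input raises ValueError."
import pytest
from myapp.text import slugify  # hypothetical module under test

def test_slug_is_lowercase_and_hyphenated():
    assert slugify("Hello World") == "hello-world"

def test_slug_never_exceeds_50_characters():
    assert len(slugify("word " * 40)) <= 50

def test_empty_input_fails_loudly():
    with pytest.raises(ValueError):
        slugify("")

If the generated implementation passes these tests, "done" is no longer a matter of opinion.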

Anti-Patterns (You Will Recognize Them)

  • The novel prompt: too long, too vague, too much world — and not a single hard edge.
  • The good-vibes test: “should feel good” — nice, but not executable.
  • Tool fetishism: new models, new plugins — but no Definition of Done.
  • The hallucination deal: “it’ll probably be right” — until production disagrees.

Why This Matters Right Now

Because AI does not only make code faster.
It also makes failure faster — if we continue to specify in fog.

PDD is not a religion. It is a guardrail.
And guardrails are not sexy. They are what stop you from pushing a “small change” to production at 11:48 pm.

“You want speed? Then stop investing in unclarity.”

Closing

If your prompt is not testable, it is not a specification.
And if you have no specification, you are not building faster — you are just building into emptiness sooner.

If you want to try this: take a mini feature, write the prompt like a contract,
let tests emerge, iterate on the prompt, only then write code.
One round. Then another.
And suddenly “AI development” feels less like gambling.

📌 Publication Metadata

Original Publication:
January 13, 2026
Author: Benjamin Lam (blame76)
Categories: Prompt-Driven Development (PDD)
Original URL: https://benjamin-lam.de/2026/01/13/prompt-driven-development-pdd/
Translated with ChatGPT

Will AI Replace You (Yes, You) in the Near Future?

2026-02-26 18:42:05

"A-a-a-a!!!111 We're all going to be fired!" one part of the internet panics. The second part is secretly afraid but hopes for the best. The third part goes off to fix the pipes or wiring at their neighbors' houses and sighs, "I wish AI would replace me so I could finally spend time with my grandchildren." And the fourth part thinks that if they surround themselves with LLMs and agents and start flooding social media feeds with posts on the topic, they'll get away with it.

This article is an attempt to analyze the prospects of replacing humans with AI depending on the type of occupation.

So, professions are divided according to interaction:

  • Human ↔ human: baristas, waiters, doctors, politicians, managers.
  • Human ↔ tool: turners, fitters, plumbers, builders.
  • Human ↔ computer: programmers, designers, copywriters.

Some professions combine these interactions. For example, a doctor may look at CT scans of patients or read blood test results on a computer and make a diagnosis based on them, or communicate with the patient, reassuring them that a slight runny nose is unlikely to kill them in the near future.

So which of them will AI replace? Let's consider each option:

Human ↔ Computer

Obviously, those who interact exclusively with computers—designers, programmers, testers—are the first to be affected. If the job is related to computers and nothing else, then such an employee is, as offensive as it may sound, an interface between the task and the computer on which they perform that task. Moreover, the task is also most often assigned via a computer and is formal. And the less such an employee interacts with real people, the more likely they are to be replaced by a neural network.

This is because there is nothing to prevent their manager from setting the same tasks via LLM and monitoring their completion. The only question is the sophistication of the model and the accuracy/detail of the task setting.

If, on the other hand, the employee focuses more on human communication, for example, by understanding the domain, clarifying the task, highlighting potential problem areas or edge cases, clarifying concerns, and taking responsibility for a certain function (i.e., moving to human ↔ human interaction), then the chances of them being fired decrease.

Why:

  • The employee's visibility to their manager increases
  • Their expertise in the subject area grows
  • The manager sees such an employee as one of the key members of their team, someone they can “rely on” in a crisis

Of course, no one is immune to the possibility that their entire department could be eliminated. But if the layoffs are selective, the chances of staying will be much higher than for their sociophobic colleagues. Even if the entire team is laid off, the connections built up during their time at the company will help them find a new job much more quickly and easily.

Conversely, if an employee does not communicate with anyone, remains silent at meetings, and sits with their camera turned off, then even if they perform their tasks well, they are more likely to be a candidate for replacement. As sad as it may sound. Because no matter how good a person and employee they are, business puts money first. And obviously, paying an LLM provider $200 a month is simply more profitable than paying a developer at least ten times that.

Human ↔ Tool

You have to admit, it's hard to imagine a situation where AI could replace, say, a bricklayer or a furniture assembler. Theoretically, of course, it's possible to imagine and even implement such a thing. But it just wouldn't be profitable: a robot with the necessary skills would cost far more to operate than a human, not to mention the cost of development. In addition, humans are much better at adapting to non-standard situations. If something doesn't fit properly, a furniture assembler can quickly modify the necessary part right at the customer's place, and everyone will be happy. Or a plumber may find during installation that a certain fitting needs to be replaced. For a robot to be able to do that... well, that would take at least another decade of progress, and I suspect much longer. So AI does not yet pose a threat to skilled trade workers. For now.

Human ↔ Human

Finally, we come to the most interesting part. Thousands of books have been written about human interaction, and it would be nearly impossible to summarize them all in a single paragraph. It would be like trying to push a log through the eye of a needle. But let's give it a try.

No AI can replace human communication. Some may argue that AI can replace a psychologist with whom we communicate, for example, via video, and that would be perfectly fair. But here we return to the interaction between humans and computers. Intonation, a joke that accidentally pops up out of context, a handshake, a shared lunch. I am talking about nonverbal communication. Models cannot reproduce this yet.

And in the end, who fires employees after the implementation of AI? The same guys who play golf with each other and decide strategic issues over a glass of cognac — whether it is profitable to replace a developer or sales manager with AI.

Summary

In summary, the more live communication is involved in your work, the less likely you are to be replaced by AI or an agent. And blue-collar jobs are also out of the danger zone for now.

Of course, with the development of AI and robotics, just about anyone could be affected. But in some cases, it may simply not be profitable, and in others, it could create social problems, such as mass unemployment, and then government regulators will step in.

Of course, prediction is a thankless task, and no one knows what will happen even in five years. And black swans have been arriving with enviable regularity lately. Although, maybe we've just become better informed.

Spack: Package manager for MPI Cluster

2026-02-26 18:41:34

While building and managing my MPI cluster, I wanted something that could easily manage my software packages, support multiple versions of the same package, and support multiple MPI implementations. I used to spend hours fixing old builds and making sure the experiments I ran worked without anything breaking.

A few months back I found Spack, an open-source, flexible package manager for easily installing software packages in your HPC environment. It offers some advantages that are great for anyone trying to create an HPC cluster:

  1. Spack is open source and can be installed quite easily. Many packages, including containers like podman and apptainer, can be found and installed with a single command.
  2. Spack supports multiple versions of the same package; for example, you can install two separate versions of mpich and load whichever you want. Spack creates separate builds for each.
  3. Spack supports multiple implementations of software packages and handles loading and unloading of packages with ease, which is great.
  4. Spack also provides environments, whose use is quite similar to environments in conda.

Quick commands

Spack supports a lot of commands, but here are some I personally recommend and use:

  • spack install/uninstall <package>@version: install or uninstall specific package versions on your cluster
  • spack load/unload <package>: load or unload the package you want to use, with all other dependencies handled automatically
  • spack list: see all available packages
  • spack env create <env>, spack env activate <env>, spack add <package>, spack install: create and activate a Spack environment, then install packages into it for a reproducible setup

Another important package I personally use is Lmod (Lua Modules), which makes loading and unloading packages easier. Spack integrates with Lmod to generate module files and the commands to manipulate them.

How do I use it in my MPI Cluster

  • I ensure that each of my nodes has Spack as its package manager, with mpich and openmpi preinstalled using Spack. If required, I install other packages as well.

  • I SSH into the login node, then write and run scripts that perform computations on the compute nodes, using spack or Lmod commands to load and unload software packages.

  • I run experiments and collect the results; Spack handles all the dependencies automatically when loading new packages.

Conclusion

Even though Spack is slightly slower when installing packages, because it builds from source, its flexibility across different supercomputing and HPC environments is great. The number of packages currently available is huge and ever increasing, making it an immediate choice for HPC experiments.

Advanced Local AI: Building Digital Employees with Ollama + OpenClaw

2026-02-26 18:37:19

Chatting is not enough. Learn how to combine Ollama's powerful reasoning capabilities with OpenClaw's autonomous execution framework.

2025 was called the "Year of Local Large Models," and we've gotten used to running Llama 3 or DeepSeek with Ollama to chat and ask about code. But by 2026, simple "conversation" no longer satisfies the appetites of tech enthusiasts.

We want Agents—not just capable of speaking, but truly able to work for us.

Today let's talk about the most hardcore combination in the local AI space right now: Ollama (reasoning engine) + OpenClaw (autonomous execution framework). Under this architecture, AI is no longer just a text generator in a chat box, but a "digital employee" that can operate browsers, read and write files, and run code.

Any Agent needs a smart "brain," and in a local environment, Ollama remains the most robust choice.

If you haven't installed it yet, just go to ollama.ai to download the appropriate version. Once installed, we typically open a terminal and enter commands to download models.

Recommended Models

For Agent applications, choose models that support Tool Calling:

# General reasoning model
ollama pull llama3.3

# Code-specialized model
ollama pull qwen2.5-coder:32b

# Strong reasoning model
ollama pull deepseek-r1:32b

# Lightweight option
ollama pull gpt-oss:20b

But this actually brings a small annoyance: terminal downloading is a "black box."

When you want to try different models (say, comparing Qwen 2.5 against Llama 3), or when model files are very large (tens of GB), staring at a monotonous progress bar in the terminal makes it hard to manage these behemoths intuitively. Moreover, once you have many models installed, deciding which to delete and knowing how much video memory each occupies becomes a headache.

Add a Visual Panel to Ollama: OllaMan

To solve this problem, and to make subsequent model scheduling easier, I recommend pairing Ollama with OllaMan for this step.

It reads your local Ollama service directly and provides an App Store-like graphical interface. You can visually browse the online model library, download models with a click, and see clear download rates and progress in real time.

More importantly, before handing the model to the Agent, you can first test the model's reasoning ability in OllaMan's conversation interface. After all, if a model can't even handle basic conversation logically, there's no need to waste time configuring it into the Agent.

Once the model environment is ready, the foundation is solid. Now for the main event.

OpenClaw is currently one of the best local Agent frameworks in terms of experience. Its core capability lies in execution—it has system-level permissions, can execute Shell commands, read and write files, and even control browsers.

Prerequisites

Before installing OpenClaw, make sure your system meets the following requirements:

  • Node.js 22 or higher

You can check your Node version with:

node --version

One-Click Installation (Recommended)

OpenClaw officially provides the most convenient one-click installer script, which automatically handles Node.js detection, CLI installation, and the onboarding wizard:

macOS / Linux / WSL2

curl -fsSL https://openclaw.ai/install.sh | bash

Windows (PowerShell)

iwr -useb https://openclaw.ai/install.ps1 | iex

💡 The installer script automatically detects and installs Node.js 22+ (if missing), then launches the onboarding wizard.

If you only want to install the CLI without running the onboarding wizard:

# macOS / Linux / WSL2
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard

Other Installation Methods

If you already have Node.js 22+ installed, you can also install manually:

npm Installation

npm install -g openclaw@latest
openclaw onboard --install-daemon

pnpm Installation

pnpm add -g openclaw@latest
pnpm approve-builds -g
openclaw onboard --install-daemon

macOS Application

If you're on macOS, you can also download the OpenClaw.app desktop application:

  1. Download the latest .dmg file from OpenClaw Releases
  2. Install and launch the app
  3. Complete system permissions setup (TCC prompts)

Configuring Ollama Integration

After installation, you need to connect OpenClaw with your Ollama service.

1. Enable Ollama API Key

OpenClaw requires an API Key to identify the Ollama service (any value works; Ollama itself doesn't need a real key):

# Set environment variable
export OLLAMA_API_KEY="ollama-local"

# Or via OpenClaw config command
openclaw config set models.providers.ollama.apiKey "ollama-local"

2. Verify Ollama Service

Ensure Ollama is running:

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama service if not running
ollama serve

3. Run Configuration Wizard

OpenClaw provides an interactive configuration wizard that automatically detects your Ollama models:

openclaw onboard

The wizard will automatically:

  • Scan your local Ollama service (http://127.0.0.1:11434)
  • Discover all models that support tool calling
  • Configure default model settings

4. Manual Configuration (Optional)

If you want to manually specify models, edit the config file ~/.openclaw/openclaw.json:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/llama3.3",
        "fallbacks": ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}

5. Verify Configuration

Check if OpenClaw has successfully recognized your Ollama models:

# List all models recognized by OpenClaw
openclaw models list

# List installed Ollama models
ollama list

Start the Gateway

Once configured, start the OpenClaw Gateway:

openclaw gateway

The Gateway runs on ws://127.0.0.1:18789 by default. It's OpenClaw's core service, responsible for coordinating model calls and skill execution.

Environment setup is just the beginning. OpenClaw's true power lies in its rich Skills ecosystem.

Scenario 1: Automated Code Review

OpenClaw can directly read your local project files. You can give it commands like:

"Traverse all .tsx files in src/components under the current directory, check if there are any useEffect missing dependencies, and summarize the risk points into review_report.md."

During this process:

  1. OpenClaw calls file system skills to traverse directories.
  2. Ollama (Llama 3) reads the code and performs logical reasoning.
  3. OpenClaw organizes the reasoning results and writes them to a new file.

This is far more efficient than copying code segments to ChatGPT, and the data never leaves your local machine.

Scenario 2: Remote Commander (IM Integration)

OpenClaw supports integration with chat platforms like Slack, Discord, and Telegram. This means you can turn your home computer into a server that's always on standby.

Usage Example: After configuring the Telegram bot integration, when you're out and about, you just need to send a message on your phone: "Hey Claw, help me check the remaining disk space on my home NAS. If it's below 10%, send me an alert."

OpenClaw will run the Shell command df -h on your home computer, analyze the results, and send the report back to your phone.

By using Ollama to provide intelligence, OllaMan to manage model assets, and OpenClaw to execute specific tasks, we've built a complete local AI productivity loop.

The biggest appeal of this combination: completely private, completely free, completely under your control.

If you're tired of just chatting, try installing it on your computer and see how your workflow can evolve with the help of this AI assistant.