2026-02-26 18:48:19
We kept running into the same problem: we see a bug in the browser, but explaining it to our AI coding agent is painful.
"The third button in the second card, the padding is off, the text is clipped..."
Sound familiar? You see the problem instantly, but translating a visual issue into something an AI agent can act on takes longer than fixing it yourself.
So we built ui-ticket-mcp.
ui-ticket-mcp is an MCP server that bridges the gap between what you SEE in the browser and what your AI agent sees in code.
You click on the broken element, write a comment like "padding is wrong" or "this text is clipped", and your AI coding agent picks it up with full context:
The agent reads the review, finds the source file, fixes the code, and marks the review as resolved. Full loop — no copy-pasting error descriptions.
The system has two parts:
1. Review panel (browser side)
A web component (<review-panel>) that you add to your app. It gives you a floating panel where you can:
2. MCP server (agent side)
A Python MCP server that exposes 10 tools to your AI agent, including:
- `get_pending_work` — all open reviews grouped by page
- `get_reviews` — reviews filtered by page
- `get_annotated_reviews` — reviews with full element metadata
- `find_source_file` — locate the source file for a page
- `add_review` / `resolve_review` — manage reviews

The agent calls `get_pending_work()`, gets the reviews with all the element context, and knows exactly what to fix and where.
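As a rough sketch of the agent side (not the actual server code), grouping open reviews by page might look like this, assuming review objects shaped like the annotated example shown later in the post:

```python
from collections import defaultdict

def group_pending_reviews(reviews):
    """Group unresolved reviews by page_id (hypothetical review shape)."""
    pending = defaultdict(list)
    for review in reviews:
        if not review.get("resolved", False):
            pending[review["page_id"]].append(review)
    return dict(pending)

reviews = [
    {"page_id": "login", "text": "Submit button padding is off", "resolved": False},
    {"page_id": "login", "text": "Label text is clipped", "resolved": True},
    {"page_id": "dashboard", "text": "Card shadow too heavy", "resolved": False},
]

print(group_pending_reviews(reviews))
```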
Tell your AI agent:
"Add ui-ticket-mcp to the project"
It handles everything — adds the MCP server config and the review panel to your app.
Step 1: Add the MCP server
Add this to your .mcp.json:
{
"mcpServers": {
"ui-ticket-mcp": {
"command": "uvx",
"args": ["ui-ticket-mcp"],
"env": {
"PROJECT_ROOT": "/path/to/your/project"
}
}
}
}
No pip install needed — uvx handles it automatically.
Step 2: Add the review panel
Option A — npm:
npm i ui-ticket-panel
import 'ui-ticket-panel';
Option B — CDN script tag (works with any stack):
<script type="module" src="https://unpkg.com/[email protected]/dist/bundle.js"></script>
That's it. Open your app, click the review panel, start annotating.
When your agent calls get_annotated_reviews, it gets something like this:
{
"page_id": "login",
"text": "Submit button padding is off",
"tag": "bug",
"metadata": {
"tagName": "BUTTON",
"selector": "form > button.btn-primary",
"computedStyles": {
"padding": "4px 8px",
"fontSize": "14px"
},
"boundingBox": { "x": 320, "y": 480, "width": 120, "height": 32 },
"accessibility": { "role": "button", "name": "Submit" }
}
}
The agent doesn't have to guess which element you mean. It has the exact selector, styles, and DOM context to locate the source and fix it.
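As a hedged illustration (not ui-ticket-mcp's actual `find_source_file` logic), an agent could use the last class in the selector to narrow down candidate source files:

```python
import pathlib
import re
import tempfile

def find_files_matching_selector(root, selector):
    """Search source files for the last class in a CSS selector (illustrative only)."""
    match = re.search(r"\.([\w-]+)$", selector)  # 'form > button.btn-primary' -> 'btn-primary'
    if not match:
        return []
    class_name = match.group(1)
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in {".html", ".jsx", ".tsx", ".vue"} and class_name in path.read_text():
            hits.append(path)
    return hits

# Demo against a throwaway project directory
with tempfile.TemporaryDirectory() as root:
    page = pathlib.Path(root) / "Login.tsx"
    page.write_text('<button className="btn-primary">Submit</button>')
    print(find_files_matching_selector(root, "form > button.btn-primary"))
```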
AI agents: Claude Code, Cursor, Windsurf, or any MCP-compatible agent
Frameworks: React, Angular, Vue, Svelte, plain HTML — the review panel is a framework-agnostic web component
Storage: Reviews are stored in a single SQLite file (.reviews/reviews.db) that you can commit to git and share with your team
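Because it's plain SQLite, you can poke at reviews with a few lines of Python. Note that the schema below is an assumption for illustration, not the documented layout of reviews.db:

```python
import sqlite3

# Hypothetical schema -- the real .reviews/reviews.db layout may differ.
conn = sqlite3.connect(":memory:")  # point at .reviews/reviews.db in a real project
conn.execute("""
    CREATE TABLE reviews (
        id INTEGER PRIMARY KEY,
        page_id TEXT NOT NULL,
        text TEXT NOT NULL,
        resolved INTEGER NOT NULL DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO reviews (page_id, text) VALUES (?, ?)",
    [("login", "Submit button padding is off"),
     ("dashboard", "Card shadow too heavy")],
)
conn.execute("UPDATE reviews SET resolved = 1 WHERE page_id = 'dashboard'")
open_reviews = conn.execute(
    "SELECT page_id, text FROM reviews WHERE resolved = 0").fetchall()
print(open_reviews)  # [('login', 'Submit button padding is off')]
```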
pip install ui-ticket-mcp
npm i ui-ticket-panel
We'd love feedback. What's missing? What would make this more useful for your workflow?
2026-02-26 18:46:37
The wait is over: Vuetify 4 has officially been released. This major version marks a fundamental evolution of the world’s most popular Vue UI library. Based on the January 2026 Update and the newly published V4 Release Notes, this version is built for the modern web, focusing on native CSS power and streamlined performance.
As the web moves toward native CSS features and sophisticated design systems, the framework needed to shed its legacy dependencies. Vuetify 4 addresses long-standing developer pain points:
- No more `!important` flags in your style overrides.

The most visible change is the full-scale adoption of Material Design 3. MD3 brings a more organic feel, better accessibility, and a highly customizable color system.
- The color system now leverages native CSS `color-mix()` and the relative color syntax (`rgb(from …)`).
- Elevation has been consolidated into 6 distinct levels (0–5), providing a cleaner and more consistent visual hierarchy.

Vuetify 4 fully embraces CSS Cascade Layers. By using five top-level layers (vuetify-core, vuetify-components, vuetify-utilities, etc.), the framework guarantees that your custom styles take priority. This completely eliminates the need for high-specificity selectors when you want to tweak a component’s look.
The default theme has shifted from “light” to “system.” Applications built with V4 will now automatically respect the user’s operating system preferences. This not only improves user experience but also fixes common SSR “flicker” issues where a theme would snap from light to dark on page load.
- Breakpoints have shifted: `md` is now 840px (down from 960px) and `xl` is now 1545px (down from 1920px).
- Widths in `v-container` now round to the nearest 100px, creating more predictable and stable layouts across different viewports.
- `VBtn` and `VField` have reverted from CSS Grid back to Flexbox to resolve gap-control limitations and improve rendering consistency in complex forms.

Vuetify 4 makes your code cleaner and your workflow faster:
- In `VForm`, variables like `isValid` are no longer refs; they are passed as plain values to slots, removing the need for `.value` in your templates.
- The item type used by `VSelect`, `VCombobox`, and `VAutocomplete` has been standardized to `internalItem`, matching the pattern used in `VDataTable`.
- Components expose SASS variables (such as `$field-gap`), making theme-level adjustments straightforward.

Vuetify 4 is a commitment to long-term stability and modern development standards. By moving toward native CSS features and MD3, it eliminates technical debt and provides a faster, more flexible foundation for any Vue.js project. Whether you are building a simple landing page or a massive enterprise dashboard, V4 is designed to stay out of your way and let you build features faster.
Our Commitment to Vuetify 4

At CodedThemes, we are always working to keep our users on the cutting edge. We are currently updating our flagship products (Berry, Mantis, and Able Pro) to fully support Vuetify 4. Stay tuned for these upcoming releases!
2026-02-26 18:44:19
We learned TDD because code lies. Or more precisely: code tells you everything you allow it to tell.
Now we have LLMs — and they will still tell you a coherent story even when the premise is already wrong.
Time for a small process hack that doesn’t feel like a hack.
You’re sitting there. Coffee. Tabs. A timeline full of “AI will replace developers.”
You enter a prompt. The model delivers code. The code looks good.
And that is exactly why it’s dangerous.
Because LLMs are polite. They rarely contradict you. They deliver. They fill gaps.
And gaps are not romantic in project management. Gaps are budget.
“You wanted a solution. You got an answer. That is not the same thing.”
Prompt Driven Development (PDD) treats prompts as verifiable specifications.
Not as wish lists. Not as chat. But as a contract that must be measured against reality.
Classic (and unfortunately common):
Prompt → Code → Fix → More fixes → “Why is this taking so long?”
PDD reverses the order:
If you cannot measure the system, you do not control it.
You are only telling a story about control.
A prompt is not a conversation. It is a document. Versionable. Reviewable. Worth criticizing.
If your prompt disappears in chat history, it effectively does not exist in the project.
When something goes wrong, it is rarely “just a bug.”
Often it is a fog word.
“Fast.” “Intuitive.” “Simple.” — these words are smoke grenades: no explosion, just more sprint meetings.
If a test fails, the first question is not “Who broke this?”
It is: “What did we actually demand?”
The model delivers possibilities. You deliver responsibility.
PDD is the method that separates these roles instead of romantically blending them.
PDD is a return to something old-fashioned:
a Definition of Done that can be executed.
“If my prompt is not testable, it is not a specification.”
A testable prompt specifies:
If you only write “just do it,” you will get “just do it” quality back.
And then you will argue for three days about results instead of two hours about requirements.
Because AI does not only make code faster.
It also makes failure faster — if we continue to specify in fog.
PDD is not a religion. It is a guardrail.
And guardrails are not sexy. They are what stop you from pushing a “small change” to production at 11:48 pm.
“You want speed? Then stop investing in unclarity.”
If your prompt is not testable, it is not a specification.
And if you have no specification, you are not building faster — you are just building into emptiness sooner.
If you want to try this: take a mini feature, write the prompt like a contract,
let tests emerge, iterate on the prompt, only then write code.
One round. Then another.
And suddenly “AI development” feels less like gambling.
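To make “let tests emerge” concrete: a sketch of acceptance checks derived from a hypothetical prompt like “paginate the user list, 20 items per page” (all names and numbers here are illustrative):

```python
def paginate(items, page, page_size=20):
    """Return one page of items; pages are 1-indexed (illustrative spec target)."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

# The prompt-as-contract, written as checks instead of fog words:
users = list(range(45))
assert len(paginate(users, 1)) == 20   # "20 items per page", not "a reasonable amount"
assert len(paginate(users, 3)) == 5    # the last page holds the remainder
assert paginate(users, 4) == []        # out-of-range pages are empty, not an error
print("spec satisfied")
```

Each assertion pins down a word that would otherwise stay fog: the exact page size, the remainder behavior, the out-of-range behavior.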
Original Publication:
January 13, 2026
Author: Benjamin Lam (blame76)
Categories: Prompt-Driven Development (PDD)
Original URL: https://benjamin-lam.de/2026/01/13/prompt-driven-development-pdd/
Translated with ChatGPT
2026-02-26 18:42:05
A-a-a-a!!!111 We're all going to be fired! - one part of the internet panics. The second part is secretly afraid, but hopes for the best. The third part goes to fix the pipes or wiring at their neighbors' houses, and sighs - I wish AI would replace me so I could finally spend time with my grandchildren. And the fourth part thinks that if they surround themselves with LLMs and agents and start flooding social media feeds with posts on this topic, they'll get away with it.
This article is an attempt to analyze the prospects of replacing humans with AI depending on the type of occupation.
So, professions are divided according to interaction:
Some professions combine these interactions. For example, a doctor may look at CT scans of patients or read blood test results on a computer and make a diagnosis based on them, or communicate with the patient, reassuring them that a slight runny nose is unlikely to kill them in the near future.
So which of them will AI replace? Let's consider each option:
Obviously, those who interact exclusively with computers—designers, programmers, testers—are the first to be affected. If the job is related to computers and nothing else, then such an employee is, as offensive as it may sound, an interface between the task and the computer on which they perform that task. Moreover, the task is also most often assigned via a computer and is formal. And the less such an employee interacts with real people, the more likely they are to be replaced by a neural network.
This is because there is nothing to prevent their manager from setting the same tasks via LLM and monitoring their completion. The only question is the sophistication of the model and the accuracy/detail of the task setting.
If, on the other hand, the employee focuses more on human communication, for example, by understanding the domain, clarifying the task, highlighting potential problem areas or edge cases, clarifying concerns, and taking responsibility for a certain function (i.e., moving to human ↔ human interaction), then the chances of them being fired decrease.
Why:
Of course, no one is immune to the possibility that their entire department could be eliminated. But if the layoffs are selective, the chances of staying will be much higher than for their sociophobic colleagues. Even if the entire team is laid off, the connections built up during their time at the company will help them find a new job much more quickly and easily.
Conversely, if an employee does not communicate with anyone, remains silent at meetings, and sits with their camera turned off, then even if they perform their tasks well, such an employee is more likely to be a candidate for replacement. As sad as it may sound. Because no matter how good a person and employee they are, business puts money first. And obviously, paying an LLM provider $200 a month is simply more profitable than paying a developer at least 10 times more.
You have to admit, it's hard to imagine a situation where AI could replace, say, a bricklayer or a furniture assembler. No, theoretically, of course, it's possible to imagine and even implement such a thing. But it just wouldn't be profitable. A robot with the necessary skills would cost far more to operate than a human, not to mention the cost of development. In addition, humans are much better at adapting to non-standard situations. For example, if something doesn't fit properly for a furniture assembler, they can quickly modify the necessary part right at the customer's place, and everyone will be happy. Or a plumber may find during installation that a certain fitting needs to be replaced. For a robot to be able to do that... well, we'll need at least another decade of progress in that area, and I suspect it will take much longer. So AI does not yet pose a threat to skilled trades. For now.
Finally, we come to the most interesting part. Thousands of books have been written about human interaction, and it would be nearly impossible to summarize them all in a single paragraph. It would be like trying to push a log through the eye of a needle. But let's give it a try.
No AI can replace human communication. Some may argue that AI can replace a psychologist with whom we communicate, for example, via video, and that would be perfectly fair. But here we return to the interaction between humans and computers. Intonation, a joke that accidentally pops up out of context, a handshake, a shared lunch. I am talking about nonverbal communication. Models cannot reproduce this yet.
And in the end, who fires employees after the implementation of AI? The same guys who play golf with each other and decide strategic issues over a glass of cognac — whether it is profitable to replace a developer or sales manager with AI.
In summary, the more live communication is involved in your work, the less likely you are to be replaced by AI or an agent. And blue-collar jobs are also out of the danger zone for now.
Of course, with the development of AI and robotics, just about anyone could be affected. But in some cases, it may simply not be profitable, and in others, it could create social problems, such as mass unemployment, and then government regulators will step in.
Of course, prediction is a thankless task, and no one knows what will happen even in five years. And black swans have been arriving with enviable regularity lately. Although, maybe we've just become better informed.
2026-02-26 18:41:34
While working on and managing my MPI cluster, I wanted something that could easily manage my software packages, support multiple versions of the same package, and work with multiple MPI implementations. I used to spend hours fixing old builds and making sure the experiments I ran worked without anything bugging me.
A few months back I found Spack, an open-source, flexible package manager for easily installing software packages in your HPC environment. Some of the advantages it offers that are great if you're trying to create an HPC cluster:
conda.

Spack supports a lot of commands, but here are some I personally recommend and use:
| Command | Description |
|---|---|
| `spack install/uninstall <package>@version` | Install or uninstall specific versions of packages on your cluster |
| `spack load/unload <package>` | Load or unload the package you want to use, with all other dependencies handled automatically |
| `spack list` | See all available packages |
| `spack env create <env>`, `spack env activate <env>`, `spack add <package>`, `spack install` | Create and activate a Spack environment, then install packages into it for a reproducible setup |
Another important tool I personally use is Lmod (Lua Modules), which makes loading and unloading packages easier. Spack integrates with Lmod to generate module files and the commands to manipulate them.
1. Ensure that each of my nodes has Spack as its package manager, with mpich and openmpi preinstalled using Spack. If required, I install other packages as well.
2. SSH into the login node, then write and run scripts that perform computations on the compute nodes, using spack or lmod commands to load/unload software packages.
3. Run experiments and get your results; Spack handles all the dependencies automatically when loading new packages.
Even though Spack is slightly slower at installing packages (because it builds from source), its flexibility across different supercomputing and HPC environments is great. The number of packages currently available is huge and ever increasing, making it an obvious choice for HPC experiments.
2026-02-26 18:37:19
Chatting is not enough. Learn how to combine Ollama's powerful reasoning capabilities with OpenClaw's…
2025 was called the "Year of Local Large Models," and we've gotten used to running Llama 3 or DeepSeek with Ollama to chat and ask about code. But by 2026, simple "conversation" no longer satisfies the appetites of tech enthusiasts.
We want Agents—not just capable of speaking, but truly able to work for us.
Today let's talk about the most hardcore combination in the local AI space right now: Ollama (reasoning engine) + OpenClaw (autonomous execution framework). Under this architecture, AI is no longer just a text generator in a chat box, but a "digital employee" that can operate browsers, read and write files, and run code.
Any Agent needs a smart "brain," and in a local environment, Ollama remains the most robust choice.
If you haven't installed it yet, just go to ollama.ai to download the appropriate version. Once installed, we typically open a terminal and enter commands to download models.
For Agent applications, choose models that support Tool Calling:
# General reasoning model
ollama pull llama3.3
# Code-specialized model
ollama pull qwen2.5-coder:32b
# Strong reasoning model
ollama pull deepseek-r1:32b
# Lightweight option
ollama pull gpt-oss:20b
But this actually brings a small annoyance: terminal downloading is a "black box."
When you want to try different models (say, comparing Qwen 2.5 against Llama 3), or when model files are very large (tens of GB), staring at a monotonous progress bar in the terminal makes it hard to manage these behemoths intuitively. Moreover, once you have many models installed, deciding which to delete and how much video memory each occupies becomes a headache.
To solve this problem and also make subsequent model scheduling more relaxed, I recommend using it in conjunction with OllaMan for this step.
It can directly read your local Ollama service and provide an App Store-like graphical interface. You can visually browse the online model library on it, click on images to download, and see clear download rates and progress in real time.
More importantly, before handing the model to the Agent, you can first test the model's reasoning ability in OllaMan's conversation interface. After all, if a model can't even handle basic conversation logically, there's no need to waste time configuring it into the Agent.
Once the model environment is ready, the foundation is solid. Now for the main event.
OpenClaw is currently one of the best local Agent frameworks in terms of experience. Its core capability lies in execution—it has system-level permissions, can execute Shell commands, read and write files, and even control browsers.
Before installing OpenClaw, make sure your system meets the following requirements:
You can check your Node version with:
node --version
OpenClaw officially provides the most convenient one-click installer script, which automatically handles Node.js detection, CLI installation, and the onboarding wizard:
# macOS / Linux / WSL2
curl -fsSL https://openclaw.ai/install.sh | bash
# Windows (PowerShell)
iwr -useb https://openclaw.ai/install.ps1 | iex
💡 The installer script automatically detects and installs Node.js 22+ (if missing), then launches the onboarding wizard.
If you only want to install the CLI without running the onboarding wizard:
# macOS / Linux / WSL2
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard
If you already have Node.js 22+ installed, you can also install manually:
npm install -g openclaw@latest
openclaw onboard --install-daemon
pnpm add -g openclaw@latest
pnpm approve-builds -g
openclaw onboard --install-daemon
If you're on macOS, you can also download the OpenClaw.app desktop application:
Download the .dmg file from OpenClaw Releases
After installation, you need to connect OpenClaw with your Ollama service.
OpenClaw requires an API Key to identify the Ollama service (any value works; Ollama itself doesn't need a real key):
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or via OpenClaw config command
openclaw config set models.providers.ollama.apiKey "ollama-local"
Ensure Ollama is running:
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Start Ollama service if not running
ollama serve
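If you'd rather do this check from code, here is a minimal sketch using only the standard library (the helper name is ours; `/api/tags` is the same endpoint as the curl check above):

```python
import json
import urllib.error
import urllib.request

def ollama_running(host="127.0.0.1", port=11434, timeout=2.0):
    """Return True if an Ollama server answers on /api/tags, else False."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)  # valid JSON means the API is healthy
            return True
    except (urllib.error.URLError, json.JSONDecodeError, OSError):
        return False

print(ollama_running())  # True only if Ollama is running locally
```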
OpenClaw provides an interactive configuration wizard that automatically detects your Ollama models:
openclaw onboard
The wizard will automatically detect your running Ollama service (default http://127.0.0.1:11434) and register the models it finds.

If you want to manually specify models, edit the config file ~/.openclaw/openclaw.json:
{
"agents": {
"defaults": {
"model": {
"primary": "ollama/llama3.3",
"fallbacks": ["ollama/qwen2.5-coder:32b"]
}
}
}
}
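The primary/fallbacks semantics in that config can be illustrated with a tiny selection routine; this is our own sketch, not OpenClaw's internals:

```python
def pick_model(config, available):
    """Return the first configured model that is actually installed."""
    model_cfg = config["agents"]["defaults"]["model"]
    candidates = [model_cfg["primary"], *model_cfg.get("fallbacks", [])]
    for name in candidates:
        if name in available:
            return name
    raise LookupError(f"none of {candidates} is available")

config = {
    "agents": {"defaults": {"model": {
        "primary": "ollama/llama3.3",
        "fallbacks": ["ollama/qwen2.5-coder:32b"],
    }}}
}

# Suppose only the coder model has been pulled locally:
print(pick_model(config, {"ollama/qwen2.5-coder:32b"}))  # ollama/qwen2.5-coder:32b
```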
Check if OpenClaw has successfully recognized your Ollama models:
# List all models recognized by OpenClaw
openclaw models list
# List installed Ollama models
ollama list
Once configured, start the OpenClaw Gateway:
openclaw gateway
The Gateway runs on ws://127.0.0.1:18789 by default. It's OpenClaw's core service, responsible for coordinating model calls and skill execution.
Environment setup is just the beginning. OpenClaw's true power lies in its rich Skills ecosystem.
OpenClaw can directly read your local project files. You can give it commands like:
"Traverse all .tsx files in src/components under the current directory, check if there are any useEffect missing dependencies, and summarize the risk points into review_report.md."
During this process:
This is far more efficient than copying code segments to ChatGPT, and the data never leaves your local machine.
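For intuition, the `useEffect` task above boils down to something you could sketch yourself. A deliberately naive heuristic (real tools like eslint-plugin-react-hooks do this properly) might look like:

```python
import pathlib
import re
import tempfile

def scan_tsx_for_effects(root):
    """Naive sketch: flag .tsx files whose useEffect count exceeds the
    number of dependency arrays found in the file."""
    report = {}
    for path in pathlib.Path(root).rglob("*.tsx"):
        src = path.read_text()
        effects = len(re.findall(r"useEffect\(", src))
        with_deps = len(re.findall(r"\}\s*,\s*\[", src))
        if effects > with_deps:
            report[path.name] = effects - with_deps
    return report

# Demo with a throwaway component directory
with tempfile.TemporaryDirectory() as root:
    bad = pathlib.Path(root) / "Widget.tsx"
    bad.write_text("useEffect(() => { fetchData(); });")       # no deps array
    good = pathlib.Path(root) / "Panel.tsx"
    good.write_text("useEffect(() => { sync(); }, [value]);")  # has deps array
    print(scan_tsx_for_effects(root))  # {'Widget.tsx': 1}
```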
OpenClaw supports integration with chat platforms like Slack, Discord, and Telegram. This means you can turn your home computer into a server that's always on standby.
Usage Example: After configuring the Telegram bot integration, when you're out and about, you just need to send a message on your phone: "Hey Claw, help me check the remaining disk space on my home NAS. If it's below 10%, send me an alert."
OpenClaw will run the Shell command df -h on your home computer, analyze the results, and send the report back to your phone.
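The disk-space half of that request is easy to picture in code; here is a sketch using Python's standard library, with the 10% threshold from the example:

```python
import shutil

def disk_alert(path="/", threshold_pct=10.0):
    """Return an alert string if free space on `path` drops below threshold_pct."""
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100
    if free_pct < threshold_pct:
        return f"ALERT: only {free_pct:.1f}% free on {path}"
    return f"OK: {free_pct:.1f}% free on {path}"

print(disk_alert("/"))
```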
By using Ollama to provide intelligence, OllaMan to manage model assets, and OpenClaw to execute specific tasks, we've built a complete local AI productivity loop.
The biggest appeal of this combination: completely private, completely free, completely under your control.
If you're tired of just chatting, try installing this stack on your computer and see how your workflow can evolve with the help of this AI assistant.