
Introducing Showboat and Rodney, so agents can demo what they’ve built

2026-02-11 01:45:29

A key challenge working with coding agents is having them both test what they’ve built and demonstrate that software to you, their overseer. This goes beyond automated tests - we need artifacts that show their progress and help us see exactly what the agent-produced software is able to do. I’ve just released two new tools aimed at this problem: Showboat and Rodney.

Proving code actually works

I recently wrote about how the job of a software engineer isn't to write code, it's to deliver code that works. A big part of that is proving to ourselves and to other people that the code we are responsible for behaves as expected.

This becomes even more important - and challenging - as we embrace coding agents as a core part of our software development process.

The more code we churn out with agents, the more valuable it becomes to have tools that reduce the amount of manual QA time we need to spend.

One of the most interesting things about the StrongDM software factory model is how they ensure that their software is well tested and delivers value despite their policy that "code must not be reviewed by humans". Part of their solution involves expensive swarms of QA agents running through "scenarios" to exercise their software. It's fascinating, but I don't want to spend thousands of dollars on QA robots if I can avoid it!

I need tools that allow agents to clearly demonstrate their work to me, while minimizing the opportunities for them to cheat about what they've done.

Showboat: Agents build documents to demo their work

Showboat is the tool I built to help agents demonstrate their work to me.

It's a CLI tool (a Go binary, optionally wrapped in Python to make it easier to install) that helps an agent construct a Markdown document demonstrating exactly what their newly developed code can do.

It's not designed for humans to run, but here's how you would run it anyway:

showboat init demo.md 'How to use curl and jq'
showboat note demo.md "Here's how to use curl and jq together."
showboat exec demo.md bash 'curl -s https://api.github.com/repos/simonw/rodney | jq .description'
showboat note demo.md 'And the curl logo, to demonstrate the image command:'
showboat image demo.md 'curl -o curl-logo.png https://curl.se/logo/curl-logo.png && echo curl-logo.png'

Here's what the result looks like if you open it up in VS Code and preview the Markdown:

Screenshot showing a Markdown file "demo.md" side-by-side with its rendered preview. The Markdown source (left) shows: "# How to use curl and jq", italic timestamp "2026-02-10T01:12:30Z", prose "Here's how to use curl and jq together.", a bash code block with "curl -s https://api.github.com/repos/simonw/rodney | jq .description", output block showing '"CLI tool for interacting with the web"', text "And the curl logo, to demonstrate the image command:", a bash {image} code block with "curl -o curl-logo.png https://curl.se/logo/curl-logo.png && echo curl-logo.png", and a Markdown image reference "2056e48f-2026-02-10". The rendered preview (right) displays the formatted heading, timestamp, prose, styled code blocks, and the curl logo image in dark teal showing "curl://" with circuit-style design elements.

Here's that demo.md file in a Gist.

So a sequence of showboat init, showboat note, showboat exec and showboat image commands constructs a Markdown document one section at a time, with the output of those exec commands automatically added to the document directly following the commands that were run.

The image command is a little special - it looks for a file path to an image in the output of the command and copies that image to the current folder and references it in the file.

That's basically the whole thing! There's a pop command to remove the most recently added section if something goes wrong, a verify command to re-run the document and check nothing has changed (I'm not entirely convinced by the design of that one) and an extract command that reverse-engineers the CLI commands that were used to create the document.
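
Continuing the demo.md example from above, the rough shape of those is something like this (the --help output is the authoritative reference for the exact arguments):

showboat pop demo.md
showboat verify demo.md
showboat extract demo.md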

It's pretty simple - just 172 lines of Go.

I packaged it up with my go-to-wheel tool which means you can run it without even installing it first like this:

uvx showboat --help

That --help command is really important: it's designed to provide a coding agent with everything it needs to know in order to use the tool. Here's that help text in full.

This means you can pop open Claude Code and tell it:

Run "uvx showboat --help" and then use showboat to create a demo.md document describing the feature you just built

And that's it! The --help text acts a bit like a Skill. Your agent can read the help text and use every feature of Showboat to create a document that demonstrates whatever it is you need demonstrated.

Here's a fun trick: if you set Claude off to build a Showboat document you can pop that open in VS Code and watch the preview pane update in real time as the agent runs through the demo. It's a bit like having your coworker talk you through their latest work in a screensharing session.

And finally, some examples. Here are documents I had Claude create using Showboat to help demonstrate features I was working on in other projects:

I've now used Showboat often enough that I've convinced myself of its utility.

(I've also seen agents cheat! Since the demo file is Markdown the agent will sometimes edit that file directly rather than using Showboat, which could result in command outputs that don't reflect what actually happened. Here's an issue about that.)

Rodney: CLI browser automation designed to work with Showboat

Many of the projects I work on involve web interfaces. Agents often build entirely new pages for these, and I want to see those represented in the demos.

Showboat's image feature was designed to allow agents to capture screenshots as part of their demos, originally using my shot-scraper tool or Playwright.
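
Here's a sketch of how that can look with shot-scraper - the trailing echo puts the image path in the command output, which is where the image command expects to find it (URL and filename here are just examples):

showboat image demo.md 'shot-scraper https://datasette.io/ -o datasette-home.png && echo datasette-home.png'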

Showboat's exec-based format works best with tools that can be driven from the command line. I went looking for good options for managing a multi-turn browser session from a CLI and came up short, so I decided to try building something new.

Claude Opus 4.6 pointed me to the Rod Go library for interacting with the Chrome DevTools protocol. It's fantastic - it provides a comprehensive wrapper across basically everything you can do with automated Chrome, all in a self-contained library that compiles to a few MBs.

All Rod was missing was a CLI.

I built the first version as an asynchronous report prototype, which convinced me it was worth spinning out into its own project.

I called it Rodney as a nod to the Rod library it builds on and a reference to Only Fools and Horses - and because the package name was available on PyPI.

You can run Rodney using uvx rodney or install it like this:

uv tool install rodney

(Or grab a Go binary from the releases page.)

Here's a simple example session:

rodney start # starts Chrome in the background
rodney open https://datasette.io/
rodney js 'Array.from(document.links).map(el => el.href).slice(0, 5)'
rodney click 'a[href="/for"]'
rodney js location.href
rodney js document.title
rodney screenshot datasette-for-page.png
rodney stop

Here's what that looks like in the terminal:

~ % rodney start
Chrome started (PID 91462)
Debug URL: ws://127.0.0.1:64623/devtools/browser/cac6988e-8153-483b-80b9-1b75c611868d
~ % rodney open https://datasette.io/
Datasette: An open source multi-tool for exploring and publishing data
~ % rodney js 'Array.from(document.links).map(el => el.href).slice(0, 5)'
[
  "https://datasette.io/for",
  "https://docs.datasette.io/en/stable/",
  "https://datasette.io/tutorials",
  "https://datasette.io/examples",
  "https://datasette.io/plugins"
]
~ % rodney click 'a[href="/for"]'
Clicked
~ % rodney js location.href
https://datasette.io/for
~ % rodney js document.title
Use cases for Datasette
~ % rodney screenshot datasette-for-page.png
datasette-for-page.png
~ % rodney stop
Chrome stopped

As with Showboat, this tool is not designed to be used by humans! The goal is for coding agents to be able to run rodney --help and see everything they need to know to start using the tool. You can see that help output in the GitHub repo.
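
Rodney and Showboat are designed to slot together: rodney screenshot prints the path of the file it just saved, which is exactly what showboat image looks for in a command's output. A combined demo step might look like this (filenames are illustrative):

showboat exec demo.md bash 'rodney open https://datasette.io/'
showboat image demo.md 'rodney screenshot datasette-home.png'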

Here are three demonstrations of Rodney that I created using Showboat:

Test-driven development helps, but we still need manual testing

After being a career-long skeptic of the test-first, maximum test coverage school of software development (I like tests-included development instead), I've recently come around to test-first processes as a way to force agents to write only the code that's necessary to solve the problem at hand.

Many of my Python coding agent sessions start the same way:

Run the existing tests with "uv run pytest". Build using red/green TDD.

Telling the agents how to run the tests doubles as an indicator that tests on this project exist and matter. Agents will read existing tests before writing their own, so having a clean test suite with good patterns makes it more likely they'll write good tests themselves.

The frontier models all understand that "red/green TDD" means they should write the test first, run it and watch it fail and then write the code to make it pass - it's a convenient shortcut.
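
In shell terms the loop the agent runs looks roughly like this (the test file name here is hypothetical):

uv run pytest tests/test_new_feature.py   # red: the new test fails
# ... agent writes the implementation ...
uv run pytest tests/test_new_feature.py   # green: the same test now passes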

I find this greatly increases the quality of the code and the likelihood that the agent will produce the right thing with the smallest number of prompts to guide it.

But anyone who's worked with tests will know that just because the automated tests pass doesn't mean the software actually works! That's the motivation behind Showboat and Rodney - I never trust any feature until I've seen it running with my own eyes.

Before building Showboat I'd often add a “manual” testing step to my agent sessions, something like:

Once the tests pass, start a development server and exercise the new feature using curl
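
What that looks like varies by project, but it usually amounts to something like this (the server command, port and endpoint are all hypothetical):

uv run python -m myapp --port 8001 &   # start the dev server in the background
sleep 2                                # give it a moment to come up
curl -s http://localhost:8001/api/new-feature | jq .   # hit the new endpoint and check the JSON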

I built both of these tools on my phone

Both Showboat and Rodney started life as Claude Code for the web projects, created via the Claude iPhone app. Most of the ongoing feature work on them happened in the same way.

I'm still a little startled at how much of my coding work I get done on my phone now, but I'd estimate that the majority of code I ship to GitHub these days was written for me by coding agents driven via that iPhone app.

I initially designed these two tools for use in asynchronous coding agent environments like Claude Code for the web. So far that's working out really well.

Tags: go, projects, testing, markdown, ai, generative-ai, llms, ai-assisted-programming, coding-agents, async-coding-agents

Structured Context Engineering for File-Native Agentic Systems

2026-02-10 07:56:51

Structured Context Engineering for File-Native Agentic Systems

New paper by Damon McMillan exploring challenging LLM context tasks involving large SQL schemas (up to 10,000 tables) across different models and file formats:

Using SQL generation as a proxy for programmatic agent operations, we present a systematic study of context engineering for structured data, comprising 9,649 experiments across 11 models, 4 formats (YAML, Markdown, JSON, Token-Oriented Object Notation [TOON]), and schemas ranging from 10 to 10,000 tables.

Unsurprisingly, the biggest impact was the models themselves - with frontier models (Opus 4.5, GPT-5.2, Gemini 2.5 Pro) beating the leading open source models (DeepSeek V3.2, Kimi K2, Llama 4).

Those frontier models benefited from filesystem based context retrieval, but the open source models had much less convincing results with those, which reinforces my feeling that the filesystem coding agent loops aren't handled as well by open weight models just yet. The Terminal Bench 2.0 leaderboard is still dominated by Anthropic, OpenAI and Gemini.

The "grep tax" result against TOON was an interesting detail. TOON is meant to represent structured data in as few tokens as possible, but it turns out the model's unfamiliarity with that format led to them spending significantly more tokens over multiple iterations trying to figure it out:

Screenshot of a figure from a research paper. Introductory text reads: "As schema size increased, TOON showed dramatically increased token consumption for Claude models despite being ~25% smaller in file size. Scale experiments used Claude models only." Below is "Figure 7: The 'Grep Tax' - TOON Token Overhead at Scale", a bar chart with a logarithmic y-axis labeled "Tokens" comparing YAML (teal) and TOON (purple) at two schema sizes: S5 (500 tables) and S9 (10,000 tables). At S5, TOON is +138% more tokens than YAML (~1,100 vs ~450). At S9, TOON is +740% more tokens (~50,000 vs ~7,000). Below the chart, explanatory text reads: "The 'grep tax' emerged as schema size scaled. At S5 (500 tables), TOON consumed 138% more tokens than YAML; at S9 (10,000 tables), this grew to 740%. Root cause: models lacked familiarity with TOON's syntax and could not construct effective refinement patterns."

Via @omarsar0

Tags: ai, prompt-engineering, generative-ai, llms, paper-review, context-engineering

AI Doesn’t Reduce Work—It Intensifies It

2026-02-10 00:43:07

AI Doesn’t Reduce Work—It Intensifies It

Aruna Ranganathan and Xingqi Maggie Ye from Berkeley Haas School of Business report initial findings in the HBR from their April to December 2025 study of 200 employees at a "U.S.-based technology company".

This captures an effect I've been observing in my own work with LLMs: the productivity boost these things can provide is exhausting.

AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.

While this sense of having a “partner” enabled a feeling of momentum, the reality was a continual switching of attention, frequent checking of AI outputs, and a growing number of open tasks. This created cognitive load and a sense of always juggling, even as the work felt productive.

I'm frequently finding myself with work on two or three projects running in parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted.

I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

The HBR piece calls for organizations to build an "AI practice" that structures how AI is used to help avoid burnout and counter effects that "make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity".

I think we've just disrupted decades of existing intuition about sustainable working practices. It's going to take a while and some discipline to find a good new balance.

Via Hacker News

Tags: careers, ai, generative-ai, llms, ai-assisted-programming, ai-ethics

Kākāpō mug by Karen James

2026-02-09 01:25:07

Friend and neighbour Karen James made me a Kākāpō mug. It has a charismatic Kākāpō, four Kākāpō chicks (in celebration of the 2026 breeding season) and even has some rimu fruit!

A simply spectacular sgraffito ceramic mug with a bold, charismatic Kākāpō parrot taking up most of the visible space. It has a yellow beard and green feathers.

Another side of the mug, two cute grey Kākāpō chicks are visible and three red rimu fruit that look like berries, one on the floor and two hanging from wiry branches.

I love it so much.

Tags: kakapo, art

Quoting Thomas Ptacek

2026-02-08 10:25:53

People on the orange site are laughing at this, assuming it's just an ad and that there's nothing to it. Vulnerability researchers I talk to do not think this is a joke. As an erstwhile vuln researcher myself: do not bet against LLMs on this.

Axios: Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source

I think vulnerability research might be THE MOST LLM-amenable software engineering problem. Pattern-driven. Huge corpus of operational public patterns. Closed loops. Forward progress from stimulus/response tooling. Search problems.

Vulnerability research outcomes are in THE MODEL CARDS for frontier labs. Those companies have so much money they're literally distorting the economy. Money buys vuln research outcomes. Why would you think they were faking any of this?

Thomas Ptacek

Tags: thomas-ptacek, anthropic, claude, security, generative-ai, ai, llms, open-source

Vouch

2026-02-08 07:57:57

Vouch

Mitchell Hashimoto's new system to help address the deluge of worthless AI-generated PRs faced by open source projects now that the friction involved in contributing has dropped so low.

He says:

The idea is simple: Unvouched users can't contribute to your projects. Very bad users can be explicitly "denounced", effectively blocked. Users are vouched or denounced by contributors via GitHub issue or discussion comments or via the CLI.

Integration into GitHub is as simple as adopting the published GitHub actions. Done. Additionally, the system itself is generic to forges and not tied to GitHub in any way.

Who and how someone is vouched or denounced is up to the project. I'm not the value police for the world. Decide for yourself what works for your project and your community.

Tags: open-source, ai, github-actions, generative-ai, mitchell-hashimoto, ai-ethics