2026-02-12 01:34:40
cysqlite - a new sqlite driver
Charles Leifer has been maintaining pysqlite3 - a fork of the Python standard library's sqlite3 module that makes it much easier to run upgraded SQLite versions - since 2018.
He's been working on a ground-up Cython rewrite called cysqlite for almost as long, but it's finally at a stage where it's ready for people to try out.
The biggest change from the sqlite3 module involves transactions. Charles explains his discomfort with the sqlite3 implementation at length - that library provides two different variants, neither of which exactly matches the autocommit mechanism in SQLite itself.
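If you haven't run into that confusion before, here's a quick illustration of the standard library behavior (stdlib sqlite3 only - cysqlite's own API may differ):
import sqlite3

# Legacy stdlib behavior: an implicit BEGIN is issued before
# INSERT/UPDATE/DELETE and the transaction stays open until commit().
conn = sqlite3.connect(":memory:")
conn.execute("create table t (x)")
conn.execute("insert into t values (1)")
print(conn.in_transaction)  # True - nothing is committed until conn.commit()

# Python 3.12+ added an autocommit flag that matches SQLite's native
# autocommit mode: statements take effect immediately unless you issue
# an explicit BEGIN yourself.
conn = sqlite3.connect(":memory:", autocommit=True)
conn.execute("create table t (x)")
conn.execute("insert into t values (1)")
print(conn.in_transaction)  # False - SQLite committed it immediately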
I'm particularly excited about the support for custom virtual tables, a feature I'd love to see in sqlite3 itself.
cysqlite provides a Python extension compiled from C, which means it normally wouldn't be available in Pyodide. I set Claude Code on it and it built me cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl, a 688KB wheel file with a WASM build of the library that can be loaded into Pyodide like this:
import micropip
await micropip.install(
    "https://simonw.github.io/research/cysqlite-wasm-wheel/cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl"
)
import cysqlite
print(cysqlite.connect(":memory:").execute(
    "select sqlite_version()"
).fetchone())
(I also learned that wheels like this have to be built for the Emscripten version used by that edition of Pyodide - my experimental wheel loads in Pyodide 0.25.1 but fails in 0.27.5 with a "Wheel was built with Emscripten v3.1.46 but Pyodide was built with Emscripten v3.1.58" error.)
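If you want to check which Emscripten toolchain your current Python runtime was built with, Emscripten builds of CPython 3.11+ expose it on sys._emscripten_info - worth comparing against the wheel before installing:
import sys

# Only present on Emscripten/WASM builds of CPython (3.11+); the wheel's
# Emscripten version needs to match this one.
print(sys._emscripten_info.emscripten_version)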
You can try my wheel in this new Pyodide REPL I had Claude build as a mobile-friendly alternative to Pyodide's own hosted console.
I also had Claude build this demo page that executes the original test suite in the browser and displays the results:

Via lobste.rs
Tags: python, sqlite, charles-leifer, webassembly, pyodide, ai-assisted-programming, claude-code
2026-02-11 01:45:29
A key challenge working with coding agents is having them both test what they’ve built and demonstrate that software to you, their overseer. This goes beyond automated tests - we need artifacts that show their progress and help us see exactly what the agent-produced software is able to do. I’ve just released two new tools aimed at this problem: Showboat and Rodney.
I recently wrote about how the job of a software engineer isn't to write code, it's to deliver code that works. A big part of that is proving to ourselves and to other people that the code we are responsible for behaves as expected.
This becomes even more important - and challenging - as we embrace coding agents as a core part of our software development process.
The more code we churn out with agents, the more valuable it becomes to have tools that reduce the amount of manual QA time we need to spend.
One of the most interesting things about the StrongDM software factory model is how they ensure that their software is well tested and delivers value despite their policy that "code must not be reviewed by humans". Part of their solution involves expensive swarms of QA agents running through "scenarios" to exercise their software. It's fascinating, but I don't want to spend thousands of dollars on QA robots if I can avoid it!
I need tools that allow agents to clearly demonstrate their work to me, while minimizing the opportunities for them to cheat about what they've done.
Showboat is the tool I built to help agents demonstrate their work to me.
It's a CLI tool (a Go binary, optionally wrapped in Python to make it easier to install) that helps an agent construct a Markdown document demonstrating exactly what their newly developed code can do.
It's not designed for humans to run, but here's how you would run it anyway:
showboat init demo.md 'How to use curl and jq'
showboat note demo.md "Here's how to use curl and jq together."
showboat exec demo.md bash 'curl -s https://api.github.com/repos/simonw/rodney | jq .description'
showboat note demo.md 'And the curl logo, to demonstrate the image command:'
showboat image demo.md 'curl -o curl-logo.png https://curl.se/logo/curl-logo.png && echo curl-logo.png'
Here's what the result looks like if you open it up in VS Code and preview the Markdown:

Here's that demo.md file in a Gist.
So a sequence of showboat init, showboat note, showboat exec and showboat image commands constructs a Markdown document one section at a time, with the output of those exec commands automatically added to the document directly following the commands that were run.
The image command is a little special - it looks for a file path to an image in the output of the command and copies that image to the current folder and references it in the file.
That's basically the whole thing! There's a pop command to remove the most recently added section if something goes wrong, a verify command to re-run the document and check nothing has changed (I'm not entirely convinced by the design of that one), and an extract command that reverse-engineers the CLI commands that were used to create the document.
It's pretty simple - just 172 lines of Go.
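If it helps to picture what those commands do, here's a rough Python sketch of the exec and image steps as I understand them from the description above - the real implementation is Go, and details like the exact Markdown layout are my guesses:
import pathlib, shutil, subprocess

def exec_step(doc, interpreter, command):
    # Run the command and append both the command and its output to the
    # document (the precise Markdown formatting here is a guess).
    result = subprocess.run([interpreter, "-c", command],
                            capture_output=True, text=True)
    with open(doc, "a") as f:
        f.write(f"\n    $ {command}\n    {result.stdout}\n")

def image_step(doc, command):
    # Run the command, treat the last line of its output as a path to an
    # image, copy that image next to the document and reference it.
    out = subprocess.run(["bash", "-c", command],
                         capture_output=True, text=True).stdout.strip()
    image = pathlib.Path(out.splitlines()[-1])
    dest = pathlib.Path(doc).parent / image.name
    if image.resolve() != dest.resolve():
        shutil.copy(image, dest)
    with open(doc, "a") as f:
        f.write(f"\n![{image.name}]({image.name})\n")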
I packaged it up with my go-to-wheel tool which means you can run it without even installing it first like this:
uvx showboat --help
That --help command is really important: it's designed to provide a coding agent with everything it needs to know in order to use the tool. Here's that help text in full.
This means you can pop open Claude Code and tell it:
Run "uvx showboat --help" and then use showboat to create a demo.md document describing the feature you just built
And that's it! The --help text acts a bit like a Skill. Your agent can read the help text and use every feature of Showboat to create a document that demonstrates whatever it is you need demonstrated.
Here's a fun trick: if you set Claude off to build a Showboat document you can pop that open in VS Code and watch the preview pane update in real time as the agent runs through the demo. It's a bit like having your coworker talk you through their latest work in a screensharing session.
And finally, some examples. Here are documents I had Claude create using Showboat to help demonstrate features I was working on in other projects:
showboat image command.
row-state-sql CLI Demo shows a new row-state-sql command I added to that same project.
Change grouping with Notes demonstrates another feature where groups of changes within the same transaction can have a note attached to them.
I've now used Showboat often enough that I've convinced myself of its utility.
(I've also seen agents cheat! Since the demo file is Markdown the agent will sometimes edit that file directly rather than using Showboat, which could result in command outputs that don't reflect what actually happened. Here's an issue about that.)
Many of the projects I work on involve web interfaces. Agents often build entirely new pages for these, and I want to see those represented in the demos.
Showboat's image feature was designed to allow agents to capture screenshots as part of their demos, originally using my shot-scraper tool or Playwright.
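Playwright handles the one-off capture case fine - a command like this hedged sketch is all an agent needs to pass to showboat image (the URL and filename are placeholders, not anything Showboat mandates):
from playwright.sync_api import sync_playwright

# Capture a full-page screenshot of a local dev server.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:8000/")
    page.screenshot(path="homepage.png", full_page=True)
    browser.close()

# showboat image looks for the image path in the command's output
print("homepage.png")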
The Showboat format benefits from CLI utilities. I went looking for good options for managing a multi-turn browser session from a CLI and came up short, so I decided to try building something new.
Claude Opus 4.6 pointed me to the Rod Go library for interacting with the Chrome DevTools protocol. It's fantastic - it provides a comprehensive wrapper across basically everything you can do with automated Chrome, all in a self-contained library that compiles to a few MBs.
All Rod was missing was a CLI.
I built the first version as an asynchronous report prototype, which convinced me it was worth spinning out into its own project.
I called it Rodney as a nod to the Rod library it builds on and a reference to Only Fools and Horses - and because the package name was available on PyPI.
You can run Rodney using uvx rodney or install it like this:
uv tool install rodney
(Or grab a Go binary from the releases page.)
Here's a simple example session:
rodney start # starts Chrome in the background
rodney open https://datasette.io/
rodney js 'Array.from(document.links).map(el => el.href).slice(0, 5)'
rodney click 'a[href="/for"]'
rodney js location.href
rodney js document.title
rodney screenshot datasette-for-page.png
rodney stop
Here's what that looks like in the terminal:
~ % rodney start
Chrome started (PID 91462)
Debug URL: ws://127.0.0.1:64623/devtools/browser/cac6988e-8153-483b-80b9-1b75c611868d
~ % rodney open https://datasette.io/
Datasette: An open source multi-tool for exploring and publishing data
~ % rodney js 'Array.from(document.links).map(el => el.href).slice(0, 5)'
[
  "https://datasette.io/for",
  "https://docs.datasette.io/en/stable/",
  "https://datasette.io/tutorials",
  "https://datasette.io/examples",
  "https://datasette.io/plugins"
]
~ % rodney click 'a[href="/for"]'
Clicked
~ % rodney js location.href
https://datasette.io/for
~ % rodney js document.title
Use cases for Datasette
~ % rodney screenshot datasette-for-page.png
datasette-for-page.png
~ % rodney stop
Chrome stopped
(Terminal screenshot: https://static.simonwillison.net/static/2026/rodney-demo.jpg)
As with Showboat, this tool is not designed to be used by humans! The goal is for coding agents to be able to run rodney --help and see everything they need to know to start using the tool. You can see that help output in the GitHub repo.
Here are three demonstrations of Rodney that I created using Showboat:
After being a career-long skeptic of the test-first, maximum test coverage school of software development (I like "tests included" development instead), I've recently come around to test-first processes as a way to force agents to write only the code that's necessary to solve the problem at hand.
Many of my Python coding agent sessions start the same way:
Run the existing tests with "uv run pytest". Build using red/green TDD.
Telling the agents how to run the tests doubles as an indicator that tests on this project exist and matter. Agents will read existing tests before writing their own, so having a clean test suite with good patterns makes it more likely they'll write good tests themselves.
The frontier models all understand that "red/green TDD" means they should write the test first, run it and watch it fail and then write the code to make it pass - it's a convenient shortcut.
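In practice the red step can be tiny. A hypothetical example - the module and function names here are made up:
# tests/test_slugify.py - written before the implementation exists
def test_slugify_lowercases_and_hyphenates():
    # Fails (red) until the agent implements slugify(), then passes (green)
    from myproject.text import slugify  # hypothetical module
    assert slugify("Hello World!") == "hello-world"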
I find this greatly increases the quality of the code and the likelihood that the agent will produce the right thing with the smallest number of prompts to guide it.
But anyone who's worked with tests will know that just because the automated tests pass doesn't mean the software actually works! That’s the motivation behind Showboat and Rodney - I never trust any feature until I’ve seen it running with my own eyes.
Before building Showboat I'd often add a “manual” testing step to my agent sessions, something like:
Once the tests pass, start a development server and exercise the new feature using curl
Both Showboat and Rodney started life as Claude Code for web projects, created via the Claude iPhone app. Most of the ongoing feature work for them happened in the same way.
I'm still a little startled at how much of my coding work I get done on my phone now, but I'd estimate that the majority of code I ship to GitHub these days was written for me by coding agents driven via that iPhone app.
I initially designed these two tools for use in asynchronous coding agent environments like Claude Code for the web. So far that's working out really well.
Tags: go, projects, testing, markdown, ai, generative-ai, llms, ai-assisted-programming, coding-agents, async-coding-agents
2026-02-10 07:56:51
Structured Context Engineering for File-Native Agentic Systems
New paper by Damon McMillan exploring challenging LLM context tasks involving large SQL schemas (up to 10,000 tables) across different models and file formats:
Using SQL generation as a proxy for programmatic agent operations, we present a systematic study of context engineering for structured data, comprising 9,649 experiments across 11 models, 4 formats (YAML, Markdown, JSON, Token-Oriented Object Notation [TOON]), and schemas ranging from 10 to 10,000 tables.
Unsurprisingly, the biggest impact was the models themselves - with frontier models (Opus 4.5, GPT-5.2, Gemini 2.5 Pro) beating the leading open source models (DeepSeek V3.2, Kimi K2, Llama 4).
Those frontier models benefited from filesystem based context retrieval, but the open source models had much less convincing results with those, which reinforces my feeling that the filesystem coding agent loops aren't handled as well by open weight models just yet. The Terminal Bench 2.0 leaderboard is still dominated by Anthropic, OpenAI and Gemini.
The "grep tax" result against TOON was an interesting detail. TOON is meant to represent structured data in as few tokens as possible, but it turns out the model's unfamiliarity with that format led to them spending significantly more tokens over multiple iterations trying to figure it out:

Via @omarsar0
Tags: ai, prompt-engineering, generative-ai, llms, paper-review, context-engineering
2026-02-10 00:43:07
AI Doesn’t Reduce Work—It Intensifies It
Aruna Ranganathan and Xingqi Maggie Ye from Berkeley Haas School of Business report initial findings in the HBR from their April to December 2025 study of 200 employees at a "U.S.-based technology company".
This captures an effect I've been observing in my own work with LLMs: the productivity boost these things can provide is exhausting.
AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.
While this sense of having a “partner” enabled a feeling of momentum, the reality was a continual switching of attention, frequent checking of AI outputs, and a growing number of open tasks. This created cognitive load and a sense of always juggling, even as the work felt productive.
I'm frequently finding myself with work on two or three projects running in parallel. I can get so much done, but after just an hour or two my mental energy for the day feels almost entirely depleted.
I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
The HBR piece calls for organizations to build an "AI practice" that structures how AI is used to help avoid burnout and counter effects that "make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity".
I think we've just disrupted decades of existing intuition about sustainable working practices. It's going to take a while and some discipline to find a good new balance.
Via Hacker News
Tags: careers, ai, generative-ai, llms, ai-assisted-programming, ai-ethics
2026-02-09 01:25:07
Friend and neighbour Karen James made me a Kākāpō mug. It has a charismatic Kākāpō, four Kākāpō chicks (in celebration of the 2026 breeding season) and even has some rimu fruit!


I love it so much.
2026-02-08 10:25:53
People on the orange site are laughing at this, assuming it's just an ad and that there's nothing to it. Vulnerability researchers I talk to do not think this is a joke. As an erstwhile vuln researcher myself: do not bet against LLMs on this.
Axios: Anthropic's Claude Opus 4.6 uncovers 500 zero-day flaws in open-source
I think vulnerability research might be THE MOST LLM-amenable software engineering problem. Pattern-driven. Huge corpus of operational public patterns. Closed loops. Forward progress from stimulus/response tooling. Search problems.
Vulnerability research outcomes are in THE MODEL CARDS for frontier labs. Those companies have so much money they're literally distorting the economy. Money buys vuln research outcomes. Why would you think they were faking any of this?
Tags: thomas-ptacek, anthropic, claude, security, generative-ai, ai, llms, open-source