2026-03-13 04:08:55
MALUS - Clean Room as a Service
Brutal satire on the whole vibe-porting license-washing thing (previously): Finally, liberation from open source license obligations.
Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.
I admit it took me a moment to confirm that this was a joke. Just too on-the-nose.
Via Hacker News
Tags: open-source, ai, generative-ai, llms, ai-ethics
2026-03-13 03:23:44
Coding After Coders: The End of Computer Programming as We Know It
Epic piece on AI-assisted development by Clive Thompson for the New York Times Magazine. Thompson spoke to more than 70 software developers from companies like Google, Amazon, Microsoft, and Apple, plus other individuals including Anil Dash, Thomas Ptacek, Steve Yegge, and myself. I think the piece accurately and clearly captures what's going on in our industry right now, in terms appropriate for a wider audience.
I talked to Clive a few weeks ago. Here's the quote from me that made it into the piece.
Given A.I.’s penchant to hallucinate, it might seem reckless to let agents push code out into the real world. But software developers point out that coding has a unique quality: They can tether their A.I.s to reality, because they can demand the agents test the code to see if it runs correctly. “I feel like programmers have it easy,” says Simon Willison, a tech entrepreneur and an influential blogger about how to code using A.I. “If you’re a lawyer, you’re screwed, right?” There’s no way to automatically check a legal brief written by A.I. for hallucinations — other than face total humiliation in court.
The piece does raise the question of what this means for the future of our chosen line of work, but the general attitude from the developers interviewed was optimistic - there's even a mention of the possibility that the Jevons paradox might increase demand overall.
One critical voice came from an Apple engineer:
A few programmers did say that they lamented the demise of hand-crafting their work. “I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that,” one Apple engineer told me. (He asked to remain unnamed so he wouldn’t get in trouble for criticizing Apple’s embrace of A.I.)
That request to remain anonymous is a sharp reminder that corporate dynamics may be suppressing an unknown number of voices on this topic.
Tags: new-york-times, careers, ai, generative-ai, llms, ai-assisted-programming, press-quotes, deep-blue
2026-03-13 00:28:07
Here's what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.
Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.
Now there's a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.
— Les Orchard, Grief and the AI Split
Tags: les-orchard, ai-assisted-programming, generative-ai, ai, llms, careers, deep-blue
2026-03-12 06:58:06
Today in animated explanations built using Claude: I've always been a fan of animated demonstrations of sorting algorithms so I decided to spin some up on my phone using Claude Artifacts, then added Python's timsort algorithm, then a feature to run them all at once. Here's the full sequence of prompts:
Interactive animated demos of the most common sorting algorithms
This gave me bubble sort, selection sort, insertion sort, merge sort, quick sort, and heap sort.
Add timsort, look up details in a clone of python/cpython from GitHub
Let's add Python's Timsort! Regular Claude chat can clone repos from GitHub these days. In the transcript you can see it clone the repo and then consult Objects/listsort.txt and Objects/listobject.c. (I should note that when I asked GPT-5.4 Thinking to review Claude's implementation it picked holes in it and said the code "is a simplified, Timsort-inspired adaptive mergesort".)
I don't like the dark color scheme on the buttons, do better
Also add a "run all" button which shows smaller animated charts for every algorithm at once in a grid and runs them all at the same time
It came up with a color scheme I liked better, "do better" is a fun prompt, and now the "Run all" button produces this effect:
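The core trick behind any animated sorting demo is capturing the intermediate states of the list as the algorithm runs. This isn't the code Claude generated, just a minimal sketch of the idea in Python: a bubble sort written as a generator that yields a snapshot after every swap, which a UI can render as animation frames.

```python
def bubble_sort_frames(values):
    """Yield a snapshot of the list after every swap.

    Each snapshot is one animation frame; rendering them in
    sequence visualises the algorithm at work.
    """
    items = list(values)
    yield tuple(items)  # initial state
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                yield tuple(items)

frames = list(bubble_sort_frames([5, 1, 4, 2]))
# The final frame is the fully sorted list.
```

The "run all" grid is then just the same frame streams for each algorithm, advanced in lockstep.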

Tags: algorithms, computer-science, javascript, sorting, ai, explorables, generative-ai, llms, claude, vibe-coding
2026-03-11 22:47:09
It is hard for less experienced developers to appreciate how rarely architecting for future requirements / applications turns out net-positive.
— John Carmack, a tweet in June 2021
Tags: john-carmack, software-engineering, yagni
2026-03-11 06:25:09
Agentic Engineering Patterns
Many developers worry that outsourcing their coding to AI tools will result in a drop in quality: bad code, churned out fast enough that decision makers are willing to overlook its flaws.
If adopting coding agents demonstrably reduces the quality of the code and features you are producing, you should address that problem directly: figure out which aspects of your process are hurting the quality of your output and fix them.
Shipping worse code with agents is a choice. We can choose to ship code that is better instead.
I like to think about shipping better code in terms of technical debt. We take on technical debt as the result of trade-offs: doing things "the right way" would take too long, so we work within the time constraints we are under and cross our fingers that our project will survive long enough to pay down the debt later on.
The best mitigation for technical debt is to avoid taking it on in the first place.
In my experience, a common category of technical debt fixes is changes that are conceptually simple but time-consuming: they need dedicated time, which can be hard to justify given more pressing issues.
Refactoring tasks like this are an ideal application of coding agents.
Fire up an agent, tell it what to change and leave it to churn away in a branch or worktree somewhere in the background.
I usually use asynchronous coding agents for this, such as Gemini Jules, OpenAI Codex web, or Claude Code on the web. That way I can run those refactoring jobs without interrupting my flow on my laptop.
Evaluate the result in a Pull Request. If it's good, land it. If it's almost there, prompt it and tell it what to do differently. If it's bad, throw it away.
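The "branch or worktree somewhere in the background" step can itself be scripted. Here's a minimal sketch in Python, assuming git is on your PATH; the branch name "refactor-cleanup" and the helper function are made up for illustration, and the demo runs against a throwaway repository so it doesn't touch anything real:

```python
import pathlib
import subprocess
import tempfile

def make_agent_worktree(repo: pathlib.Path, branch: str) -> pathlib.Path:
    """Create a sibling worktree on a fresh branch for a background agent.

    The agent can churn away in the returned directory without
    disturbing the main checkout.
    """
    worktree = repo.parent / f"{repo.name}-{branch}"
    subprocess.run(
        ["git", "worktree", "add", "-b", branch, str(worktree)],
        cwd=repo, check=True,
    )
    return worktree

# Demo against a throwaway repository:
root = pathlib.Path(tempfile.mkdtemp())
repo = root / "project"
repo.mkdir()
subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
(repo / "README.md").write_text("demo\n")
subprocess.run(["git", "add", "."], cwd=repo, check=True)
subprocess.run(["git", "-c", "user.email=demo@example.com",
                "-c", "user.name=demo", "commit", "-qm", "init"],
               cwd=repo, check=True)

wt = make_agent_worktree(repo, "refactor-cleanup")
print(wt)  # an isolated checkout on its own branch
```

When the agent's work lands (or gets thrown away), `git worktree remove` cleans up the directory.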
The cost of these code improvements has dropped so low that we can afford a zero tolerance attitude to minor code smells and inconveniences.
Any software development task comes with a wealth of options for approaching the problem. Some of the most significant technical debt comes from making poor choices at the planning step - missing out on an obvious simple solution, or picking a technology that later turns out not to be exactly the right fit.
LLMs can help ensure we don't miss obvious solutions that hadn't crossed our radar. They'll only suggest approaches that are common in their training data, but those tend to be the Boring Technology that's most likely to work.
More importantly, coding agents can help with exploratory prototyping.
The best way to make confident technology choices is to prove that they are fit for purpose with a prototype.
Is Redis a good choice for the activity feed on a site which expects thousands of concurrent users?
The best way to know for sure is to wire up a simulation of that system and run a load test against it to see what breaks.
Coding agents can build this kind of simulation from a single well crafted prompt, which drops the cost of this kind of experiment to almost nothing. And since they're so cheap we can run multiple experiments at once, testing several solutions to pick the one that is the best fit for our problem.
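To make that concrete, here's a rough sketch of what such a throwaway load test might look like. This isn't the real experiment (that would wire up an actual Redis client); it uses a hypothetical in-memory stand-in for the backend so the harness itself is self-contained, with simulated users hammering the feed concurrently:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class InMemoryFeed:
    """Stand-in for the real activity feed backend (e.g. a Redis list).

    Swap in a real client here to load-test the actual system.
    """
    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def push(self, item):
        with self._lock:
            self._items.append(item)

    def latest(self, n=10):
        with self._lock:
            return self._items[-n:]

def simulate_user(feed, user_id, writes=100):
    # Each simulated user alternates writes and reads,
    # roughly mimicking posting to and viewing the feed.
    for i in range(writes):
        feed.push((user_id, i))
        feed.latest()

feed = InMemoryFeed()
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    for uid in range(200):  # 200 simulated users, 50 at a time
        pool.submit(simulate_user, feed, uid)
elapsed = time.perf_counter() - start
ops = 200 * 100 * 2  # pushes plus reads
print(f"{ops} operations in {elapsed:.2f}s ({ops / elapsed:,.0f} ops/s)")
```

The numbers from the stub are meaningless on their own; the point is that the harness is cheap enough to build that running it against two or three candidate backends costs almost nothing.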
Agents follow instructions. We can evolve these instructions over time to get better results from future runs, based on what we've learned previously.
Dan Shipper and Kieran Klaassen at Every describe their company's approach to working with coding agents as Compound Engineering. Every coding project they complete ends with a retrospective, which they call the compound step: they take what worked and document it for future agent runs.
If we want the best results from our agents, we should aim to continually increase the quality of our codebase over time. Small improvements compound. Quality enhancements that used to be time-consuming have now dropped in cost to the point that there's no excuse not to invest in quality at the same time as shipping new features. Coding agents mean we can finally have both.
Tags: coding-agents, ai-assisted-programming, generative-ai, agentic-engineering, ai, llms