2026-01-19 03:00:00
In his talk, Jake Nations pits easy vs. simple:
Easy means you can add it to your system quickly. Simple means you can understand the work that you’ve done.
I like this framing.
Easy means you can do something with little effort.
Simple means you can understand what you’ve done with little effort.
In other words: easy measures the effort in doing, while simple measures the effort in understanding the doing.
For example: npm create framework@latest or “Hey AI, build an Instagram clone”. Both get you a website with little effort (easy), but do you understand what you just did (simple)?
It’s easy to get complexity, but it’s not easy to get simplicity.
(I get this is arguing semantics and definitions, but I find it to be a useful framing personally. Thanks Jake!)
2026-01-12 03:00:00
I’ve been slowly reading my copy of “The Internet Phone Book” and I recently read an essay in it by Elan Ullendorff called “The New Turing Test”.
Elan argues that what matters in a work isn’t the tools used to make it, but the “expressiveness” of the work itself (was it made “from someone, for someone, in a particular context”):
If something feels robotic or generic, it is those very qualities that make the work problematic, not the tools used.
This point reminded me that there was slop before AI came on the scene.
A lot of blogging was considered a primal form of slop when the internet first appeared: content of inferior substance, generated in quantities much vaster than heretofore considered possible.
And the truth is, perhaps a lot of the content in the blogosphere was “slop”.
But it wasn’t slop because of the tools that made it — like Movable Type or WordPress or Blogger.
It was slop because it lacked thought, care, and intention — the “expressiveness” Elan argues for.
You don’t need AI to produce slop because slop isn’t made by AI. It’s made by humans — AI is just the popular tool of choice for making it right now.
Slop existed long before LLMs came onto the scene.
It will doubtless exist long after too.
2026-01-08 03:00:00
Matthias Ott shared a link to a post from Anthropic titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, which I read because I’m interested in the messy intersection of AI and security.
I gotta say: I don’t know if I’ve ever read anything quite like this article.
At first, the article felt like a responsible disclosure — “Hey, we’re reaching an inflection point where AI models are being used effectively for security exploits. Look at this one.”
But then I read further and found statements like this:
[In the attack] Claude didn’t always work perfectly. It occasionally hallucinated […] This remains an obstacle to fully autonomous cyberattacks.
Wait, so is that a feature or a bug? Is it a good thing that your tool hallucinated and proved a stumbling block? Or is this a bug you hope to fix?
The more I read, the more difficult it became to discern whether this security incident was a helpful warning or a feature sell.
With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and resourced groups can now potentially perform large-scale attacks of this nature.
Shoot, this sounds like a product pitch! Don’t have the experience or resources to keep up with your competitors who are cyberattacking? We’ve got a tool for you!
Wait, so if you’re creating something that can cause so much havoc, why are you still making it? Oh good, they address this exact question:
This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.
Ok, so the article is a product pitch.
But those are my words. Here’s theirs:
A fundamental change has occurred in cybersecurity. We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response. We also advise developers to continue to invest in safeguards across their AI platforms, to prevent adversarial misuse. The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.
It appears AI is simultaneously the problem and the solution.
It’s a great business to be in, if you think about it. You sell a tool for security exploits and you sell the self-same tool for protection against said exploits. Everybody wins!
I can’t help but read this post and think of a mafia shakedown. You know, where the mafia implies threats to get people to pay for their protection — a service they created the need for in the first place. “Nice system you got there, would be a shame if anyone hacked into it using AI. Better get some AI to protect yourself.”
I find it funny that the URL slug for the article is:
/disrupting-AI-espionage
That’s a missed opportunity. They could’ve named it:
/causing-and-disrupting-AI-espionage
2026-01-05 03:00:00
The setup for my notes blog is simple: I write posts as plain-text files in iA Writer and publish them to my website.
I try to catch spelling issues and whatnot before I publish, but I’m not perfect.
I can proofread a draft as much as I want, but nothing helps me catch errors better than hitting publish and re-reading what I just published on my website.
If that fails, kind readers will often reach out and say “Hey, I found a typo in your post [link].”
To fix these errors, I will: open iA Writer, find the post, make the fix, and republish.
However, the “Open iA Writer” and “Find the post” steps are points of friction I’ve wanted to optimize away.
I’ve found myself thinking: “When I’m reading a post on notes.jim-nielsen.com and I spot a mistake, I wish I could just click an ‘Edit’ link right there and be editing my file.”
You might be thinking, “Yeah that’s what a hosted CMS does.”
But I like my plain-text files. And I love my native writing app.
What’s one to do?
Well, turns out iA Writer supports opening files via links with this protocol:
ia-writer://open?path=location:/path/to/file.md
So, in my case, I can create a link for each post on my website that will open the corresponding plain-text file in iA Writer, e.g.
<article>
  <!-- content of post here -->
  <a href="ia-writer://open?path=notes:2026-01-04T2023.md">
    Edit
  </a>
</article>
And voilà, my OS is now my CMS!

It’s not a link to open the post in a hosted CMS somewhere. It’s a link to open a file on the device I’m using — cool!
My new workflow looks like this: spot a mistake while reading a post, click “Edit”, fix the file in iA Writer, and republish.
It works great. Here’s an example of opening a post from the browser on my laptop:
And another on my phone:
Granted, these “Edit” links are only useful to me. So I don’t put them in the source markup. Instead, I generate them with JavaScript when it’s just me browsing.
How do I know it’s just me?
I wrote a little script that watches for the presence of a search param, ?edit=true. If it’s present, my site generates an “Edit” link on every post with the correct href and stores that piece of state in localStorage, so every time I revisit the site the “Edit” links are rendered for me (but nobody else sees them).
Well, not nobody. Now that I’ve revealed my secret, I know you can go make the “Edit” links appear. But they won’t work for you because A) you don’t have iA Writer installed, or B) you don’t have my files on your device. So here’s a little tip if you tried rendering the “Edit” links: use ?edit=false to turn them back off :)
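If you’re curious, here’s a minimal sketch of the idea (not my exact script, and the data-file attribute is a hypothetical stand-in for however you map a post back to its file):

// Persist “edit mode” whenever the ?edit search param shows up.
const params = new URLSearchParams(window.location.search);
if (params.has("edit")) {
  localStorage.setItem("edit", params.get("edit"));
}

// When edit mode is on, render an "Edit" link on every post.
if (localStorage.getItem("edit") === "true") {
  for (const article of document.querySelectorAll("article[data-file]")) {
    const link = document.createElement("a");
    link.href = `ia-writer://open?path=notes:${article.dataset.file}`;
    link.textContent = "Edit";
    article.append(link);
  }
}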
2026-01-05 03:00:00
I don’t much enjoy being a lab rat for your half-baked ideas.
I can tell when your approach to what I use is: “Ship it and let’s see how people respond.”
Well let me tell you something: I’m not going to respond.
My desire to give you constructive feedback is directly proportional to the care you show: about your communications, about what you ship, even about what you don’t ship.
Just because you ship some half-baked feature doesn’t mean I’m going to take the time to tell you whether I find it any good.
Doubly so in the age of AI. I know how easy it is for you to ship slop, so why should I take the time to formulate careful feedback on your careless output?
I can disagree with product decisions, but I won’t get mad at thoughtfulness and care. I respect that.
But I will very much disagree with and get mad at product decisions devoid of thought and care. I have no respect for that.
It’s not really worth my time to respond to such a posture of shipping software, and yet here I am writing about it. Because I care about the things I choose to (or am required to) use.
So this is my one-time, general-purpose piece of feedback to all such purveyors of digital goods and tools: just because nobody tells you that what you shipped sucks doesn’t mean it’s worth keeping. You can’t measure an apathetic response because it is, by definition, the absence of data.
2025-12-29 03:00:00
In “The Future of Software Development is Software Developers” Jason Gorman alludes to how terrible natural language is at programming computers:
The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.
The work is the translation, from thought to tangible artifact. Like making a movie: everyone can imagine one, but it takes a director to produce one.
This is also the work of software development: translation. You take an idea — which is often communicated via natural language — and you translate it into functioning software. That is the work.
It’s akin to someone who translates natural languages, say Spanish to English. The work isn’t the words themselves, though that’s what we conflate it with.
You can ask someone to translate “te quiero” into English. And the resulting words “I love you” may seem like the job is complete. But the work isn’t coming up with the words. The work is gaining the experience to know how and when to translate the words based on clues like tone, context, and other subtleties of language. You must decipher intent. Does “te quiero” here mean “I love you” or “I like you” or “I care about you”?
This is precisely why natural language isn’t a good fit for programming: it’s not very precise. As Gorman says, “Natural languages have not evolved to be precise enough and unambiguous enough” for making software. Code is materialized intent. The question is: whose?
The request “let users sign in” has to be translated into constraints, validation, database tables, async flows, etc. You need pages and pages of the written word to translate that idea into some kind of functioning software. And if you don’t fill in those unspecified details, somebody else (cough AI cough) is just going to guess — and who wants their lives functioning on top of guessed intent?
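To make that concrete, here’s a sketch of a few of the decisions hiding inside “let users sign in”. Every name and value below is a guess I’m making up, which is exactly the point:

// "Let users sign in" leaves all of these decisions unspecified.
// Every value and rule below is somebody's guess:
const SESSION_TTL_MS = 24 * 60 * 60 * 1000; // how long is a session? a day?
const users = new Map(); // where do accounts live? (a stand-in store)

function signIn(email, password) {
  // Is email case-insensitive? Do we trim whitespace?
  const user = users.get(email.trim().toLowerCase());

  // Unknown account and wrong password: same error or different?
  // (Different messages leak which emails have accounts.)
  if (!user || user.password !== password) {
    // Comparing plain text is itself a guessed decision;
    // real systems hash passwords.
    throw new Error("Invalid credentials");
  }

  // One session per user or many? Cookie or token? Lockout after
  // repeated failures? None of that was in the request either.
  return { userId: user.id, expiresAt: Date.now() + SESSION_TTL_MS };
}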
Computers are pedants. They need to be told everything precisely, otherwise you’ll ask for one thing and get another. “Do what I mean, not what I say” is a common refrain in working with computers. I can’t tell you how many times I’ve spent hours troubleshooting an issue only to discover the culprit was a minor syntactical mistake. The computer was doing what I typed, not what I meant.
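Here’s one classic flavor of this in JavaScript (an illustration, not my actual bug):

const user = { isAdmin: false };

// One character off: assignment (=) instead of comparison (===).
// The condition assigns true to user.isAdmin and then evaluates
// to true, so the branch runs for every user.
if (user.isAdmin = true) {
  console.log("admin access granted");
}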
So the work of making software is translating human thought and intent into functioning computation (not merely writing, or generating, lines of code).