2025-12-29 03:00:00
In “The Future of Software Development is Software Developers” Jason Gorman alludes to how terrible natural language is at programming computers:
The hard part of computer programming isn’t expressing what we want the machine to do in code. The hard part is turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.
The work is the translation, from thought to tangible artifact. Like making a movie: everyone can imagine one, but it takes a director to produce one.
This is also the work of software development: translation. You take an idea — which is often communicated via natural language — and you translate it into functioning software. That is the work.
It’s akin to the work of someone who translates natural languages, say Spanish to English. The work isn’t the words themselves, though we often conflate the two.
You can ask someone to translate “te quiero” into English. And the resulting words, “I love you,” may seem like the job is done. But the work isn’t coming up with the words. The work is gaining the experience to know how and when to translate them based on clues like tone, context, and other subtleties of language. You must decipher intent. Does “te quiero” here mean “I love you” or “I like you” or “I care about you”?
This is precisely why natural language isn’t a good fit for programming: it’s not very precise. As Gorman says, “Natural languages have not evolved to be precise enough and unambiguous enough” for making software. Code is materialized intent. The question is: whose?
The request “let users sign in” has to be translated into constraints, validation, database tables, async flows, etc. You need pages and pages of the written word to translate that idea into some kind of functioning software. And if you don’t fill in those unspecified details, somebody else (cough AI cough) is just going to guess — and who wants their lives functioning on top of guessed intent?
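To make that concrete, here’s a minimal, hypothetical sketch (TypeScript, invented names, nothing from a real codebase) of just a handful of the decisions hiding inside “let users sign in.” Every comment marks a question somebody has to answer precisely, or somebody (or something) else will guess it for them.

```ts
// Hypothetical sketch: some of the decisions hiding inside "let users sign in".
type SignInResult =
  | { ok: true; sessionToken: string }
  | { ok: false; reason: "invalid-credentials" | "locked-out" | "bad-input" };

// Assumed helpers (not a real API); each one hides even more decisions.
declare function findUserByEmail(
  email: string
): Promise<{ id: string; passwordHash: string; failedAttempts: number } | null>;
declare function verifyPassword(plaintext: string, hash: string): Promise<boolean>;
declare function createSession(userId: string): Promise<string>;

export async function signIn(rawEmail: string, password: string): Promise<SignInResult> {
  // Decision: is the email case-insensitive? Do we trim whitespace?
  const email = rawEmail.trim().toLowerCase();

  // Decision: what counts as valid input, and how much do we reveal about why it failed?
  if (!email.includes("@") || password.length === 0) {
    return { ok: false, reason: "bad-input" };
  }

  const user = await findUserByEmail(email);

  // Decision: do unknown emails and wrong passwords return the same error,
  // so we don't leak which accounts exist?
  if (user === null) {
    return { ok: false, reason: "invalid-credentials" };
  }

  // Decision: how many failed attempts before lockout, and for how long?
  if (user.failedAttempts >= 5) {
    return { ok: false, reason: "locked-out" };
  }

  // Decision: which hashing scheme, and what happens when it needs upgrading?
  if (!(await verifyPassword(password, user.passwordHash))) {
    return { ok: false, reason: "invalid-credentials" };
  }

  // Decision: how long does a session live, where is it stored, how is it revoked?
  const sessionToken = await createSession(user.id);
  return { ok: true, sessionToken };
}
```

And that’s before sessions expire, passwords reset, emails change, or a second factor shows up. Every one of those is more translation.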
Computers are pedants. They need everything spelled out precisely, otherwise you’ll ask for one thing and get another. “Do what I mean, not what I say” is a common refrain in working with computers. I can’t tell you how many times I’ve spent hours troubleshooting an issue only to realize the culprit was a minor syntactical mistake. The computer was doing what I typed, not what I meant.
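A classic, generic illustration in TypeScript (not any particular bug from those debugging sessions):

```ts
// The computer does exactly what you type, not what you mean.
const scores = [10, 1, 2];

// Meant: sort the numbers. Typed: the default sort, which compares elements as strings.
scores.sort();
console.log(scores); // [1, 10, 2] because "10" sorts before "2" as text

// Saying what you mean requires saying it precisely:
scores.sort((a, b) => a - b);
console.log(scores); // [1, 2, 10]
```

One small gap in precision, and the program happily does the wrong thing with total confidence.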
So the work of making software is translating human thought and intent into functioning computation (not merely writing, or generating, lines of code).
2025-12-22 03:00:00
I recommended against using an AI browser unless you wanted to participate in a global experiment in security. My recommendation did come with a caveat:
But probably don’t listen to me. I’m not a security expert.
Well, now the experts (that you pay for) have weighed in.
Gartner, the global research and advisory firm, has come to the conclusion that agentic browsers are too risky for most organizations.
Groundbreaking research.
But honestly, credit where it’s due: they’re not jumping on the hype train. In fact, they’re advising against it.
I don’t have access to the original paper (because I’d have to pay Gartner for it), but the reporting on Gartner’s research says this:
research VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts observe “Default AI browser settings prioritize user experience over security.”
C’mon, let’s call a spade a spade: they prioritize their maker’s business model over security.
Continuing:
Gartner’s fears about the agentic capabilities of AI browser relate to their susceptibility to “indirect prompt-injection-induced rogue agent actions, inaccurate reasoning-driven erroneous agent actions, and further loss and abuse of credentials if the AI browser is deceived into autonomously navigating to a phishing website.”
And that’s just the beginning! It gets worse for large organizations.
The real horror of these AI browsers is that they can help employees autonomously complete their mandatory trainings:
The authors also suggest that employees “might be tempted to use AI browsers and automate certain tasks that are mandatory, repetitive, and less interesting” and imagine some instructing an AI browser to complete their mandatory cybersecurity training sessions.
The horror!
In this specific case, maybe AI browsers aren’t the problem? Maybe they’re a symptom of the agonizing online instructional courses that feign training in the name of compliance?
But I digress. Ultimately, the takeaway here is:
the trio of analysts think AI browsers are just too dangerous to use
Imagine that: you take a tool that literally comes with a warning of being untrustworthy, you embed it as foundational in another tool, and now you have two tools that are untrustworthy. Who would’ve thought?
2025-12-19 03:00:00
If you subscribe to this blog, you must like it — right? I mean, you are subscribed to it.
And if you like this blog, you might also like my notes blog.
It’s where I take short notes of what I read, watch, listen to, or otherwise consume, add my two cents, and fire it off into the void of the internet.
It’s sort of like a “link blog” but I’m not necessarily recommending everything I link to. It’s more of “This excerpt stood out to me in some way, here’s my thoughts on why.”
It’s nice to have a place where I can jot down a few notes, fire off my reaction, and nobody can respond to it lol. At least, not in any easy, frictionless way. You’d have to go out of your way to read my commentary, find my contact info, and fire off a message (critiquing or praising). That’s how I like it. Cuts through the noise.
Anyway, this is all a long way of saying: if you didn’t already know about my notes blog, you might like it. Check it out or subscribe.
Today, for example, I posted lots of grumpy commentary.
2025-12-17 03:00:00
My last article was blogging off Jeremy’s article which blogged off Chris’ article and, after publishing, a reader tipped me off to the Gell-Mann amnesia effect which sounds an awful lot like Chris’ “Jeopardy Phenomenon”. Here’s Wikipedia:
The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.
According to Wikipedia, the concept was named by Michael Crichton because of a conversation he once had with physicist Murray Gell-Mann (humorously, he said that by associating a famous name with the concept he could imply greater importance to it — and himself — than otherwise possible).
Here’s Crichton:
you read with exasperation or amusement the multiple errors in a story—and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.
He argues that this effect doesn’t seem to translate to other aspects of our lives. The courts, for example, have a related concept of “false in one thing, false in everything”.
Even in ordinary life, Crichton says, “if somebody consistently exaggerates or lies to you, you soon discount everything they say”.
In other words: if your credibility takes a hit in one area, it’s gonna take a hit across the board.
At least, that’s his line of reasoning.
It’s kind of fascinating to think about this in our current moment of AI. Allow me to rephrase Crichton.
You read with exasperation the multiple errors in AI’s “answer”, then start a new chat and read with renewed interest and faith as if the next “answer” is somehow more accurate than the last. You start a new prompt and forget what you know.
If a friend, acquaintance, or family member were to consistently exaggerate or lie to you, you’d quickly adopt a posture of discounting everything they say. But with AI — which even comes with a surgeon general’s warning, e.g. “AI can make mistakes. Check important info.” — we forgive and forget.
Forget. Maybe that’s the keyword for our behavior. It is for Crichton:
The only possible explanation for our behavior is amnesia.
2025-12-15 03:00:00
There’s the thing where if you’re reading an article in the newspaper, and it’s about stuff you don’t know a ton about, it all seems well and good. Then you read another article in the same paper, and it’s about something you know intimately (your job, your neighborhood, your hobby, etc.), and there’s a good chance you’ll be like hey! that’s not quite right!
Chris extends this idea to AI-generated code, i.e. if you don’t know or understand the generated code you probably think, “Looks good to me!” But if you do know it you probably think, “Wait a second, that’s not quite right.”
Here’s Jeremy Keith riffing on Chris’ thoughts:
I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”
Kind of feels like this boils down to: How do we know what we know?
To be fair, that’s a question I’ve wrestled with my whole life.
And the older I get, the more and more I realize how often we barely know anything.
There’s a veneer of surety everywhere in the world.
There are industries of people and services who will take your money in exchange for a sense of surety — influencers, consultants, grifters, AI, they all exist because we are more than willing to pay with our time, attention, and money to feel like we “know” something.
“You’re absolutely right!”
But I, for one, often feel increasingly unsure of everything I thought I knew.
For example: I can’t count the number of times I thought I understood a piece of history, only to later find out that the commonly-accepted belief comes to us from a single source, written decades later in a diary or on a piece of parchment or on a stone, by someone with blind spots, questionable incentives, or a flair for the dramatic, all of which leaves me seriously questioning the veracity and objectivity of something I thought I knew.
Which leads me to the next, uncomfortable question: How many other things are there that I thought I knew but are full of uncertainty just like this?
All surety vanishes.
And that’s an uncomfortable place to be. Who wants to admit “I don’t know”?
It’s so easy to take what’s convenient over what corresponds to reality.
And that’s what scares me about AI.
After publishing, I was tipped off to the Gell-Mann amnesia effect which is right up the subject alley of this post.
2025-12-08 03:00:00
I complained about this on the socials, but I didn’t get it all out of my system. So now I write a blog post.
I’ve never liked the philosophy of “put an icon in every menu item by default”.
Google Sheets, for example, does this. Go to “File” or “Edit” or “View” and you’ll see a menu with a list of options, every single one having an icon (same thing with the right-click context menu).

It’s extra noise to me. It’s not that I think menu items should never have icons. I think they can be incredibly useful (more on that below). It’s more that I don’t like the idea of “give each menu item an icon” being the default approach.
This posture lends itself to a practice where designers have an attitude of “I need an icon to fill up this space” instead of an attitude of “Does the addition of an icon here, and the cognitive load of parsing and understanding it, help or hurt how someone would use this menu system?”
The former doesn’t require thinking. It’s just templating — they all have icons, so we need to put something there. The latter requires care and thoughtfulness for each use case and its context.
To defend my point, one of the examples I always pointed to was macOS. For the longest time, Apple’s OS-level menus seemed to avoid this default approach of sticking icons in every menu item.
That is, until macOS Tahoe shipped.
Tahoe now has icons in menus everywhere. For example, here’s the Apple menu:

Let’s look at others. As I’m writing this I have Safari open. Let’s look at the “Safari” menu:

Hmm. Interesting. Ok so we’ve got an icon for like half the menu items. I wonder why some get icons and others don’t?
For example, the “Settings” menu item (third from the top) has an icon. But the other item in its grouping, “Privacy Report”, does not. I wonder why? Especially when Safari has an icon for Privacy Report; if you go to customize the toolbar, you’ll see it:

Hmm. Who knows? Let’s keep going.
Let’s look at the "File" menu in Safari:

Some groupings have icons and get inset, while other groupings don’t have icons and don’t get inset. Interesting…again I wonder what the rationale is here? How do you choose? It’s not clear to me.
Let’s keep going. Let’s go to the "View" menu:

Oh boy, now we’re really in it. Some of these menu items have the notion of a toggle (indicated by the checkmark), so now you’ve got all kinds of alignment issues to deal with. The visual symbols double up when there’s both a toggle and an icon.
The “View” menu in Mail is a similar mix of toggles and icons:

You know what would be a fun game? Get a bunch of people in a room, show them menus where the textual labels are gone, and see who can get the most right.

But I digress.
In so many of these cases, I honestly can’t intuit why some menu items have icons and others do not. What are so many of these icons affording me at the cost of extra visual and cognitive parsing? I don’t know.
To be fair, there are some menus where these visual symbols are incredibly useful. Take this menu from Finder:

The visual depiction of how those are going to align is actually incredibly useful because it’s way easier for my brain to parse the symbol and understand where the window is going to go than it is to read the text and imagine in my head what “Top Left” or “Bottom & Top” or “Quarters” will mean. But a visual symbol? I instantly get it!
Those are good icons in menus. I like those.
What I find really interesting about this change on Apple’s part is how it seemingly goes against their own previous human interface guidelines (as pointed out to me by Peter Gassner).
They have an entire section in their 2005 guidelines (and 1992 and 2020) titled “Using Symbols in Menus”:

See what it says?
There are a few standard symbols you can use to indicate additional information in menus…Don’t use other, arbitrary symbols in menus, because they add visual clutter and may confuse people.
Confused people. That’s me.
They even have an example of what not to do and guess what it looks like? A menu in macOS Tahoe.

It’s pretty obvious how I feel. I’m tired of all this visual noise in my menus.
And now that Apple has seemingly thrown in with the “stick an icon in every menu by default” crowd, it’s harder than ever for me to convince people otherwise. To persuade, “Hey, unless you can articulate a really good reason to add this, maybe our default posture should be no icons in menus?”
So I guess this is the world I live in now. Icons in menus. Icons in menus everywhere.
Send help.