Blog of Jim Nielsen
Designer. Engineer. Writer. 20+ years at the intersection of design & code on the web.

Collective Speed Is Not the Summation of Individual Speed

2026-04-27 03:00:00

I’ve been thinking about speed which is why Chris Coyier caught my attention in his latest piece discussing how AI might be 10✕ing the speed with which we code, but it’s not making our software 10✕ better:

Faster individuals don’t make a fast company

My mind immediately went to the 4✕100 relay at the Olympics.

(Not sure which race that is? Watch the London 2012 one.)

Imagine you were put in charge of winning the 4✕100 relay.

All you gotta do is find the four fastest sprinters in your country — right?

I’m no track and field expert, but I doubt it’s that simple.

In a relay race, the baton is arguably the most critical element. Passing it cleanly is vital: fumble it and you can easily fall a few meters behind, or even be disqualified.

So, one could argue, a sprinter’s ability to pass and receive the baton is more important than speed because all the speed in the world won’t help you overcome a dropped baton.

(There are other considerations too, like which leg each runner takes, which sequence works best given individual pairings and rapport, and whether a slower veteran might perform better in the heat of the moment.)

Faster runners won’t guarantee a faster team.

And faster coders won’t guarantee a faster company.

Like a relay race, it might be worth giving some thought to the relationships and interfaces between people.


Reply via: Email · Mastodon · Bluesky

Hook It Up to the Machine

2026-04-20 03:00:00

In the early 2000’s, my parents took us on a road trip to Glacier National Park in Montana.

We made the journey in our new (used) family van: a green Dodge Caravan that would soon earn a reputation as “a lemon”.

I was a teenager and didn’t pay a lot of attention to the details of what was happening around me, but I do remember how the van kept overheating. It ran fine on the interstate, but anything under 40 MPH had the car’s temperature gauge rising into unsafe zones.

I remember stopping in some small town in Montana to get it checked out by a mechanic. He looked it over, took it for a test drive, and told my Dad the reason the car was overheating was that the idling fan wasn’t turning on. At higher speeds, like on the interstate, that was fine because there was enough airflow to keep the engine cool, but at lower speeds the car would overheat. The mechanic said he didn’t know why the fan wasn’t turning on. There was nothing wrong mechanically from what he could see, but he couldn’t fix it. He told my Dad that this was one of those increasingly common “computerized” cars that you have to hook up to another computer to diagnose, and he didn’t have one of those computers.

So we continued on our way. The rest of the trip required my Dad taking “the long way around”, like back roads where he could keep up his speed in order to avoid the car overheating. It was all very amusing to us as kids, almost thrilling because Dad had a legitimate excuse to drive fast (suffice it to say, Mom did not like this).

Once the trip was over and we returned home, my Dad was able to get the car in to a dealer where they hooked up the car’s computer to another computer to diagnose and fix the issue. I don’t really remember the specifics, but the issue was seemingly some failed digital sensor that prevented the idling fan from turning on. Once the sensor was replaced, things worked again.

Computers talking to computers.

Growing up in an era that shifted so many things from analog to digital, mechanical to electronic, I’ve thought about this trip a lot.

And I’m thinking about it again in this new era of building software with LLMs.

I think about that mechanic. This guy who grew up around mechanical cars that could be physically inspected, diagnosed, and repaired. So much of his experience and knowledge unusable in the face of a computerized car.

You can tell when a mechanical switch has failed with your eyes, but not a digital one. You need a computer to help you understand the computer.

Will this be my future?

If a codebase was made with the assistance of an LLM, will its complexity and bugs only be inspectable, understandable, diagnosable, and fixable with an LLM?

“Hey, can you help me, there’s a problem with my codebase?”

“Ok, I can confirm the issue, but I can’t fix it without hooking your codebase up to an LLM.”



Speed is Not Conducive to Wisdom

2026-04-16 03:00:00

Speed has become the primary virtue of the modern world. Everything is sacrificed to it.

Move fast (and break things, not as a goal but as a consequence).

Wisdom requires allowing yourself to be undone by experience:

  • An opinion dismantled by reality.
  • An artifact torn apart by the real world.
  • An idea destroyed by its own shortsightedness.

Experiencing these can be slow and uncomfortable, but if you keep up your speed you can outrun them — never reflecting on what happened in your wake.

Speed is how you avoid reckoning. It guarantees you miss things, and you can’t learn from what you don’t notice.

Wisdom’s feedback loop is slow.

Wise people I’ve met seem unhurried. I don’t think it’s because they’re slow thinkers or actors. I think it’s because they’ve learned that important things take the time they take; no amount of urgency changes that.

Wisdom is chasing all of us, but we’re going too fast to notice what it’s trying to teach us.




That’s a Skill Issue

2026-04-13 03:00:00

I quipped on Bluesky:

It’s interesting how AI proponents are often like "skill issue" when the LLM doesn't work like someone expects.

Whereas when human-centered UX people see someone using it wrong, they're like "skill issue on us, the people who made this"

This is top of mind because I’ve been working with Jan Miksovsky on his project Web Origami and he exemplified this to me recently.

I was working with some part of Origami and I was “holding it wrong”. I kept apologizing for my misunderstanding and misuse. And Jan — rather than being like “Yeah, that’s a skill issue on your part, but you’ll get there” — took the posture of an introspective tool-maker. He took the time to consider that perhaps the technology he was building was not properly aligning with my expectations as a user (or human-centered factors more generally). And he graciously explained that perspective to me, making me feel — well, not like an idiot.

My inability to find the results others claim with AI often has me saying either 1) “these claims are obviously BS”, or 2) “I guess it’s a skill issue on my part”.

And it kinda sucks to be saying (2) to yourself all the time, regardless of the technology.

A tech-centered approach treats the technology as a fixed point: if you don’t get what you want, you’re not using it right. The burden is entirely on you, the user, to learn the technology’s language.

Whereas a human-centered approach flips that: the technology exists to serve people as they actually are, not as we wish them to be. Confusion is allowed to be seen as a design failure, not a user failure.

What’s interesting is I think a lot of __insert technology here__ advocates would likely claim they’re “human-centered”. But when the response to failure is “learn the tech better”, it introduces a skill ceiling which naturally creates a priesthood of people who are “in-the-know” on how to make a technology work with the right incantation.

I’ve used AI as an example in this post, but it’s not really about AI specifically. This seems to be generally applicable, AI is just the current flavor.

I don’t have a big takeaway here. Just reflecting.

I love human-centered technology and technologists.



Fewer Computers, Fewer Problems: Going Local With Builds & Deployments

2026-04-10 03:00:00

Me, in 2025, on Mastodon:

I love tools like Netlify and deploying my small personal sites with git push

But i'm not gonna lie, 2025 might be the year I go back to just doing builds locally and pushing the deploys from my computer.

I'm sick of devops'ing stupid stuff because builds work on my machine and I have to spend that extra bit of time to ensure they also work on remote linux computers.

Not sure I need the infrastructure of giant teams working together for making a small personal website.

It’s 2026 now, but I finally took my first steps towards this.

The Why

One of the ideas I really love around the “local-first” movement is this notion that everything canonical is done locally, then remote “sync” is an enhancement.

For my personal website, I want builds and deployments to work that way.

All data, build tooling, deployment, etc., happens first and foremost on my machine.

From there, having another server somewhere else do it is purely a “progressive enhancement”. If it were to fail, fine. I can fall back to doing it locally very easily because all the tooling is optimized for local build and deployment first (rather than being dependent on fixing some remote server to get builds and deployments working).

It’s amazing how many of my problems come from the struggle to get one thing to work identically across multiple computers.

I want to explore a solution that removes the cause of my problem, rather than trying to stabilize it with more time and code.

“The first rule of distributed computing is don’t distribute your computing unless you absolutely have to” — especially if you’re just building personal websites.

The What

So I un-did stuff I previously did (that’s right, my current predicament is self-inflicted — imagine that).

My notes site used to work like this:

  • Content lives in Dropbox
  • Code is on GitHub
  • Netlify’s servers pull both, then run a build and deploy the site

It worked, but sporadically. Sometimes it would fail, then start working again, all without me changing anything. And when it did work, it would often take a long time — like five or six minutes to run a build/deployment.

I never could figure out the issue. Some combination of Netlify’s servers (which I don’t control and don’t have full visibility into) talking to Dropbox’s servers (which I also don’t control and don’t have full visibility into).

I got sick of trying to make a simple (but distributed) build process work across multiple computers when 99% of the time, I really only need it to work on one computer.

So I turned off builds in Netlify, and made it so my primary, local computer does all the work. Here are the trade-offs:

  • What I lose: I can no longer make edits to notes, then build/deploy the site from my phone or tablet.
  • What I gain: I don’t have to troubleshoot build issues on machines I don’t own or control. Now, if it “works on my machine”, it works period.

The How

The change was pretty simple.

First, I turned off builds in Netlify. Now when I git push, Netlify does nothing.

Next, I changed my build process to stop pulling markdown notes from the Dropbox API and instead pull them from a local folder on my computer. Simple, fast.

And lastly, as a measure to protect myself from myself, I cloned the codebase for my notes to a second location on my computer. This way I have a “working copy” version of my site where I do local development, and I have a clean “production copy” of my site which is where I build/deploy from. This helps ensure I don’t accidentally build and deploy my “working copy” which I often leave in a weird, half-finished state.

In my package.json I have a deploy command that looks like this:

git pull && npm ci && netlify deploy --build --prod

That’s what I run from my “clean” copy. It pulls down any new changes, makes sure I have the latest deps, builds the site, then lets Netlify’s CLI deploy it.
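For context, the relevant slice of my package.json looks roughly like this (the build script name here is a placeholder, not my actual setup):

```json
{
  "scripts": {
    "build": "node build.js",
    "deploy": "git pull && npm ci && netlify deploy --build --prod"
  }
}
```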

As extra credit, I created a macOS shortcut that runs a small shell script:

# So it knows where to get the right $PATH to node
source ~/.zshrc

# Then switch to my dir and run the command
cd ~/Sites/com.jim-nielsen.notes/
npm run deploy

So I can do CMD + Space, type “Deploy notes.jim-nielsen.com” to trigger a build, then watch the little shortcut run to completion in my Mac’s menubar.

I’ve been living with this setup for a few weeks now and it has worked beautifully. Best part is: I’ve never had to open up Netlify’s website to check the status of a build or troubleshoot a deployment.

That’s an enhancement I can have later — if I want to.



Prototyping with LLMs

2026-04-07 03:00:00

Did you know that Jesus gave advice about prototyping with an LLM? Here’s Luke 14:28-30:

Suppose one of you wants to build a tower. Won’t you first sit down and estimate the cost to see if you have enough money to complete it? For if you lay the foundation and are not able to finish it, everyone who sees it will ridicule you, saying, ‘This person began to build and wasn’t able to finish.’

That pretty much sums me up when I try to vibe a prototype.

Don’t get me wrong, I’m a big advocate of prototyping.

And LLMs make prototyping really easy and interesting.

And because it’s so easy, there’s a huge temptation to jump straight to prototyping.

But what I’ve been finding in my own behavior is that I’ll be mid-prototyping with the LLM and asking myself, “What am I even trying to do here?”

And the thought I have is: “I’d be in a much more productive place right now if I’d put a tiny bit more thought upfront into what I am actually trying to build.” Instead, I just jumped right in, chasing a fuzzy feeling or idea only to end up in a place where I’m more confused about what I set out to do than when I started.

Don’t get me wrong, that’s fine. That’s part of prototyping. It’s inherent to the design process to get more confused before you find clarity.

But there’s an alternative to LLM prototyping that’s often faster and cheaper: sketching.

I’ve found many times that if I start an idea by sketching it out, do you know where I end up? At a place where I say, “Actually, I don’t want to build this.” And in that case, all I have to do is take my sketch and throw it away. It didn’t cost me any tokens or compute to figure that out. Talk about efficiency!

I suppose what I’m saying here is: it’s good to think further ahead than the tracks you’re laying out immediately in front of you. Sketching is a great way to do that.

(Thanks to Facundo for prompting these thoughts out of me.)

