2025-10-08 03:00:00
I’ve done something few on the internet do. I’ve changed my mind.
A few posts on my blog have started to unfurl social share imagery.
You might be wondering, “Wait Jim, I thought you hated those things?”
It’s not that I hate social share imagery. I just think…well, I’ve shared my thoughts before (even made a game) so I won’t get on my soapbox.
But I think these “previews” have their place and, when used as a preview — i.e. an opportunity to graphically depict a brief portion of the actual, underlying content — these function well in service of readers.
For example, I often write posts that have zero images in them. They’re pure text. I don’t burden myself with the obligation to generate a graphical preview of the ideas contained in those posts.
But, sometimes, I create posts that have lots of imagery in them, or even just a good meme-like photo and it feels like a shame to not surface that imagery in some way.
So, in service of that pursuit, I set out to resolve how I could do og:images in my posts.
It’s not as easy as “just stick it in your front-matter” because my markdown files don’t use front-matter. And I didn’t want to “just add front-matter”. I have my own idiosyncratic way of writing markdown for my blog, which means I need my own idiosyncratic way of denoting “this post has an og:image and here’s the URL”.
After giving it some thought, I realized that all my images are expressed in markdown as HTML (this lets me easily add attributes like alt, width, and height), so if I wanted to mark one of my images as the “preview” image for a post, I could just add a special data attribute like so:
You guys, I made the funniest image to depict this:
<img data-og-image src="" width="" height="" alt="">
Isn’t that hilarious?
Then my markdown processor can extract that piece of meta information and surface it to each post template, essentially like this:
<html>
  <head>
    <title>{post.title}</title>
    {post.ogimage && (
      <meta property="og:image" content={post.ogimage} />
    )}
  </head>
  <body>
    <h1>{post.title}</h1>
    {post.content}
  </body>
</html>
I love this because it allows me to leverage existing mechanisms in both the authoring and development processes (data attributes in HTML that become metadata on the post object), without needing to introduce an entirely new method of expression (e.g. front-matter).
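If you’re curious what that “extract” step might look like, here’s a simplified sketch (not my exact build code): after rendering a post’s markdown to HTML, find the first <img> carrying data-og-image and pull out its src.

// Simplified sketch: find the first <img ... data-og-image ...> in the
// rendered HTML and return its src (or null if the post has none).
function extractOgImage(html) {
  const img = html.match(/<img\b[^>]*\bdata-og-image\b[^>]*>/i);
  if (!img) return null;
  const src = img[0].match(/\bsrc=["']([^"']*)["']/i);
  return src ? src[1] : null;
}

// e.g. post.ogimage = extractOgImage(renderedHtml);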
It also feels good because:
It’s technology in service of content, rather than content in service of technology.
Or at least that’s what I like to tell myself :)
2025-10-03 03:00:00
I have a standing desk that goes up and down via a manual crank.
I’ve had it for probably ten years.
Every time I raise or lower that thing, it gets my blood pumping.
I often think: “I should upgrade to one of those standing desks that goes up and down with the push of a button.”
Then there’s the other voice in my head: “Really? Are you so lazy you can’t put your snacks down, get out of your comfy chair, in your air conditioned room, and raise or lower your desk using a little elbow grease? That desk is just fine.”
While writing this, I get out of my chair, start the timer, and raise my desk to standing position. 35 seconds.
That’s the cost: 35 seconds, and an elevated heart rate.
As I have many times over the last ten years, I recommit to keeping it — mostly as a reminder that it’s ok to do some things manually. Not everything in my life needs to be available to me at the push of a button.
2025-09-29 03:00:00
I love a good look at modern practices around semantic versioning and dependency management (Rich Hickey’s talk “Spec-ulation” is the canonical one I think of).
Niki recently wrote a good ‘un at tonsky.me called “We shouldn’t have needed lockfiles”.
What struck me was this point about how package manifests allow version ranges like ^1.2.3, which essentially declare support for future versions of software that haven’t yet been written:
Instead of saying “libpupa 1.2.3 depends on liblupa 0.7.8”, [version ranges] are saying “libpupa 1.2.3 depends on whatever the latest liblupa version is at the time of the build.”

Notice that this is determined not at the time of publishing, but at the time of the build! If the author of libpupa has published 1.2.3 a year ago and I’m pulling it now, I might be using a liblupa version that didn’t even exist at the time of publishing!
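To make that concrete, here’s a tiny illustration using the semver package (the library npm itself uses to evaluate ranges); the versions here are made up:

const semver = require("semver");

// "^1.2.3" means: anything >= 1.2.3 and < 2.0.0, including versions
// that don't exist yet when this range is written down.
const range = "^1.2.3";

console.log(semver.satisfies("1.2.3", range));  // true: the version I actually tested against
console.log(semver.satisfies("1.4.7", range));  // true: a minor release published later
console.log(semver.satisfies("1.99.0", range)); // true: could be written years after my build
console.log(semver.satisfies("2.0.0", range));  // false: only a major bump opts me out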
The funny thing is, we use version ranges only to go freeze them with lock files:
version ranges end up not being used anyway. You lock your dependencies once in a lockfile and they stay there, unchanged
In other words: we avoid locking ourselves to specific versions in package.json by using version ranges, only to then go lock ourselves to specific versions in package-lock.json — lol!
I mean, that’s funny when you think about it.
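Here’s the shape of that irony in the files themselves (hypothetical package, abridged). In package.json, the range:

"dependencies": {
  "liblupa": "^1.2.3"
}

And in package-lock.json, whatever that range happened to resolve to on the day you ran npm install, pinned from then on:

"packages": {
  "node_modules/liblupa": {
    "version": "1.4.7"
  }
}

Until someone regenerates the lockfile, the range is just decoration.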
But to go back to Niki’s earlier point: version ranges let us declare to ourselves that some code that exists today is compatible with some other future code that has yet to be written.
This idea allows us to create automated build systems that resolve to an artifact whose dependencies have never before existed together in that particular combination, let alone been tested and executed together.
Now I get it: semantic versioning is an idea, not a guarantee. But it’s also pretty wild when you think about it — when you encounter the reality of how semantic versioning plays out in the day-to-day world of building software.
I guess that’s a way of acknowledging out loud that we have normalized shipping production systems on top of the assumption that untested, unwritten combinations of software will behave well together — if not better, since patch updates fix bugs, right?
And that’s not even getting into the security side of the equation. Future versions of packages have no guarantee to be as safe as previous ones, as we’ve seen with some of the npm supply chain attacks which rely on version ranges for their exploits. (Funny, isn’t it? Upgrading to the latest version of a package can get you into trouble. The solution? Upgrading to the latest version of a package.)
Anyhow, this all gets me thinking that version ranges and dependency management were the gateway drug to the non-determinism of LLMs.
2025-09-24 03:00:00
There was a time when I could ask, “Did you see the latest NPM attack?” And your answer would be either “Yes” or “No”.
But now if I ask, “Did you see the latest NPM attack?” You’ll probably answer with a question of your own: “Which one?”
In this post, I’m talking about the Qix incident.
When I first heard about it, I thought “Oh boy, better not npm i on the old personal machine for a little while.”
But as details began to emerge, I realized the exploit wasn’t targeting my computer. It was targeting the computers of people downstream from me: end users.
The malicious code didn’t do anything when running npm install. Instead, it lay there dormant, waiting to be bundled up alongside a website’s otherwise normal code and served to unsuspecting end users.
Maybe we should rename “bundlers” to “trojan horses”, lol.
That’s all to say: you didn’t have to run npm install to be affected by this attack. You just had to visit a website whose code was sourced via npm install. (You needed a bitcoin wallet too, as that was the target of the exploit.)
It’s wild because browsers work really hard to make it safe to visit any webpage in the world — to do a GET to any URL. But attacks like this chip away at those efforts.
So while it’s easy to think NPM can be unsafe for your computer because running npm install allows running arbitrary code, that’s not the whole story. npm install can be unsafe for:

You: lifecycle scripts (preinstall, install, postinstall) allow running arbitrary code which can read/write files locally, steal keys and tokens, install malware, and otherwise exfiltrate data.

Your users: malicious code can sit dormant through the install, get bundled alongside your otherwise normal code, and run in their browsers.
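To give a feel for how little it takes, here’s a toy example (made-up package and script names) of where that arbitrary code hides; npm runs lifecycle scripts like this automatically during install, before you’ve imported a single line of the package:

{
  "name": "innocent-looking-utility",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node collect.js"
  }
}

That collect.js runs with your user’s permissions, your environment variables, and your network access. Running npm install --ignore-scripts is one way to opt out, though plenty of legitimate packages rely on these hooks too.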
2025-09-22 03:00:00
I was reading Chase McCoy’s article “Antibuildings”, where he cites Wikipedia’s entry on the term “Antilibrary”, which points to another entry about the Japanese concept of Tsundoku. All of these deal with the idea of things we start with intention but never bring to fruition.
Antilibraries are the books we buy but never read.
Antibuildings are the architect’s version: sketches and plans drafted, but buildings never made.
It got me thinking about the stuff I’ve started with intention but never brought to fruition — my own anti-*’s.
To name a few:
And last, but certainly not least — in fact, probably grandest of them all:
2025-09-19 03:00:00
Richard MacManus just posted “Chrome Switches on AI: The Future of Browsing Begins Now”, where he points out that what we think of today as a “browser” is undergoing a radical change. Here’s the lay of the land:
Safari is notably absent from that list.
This all leads Richard to ask:
One has to wonder if “browser” is even the right word for what products like Chrome and Edge are evolving into. We are moving further away from curiosity-driven exploration of the web — the modern browser is becoming an automaton, narrowing what we can discover and reducing the serendipity.
The Chrome folks don’t appear to be shying away from the fact that they’re keen on killing, er, evolving this long-held definition of what a “browser” is. From their announcement:
This isn’t just about adding new features; it’s about fundamentally changing the nature of browsing
One of the examples they give is that of a student researching a topic for a paper with dozens of tabs:
Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you.
Wait what? Jumping between sources and trying to connect dots is literally the work of research and learning. The paper is merely a proxy to evaluate the success of the work. Can you automate learning?
But I digress.
Look, I like browsers. No, I LOVE browsers.
But it does kinda feel like “browser” is undergoing a similar redefinition as “phone”.
“Phones” used to be these devices that allowed you to speak with other people who weren’t within earshot of you.
Now they do that and a million other things, like let you listen to music, watch videos, order pizza, transfer money, find love, rent a vacation home, be radicalized, purchase underwear, take photos, etc.
However, despite all those added features, we still call them “phones”.
Perhaps I need to start redefining what I mean when I say “browser”.