Blog of Jim Nielsen

Jim Nielsen: Designer. Engineer. Writer. 20+ years at the intersection of design & code on the web.

Social Share Imagery via a Data Attribute

2025-10-08 03:00:00

I’ve done something few on the internet do. I’ve changed my mind.

Double take meme of girl with a disgusted face on one side then the same girl with a face of changed opinion. Both faces have the text “og:image” superimposed.

A few posts on my blog have started to unfurl social share imagery.

Screenshot of a post from @jimniels@mastodon.social showing a link to blog.jim-nielsen.com and an accompanying og:image preview.

You might be wondering, “Wait, Jim, I thought you hated those things?”

It’s not that I hate social share imagery. I just think…well, I’ve shared my thoughts before (even made a game) so I won’t get on my soapbox.

But I think these “previews” have their place and, when used as a preview — i.e. an opportunity to graphically depict a brief portion of the actual, underlying content — these function well in service of readers.

For example, I often write posts that have zero images in them. They’re pure text. I don’t burden myself with the obligation to generate a graphical preview of the ideas contained in those posts.

But, sometimes, I create posts that have lots of imagery in them, or even just a good meme-like photo, and it feels like a shame to not surface that imagery in some way.

So, in service of that pursuit, I set out to resolve how I could do og:images in my posts.

It’s not as easy as “just stick it in your front-matter” because my markdown files don’t use front-matter. And I didn’t want to “just add front-matter”. I have my own idiosyncratic way of writing markdown for my blog, which means I need my own idiosyncratic way of denoting “this post has an og:image and here’s the URL”.

After giving it some thought, I realized that all my images are expressed in markdown as HTML (this lets me easily add attributes like alt, width, and height) so if I wanted to mark one of my images as the “preview” image for a post, I could just add a special data attribute like so:

You guys, I made the funniest image to depict this:

<img data-og-image src="" width="" height="" alt="">

Isn’t that hilarious?

Then my markdown processor can extract that piece of meta information and surface it to each post template, essentially like this:

<html>
  <head>
    <title>{post.title}</title>
    {post.ogimage &&
      <meta property="og:image" content={post.ogimage}>}
  </head>
  <body>
    <h1>{post.title}</h1>
    {post.content}
  </body>
</html>

I love this because it allows me to leverage existing mechanisms in both the authoring and development processes (data attributes in HTML that become metadata on the post object), without needing to introduce an entirely new method of expression (e.g. front-matter).
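As a sketch of that extraction step (the function name and regexes here are illustrative, not the blog’s actual processor code), assuming images are authored as inline HTML as described above:

```javascript
// Sketch: pull an og:image URL out of a post's raw markdown,
// where images are written as inline HTML tags.
// Hypothetical helper — the post doesn't show the real implementation.
function extractOgImage(markdown) {
  // Find the first <img> tag carrying the data-og-image attribute
  const img = markdown.match(/<img\s[^>]*data-og-image[^>]*>/);
  if (!img) return null;
  // Then pull its src attribute out
  const src = img[0].match(/src="([^"]*)"/);
  return src ? src[1] : null;
}
```

The returned URL would then become something like `post.ogimage` on the post object handed to the template.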

It also feels good because:

  1. It’s good for me. It doesn’t require any additional work on my part. I don’t have to create additional images for my posts. I’m merely marking images I’ve already created — which were done in service of a post’s content — as “previews” for the post.
  2. It’s good for users. Readers of my site get image previews that are actually, well, previews — e.g. a graphical representation that will contextually reappear in the post, (as opposed to an image template whose contents do nothing to provide an advanced graphical preview of what’s to follow in the post itself).

It’s technology in service of content, rather than content in service of technology.

Or at least that’s what I like to tell myself :)


Reply via: Email · Mastodon · Bluesky

Doing It Manually

2025-10-03 03:00:00

I have a standing desk that goes up and down via a manual crank.

I’ve had it for probably ten years.

Every time I raise or lower that thing, it gets my blood pumping.

I often think: “I should upgrade to one of those standing desks that goes up and down with the push of a button.”

Then there’s the other voice in my head: “Really? Are you so lazy you can’t put your snacks down, get out of your comfy chair, in your air conditioned room, and raise or lower your desk using a little elbow grease? That desk is just fine.”

While writing this, I get out of my chair, start the timer, and raise my desk to standing position. 35 seconds.

That’s the cost: 35 seconds, and an elevated heart rate.

As I have many times over the last ten years, I recommit to keeping it — mostly as a reminder that it’s ok to do some things manually. Not everything in my life needs to be available to me at the push of a button.



Running Software on Software You’ve Never Run

2025-09-29 03:00:00

I love a good look at modern practices around semantic versioning and dependency management (Rich Hickey’s talk “Spec-ulation” is the canonical one I think of).

Niki recently wrote a good ‘un at tonsky.me called “We shouldn’t have needed lockfiles”.

What struck me was this point about how package manifests allow version ranges like ^1.2.3 which essentially declare support for future versions of software that haven’t yet been written:

Instead of saying “libpupa 1.2.3 depends on liblupa 0.7.8”, [version ranges] are saying “libpupa 1.2.3 depends on whatever the latest liblupa version is at the time of the build.”

Notice that this is determined not at the time of publishing, but at the time of the build! If the author of libpupa has published 1.2.3 a year ago and I’m pulling it now, I might be using a liblupa version that didn’t even exist at the time of publishing!
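To make the caret-range mechanics concrete, here’s a hand-rolled sketch of what a range like `^0.7.8` accepts. This is illustrative only — real tools use the `semver` package — and simplified (it ignores the 0.0.x special case and prerelease tags):

```javascript
// Sketch of semver caret-range semantics: "^a.b.c" accepts any
// version with the same major that is >= a.b.c — i.e. versions
// that may not have existed when the range was published.
function satisfiesCaret(range, version) {
  const [a, b, c] = range.slice(1).split(".").map(Number);
  const [x, y, z] = version.split(".").map(Number);
  if (x !== a) return false;             // major must match
  if (x === 0) return y === b && z >= c; // 0.x: minor bumps are breaking
  return y > b || (y === b && z >= c);   // newer minor/patch: "compatible"
}
```

So `^0.7.8` admits a hypothetical liblupa 0.7.99 released a year after libpupa 1.2.3 was published, which is exactly the situation Niki describes.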

The funny thing is, we use version ranges only to then freeze them with lockfiles:

version ranges end up not being used anyway. You lock your dependencies once in a lockfile and they stay there, unchanged

In other words: we avoid locking ourselves to specific versions in package.json by using version ranges, only to then go lock ourselves to specific versions in package-lock.json — lol!

I mean, that’s funny when you think about it.

But to go back to Niki’s earlier point: version ranges let us declare to ourselves that some code that exists today is compatible with some other future code that has yet to be written.

This idea allows us to create automated build systems that resolve to an artifact whose dependencies have never existed before in that given combination — let alone tested and executed together in that combination.

Now I get it, semantic versioning is an idea, not a guarantee. But it’s also pretty wild when you think about it — when you encounter the reality of how semantic versioning plays out in the day-to-day world of building software.

I guess that’s a way of acknowledging out loud that we have normalized shipping production systems on top of the assumption that untested, unwritten combinations of software will behave well together — if not better, since patch updates fix bugs, right?

And that’s not even getting into the security side of the equation. Future versions of packages have no guarantee to be as safe as previous ones, as we’ve seen with some of the npm supply chain attacks which rely on version ranges for their exploits. (Funny, isn’t it? Upgrading to the latest version of a package can get you into trouble. The solution? Upgrading to the latest version of a package.)

Anyhow, this all gets me thinking that version ranges and dependency management were the gateway drug to the non-determinism of LLMs.



The Risks of NPM

2025-09-24 03:00:00

There was a time when I could ask, “Did you see the latest NPM attack?” And your answer would be either “Yes” or “No”.

But now if I ask, “Did you see the latest NPM attack?” You’ll probably answer with a question of your own: “Which one?”

In this post, I’m talking about the Qix incident:

  • Prolific maintainer Qix was phished.
  • Qix is a co-maintainer on many packages with Sindre Sorhus, the most popular maintainer on NPM (by download count).
  • Attackers pushed malicious code to packages that are indirectly depended on by a huge portion of the ecosystem (hundreds of millions of downloads a week).

When I first heard about it, I thought “Oh boy, better not npm i on the old personal machine for a little while.”

But as details began to emerge, I realized the exploit wasn’t targeting my computer. It was targeting the computers of people downstream from me: end users.

The malicious code didn’t do anything when running npm install. Instead, it lay there dormant, waiting to be bundled up alongside a website’s otherwise normal code and served to unsuspecting end users.

Maybe we should rename “bundlers” to “trojan horses”, lol.

Graphic depicting many source assets on the left, like .js files, passing through a trojan horse in the middle and coming out as singular files on the right.

That’s all to say: you didn’t have to run npm install to be affected by this attack. You just had to visit a website whose code was sourced via npm install. (You needed a bitcoin wallet too, as that was the target of the exploit.)

It’s wild because browsers work really hard to make it safe to visit any webpage in the world — to do a GET to any URL. But attacks like this chip away at those efforts.

So while it’s easy to think NPM can be unsafe for your computer because running npm install allows running arbitrary code, that’s not the whole story. npm install can be unsafe for:

  • Your computer (install time execution)
    • Lifecycle scripts (preinstall, install, postinstall) allow running arbitrary code which can read/write files locally, steal keys and tokens, install malware, and otherwise exfiltrate data.
  • Your dev/CI computer(s) (build time execution)
    • Compilers, bundlers, transpilers, plugins, etc., can all execute arbitrary code and leak secrets, corrupt build artifacts, add hidden payloads, etc.
  • Your application server (server runtime execution)
    • Any dependency runs top-level in production and exposes risk to data exfiltration, unsafe privilege escalation, remote command execution, etc.
  • Your users’ computers (client runtime execution)
    • Bundled dependencies ship with your website, exposing your users to malicious code that runs in their browser and can exfiltrate data, insert hidden trackers/miners, etc.
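One well-known mitigation for the install-time vector (my addition, not from the post) is telling npm to skip lifecycle scripts entirely:

```shell
# Skip preinstall/install/postinstall scripts for a single install:
npm install --ignore-scripts

# Or make that the default for this machine:
npm config set ignore-scripts true
```

This only addresses the first bullet above; build-time, server-runtime, and client-runtime execution of malicious package code remain possible even with scripts disabled.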


Related posts linking here: (2025) Social Share Imagery via a Data Attribute

Anti-*: The Things We Do But Not All The Way

2025-09-22 03:00:00

I was reading Chase McCoy’s article “Antibuildings” where he cites Wikipedia’s entry on the term “Antilibrary” which points to another entry about the Japanese concept of Tsundoku, all of which deal with this idea of things we do with intention but that never make it to fruition.

Antilibraries are the books we buy but never read.

Antibuildings are the architect’s version: sketches and plans drafted for buildings never made.

It got me thinking about the stuff I’ve started with intention but never brought to fruition — my own anti-*’s.

To name a few:

  • Antidomains: the domains I bought and had big plans for, but they never progressed beyond being parked at my registrar. (Zach Leatherman recently made a list kinda like this, if you haven’t seen it.)
  • Antiwebsites: the sites I was gonna make, but never shipped.
  • Antilayers: the Photoshop, Sketch, or Figma designs I painstakingly crafted to the level of “completeness”, but then never began building with code.
  • Anticode: the changes I made that functioned to the level of being usable and shippable, but then I never could pull the trigger on ‘em.
  • Antiposts: (also known as “drafts”, lol) all those blog posts I poured time and energy into researching, writing, and editing, but never could take all the way to “published”.
  • Antitweets: all the Tweets/Toots/Skeets I meticulously crafted as witty comebacks or sarcastic quips, but then never posted (honestly, probably for the better).

And last, but certainly not least — in fact, probably grandest of them all:

  • Antitabs: all the browser tabs of articles, videos, recipes, and other good things I collected and was going to read, watch, bake, etc. but never did.

Photo of a bookshelf on top with lots of books, below that a screenshot of a bunch of tabs where all you can see is favicons




RIP “Browsers”

2025-09-19 03:00:00

Richard MacManus just posted “Chrome Switches on AI: The Future of Browsing Begins Now” where he points out that what we think of today as “browsers” is undergoing a radical change. Here’s the lay of the land:

  • Microsoft launched “Copilot Mode” on Edge and promotes it as an “AI-powered browser.”
  • Mozilla is baking AI into Firefox.
  • Atlassian is into browsers now with their acquisition of The Browser Company and its AI browser Dia (my computer autocorrected that to “Die” and I reluctantly changed it back).
  • AI-first companies like Perplexity are releasing their own AI browsers.
  • OpenAI hired ex-Chrome engineers and the rumor is they are building a browser.

Safari is notably absent from that list.

Browser logos for Chrome, Firefox, and Edge on a tombstone that says “RIP” (Safari is on the outside of the tombstone). To the right of that is a photo of a bunch of newborn babies with their faces covered by a bunch of AI browser logos like Chrome, Edge, Firefox, Comet, and Dia.

This all leads Richard to ask:

One has to wonder if “browser” is even the right word for what products like Chrome and Edge are evolving into. We are moving further away from curiosity-driven exploration of the web — the modern browser is becoming an automaton, narrowing what we can discover and reducing the serendipity.

The Chrome folks don’t appear to be shying away from the fact that they’re keen on killing (sorry, evolving) this long-held definition of what a “browser” is. From their announcement:

This isn’t just about adding new features; it’s about fundamentally changing the nature of browsing

One of the examples they give is that of a student researching a topic for a paper with dozens of tabs:

Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you.

Wait what? Jumping between sources and trying to connect dots is literally the work of research and learning. The paper is merely a proxy to evaluate the success of the work. Can you automate learning?

But I digress.

Look, I like browsers. No, I LOVE browsers.

But it does kinda feel like “browser” is undergoing a similar redefinition as “phone”.

“Phones” used to be these devices that allowed you to speak with other people who weren’t within earshot of you.

Now they do that and a million other things, like let you listen to music, watch videos, order pizza, transfer money, find love, rent a vacation home, be radicalized, purchase underwear, take photos, etc.

However, despite all those added features, we still call them “phones”.

Perhaps I need to start redefining what I mean when I say “browser”.


