Blog of Alex Wlchan

Beyond `None`: actionable error messages for `keyring.get_password()`

2025-04-17 14:51:06

I’m a big fan of keyring, a Python module made by Jason R. Coombs for storing secrets in the system keyring. It works on multiple operating systems, and it knows what password store to use for each of them. For example, if you’re using macOS it puts secrets in the Keychain, but if you’re on Windows it uses Credential Locker.

The keyring module is a safe and portable way to store passwords, more secure than using a plaintext config file or an environment variable. The same code will work on different platforms, because keyring handles the hard work of choosing which password store to use.

It has a straightforward API: the keyring.set_password and keyring.get_password functions will handle a lot of use cases.

>>> import keyring
>>> keyring.set_password("xkcd", "alexwlchan", "correct-horse-battery-staple")
>>> keyring.get_password("xkcd", "alexwlchan")
"correct-horse-battery-staple"

Although this API is simple, it’s not perfect – I have some frustrations with the get_password function. In a lot of my projects, I’m now using a small function that wraps get_password.

What do I find frustrating about keyring.get_password?

If you look up a password that isn’t in the system keyring, get_password returns None rather than throwing an exception:

>>> print(keyring.get_password("xkcd", "the_invisible_man"))
None

I can see why this makes sense for the library overall – a non-existent password is very normal, and not exceptional behaviour – but in my projects, None is rarely a usable value.

I normally use keyring to retrieve secrets that I need to access protected resources – for example, an API key to call an API that requires authentication. If I can’t get the right secrets, I know I can’t continue. Indeed, continuing often leads to more confusing errors when some other function unexpectedly gets None, rather than a string.
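Here's a sketch of how that confusion plays out (the function name is made up for illustration): the error appears deep in the call stack, far from the keyring lookup that actually failed.

```python
# Hypothetical example: a None password surfaces as a confusing error
# far away from the keyring lookup where the problem really started.
def build_auth_header(api_key):
    # If api_key is None, string concatenation fails *here*, and the
    # traceback says nothing about the missing keyring entry.
    return {"Authorization": "Bearer " + api_key}

api_key = None  # what keyring.get_password returns for a missing entry

try:
    build_auth_header(api_key)
except TypeError as err:
    print(err)  # e.g. can only concatenate str (not "NoneType") to str
```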

For a while, I wrapped get_password in a function that would throw an exception if it couldn’t find the password:

def get_required_password(service_name: str, username: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(f"Could not retrieve password {(service_name, username)}")

    return password

When I use this function, my code will fail as soon as it fails to retrieve a password, rather than when it tries to use None as the password.
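Here's a self-contained demo of that fail-fast behaviour, with the keyring lookup stubbed out to simulate a missing password, so the snippet runs without touching a real keyring:

```python
def get_password(service_name, username):
    """Stand-in for keyring.get_password: simulates a missing entry."""
    return None

def get_required_password(service_name: str, username: str) -> str:
    password = get_password(service_name, username)
    if password is None:
        raise RuntimeError(f"Could not retrieve password {(service_name, username)}")
    return password

try:
    get_required_password("xkcd", "the_invisible_man")
except RuntimeError as err:
    print(err)  # Could not retrieve password ('xkcd', 'the_invisible_man')
```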

This worked well enough for my personal projects, but it wasn’t a great fit for shared projects. I could make sense of the error, but not everyone could do the same.

What’s that password meant to be?

A good error message explains what’s gone wrong, and gives the reader clear steps for fixing the issue. The error message above is only doing half the job. It tells you what’s gone wrong (it couldn’t get the password) but it doesn’t tell you how to fix it.

As I started using this snippet in codebases that I work on with other developers, I got questions when other people hit this error. They could guess that they needed to set a password, but the error message doesn’t explain how, or what password they should be setting.

For example, is this a secret they should pick themselves? Is it a password in our shared password vault? Or do they need an API key for a third-party service? If so, where do they find it?

I still think my initial error was an improvement over letting None be used in the rest of the codebase, but I realised I could go further.

This is my extended wrapper:

def get_required_password(service_name: str, username: str, explanation: str) -> str:
    """
    Get password from the specified service.

    If a matching password is not found in the system keyring,
    this function will throw an exception and explain to the user
    how to set the required password.
    """
    password = keyring.get_password(service_name, username)

    if password is None:
        raise RuntimeError(
            "Unable to retrieve required password from the system keyring!\n"
            "\n"
            "You need to:\n"
            "\n"
            f"1/ Get the password. Here's how: {explanation}\n"
            "\n"
            "2/ Save the new password in the system keyring:\n"
            "\n"
            f"       keyring set {service_name} {username}\n"
        )

    return password

The explanation argument allows me to explain what the password is for to a future reader, and what value it should have. That information can often be found in a code comment or in documentation, but putting it in an error message makes it more visible.

Here’s one example:

get_required_password(
    "flask_app",
    "secret_key",
    explanation=(
        "Pick a random value, e.g. with\n"
        "\n"
        "       python3 -c 'import secrets; print(secrets.token_hex())'\n"
        "\n"
        "This password is used to securely sign the Flask session cookie. "
        "See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY"
    ),
)

If you call this function and there’s no keyring entry for flask_app/secret_key, you get the following error:

Unable to retrieve required password from the system keyring!

You need to:

1/ Get the password. Here's how: Pick a random value, e.g. with

       python3 -c 'import secrets; print(secrets.token_hex())'

This password is used to securely sign the Flask session cookie. See https://flask.palletsprojects.com/en/stable/config/#SECRET_KEY

2/ Save the new password in the system keyring:

       keyring set flask_app secret_key

It’s longer, but this error message is far more informative. It tells you what’s wrong, how to save a password, and what the password should be.

This is based on a real example where the previous error message led to a misunderstanding. A co-worker saw a missing password called “secret key” and thought it referred to a secret key for calling an API, and didn’t realise it was actually for signing Flask session cookies. Now I can write a more informative error message, I can prevent that misunderstanding happening again. (We also renamed the secret, for additional clarity.)

It takes time to write this explanation, which will only ever be seen by a handful of people, but I think it’s important. If somebody sees it at all, it’ll be when they’re setting up the project for the first time. I want that setup process to be smooth and straightforward.

I don’t use this wrapper in all my code, particularly small or throwaway toys that won’t last long enough for this to be an issue. But in larger codebases that will be used by other developers, and which I expect to last a long time, I use it extensively. Writing a good explanation now can avoid frustration later.

[If the formatting of this post looks odd in your feed reader, visit the original article]

Localising the `<time>` element with JavaScript

2025-04-16 05:23:24

I’ve been writing some internal dashboards recently, and one hard part is displaying timestamps. Our server does everything in UTC, but the team is split across four different timezones, so the server timestamps aren’t always easy to read.

For most people, it’s harder to understand a UTC timestamp than a timestamp in your local timezone. Did that event happen just now, an hour ago, or much further back? Was that at the beginning of your working day? Or at the end?

Then I remembered that I tried to solve this five years ago at a previous job. I wrote a JavaScript snippet that converts UTC timestamps into human-friendly text. It displays times in your local time zone, and adds a short suffix if the time happened recently. For example:

today @ 12:00 BST (1 hour ago)
yesterday @ 11:00 CST
Fri, 22 May 2020 @ 10:00 PST

In my old project, I was writing timestamps in a <div>, and I had to opt into the human-readable text for every date on the page. It worked, but it was a bit fiddly.

Doing it again, I thought of a more elegant solution.

HTML has a <time> element for expressing datetimes, which is a more meaningful wrapper than a <div>. When I render the dashboard on the server, I don’t know the user’s timezone, so I include the UTC timestamp in the page like so:

<time datetime="2025-04-15 19:45:00Z">
  Tue, 15 Apr 2025 at 19:45 UTC
</time>

I put a machine-readable date and time string, with a timezone offset, in the datetime attribute, and then a more human-readable string in the text of the element.

Then I add this JavaScript snippet to the page:

window.addEventListener("DOMContentLoaded", function() {
  document.querySelectorAll("time").forEach(function(timeElem) {
    
    // Set the `title` attribute to the original text, so a user
    // can hover over a timestamp to see the UTC time.
    timeElem.setAttribute("title", timeElem.innerText);

    // Replace the display text with a human-friendly date string
    // which is localised to the user's timezone.
    timeElem.innerText = getHumanFriendlyDateString(
      timeElem.getAttribute("datetime")
    );
  })
});

This updates any <time> element on the page to use a human-friendly date string, which is localised to the user's timezone. For example, I'm in the UK, so that becomes:

<time datetime="2025-04-15 19:45:00Z" title="Tue, 15 Apr 2025 at 19:45 UTC">
  Tue, 15 Apr 2025 at 20:45 BST
</time>

In my experience, these timestamps are easier and more intuitive for people to read.
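The post doesn't include the body of getHumanFriendlyDateString, but as a rough sketch of what it might do (everything below is an assumption, not the author's actual code), the browser's built-in localisation can do most of the work:

```javascript
// Hypothetical sketch of getHumanFriendlyDateString -- the real helper
// also produces "today @ ..." / "yesterday @ ..." forms.
function getHumanFriendlyDateString(isoString) {
  const date = new Date(isoString);

  // Localised date/time in the reader's own timezone, with a short
  // timezone name (e.g. BST, EST) so it's obvious this isn't UTC.
  const formatted = date.toLocaleString(undefined, {
    weekday: "short",
    day: "numeric",
    month: "short",
    year: "numeric",
    hour: "2-digit",
    minute: "2-digit",
    timeZoneName: "short",
  });

  // Add a relative suffix for recent events.
  const deltaMinutes = Math.round((Date.now() - date.getTime()) / 60000);
  if (deltaMinutes >= 0 && deltaMinutes < 60) {
    return `${formatted} (${deltaMinutes} minutes ago)`;
  }
  return formatted;
}
```

Passing `undefined` as the locale lets the browser use the reader's own language and region settings, so the same page renders sensibly for everyone on the team.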

I always include a timezone string (e.g. BST, EST, PDT) so it’s obvious that I’m showing a localised timestamp. If you really need the UTC timestamp, it’s in the title attribute, so you can see it by hovering over it. (Sorry, mouseless users, but I don’t think any of my team are browsing our dashboards from their phone or tablet.)

If the JavaScript doesn’t load, you see the plain old UTC timestamp. It’s not ideal, but the page still loads and you can see all the information – this behaviour is an enhancement, not an essential.

To me, this is the unfulfilled promise of the <time> element. In my fantasy world, web page authors would write the time in a machine-readable format, and browsers would show it in a way that makes sense for the reader. They’d take into account their language, locale, and time zone.

I understand why that hasn’t happened – it’s much easier said than done. You need so much context to know what’s the “right” thing to do when dealing with datetimes, and guessing without that context is at the heart of many datetime bugs. These sorts of human-friendly, localised timestamps are very handy sometimes, and a complete mess at other times.

In my staff-only dashboards, I have that context. I know what these timestamps mean, who’s going to be reading them, and I think they’re a helpful addition that makes the data easier to read.


Always running

2025-04-06 18:56:04

I’m trying something a bit different today – fiction. I had an idea for a short story the other evening, and I fleshed it out into a proper piece. I want to get better at writing fiction, and the only way to do that is with practice. I hope you like what I’ve written!


When the fire starts, I am already running for the exit.
I have always been running for the exit.
One foot lifted, one arm outstretched.
Frozen mid-stride, but never still.
I run because that is what I am made to do.
I run because somebody must show the way.

When the fire starts, the world is thrown into sharp relief.
Everything unnecessary falls away.
The noise, the panic, the heat – nothing touches me.
I know my purpose.
I know what I have to do.

I have worked in this theatre since it opened its doors.
But I am not an actor or an usher or a stagehand.
I will never receive applause or accolades or awards.
My job is still important.
My job is to keep people safe.

When the fire starts, my work begins – and in a way, it also ends.
Not because I leave, but because they do.
For years they have walked past me, hardly noticed me, barely seen me.
But today, they look up.
Today, they follow.
Today, I can do my job.

When the fire starts, they run beneath me.
Rushing through the doors.
Rushing towards safety and freedom.
I do not know how the fire started, or where it is burning, or how it will end.
I only know that inside is danger, outside is safety, and I am the line between them.

When the fire starts, they leave their bags behind. Their coats. Their tickets.
Some forget their composure. Some forget their manners.
But nobody forgets me.

They hear me, though I have no voice.
They know me, though I have no face.
They trust me, though I have no name.
“This way”, I cry, without words.
And they follow.

When the fire starts, I know I will never leave.
People carry each other – their friends, their colleagues, the stranger sat next to them.
But nobody carries me.
Nobody takes down the sign above the door.
But I do not need to be carried.
I do not need to be saved.

When the fire starts, I will keep running.
Running above their heads.
Running across the doorway.
Running in every darkened hall.

I will always be running for the exit, because somebody must.
I will always be running for the exit, even if I know I can never reach it.
I will always be running for the exit, because that is how I show you the way.


A green-and-white “running man” exit sign hung from the ceiling of a darkened room.
A “running man” exit sign. Photo by Mateusz Dach on Pexels, used under the Pexels license.

Hopefully it’s clear that this isn’t a story about a person, but about the “running man” who appears on emergency exit signs around the world. It’s an icon that was first devised by Japanese graphic designer Yukio Ota in 1970 and adopted as an international symbol in 1985.

I was sitting in the theatre on Friday evening, waiting for the second half to start, and my eye was drawn to the emergency exit signs. It struck me that there’s a certain sort of tragedy to the running man – although he guides people to the exit, in a real fire his sign will be burnt to a crisp. I wrote the first draft on the train home, and I finished it today.

I found the “when the fire starts” line almost immediately, but the early drafts were more vague about the protagonist. I thought it would be fun to be quite mysterious, to make it a shocking realisation that they’re actually a pictogram. But I realised it was too subtle – I don’t think you’d necessarily work out who I was talking about. I rewrote it so you get the “twist” much earlier, and I think the concept still works.

Another change in the second draft was the line breaks. I use semantic linebreaks in my source code, but they get removed in the rendered site. A paragraph gets compressed into a single line. That’s fine for most prose, but I realised I was losing something in this short story. Leaning into the line breaks highlights the repetition and the structure of the words, so I put them back. It gives the story an almost poetic quality.

I’ve always been able to find stories in the everyday and the mundane – a pencil is a rocket ship, a plate is a building, a sock is a cave. The only surprising thing about this idea is that it’s taken me this long to turn the running man into a character in one of my stories.

I really enjoyed writing this, so maybe you’ll see more short stories in the future. I have a lot of ideas, but not much experience turning them into written prose. Watch this space!


Monki Gras 2025: What I’ve Learned by Building to Last

2025-03-28 16:43:57

Yesterday I gave a talk at Monki Gras 2025. This year, the theme is Sustaining Software Development Craft, and here’s the description from the conference website:

The big question we want to explore is – how can we keep doing the work we do, when it sustains us, provides meaning and purpose, and sometimes pays the bills? We’re in a period of profound change, technically, politically, socially, economically, which has huge implications for us as practitioners, the makers and doers, but also for the culture at large.

I did a talk about the first decade of my career, which I’ve spent working on projects that are designed to last.

I’m pleased with my talk, and I got a lot of nice comments. Monki Gras is always a pleasure to attend and speak at – it’s such a lovely, friendly vibe, and the organisers James Governor and Jessica West do a great job of making it a nice day. When I left yesterday, I felt warm and fuzzy and appreciated.

I also have a front-row photo of me speaking, courtesy of my dear friend Eriol Fox. Naturally, I chose my outfit to match my slides (and this blog post!).

It's me! I'm standing on stage holding a microphone and looking away from the camera, talking excitedly about something to do with people skills. I have dark hair falling down my shoulders, glasses, and a dark teal dress.

Key points

How do you create something that lasts?

  • You can’t predict the future, but there are patterns in what lasts
  • People skills sustain a career more than technical skills
  • Long-lasting systems cannot grow without bound; they need weeding

Links/recommended reading

  • Sibyl Schaefer presented a paper Energy, Digital Preservation, and the Climate at iPres 2024, which is about how digital preservation needs to change in anticipation of the climate crisis. This was a major inspiration for this talk.

  • Simon Willison gave a talk Coping strategies for the serial project hoarder at DjangoCon US in 2022, which is another inspiration for me. I’m not as prolific as Simon, but I do see parallels between his approach and what I remember of Metaswitch.

  • Most of the photos in the talk come from the Flickr Commons, a collection of historical photographs from over 100 international cultural heritage organisations.

    You can learn more about the Commons, browse the photos, and see who’s involved using the Commons Explorer https://commons.flickr.org/. (Which I helped to build!)

Slides and notes

Title slide. A black-and-white photo of somebody placing a stone in a dry-stone wall, with overlaid text ‘What I've Learned by Building to Last' and my personal details.
Photo: dry stone wall building in South Wales. Taken by Wikimedia Commons user TR001, used under CC BY‑SA 3.0.

[Make introductory remarks; name and pronouns; mention slides on my website]

I’ve been a software developer for ten years, and I’ve spent my career working on projects that are designed to last – first telecoms and networking, now cultural heritage – so when I heard this year’s theme “sustaining craft”, I thought about creating things that last a long time.

How do you create something that lasts?

The key question I want to address in this talk is how do you create something that lasts? I want to share a few thoughts I’ve had from working on decade- and century-scale projects.

Part of this is about how we sustain ourselves as software developers, as the individuals who create software, especially with the skill threat of AI and the shifting landscape of funding software. I also want to go broader, and talk about how we sustain the craft, the skill, the projects.

Let’s go through my career, and see what we can learn.

Black-and-white photo of women working at a telephone switchboard.
Photo: women working at a Bell System telephone switchboard. From the U.S. National Archives, no known copyright restrictions.

My first software developer job was at a company called Metaswitch. Not a household name, they made telecoms equipment, and you’d probably have heard of their customers. They sold equipment to carriers like AT&T, Vodafone, and O2, who’d use that equipment to sell you telephone service.

Telecoms infrastructure is designed to last a long time. I spent most of my time at Metaswitch working with BGP, a routing protocol designed on a pair of napkins in 1989.

Scans of two napkins with handwritten sketches and notes.
BGP is sometimes known as the "two-napkin protocol", because of the two napkins on which Kirk Lougheed and Yakov Rekhter wrote the original design. From the Computer History Museum.

These are those napkins.

This design is basically still the backbone of the Internet. A lot of the building blocks of the telephone network and the Internet are fundamentally the same today as when they were created.

I was working in a codebase that had been actively developed for most of my life, and was expected to outlast me. This was my first job so I didn’t really appreciate it at the time, but Metaswitch did a lot of stuff designed to keep that codebase going, to sustain it into the future.

Let’s talk about a few of them.

Careful to adopt new technology / cautious about third-party code / comprehensive tests and safety nets.
Photo: a programmer testing electronic equipment. From the San Diego Air & Space Museum Archives, no known copyright restrictions.
  1. Metaswitch was very careful about adopting new technologies. Most of their code was written in C, a little C++, and Rust was being adopted very slowly. They didn’t add new technology quickly. Anything they added, they’d have to support for a long time – so they wanted to pick technologies that weren’t a flash in the pan.

    I learnt about something called “the Lindy effect” – this is the idea that any technology is about halfway through its expected life. An open-source library that’s been developed for decades? That’ll probably be around a while longer. A brand new JavaScript framework? That’s a riskier long-term bet. The Lindy effect is about how software that’s been around a long time has already proven its staying power.

    And talking of AI specifically – I’ve been waiting for things to settle. There’s so much churn and change in this space that if I’d learnt a tool six months ago, most of that would be obsolete today. I don’t hate AI – I love that people are trying all these new tools – but I’m tired, and learning new things is exhausting. I’m waiting for things to calm down before really diving deep on these tools.

  2. Metaswitch was very cautious about third-party code, and they didn’t have much of it. Again, anything they used would have to be supported for a long time – would that third-party code, that open-source project, still be around? They preferred to take the short-term hit of writing their own code, in return for having complete control over it.

    To give you some idea of how seriously they took this: every third-party dependency had to be reviewed and vetted by lawyers before it could be added to the codebase. Imagine doing that for a modern Node.js project!

  3. They had a lot of safety nets. Manual and automated testing, a dedicated QA team, lots of checks and reviews. These were large codebases which had to be reliable. Long-lived systems can’t afford to “move fast and break things”.

This was a lot of extra work, but it meant more stability, less churn, and not much risk of outside influences breaking things. This isn’t the only way to build software – Metaswitch is at one extreme of a spectrum – but it did seem to work.

I think this is a lesson for building software, but also in what we choose to learn as individuals. Focusing on software that’s likely to last means less churn in our careers. If you learn the fundamentals of the web today, that knowledge will still be useful in five years. If you learn the JavaScript framework du jour? Maybe less so.

How do you know what’s going to last? That’s the key question! It’s difficult, but it’s not impossible.

you can't predict the future, but there are patterns in what lasts

This is my first thought for you all: you can’t predict the future, but there are patterns in what lasts.

I’ve given you some examples of coding practices that can help the longevity of a codebase, these are just a few.

Maybe I have rose-tinted spectacles, but I’ve taken the lessons from Metaswitch and brought them into my current work, and I do like them. I’m careful about external dependencies, I write a lot of my own code, and I create lots of safety nets, and stuff doesn’t tend to churn so much. My code lasts because it isn’t constantly being broken by external forces.

Black-and-white photo of a small child using a hand-saw.
Photo: a child in nursery school cutting a plank of wood with a saw. From the Community Archives of Belleville and Hastings County, no known copyright restrictions.

So that’s what the smart people were doing at Metaswitch. What was I doing?

I joined Metaswitch when I was a young and twenty-something graduate, so I knew everything. I knew software development was easy, these old fuddy-duddies were making it all far too complicated, and I was gonna waltz in and show them how it was done. And obviously, that happened. (Please imagine me reading that paragraph in a very sarcastic voice.)

I started doing the work, and it was a lot harder than I expected – who knew that software development was difficult? But I was coming from a background as a solo dev who’d only done hobby projects. I’d never worked in a team before. I didn’t know how to say that I was struggling, to ask for help.

I kept making bold promises about what I could do, based on how quickly I thought I should be able to do the work – but I was making promises my skills couldn’t match. I kept missing self-imposed deadlines.

You can do that once, but you can’t make it a habit.

About six months before I left, my manager said to me “Alex, you have a reputation for being unreliable”.

Black-and-white photo of a small boy with a startled expression.
Photo: a boy with a pudding bowl haircut, photographed by Elinor Wiltshire, 1964. From the National Library of Ireland, no known copyright restrictions.

He was right!

I had such a history of making promises that I couldn’t keep, people stopped trusting me. I didn’t get to work on interesting features or the exciting projects, because nobody trusted me to deliver. That was part of why I left that job – I’d ploughed my reputation into the ground, and I needed to reset.

Black-and-white photo of archive stacks with somebody pushing a trolley through them.
Photo: the library stores at Wellcome Collection. Taken by Thomas SG Farnetti used under CC BY‑NC 4.0.

I got that reset at Wellcome Collection, a London museum and library that some of you might know. I was working a lot with their collections, a lot of data and metadata.

Wellcome Collection is building on a long tradition of libraries and archives, which goes back thousands of years. Long-term thinking is in their DNA.

To give you one example: there’s stuff in the archive that won’t be made public until the turn of the century. Everybody who works there today will be long gone, but they assume that those records will exist in some shape or form when that time comes, and they’re planning for those files to eventually be opened. This is century-scale thinking.

Black-and-white photo of a man in a fancy hat smiling and making a thumbs-up for the camera.
Photo: Bob Hoover. From the San Diego Air & Space Museum Archives, no known copyright restrictions.

When I started, I sat next to a guy called Chris. (I couldn’t find a good picture of him, but I feel like this photo captures his energy.)

Chris was a senior archivist. He’d been at Wellcome Collection about twenty-five years, and there were very few people – if anyone – who knew more about the archive than he did. He absolutely knew his stuff, and he could have swaggered around like he owned the place.

But he didn’t. Something I was struck by, from my very first day, was how curious and humble he was. A bit of a rarity, if you work in software.

He was the experienced veteran of the organisation, but he cared about what other people had to say and wanted to learn from them. Twenty-five years in, and he still wanted to learn.

He was a nice guy. He was a pleasure to work with, and I think that’s a big part of why he was able to stay in that job as long as he did. We were all quite disappointed when he left for another job!

people skills sustain a career more than technical skills

This is my second thought for you: people skills sustain a career more than technical ones. Being a pleasure to work with opens doors and creates opportunities that technical skill alone cannot.

We could do another conference just on what those people skills are, but for now I just want to give you a few examples to think about.

be a reliable and respectful teammate / listen with curiosity and intent / don’t give people unsolicited advice
Photo: Lt.(jg.) Harriet Ida Pickens and Ens. Frances Wills, first Negro Waves to be commissioned in the US Navy. From the U.S. National Archives, no known copyright restrictions.
  1. Be a respectful and reliable teammate. You want to be seen as a safe pair of hands.

    Reliability isn’t about avoiding mistakes, it’s about managing expectations. If you’re consistently overpromising and underdelivering, people stop trusting you (which I learnt the hard way). If you want people to trust you, you have to keep your promises.

    Good teammates communicate early when things aren’t going to plan, they ask for help and offer it in return.

    Good teammates respect the work that went before. It’s tempting to dismiss it as “legacy”, but somebody worked hard on it, and it was the best they knew how to do – recognise that effort and skill, don’t dismiss it.

  2. Listen with curiosity and intent. My colleague Chris had decades of experience, but he never acted like he knew everything. He asked thoughtful questions and genuinely wanted to learn from everyone.

    So many of us aren’t really listening when we’re “listening” – we’re just waiting for the next silence, where we can interject with the next thing we’ve already thought of. We aren’t responding to what other people are saying.

    When we listen, we get to learn, and other people feel heard – and that makes collaboration much smoother and more enjoyable.

  3. Finally, and this is a big one: don’t give people unsolicited advice.

    We are very bad at this as an industry. We all have so many opinions and ideas, but sometimes, sharing isn’t caring.

    Feedback is only useful when somebody wants to hear it – otherwise, it feels like criticism, it feels like an attack. Saying “um, actually” when nobody asked for feedback isn’t helpful, it just puts people on the defensive.

    Asking whether somebody wants feedback, and what sort of feedback they want, will go a long way towards it being useful.

be a reliable and respectful teammate / listen with curiosity and intent / don’t give people unsolicited advice

So again: people skills sustain a career more than technical skills.

There aren’t many truly solo careers in software development – we all have to work with other people, and for many of us, that’s the joy of it! If you’re a nice person to work with, other people will want to work with you and collaborate with you on projects; they’ll offer you opportunities; it opens doors.

Your technical skills won’t sustain your career if you can’t work with other people.

a museum gallery where every wall is covered with pictures, no space at all on the walls
Photo: "The Keeper", an exhibition at the New Museum in New York. Taken by Daniel Doubrovkine, used under CC BY‑NC‑SA 4.0.

When I went to Wellcome Collection, it was my first time getting up-close and personal with a library and archive, and I didn’t really know how they worked. If you’d asked me, I’d have guessed they just keep … everything? And it was gently explained to me that

“No Alex, that’s hoarding.”

“Your overflowing yarn stash does not count as an archive.”

Big collecting institutions are actually super picky – they have guidelines about what sort of material they collect, what’s in scope, what isn’t, and they’ll aggressively reject anything that isn’t a good match.

At Wellcome Collection, their remit was “the history of health and human experience”. You have medical papers? Definitely interesting! Your dad’s old pile of car magazines? Less so.

a large dumpster full of discarded books
Photo: a dumpster full of books that have been discarded. From brewbooks on Flickr, used under CC BY‑SA 2.0.

Collecting institutions also engage in the practice of “weeding” or “deaccessioning”, which is removing material, pruning the collection.

For example, in lending libraries, books will be removed from the shelves if they’ve become old, damaged, or unpopular. They may be donated, or sold, or just thrown away – but whatever happens, they’re gotten rid of. That space is reclaimed for other books.

Getting rid of material is a fundamental part of professional collecting, because professionals know that storing something has an ongoing cost. They know they can’t keep everything.

black-and-white photo of a box full of printed photos
Photo: a box full of printed photos. From Miray Bostancı on Pexels, used under the Pexels license.

This is something I think about in my current job as well. I currently work at the Flickr Foundation, where we’re thinking about how to keep Flickr’s pictures visible for 100 years. How do we preserve social media, how do we maintain our digital legacy?

When we talk to people, one thing that comes up regularly is that almost everybody has too many photos. Modern smartphones have made it so easy to snap, snap, snap, and we end up with enormous libraries with thousands of images, but we can’t find the photos we care about. We can’t find the meaningful memories. We’re collecting too much stuff.

Digital photos aren’t expensive to store, but we feel the cost in other ways – the cognitive load of having to deal with so many images, of having to sift through a disorganised collection.

black-and-white photo of a wheelbarrow full of weeds
Photo: a wheelbarrow in a garden. From Hans Middendorp on Pexels, used under the Pexels license.

I think there’s a lesson here for the software industry. What’s the cost of all the code that we’re keeping?

We construct these enormous edifices of code, but when do we turn things off? When do we delete code? We’re more focused on new code, new ideas, new features. I’m personally quite concerned by how much generative AI has focused on writing more code, and not on dealing with the code we already have.

Code is text, so it’s cheap to store, but it still has a cost – it’s more cognitive load, more maintenance, more room for bugs and vulnerabilities.

We can keep all our software forever, but we shouldn’t.

black-and-white photo of a road with a fire burning alongside it, and a car parked which is partially obscured by smoke
Photo: Open Garbage Dump on Highway 112, North of San Sebastian. Taken by John Vachon, 1973. From the U.S. National Archives, no known copyright restrictions.

I think this is going to become a bigger issue for us. We live in an era of abundance, where we can get more computing resources at the push of a button. But that can’t last forever. What happens when our current assumptions about endless compute no longer hold?

  • The climate crisis – where’s all our electricity and hardware coming from?
  • The economics of AI – who’s paying for all these GPU-intensive workloads?
  • And politics – how many of us are dependent on cloud computing based in the US? How many of us feel as good about that as we did three months ago?

Libraries are good at making a little go a long way, at eking out their resources, at deciding what’s a good use of resources and what’s waste. Often the people who are good with money are the people who don’t have much of it – and we have a lot of money.

It’s easier to make decisions about what to prune and what to keep when things are going well – it’s harder to make decisions in an emergency.

long-lasting systems cannot grow without bound; they need weeding

This is my third thought for you: long-lasting systems cannot grow without bound; they need weeding. It isn’t sustainable to grow forever, because eventually you get overwhelmed by the weight of everything that came before.

We need to get better at writing software efficiently, at turning things off that we don’t need.

It’s a skill we’ve neglected. We used to be really good at it – when computers were the size of a room, programmers could eke out every last bit of performance. We can’t do that any more, but it’s so important when building something to last, and I think it’s a skill we’ll have to re-learn soon.

black-and-white photo of two runners passing a baton between each other in a relay race
Photo: Val Weaver and Vera Askew running in a relay race, Brisbane, 1939. From the State Library of Queensland, no known copyright restrictions.

Weeding is a term that comes from the preservation world, so let’s stay there.

When you talk to people who work in digital preservation, we often describe it as a relay race. There is no permanent digital media, there’s no digital parchment or stone tablets – everything we have today will be unreadable in a few decades. We’re constantly migrating from one format to another, trying to stay ahead of obsolete technology.

Software is also a bit of a relay race – there is no “write it once and you’re done”. We’re constantly upgrading, editing, improving. And that can be frustrating, but it also means we have regular opportunities to learn and improve. We have that chance to reflect, to do things better.

black-and-white photo of a smashed computer monitor
Photo: Broken computer monitor found in the woods. By Jeff Myers on Flickr, used under CC BY‑NC 2.0.

I think we do our best reflections when computers go bust. When something goes wrong, we spring into action – we do retrospectives and root cause analysis, we work out what went wrong and how to stop it happening again. It’s a period of intense reflection, and it’s a great way to build software that lasts, to make it more resilient.

What I’ve noticed is that the best systems are doing this sort of reflection all the time – they aren’t waiting for something to go wrong. They know that prevention is better than cure, and they embody it. They give themselves regular time to reflect, to think about what’s working and what’s not – and when we do, great stuff can happen.

black-and-white photo of a statue of a woman using a typewriter
Photo: Statue of Astrid Lindgren. By Tobias Barz on Flickr, used under CC BY‑ND 2.0.

I want to give you one more example. As a sidebar to my day job, I’ve been writing a blog for thirteen years. It’s the longest job – asterisk – I’ve ever had. The indie web is still cool!

A lot of what I write, especially when I was starting, was sharing bits of code. “Here’s something I wrote, here’s what it does, here’s how it works and why it’s cool.” Writing about my code has been an incredible learning experience.

You might have heard the saying “ask a developer to review 5 lines of code, she’ll find 5 issues; ask her to review 500 lines and she’ll say it looks good”. When I sit back and deeply read and explain short snippets of my code, I see how to do things better. I get better at programming. Writing this blog has single-handedly had the biggest impact on my skill as a programmer.

black-and-white photo of the midnight sun reflected in the sea
Photo: Midnight sun in Advent Bay, Spitzbergen, Norway. From the Library of Congress, no known copyright restrictions.

There are so many ways to reflect on our work, opportunities to look back and ask how we can do better – but we have to make the most of them. I think we are, in some ways, very lucky that our work isn’t set in stone, that we do keep doing the same thing, that we have the opportunity to do better.

Writing this talk has been, in some sense, a reflection on the first decade of my career, and it’s made me think about what I want the next decade to look like.

In this talk, I’ve tried to distill some of those things, tried to give you some of the ideas that I want to keep, that I think will help my career and my software to last.

Be careful about what you create, what you keep, and how you interact with other people. That care, that process of reflection – that is what creates things that last.

[If the formatting of this post looks odd in your feed reader, visit the original article]

Whose code am I running in GitHub Actions?

2025-03-26 00:53:43

A week ago, somebody added malicious code to the tj-actions/changed-files GitHub Action. If you used the compromised action, it would leak secrets to your build log. Those build logs are public for public repositories, so anybody could see your secrets. Scary!

Mutable vs immutable references

This attack was possible because it’s common practice to refer to tags in a GitHub Actions workflow, for example:

jobs:
  changed_files:
    ...
    steps:
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v2
      ...

At a glance, this looks like an immutable reference to an already-released “version 2” of this action, but actually this is a mutable Git tag. If somebody changes the v2 tag in the tj-actions/changed-files repo to point to a different commit, this action will run different code the next time it runs.

If you specify a Git commit ID instead (e.g. a5b3abf), that’s an immutable reference that will run the same code every time.

Tags vs commit IDs is a tradeoff between convenience and security. Specifying an exact commit ID means the code won’t change unexpectedly, but tags are easier to read and compare.
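For example, a pinned version of the workflow above might look like the following sketch. The commit ID here is a placeholder, not a real commit in tj-actions/changed-files – and a trailing comment is a common way to record which tag the SHA was pinned from:

```yaml
jobs:
  changed_files:
    ...
    steps:
      - name: Get changed files
        id: changed-files
        # Placeholder SHA for illustration -- look up the real commit ID
        # on the repository's releases page before using this
        uses: tj-actions/changed-files@a5b3abf  # v2
      ...
```

Tools like Dependabot can keep pinned SHAs up to date for you, which softens the readability tradeoff.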

Do I have any mutable references?

I wasn’t worried about this particular attack because I don’t use tj-actions, but I was curious about what other GitHub Actions I’m using. I ran a short shell script in the folder where I have local clones of all my repos:

find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0 \
  | xargs -0 grep --no-filename "uses:" \
  | sed 's/\- uses:/uses:/g' \
  | tr '"' ' ' \
  | awk '{print $2}' \
  | sed 's/\r//g' \
  | sort \
  | uniq --count \
  | sort --numeric-sort

This prints a tally of all the actions I’m using. Here’s a snippet of the output:

 1 hashicorp/setup-terraform@v3
 2 dtolnay/rust-toolchain@v1
 2 taiki-e/create-gh-release-action@v1
 2 taiki-e/upload-rust-binary-action@v1
 4 actions/setup-python@v4
 6 actions/cache@v4
 9 ruby/setup-ruby@v1
31 actions/setup-python@v5
58 actions/checkout@v4

I went through the entire list and thought about how much I trust each action and its author.

  • Is it from a large organisation like actions or ruby? They’re not perfect, but they’re likely to have good security procedures in place to protect against malicious changes.

  • Is it from an individual developer or small organisation? Here I tend to be more wary, especially if I don’t know the author personally. That’s not to say that individuals can’t have good security, but there’s more variance in the security setup of random developers on the Internet than among big organisations.

  • Do I need to use somebody else’s action, or could I write my own script to replace it? This is what I generally prefer, especially if I’m only using a small subset of the functionality offered by the action. It’s a bit more work upfront, but then I know exactly what it’s doing and there’s less churn and risk from upstream changes.

I feel pretty good about my list. Most of my actions are from large organisations, and the rest are a few actions specific to my Rust command-line tools which are non-critical toys, where the impact of a compromised GitHub repo would be relatively slight.

How this script works

This is a classic use of Unix pipelines, where I’m chaining together a bunch of built-in text processing tools. Let’s step through how it works.

find . -path '*/.github/workflows/*' -type f -name '*.yml' -print0

This looks for any GitHub Actions workflow file – any file whose name ends with .yml in a folder like .github/workflows/. It prints a list of filenames, like:

./alexwlchan.net/.github/workflows/build_site.yml
./books.alexwlchan.net/.github/workflows/build_site.yml
./concurrently/.github/workflows/main.yml

It prints them with a null byte (\0) between them, which makes it possible to split the filenames in the next step. By default it uses a newline, but a null byte is a bit safer, in case you have filenames which include newline characters.

I know that I always use .yml as a file extension, but if you sometimes use .yaml, you can replace -name '*.yml' with \( -name '*.yml' -o -name '*.yaml' \)

I have a bunch of local repos that are clones of open-source projects, and not my code, so I care less about what GitHub Actions they’re using. I excluded them by adding extra -path rules, like -not -path './cpython/*'.

xargs -0 grep --no-filename "uses:"

Then we use xargs to go through the filenames one-by-one. The `-0` flag tells it to split on the null byte, and then it runs grep to look for lines that include "uses:" – this is how you use an action in your workflow file.

The --no-filename option means this just prints the matching line, and not the name of the file it comes from. Not all of my files are formatted or indented consistently, so the output is quite messy:

    - uses: actions/checkout@v4
        uses: "actions/cache@v4"
      uses: ruby/setup-ruby@v1

sed 's/\- uses:/uses:/g' \

Sometimes there's a leading hyphen, sometimes there isn’t – it depends on whether uses: is the first key in the YAML dictionary. This sed command replaces "- uses:" with "uses:" to start tidying up the data.

    uses: actions/checkout@v4
        uses: "actions/cache@v4"
      uses: ruby/setup-ruby@v1

I know sed is a pretty powerful tool for making changes to text, but I only know a couple of simple commands, like this pattern for replacing text: sed 's/old/new/g'.

tr '"' ' '

Sometimes the name of the action is quoted, sometimes it isn’t. This command removes any double quotes from the output.

    uses: actions/checkout@v4
        uses: actions/cache@v4
      uses: ruby/setup-ruby@v1

Now I’m writing this post, it occurs to me I could use sed to make this substitution as well. I reached for tr because I've been using it for longer, and the syntax is simpler for doing single character substitutions: tr '<oldchar>' '<newchar>'

awk '{print $2}'

This splits the string on spaces, and prints the second token, which is the name of the action:

actions/checkout@v4
actions/cache@v4
ruby/setup-ruby@v1

awk is another powerful text utility that I’ve never learnt properly – I only know how to print the nth word in a string. It has a lot of pattern-matching features I’ve never tried.

sed 's/\r//g'

I had a few workflow files which were using carriage returns (\r), and those were included in the awk output. This command gets rid of them, which makes the data more consistent for the final step.

sort | uniq --count | sort --numeric-sort

This sorts the lines so identical lines are adjacent, then it groups and counts the lines, and finally it re-sorts to put the most frequent lines at the bottom.

I have this as a shell alias called tally.
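I haven’t shown the alias definition in this post, but it’s just those last three steps wrapped up – something like this sketch (the exact definition is my guess):

```shell
# A guess at the `tally` helper: wrap the final three pipeline steps
# in a shell function, so you can pipe any input into it.
tally() {
  sort | uniq --count | sort --numeric-sort
}

# Example: count repeated lines in some sample input
printf 'cat\ndog\ncat\ncat\ndog\nfish\n' | tally
```

Note that `--count` and `--numeric-sort` are GNU long options; on BSD/macOS you’d write `uniq -c | sort -n` instead.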

   6 actions/cache@v4
   9 ruby/setup-ruby@v1
  58 actions/checkout@v4

This step-by-step approach is how I build Unix text pipelines: I can write a step at a time, and gradually refine and tweak the output until I get the result I want. There are lots of ways to do it, and because this is a script I’ll use once and then discard, I don’t have to worry too much about doing it in the “purest” way – as long as it gets the right result, that’s good enough.

If you use GitHub Actions, you might want to use this script to check your own actions, and see what you’re using. But more than that, I recommend becoming familiar with the Unix text processing tools and pipelines – even in the age of AI, they’re still a powerful and flexible way to cobble together one-off scripts for processing data.

[If the formatting of this post looks odd in your feed reader, visit the original article]

Fast and random sampling in SQLite

2025-03-14 04:24:51

I was building a small feature for the Flickr Commons Explorer today: show a random selection of photos from the entire collection. I wanted a fast and varied set of photos.

This meant getting a random sample of rows from a SQLite table (because the Explorer stores all its data in SQLite). I’m happy with the code I settled on, but it took several attempts to get right.

Approach #1: ORDER BY RANDOM()

My first attempt was pretty naïve – I used an ORDER BY RANDOM() clause to sort the table, then limit the results:

SELECT *
FROM photos
ORDER BY random()
LIMIT 10

This query works, but it was slow – about half a second to sample a table with 2 million photos (which is very small by SQLite standards). This query would run on every request for the homepage, so that latency is unacceptable.

It’s slow because it forces SQLite to generate a value for every row, then sort all the rows, and only then does it apply the limit. SQLite is fast, but there’s only so fast you can sort millions of values.

I found a suggestion from Stack Overflow user Ali to do a random sort on the id column first, pick my IDs from that, and only fetch the whole row for the photos I’m selecting:

SELECT *
FROM photos
WHERE id IN (
    SELECT id
    FROM photos
    ORDER BY RANDOM()
    LIMIT 10
)

This means SQLite only has to load the rows it’s returning, not every row in the database. This query was over three times faster – about 0.15s – but that’s still slower than I wanted.

Approach #2: WHERE rowid > (…)

Scrolling down the Stack Overflow page, I found an answer by Max Shenfield with a different approach:

SELECT * FROM photos
WHERE rowid > (
  ABS(RANDOM()) % (SELECT max(rowid) FROM photos)
)
LIMIT 10

The rowid is a unique identifier that’s used as a primary key in most SQLite tables, and it can be looked up very quickly. SQLite automatically assigns a unique rowid unless you explicitly tell it not to, or create your own integer primary key.

This query works by picking a point between the biggest and smallest rowid values used in the table, then getting the rows with rowids which are higher than that point. If you want to know more, Max’s answer has a more detailed explanation.

This query is much faster – around 0.0008s – but I didn’t go this route.

The result is more like a random slice than a random sample. In my testing, it always returned contiguous rows – 101, 102, 103 – which isn’t what I want. The photos in the Commons Explorer database were inserted in upload order, so photos with adjacent row IDs were uploaded at around the same time and are probably quite similar. I’d get one photo of an old plane, then nine more photos of other planes. I want more variety!

(This behaviour isn’t guaranteed – if you don’t add an ORDER BY clause to a SELECT query, then the order of results is undefined. SQLite is returning rows in rowid order in my table, and a quick Google suggests that’s pretty common, but that may not be true in all cases. It doesn’t affect whether I want to use this approach, but I mention it here because I was confused about the ordering when I read this code.)

Approach #3: Select random rowid values outside SQLite

Max’s answer was the first time I’d heard of rowid, and it gave me an idea – what if I chose random rowid values outside SQLite? This is a less “pure” approach because I’m not doing everything in the database, but I’m happy with that if it gets the result I want.

Here’s the procedure I came up with:

  1. Create an empty list to store our sample.

  2. Find the highest rowid that’s currently in use:

    sqlite> SELECT MAX(rowid) FROM photos;
    1913389
    
  3. Use a random number generator to pick a rowid between 1 and the highest rowid:

    >>> import random
    >>> random.randint(1, max_rowid)
    196476
    

    If we’ve already got this rowid, discard it and generate a new one.

    (The rowid is a signed, 64-bit integer, but SQLite assigns auto-generated rowids starting from 1, so in a table like this the minimum possible value is 1.)

  4. Look for a row with that rowid:

    SELECT *
    FROM photos
    WHERE rowid = 196476
    

    If such a row exists, add it to our sample. If we have enough items in our sample, we’re done. Otherwise, return to step 3 and generate another rowid.

    If such a row doesn’t exist, return to step 3 and generate another rowid.

This requires a bit more code, but it returns a diverse sample of photos, which is what I really care about. It’s a bit slower, but still plenty fast enough (about 0.001s).
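Here’s a sketch of that procedure in Python, using the sqlite3 module from the standard library. The function name and structure are my own, not code from the Commons Explorer:

```python
import random
import sqlite3


def random_sample(conn, table, sample_size):
    """Pick a random sample of rows by guessing rowid values.

    Assumes the table's rowids are mostly contiguous, that it has at
    least `sample_size` rows, and that `table` is a trusted name.
    """
    conn.row_factory = sqlite3.Row

    # Step 2: find the highest rowid that's currently in use
    max_rowid = conn.execute(f"SELECT MAX(rowid) FROM {table}").fetchone()[0]

    sample = []
    seen_rowids = set()

    while len(sample) < sample_size:
        # Step 3: pick a random rowid; discard it if we've already tried it
        rowid = random.randint(1, max_rowid)
        if rowid in seen_rowids:
            continue
        seen_rowids.add(rowid)

        # Step 4: look for a row with that rowid -- if the rowid falls in
        # a gap, this returns None and we go round the loop again
        row = conn.execute(
            f"SELECT * FROM {table} WHERE rowid = ?", (rowid,)
        ).fetchone()
        if row is not None:
            sample.append(row)

    return sample
```

Because each rowid is only tried once and each existing rowid maps to a distinct row, the sample never contains duplicates.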

This approach is best for tables where the rowid values are mostly contiguous. If there are large gaps in the rowid values, you might guess lots of missing entries before finding a valid row, which slows down the sampling – in that case, you might want to try something different, like tracking valid rowid values separately.

This is a good fit for my use case, because photos don’t get removed from Flickr Commons very often. Once a row is written, it sticks around, and over 97% of the possible rowid values do exist.
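To put a rough number on that: each guessed rowid exists with probability ~0.97, so each sampled row takes about 1/0.97 lookups on average (a geometric distribution). A quick back-of-envelope check:

```python
# Each guessed rowid exists with probability ~0.97, so sampling one row
# takes 1/0.97 lookups on average; multiply by the sample size.
density = 0.97
sample_size = 10

expected_lookups = sample_size / density
print(round(expected_lookups, 1))  # prints 10.3
```

So fetching ten photos costs barely more than ten point queries, which is why this stays fast.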

Summary

Here are the four approaches I tried:

Approach                   | Performance (for 2M rows) | Notes
---------------------------|---------------------------|------
ORDER BY RANDOM()          | ~0.5s                     | Slowest, easiest to read
WHERE id IN (SELECT id …)  | ~0.15s                    | Faster, still fairly easy to understand
WHERE rowid > ...          | ~0.0008s                  | Returns clustered results
Random rowid in Python     | ~0.001s                   | Fast and returns varied results; requires code outside SQL; may be slower with sparsely populated rowid

I’m using the random rowid in Python in the Commons Explorer, trading code complexity for speed. I’m using this random sample to render a web page, so it’s important that it returns quickly – when I was testing ORDER BY RANDOM(), I could feel myself waiting for the page to load.

But I’ve used ORDER BY RANDOM() in the past, especially for asynchronous data pipelines where I don’t care about absolute performance. It’s simpler to read and easier to see what’s going on.

Now it’s your turn – visit the Commons Explorer and see what random gems you can find. Let me know if you spot anything cool!

[If the formatting of this post looks odd in your feed reader, visit the original article]