2026-03-19 15:07:21
I love public libraries, and they’re where I get about half the books I read.
When people talk about why libraries should exist, they usually frame them as a social good. Libraries provide low-cost access to books and information, they’re a hub for digital literacy and access to technology, and they’re one of the few remaining third places where people are treated as citizens rather than consumers. Their role has grown from being a lender of books to being a vital community resource.
Those are noble reasons for libraries to exist, but not necessarily reasons to use them. I sometimes hear them said with a patronising tone, from people who think libraries are only for other people. To them, a library is a charity for the poor and illiterate, not a destination for the well-read.
I think this is a failure of imagination. Even though I can afford to buy my own books and you could argue I don’t “need” a library, using them has made me a happier reader.
Libraries are a cheap and safe way for me to try lots of different books, including books I’m not sure I’ll like. Sometimes, those experiments become new favourites.
If I’m buying books in a bookshop, I lean towards the familiar, towards books like the ones I already enjoy. It’s unusual for me to get anything radically different, because I don’t want to gamble my money on a complete unknown.
Borrowing a book from the library is free, so it’s easier to try a new author or genre. I can read two chapters and return a book guilt-free if it’s not my cup of tea. But sometimes, I try something very different, and I discover a whole new collection of books to enjoy.
I found some of my favourite books and authors through library books I probably wouldn’t have picked off a bookshop shelf:
Most library books are just “fine” – I enjoy them and return them to the shelf. I wouldn’t want every book to be a massive revelation, but those discoveries happen more often because the library reduces the cost of being curious.
I love living in a house with books, but I only have so much shelf space. Libraries allow me to read lots of books without cluttering up my home.
If I buy a book, I’m also buying a future decision: when I’m done, do I keep it, gift it, or donate it? It’s not a difficult decision, but it’s just another thing to think about. It’s easy for a book to get “stuck” in my home for years, even when I don’t actually want to keep it.
When I finish a library book, there’s no decision to make: it has to go back to the library. I drop it in the returns bin and forget about it. It’s an easy, safe default that keeps my home clutter-free.
I especially love using libraries for books that I know I’m only going to read once, like romance novels and murder mysteries. Once I know the ending, I’m unlikely to revisit those books unless they’re really exceptional.
I still buy books, and if I really like a library book I’ll buy my own copy – but for everything else, the library has helped refine my shelves into the set of books I really love, not just a record of everything I’ve ever touched.
As I prepare to move house later this year, I’ve been uncovering stashes of unread books. Some have followed me across multiple moves; some I’ve owned for over a decade. I used to tell myself that these books were “maturing” on my shelf, but really they were just stagnating. I’ve donated many of them to my local charity shop, because I’ve finally admitted I’ll never actually read them.
I don’t have this problem with library books, because the return period triggers a “use it or lose it” response in my brain. I have to read a book before it’s gone, or decide not to and return it. It works even though I know the deadline is completely artificial – if I run out of time, I can always renew or re-borrow a book – but I still feel that sense of urgency.
Return periods are a sort of literary placebo: the ticking clock tricks me into prioritising a book rather than letting it blend into the furniture.
My “To Be Read” list is longer than ever, but most of it is now in the library’s digital catalogue rather than physical piles in my home. When I get a book, I read it quickly or not at all – and either way, it doesn’t sit around untouched for fifteen years.
Public libraries will never be my sole source of books – I have to go elsewhere for niche, specialist, and academic texts – but using them has helped me read more and find new favourites. They’re a vital social good, but I don’t use them out of a sense of civic duty. I use them because they make me a happier, more adventurous, and more prolific reader.
2026-03-12 22:25:03
I want my current computer to last for a decade. That’s an eternity in the tech world, far longer than most people keep their hardware, but I don’t think it’s an unreasonable goal. Personal computers keep getting faster, but my needs aren’t changing.
I use my computer for the same fundamental tasks I did ten years ago: browsing the web, writing, editing photos, running scripts, and building small websites. Today’s computers can do all that and have power to spare. You can still push their limits with high-end tasks like video editing, 3D modelling, or gaming – but I don’t do any of those things.
I don’t need the latest and greatest, and I haven’t for a long time. Add in the expense, the hassle of upgrading, and the environmental impact of new hardware, and you can see why I’m keen to use my computers for as long as possible.
This won’t be easy. My needs might not change, but the world around me will. I won’t get software updates forever, the web is a bloated mess that becomes more resource-hungry every day, and AI may introduce unforeseen demands on my computer. I’ve had to set up my computer carefully to give it the best chance of lasting the decade.
In my first job, we sold telecoms hardware that sat in data centres for years, unmodified. We had to write software updates that would run on the machine as-is, because hardware upgrades were impossible. If a new feature needed more resources, we had to find a way to make the existing code more efficient to compensate. It was a stark contrast to cloud computing, where a more powerful machine is just a few clicks in a console. We had to be in the habit of thinking about efficiency, because there was no other option.
That habit has stuck. I try to be efficient with my personal devices, and I’m very conservative about what I install – what apps, dependencies, and processes I allow to run.
I also write a lot of my own tools. If something feels slow or sluggish, I don’t have to buy a faster machine; I can look for a way to improve my code.
This might look like a process that requires discipline, but at this point it’s just my standard routine. I’ve always tried to use my computers efficiently, which means they last a long time; it’s only with my latest purchase that I’ve made longevity an explicit goal.
I bought this computer as I was wrapping up my career in digital preservation, and that’s why I approached it with such a long-term mindset. In that job, I’d been designing collections to survive over decades and centuries; what seems like an eternity in tech is a heartbeat in heritage. With that mindset, trying to keep a computer for a decade didn’t seem so ridiculous – especially when I remembered that I almost did it already, with an eight-year-old iMac that was running perfectly until the desk underneath it collapsed, a calamity that would kill any computer.
Global politics is another factor; I’m keen to avoid needing to buy a new computer in the near future, because I’m not sure how easy it will be. Right now I can just walk into a high street store, but that relies on a fragile and complex supply chain that’s showing cracks.
I bought my computer in November 2024, just after Trump was re-elected as US president, and his campaign threatened heavy tariffs and trade wars. A year later, that trade uncertainty has become the status quo; his war with Iran threatens global energy markets; computer prices are rising as parts are diverted to AI data centres; and the majority of the world’s microprocessors are still built in Taiwan, under the constant shadow of a Chinese invasion. And the background to all of this is climate change, which won’t make manufacturing computers any easier.
I hope I’m wrong, and that buying a new computer continues to be as simple as it is today. But if I’m right, and they become scarce or expensive, I’ll be glad to have a device that I’m ready to use for years more, rather than be stuck with something too slow that I can’t afford to upgrade.
I’ll have to replace it eventually, but hopefully I can be patient and outlast any short-term disruptions to the supply chain. And if there are long-term disruptions, I’ll have more time to plan my next purchase.

I have a home office with a fixed setup, and my main computer is a desktop, which makes this easier – I don’t know if I could make a laptop last ten years. A desktop never moves, so it’s less vulnerable to dings and drops (assuming the desk stays standing), and there’s no internal battery to degrade or swell. I also don’t eat or drink at my desk, so there’s minimal risk of liquid damage.
I use Macs, and Apple offers three Mac desktops: the Mac mini, the Mac Studio, and the Mac Pro. The Studio and Pro are overpowered for my needs, and while that extra power would give me headroom, it would be a lot of extra expense for marginal gain. Instead, I looked at the Mac mini.
When I was buying, Apple offered two stock configurations of the Mac mini: an M4 chip with 16GB of RAM and 256GB of storage for £599, or an M4 Pro chip with 24GB of RAM and 512GB of storage for £1399. Both models got favourable reviews and seemed like good value, because they avoid Apple’s egregiously-priced upgrades.
I bought the M4 Pro rather than the base M4 – I think I’d have been fine with the base M4 for now, but 16GB of RAM might become tight as macOS gets more memory-hungry. I do want some headroom; I just don’t want to pay Mac Studio prices for it.
I’ve expanded the storage with a 4TB external SSD which is permanently plugged in. It was much cheaper than Apple’s upgrades, and it means I won’t run out of space any time soon. It also reduces wear on the internal SSD, which feels like the most likely component to fail.
The big question mark is software support, and I’m keeping my fingers crossed. Macs are typically supported by the latest version of macOS for six to eight years, and they get security updates for another two years after that. That should take me close to a decade, if not all the way.
For comparison, the M1 MacBook Air was released in November 2020, and I expect it will still be supported in this year’s macOS 27 release. Apple have already announced that this release will drop support for Intel Macs; it would be aggressive to drop M1 support at the same time, especially as an M1 MacBook Air was on sale at Walmart until a few weeks ago. If M1 support does continue, the M1 will get macOS updates until at least autumn 2027, and security updates until 2029 – a nine-year span. Suddenly, running my M4 Mac mini for ten years doesn’t feel so ridiculous.
Apple’s hardware is in fantastic shape, and I absolutely believe their Mac minis can run for a decade without failing. (Maybe their hardware chief should be in charge of more things?) There’s always a risk of buying a lemon which has a manufacturing defect, but I’ve had mine for over a year and nothing has failed yet. I’m confident this machine can go the distance.

So far, it’s great – my Mac mini is a fantastic machine. It never feels slow; it’s never crashed; it takes up a tiny space on my desk; and I have enough storage that I never need to worry about cleaning up files. It’s just what I want a computer to be – an appliance I never have to think about.
You’d expect it to feel easy right now, because I’m still well within the usual lifetime of this product. It will get harder over time – the first year will be easier than the final year – but it’s still an encouraging start.
I hope that I’ve bought a decade of not having to think about hardware. Modern computers are ridiculously capable, and short of a catastrophic failure, it’s hard to imagine a reason to upgrade.
See you again in 2034!
2026-03-05 16:58:35
During the COVID lockdowns, I spent long evenings at home on my own, and I amused myself by dressing up in extravagant and glamorous clothing. One dark night, I realised I could use my home working setup to have some fun, with just a webcam and a monitor.
I turned off every light in my office, cranked up my monitor to max brightness, then I changed the colour on the screen to turn my room red or green or pink. Despite the terrible image quality, I enjoyed looking at myself in the webcam as my outfits took on a vivid new hue.
Here are three pictures with my current office lit up in different colours, each with a distinct vibe:



For a while I was using Keynote to change my screen colour, and Photo Booth to use the webcam. It worked, but juggling two apps was clunky, and a bunch of the screen was taken up with toolbars or UI that diluted the colour.
To make it easier, I built a tiny web app that helps me take these silly pictures. It’s mostly a solid colour background, with a small preview from the webcam, and buttons to take a picture or change the background colour. It’s a fun little toy, and it’s lived on my desktop ever since.
Here’s a screenshot:

If you want to play with it yourself, turn out the lights, crank up the screen brightness, and visit alexwlchan.net/fun-stuff/gumdrop.html. All the camera processing runs locally, so the webcam feed is completely private – your pictures are never sent to me or my server.
The picture quality on my webcam is atrocious, even more so in a poorly-lit room, but that’s all part of the fun. One thing I discovered is that I prefer this with my desktop webcam rather than my iPhone – the iPhone is a better camera, but it does more aggressive colour correction. That makes the pictures less goofy, which defeats the purpose!
I’m not going to explain how the code works – most of it comes from an MDN tutorial which explains how to use a webcam from an HTML page, so I’d recommend reading that.
I don’t play dress up as much as I used to, but on occasion I’ll still break it out and amuse myself by seeing what I look like in deep blue, or vivid green, or hot pink. It’s also how I took one of my favourite pictures of myself, a witchy vibe I’d love to capture more often:

Computers can be used for serious work, but they can do silly stuff as well.
2026-02-17 16:08:40
I have some personal Git repos that I want to sync between my devices – my dotfiles, text expansion macros, terminal colour schemes, and so on.
For a long time, I used GitHub as my sync layer – it’s free, convenient, and I was already using it – but recently I’ve been looking at alternatives. I’m trying to reduce my dependency on cloud services, especially those based in the USA, and I don’t need most of GitHub’s features. I made these repos public, in case somebody else might find them useful, but in practice I think very few people ever looked at them.
There are plenty of GitHub-lookalikes, which are variously self-hosted or hosted outside the USA, like GitLab, Gitea, or Codeberg – but like GitHub, they all have more features than I need. I just care about keeping my files in sync. Maybe I could avoid introducing another service?
As I thought about how Git works, I thought of a much simpler way – and I’m almost embarrassed by how long it took me to figure this out.
In Git repos, there’s a .git folder which holds the complete state of the repo.
It includes the branches, the commits, and the contents of every file.
If you copy that .git folder to a new location, you get another copy of the repo.
You could copy a repo with basic utilities like cp or rsync – at least, as a one-off.
I wouldn’t recommend using them for regular syncing; it would be easy to lose data, because they don’t know how to merge changes from different devices.
Git’s built-in push and pull commands are smarter: they can synchronise this state between locations, compare the history of different copies, and stitch the changes together safely.
Within a repo, you can create a remote location, a pointer to another copy of the repo that lives somewhere else.
When you push or pull, your local .git folder gets synchronised with that other copy.
We’ve become used to the idea that the remote location is a cloud service – but it can just as easily be a folder on your local disk – and that gives me everything I want.
Before I explain the steps, I need to explain the difference between bare and non-bare repositories.
In our day-to-day work, we use non-bare repositories.
They have a “working directory” – the files you can see and edit.
The .git folder lives under this directory, and stores the entire history of the repo.
The working directory is a view into a particular point in that history.
By contrast, a bare repository is just the .git folder without the working directory.
It’s the history without the view.
You can’t push changes to a non-bare repo – if you try, Git will reject your push.
This is to avoid confusing situations where the working directory and the .git folder get out of sync.
Imagine if you had the repo open in a text editor, and somebody else pushed new code to the repo – suddenly your files would no longer match the Git history.
Whenever we push, we’re normally pushing to a bare repository. Because nobody can “work” inside a bare repo, it’s always safe to receive pushes from other locations – there’s no working directory to get out of sync.
I have a home desktop which is always running, and it’s connected to a large external drive. For each repo, there’s a bare repository on the external drive, and then all my devices have a checked-out copy that points to the path on that external drive as their remote location. The desktop connects to the drive directly; the other devices connect over SSH.
This only takes a few commands to set up:
Create a bare repository on the external drive.
$ cd /Volumes/Media/bare-repos
$ git init --bare dotfiles
Set the bare repository as a remote location.
On the home desktop, which mounts the external drive directly:
$ cd ~/repos/dotfiles
$ git remote add origin /Volumes/Media/bare-repos/dotfiles
On another machine, which accesses the drive over SSH:
$ cd ~/repos/dotfiles
$ git remote add origin alexwlchan@desktop:/Volumes/Media/bare-repos/dotfiles
This allows me to run git push and git pull commands as normal, which will copy my history to the bare repository.
Clone the bare repository to a new location.
When I set up a new computer:
$ git clone /Volumes/Media/bare-repos/dotfiles ~/repos/dotfiles
This approach is very flexible, and you can store your bare repository in any location that’s accessible over your local filesystem or SSH. You could use an external drive, a web server, a NAS, whatever. I’m using Tailscale to get SSH access to my repos from other devices, but any mechanism for connecting devices over SSH will do. (Disclaimer: I work at Tailscale.)
Of course, this is missing many features of GitHub and the like – there’s no web interface, no issue tracking, no collaboration – but for my small, personal repos, that’s fine. There’s also no third-party hosting, no risk of outages, no services to manage. I’m just moving files about over the filesystem. It feels like the Git equivalent of static websites, in a good way.
I used to throw every scrap of code onto GitHub in the vague hope of “sharing knowledge”, but most of it was digital clutter.
Nobody was reading my personal repos in the hope of learning something. They’re a grab bag of assorted snippets, with only a loose definition or purpose – it’s unlikely another person would know what they could find, or spend the time to go looking. Sharing knowledge requires more than just publishing code somewhere; you need to make it possible for somebody to find.
Extracting my ideas into standalone, searchable snippets makes them dramatically more useful and discoverable. There are single blog posts that have done more good than my entire corpus of code on GitHub – and I have hundreds of blog posts.
I still have plenty of public repos, but they’re specific libraries or tools with a clear purpose. It’s more obvious whether you might want to read them, and they’re better documented if you do. It’s an intentional selection, not a random set of things I want to keep in sync.
For years, I’ve been using a social media site as a glorified file-syncing service, but I don’t need pull requests, an issue tracker, or a CI/CD pipeline to move a few macros between my machines – just a place to put my code. As with so many digital things, files and folders are all I need.
2026-02-05 16:21:14
I’m currently restructuring my site, and I’m going to change some of the URLs. I don’t want to break inbound links to the old URLs, so I’m creating redirects between old and new.
My current web server is Caddy, so I define redirects in my Caddyfile with the redir directive.
Here’s an example that creates permanent redirects for three URLs:
alexwlchan.net {
    redir /videos/crossness_flywheel.mp4 /files/2017/crossness_flywheel.mp4 permanent
    redir /2021/12/2021-in-reading/ /2021/2021-in-reading/ permanent
    redir /2022/12/print-sbt/ /til/2022/print-sbt/ permanent
}
This syntax is easy to write by hand, but it’s annoying if I want to define lots of redirects – and when I’m doing a big restructure, I do. In particular, it’s tricky to write scripts to modify this file.
This is a good use case for Cog, made by Ned Batchelder.
Cog is a tool for running snippets of Python inside text files, allowing you to generate content without external templates or additional files. When you process a file with Cog, it finds those snippets of Python, executes them, then inserts the output back into the original file.
Here’s an example:
alexwlchan.net {
    #[[[cog
    # import cog
    #
    # redirects = [
    #     {"old_url": "/videos/crossness_flywheel.mp4", "new_url": "/files/2017/crossness_flywheel.mp4"},
    #     {"old_url": "/2021/12/2021-in-reading/", "new_url": "/2021/2021-in-reading/"},
    #     {"old_url": "/2022/12/print-sbt/", "new_url": "/til/2022/print-sbt/"},
    # ]
    #
    # for r in redirects:
    #     cog.outl(f"redir {r['old_url']} {r['new_url']} permanent")
    #]]]
    #[[[end]]]
}
All the Python code that Cog runs is inside a comment, so it will be ignored by Caddy.
The [[[cog …]]] and [[[end]]] markers tell Cog where to find the code, and it’s smart enough to remove the leading whitespace and comment markers.
When I process this file with Cog (pip install cogapp; cog Caddyfile), it runs the Python snippet, and anything passed to cog.outl() is written between the markers.
This is the output, which gets printed to stdout:
alexwlchan.net {
    #[[[cog
    # import cog
    #
    # redirects = [
    #     {"old_url": "/videos/crossness_flywheel.mp4", "new_url": "/files/2017/crossness_flywheel.mp4"},
    #     {"old_url": "/2021/12/2021-in-reading/", "new_url": "/2021/2021-in-reading/"},
    #     {"old_url": "/2022/12/print-sbt/", "new_url": "/til/2022/print-sbt/"},
    # ]
    #
    # for r in redirects:
    #     cog.outl(f"redir {r['old_url']} {r['new_url']} permanent")
    #]]]
    redir /videos/crossness_flywheel.mp4 /files/2017/crossness_flywheel.mp4 permanent
    redir /2021/12/2021-in-reading/ /2021/2021-in-reading/ permanent
    redir /2022/12/print-sbt/ /til/2022/print-sbt/ permanent
    #[[[end]]]
}
If I want to write the output back to the file, I run Cog with the -r flag (cog -r Caddyfile).
All the original Cog code is preserved, so I can run it again and again to regenerate the file.
This means that if I want to add a new redirect, I can edit the list and run Cog again.
Cog is running a full version of Python, so I can rewrite the snippet to read the list of redirects from an external file. Here’s another example:
alexwlchan.net {
    #[[[cog
    # import cog
    # import json
    #
    # with open("redirects.json") as in_file:
    #     redirects = json.load(in_file)
    #
    # for r in redirects:
    #     cog.outl(f"redir {r['old_url']} {r['new_url']} permanent")
    #]]]
    redir /videos/crossness_flywheel.mp4 /files/2017/crossness_flywheel.mp4 permanent
    redir /2021/12/2021-in-reading/ /2021/2021-in-reading/ permanent
    redir /2022/12/print-sbt/ /til/2022/print-sbt/ permanent
    #[[[end]]]
}
This is a powerful change – unlike the original Caddyfile, this external JSON file is easy for scripts to modify, so now I can programmatically update my redirects.
My scripts that are rearranging my URLs can populate redirects.json, then I only need to re-run Cog and I have a complete set of redirects in my Caddyfile.
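For example, a script that renames a page could append an entry like this (a minimal sketch – the URLs here are made up for illustration):

import json
from pathlib import Path

redirects_path = Path("redirects.json")

# Read the existing redirects, add a new entry, then write the file back.
redirects = json.loads(redirects_path.read_text())

redirects.append(
    {"old_url": "/2019/05/example-old-page/", "new_url": "/2019/example-new-page/"}
)

redirects_path.write_text(json.dumps(redirects, indent=2) + "\n")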
I usually run Cog with two flags:
-r writes the output back to the original file, and -c adds a checksum to the end marker, like [[[end]]] (sum: Rwh4n2CfQD).
This checksum allows Cog to detect if the output has been manually edited since it last processed the file – and if so, it will refuse to overwrite those changes.
You have to revert the manual edits or remove the checksum.
You can also run Cog with a --check flag, which checks if a file is up-to-date.
I run this as a continuous integration task, to make sure I’ve updated my files properly.
What separates Cog from traditional templating engines like Jinja2 or Liquid is that it operates entirely in-place on the original file. Usually, you have a source template file and a build step that produces a separate output file, but with Cog, the source and the result are stored in the same document. Storing templates in separate files is useful for larger projects, but it’s overkill for something like my Caddyfiles.
Having everything in a single file makes it easy to resume working on a file managed with Cog. I don’t need to remember where I saved the build script or the template; I can operate directly on that single text file. If I come back to this project in six months, the instructions for how the file is generated are right in front of me.
The design also means that I’m not locked into using Cog. At any point, I could delete the Cog comments and still have a fully functional file.
Cog isn’t a replacement for a full-blown templating language, and it’s not the right tool for larger projects – but it’s indispensable for small amounts of automation. If you’ve never used it, I recommend giving it a look – it’s a handy tool to know.
2026-01-31 15:43:53
On Sunday evening, I quietly swapped out a key tool that I use to write this site. It’s a big deal for me, but hopefully nobody else noticed.
The tool I changed was my static site generator. I write blog posts in text files using Markdown, and then my static site generator converts those text files into HTML pages. I upload those HTML pages to my web server, and they become available as my website.
I’ve been using a Ruby-based static site generator called Jekyll since late 2017, and I’ve replaced it with a Python-based static site generator called Mosaic. It’s a new tool I wrote specifically to build this website, so I know exactly how it works. I’m getting rid of a Ruby tool I only half-understand, in favour of a Python tool I understand well.
Nothing is changing for readers (yet). I tried hard to avoid breaking anything – URLs haven’t changed, pictures look identical, the RSS feed should be the same as before. Please let me know if you spot something broken!
You’ll see more changes soon, because I have lots of ideas to try this year. I want to make this website into more of a “digital garden”, getting even further away from a single list of chronologically ordered posts. I don’t want to build that with Jekyll – or to be precise, I don’t want to build it with Ruby.
I don’t want to sound dismissive of Jekyll. It’s an impressive project that powers thousands of sites, and I used it happily for over eight years. I pushed it to build a lot of custom, bespoke pages, and it handled them with ease.
Jekyll’s superpower is its theming and plugin system, which allow you to customise its behaviour. Want something that Jekyll can’t do out of the box? Create your own template or plugin. But those plugins have to be written in Ruby, the same language as Jekyll itself – and I only write Ruby to make blog plugins. I can do it, but I’m slow, I’m unsure, and writing Ruby has never felt familiar.
You can build a digital garden with Jekyll and Ruby – plenty of people already have – but I know I’d find it a difficult and frustrating experience. My lack of Ruby experience would slow me down.
While my Ruby knowledge has sat still, I’ve become a much better Python programmer. Since I set up Jekyll in 2017, I’ve worked on big Python projects with extensive tests, thorough data validation, and an explicit goal of longevity. I tried writing a Python static site generator in 2016 and got stuck; a decade later, I’m ready for another attempt.
This isn’t just general Python expertise – I’ve written about how I’m using static websites for tiny archives, and all the surrounding tools are written in Python. Porting this website to Python means I can reuse a lot of that code.
I hacked together an experimental Python static site generator over Christmas, and I wrote it properly over the last few weeks. I named it “Mosaic” after the square-filled headers on every page, and I really like it. I already feel faster when I’m working on the site, writing a language I know properly.
Mosaic works like other static site generators: it reads a folder full of Markdown files, converts them to HTML, and writes the HTML into a new folder. And just like Jekyll and similar tools, I’m building on powerful open-source libraries.
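Stripped of everything specific to my site, the heart of such a tool is a loop of roughly this shape (a deliberately simplified sketch, not Mosaic’s actual code – the folder names are placeholders):

from pathlib import Path

import mistune

src = Path("posts")   # hypothetical folder of Markdown source files
out = Path("_site")   # hypothetical folder for the built HTML

markdown = mistune.create_markdown()

for md_path in src.rglob("*.md"):
    html = markdown(md_path.read_text())
    dest = out / md_path.relative_to(src).with_suffix(".html")
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(html)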
Here’s a comparison of the key dependencies:
| Purpose | Jekyll | Mosaic |
|---|---|---|
| Templates | Liquid | Jinja |
| Markdown rendering | kramdown | Mistune |
| Image generation | ruby-vips | Pillow |
| Syntax highlighting | Rouge | Pygments |
| Data validation | json-schema | Pydantic |
| HTML linting | HTMLProofer | ??? |
Here are some thoughts on each.
Jinja is the templating engine used by Flask, a framework I’ve used to build dozens of small web apps, so I was very familiar with the basic syntax.
It’s similar to Liquid – both use {% … %} for operators and {{ … }} to insert values – so I could reuse my templates with only small changes.
The tricky part was replicating my custom tags, which I’d previously implemented using Jekyll plugins.
I had to write my own Jinja extensions, which are harder than writing Jekyll tags.
In Jinja, I have to interact directly with the lexer and parser, whereas a Jekyll plugin is a simple render function.
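For a flavour of what that involves, here’s a minimal sketch of a Jinja extension that defines a made-up {% shout %}…{% endshout %} tag (an illustrative example, not one of my real plugins):

from jinja2 import Environment, nodes
from jinja2.ext import Extension


class ShoutExtension(Extension):
    # Register the {% shout %} tag with the parser.
    tags = {"shout"}

    def parse(self, parser):
        # The first token is the tag name itself; remember its line number.
        lineno = next(parser.stream).lineno

        # Parse the body up to the matching {% endshout %} tag.
        body = parser.parse_statements(["name:endshout"], drop_needle=True)

        # Wrap the body in a call to our helper method at render time.
        return nodes.CallBlock(
            self.call_method("_render_shout"), [], [], body
        ).set_lineno(lineno)

    def _render_shout(self, caller):
        return caller().upper()


env = Environment(extensions=[ShoutExtension])
print(env.from_string("{% shout %}hello{% endshout %}").render())
# HELLO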
Mistune is a Markdown library I discovered while working on this project.
I used Python-Markdown previously, but Mistune is faster and easier to extend.
In particular, it provides a friendly way to customise the HTML output by overriding named methods.
For example, I can add an id attribute to my headings by overriding the header(text, level) method.
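A rough sketch of that override, assuming Mistune 3’s API (where the renderer class is mistune.HTMLRenderer and the method is called heading; the slug logic here is simplified):

import re

import mistune


class HeadingIdRenderer(mistune.HTMLRenderer):
    def heading(self, text, level, **attrs):
        # Turn the heading text into a slug, e.g. "Swapping gems" -> "swapping-gems".
        slug = re.sub(r"[^\w]+", "-", text.lower()).strip("-")
        return f'<h{level} id="{slug}">{text}</h{level}>\n'


markdown = mistune.create_markdown(renderer=HeadingIdRenderer())
print(markdown("## Swapping gems for tiles"))
# <h2 id="swapping-gems-for-tiles">Swapping gems for tiles</h2>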
The tricky part about changing Markdown renderers is all the subtle differences in the places where Markdown isn’t clearly defined. Mistune and kramdown return the same output in 95% of cases, but there’s a lot of variation and broken HTML in the remaining 5%.
One particular difficulty was all my inline HTML.
This is one of my favourite Markdown features – you can include arbitrary HTML and it gets passed through as-is – and I make heavy use of it in this blog.
But kramdown and Mistune disagree about where inline HTML starts and ends, and Mistune was wrapping <p> tags around HTML that kramdown left unchanged.
I had to adjust my templates and whitespace to help Mistune distinguish Markdown and HTML.
I generate multiple sizes and formats for every image, so they get served in a fast and efficient way. I use Pillow to generate each of those derivatives.
Pillow is easier to install and supports a wider range of image formats than any of the Ruby gems I tried; it’s a highlight of the Python ecosystem.
The picture handling code has always been the thorniest bit of the website, and I hope that building it atop a nicer library will give me the space to simplify that code.
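The general shape of that derivative generation looks something like this (a simplified sketch – the widths, formats, and paths are placeholders rather than my real configuration):

from pathlib import Path

from PIL import Image

widths = [640, 1280]            # hypothetical target widths
formats = ["WEBP", "JPEG"]      # hypothetical output formats

src = Path("images/cover.jpg")  # hypothetical source image

with Image.open(src) as im:
    for width in widths:
        height = round(im.height * width / im.width)
        resized = im.resize((width, height))
        for fmt in formats:
            out_path = src.with_name(f"{src.stem}_{width}w.{fmt.lower()}")
            resized.save(out_path, format=fmt)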
Rouge and Pygments are both capable libraries, and they return compatible HTML which made it easy to switch – I could reuse my CSS and my syntax highlighting tweaks.
I think Pygments theoretically supports highlighting a wider variety of languages, but I never found Rouge lacking, so it’s not a meaningful improvement.
Every Markdown file in my site has YAML “front matter” for storing metadata, for example:
---
layout: article
title: Swapping gems for tiles
---
Jekyll treats this as arbitrary data and doesn’t do any validation on it, which made it harder to change and keep consistent as the site evolved. I built a rudimentary validation layer using json-schema, but it was always an add-on.
In Mosaic, this front matter is parsed straight into a Pydantic model, so it’s type-checked throughout my code. This means I can write stricter validation checks, and catch more issues and inconsistencies before they break the website.
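A minimal sketch of that idea, using hypothetical field names (the real model has more fields and stricter checks):

import yaml
from pydantic import BaseModel


class FrontMatter(BaseModel):
    # Hypothetical fields for illustration; a missing layout or title raises a ValidationError.
    layout: str
    title: str
    tags: list[str] = []


raw = """\
layout: article
title: Swapping gems for tiles
"""

metadata = FrontMatter.model_validate(yaml.safe_load(raw))
print(metadata.title)  # Swapping gems for tiles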
I’ve been using the HTMLProofer gem to check my HTML since 2019. It checks my HTML for errors like broken links or missing images, so I’m less likely to publish a broken page. It’s caught so many mistakes.
There’s no obvious Python equivalent, so for now I’m still running it as a separate step after I generate my HTML. It has a much lower overhead than running Jekyll so I’m not in a hurry to remove it – although eventually I’d like to reimplement the checks I care about with BeautifulSoup, so I can fully expunge Ruby.
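One of those checks might look something like this sketch (assuming a _site output folder; it only flags image references that don’t exist on disk):

from pathlib import Path

from bs4 import BeautifulSoup

site = Path("_site")  # assumed output folder of the built site

# Flag any <img> tag whose site-relative src doesn't exist on disk.
for html_path in site.rglob("*.html"):
    soup = BeautifulSoup(html_path.read_text(), "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src", "")
        if src.startswith("/") and not (site / src.lstrip("/")).exists():
            print(f"{html_path}: missing image {src}")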
I’m also considering using Playwright for some static site testing, but that’s a larger piece of work.
The name isn’t so important, because I’m the only person who will ever use this tool – but I discovered a fun nugget that’s too juicy not to share.
I named my tool “Mosaic” after the tiled headers that appear at the top of every page. Those headers are a design element I added in 2016, and I’m so fond of them now I can’t imagine getting rid of them. I later remembered that Mosaic is also the name of a discontinued web browser, and I like the “old web” vibes of that name. One of the best compliments I’ve ever received about this site was “it looks like something from the 1990s” – fast, clean, and not junked up with ads.
One of the bizarre things I discovered while writing this post is that it’s not the first time the names “Mosaic” and “Jekyll” have appeared alongside each other.
There’s a small historical island off the coast of Georgia (the USA one) called Jekyll Island. It includes bike trails, golf courses, a beach that’s been in several films… and a history museum called Mosaic. What are the chances?
I know nothing about Jekyll Island or the history of Georgia, but if I ever feel safe enough to return to the US, I’d love to visit.
I’ve been using Mosaic for several weeks and I’m really enjoying it. I wouldn’t recommend using it for anything else – it’s only designed to build this exact site – but all the source code is public, if you’d like to read it and understand how it works.
Switching to Mosaic has allowed me to start working on three improvements to the site:
Replace my “today I learned” (TIL) posts with “notes”. I really like how the TIL section has allowed me to write more frequent, smaller posts, but they’re still point-in-time snapshots. I want to replace them with notes that aren’t tied to a particular date, and instead can be living documents I update as I learn more.
Make the list of topics more useful. My current tags page is a wall of text, a list of 241 keywords with minimal context or explanation. Nobody is wading through that to find something interesting – I want to add some hierarchy to make it easier to read, and give a better overview of the site.
Fold my book reviews into my main site. My book reviews currently live on a separate site, which is only half-maintained. I’d like to merge them into the main site, let them benefit from the design improvements here, and start writing reviews of other entertainment.
I’ve had these ideas for months, and I’m excited to finally ship them and bring this site closer to my idea of a “digital garden”.