I‘m a software developer, writer, and hand crafter from the UK. I’m queer and trans.
RSS preview of Blog of Alex Wlchan

Using perceptual distance to create better headers

2026-01-13 02:01:38

For nearly a decade, the header of this website has been decorated with a mosaic-like pattern of coloured squares. I can choose a colour for individual posts or pages, and that tints the title, the links, and the header. It adds some texture and visual interest, without being too distracting.

The implementation is pretty straightforward: I have one function that generates the coordinates of each square, and another that generates varying shades of the tint colour. Put those together, and it draws the header image.
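As a rough sketch of what that first function might look like – this is a hypothetical reconstruction, not the site's actual code, and the dimensions and square size are made-up values – here's a coordinate generator in Ruby:

```ruby
# Hypothetical sketch: yield the top-left corner of every square in a
# header-sized grid. The real site code may differ; the dimensions here
# are illustrative only.
def square_coordinates(width:, height:, size:)
  Enumerator.new do |enum|
    (0...height).step(size) do |y|
      (0...width).step(size) do |x|
        enum.yield [x, y]
      end
    end
  end
end

coords = square_coordinates(width: 100, height: 50, size: 25).to_a
# a 4 x 2 grid of 25px squares, starting at [0, 0]
```

Pairing each coordinate with a shade from the colour generator is then enough to draw the mosaic.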

I recently improved the way I choose the shades of the tint colour, which makes the headers look more coherent, especially in dark mode. The change is subtle, but a definite improvement.

The old approach: varying the HSL lightness

Before, this is how I generated the shades:

  1. Map to HSL. Convert the tint colour to the hue-saturation-lightness (HSL) colour space.
  2. Define the bounds. I chose 7/8 and 8/7 of the original lightness, because it looked good in the first few colours I tried.
  3. Jitter lightness. Pick a random lightness value in this range.
  4. Recombine and convert. Pair this new lightness with the original hue and saturation, and convert back to sRGB.

I was trying to create colours which looked similar and varied only in lightness, so you’d see lighter or darker shades of the tint colour. My headers are PNG images, which are usually saved as sRGB, which is why I convert back in the final step.

Here’s what the old code looked like:

require 'color'

# Given a hex colour as a string (e.g. '#123456') generate
# an infinite sequence of colours which vary only in brightness.
def get_colours_like(hex)
  seeded_random = Random.new(hex[1..].to_i(16))

  hsl = Color::RGB.by_hex(hex).to_hsl
  
  min_luminosity = hsl.luminosity * 7 / 8
  max_luminosity = hsl.luminosity * 8 / 7
  luminosity_diff = max_luminosity - min_luminosity
  
  Enumerator.new do |enum|
    loop do
      new_hsl = Color::HSL.from_values(
        hsl.hue,
        hsl.saturation,
        min_luminosity + (seeded_random.rand * luminosity_diff)
      )
      enum.yield new_hsl.to_rgb
    end
  end
end

I seeded the random generator so it always returned the same colours – this meant my local dev environment and web server would always generate identical header images. Note that it’s seeded based on the colour, so different tint colours will have light/dark squares in different places.

All the colour calculations are done by Austin Ziegler’s excellent color gem, which saved me from implementing colour conversions myself.

This approach is simple, but it has problems. Varying the lightness by proportion means the range varies from colour to colour – headers for dark colours didn’t have enough contrast, while light colours had too much.

Here are three examples – notice how the dark header is almost solid colour, while the light header has enough contrast to become distracting:

Dark red coloured squares, which all blend into a dark red mush
#470906
Brighter red coloured squares, with some visible variation but not much
#d01c11
Very bright red coloured squares, some of which are almost white or light pink
#f69b96

This heuristic worked for the first colour I tried (#d01c11, the site’s original tint colour) but it breaks down as I’ve added more colours, especially in dark mode.

I could replace the percentages with fixed offsets – for example, plus or minus 25% lightness – but this wouldn’t fix the problem. Humans aren’t machines; we don’t perceive colours as linear numerical values. The human eye is more sensitive to some colours than others, so the same numerical jump in HSL doesn’t feel like the same visual difference.

Let’s look at another example, where I’ll fix the hue and saturation, and step the lightness by 25%. These differences don’t feel the same:

A deep blue square which is highly saturated
hsl(240, 100%, 50%)
A lavender-coloured square
hsl(240, 100%, 75%)
A square of pure white
hsl(240, 100%, 100%)

There are alternative colour spaces like OKLCH and CIELAB which try to capture the nuances of human biology and how we interpret colours, and that’s where I looked for a replacement.

The CIELAB colour space

The CIELAB colour space is based on opponent process theory, which suggests that we perceive colour as a battle of three opposing pairs: black vs. white, red vs. green, and blue vs. yellow. Think about how you never see a reddish-green or a blueish-yellow – these colours are opposites.

These three pairs give us the three coordinates in CIELAB space:

  • L* is the perceptual lightness (black vs. white)
  • a* is the red-green axis
  • b* is the blue-yellow axis

(The other three letters stand for Commission internationale de l’éclairage, the standards body who developed CIELAB in 1976.)

Within this colour space, we can calculate the perceptual difference between two colours. Ideally, that numerical distance should match our human perception of the change. The goal is perceptual uniformity: if you move a fixed numerical distance anywhere in the space, the “amount” of change should feel the same to a human observer.

That’s much easier said than done: the measurement formulas (like Delta E) have been refined over decades, and deficiencies have been found in CIELAB, especially for shades of blue. Newer spaces like OKLAB try to capture the nuances of human biology even more accurately. But for the purpose of my header images, CIELAB is good enough, and a big improvement over HSL.
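The color gem does all of this for me, but to make the maths concrete, here’s a minimal pure-Ruby sketch of the sRGB → CIELAB conversion (D65 white point) and the simple CIE76 distance. This is my own illustration, not the gem’s code, and the real heuristic uses the more refined CIEDE2000 formula:

```ruby
# Minimal sketch of sRGB -> CIELAB (D65 white point) and the simple
# CIE76 colour difference. An illustration only: the color gem and
# CIEDE2000 are what the post actually uses.

# Undo the sRGB gamma curve, giving linear light in [0, 1].
def srgb_to_linear(c)
  c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055)**2.4
end

def srgb_to_lab(r, g, b)
  r, g, b = [r, g, b].map { |c| srgb_to_linear(c / 255.0) }

  # Linear RGB -> CIE XYZ (sRGB matrix, D65 white).
  x = 0.4124 * r + 0.3576 * g + 0.1805 * b
  y = 0.2126 * r + 0.7152 * g + 0.0722 * b
  z = 0.0193 * r + 0.1192 * g + 0.9505 * b

  # Normalise by the D65 reference white, then apply the CIELAB
  # non-linearity (cube root, with a linear segment near zero).
  f = lambda do |t|
    t > (6.0 / 29)**3 ? t**(1.0 / 3) : t / (3 * (6.0 / 29)**2) + 4.0 / 29
  end
  fx, fy, fz = f.(x / 0.95047), f.(y / 1.0), f.(z / 1.08883)

  [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)]
end

# CIE76: plain Euclidean distance in CIELAB space.
def delta_e76(lab1, lab2)
  Math.sqrt(lab1.zip(lab2).sum { |a, b| (a - b)**2 })
end
```

As a sanity check: pure white maps to L* ≈ 100 with a* and b* near zero, pure black to (0, 0, 0), and the CIE76 distance between them is about 100.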

One place I already use CIELAB is in my tool for extracting dominant colours. I’m using k‑means clustering to group colours that are “close” together, and it makes sense to measure closeness using perceptual distance.

The Ruby gem I’m using supports CIELAB but not OKLAB, which also informed my decision. Colour maths is complicated, and I’d rather use an existing implementation than write it all myself.

My new approach: varying the CIELAB perceptual lightness

Here’s my new heuristic:

  1. Map to CIELAB. Convert the tint colour to CIELAB space.
  2. Define the bounds. Choose a fixed distance, and find how much you need to increase/decrease the perceptual brightness L* to reach that distance.
  3. Jitter lightness. Pick a random L* value in this range.
  4. Recombine and convert. Pair this new lightness with the original a* and b* components, and convert back to sRGB.

To find the bounds, I do a binary search on the possible lightness values to find the perceptual lightness which gets me closest to the target distance. If I’m looking for the lighter shade, I search the range (L*, 100). If I’m looking for the darker shade, I search the range (0, L*).

Here’s the code:

require 'color'

# Find the perceptual lightness of a CIELAB colour that's a specific
# perceptual difference (target_distance) from the original colour, while
# maintaining the original hue and colourfulness.
def lightness_at_distance(original_lab, direction, target_distance)
  # 1. Define the search range for L*
  if direction == 'lighter'
    low_l = original_lab.l
    high_l = 100
  else
    low_l = 0
    high_l = original_lab.l
  end

  # 2. Run a binary search on L*
  best_lab = original_lab
  best_delta = 0

  15.times do
    mid_l = (low_l + high_l) / 2.0

    candidate_lab = Color::CIELAB.from_values(mid_l, original_lab.a, original_lab.b)
    candidate_delta = original_lab.delta_e2000(candidate_lab)

    # Are we closer than the current best colour? If so, replace it.
    if (candidate_delta - target_distance).abs < (best_delta - target_distance).abs
      best_lab = candidate_lab
      best_delta = candidate_delta
    end

    if candidate_delta < target_distance
      # We need more distance, move away from the original L*
      direction == 'lighter' ? (low_l = mid_l) : (high_l = mid_l)
    else
      # We've gone too far, move back toward the original L*
      direction == 'lighter' ? (high_l = mid_l) : (low_l = mid_l)
    end
  end

  best_lab.l
end

Then I can write a very similar function to what I wrote for HSL:

# Given a hex colour as a string (e.g. '#123456') generate
# an infinite sequence of colours which vary only in lightness.
def get_colours_like(hex)
  seeded_random = Random.new(hex[1..].to_i(16))
  
  lab = Color::RGB.by_hex(hex).to_lab

  min_lightness = lightness_at_distance(lab, 'darker',  6)
  max_lightness = lightness_at_distance(lab, 'lighter', 6)
  lightness_diff = max_lightness - min_lightness

  Enumerator.new do |enum|
    loop do
      new_lab = Color::CIELAB.from_values(
        min_lightness + (seeded_random.rand * lightness_diff),
        lab.a,
        lab.b
      )
      
      # Discard colours which don't map cleanly from CIELAB to sRGB
      if new_lab.delta_e2000(new_lab.to_rgb.to_lab) > 1
        next
      end
      
      enum.yield new_lab.to_rgb
    end
  end
end

One gotcha is that CIELAB is a wider range than sRGB, so CIELAB colours don’t always map cleanly into sRGB. For example, certain bright colours like neon green may lose their vibrancy when converted from CIELAB to sRGB.

When it does the conversion, the color gem automatically clamps colours to fit into the sRGB space, but this creates some unusually dark or bright squares. I check if this clipping has occurred by converting back to CIELAB and looking at the distance – if there’s too much drift, I discard the colour and pick another. This is another subtle difference, but I think it improves the overall vibe.

Let’s look at the results, which compare the HSL heuristic (top), the original tint colour (middle), and the CIELAB heuristic (bottom):

Dark red coloured squares with a horizontal dark red stripe. The squares on the bottom have slightly more variety than the top.
#470906
Brighter red coloured squares, with the top and bottom looking about the same
#d01c11
Very bright red coloured squares on the top, more muted squares which match the salmon pink tint colour
#f69b96

The dark squares have a bit more variety, while the light squares have much less and avoid the bright and noticeable shades. It’s a particular improvement in dark mode, where I always use light tint colours. There’s almost no difference for the middle colour, which makes sense because it was how I designed the original heuristic. It already looked pretty good.

The new colours are closer to what I want: a bit of subtle texture, not loud enough to draw attention. I switched to them a fortnight ago, and nobody noticed. It’s a small refinement, not a radical change.

[If the formatting of this post looks odd in your feed reader, visit the original article]

The passwords I actually memorise

2026-01-11 04:31:10

The promise of a password manager is that it remembers and autofills all of your passwords, so you only have to remember one – the password that unlocks your password manager.

In practice, I have a handful of passwords that I think are worth memorising. It’s still a short list, but I’m not convinced that a single password is either sensible or feasible. I generally trust my password manager, but I don’t want it to be a single point of failure for my entire digital life.

What passwords do I try to remember?

  1. The login password for my computer. I configure my Mac to sleep after a few minutes of inactivity, then ask for my login password when I try to resume. Although I often use Touch ID to log in, I remember this password because I still have to enter it multiple times a day.

  2. The master password for my password manager. This unlocks all of my other passwords.

  3. The login passcode for my phone. I use an alphanumeric password, and I remember it because I have to enter it multiple times a day.

  4. My email password. My email account is the gateway to all my other digital accounts. If I lost access to my password manager but could still receive email, I could reset my passwords and regain access to everything.

  5. My remote backup password. I have offsite backups of all of my computers with Backblaze. If all of my devices were destroyed at once (for example, in a house fire), this would allow me to retrieve files from my backups, even without access to my password manager.

  6. The encryption password for my multi-factor authentication (MFA) recovery codes. I have an MFA app on my phone, protected by Face ID or my passcode – but in an emergency, I have single-use recovery codes I can use instead. These are stored in an encrypted disk image.

  7. My Apple Account password. I’m heavily enmeshed in the Apple ecosystem, and this account has powerful access to my devices, including remote wiping and backups.

  8. The “memorable word” for my online banking. When I log in to my bank account, my password manager autofills a password, and then I have to fill in three characters of a longer “memorable word”. For example, I might be asked to enter the 1st, 5th, and 8th characters.

    I memorise this both for security and convenience. If somebody compromises my password manager, my bank account is safe – and even if this was in my password manager, it can’t fill in single characters this way.

All of these passwords are long, alphanumeric, and unique.

I have a regular calendar reminder to review them, and make sure I still remember them correctly. This would be a useful feature in a password manager – periodic tests on whether you still remember important passwords.

Where are these passwords stored?

Although I’ve memorised all eight passwords, there are some copies elsewhere.

Five of them are stored in my password manager, because it’s convenient: my computer’s login password, my phone’s login passcode, my email password, my remote backup password, and my Apple account password. (My email and Apple account are protected by multi-factor authentication, and the codes aren’t in my password manager.)

Two of them aren’t written down anywhere, but they might be soon: the master password for my password manager, and the encryption password for my MFA recovery codes. At some point I’d like to change this, probably with a paper copy in a fire safe or similar. This would allow my family to retrieve those passwords in an emergency.

The “memorable word” for my online banking isn’t written down anywhere, and I doubt it ever will be. If I lose access to my bank account and I’m really stuck, I can visit a physical branch.

How would I regain access to my accounts?

Here’s how I’d get back into my key accounts:

  • Remote backups. My Backblaze account is only protected with a password, not MFA. I have this memorised, so I could download files from my remote backups on any device.

  • MFA recovery codes. These are in an encrypted disk image in my remote backups. I’ve memorised the disk image password, so I could retrieve my MFA codes once I get to my remote backups.

  • Email inbox. This is protected by a password and MFA. I’ve memorised the password, and I could use an MFA recovery code to regain access to the account.

  • Password manager. I use 1Password. Logging in on a new device needs two secrets: my Master Password and Secret Key.

    I remember the former, but the latter is a random UUID I don’t type in or see regularly. Instead, I have a 1Password Emergency Kit which includes my Secret Key (but not my Master Password). I have a printed copy of this kit in my folder of important papers, and a digital copy in my disk image of MFA recovery codes.

    If I can get a copy of the Emergency Kit, I can regain access to my password manager.

What passwords don’t I remember?

There are a couple of important passwords you might expect me to memorise, but I don’t:

  1. The email password for my work email. This password is stored in the password manager I use at work, and if I unexpectedly lost access, I’d contact the IT team for help. I don’t need self-service recovery for this account.

  2. The master password for my password manager at work. For similar reasons to work email, I’d rely on the IT team to regain access in an emergency.

  3. My banking app username and password. Logging into my bank requires three values: my username, the full-length password, and the “memorable information”. I’ve memorised the memorable information, but not the username or password. If I need emergency access to my bank account, I can visit a high street branch.

What scenario am I trying to prevent?

Imagine you lost all of your devices. Could you regain access to your digital life? That’s the worst-case scenario I’m trying to avoid, and these memorised passwords should be enough to bootstrap everything.

I can retrieve my MFA recovery codes from my remote backups, and then I can either log into my password manager and retrieve the current passwords, or log into my email inbox and reset all my passwords. Either way, I’m back into my accounts.

This doesn’t cover the scenario where I lose access to both my email inbox and my password manager, but that would be a catastrophic digital disaster.

It also doesn’t cover the scenario where I’m incapacitated and a family member needs emergency access to my digital accounts. That’s something I’m planning to fix this year. My plan is to purchase a fire safe that somebody else can open, in which I’d place printed instructions for accessing my password manager. Inside my password manager, I’ll have a note that explains what the key accounts are, which I can update regularly without reopening the safe.

I hope I can continue to rely on my password manager, and I never encounter one of these emergency scenarios – but I feel better knowing I’ve tried to prepare.


Where I store my multi-factor recovery codes

2026-01-11 00:30:24

When I read advice about passwords and online accounts, it usually goes something like this:

  1. Create unique passwords for each account, and store them in a password manager.
  2. Enable multi-factor authentication (MFA), and use an authenticator app or hardware token as your second factor.

But enabling MFA isn’t everything – what if you lose access to that second factor? For example, I store my MFA codes in an app on my phone. What happens if my phone is broken or stolen?

Most services that support MFA give me a set of recovery codes I can use in an emergency to regain access to my account, but don’t explain what to do with them. I’m advised to “store them securely”, but what does that mean in practice?

I don’t want to store my recovery codes in my password manager, because that compresses multiple authentication factors back into one. Somebody who compromised my password manager would have access to everything. (That’s the same reason I don’t store my MFA codes in there.)

Instead, I have an encrypted disk image on my Mac, which I created using Disk Utility. The password is a long, unique password that I only use for this purpose, and I only keep the disk image mounted when I’m editing or using a recovery code.

This disk image contains two files:

  • My 1Password Emergency Kit, a PDF document that contains the details for my 1Password account – including a secret key that I don’t see or type on a day-to-day basis
  • An HTML file I write by hand, which has all my MFA recovery codes and notes on when I created them

Here’s what the HTML file looks like (with fake data, obviously):

A page with a couple of sections (headed 'Apple Account', 'Etsy', 'GitHub'). In each section is a bit of explanatory text, about when I saved the recovery codes and what account they're for, then the recovery codes in a monospaced font.

You can download the HTML file I use as a template.

When I need to save some new recovery codes, I mount the disk image, edit the HTML file, then eject the disk image. When I need to use a recovery code, I mount the disk image, copy a code out of the HTML file, make a note that I’ve used it, then eject the disk image. This is a plain text file that’s not dependent on proprietary software or cloud services.

I have a second disk image on my work laptop, with a similar file, where I store recovery codes related to my work accounts.

A malicious program could theoretically read the HTML file while the disk image is mounted, so I try to keep it mounted as little as possible. But if a malicious program had long-running access to arbitrary files on my Mac, it could do far more damage than just reading my recovery codes.

This setup assumes I’ll always have access to this disk image, which is why I have offsite backups that include it (in encrypted form, of course). I’ve memorised the password for my offsite backups, which gives me a clear recovery path if I lose my phone and all my home devices:

  1. Log into my offsite backup
  2. Download the encrypted disk image
  3. Mount the disk image
  4. Retrieve my 1Password Secret Key and MFA recovery codes

From there, I can get access to everything in my digital life.

Later this year, I plan to get a fire safe where I can store important documents, and a paper copy of these codes will definitely be in there. That’s why I include dates in the notes, and the current date at the top of the file – that way, I can see if the printed version is up-to-date.

I’m fortunate that I’ve never had to use this system in a real emergency – and helpful as it could be, let’s hope I never have to.


Quick-and-dirty print debugging in Go

2026-01-08 16:54:38

I’ve been writing a lot of Go in my new job, and trying to understand a new codebase.

When I’m reading unfamiliar code, I like to use print debugging to follow what’s happening. I print what branches I’m in, the value of different variables, which functions are being called, and so on. Some people like debuggers or similar tools, but when you’re learning a new language they’re another thing to learn – whereas printing “hello world” is the first step in every language tutorial.

The built-in way to do print debugging in Go is fmt.Printf or log.Printf. That’s fine, but my debug messages get interspersed with the existing logs so they’re harder to find, and it’s easy for those debug statements to slip through code review.

Instead, I’ve taken inspiration from Ping Yee’s Python module “q”. If you’re unfamiliar with it, I recommend his lightning talk, where he explains the frustration of trying to find a single variable in a sea of logs. His module provides a function q.q(), which logs any expressions to a standalone file. It’s quick and easy to type, and the output is separate from all your other logging.

I created something similar for Go: a module which exports a single function Q(), and logs anything it receives to /tmp/q.txt. Here’s an example:

package main

import (
	"github.com/alexwlchan/q"
	"os"
)

func printShapeInfo(name string, sides int) {
	q.Q("a %s has %d sides", name, sides)
}

func main() {
	q.Q("hello world")

	q.Q(2 + 2)

	_, err := os.Stat("does_not_exist.txt")
	q.Q(err)

	printShapeInfo("triangle", 3)
}

The logged output in /tmp/q.txt includes the name of the function and the expression that was passed to Q():

main: "hello world"

main: 2 + 2 = 4

main: err = stat does_not_exist.txt: no such file or directory

printShapeInfo: a triangle has 3 sides

I usually open a terminal window running tail -f /tmp/q.txt to watch what gets logged by q.

The module is only 120 lines of Go, and available on GitHub. You can copy it into your project, or it’s simple enough that you could write your own version. It has two interesting ideas that might have broader use.

Getting context with the runtime package

When you call Q(), it receives the final value – for example, if you call Q(2 + 2), it receives 4 – but I wanted to log the original expression and function name. This is a feature from Ping’s Python package, and it’s what makes q so pleasant to use. This gives context for the log messages, and saves you typing that context yourself.

I get this information from Go’s runtime package, in particular the runtime.Caller function, which gives you information about the currently-running function.

I call runtime.Caller(1) to step up the callstack by 1, to the actual line in my code where I typed Q(). It tells me the “program counter”, the filename, and the line number. I can resolve the program counter to a function name with runtime.FuncForPC, and I can just open the file and look up that line to read the expression. (This assumes the source code hasn’t changed since compilation, which is always true when I’m doing local debugging.)

Not affecting my coworkers with a local gitignore

To use this file, I copy q.go into my work repos and add it to my .git/info/exclude. The latter is a local-only ignore file, unlike the .gitignore file which is checked into the repo. This means I won’t accidentally check in q.go or push it to GitHub.

It also means I can’t forget to remove my debugging code: if any calls to q slip into a commit, the tests in CI will fail because they can’t find q.go.

This avoids other approaches that would be more disruptive or annoying, like making it a project dependency or adding it to the shared .gitignore file.


My favourite books from 2025

2025-12-31 21:33:19

I’ve read 54 books this year – a slight dip from last year, but still at least one book a week. I try not to set myself rigid targets, but I hope to reverse the downward trend in 2026.

I’m a bit disappointed in the books I read this year; compared to previous years, there were only a few books that I feel compelled to recommend. I’m not sure if it was bad luck or sticking too close to familiar favourites – but I can’t help noticing that all of this year’s favourites are from new authors. That feels like a sign to look further afield in 2026.

What saved the reading year was community and connection. My book club just passed its third anniversary, and the discussions are always a highlight of my month. I particularly enjoy the conversation if it’s a book we all liked – it’s more fun to celebrate what works than to tear a book to shreds. Two of my top picks below come from the book club list.

I also found some unexpected serendipity: Lizzie Huxley-Jones’s festive romance Make You Mine This Christmas has a meetcute in a bookshop in St Pancras station. I use the station regularly so I know the shop well, and it’s where my partner and I took our first photo together as a couple, at the beginning of our second date.

I track the books I read at books.alexwlchan.net, and writing the annual round-up post has become a fun tradition. You can see how my tastes have changed in 2024, 2023, 2022, and 2021.

Here are my favourite books from 2025, in the order I read them.


The cover of “Service Model”. A white robotic hand holds a teacup, while a devastated landscape lit in dark blue and red is visible in the background.

Service Model

by Adrian Tchaikovsky (2024)

read 8 January 2025

What if the robot apocalypse happened, but nobody told the robots?

We follow Charles, a robot butler who finds himself unexpectedly unemployed, and he travels through an apocalyptic wasteland to find a new purpose. In a world mostly devoid of humans, he struggles to find another household to serve. It’s a dark and absurd journey which I very much enjoyed, and the style reminds me of Douglas Adams. This world isn’t tragic, but absurd.

Beneath the lyrical and humorous style are messages about automation, class division, and our attitude towards work. The world is full of robots who are doing things because that’s Their Purpose, with no thought for who the automation is serving or whether it’s still necessary.

If you enjoy this, you should follow up by reading Human Resources, the prequel short story about a “human resources” department that only exists to fire all the humans.

I’d also recommend The Incomparable podcast’s Book Club episodes; I read Service Model on the strength of Jason Snell’s recommendation.

The cover of “How You Get the Girl”. Two women are holding hands and walking towards a basketball hoop. One of them has long red hair and a turquoise jacket, while the other has short dark hair, a black jacket, and is spinning a basketball.

How You Get the Girl

by Anita Kelly (2024)

read 24 June 2025

A charming sapphic romance between a basketball teacher and a professional player.

Elle is a famous basketball player who’s proud and confident about being queer, but she struggles to be a foster parent to her niece, Vanessa. Julie is a capable high school coach who bonds well with her team, but feels uncertain about her queer identity. They both look up to the other, and are looked up to in return.

It felt like a very balanced romance, and I enjoyed our discussion of it at Ace Book Club. It hits all the classic sapphic tropes, and it has a feel-good ending.

My particular reading was enhanced by the annotations – my partner read this book first, and she highlighted passages with comments like “this seems familiar” or “remind you of anyone?”. Sadly that’s not a transferable experience, but I can tell you that I enjoyed it.

Surprisingly, I didn’t enjoy Anita Kelly’s other books. This book is the third in the trilogy, and I tried to read the other two – one of them was so-so, and the other I gave up on.

The cover of “The End Crowns All”. It's an abstract blue design mostly taken up by the title and author name, but at the base of the cover is a series of ships sailing away from a bright yellow sun.

The End Crowns All

by Bea Fitzgerald (2024)

read 9 September 2025

A sapphic retelling of the Trojan War, in which Cassandra’s curse is cast by a petty Apollo who just wants sex, and an enemies-to-lovers romance between Cassandra and Helen.

I really enjoyed this. It’s a well-written story, and I enjoyed the first-person perspective of the two protagonists. It builds well towards its conclusion – a lot that becomes relevant later is established early: Apollo’s curses on Cassandra, the gods forcing a narrative on Troy, the way the story eventually deviates from the conventional myth.

The book has modern sensibilities, but retrofits them in a thoughtful way. It discusses consent, rape culture, and asexuality – in particular, Cassandra is implied to be ace – but never uses those words explicitly. These themes fit into the narrative, and don’t stand out as twenty-first century terminology or ideas shoved into Greek myth.

This was another book club pick, and I’m planning to read Bea Fitzgerald’s other books next year.

The cover of “Finding Hester”. A photograph of a young woman is placed on a brown file with a British government crest, with several red annotations drawn in pen.

Finding Hester

by Erin Edwards, Greg Callus, Rose Crossgrove, and others (2025)

read 17 November 2025

This is the true story of Hester Leggatt, a woman who wrote fake love letters for Operation Mincemeat during World War II, then became a character in a hit musical.

Unlike the men in the story, almost nothing was known of Hester when SpitLip wrote the musical version of Operation Mincemeat, except that she wrote the fake love letters. Her character has the most emotional song in the show, but the fictional version had to be invented from scratch.

A group of fans were dissatisfied with this gap in history, and tracked down the real Hester. They traced the initial mistake to an interview that misspelt her name as Leggett instead of Leggatt. Once they knew the correct surname, they went through paper archives, old school records, even contacted MI5 – all to reconstruct her life story.

The book weaves this discovered history into a coherent and readable story of Hester’s life – her place of birth, her career before and after the war, her love life, and even coincidental similarities to her fictional depiction.

I’m biased because several of those fans are dear friends, and I enjoyed watching their work from the sidelines – but I enjoyed reading the details even more.


Drawing Truchet tiles in SVG

2025-12-22 02:06:56

I recently read Ned Batchelder’s post about Truchet tiles, which are square tiles that make nice patterns when you tile them on the plane. I was experimenting with alternative headers for this site, and I thought maybe I’d use Truchet tiles. I decided to scrap those plans, but I still had fun drawing some pretty pictures.

One of the simplest Truchet tiles is a square made of two colours:

These can be arranged in a regular pattern, but they also look nice when arranged randomly:

The tiles that really caught my eye were Christopher Carlson’s. He created a collection of “winged tiles” that can be arranged with multiple sizes in the same grid. A tile can be overlaid with four smaller tiles with inverted colours and extra wings, and the pattern still looks seamless.

He defined fifteen tiles, which are seven distinct patterns and then various rotations:

The important thing to notice here is that every tile only really “owns” the red square in the middle. When laid down, you add the “wings” that extend outside the tile – this is what allows smaller tiles to seamlessly flow into the larger pattern.

Here’s an example of a Carlson Truchet tiling:

Conceptually, we’re giving the computer a bag of tiles, letting it pull tiles out at random, and watching what happens when it places them on the page.

In this post, I’ll explain how to do this: filling the bag of tiles with parametric SVGs, then placing them randomly at different sizes. I’m assuming you’re familiar with SVG and JavaScript, but I’ll explain the geometry as we go.

Filling the bag of tiles

Although Carlson’s set has fifteen different tiles, they’re made of just four primitives, which I call the base, the slash, the wedge, and the bar.

The first step is to write SVG definitions for each of these primitives that we can reuse.

Whenever I’m doing this sort of generative art, I like to define it parametrically – writing a template that takes inputs I can change, so I can always see the relationship between the inputs and the result, and I can tweak the settings later. There are lots of templating tools; I’m going to write pseudo-code rather than focus on one in particular.
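As a concrete sketch of what “parametric” means here, assuming JavaScript template literals as the templating tool (the helper name is mine, purely for illustration):

```javascript
// A minimal sketch of the parametric approach: the only inputs are
// the inner and outer radius, and every coordinate is derived from
// them. (This helper is illustrative, not part of the real template.)
function renderTileBackground(innerR, outerR) {
  // One side of the tile, inside the padded area.
  const tileSize = (innerR + outerR) * 2;
  // Offset from the edge of the symbol, so the wings don't get clipped.
  const padding = Math.max(innerR, outerR);
  return `<rect x="${padding}" y="${padding}" width="${tileSize}" height="${tileSize}"/>`;
}

console.log(renderTileBackground(10, 20));
// <rect x="20" y="20" width="60" height="60"/>
```

Changing either radius re-derives every coordinate, which is what makes experimenting cheap.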

For these primitives, there are two variables, which I call the inner radius and outer radius. The outer radius is the radius of the larger wings on the corner of the tile, while the inner radius is the radius of the foreground components on the middle of each edge. For the slash, the wedge, and the bar, the inner radius is half the width of the shape where it meets the edge of the tile.

This diagram shows the two variables, plus two variables I compute in the template:

[Diagram labels: outer radius, inner radius, tile size, padding]

Here’s the template for these primitives:

<!-- What's the length of one side of the tile, in the red dashed area?
     tileSize = (innerR + outerR) * 2 -->

<!-- How far is the tile offset from the edge of the symbol/path?
     padding = max(innerR, outerR) -->

<symbol id="base">
  <!--
    For the background, draw a square that fills the whole tile, then
    four circles on each of the corners.
    -->
  <g class="background">
    <rect x="{{ padding }}" y="{{ padding }}" width="{{ tileSize }}" height="{{ tileSize }}"/>
    <circle cx="{{ padding }}"            cy="{{ padding }}"            r="{{ outerR }}"/>
    <circle cx="{{ padding + tileSize }}" cy="{{ padding }}"            r="{{ outerR }}"/>
    <circle cx="{{ padding }}"            cy="{{ padding + tileSize }}" r="{{ outerR }}"/>
    <circle cx="{{ padding + tileSize }}" cy="{{ padding + tileSize }}" r="{{ outerR }}"/>
  </g>
  <!--
    For the foreground, draw four circles on the middle of each tile edge.
    -->
  <g class="foreground">
    <circle cx="{{ padding }}"                cy="{{ padding + tileSize / 2 }}" r="{{ innerR }}"/>
    <circle cx="{{ padding + tileSize }}"     cy="{{ padding + tileSize / 2 }}" r="{{ innerR }}"/>
    <circle cx="{{ padding + tileSize / 2 }}" cy="{{ padding }}"                r="{{ innerR }}"/>
    <circle cx="{{ padding + tileSize / 2 }}" cy="{{ padding + tileSize }}"     r="{{ innerR }}"/>
  </g>
</symbol>

<!--
  Slash:
    - Move to the top edge, left-hand vertex of the slash
    - Line to the top edge, right-hand vertex
    - Smaller arc to the right edge, upper vertex
    - Line down the right edge, to the lower vertex
    - Larger arc back to the start
-->
<path
  id="slash"
  d="M {{ padding + outerR }} {{ padding }}
     l {{ 2 * innerR }} 0
     a {{ outerR }} {{ outerR }} 0 0 0 {{ outerR }} {{ outerR }}
     l 0 {{ 2 * innerR }}
     a {{ innerR*2 + outerR }} {{ innerR*2 + outerR }} 0 0 1 {{ -innerR*2 - outerR }} {{ -innerR*2 - outerR }}"/>

<!--
  Wedge:
    - Move to the top edge, left-hand vertex of the wedge
    - Line to the top edge, right-hand vertex
    - Smaller arc to the right edge, upper vertex
    - Line down the right edge, to the lower vertex
    - Line back horizontally; the path closes itself when filled
-->
<path
  id="wedge"
  d="M {{ padding + outerR }} {{ padding }}
     l {{ 2 * innerR }} 0
     a {{ outerR }} {{ outerR }} 0 0 0 {{ outerR }} {{ outerR }}
     l 0 {{ 2 * innerR }}
     l {{ -innerR*2 - outerR }} 0"/>

<!--
  Bar: horizontal rectangle that spans the tile width and is the same height
  as a circle on the centre of an edge.
  -->
<rect
  id="bar"
  x="{{ padding }}" y="{{ padding + outerR }}"
  width="{{ tileSize }}"
  height="{{ 2 * innerR }}"/>

The foreground/background classes are defined in CSS, so I can choose the colour of each.

This template is more verbose than the rendered SVG, but I can see all the geometric expressions – I find this far more readable than a file full of numbers. This also allows easy experimentation – I can change an input, re-render the template, and instantly see the new result.

I can then compose the tiles by referencing these primitive shapes with a <use> element. For example, the “T” tile is made of a base and two wedge shapes:

<!-- The centre of rotation is the centre of the whole tile, including padding.
     centreRotation = padding + outerR + innerR -->

<symbol id="carlsonT">
  <use href="#base"/>
  <use href="#wedge" class="foreground"/>
  <use href="#wedge" class="foreground" transform="rotate(90 {{ centreRotation }} {{ centreRotation }})"/>
</symbol>

After this, I write a similar <symbol> definition for all the other tiles, plus inverted versions that swap the background and foreground.

Now we have a bag full of tiles, let’s tell the computer how to place them.

Placing the tiles on the page

Suppose the computer has drawn a tile from the bag. To place it on the page, it needs to know:

  • The x, y position, and
  • The layer – should it place a full-size tile, or is it a smaller tile subdividing a larger one?

From these two properties, it can work out everything else – in particular, whether to invert the tile, and how large to scale it.
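Sketched as a tiny helper (the function name is mine, for illustration; the even-layer inversion rule matches the placement code further down):

```javascript
// Everything about a tile's appearance follows from its layer:
// tiles invert each time we subdivide, so even-numbered layers use
// inverted tiles, and each layer is half the size of the one above.
function placementFor(layer) {
  return {
    inverted: layer % 2 === 0,
    scale: 0.5 ** (layer - 1),
  };
}

console.log(placementFor(3)); // { inverted: false, scale: 0.25 }
```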

The procedure is straightforward: get the position of all the tiles in a layer, then decide if any of those tiles are going to be subdivided into smaller tiles. Use those to position the next layer, and repeat. Continue until the next layer is empty, or you hit the maximum number of layers you want.

Here’s an implementation of that procedure in JavaScript:

function getTilePositions({
  columns,
  rows,
  tileSize,
  maxLayers,
  subdivideChance,
}) {
  let tiles = [];
  
  // Draw layer 1 of tiles, which is a full-sized tile for
  // every row and column.
  for (let i = 0; i < columns; i++) {
    for (let j = 0; j < rows; j++) {
      tiles.push({ x: i * tileSize, y: j * tileSize, layer: 1 });
    }
  }
  
  // Now go through each layer up to maxLayers, and decide which
  // tiles from the previous layer to subdivide into four smaller tiles.
  for (let layer = 2; layer <= maxLayers; layer++) {
    let previousLayer = tiles.filter(t => t.layer === layer - 1);
    
    // The size of tiles halves with each layer.
    // On layer 2, the tiles are 1/2 the size of the top layer.
    // On layer 3, the tiles are 1/4 the size of the top layer.
    // And so on.
    let layerTileSize = tileSize * (0.5 ** (layer - 1));
    
    previousLayer.forEach(tile => {
      if (Math.random() < subdivideChance) {
        tiles.push(
          { layer, x: tile.x,                 y: tile.y                 },
          { layer, x: tile.x + layerTileSize, y: tile.y                 },
          { layer, x: tile.x,                 y: tile.y + layerTileSize },
          { layer, x: tile.x + layerTileSize, y: tile.y + layerTileSize },
        )
      }
    })
  }
  
  return tiles;
}
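As an aside, a quick sanity check on how full the bag can get: if subdivideChance were 1, every tile would split, and the layer sizes form a geometric series. This helper is my own, not part of the drawing code:

```javascript
// With subdivideChance = 1, layer n holds rows * columns * 4^(n-1)
// tiles, so an R×C grid with L layers has R*C*(4^L - 1)/3 in total.
function maxTileCount(rows, columns, maxLayers) {
  return rows * columns * (4 ** maxLayers - 1) / 3;
}

console.log(maxTileCount(2, 3, 3)); // 6 + 24 + 96 = 126
```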

Once we know the positions, we can lay them out in our SVG element.

We need to make sure we scale down smaller tiles to fit, and adjust the position – remember each Carlson tile only “owns” the red square in the middle, and the wings are meant to spill out of the tile area. Here’s the code:

function drawTruchetTiles(svg, tileTypes, tilePositions, padding) {
  tilePositions.forEach(c => {
    // We need to invert the tiles every time we subdivide, so we use
    // the inverted tiles on even-numbered layers.
    let tileName = c.layer % 2 === 0
      ? tileTypes[Math.floor(Math.random() * tileTypes.length)] + "-inverted"
      : tileTypes[Math.floor(Math.random() * tileTypes.length)];
      
    // The full-sized tiles are on layer 1, and every layer below
    // that halves the tile size.
    const scale = 0.5 ** (c.layer - 1);
    
    // We don't want to draw a tile exactly at (x, y) because that
    // would include the wings -- we add negative padding to offset.
    //
    // At layer 1, adjustment = padding
    // At layer 2, adjustment = padding * 1/2
    // At layer 3, adjustment = padding * 1/4
    //
    const adjustment = -padding * Math.pow(0.5, c.layer - 1);

    svg.innerHTML += `
      <use
        href="${tileName}"
        x="${c.x / scale}"
        y="${c.y / scale}"
        transform="translate(${adjustment} ${adjustment}) scale(${scale})"/>`;
  });
}
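One way to see why the adjustment works: in SVG, transform="translate(a a) scale(s)" maps a point p to p·s + a, so a tile drawn at c.x / scale lands at c.x + adjustment on the page – exactly the scaled-down wing offset. A throwaway check with illustrative numbers:

```javascript
// Where does a tile's padded corner land on the page?
// translate(a) scale(s) maps a coordinate p to p * s + a.
function finalX(x, layer, padding) {
  const scale = 0.5 ** (layer - 1);
  const adjustment = -padding * 0.5 ** (layer - 1);
  return (x / scale) * scale + adjustment;
}

console.log(finalX(100, 2, 16)); // 92: the layer-2 wings stick out by 8
```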

The padding was fiddly and took me a while to work out, but now it works fine. The tricky bits are another reason I like defining my SVGs parametrically – it forces me to really understand what’s going on, rather than tweaking values until I get something that looks correct.

Demo

Here’s a drawing that uses this code to draw Carlson Truchet tiles:

It was generated by your browser when you loaded the page, and there are so many possible combinations that it’s almost certainly a unique image.

If you want a different picture, reload the page, or tell the computer to draw some new tiles.

These pictures put me in mind of an alien language – something I’d expect to see etched on the wall in a sci-fi movie. I can imagine eyes, tentacles, roads, and warnings left by a long-gone civilisation.

It’s fun, but not really the tone I want for this site – I’ve scrapped my plan to use Truchet tiles as header images. I’ll save them for something else, and in the meantime, I had a lot of fun.
