Xe Iaso

Senior Technophilosopher, Ottawa, CAN, a speaker, writer, chaos magician, and committed technologist.

RSS preview of Blog of Xe Iaso

Claude Code won April Fools Day this year

2026-04-01 08:00:00

April Fools Day is somewhat of a legendary day among nerds. Historically it's the day the nerds at Google introduced Gmail Custom Time, which let you interrupt causality by making Gmail claim you sent a message before you actually sent it. It actually worked.

Sometimes this gets taken too far and the joke falls flat, causing far more problems than if the joke had never happened in the first place. Incidents like this have led many companies to ban April Fools pranks outright rather than risk driving customers away.

It's refreshing to see the Claude Code team introduce the /buddy system this year. When you run /buddy, it hatches a coding companion that hangs out in your Claude Code interface like a Tamagotchi. Here's my buddy Xentwine:

╭──────────────────────────────────────╮
│                                      │
│  ★★★ RARE                     ROBOT  │
│                                      │
│     (   )                            │
│     .[||].                           │
│    [ @  @ ]                          │
│    [ ==== ]                          │
│    `------´                          │
│                                      │
│  Xentwine                            │
│                                      │
│  "A methodical circuit-whisperer     │
│  obsessed with untangling logical    │
│  snarls; speaks in patient,          │
│  patronizing riddles and will        │
│  absolutely let you sit in your own  │
│  bug for three minutes before        │
│  offering the blindingly obvious     │
│  fix."                               │
│                                      │
│  DEBUGGING  █████░░░░░  47           │
│  PATIENCE   █████░░░░░  47           │
│  CHAOS      ██░░░░░░░░  21           │
│  WISDOM     █████████░  92           │
│  SNARK      █████░░░░░  49           │
│                                      │
╰──────────────────────────────────────╯

Here's what it looks like in the Claude Code app:

I think this is the best April Fools Day feature in recent memory because it seems intentionally designed to avoid impacting users in a way that would cause problems:

  • You have to take manual action to create your coding buddy; it's off by default.
  • It mostly stays out of the way once you do create it, so it doesn't disrupt your normal workflow.
  • Your buddy sometimes randomly interjects, like a Tamagotchi.
  • You can pet the dog, dragon, or robot with /buddy pet.

This is the kind of harmless prank that all nerds should aspire to. 10/10.

Small note about AI 'GPUs'

2026-03-30 08:00:00

I've been seeing talk about capitalizing on the AI bubble popping: picking up server GPUs for pennies on the dollar to play games at higher fidelity, since server GPUs have more video RAM. I hate to be the bearer of bad news here, but most of those enterprise GPUs don't have the ability to process graphics at all.

Yeah, that's right: in order to pack as much compute as possible into each chip, they removed video output and graphics processing from devices we still call graphics processing units. The only thing those cards will be good for is CUDA workloads: AI inference, AI training, or other things that do not involve gaming.


On a separate note, I'm reaching the point in recovery where I am getting very bored and am so completely ready to just head home. At least the diet restrictions end this week, so that's something to look forward to. God I want a burrito.

Homelab downtime update: The fight for DNS supremacy

2026-03-18 08:00:00

Hey all, quick update continuing from yesterday's announcement that my homelab went down. This is stream of consciousness and unedited. Enjoy!

Turns out the entire homelab didn't go down and two Kubernetes nodes survived the power outage somehow.

Two Kubernetes controlplane nodes.

Kubernetes really wants an odd number of controlplane nodes, my workloads are too heavy for any single node to run, and Longhorn really wants at least three nodes online. So I had to turn them off.

How did I get in? The Mac mini that I used for Anubis CI. It somehow either powered itself back on when the grid reset or survived the outage entirely.

xe@t-elos:~$ uptime
 09:45:55 up 66 days,  9:51,  4 users,  load average: 0.37, 0.22, 0.18

Holy shit, that's good to know!

Anyways, the usual suspects for debugging didn't work (kubectl get nodes timed out, etc.), so I ran an nmap scan across the entire home subnet. Normally the result is full of devices and hard to read. This time there was basically nothing. What stood out was this:

Nmap scan report for kos-mos (192.168.2.236)
Host is up, received arp-response (0.00011s latency).
Scanned at 2026-03-18 09:23:09 EDT for 1s
Not shown: 996 closed tcp ports (reset)
PORT      STATE SERVICE   REASON
3260/tcp  open  iscsi     syn-ack ttl 64
9100/tcp  open  jetdirect syn-ack ttl 64
50000/tcp open  ibm-db2   syn-ack ttl 64
50001/tcp open  unknown   syn-ack ttl 64
MAC Address: FC:34:97:0D:1E:CD (Asustek Computer)

Nmap scan report for ontos (192.168.2.237)
Host is up, received arp-response (0.00011s latency).
Scanned at 2026-03-18 09:23:09 EDT for 1s
Not shown: 996 closed tcp ports (reset)
PORT      STATE SERVICE   REASON
3260/tcp  open  iscsi     syn-ack ttl 64
9100/tcp  open  jetdirect syn-ack ttl 64
50000/tcp open  ibm-db2   syn-ack ttl 64
50001/tcp open  unknown   syn-ack ttl 64
MAC Address: FC:34:97:0D:1F:AE (Asustek Computer)

Those two machines are Kubernetes controlplane nodes! I can't SSH into them because they're running Talos Linux, but I can use talosctl (via port 50000) to shut them down:

$ ./bin/talosctl -n 192.168.2.236 shutdown --force
WARNING: 192.168.2.236: server version 1.9.1 is older than client version 1.12.5
watching nodes: [192.168.2.236]
    * 192.168.2.236: events check condition met

$ ./bin/talosctl -n 192.168.2.237 shutdown --force
WARNING: 192.168.2.237: server version 1.9.1 is older than client version 1.12.5
watching nodes: [192.168.2.237]
    * 192.168.2.237: events check condition met

And now it's offline until I get home.

This was what knocked the sponsor panel offline: the external-dns pod in the homelab was still online and fighting my new cloud deployment for DNS supremacy. The sponsor panel is now back up (I should have put it in the cloud in the first place; that's on me) and peace has been restored to most of the galaxy, or at least as much of it as I can manage from here.

Action items:

  • Figure out why ontos and kos-mos came back online
  • Make all nodes in the homelab resume power when wall power exists again
  • Review homelab for PSU damage
  • Re-evaluate usage of Talos Linux, switch to Rocky?

My homelab will be down for at least 20 days

2026-03-17 08:00:00

Quick post for y'all now that I can use my MacBook while standing (long story: I can't sit due to surgical recovery, and it SUCKS). My homelab went offline at about 13:00 UTC today, likely because of a power outage. I'm going to keep it offline and not fight it. I'll get home in early April and restore things then.

An incomplete list of the services that are down:

  • The within.website vanity Go import server
  • The preview site for this blog
  • Various internal services including the one that announces new posts on social media
  • My experimental OpenClaw bot Moss I was using to kill time in bed
  • My DGX Spark for self hosted language models, mainly used with Moss

Guess it's just gonna be down; hope I didn't lose any data. I'll keep y'all updated as things change, if they do.

I don't know if I like working at higher levels of abstraction

2026-03-11 08:00:00

Whenever I have Claude do something for me, I feel nothing about the results. It feels like something happens around me, not through me. That's the new level of abstraction: you stop writing code and start describing intent. You stop crafting and start delegating. I've been doing this professionally long enough to have an opinion, and I don't like what it's doing to me.

All of it focuses on getting things done rather than on quality or craft. I'm more productive than I've ever been. I ship more. I finish more. Each thing lands with the emotional weight of a form letter.

Cadey is coffee
Cadey

"The emotional weight of a form letter." Yeah, that tracks.

When I write, I try to make people feel something. My goal is to provoke emotion when you read me. Generative AI sands down the hard things. Everything becomes homogeneous, converging toward the average. The average makes nobody feel anything.

Sure, you can still make people feel things using this flow. I've done it recently. But we're trading away the texture. The rough edges, the weird phrasing, the choices too specific and too human for a statistical model to generate.

I'm going to keep talking to you as an equal. It's the most effective part of my style: I write like I'm sitting across from you, not lecturing down at you. Generative AI defaults to the authoritative explainer voice — the one that sounds like every other. Resisting that pull now takes conscious effort.

Aoi is wut
Aoi

So the tools are making it harder to sound like yourself?

Cadey is coffee
Cadey

Not harder exactly. More like... the path of least resistance leads to sounding like everyone else. You have to actively choose to be yourself now.

People I know are trying to break into this industry as juniors, and I honestly have no idea how to help them. This industry has historically valued quality and craft, yet somebody can yolo something out with Cursor and get hired by Facebook for it. The signal for "this person knows what they're doing" grows noisier every day.

This part of the industry runs on doublethink. Nuance and opinions that don't fit into tweets. Senior engineers say "AI is just a tool" while their companies lay off the juniors who would've learned to use that tool responsibly. Leadership says "we value craft" while setting deadlines that make craft impossible without the machine. Nobody lies exactly, but nobody tells the whole truth either.

Using these tools at this level of abstraction costs us something essential. I use them every day and I'm telling you: the default output has no soul. It's correct. It's competent. It's fine. And "fine" is the enemy of everything I care about as a writer and an engineer.

Numa is neutral
Numa

"Fine" is the ceiling that gets installed when you stop paying attention to the floor.

I'm using these tools deliberately to find where the bar actually is. I want to see what's possible at this level of abstraction. Seeing what's possible requires expensive tools and uncomfortable honesty about what they can't do.

The voice is non-negotiable. The weird, specific, occasionally self-indulgent voice that makes my writing mine. If higher abstraction means sounding like everyone else, I'll take the lower abstraction and the extra hours. Every time.

Vibe Coding Trip Report: Making a sponsor panel

2026-03-09 08:00:00

I'm on medical leave recovering from surgery. Before I went under, I wanted to ship one thing I'd been failing to build for months: a sponsor panel at sponsors.xeiaso.net. Previous attempts kept dying in the GraphQL swamp. This time I vibe coded it — pointed agent teams at the problem with prepared skills and let them generate the gnarly code I couldn't write myself.

And it works.

The GraphQL swamp

Go and GraphQL are oil and water. I've held this opinion for years and nothing has changed it. The library ecosystem is a mess: shurcooL/graphql requires abusive struct tags for its reflection-based query generation, and the code generation tools produce mountains of boilerplate. All of it feels like fighting the language into doing something it actively resists.

Cadey is coffee
Cadey

GitHub removing the GraphQL explorer made this even worse. You used to be able to poke around the schema interactively and figure out what queries you needed. Now you're reading docs and guessing. Fun.

I'd tried building this panel before, and each attempt died in that swamp. I'd get partway through wrestling the GitHub Sponsors API into Go structs, lose momentum, and shelve it. At roughly the same point each time: when the query I needed turned out to be four levels of nested connections deep and the struct tags looked like someone fell asleep on their keyboard.

Vibe coding was a hail mary. I figured if it didn't work, I was no worse off. If it did, I'd ship something before disappearing into a hospital for a week.

Preparing the skills

Vibe coding is not "type a prompt and pray." Output quality depends on the context you feed the model. Templ — the Go HTML templating library I use — barely exists in LLM training data. Ask Claude Code to write Templ components cold and it'll hallucinate syntax that looks plausible but doesn't compile. Ask me how I know.

Aoi is wut
Aoi

Wait, so how do you fix that?

I wrote four agent skills to load into the context window:

  • templ-syntax: Templ's actual syntax, with enough detail that the model can look up expressions, conditionals, and loops instead of guessing.
  • templ-components: Reusable component patterns — props, children, composition. Obvious if you've used Templ, impossible to infer from sparse training data.
  • templ-htmx: The gotchas when combining Templ with HTMX. Attribute rendering and event handling trip up humans and models alike.
  • templ-http: Wiring Templ into net/http handlers properly — routes, data passing, request lifecycle.

With these loaded, the model copies patterns from authoritative references instead of inventing syntax from vibes. Most of the generated Templ code compiled on the first try, which is more than I can say for my manual attempts.

Mara is hacker
Mara

Think of it like giving someone a cookbook instead of asking them to invent recipes from first principles. The ingredients are the same, but the results are dramatically more consistent.

Building the thing

I pointed an agent team at a spec I'd written with Mimi. The spec covered the basics: OAuth login via GitHub, query the Sponsors API, render a panel showing who sponsors me and at what tier, store sponsor logos in Tigris.

Cadey is enby
Cadey

I'm not going to pretend I wrote the spec alone. I talked through the requirements with Mimi and iterated on it until it was clear enough for an agent team to execute. The full spec is available as a gist if you want to see what "clear enough for agents" looks like in practice.

One agent team split the spec into tasks and started building. A second reviewed output and flagged issues. Meanwhile, I provisioned OAuth credentials in the GitHub developer settings, created the Neon Postgres database, and set up the Tigris bucket for sponsor logos. Agents would hit a point where they needed a credential, I'd paste it in, and they'd continue — ops work and code generation happening in parallel.

The GraphQL code the agents wrote is ugly. Raw query strings with manual JSON parsing that would make a linting tool weep. But it works. The shurcooL approach uses Go idioms, sure, but it requires so much gymnastics to handle nested connections that the cognitive load is worse. Agent-generated code is direct: send this query string, parse this JSON, done. I'd be embarrassed to show it at a code review. I'd also be embarrassed to admit how many times I failed to ship the "clean" version.

// This is roughly what the agent generated.
// It's not pretty. It works.
query := `{
  viewer {
    sponsors(first: 100) {
      nodes {
        ... on User {
          login
          name
          avatarUrl
        }
        ... on Organization {
          login
          name
          avatarUrl
        }
      }
    }
  }
}`
Numa is neutral
Numa

This code exists because the "proper" way kept killing the project. I'll take ugly-and-shipped over clean-and-imaginary.

The stack

The full stack:

  • Go for the backend, because that's what I know and what my site runs on
  • Templ for HTML rendering, because I'm tired of html/template's limitations
  • HTMX for interactivity, because I refuse to write a React app for something this simple
  • PostgreSQL via Neon for persistence
  • GitHub OAuth for authentication
  • GitHub Sponsors GraphQL API for the actual sponsor data
  • Tigris for sponsor logo storage — plugged it in and it Just Works™

The warts

Org sponsorships are still broken. The schema for organization sponsors differs enough from individual sponsors that it needs its own query path and auth flow. I know what the fix looks like, but it requires reaching out to other devs who've cracked GitHub's org-level sponsor queries.

The code isn't my usual style either — JSON parsing that makes me wince, variable names that are functional but uninspired, missing error context in a few places. I'll rewrite chunks of this after I've recovered. The panel exists now, though. It renders real data. People can OAuth in and see their sponsorship status. Before this attempt, it was vaporware.

Cadey is percussive-maintenance
Cadey

I've been telling people "just ship it" for years. Took vibe coding to make me actually do it myself.

What I actually learned

I wouldn't vibe code security-critical systems or anything I need to audit line-by-line. But this project had stopped me cold on every attempt, and vibe coding got it across the line in a weekend.

Skills made the difference here. Loading those four documents into the context window turned Claude Code from "plausible but broken Templ" into "working code on the first compile." I suspect that gap will only matter more as people try to use AI with libraries that aren't well-represented in training data.

This sponsor panel probably won't look anything like it does today in six months. I'll rewrite the GraphQL layer once I find a pattern that doesn't make me cringe. Org sponsorships still need work. HTMX might get replaced.

But it exists, and before my surgery, shipping mattered more than polish.


The sponsor panel is at sponsors.xeiaso.net. The skills are in my site's repo under .claude/skills/.