Xe Iaso
Senior Technophilosopher, Ottawa, CAN, a speaker, writer, chaos magician, and committed technologist.

RSS preview of the Blog of Xe Iaso

Tormentmaxxing 'simple requests'

2026-01-15 08:00:00

I don't like being interrupted when I'm deep in flow working on things. When my flow is interrupted, it can feel like my focus was violently stolen from me, and the mental context that was crystalline falls apart into a thousand pieces before it is lost forever. With this in mind, being asked to do a "quick" five-minute task can actually cost me over an hour of getting back up to speed.

This means that I will sometimes agree to do things, go back into flow (because if I get back into flow almost instantly, I'm more likely not to lose any context), forget about them, and then look bad as a result. This is not ideal for employment uptime.

When you work at a startup, you don't do your job; you project the perception of doing it and ensure that the people above you are happy with what you are doing. This is a weird fundamental conflict, and understanding it at a deep level has given me a lot of strange thoughts about the nature of the late-stage capitalism we find ourselves in.

Tormentmaxxing it

However, it's the future and we have tools like Claude Code. As much as I am horrified by the massive abuses the AI industry inflicts on the masses with its scraping, there are real things these tools can do today. The biggest one is simply implementing those "quick requests", because most of them are along the lines of:

  • Delete this paragraph from the readme please.
  • This thing is confusing, can you reword or remove it?
  • You forgot to xyz.

Nearly 90% of these are in fact things that tools the AI industry has released can do today. I could just open an AI coding agent and tell it to go to town, but we can do better.

Claude Code has custom slash command support. In Claude Code land, slash commands are prompt templates that you can hydrate with arguments. This means you can just describe the normal workflow process and have the agent dutifully go about and get that done for you while you focus on more important things.

Here's what those commands look like in practice:

Please make the following change:

$ARGUMENTS

When you are done, do the following:

  • Create a Linear issue for this task.
  • Create a branch based on the changes to be made and my GitHub username (e.g. ty/update-readme-not-mention-foo).
  • Make a commit with the footer Closes: (linear issue ID) and use the --signoff flag.
  • Push that branch to GitHub.
  • Create a pull request for that branch.
  • Make a comment on that pull request mentioning ${CEO_GITHUB_USERNAME}.

When all that is done, please reply with a message similar to the following:

> Got it, please review this PR when you can: (link).
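For reference, Claude Code loads project-level custom slash commands from markdown files checked into the repository under .claude/commands/, with $ARGUMENTS replaced by whatever follows the command name. Saving the template above as a file like this would make it available as /quick-request (the exact filename is my choice for illustration):

```markdown
<!-- .claude/commands/quick-request.md -->
<!-- Invoked as `/quick-request some text`; Claude Code substitutes -->
<!-- `some text` for $ARGUMENTS before sending the prompt. -->
Please make the following change:

$ARGUMENTS

<!-- ...followed by the rest of the template above... -->
```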

So whenever I get a "quick request", I can open a new worktree in something like Conductor, copy that Slack message verbatim, then type in:

/quick-request add a subsection to the README pointing people to the Python repository (link) based on the subsections for Go and JavaScript

From there all I have to do is hit enter and then go back to writing. The agent will dutifully Just Solve The Thing™️ using GLM 4.7 via their coding plan. It's not as good as Anthropic's models, but it works well enough and has a generous rate limit. It's good enough, and good enough is good enough for me.

I realize the fundamental conflict between what I work on with Anubis and this tormentmaxxing workflow, but if these tools are going to exist regardless of what I think is "right", are decently cheap, and are genuinely useful, I may as well take advantage of them while the gravy train lasts.

Remember: think smarter, not harder.

I made a simple agent for PR reviews. Don't use it.

2026-01-11 08:00:00

My coworkers really like AI-powered code review tools and it seems that every time I make a pull request in one of their repos I learn about yet another AI code review SaaS product. Given that there are so many of them, I decided to see how easy it would be to develop my own AI-powered code review bot that targets GitHub repositories. I managed to hack out the core of it in a single afternoon using a model that runs on my desk. I've ended up with a little tool I call reviewbot that takes GitHub pull request information and submits code reviews in response.

reviewbot is powered by a DGX Spark, llama.cpp, and OpenAI's GPT-OSS 120b. The AI model runs on a machine on my desk that pulls less power doing AI inference than my gaming tower does running fairly lightweight 3D games. In testing I've found that nearly all runs of reviewbot take less than two minutes, even with the DGX Spark generating only 60 tokens per second.

reviewbot is about 350 lines of Go that just feeds pull request information into the context window of the model and provides a few tools for actions like "leave pull request review" and "read contents of file". I'm considering adding other actions like "read messages in thread" or "read contents of issue", but I haven't needed them yet.

To make my life easier, I distribute it as a Docker image that gets run in GitHub Actions whenever a pull request review comment includes the magic phrase /reviewbot.
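I haven't published reviewbot's actual workflow here, but a minimal sketch of that GitHub Actions wiring might look like this (the image name and entrypoint are assumptions, not reviewbot's real details):

```yaml
name: reviewbot
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

jobs:
  review:
    # Only run when a comment contains the magic phrase.
    if: contains(github.event.comment.body, '/reviewbot')
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/example/reviewbot:latest # hypothetical image name
    steps:
      - name: Review the pull request
        run: reviewbot # hypothetical entrypoint
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```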

The main reason I made reviewbot is that I couldn't find anything like it that let you specify the combination of:

  • Your own AI model name
  • Your own AI model provider URL
  • Your own AI model provider API token

I'm fairly sure that there are thousands of similar AI-powered tools on the market that I can't find because Google is a broken tool, but this one is mine.

How it works

When reviewbot reviews a pull request, it assembles an AI model prompt like this:

Pull request info:

<pr>
<title>Pull request title</title>
<author>GitHub username of pull request author</author>
<body>
Text body of the pull request
</body>
</pr>

Commits:

<commits>
<commit>
<author>Xe</author>
<message>
chore: minor formatting and cleanup fixes

- Format .mcp.json with prettier
- Minor whitespace cleanup

Assisted-by: GLM 4.7 via Claude Code
Reviewbot-request: yes
Signed-off-by: Xe Iaso <[email protected]>
</message>
</commit>
</commits>

Files changed:

<files>
<file>
<name>.mcp.json</name>
<status>modified</status>
<patch>
@@ -3,11 +3,8 @@
     "python": {
       "type": "stdio",
       "command": "go",
-      "args": [
-        "run",
-        "./cmd/python-wasm-mcp"
-      ],
+      "args": ["run", "./cmd/python-wasm-mcp"],
       "env": {}
     }
   }
-}
\ No newline at end of file
+}
</patch>
</file>
</files>

Agent information:

<agentInfo>
[contents of AGENTS.d in the repository]
</agentInfo>

The AI model can return one of three results:

  • Definite approval via the submit_review tool that approves the changes with a summary of the changes made to the code.
  • Definite rejection via the submit_review tool that rejects the changes with a summary of the reason why they're being rejected.
  • Comments without approving or rejecting the code.

The core of reviewbot is the "AI agent loop", or a loop that works like this:

  • Collect information to feed into the AI model
  • Submit information to AI model
  • If the AI model runs the submit_review tool, publish the results and exit.
  • If the AI model runs any other tool, collect the information it's requesting and add it to the list of things to submit to the AI model in the next loop.
  • If the AI model just returns text at any point, treat that as a noncommittal comment about the changes.
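The loop above can be sketched in Go. The types and callbacks here are illustrative stand-ins, not reviewbot's actual internals, but the control flow is the same: keep calling the model and resolving tool calls until it submits a review or answers with plain text.

```go
package main

import "fmt"

// message is one entry in the model's context window. These types are
// hypothetical stand-ins for reviewbot's real ones.
type message struct {
	role    string // "user", "assistant", or "tool"
	content string
}

// reply is the result of one round trip to the model: either a tool
// call (tool name plus arguments) or plain text (tool is empty).
type reply struct {
	tool string
	text string
}

// agentLoop keeps feeding context to the model until it calls
// submit_review, resolving any other tool calls along the way.
func agentLoop(ctx []message, callModel func([]message) reply, runTool func(name, args string) string) string {
	for {
		r := callModel(ctx)
		switch {
		case r.tool == "submit_review":
			// Terminal tool: publish the review and exit.
			return "review: " + r.text
		case r.tool != "":
			// Any other tool: run it and append the result so the
			// model sees it on the next iteration.
			ctx = append(ctx, message{role: "tool", content: runTool(r.tool, r.text)})
		default:
			// Plain text: treat it as a noncommittal comment.
			return "comment: " + r.text
		}
	}
}

func main() {
	// Scripted fake model: first reads a file, then approves.
	script := []reply{
		{tool: "read_file", text: ".mcp.json"},
		{tool: "submit_review", text: "approve: formatting-only change"},
	}
	step := 0
	callModel := func([]message) reply { r := script[step]; step++; return r }
	runTool := func(name, args string) string { return "contents of " + args }
	fmt.Println(agentLoop(nil, callModel, runTool))
}
```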

Don't use reviewbot

reviewbot is a hack that probably works well enough for me. It has a number of limitations including but not limited to:

  • It does not work with closed source repositories due to the gitfs library not supporting cloning repositories that require authentication. Could probably fix that with some elbow grease if I'm paid enough to do so.
  • A fair number of test invocations had the agent rely on unpopulated fields from the GitHub API, which caused crashes. I am certain that I will only find more such examples and need to issue patches for them.
  • reviewbot is about 350 lines of Go hacked up by hand in an afternoon. If you really need something like this, you can likely write one yourself with little effort.

Frequently asked questions

When such an innovation as reviewbot comes to pass, people naturally have questions. In order to give you the best reading experience, I asked my friends, patrons, and loved ones for their questions about reviewbot. Here are some answers that may or may not help:

Does the world really need another AI agent?

Probably not! This is something I made out of curiosity, not something I made for you to actually use. It was a lot easier to make than I expected and is surprisingly useful for how little effort was put into it.

Is there a theme of FAQ questions that you're looking for?

Nope. Pure chaos. Let it all happen in a glorious way.

Where do we go when we die?

How the fuck should I know? I don't even know if chairs exist.

Has anyone ever really been far even as decided to use even go want to do look more like?

At least half as much I have wanted to use go wish for that. It's just common sense, really.

If you have a pile of sand and take away one grain at a time, when does it stop being a pile?

When the wind can blow all the sand away.

How often does it require oatmeal?

Three times daily or the netherbeast will emerge and doom all of society. We don't really want that to happen so we make sure to feed reviewbot its oatmeal.

How many pancakes does it take to shingle a dog house?

At least twelve. Not sure because I ran out of pancakes.

Will this crush my enemies, have them fall at my feet, their horses and goods taken?

Only if you add that functionality in a pull request. reviewbot can do anything as long as its code is extended to do that thing.

Why should I use reviewbot?

Frankly, you shouldn't.

2026 will be my year of the Linux desktop

2026-01-02 08:00:00

TL;DR: 2026 is going to be The Year of The Linux Desktop for me. I haven't booted into Windows on my tower in over three months, and I'm starting to realize that it's not worth wasting the disk space on. I plan to unify my three SSDs and turn them all into btrfs drives on Fedora.

I've been merely tolerating Windows 11 for a while, but recently it's gotten to the point where it's just absolutely intolerable. Somehow Linux on the desktop has gotten so much better without even doing anything differently: Microsoft has managed to sabotage the desktop experience through years of disregard and spite toward their users. They took some of their most revolutionary technological innovations (the NT kernel's hybrid design allowing it to restart drivers, NTFS, ReFS, WSL, Hyper-V, etc.) and then just shat all over them with start menus made with React Native, control-alt-delete menus that are actually just webviews, and forcing Copilot down everyone's throats to the point that I've accidentally gotten stuck in Copilot on a handheld gaming PC and had to hard reboot the device to get out of it. It's as if the internal teams at Microsoft have had decades of lead time in shooting each other in the head, with predictable results.

To be honest, I've had enough. I'm going to go with Fedora on my tower and Bazzite (or SteamOS) on my handhelds.

I think that Linux on the desktop is ready for the masses now, not because it has advanced in some huge leap or bound. It's ready for the masses because Windows has gotten so much actively worse that continuing to use it is an active detriment to user experience and stability. Not to mention that with the price of RAM lately, you need every gigabyte you can get, and desktop Linux lets you waste less of it on superfluous bullshit that very few people actually want.

Cadey:

Oh, and if I want a large language model integrated into my tower, I'm going to write the integration myself with the model running on hardware I can look at.

At the very least, when something goes wrong on Linux you have log messages that can let you know what went wrong so you can search for it.

Git's HTTP server side design does not scale

2025-12-29 08:00:00

UPDATE(2025-12-29T13:04-05:00): If you run a git forge: disable unauthenticated clones for repos larger than 512Mi until further notice.

Recently Sourceware had to disable git clone over HTTP due to an attack where lots of random Git clients are cloning repositories. This was surprising to me; I thought the Git client didn't need any smarts on the server and most of the "magic" was just serving flat files based on the client's needs. It turns out that the git HTTP backend is way more complicated than I thought it was, and the actual problem boils down to something that's as old as I am: the Common Gateway Interface (CGI).

A CGI handler is a program that gets request metadata from environment variables and standard input, then returns the result over standard output. This means that the web server has to fork/exec a new process for every request. If your service ends up getting very popular very quickly, this leaves it wide open to forkbomb-style resource exhaustion.
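To make that concrete, here's a minimal, hypothetical CGI handler in Go: the server fork/execs one of these per request, passes metadata in environment variables, and reads the response from standard output. (The getenv parameter exists only to make the sketch easy to exercise without a web server in front of it.)

```go
package main

// Minimal sketch of the CGI model: the web server fork/execs this program
// once per request, passing request metadata through environment variables
// and reading the response from the program's standard output.
import (
	"fmt"
	"os"
)

// handle builds a CGI response (headers, a blank line, then the body)
// from the request metadata the server put in the environment.
func handle(getenv func(string) string) string {
	method := getenv("REQUEST_METHOD")
	path := getenv("PATH_INFO")
	body := fmt.Sprintf("you sent a %s request for %s\n", method, path)
	return "Content-Type: text/plain\r\n\r\n" + body
}

func main() {
	fmt.Print(handle(os.Getenv))
}
```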

The default and recommended configuration for serving git repositories over HTTP is to use the git-http-backend CGI handler to serve traffic. This means that every time a git client is cloning a repo, the server side needs to spawn a new copy of the process to handle the request. In most cases, this is fine. In extreme cases where lots of residential proxies are cloning every repo they can and making the server calculate absurd diffs between random commit hashes, this results in the opposite of rejoicing.
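For reference, the Apache wiring suggested by the git-http-backend(1) manpage looks roughly like this (paths vary by distribution); every request under /git/ spawns a fresh git-http-backend process:

```apache
# Export all repositories under this root over HTTP.
SetEnv GIT_PROJECT_ROOT /var/www/git
SetEnv GIT_HTTP_EXPORT_ALL
# fork/exec the CGI program for each matching request.
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
```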

I am not entirely sure what to suggest to users of Anubis that serve git repositories with git-http-backend. My SRE instinct is that the entire model of using fork/exec with CGI is fundamentally broken and the git-http-backend service needs to be its own HTTP server that can run persistently and concurrently handle requests, but that is not something that can be slapped together instantly.

Am I missing something really simple that I don't know about? Google has failed me.

Arcane Cheese with Doomtrain Extreme

2025-12-24 08:00:00

Spoiler Warning

If you want to go through the Final Fantasy 14 duty Hell on Rails (Extreme) blind, don't read this guide as it spoils how to easily solve one of the mechanics in it.

If you don't play Final Fantasy 14, most of the words in this article are going to make no sense to you and I will make no attempt to explain them. Just know that most of the words I am saying do have meaning even though they aren't in The Bible.

In phase 4 of Hell on Rails (Extreme), the boss will cast Arcane Revelation, which makes the arena look something like this:

There will be a very large circle of bad moving around the arena. One tank and one healer will be marked with an untelegraphed AoE attack that MUST be soaked by at least one other player (or two for healers). Doomtrain will move the circle of bad anywhere from one to three times and leave only a small area of the arena safe. Normally you're supposed to solve it something like this:

Instead of normal light party groups, break up into two groups: melee and casters. This will allow the melees to keep as much uptime as the mechanics allow, but also let the casters get uptime at a distance. Solving this is pretty easy with practice.

However, as a caster this is kinda annoying: when the north side is safe, you have to fall down off the ledge, and the only way back is the long way around via the janky teleporters that are annoying to hit on purpose but very easy to hit on accident. There is an easier way: stand in the upper corners so your melees can greed uptime, and soak all of the bad:

This looks a lot easier but is actually very technically complicated for nearly every class. My example solve for this includes the following party members:

The light party assignment is as follows:

  1. WAR, WHM, SAM, DNC
  2. GNB, SGE, RPR, PCT

Arcane Revelation can perform up to three hits. In each of the hits you need to mitigate the damage heavily or you will wipe. I've found the most consistent results doing this:

First hit: WAR casts Shake It Off, Reprisal, and Rampart; WHM casts Plenary Indulgence and Medica III; SGE casts Kerachole and Eukrasian Prognosis II; SAM and RPR cast Bloodbath and mostly focus on DPSing as much as possible to heal back the massive damage you will be taking throughout this mechanic; DNC casts Shield Samba.

After the hit: heal as much as you can to offset the hit you took. If you're lucky you didn't take much. If you're not: you took a lot. Dancer's Curing Waltz can help here.

Second hit: GNB casts Heart of Light, Reprisal, and Rampart; SGE casts Holos and Eukrasian Prognosis II; PCT casts Addle.

After the hit: SGE casts a Zoe-boosted Pneuma. Generally you do what you can to heal and maintain DPS uptime. Hopefully you don't have to take another heavy hit.

Third hit: one of the tanks uses Tank Limit Break 2, and the healers dump whatever mitigation they have left. Hopefully you won't die, but getting to this point means you got very, very unlucky.

Between each of these hits you need to heal everyone up to 100% as soon as possible otherwise you WILL wipe. Most of the damage assumptions in this guide assume that everyone is at 100% health. The melee classes can mostly be left to their own devices to greed as much uptime as possible, but they may need Aquaveil, Taurochole, or other single target damage mitigations as appropriate. By the end of this you will have used up all of your mitigations save tank invulns.

Here's a video of the first time I did this as Sage:

Want to watch this in your video player of choice? Take this:
https://files.xeiaso.net/blog/2025/doomtrain-ex-cheese/sge-vid/index.m3u8
Cadey:

That exasperated laugh is because Arcane Revelation was previously my hard prog point: even though I was able to do it consistently, others were not. This caused many wipes 7 minutes into a 10 minute fight. This cheese makes it consistent with random people on Party Finder.

One of the tanks will need to soak a stack tower with an invuln. Everyone else runs to the back of the car to enter the next phase and then you continue the fight as normal.

New AI slop signal: code blocks with weird indentation

2025-12-01 08:00:00

I just discovered a new way to tell if a blog post is AI slop, or at least whether someone blindly copied and pasted commands from Claude Code: the first line of a group of commands isn't indented but the rest are, like this:

sudo apt update
          sudo apt upgrade
          sudo apt autoremove
          sudo apt autoclean
        

This happens because the raw CLI output of Claude Code for this question looks like this:

> What are the commands to fully update an Ubuntu system? Just list the commands.
        
        ● sudo apt update
          sudo apt upgrade
          sudo apt autoremove
          sudo apt autoclean
        

And then the writer copied from the beginning of the set of commands to the end. Their text editor or formatting tool did not strip the leading spaces, because leading whitespace is sometimes syntactically relevant in code blocks.
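As a sketch, a heuristic for this pattern (unindented first line, stray indentation on every line after it) is only a few lines of Go. This is my own hypothetical detector, not a shipped tool:

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeAgentPaste reports whether a code block matches the slop
// signal described above: a flush-left first line followed by lines
// that all carry stray leading indentation. Purely a heuristic.
func looksLikeAgentPaste(block string) bool {
	var lines []string
	for _, l := range strings.Split(block, "\n") {
		if strings.TrimSpace(l) != "" {
			lines = append(lines, l)
		}
	}
	if len(lines) < 2 {
		return false // need at least two commands to see the pattern
	}
	if strings.HasPrefix(lines[0], " ") {
		return false // first line is indented, pattern doesn't apply
	}
	for _, l := range lines[1:] {
		if !strings.HasPrefix(l, " ") {
			return false // a later line is flush left, looks hand-written
		}
	}
	return true
}

func main() {
	suspect := "sudo apt update\n          sudo apt upgrade\n          sudo apt autoremove\n"
	fmt.Println(looksLikeAgentPaste(suspect)) // prints "true"
}
```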

I'll keep y'all updated as I find more indicators. There are so many in the wild and it's making me grow weary.