
Blog of Ersei

This Holiday Season, Drive Home in a Brand New Server Updates

2024-12-17 12:16:00

I may be a little full of myself, publishing two server updates and five actual blog posts. Alas, college is hard and I did not really publish anything big this fall semester (though I have worked on some fancy projects that are sitting half-completed in my drafts folder). Here are all of the small projects and other miscellaneous items that didn't warrant their own blog posts, but I worked on anyway.

Social Media, Smocial Smedia

Starting off with Bluesky! I got Bluesky FOMO so I spun up my own PDS (thanks to TheShadowEevee for telling me about the proper paths I had to reverse proxy in Nginx), and now I post anything non-incriminating at @ersei.net. The requirements to become my follower are practically nonexistent compared to my Fediverse account, especially since Bluesky does not have follow requests yet.
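For the curious, the shape of that Nginx config is roughly the following (a sketch: the upstream port and the catch-all location are assumptions, since exact paths vary by PDS install; the repo-sync firehose is a WebSocket, so upgrades must be allowed):

```
server {
    server_name ersei.net;

    location / {
        proxy_pass http://127.0.0.1:3000;  # assumed PDS port
        proxy_set_header Host $host;
        # the firehose endpoint is a WebSocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```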

So far, I think it's okay. It does have a really big bot problem, and I've been blocking-on-sight as fast as I can. Other than that, I think it's nice. I don't like how every post is trivially public, so anything even mildly personal is going to stay on Fedi. It does have a slick UI, but there are a few extremely minor bugs on the iOS app that bother me. Maybe later I can run my own labeller or aggregator as a fun little sidequest, but that's a project for another day.

Dreams of DNS Past

In my last update post, I mentioned setting up my own authoritative DNS server to run my infrastructure. While I spun up an instance on a VPS, Purdue was unhappy with my request to unblock port 53/udp, despite the security measures I took to prevent reflection attacks. Months of back-and-forth led to an escalation in the ticket, and another refusal from even higher up.

Unfortunately, after much consideration and deliberation from our security team, we are going to go ahead and deny this request based on the information we have received and in line with our policies.

Let us know if you have questions, otherwise, we will consider this resolved from our side.

While I could have just had the server solely on some VPSs, that wasn't what I was envisioning, and I ended up giving up on the project after I had a replicated PostgreSQL database backing PowerDNS with DDoS protection and ratelimiting. I had to settle for running my own DoH server instead. What's Purdue going to do, block port 443? (Please don't)

DoH: I Hate Ads

I don't like ads, and the best way I've found to block them in Safari on my iPhone is a custom DNS server that blocks requests to the likes of ads dot google dot com (I shan't speak its name, lest it appear). Running the DoH server took an incredible amount of tinkering over the past year, especially since the specification requires DoH to run over HTTP/2 and TLS. Because of that, not a single DoH implementation fit all of my criteria:

  • Lets me override DNS entries
  • Can run behind an HTTP reverse proxy (e.g. Nginx)

It was always one or the other, since most DoH server implementations insist on handling TLS themselves, stealing away port 443 from Nginx. I ran the DoH server on a VPS until I got tired of not running everything myself, so I had to figure out how to run DoH behind Nginx.

I settled on using Unbound to handle the DoH queries, as it ran fine without specifying a TLS certificate. I configured it to disable DNSSEC1 so I could modify DNS responses, and pointed the resolver at my locally-running instance of DNSCrypt-Proxy, which handles the ad filtering and other forwarding junk that I wanted server-wide and not just over DoH.
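A rough sketch of that Unbound configuration (the DNSCrypt-Proxy listen port is an assumption; dropping "validator" from module-config is what disables DNSSEC validation):

```
server:
    # no "validator" in module-config, so DNSSEC validation is off
    # and local-data overrides won't fail validation
    module-config: "iterator"
    # allow forwarding to a resolver on localhost
    do-not-query-localhost: no

# hand everything else to the local DNSCrypt-Proxy instance
forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5353
```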

Unfortunately, this did not solve all of my problems with ads on my phone, as YouTube serves ads from the same domain it serves video from. Fortunately, my friend informed me about the incredible YouTube Rehike project. Rehike is a locally-hosted piece of software that pretends to be YouTube, much like Invidious, but is meant to run at the youtube.com domain so that it can grab full-resolution video files from googlevideo.com without eating through all the server's bandwidth2.

Because I already had Unbound set up to intercept DNS requests, I added a few lines to forward YouTube requests to my server:

local-zone: "www.youtube.com" redirect
local-data: "www.youtube.com A 128.210.6.106"
local-data: "www.youtube.com AAAA 2607:ac80:303:102:638a:ba7f:2013:b0c"
local-zone: "m.youtube.com" redirect
local-data: "m.youtube.com A 128.210.6.106"
local-data: "m.youtube.com AAAA 2607:ac80:303:102:638a:ba7f:2013:b0c"

Might as well set up Rimgo to avoid the horrid Imgur frontend, and likewise Redlib for Reddit. To keep these connections from complaining about HTTPS, I put together my own CA and issued my own certificates for YouTube, Imgur, and the rest of the sites I was overriding, following Armin Reiter's tutorial.
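Roughly, the CA and a per-site certificate can be generated with plain openssl (a sketch: the erseinet-rootca file names match the smime signing command later in this post, and the subjectAltName list is an assumption covering the overridden hosts):

```shell
# 1. Create the root CA key and a self-signed CA certificate
openssl genrsa -out erseinet-rootca.key 4096
openssl req -x509 -new -key erseinet-rootca.key -sha256 -days 3650 \
    -subj "/CN=ersei.net Root CA" -out erseinet-rootca.crt

# 2. Create a key and signing request for the site being overridden
openssl genrsa -out youtube.key 2048
openssl req -new -key youtube.key -subj "/CN=www.youtube.com" -out youtube.csr

# 3. Sign the request with the CA, adding the subjectAltName entries
printf "subjectAltName=DNS:www.youtube.com,DNS:m.youtube.com\n" > youtube.ext
openssl x509 -req -in youtube.csr -CA erseinet-rootca.crt -CAkey erseinet-rootca.key \
    -CAcreateserial -days 825 -sha256 -extfile youtube.ext -out youtube.crt
```

The resulting erseinet-rootca.crt is what gets imported into iOS as a trusted root.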

The second half of this was pointing my phone to my DoH server, which is something iOS has supported for a while now. I put together a management profile for my phone, which is in the only good configuration language today3.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>PayloadDisplayName</key>
        <string>erseiDoHsr0</string>
        <key>PayloadIdentifier</key>
        <string>net.ersei.doh</string>
        <key>PayloadUUID</key>
        <string>BFEB188D-39FC-4455-9061-4C3FB34A432E</string>
        <key>PayloadDescription</key>
        <string>Ersei DoH</string>
        <key>PayloadRemovalDisallowed</key>
        <false/>
        <key>PayloadVersion</key>
        <integer>1</integer>
        <key>PayloadType</key>
        <string>Configuration</string>
        <key>PayloadContent</key>
        <array>
            <dict>
                <key>PayloadDisplayName</key>
                <string>Ersei DoH</string>
                <key>PayloadType</key>
                <string>com.apple.dnsSettings.managed</string>
                <key>PayloadIdentifier</key>
                <string>com.apple.dnsSettings.managed.4E9BA4B7-FD73-4858-AD6D-F4976EC88389</string>
                <key>PayloadUUID</key>
                <string>230D9056-D82A-4AEA-953D-F44519C17D9C</string>
                <key>PayloadVersion</key>
                <integer>1</integer>
                <key>ProhibitDisablement</key>
                <false/>
                <key>DNSSettings</key>
                <dict>
                    <key>DNSProtocol</key>
                    <string>HTTPS</string>
                    <key>ServerAddresses</key>
                    <array>
                        <string>128.210.6.106</string>
                        <string>2607:ac80:303:102:638a:ba7f:2013:b0c</string>
                    </array>
                    <key>ServerURL</key>
                    <string>https://doh.ersei.net/dns-query</string>
                </dict>
            </dict>
        </array>
    </dict>
</plist>

I signed this configuration with my CA certificate using openssl smime -sign -signer erseinet-rootca.crt -inkey erseinet-rootca.key -nodetach -outform der -in doh.mobileconfig -out doh-signed.mobileconfig, imported both the CA certificate and the configuration file into iOS, and it worked!

Mostly.

The main issue is that sometimes iOS decides to ignore your DoH server and just doesn't use it. You have to go into settings to disable and reenable your DoH config to have your phone properly do DNS again. This is better than the previous solution, which was to use a faux-VPN provided by the DNSCloak iOS app. That app conflicted with ZeroTier, but this DoH solution does not conflict!

There's just one last problem, and perhaps it's the most cursed part of this endeavour. Unbound only supports listening on HTTP/2 (even if unencrypted). Nginx does not support reverse-proxying to HTTP/2, but it does support reverse-proxying to gRPC, which is close enough to HTTP/2 that it doesn't matter.

location / {
        grpc_pass grpc://localhost:4932; # Unbound on HTTP/2
        grpc_read_timeout 86400; # iOS tries to keep DoH connections alive
        grpc_connect_timeout 75s;
}
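For reference, the matching Unbound listener looks roughly like this (the port and endpoint are the ones assumed in my Nginx config; http-notls-downstream is what lets Unbound speak unencrypted HTTP/2 behind the proxy):

```
server:
    interface: 127.0.0.1@4932
    https-port: 4932              # Unbound serves DoH on its https-port
    http-endpoint: "/dns-query"
    http-notls-downstream: yes    # no TLS here; Nginx terminates it
```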

But hey, if it works, it works. I don't see ads on Safari anymore (unless iOS unilaterally decides to stop using my DoH server), which is nice. Unfortunately, apps have a lot more control over their networking stack than websites do, so they can pin DNS queries (or use their own DoH servers) and bypass my adblocking. Good thing I refuse to install apps unless I really have to, I guess.

RAM Galore!

A big issue I faced with my new server was that it was good. Too good. I expected too much from it and its paltry(‽) 64GB of RAM. That was too little to run my Nix Hydra CI VM (many tens of GB from compiling big programs), Minecraft servers, ZFS cache, and Matrix at the same time. Fortunately for me, I got some RAM from another very good friend (thank you) so I can avoid the alerting emails I was getting stating that "95% of RAM is used".

RAM usage graph spiking down from 80% average use to about 30% average

Padding Out The DevOps Section of my Résumé

I've begun a slow migration to Grafana/Prometheus/Loki for monitoring, hopefully relying less on Zabbix for things Zabbix isn't really meant to monitor. This gives me the ability to ingest my Nginx logs and perform queries on them, so I can see how many unique IPs access a blog post over time in a nice graph. Really, setting that up was in response to my Google Drive post hitting the news cycle for some reason, and I wanted to see the numbers go up.

A graph showing the rate of requests per second spiking up and then settling into a long tail
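The unique-IP count comes from a LogQL query along these lines (the label names and the path are assumptions; they depend on how the Nginx logs are shipped and parsed):

```
count(
  count by (remote_addr) (
    count_over_time({job="nginx"} | json | path = "/blog/google-drive" [1h])
  )
)
```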

Speaking of résumés, I am looking for a job/internship for the Summer of 2025, so if you know someone who's hiring and feel like I'd be a good fit, please contact me.

Chrome Finally Catches Up To Firefox

According to Can I use's page on MathML, enough people use browsers that support MathML that I can move my math rendering from KaTeX to Temml and render MathML exclusively. Not only does this look nicer across browsers, it is also more consistent and works properly in RSS readers! I also finally figured out Grav's renderer mechanism, so I no longer need to write ugly shortcodes to render math and can use the standard-ish $$math$$ syntax instead. While that standardized-syntax version is not released yet (I'm still stress-testing it on my website, making sure it behaves properly with other plugins), you can grab the latest source on Sourcehut.

$$\oint_C \mathbf{B} \cdot d\mathbf{l} = \mu_0 \left( I_{\text{enc}} + \varepsilon_0 \frac{d}{dt} \int_S \mathbf{E} \cdot \hat{\mathbf{n}} \, da \right)$$

Likewise, I modified the server-side highlighting plugin to not use shortcodes either, and instead use the standard triple-backtick syntax for syntax highlighting. That plugin is not quite as visible to you, reader, but it's really nice for me when I'm writing these posts.

The modified highlight plugin still needs some work before I publish it as a fork (namespace changes, documentation, copyright, etc.), but the development version can also be downloaded from Sourcehut.

Both of these modified plugins are running on my website, and if you see any issues, feel free to contact me in the usual places so I can fix them.

Cellular Automata in an 88×31 Button

This was a fun little project to do while I was procrastinating on schoolwork and everything else. I've known for the past few years that a lot of people have 88×31 buttons on their websites for a retro feel. I wanted to get in on that (much like with Bluesky), so I reached out to someone to commission a button. That fell through, so I ignored the whole button fiasco until I had a burst of inspiration to make my own.

I wanted a simple button: my website's title and my domain in front of some wires with electrons crawling across them. I had been watching some TodePond recently, so cellular automata were on my mind. The problem reminded me a lot of Wireworld, so all I had to do was make a static 88×31 button in GIMP or something, then run it through a program to simulate the electrons it detects!

GIMP development showing a static image of the below button, with a grid overlay

I'll spare you the development story, but the TL;DR is that the first version didn't count diagonal neighbours. The code was fine, but I wanted to make the cellular automaton Turing-complete, so I modified the bad code I had to check diagonals as well.

Reader, the code was horrible. If you want to make fun of it, you can see the earlier version here. I quickly wised up, and against my better judgement, I rewrote the whole thing, added loop detection, and various other goodies to make an actual cool button.

It took a lot of optimization and redesigns to get to this point, and I think I'm quite happy with it. Of course, it's still like 80KB and accounts for most of the bandwidth on my home page, but I think it needs all 540-some frames so it loops properly.

Send me your 88×31 button and we can link to each other's websites, if you want. I'm not sure how I feel about hotlinking, since I might update the button later for more efficiency or tweak the design slightly, but for now, go for it. If your website gets a lot of traffic, you might want to use a local copy that you periodically sync instead.

The current good source and GIMP source files are also available on Sourcehut. Feel free to customize it to your liking.

Notes Noted, RSS Syndicated

Inspired by Jim Nielsen's blog post on using iOS shortcuts to deploy his blog, I implemented something similar to post a note from my phone from the share menu. It calls a webhook on my server with a password that triggers the creation of a file with the note contents, and then a background daemon will periodically refresh the notes and push the RSS feeds if needed. It has helped me with posting there more, but evidently not enough.

A screenshot of an iPhone showing the steps of a shortcut

All I have to do is hit the "share" button, press "Note post", fill out the relevant information, and it's transmitted to everyone who's subscribed to the notes. It's like social media, but with instant notifications via WebSub (which I also set up) and RSS. Someone should make a decentralized, federated social media system based on this technology...

The shortcut in action

It's not the best experience, since sometimes I need to reference the thing I'm posting to write the note itself, but I can do that in the text editor app.

As for the RSS system, I trigger a WebSub event upon every Git pull my server performs with a simple Git hook, so that people with compatible RSS readers can get new articles instantly instead of waiting for the next refresh. If you don't care because you pull the RSS feed every five minutes, shame on you.

Speaking of RSS, RSS has been superseded by Atom feeds on this website permanently. The feed links have linked to the Atom feeds forever, but some people got a hold of the RSS feeds instead. I didn't want to continue maintaining two different feed templates, so RSS feeds were phased out over the course of a week, and those using RSS-only feeds have gotten a notification that they should've used Atom instead. It seems like the migration has gone well, with just one person still pulling the RSS feed and ignoring the permanent redirect. Hopefully my feed hasn't broken silently in their reader. If this is you, you should use Atom feeds instead.


That's probably all that's noteworthy that I've been up to recently. I might update this post if something I forgot comes to mind. Post-publish update count: 1.

Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.


  1. It's appalling that you can just... disable a security measure and intercept DNS requests. Isn't intercepting DNS exactly what DNSSEC is meant to prevent? We should really enforce DNSSEC. 

  2. Invidious is meant to run under a separate domain (such as invidious.example.com), but the content security policy of googlevideo.com (where the actual video chunks are hosted) only allows requests for HD video if the originator is youtube.com. Invidious gets around this by having the server download and stitch together the video chunks, then send them to the client from the Invidious server, eating a lot of bandwidth (and being slow). However, because Rehike is set up so that youtube.com points to Rehike instead of Google, a web browser using Rehike can access the HD video streams directly without violating the policy. 

  3. It's a type-safe configuration language, what more could you want? 

Playing Bad Apple over RSS

2024-11-13 02:00:00

I swear, I'm surrounded by people who want to watch me waste my time. I was telling the Purdue Linux Users Group about the wonders of RSS and how great it is, but my friend overheard the conversation and said, jokingly, "You should do Bad Apple, over RSS". I whined, complained, claimed it would be "too easy" and that "I need a harder project". Alas, my harder projects have remained stagnant for a few weeks now, eschewed for schoolwork.

And so I ran Bad Apple on RSS.

The moment I heard "Bad Apple" and "RSS" in the same sentence, I thought of a big feed that contains every frame as an entry. But that would be cheating, so my alternative plan formed fairly quickly: an RSS feed that, every time it is requested, updates to show a new frame. Writing the backend to generate the feed was pretty easy. All I needed was to track a query parameter and map it to an integer representing the current frame, using PHP's APCu cache as a basic key/value store.

<?php
header('Content-Type: application/atom+xml; charset=utf-8'); // Return an Atom file
header('Content-Disposition: filename="ba.xml"'); // Download as an XML
$q = $_SERVER['QUERY_STRING']; // Get the query parameter (the bit after the question mark)
if ($q == "") { // If there is no query, don't return anything
    die();
}
$v_pat = "/^[a-f0-9]{16}$/i"; // Verify that the query parameter is a 16-character hexadecimal string
if (preg_match($v_pat, $q) != 1) {
    die();
}

$frame = 1; // Initial frame value

// Load frame if it exists, and update it
if (apcu_exists($q)) {
    $frame = apcu_fetch($q);
    apcu_store($q, $frame + 1);
} else {
    apcu_add($q, 1);
}
$frame = min($frame, 6572); // Cap frame at 6572
// Return the Atom feed
?>
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Bad Apple RSS</title>
    <link href="https://ba.ersei.net/ba" />
    <link rel="self" href="https://ba.ersei.net/ba.php?<?php echo $q;?>" />
    <subtitle>Bad Apple as an RSS feed</subtitle>
    <updated><?php echo date("c");?></updated>
    <author>
        <name>Ersei</name>
    </author>
    <id>https://ba.ersei.net/ba.php?<?php echo $q;?></id>
        <entry>
            <title>Bad Apple Frame <?php echo str_pad($frame, 4, "0", STR_PAD_LEFT);?></title>
            <id>https://ba.ersei.net/img/<?php echo $frame;?></id>
            <updated><?php echo date("c");?></updated>
            <published><?php echo date("c");?></published>
            <link href="https://ba.ersei.net/img/<?php echo $frame;?>"/>
            <content type="html">
            <![CDATA[
                <p><img src="https://ba.ersei.net/img/<?php echo $frame;?>.jpg"></p>
            ]]>
            </content>
    </entry>
</feed>

There seems to be a bug where the first frame shows up twice (likely because the first request serves frame 1 and then stores 1, so the next request serves frame 1 again), but I don't care enough to fix it. All that matters is that it works. Now I just need to patch a feed reader to request the RSS feed 30 times a second...

Javascript Will Be The Death Of Me

I must've muttered that phrase dozens of times throughout this project. Despite the beautiful sensibilities of PHP, I could find no such reprieve in Javascript. I just needed an RSS reader that could handle refreshing articles smoothly every thirty-three milliseconds. I could've used a good RSS reader like FreshRSS or NewsFlash, but FreshRSS's reload model doesn't mesh with the "30 times a second" target I was going for, and I couldn't get NewsFlash to compile on NixOS, so I went with something Javascript-y, as it would probably refresh how I wanted it to.

This was a mistake. Before I could stop myself, I dove into the code of Fluent Reader, the first Javascript-written desktop RSS reader that dredged itself up from the depths of my mind.

Compiling Fluent Reader from scratch led me to discover that it has not been updated in thirteen months, and that installing Electron through NPM doesn't work on NixOS. I'm sure that this would be fine.

I built Fluent Reader, replaced its Electron binary with my own, and launched it. I added my feed, and lo and behold, the first frame of "Bad Apple" emerges from the Microsoft-themed abyss.

Bad Apple Frame 1

Great! I locate the "refresh feeds" button, and tap my laptop touchpad as furiously as my fingers would allow me. One small problem, however.

Bad Apple frames out of order

The frames are out of order? I thought about it for a moment and realized that Atom timestamps are only accurate to the nearest second. When multiple feed entries arrive within the same second, the frames get sorted alphabetically by title instead of by the time they were received.

Simple enough fix. I just had to find the code where the feed is being returned and change out the sort by title to be inverted. How hard could it be?

It took me thirty minutes to figure out how to sort the output. Maybe it's just because I'm bad at React. Maybe it's because nothing in this codebase is documented. Maybe it's because the universe is telling me that this is a stupid idea and that I'm stupid for even trying.

My fingers can tap-tap the refresh button fast enough for the video to kind of play now. Unfortunately, I get tired after a hundred frames and I'm too inconsistent.

Being a "good programmer", I decide to automate it. I'm already familiar with the code, I just need to add a setInterval that triggers the fetchItems function, all controlled with a nice toggle button.

My misplaced confidence is becoming something of a cliché. But! It's not all my fault! Fluent Reader uses something called React class components; if you follow the link, you're greeted with a marvellous alert telling you to please, for the love of god, stop using class components.

Pitfall We recommend defining components as functions instead of classes. See how to migrate.

It took me hours to figure out how to add one button that toggles the setInterval. Nothing was allowed to interact with each other. I had to pass in app state where there definitely should not have been app state passed. But once I figured out that I had to replace all of the minified vendored font files, my button finally worked!

Simpsons meme where it's Bart going "this is the worst day of my life", but it's about a project instead. "this is the hardest part of my project". "this is the hardest part of your project so far"

If you would like to see the modifications I made to Fluent Reader, I uploaded my changes on Sourcehut.

Now I just had to show it off.

Wait, Video Editing on Linux is How Bad?

It took two hours to bend React to my will. I just had to record my screen playing the video, sync it to some music, and then become famous among a niche group of internet-dwellers.

The cliché continues, but it's still not my fault that my laptop thermal throttles at the slightest provocation. I connect my laptop to my server over Ethernet, and let the RSS reader rip. It resulted in a six and a half minute recording. For reference, Bad Apple is meant to only be four minutes or so. My CPU was pinned, Electron was struggling1, and neither my network bandwidth nor my server was the issue. Easy enough, I'll just scale the entire video by a constant factor to speed it up.

There's those accursed phrases again: "easy enough" and "just". When will I learn that it's never that easy? The recording was running at different speeds throughout.

I ended up spending a few hours setting up a couple dozen keyframes to synchronize my RSS Bad Apple with the actual video. This was made harder by the fact that I hadn't actually watched Bad Apple all the way through, so I kept getting confused about where I was (that may have been my mistake), and by the fact that the synchronization seems to really mess with Kdenlive. I overlaid the real video on one of the little RSS boxes and watched closely to see if anything got out of sync. If it did, I added a keyframe to bring that part back in sync, and repeated the process until everything was properly synchronized.

If I didn't know the lyrics to Bad Apple going into this project, I certainly do know now.

Time remapping in Kdenlive. The top line and the bottom line are parallel, and points on the top line are mapped to the bottom line. There are 22 connections.

In case that wasn't bad enough, YouTube wanted me to now give them my phone number to set my own thumbnail and a facial scan to add a clickable link to this blog post in the description.

Google wants either a face scan, a picture of my ID, or me to wait two months

I figured it out though. Here's your video. I hope you enjoy it.

YouTube video embed, but it's a photo that takes you to the website


Questions? Thoughts? Concerns? Pure unbridled anger at the fact that this is the post I publish after months? Feel free to contact me!

I'm taking down ba.ersei.net in the meantime, just so that people don't try it out on my website.


  1. It was probably because Electron was running on Wayland. Running it under Xwayland, it performed a lot better, but because my laptop has a HiDPI screen and fractional scaling on Xwayland is still broken on Sway (and I didn't have the foresight to turn down the scaling beforehand), I recorded it in pure Wayland mode. 

No IP? No Problem!

2024-10-30 06:40:00

This is meant as a tutorial on how to use a VPS to get a public IPv4 address for self-hosting purposes. Often, people want to run a server out of a college dorm that doesn't give them a public IPv4 address, or out of their house from behind CGNAT.

It's a simple solution and an excellent alternative to software like Rathole or Cloudflare Tunnel, because it transparently passes the real connecting IP addresses through. With Rathole or Cloudflare Tunnel, your webserver access logs show every connection arriving from 127.0.0.1 rather than from the real client address.

As a quick introduction, we will create a Wireguard connection between the server without an IP address and a virtual machine in the cloud. Only incoming traffic will go through the cloud VM; all outbound traffic will continue as normal. This means there is no added latency for normal outbound internet use; latency appears only when someone accesses your website.

Step 1: Acquire a VPS with a Public IPv4 Address

This is not an endorsement, but Oracle Cloud and Google Cloud both have generous free tiers and will give you a static IPv4 address with one virtual machine. Pick a region that is geographically close to where your server is. The specs of the VM are not important—this is extremely lightweight. The most important component is bandwidth, as the bandwidth of this machine will become the bandwidth of the incoming connections. This VPS must be running a modern Linux distribution.

Step 2: Install Required Software

Verify that nftables is installed on the VPS by running nft --version; if it is not, install it. Occasionally the nft command lives in /usr/sbin and may not be in the PATH of a non-root user. Additionally, some distributions ship alternative firewall software, such as firewalld or ufw. Please ensure those are uninstalled.

Install Wireguard on both machines. For Debian-based distributions, this will look like sudo apt install wireguard, and for Fedora-based distributions sudo dnf install wireguard-tools.

Step 3: Set Up Firewall

On the cloud provider's side, open port 51820/UDP in the cloud firewall. Instructions vary by provider.

Then, on the cloud virtual machine, create the file /etc/nftables/proxy.nft:

table ip nat {
    chain PREROUTING {
        type nat hook prerouting priority dstnat; policy accept;
        meta l4proto tcp tcp dport 80   dnat to 192.168.77.2:80
        meta l4proto tcp tcp dport 443  dnat to 192.168.77.2:443
    }

    chain INPUT {
        type nat hook input priority 100; policy accept;
    }

    chain POSTROUTING {
        type nat hook postrouting priority srcnat; policy accept;
    }

    chain OUTPUT {
        type nat hook output priority -100; policy accept;
    }
}

Also create the file /etc/nftables/main.nft:

# Sample configuration for nftables service.
# Load this by calling 'nft -f /etc/nftables/main.nft'.

# Note about base chain priorities:
# The priority values used in these sample configs are
# offset by 20 in order to avoid ambiguity when firewalld
# is also running which uses an offset of 10. This means
# that packets will traverse firewalld first and if not
# dropped/rejected there will hit the chains defined here.
# Chains created by iptables, ebtables and arptables tools
# do not use an offset, so those chains are traversed first
# in any case.

# drop any existing nftables ruleset
flush ruleset

# a common table for both IPv4 and IPv6
table inet nftables_svc {

    # protocols to allow
    set allowed_protocols {
        type inet_proto
        elements = { icmp, icmpv6 }
    }

    # interfaces to accept any traffic on
    set allowed_interfaces {
        type ifname
        elements = { "lo" }
    }

    # services to allow
    set allowed_tcp_dports {
        type inet_service
        elements = { ssh, http, https, 51820 }
    }

    # services to allow
    set allowed_udp_dports {
        type inet_service
        elements = { 51820 }
    }

    # this chain gathers all accept conditions
    chain allow {
        ct state established,related accept

        meta l4proto @allowed_protocols accept
        iifname @allowed_interfaces accept
        tcp dport @allowed_tcp_dports accept
        udp dport @allowed_udp_dports accept
    }

    # base-chain for traffic to this host
    chain INPUT {
        type filter hook input priority filter + 20
        policy accept

        jump allow
        reject with icmpx type port-unreachable
    }
}

include "/etc/nftables/proxy.nft"

This firewall rule will NOT close SSH access. If you have publicly available SSH, that is a bad idea, and you should adjust allowed_tcp_dports to not include SSH. This default configuration will only pass through HTTP and HTTPS. Adjust allowed_tcp_dports to allow your TCP port, and allowed_udp_dports to allow your UDP port. In the first file, use the example HTTP/HTTPS configuration to forward another port. Keep in mind that this port forwarding will take priority! If you have SSH open to the VPS and you try forwarding SSH, you WILL lose SSH access!

Add the line include /etc/nftables/main.nft; at the end of the file /etc/nftables.conf (the semicolon is important), and then restart the firewall (and ensure it persists across reboots):

cloudvm# systemctl enable nftables
cloudvm# systemctl restart nftables

Finally, enable IP forwarding and make it persist across reboots:

cloudvm# sysctl -w net.ipv4.ip_forward=1
cloudvm# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf

Step 4: Set Up WireGuard

First, generate the WireGuard keys. On the cloud VM, run this command as root:

cloudvm# wg genkey | tee privatekey | wg pubkey > publickey

Keep these generated files (privatekey, publickey) in a safe place.

Repeat this generation command on the other machine.

Next, create the file /etc/wireguard/wg0.conf on the cloud VM:

[Interface]
Address = 192.168.77.1/24
ListenPort = 51820
PrivateKey = [FIRST_GENERATED_PRIVATE_KEY]

[Peer]
PublicKey = [THE_PUBLIC_KEY_GENERATED_ON_THE_OTHER_MACHINE]
AllowedIPs = 192.168.77.2/32
PersistentKeepalive = 30

Create the file /etc/wireguard/wg0.conf on the other machine:

[Interface]
PrivateKey = [PRIVATE_KEY_GENERATED_ON_THIS_MACHINE]
Address = 192.168.77.2/32

Table = 123
PreUp = ip rule add from 192.168.77.2 table 123 priority 456
PostDown = ip rule del from 192.168.77.2 table 123 priority 456

[Peer]
PublicKey = [PUBLIC_KEY_GENERATED_ON_THE_OTHER_MACHINE]
Endpoint = [IP_ADDRESS_OF_THE_CLOUD_VM]:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 30

Then start the WireGuard tunnel on both machines and persist it across reboots:

cloudvm# systemctl enable --now wg-quick@wg0
server# systemctl enable --now wg-quick@wg0

That should be all you need to have a public static IPv4 address when self-hosting in an environment where you don't have any such address. If you have any questions, feel free to contact me. Please try fixing your own problem before asking me for help, though. If you do ask me for help, please be as descriptive as possible and tell me the troubleshooting steps you've taken. I'll ignore the cry for help otherwise.

Booting Linux off of Google Drive

2024-07-01 21:20:00

Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.

Like all good projects, this began with an Idea.

My mind reached out and grabbed wispy tendrils from the æther, forcing the disparate concepts to coalesce. The Mass gained weight in my hands, and a dark, swirling colour promising doom to those who gazed into it for long.

On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.

Booting Linux off of a Google Drive root.

But How?

I wanted this to remain self-contained, so I couldn't have a second machine act as a "helper". My mind went immediately to FUSE—a framework that lets a userspace program act as a filesystem driver (with cooperation from the kernel).

I just had to get FUSE programs installed in the Linux kernel initramfs and configure networking. How bad could it be?

The Linux Boot Process

The Linux boot process is, technically speaking, very funny. Allow me to pretend I understand for a moment1:

  1. The firmware (BIOS/UEFI) starts up and loads the bootloader
  2. The bootloader loads the kernel
  3. The kernel unpacks a temporary filesystem into RAM which has the tools to mount the real filesystem
  4. The kernel mounts the real filesystem and switches the process to the init system running on the new filesystem

As strange as the third step may seem, it's very helpful! We can mount a FUSE filesystem in that step and boot normally.

A Proof of Concept

The initramfs needs to have both network support as well as the proper FUSE binaries. Thankfully, Dracut makes it easy enough to build a custom initramfs.

I decided to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works, as opposed to something like Alpine.

$ git clone https://github.com/dracutdevs/dracut
$ podman run -it --name arch -v ./dracut:/dracut docker.io/archlinux:latest bash

In the container, I installed some packages (including the linux package because I need a functioning kernel), compiled dracut from source, and wrote a simple module script in modules.d/90fuse/module-setup.sh:

#!/bin/bash
check() {
    require_binaries fusermount fuseiso mkisofs || return 1
    return 0
}

depends() {
    return 0
}

install() {
    inst_multiple fusermount fuseiso mkisofs
    return 0
}

That's it. That's all the code I had to write. Buoyed by my newfound confidence, I powered ahead, building the EFI image.

$ ./dracut.sh --kver 6.9.6-arch1-1 \
    --uefi efi_firmware/EFI/BOOT/BOOTX64.efi \
    --force -l -N --no-hostonly-cmdline \
    --modules "base bash fuse shutdown network" \
    --add-drivers "target_core_mod target_core_file e1000" \
    --kernel-cmdline "ip=dhcp rd.shell=1 console=ttyS0"
$ qemu-kvm -bios ./FV/OVMF.fd -m 4G \
    -drive format=raw,file=fat:rw:./efi_firmware \
    -netdev user,id=network0 -device e1000,netdev=network0 -nographic
...
...
dracut Warning: dracut: FATAL: No or empty root= argument
dracut Warning: dracut: Refusing to continue

Generating "/run/initramfs/rdsosreport.txt"
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

To get more debug information in the report,
reboot with "rd.debug" added to the kernel command line.

Dropping to debug shell.

dracut:/#

Hacker voice I'm in. Now to enable networking and mount a test root. I have already extracted an Arch Linux root into an S3 bucket running locally, so this should be pretty easy, right? I just have to manually set up networking routes and load the drivers.

dracut:/# modprobe fuse
dracut:/# modprobe e1000
dracut:/# ip link set lo up
dracut:/# ip link set eth0 up
dracut:/# dhclient eth0
dhcp: PREINIT eth0 up
dhcp: BOUND setting up eth0
dracut:/# ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
dracut:/# s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
dracut:/# ls /sysroot
bin   dev  home  lib64  opt   root  sbin  sys  usr
boot  etc  lib   mnt    proc  run   srv   tmp  var
dracut:/# switch_root /sysroot /sbin/init
switch_root: failed to execute /lib/systemd/systemd: Input/output error
dracut:/# ls
sh: ls: command not found

Honestly, I don't know what I expected. Seems like everything is just... gone. Alas, not even tab completion can save me. At this point, I was stuck. I had no idea what to do. I spent days just looking around, poking at the switch_root source code, all for naught. Until I remembered a link Anthony had sent me: How to shrink root filesystem without booting a livecd. In it was a command called pivot_root that switch_root seems to call internally. Let's try that out.

dracut:/# logout
...
[  430.817269] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100 ]---
...
dracut:/# cd /sysroot
dracut:/sysroot# mkdir oldroot
dracut:/sysroot# pivot_root . oldroot
pivot_root: failed to change root from `.' to `oldroot': Invalid argument

Apparently, pivot_root is not allowed to pivot roots if the root being switched is in the initramfs. Unfortunate. The Stack Exchange answer tells me to use switch_root, which doesn't work either. However, part of that answer sticks out to me:

initramfs is rootfs: you can neither pivot_root rootfs, nor unmount it. Instead delete everything out of rootfs to free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs with the new root (cd /newmount; mount --move . /; chroot .), attach stdin/stdout/stderr to the new /dev/console, and exec the new init.

Would it be possible to manually switch the root without a specialized system call? What if I just chroot?

...
dracut:/# mount --rbind /sys /sysroot/sys
dracut:/# mount --rbind /dev /sysroot/dev
dracut:/# mount -t proc /proc /sysroot/proc
dracut:/# chroot /sysroot /sbin/init
Explicit --user argument required to run as user manager.

Oh, I need to run the chroot command as PID 1 so systemd can start up properly. I can actually tweak the initramfs's init script, put my startup commands in there, and replace the switch_root call with exec chroot /sysroot /sbin/init.

I put this in modules.d/99base/init.sh in the Dracut source after the udev rules are loaded and bypassed the root variable checks earlier.

modprobe fuse
modprobe e1000
ip link set lo up
ip link set eth0 up
dhclient eth0
ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
mount --rbind /sys /sysroot/sys
mount --rbind /dev /sysroot/dev
mount -t proc /proc /sysroot/proc

I also added exec chroot /sysroot /sbin/init at the end instead of the switch_root command.

Rebuilding the EFI image and...

A screenshot of a Linux login screen

I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought't've stopped me, right?

Nobody stopped me, so I kept going.

I log in with the very secure password root as root, and it unceremoniously drops me into a shell.

[root@archlinux ~]# mount
s3fs on / type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
...
[root@archlinux ~]#

At last, Linux booted off of an S3 bucket. I was compelled to share my achievement with others—all I needed was a fetch program to include in the screenshot:

[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
 core.db failed to download
error: failed retrieving file 'core.db' from geo.mirror.pkgbuild.com : Could not resolve host: geo.mirror.pkgbuild.com
warning: fatal error from geo.mirror.pkgbuild.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.rackspace.com : Could not resolve host: mirror.rackspace.com
warning: fatal error from mirror.rackspace.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.leaseweb.net : Could not resolve host: mirror.leaseweb.net
warning: fatal error from mirror.leaseweb.net, skipping for the remainder of this transaction
error: failed to synchronize all databases (invalid url for server)
[root@archlinux ~]#

Uh, seems like DNS isn't working, and I'm missing dig and other debugging tools.

Wait a minute! My root filesystem is on S3! I can just mount it somewhere else with functional networking, chroot in, and install all my utilities!

Some debugging later, it seems like systemd-resolved doesn't want to run because it Failed to connect stdout to the journal socket, ignoring: Permission denied. I'm not about to try to debug systemd because it's too complicated and I'm lazy, so instead I'll just use Cloudflare's.

[root@archlinux ~]# echo "nameserver 1.1.1.1" > /etc/resolv.conf
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
 core is up to date
 extra is up to date
...
[root@archlinux ~]# fastfetch

Fastfetch showing the system running in QEMU

I look around, making sure that nobody had tried to stop me. My window was intact, my security system had not tripped, the various canaries I had set up around the house had not been touched. I was safe to continue.

I was ready to have it run on Google Drive.

Google Gets Involved

There's already a project that does Google Drive over FUSE for me: google-drive-ocamlfuse. Thankfully, I have a Google account lying around that I haven't touched in years ready to go! I follow the instructions, accept the terms of service I didn't read, create all the oauth2 secrets, enable the APIs, install google-drive-ocamlfuse from the AUR into my Arch Linux VM, patch some PKGBUILDs (it's been a while), and lo and behold! I have mounted Google Drive! Mounting Drive and a few very long rsync runs later, I have Arch Linux on Google Drive.

Just kidding, it's never that easy. Here's a non-exhaustive list of problems I ran into:

  1. Symlinks to symlinks don't work (very important for stuff in /usr/lib)
  2. Hardlinks don't work
  3. It's so slowwwww
  4. Relative symlinks don't work at all
  5. No dangling symlinks (important for stuff that links to /proc and isn't mounted, or stuff that just hasn't copied over yet)
  6. Symlinks outside of Google Drive don't work
  7. Permissions don't work (neither do attributes)
  8. Did I mention it's SLOW

With how many problems there are with symlinks, I have half a mind to change the FUSE driver code to just create a file that ends in .internalsymlink to fix all of that, Google Drive compatibility be damned.

But, I have challenged myself to do this without modifying anything important (no kernel tweaking, no FUSE driver tweaking), so I'll just have to live with it and manually create the symlinks that rsync fails to make, using a hacky sed command over the rsync error logs.

In the meantime, I added the token files generated from my laptop into the initramfs, as well as the Google Drive FUSE binary and SSL certificates, and tweaked a few settings2 to make my life slightly easier.

...
inst ./gdfuse-config /.gdfuse/default/config
inst ./gdfuse-state /.gdfuse/default/state
find /etc/ssl -type f -or -type l | while read file; do inst "$file"; done
find /etc/ca-certificates -type f -or -type l | while read file; do inst "$file"; done
...

A screenshot of Google Drive showing the root of a typical Linux filesystem

It's nice to see that timestamps kinda work, at least. Now all that's left is to wait for the agonizingly slow boot!

chroot: /sbin/init: File not found

Perhaps they did not bother to stop me because they knew I would fail.

I know the file exists since, well, it exists, so why is it not found? Simple: Linux is kinda weird, and if the binary you call needs an interpreter or library that can't be found (for a dynamically linked program, that includes the dynamic linker itself), the error you get is "File not found" for the binary you ran.
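A tiny illustration of the same misleading error (hypothetical paths, nothing from the actual system): an executable script whose interpreter is missing refuses to run even though the script itself plainly exists.

```shell
# Create an executable script pointing at an interpreter that does not
# exist (analogous to a binary whose dynamic linker link is broken).
cat > /tmp/phantom.sh <<'EOF'
#!/no/such/interpreter
echo hello
EOF
chmod +x /tmp/phantom.sh

ls -l /tmp/phantom.sh  # the file clearly exists and is executable...
/tmp/phantom.sh || echo "...but running it still fails with a 'not found' error"
```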

dracut:/# ldd /sysroot/bin/bash
    linux-vdso.so.1 (0x00007e122b196000)
    libreadline.so.8 => /usr/lib/libreadline.so.8 (0x00007e122b01a000)
    libc.so.6 => /usr/lib/libc.so.6 (0x00007e122ae2e000)
    libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007e122adbf000)
    /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007e122b198000)

However, these symlinks don't actually exist! Remember how earlier we noted that relative symlinks don't work? Well, that's come back to bite me: the copied links are absolute and point into /sysroot, so from inside the new root the kernel looks them up under /sysroot/sysroot. Luckily, this is an easy enough fix: we just need /sysroot to also appear at /sysroot/sysroot, using a bind mount rather than yet another symlink:

dracut:/# mkdir /sysroot/sysroot
dracut:/# mount --rbind /sysroot /sysroot/sysroot
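To see why the mirror mount helps, here is a toy reconstruction (all paths hypothetical): a symlink whose target embeds the mount prefix can only resolve from inside that tree if the prefix also exists within it, which is exactly what the rbind provides.

```shell
# Build a tree that will become the new root, containing a symlink
# whose target includes the mount prefix /tmp/sysroot.
mkdir -p /tmp/sysroot/usr/lib
echo "libfoo" > /tmp/sysroot/usr/lib/libfoo.so
ln -sf /tmp/sysroot/usr/lib/libfoo.so /tmp/sysroot/liblink.so

# From *outside* the tree the link resolves fine:
cat /tmp/sysroot/liblink.so

# But inside a chroot at /tmp/sysroot, the kernel would look up the
# target as <chroot>/tmp/sysroot/usr/lib/libfoo.so, so the tree must
# also be mirrored at /tmp/sysroot/tmp/sysroot for the link to work.
readlink /tmp/sysroot/liblink.so
```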

Now time to boot!

It took five minutes for Arch to rebuild the dynamic linker cache, another minute per systemd unit, and then, nothing. The startup halted in its tracks.

[ TIME ] Timed out waiting for device /dev/ttyS0.
[DEPEND] Dependency failed for Serial Getty on ttyS0.

Guess I have to increase the timeout and reboot. In /etc/systemd/system/dev-ttyS0.device, I put:

[Unit]
Description=Serial device ttyS0
DefaultDependencies=no
Before=sysinit.target
JobTimeoutSec=infinity

Luckily, it did not take infinite time to boot.

A Linux login prompt

I'm so close to victory I can taste it! I just have to increase another timeout. I set LOGIN_TIMEOUT to 0 in /etc/login.defs in Google Drive, and tried logging in again.
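For reference, that's a one-line change (my sketch of the relevant fragment; the real /etc/login.defs has many other settings):

```
# Maximum seconds allowed to complete a login; 0 disables the timeout.
LOGIN_TIMEOUT           0
```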

Thankfully, there's a cache, so subsequent file reads aren't nearly as slow.

Fastfetch in Google Drive root, showing that the root partition is mounted as fuse.google-drive-ocaml

Here I am, laurel crown perched upon my head, my chimera of Linux and Google Drive lurching around.

But I'm not satisfied yet. Nobody had stopped me because they want me to succeed. I have to take this further. I need this to work on real hardware.

Now Do It On Real Hardware

Fortunately for me, I switched servers and now have an extra laptop with no storage just lying around! A wonderful victim3 for my test!

There are a few changes I'll have to make:

  1. Use the right ethernet driver and not the default e1000
  2. Do not use a serial display
  3. Change the network settings to match my house's network topology

All I need is the r8169 driver for my ethernet port, and let's throw a Powerline adapter into the mix, because it's not going to impact the performance in any way that matters, and I don't have an ethernet cord that can reach my room.

I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server. Despite my best attempts, I couldn't figure out which module to modprobe for the laptop's built-in keyboard, so I just modprobed usbhid and used an external keyboard to set up networking.

A screenshot of fastfetch and mount on bare metal showing that we're booted off of Google Drive

This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.

Nice thing is, I can just grab the screenshot4 from Google Drive and put it here!

Woe! Cloud Native Computer Be Upon Ye

Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.

If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.

Unfortunately, I don't know what to do next with this. Maybe I should install Nix?


Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.


  1. I understand mostly because I read this Archwiki article. This section ends up being a wispy summarization. 

  2. I set acknowledge_abuse=true, and root_folder=fuse-root

  3. No computers were (physically) harmed in the making of this project. 

  4. I used fbgrab to take the screenshot 

Yeah, I Skate(board)

2024-04-02 04:10:00

As some of you may know, I go to college in the midwestern United States. This means that the land here is pretty flat, and my classes are like a mile from my apartment. Why would I want to walk back and forth multiple times a day when I could do it in style?

Of course, I am not the only one beridden by this plight. Popular solutions include:

  • The alright bus system
  • Buy a bike and use the dedicated bike lanes and the plethora of bike racks
  • Those little rent-an-electric-scooter things that cost too much money per minute
  • Buy an electric scooter or bike
  • Attach a two-stroke engine to your bike and call it a day
  • Or attach it to your scooter????
  • Build an electric longboard
  • Buy an electric longboard
  • Skateboard 😎
  • Scooter

These are all valid options, and do not include some of the more esoteric solutions I have seen around campus. After all, Purdue is an engineering school, and midwestern engineering is the best kind of engineering. Of course, I entered college with delusions of grandeur. I had been watching too much engineering YouTube at the time, so I had an idea of what I would need in terms of motor controllers, brushless motors, LiPo batteries, and so on.

mount stupid

Yeah, I never ended up doing anything with that budget scooter from Wal·Mart, other than ride it around for the first month and a half of my college career, burn out the wheels, and sell it for a few bucks to someone else.

Thus begins my journey into skating.

The Terror Begins

Many many years ago, in a state far far away, little Ersei saw the Cool Older Kids (they were like ten years old at the time) on their new RipStiks. For those out of the know, a RipStik is a caster board, so two pivoting wheels on two joined twisty boards that you can move by twisting your lower body.

Years later, I got my wish and became the sickest, most badical person on the block. Buoyed by my ego, I went down the fairly-steep hill in my neighbourhood, hit a pebble, and tumbled a solid ten feet.

Luckily, I landed halfway in the grass and only got a few scrapes (and I was wearing a helmet).

This exemplified the largest issue with my dreams of skating around as a means of transport: the wheels were unforgiving, and the roads were brutal. Of course, I ignored all of those warning signs and continued with my dreams.

Scooter, Begone!

I was back home in October, and I had come with an idea: I would take my old, decrepit, and waterlogged skateboard that my parents bought for me a decade ago to soothe my jealousy at the older kids, and make it work for me. In previous years, it had been used exclusively as a seat for the swingset I hung from a sturdy tree around the time I got it.

My friend, Merrick, tried to teach me how to skate all those years ago, but I was too scared to get on the skateboard. His voice lives on in my head, echoing the one bit of knowledge that has survived the test of time: "bigger wheels means you go faster". I will never forget that.

the transformers skateboard, with wheels that seem unnaturally big

I put bigger wheels I cannibalized from my old RipStik on the skateboard and left everything else as-is. Eagle-eyed readers will notice a crack in the plastic. This will come back to haunt me later.

It Is Haunting Me Now, Which Is Later, Due to the Linearity of Experiencing Time

When I got back to college after whatever break happened around October 11, 2022, I set to learning how to skate. A few days and one helmet later, I had the hang of it and started using the board to get around campus.

Soon afterward, I hit something in the road, fell on my face, and messed up my hand and wrist1. I refuse to upload a picture.

Naturally, this did not stop me, and I continued to skate around.

In Which I Ride Around on a Deathtrap

This skateboard was trying to kill me. There is no other explanation. A few of my friends who had been skating for years tried out the abomination I made and collectively agreed to never touch it again. And yet I continued, as skateboard parts are expensive.

However, I was flush with money from graduation, and a few Amazon gift cards later, I decided to overhaul my skateboard. I spent a few bucks putting on new grip tape (badly) as the original board had practically none left. I got some secondhand trucks, nice bearings, and big squishy wheels. For safety.

Bunch of skateboard parts on the ground, and it's time to put it all together

And wow was this a lot better. The only part I kept around was the deck, since the Bumblebee decal had since become iconic2.

the new and improved skateboard put together, with a nice view of the bumblebee transformers decal

A Bad Artist Blames Their Tools, A Good Artist Blames Themselves

I did continue to eat asphalt on occasion as I got better at skateboarding. Here is a non-exhaustive list of the lessons I've learned so far:

  • Do not trip over your own skateboard, as you will fall.
  • Do not be scared to switch to a wider stance, as that is more stable.
  • Do not have your foot too far forward, as you will fall.
  • Do not have your foot too far backward, as you will fall.
  • Do not ignore the ground in front of you, as sometimes chunks are missing in the pavement.
  • Do not assume that people will stay on the sidewalk, as someone will walk/drive into the bike lane in front of you.
  • Do not be afraid to bail.

These lessons were written in blood. Don't worry, I jumped off of the skateboard before I crashed into the person who started crossing the street without looking, and the car driven by someone on their phone turning into the bike lane3.

A great artist blames everyone else, I guess.

Oh You Skateboard? Can You Do Any Tricks?

No, since this skateboard I've put together is too bottom-heavy to do any kind of tricks. I can do a manual, and jump curbs, but that's about it.

A small wrench thrown into my plans for tricks is that my skateboard deck is apparently really small. It's a child's deck, since, y'know, it was originally bought for a child (me). It also seems like new decks are expensive, so whenever this one breaks I'll upgrade to a properly-sized one.

But, Why Do You Skateboard if You Get Hurt?

Ignoring the surface-level arguments that I can throw out, like the "convenience" of being able to carry it around, or how "cool" skating looks, or how "low-maintenance"4 a skateboard is, the reason I like skating is that it is fun.

In the end, why do anything if it's not enjoyable, either now or for delayed gratification? I can't count the number of nights I was feeling terrible, and skating outside for a few miles made me feel a lot better. Skateboarding is enjoyable, much like writing. There is always a risk to be taken with activities, and I've gotten good enough that I no longer worry about falling off my skateboard and getting hurt.

There's a certain amount of enjoyment that you can derive from anything, and a good balance of enjoyment and need is essential. An activity that is pure enjoyment with no need (Hermitcraft) must be balanced out by something that is all need and no enjoyment (homework). Skateboarding lies somewhere in between, as an activity that you need to do, but also enjoy doing. It balances itself. Spending a few hours aimlessly skating around and enjoying the weather is offset by the need to get to class on time.

I'm not alone in enjoying skateboarding. There are plenty of people around campus who get around with longboards and skateboards, or even something weirder.

A really really long and tall longboard. It's like four feet tall

That brings me to hobbies. Hobbies ride that line between work and pleasure, a mix of "need" and "want". Take selfhosting as an example—what started out as learning and fun has evolved into a "need", in which I now need to keep my DNS server up, my RSS reader, my photo backups, and so on. But that "need" does not make it bad! Quite the contrary, it balances out the "want" of selfhosting, and makes it, in my opinion, ultimately more enjoyable.

This can be used in the opposite direction too. With the rise of "hustle culture", and the rampant "monetize your hobby", it swings the balance in the other direction. What was once enjoyable is now work, and it becomes unsatisfying and sad.

That does call into question going into the technology industry as someone who is already pursuing computers as a hobby, much like myself. In high school, computers were a hobby for me. I was not graded on how well I could do computer science, but in college I am.

The scale has shifted, and computer science and programming are not nearly as fun anymore. Staying consistent with my argument, however, I have found other activities to balance it out, like skateboarding, making jewelry, and writing, all things that cannot be monetized (or if they can, they won't make enough money to be worthwhile). However, this is not sustainable, as every hobby requires time, and time is finite. Adding more work and expecting to balance it will not succeed the moment the time required for work exceeds the time available for enjoyment.

Please don't turn your hobby into a job. It's not worth it.

So Tell Me More About Your Skateboard, I'm Interested Still

Gladly! My skateboard is a 7.75 inch wide deck with 70 millimeter 78A Sector 9 Nineballs wheels (they're pretty worn, and I wanna try harder wheels to learn to slide), Bones Reds steel bearings (maybe I'll upgrade to ceramic if I want to do more wet-weather riding), and Paris 129 millimeter Street TKP trucks (lubricated with bar soap). The only original parts are the deck and the screws holding the trucks on, but those are rusted through, and I need to replace them.

It's developing a nasty razor tail, and the board is beginning to chip a little at the back, mostly since I dropped it.

I avoid riding it if water has pooled outside, but it's fine to ride when it's wet, since the bearings are properly lubricated and sealed and the wheels are soft enough to retain grip.

I don't wear anything special when skateboarding, just jeans, a hoodie/beanie if it's not warm enough, and gloves to protect my hands if it's cold or if I am scared of falling off that day. I just wear my normal shoes, but that's a bad idea since the grip has disappeared in like two months.

The skateboard is a bit high off of the ground and the wheels are pretty soft, so it does take more energy to skate around compared to normal skateboards or dropped longboards5. The height does help when going off of curbs, so that's nice.

Still have questions about my skateboarding hobby? Feel free to contact me.


This was written for April Cools, and I've been meaning to write, so here y'all go.

I've written the word skate so many times that it's beginning to look weird.


  1. Second time I've messed up my wrist, the first was falling off of my bike like a decade ago. 

  2. People actually recognize my skateboard around campus because of that and people come up to me at one in the morning in a Five Guys to comment on it. I do like the attention. 

  3. This was a strange story, but I had not five minutes prior fallen off my skateboard. If I had not, then the timing would be perfect for me to hit the car. The person coming the opposite direction on their electric skateboard was not nearly so lucky, but they managed to dodge the car as well, and was not badly hurt. 

  4. Once every month or so, I open up the ball bearings, clean them out, and put new lubricant in them. 

  5. A longboard that has the wheels mounted higher than the rest of the deck so the deck is closer to the ground. 

Server Updates: In Which I Have A Server Now, For Real

2024-02-26 23:45:00

After years of yearning, longing, and pining (let's not forget pining), I finally got myself an actual server. Or rather, a friend of mine got sick of listening to me whine about my old, terrible server, and gave me a spare that he cannibalized for parts (but was still very good and usable).

I am now the proud owner of an HPE ProLiant DL360 Gen9.

This story starts a few months back, with the Purdue Linux Users Group (of which I am an officer) deciding to put together a mini-datacenter for students to use. It's nothing too fancy, really just a 12U rack sitting in a room with a handful of static IPs from Purdue University. I like the rack, it's a nice source of white noise for when I need a good study space.

12U rack server with a few slots populated. It's kinda messy with wires everywhere. The floor is dirty, yes, I know, I'm working on it

I'm the server second-from-the-bottom! My friend has the top rack mount server (and the precarious hard-drives), and my other friend has the bottom-most server.

I got the server, set up the BIOS configurations, and thought to myself, "What the hey, might as well just take the drive out of my old server and put it in the new one!"

I made the mistake of appending that line of thought with the ill-fated words, "What could go wrong?"

It Went Wrong

It is the afternoon of Valentine's Day. I have a meeting in less than an hour, and I have blocked out time later for Valentine's Day Activities™. There are but a few scant hours that day to work on the server. I have less than an hour to work on the server.

Of course, I chose this time to move the drive over.

I skateboard the half-odd mile to my dorm, grab my laptop charger (as I neglected to take it with me in the morning), break it down into the two cables, and stuff it into my jacket. With trepidatious fingers, I shut off my laptop-server, fumble with the screwdriver for too long, and extract the expensive SSD I put in there after my last drive failure, proprietary laptop caddy and all. I wrap the drive in a paper bag to hopefully insulate it against my winter jacket's static, lock up my room, and skateboard as fast as I can back to the datacenter.

The adrenaline has taken over my body. I unwrap the drive, praying that it is intact, and remove the four screws holding it into the Dell laptop caddy. A glance at my phone tells me I have less than thirty minutes before I have to go to my meeting.

I look at the screws I extracted from the drive caddy, and the horror of realization dawns on me: these screws look too short for the server's drive caddies. The panic sets in. My hands are unsteady; I can't align the screws and the drive. Out of desperation, I try to put the screws in anyway, and three go in. The fourth stubbornly refuses.

Three out of four is good enough for me. Less than twenty minutes left.

I put the caddy into the first drive bay, suffer through the terribly long boot time of enterprise hardware, and open the one-time boot menu.

The only option is to boot from the network. The drive is not visible.

But Wait, It Gets Worse!

With the fifteen minutes I had left, I booted from an Arch Linux USB drive I had lying around, hoping that it wouldn't cause issues (foreshadowing), and ran fdisk -l to check whether the drive had survived the journey and was not dead.

Thankfully, my data was intact. My mind was racing: what to do now? Perhaps I had to reinstall Grub? And so I chroot'd into the drive, pulled up the documentation for rebuilding Grub, and was promptly interrupted by my phone telling me that I had to leave for the meeting right then or I would be late.

Calming Down, Just a Little

I leave for the meeting, show up too early (I could have worked on the server!), and return to my server an hour and a half later. At this point the downtime had reached about two hours.

I now had just a few more hours before my class. I thought to myself, "I can do this. I have time. I just need to approach this rationally."

In my mind I had narrowed it down to a few possibilities:

  1. Grub was broken
  2. The server could not find the EFI file
  3. The server was broken
  4. Some strange hardware incompatibility that would be hard to debug

After an hour of fumbling around with the Arch Linux ISO and running into weird incompatibility issues between the ISO's kernel and the installed one, I decided to give up and use the Rocky Linux ISO to recover the Rocky Linux install I had on my server. It took me another hour of reading through forums and paywalled documentation to realize that reinstalling Grub was as easy as dnf reinstall grub2-efi grub2-efi-modules.
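For posterity, the whole recovery boiled down to roughly the following sketch, run from the rescue ISO's shell. The device names and mount points here are assumptions; adjust them for your own partition layout.

```shell
# Mount the installed root and its EFI partition (device names assumed)
mount /dev/sda2 /mnt
mount /dev/sda1 /mnt/boot/efi

# Bind the virtual filesystems so dnf works inside the chroot,
# and copy DNS config so the chroot can reach the package mirrors
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
cp /etc/resolv.conf /mnt/etc/resolv.conf

# Reinstall the Grub EFI packages from inside the installed system
chroot /mnt dnf reinstall -y grub2-efi grub2-efi-modules
```

On EL-family distros, reinstalling the EFI packages restores the bootloader files on the EFI partition, so a separate grub2-install is generally not needed on UEFI systems.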

And yet, the server refused to detect the drive on boot.

The Troubleshooting Rabbit-Hole

I moved on to the next item on my list: the server couldn't find the EFI file. The issue was, efibootmgr showed that the boot menu entry already existed. The server found the boot entry just fine, after it had already booted. Changing the boot order with efibootmgr did nothing; the server refused to detect the SSD on boot.
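If you've never poked at it, efibootmgr is the tool for inspecting and reordering those firmware boot entries from within Linux. A quick sketch of what I was doing (the entry numbers here are made up):

```shell
# List the current boot entries and the boot order
efibootmgr

# Move entry 0003 (say, the Rocky Linux entry) to the front of the order
efibootmgr --bootorder 0003,0001,0000
```

The catch, of course, is that this only rearranges entries the firmware already knows about; it can't make the firmware see a drive it refuses to detect.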

There's no small amount of disappointment in my mind as I start to fear that my shiny old-new server is broken. I had already updated the iLO1, but the BIOS updates were hidden behind an HPE support contract. I do not have a contract. That did not stop me from finding the BIOS update files, however.

Now that I was no longer running firmware from 2015, I booted the server, checked the boot menu, and the SSD still was not visible.

There was only one thing left on the list: some weird hardware incompatibility. You see, this server has an integrated RAID card for redundant storage at the hardware level, and it was configured in passthrough mode. From tinkering around in the server's guts, I knew that there were additional SAS ports on the motherboard. The backplane was connected to the RAID card, so I swapped the connection over to the motherboard ports.

The boot menu showed my SSD. This was it, no more downtime.

I boot into the server, go to log in to set up networking, and get hit with ersei: no shell: permission denied.

Oh great, another problem. I don't really know for sure what caused the problem, but my bet is that the Arch Linux chroot messed with the SELinux labels on the drive. Booting into the recovery Rocky Linux USB and creating a /.autorelabel file fixed the problem.
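The fix itself is tiny: with the installed root mounted from the rescue USB, you drop an empty marker file at the root of the filesystem, and SELinux relabels everything on the next boot. The mount point here is an assumption.

```shell
# From the rescue environment, with the installed root mounted at /mnt (assumed)
touch /mnt/.autorelabel
reboot
```

Be warned that the relabel pass adds a one-time delay to that next boot while SELinux walks the whole filesystem.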

We were in! I set the public IPs, started the systemd services2, connected to the proxy VM, and loaded up my website on my phone.

It's up! My server has successfully moved! After sorting through the hundred or so Zabbix notifications and ensuring that everything was really back up, I was finally done!

Nope, Not Done Yet

See, now I have a server that isn't horribly weak. Naturally, I have to move everything over. I started with the networking and stopped using the proxy. For the first time in years, I finally had a public IP address for my website! I moved my DNS from Cloudflare to Porkbun (marginally more expensive) and started using Hurricane Electric as my nameserver. Why? Because Cloudflare doesn't let you set custom nameservers, and I wanted to run my own.

I moved the Minecraft servers from Oracle Cloud over to the local machine (the CPU graph in Zabbix no longer sits at zero; it's marginally above it now!) and shut off the Oracle Cloud VMs after the DNS changes had fully propagated. I moved my Minio from the singular SSD to the pool of drives that came with the server by making a new Minio instance and moving the buckets over with mc mirror (I now have erasure coding and some form of redundancy, yay!).
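The Minio move was roughly this shape, using the mc client. The alias names, endpoints, and bucket name below are placeholders, not my actual setup:

```shell
# Register both Minio deployments with the client
mc alias set old https://old-minio.example.com ACCESS_KEY SECRET_KEY
mc alias set new https://new-minio.example.com ACCESS_KEY SECRET_KEY

# Copy a bucket's contents from the old instance to the new one
mc mirror old/my-bucket new/my-bucket
```

Repeat the mirror for each bucket, then point your applications at the new endpoint once the copies check out.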

So that brings us to now. Believe it or not, I'm glossing over a bunch of minor issues because they really aren't as funny or interesting (mostly just mucking around in iLO or BIOS) and could just be a list of bullet points:

  • Intel TXT was enabled in BIOS which caused the Arch ISO to not boot
  • One of the fans had a shot bearing, so I got a replacement from the cannibalized remains of another server
  • Moving Minio was unnecessarily painful
  • The drives that came with the server did not have functioning SATA secure erase
  • Lots more that I do not remember

Wait, There's More?

Oh yeah. You see, I did not get the server on Valentine's Day. I got it a few days beforehand. Of course, I needed to get the environment all set up. Beyond just cleaning out the hardware, updating the iLO firmware, and setting the proper BIOS configuration options, I had to do some networking.

The rack's networking is set up like so: there is one upstream Gigabit network connection to the wall. We are allocated static IPv4 addresses and SLAAC'd IPv6 addresses. Anything that wants a static IP address connects to that network (via a network switch). There is an EdgeRouter X that has NAT enabled for connections that do not require a public address (or IPv6, we'll get to that later).

The main things connected to the EdgeRouter's NAT'd network are the iLO and iDRAC cards of various servers. Earlier, I had set up port forwarding at a high, random port, but this was not ideal, as anybody could plausibly access the management console of my server (although it is password-protected). After poking around in documentation and blog posts of varying age and quality, I installed ZeroTier on the EdgeRouter and set up ethernet bridging so that I could access the internal network from anywhere, as long as I was connected to the ZeroTier network. Despite my best efforts to build a newer ZeroTier package for the EdgeRouter, I gave up and decided to live with the older version, which worked well enough.

However, I could not, for the life of me, figure out how to pass a public IPv6 address through to the network behind the EdgeRouter. It was tempting to install OpenWRT on the EdgeRouter instead, but that would probably cause more problems than it would solve.

In the end, I gave up and resigned myself to a life of no IPv6 on the internal network.

Speaking of the Internal Network

I broke the network for everybody a few times.

To those whose Mastodon instances went down, I apologize.

Here's what happened: the EdgeRouter has only five ports, of which one is used for the upstream WAN connection, leaving four. We were reaching the point where more than four devices needed to connect to the EdgeRouter, so one had to be disconnected every now and then. We were timesharing the last remaining ethernet port on the EdgeRouter. This was not ideal. The large network switch, however, had 24 ports, of which fewer than half were used.

I had the genius idea to split the switch into two VLANs: the internal network and the external network. The ports on the right-hand side of the switch were for the internal network (connected to the EdgeRouter), and the left-hand side was connected to the WAN.

Here is a non-exhaustive list of what went wrong:

  1. I could not connect to the switch's management interface because of how the network was structured. It took an hour to piece together a suitable DB-9 serial-to-USB cable.
  2. Figuring out the serial connection parameters took another hour of trawling shady manuals websites and a lot of guesswork.
  3. Going through the basic setup program to reset the password from the defaults caused STP to activate, which Purdue's network didn't like, killing the connection to our network port for five minutes.
  4. In an attempt to avoid any more downtime, I connected two network connections to the switch from the wall, so I could disconnect the other and move the connections around to the right places. This also killed the connection to the port for five minutes.
  5. I got the numbering of the switch ports wrong, so the top half and the bottom half of the switch were split into two VLANs, not the left and right side.

I am never touching that switch again. On the plus side, all the potential failure modes are documented now!

Aside: Server Rails

Server rails are a fucking mess. It took weeks to find compatible rails for my server, and Lily's server is using the wrong rails and hanging down slightly. I eventually gave up looking for the right rails and decided that two left rails worked just fine (surprisingly well, actually, though the server ears did not lock in on one side, which was fine).

I had to trawl through literal buckets of server rails down at the Purdue surplus store to find the "matching" pairs. I ended up buying four left rails, because I could not find the matching right rails.

I wonder who has them.

So What's Next?

I have plenty of ideas for what to do next. In the time it took me to put this post together, I have already completed the following:

  1. A Nix Hydra server to build all of my custom packages for me
  2. My own authoritative DNS server (blog post soon!)
  3. Minecraft servers

At the time of writing, I still have the following planned:

  1. Redundant monitoring, so that I get notified if my server goes down
  2. Monitoring iLO
  3. Monitoring other self-hosted software
  4. Actually using the authoritative DNS servers, as soon as Purdue unblocks port 53

I cannot wait to write about how strange running an authoritative and redundant DNS server is. See you all in the next one.


Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.

I've also moved to my own Fediverse instance: @[email protected].


  1. Another computer that manages your server and can give you a remote console. 

  2. For some reason, all the systemd services were now disabled on boot.