2024-11-13 02:00:00
I swear, I'm surrounded by people who want to watch me waste my time. I was telling the Purdue Linux Users Group about the wonders of RSS and how great it is, but my friend overheard the conversation and said, jokingly, "You should do Bad Apple, over RSS". I whined, complained, claimed it would be "too easy" and that "I need a harder project". Alas, my harder projects have remained stagnant for a few weeks now, eschewed for schoolwork.
And so I ran Bad Apple on RSS.
The moment that I heard "Bad Apple" and "RSS" in the same sentence, I thought of a big feed that contains every frame as a feed entry. But that would be cheating, so my alternative plan formed fairly quickly: an RSS feed that, every time it is requested, updates to show a new frame. Writing the backend to generate the RSS feed was pretty easy. All I needed was to keep track of a query parameter and link that against an integer representing the frame, using PHP's APCu cache as a basic key/value store.
<?php
header('Content-Type: application/atom+xml; charset=utf-8'); // Return an Atom file
header('Content-Disposition: filename="ba.xml"'); // Download as an XML
$q = $_SERVER['QUERY_STRING']; // Get the query parameter (the bit after the question mark)
if ($q == "") { // If there is no query, don't return anything
    die();
}
$v_pat = "/^[a-f0-9]{16}$/i"; // Verify that the query parameter is a 16-character hexadecimal string
if (preg_match($v_pat, $q) != 1) {
    die();
}
$frame = 1; // Initial frame value
// Load frame if it exists, and update it
if (apcu_exists($q)) {
    $frame = apcu_fetch($q);
    apcu_store($q, $frame + 1);
} else {
    apcu_add($q, 1);
}
$frame = min($frame, 6572); // Cap frame at 6572
// Return the Atom feed
?>
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Bad Apple RSS</title>
<link href="https://ba.ersei.net/ba" />
<link rel="self" href="https://ba.ersei.net/ba.php?<?php echo $q;?>" />
<subtitle>Bad Apple as an RSS feed</subtitle>
<updated><?php echo date("c");?></updated>
<author>
<name>Ersei</name>
</author>
<id>https://ba.ersei.net/ba.php?<?php echo $q;?></id>
<entry>
<title>Bad Apple Frame <?php echo str_pad($frame, 4, "0", STR_PAD_LEFT);?></title>
<id>https://ba.ersei.net/img/<?php echo $frame;?></id>
<updated><?php echo date("c");?></updated>
<published><?php echo date("c");?></published>
<link href="https://ba.ersei.net/img/<?php echo $frame;?>"/>
<content type="html">
<![CDATA[
<p><img src="https://ba.ersei.net/img/<?php echo $frame;?>.jpg"></p>
]]>
</content>
</entry>
</feed>
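If you host ba.php somewhere yourself, you can watch the stateful-feed trick work by requesting it a few times with the same token and grepping for the entry title (the URL is a placeholder; any 16-character hex string works as the token):
$ curl -s 'https://example.com/ba.php?0123456789abcdef' | grep 'Bad Apple Frame'
$ curl -s 'https://example.com/ba.php?0123456789abcdef' | grep 'Bad Apple Frame'
Each request bumps the APCu counter for that token, which is what makes the frame number in the title advance.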
There seems to be a bug where the first frame shows up twice, but I don't care enough to figure out why that happens. All that matters is that it works. Now I just need to patch a feed reader to request the RSS feed 30 times a second...
I must've muttered that phrase dozens of times throughout this project. Despite the beautiful sensibilities of PHP, I could find no such reprieve in Javascript. I just needed an RSS reader that could handle refreshing articles smoothly every thirty-three milliseconds. I could've used a good RSS reader like FreshRSS or NewsFlash, but FreshRSS's reload model doesn't mesh with the "30 times a second" target I was going for, and I couldn't get NewsFlash to compile on NixOS, so I decided to go with something Javascript-y, as it would probably refresh how I wanted it to.
This was a mistake. Before I could stop myself, I dove into the code of Fluent Reader, the first Javascript-written desktop RSS reader that dredged itself up from the depths of my mind.
Compiling Fluent Reader from scratch led me to discover that it has not been updated in thirteen months, and that installing Electron through NPM doesn't work on NixOS. I'm sure that this would be fine.
I built Fluent Reader, replaced its Electron binary with my own, and launched it. I added my feed, and lo and behold, the first frame of "Bad Apple" emerges from the Microsoft-themed abyss.
Great! I locate the "refresh feeds" button, and tap my laptop touchpad as furiously as my fingers will allow. One small problem, however.
The frames are out of order? I thought about it for a moment, and realized that the Atom format returns dates and times accurate only to the nearest second. When the reader pulls multiple feed entries per second, the entries end up sorted alphabetically by title instead of by the time they were received.
Simple enough fix. I just had to find the code where the feed is being returned and change out the sort by title to be inverted. How hard could it be?
It took me thirty minutes to figure out how to sort the output. Maybe it's just because I'm bad at React. Maybe it's because nothing in this codebase is documented. Maybe it's because the universe is telling me that this is a stupid idea and that I'm stupid for even trying.
My fingers can tap-tap the refresh button fast enough for the video to kind of play now. Unfortunately, I get tired after a hundred frames and I'm too inconsistent.
Being a "good programmer", I decide to automate it. I'm already familiar with the code; I just need to add a setInterval that triggers the fetchItems function, all controlled with a nice toggle button.
My misplaced confidence is becoming something of a cliché. But! It's not all my fault! Fluent Reader uses something called React class components, and if you follow the link to the React documentation, you're greeted with a marvellous alert telling you to please, for the love of god, stop using class components.
It took me hours to figure out how to add one button that toggles the setInterval. Nothing was allowed to interact with anything else. I had to pass app state into places where there definitely should not have been app state. But once I figured out that I had to replace all of the minified vendored font files, my button finally worked!
If you would like to see the modifications I made to Fluent Reader, I uploaded my changes on Sourcehut.
Now I just had to show it off.
It took two hours to bend React to my will. I just had to record my screen playing the video, sync it to some music, and then become famous among a niche group of internet-dwellers.
The cliché continues, but it's still not my fault that my laptop thermal throttles at the slightest provocation. I connect my laptop to my server over Ethernet, and let the RSS reader rip. It resulted in a six and a half minute recording. For reference, Bad Apple is meant to only be four minutes or so. My CPU was pinned, Electron was struggling1, and neither my network bandwidth nor my server was the issue. Easy enough, I'll just scale the entire video by a constant factor to speed it up.
There's those accursed phrases again: "easy enough" and "just". When will I learn that it's never that easy? The recording was running at different speeds throughout.
I ended up spending a few hours setting up a couple dozen or so keyframes to synchronize my RSS Bad Apple with the actual video. This was made harder by the fact that I hadn't actually watched Bad Apple all the way through, so I kept getting confused about where I was (that may have been my mistake), and by the fact that the synchronization seemed to really mess with Kdenlive. I overlaid the real video on one of the little RSS boxes and watched closely to see if anything got out of sync. If it did, I added a keyframe to pull that part back into sync, and repeated the process until everything lined up.
If I didn't know the lyrics to Bad Apple going into this project, I certainly do know now.
In case that wasn't bad enough, YouTube now wanted my phone number before it would let me set my own thumbnail, and a facial scan before it would let me add a clickable link to this blog post in the description.
I figured it out though. Here's your video. I hope you enjoy it.
Questions? Thoughts? Concerns? Pure unbridled anger at the fact that this is the post I publish after months? Feel free to contact me!
I'm taking down ba.ersei.net in the meantime, just so that people don't try it out on my website.
It was probably because Electron was running on Wayland. Running it under Xwayland, it performed a lot better, but because my laptop has a HiDPI screen and fractional scaling on Xwayland is still broken on Sway (and I didn't have the foresight to turn down the scaling beforehand), I recorded it in pure Wayland mode. ↩
2024-10-30 06:40:00
This is meant as a tutorial on how to use a VPS to get a public IPv4 address for self-hosting reasons. Often, people want to run a server out of their college dorm that doesn't give them a public IPv4 address, or out of their house from behind CGNAT.
It's a simple solution and an excellent alternative to software like Rathole or Cloudflare Tunnel, because it passes the real connecting IP addresses through transparently. If you look at your webserver access logs while using software like Rathole or Cloudflare Tunnel, all the connections appear to arrive from 127.0.0.1, and not from the real client address.
As a quick introduction, we will create a Wireguard connection between the server without an IP address and a virtual machine in the cloud. Only incoming traffic will go through the cloud VM; all outbound traffic will continue as normal. This means there is no added latency for normal outbound internet use. The extra hop only matters when someone connects to your services.
This is not an endorsement, but Oracle Cloud and Google Cloud both have generous free tiers and will give you a static IPv4 address with one virtual machine. Pick a region that is geographically close to where your server is. The specs of the VM are not important—this is extremely lightweight. The most important component is bandwidth, as the bandwidth of this machine will become the bandwidth of the incoming connections. This VPS must be running a modern Linux distribution.
Verify that you have nftables installed on the VPS by running nft --version. If it is not installed, do so. Occasionally the nft command is kept in /usr/sbin, and may not be in the path of a non-root user. Additionally, some distributions may come with alternative firewall software, such as firewalld or ufw. Please ensure those are uninstalled.
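The package is just called nftables on the common distributions; a quick sketch of the install step, mirroring the Wireguard commands below:
cloudvm# apt install nftables    # Debian-based distributions
cloudvm# dnf install nftables    # Fedora-based distributions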
Install Wireguard on both machines. For Debian-based distributions, this will look like sudo apt install wireguard, and for Fedora-based distributions, sudo dnf install wireguard-tools.
On the cloud provider, open the port 51820/UDP in the cloud firewall. Instructions vary by provider.
Then, on the cloud virtual machine, create the file /etc/nftables/proxy.nft:
table ip nat {
    chain PREROUTING {
        type nat hook prerouting priority dstnat; policy accept;
        meta l4proto tcp tcp dport 80 dnat to 192.168.77.2:80
        meta l4proto tcp tcp dport 443 dnat to 192.168.77.2:443
    }

    chain INPUT {
        type nat hook input priority 100; policy accept;
    }

    chain POSTROUTING {
        type nat hook postrouting priority srcnat; policy accept;
    }

    chain OUTPUT {
        type nat hook output priority -100; policy accept;
    }
}
Also create the file /etc/nftables/main.nft:
# Sample configuration for nftables service.
# Load this by calling 'nft -f /etc/nftables/main.nft'.
# Note about base chain priorities:
# The priority values used in these sample configs are
# offset by 20 in order to avoid ambiguity when firewalld
# is also running which uses an offset of 10. This means
# that packets will traverse firewalld first and if not
# dropped/rejected there will hit the chains defined here.
# Chains created by iptables, ebtables and arptables tools
# do not use an offset, so those chains are traversed first
# in any case.
# drop any existing nftables ruleset
flush ruleset
# a common table for both IPv4 and IPv6
table inet nftables_svc {

    # protocols to allow
    set allowed_protocols {
        type inet_proto
        elements = { icmp, icmpv6 }
    }

    # interfaces to accept any traffic on
    set allowed_interfaces {
        type ifname
        elements = { "lo" }
    }

    # services to allow
    set allowed_tcp_dports {
        type inet_service
        elements = { ssh, http, https, 51820 }
    }

    # services to allow
    set allowed_udp_dports {
        type inet_service
        elements = { 51820 }
    }

    # this chain gathers all accept conditions
    chain allow {
        ct state established,related accept
        meta l4proto @allowed_protocols accept
        iifname @allowed_interfaces accept
        tcp dport @allowed_tcp_dports accept
        udp dport @allowed_udp_dports accept
    }

    # base-chain for traffic to this host
    chain INPUT {
        type filter hook input priority filter + 20
        policy accept

        jump allow
        reject with icmpx type port-unreachable
    }
}
include "/etc/nftables/proxy.nft"
This firewall rule will NOT close SSH access. If you have publicly available SSH, that is a bad idea, and you should adjust allowed_tcp_dports to not include SSH. This default configuration will only pass through HTTP and HTTPS. Adjust allowed_tcp_dports to allow your TCP port, and allowed_udp_dports to allow your UDP port. In the first file, use the example HTTP/HTTPS configuration to forward another port. Keep in mind that this port forwarding will take priority! If you have SSH open to the VPS and you try forwarding SSH, you WILL lose SSH access!
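For example, to also forward a hypothetical TCP and UDP port 25565 (a Minecraft server, say), the extra lines in the PREROUTING chain of proxy.nft would mirror the HTTP/HTTPS ones, and 25565 would also need to be added to allowed_tcp_dports and allowed_udp_dports in main.nft:
meta l4proto tcp tcp dport 25565 dnat to 192.168.77.2:25565
meta l4proto udp udp dport 25565 dnat to 192.168.77.2:25565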
Add the line include /etc/nftables/main.nft; at the end of the file /etc/nftables.conf (the semicolon is important), and then restart the firewall (and ensure it persists across reboots):
cloudvm# systemctl enable nftables
cloudvm# systemctl restart nftables
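A quick sanity check that the rules actually loaded; the DNAT rules from proxy.nft should show up under table ip nat:
cloudvm# nft list ruleset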
Finally, enable IP forwarding and make it persist across reboots:
cloudvm# sysctl -w net.ipv4.ip_forward=1
cloudvm# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf
First, set up the Wireguard keys. On the cloud VM, run this command as root:
cloudvm# wg genkey | tee privatekey | wg pubkey > publickey
Keep these generated files (privatekey, publickey) in a safe place.
Repeat this generation command on the other machine.
Next, create the file /etc/wireguard/wg0.conf on the cloud VM:
[Interface]
Address = 192.168.77.1/24
ListenPort = 51820
PrivateKey = [PRIVATE_KEY_GENERATED_ON_THE_CLOUD_VM]
[Peer]
PublicKey = [PUBLIC_KEY_GENERATED_ON_THE_HOME_SERVER]
AllowedIPs = 192.168.77.2/32
PersistentKeepalive = 30
Create the file /etc/wireguard/wg0.conf on the other machine:
[Interface]
PrivateKey = [PRIVATE_KEY_GENERATED_ON_THIS_MACHINE]
Address = 192.168.77.2/32
Table = 123
PreUp = ip rule add from 192.168.77.2 table 123 priority 456
PostDown = ip rule del from 192.168.77.2 table 123 priority 456
[Peer]
PublicKey = [PUBLIC_KEY_GENERATED_ON_THE_CLOUD_VM]
Endpoint = [IP_ADDRESS_OF_THE_CLOUD_VM]:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 30
The Table, PreUp, and PostDown lines on the home server set up policy routing: replies to anything that arrived over the tunnel (that is, traffic sourced from 192.168.77.2) go back out through the tunnel, while everything else keeps using the normal default route. Then start and persist the Wireguard tunnel on both machines:
cloudvm# systemctl enable --now wg-quick@wg0
server# systemctl enable --now wg-quick@wg0
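To confirm the tunnel came up, check for a recent handshake and ping the home server over its tunnel address from the configs above:
cloudvm# wg show wg0
cloudvm# ping -c 3 192.168.77.2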
That should be all you need to have a public static IPv4 address when self-hosting in an environment where you don't have any such address. If you have any questions, feel free to contact me. Please try fixing your own problem before asking me for help, though. If you do ask me for help, please be as descriptive as possible and tell me the troubleshooting steps you've taken. I'll ignore the cry for help otherwise.
2024-07-01 21:20:00
Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.
Like all good projects, this began with an Idea.
My mind reached out and grabbed wispy tendrils from the æther, forcing the disparate concepts to coalesce. The Mass gained weight in my hands, and a dark, swirling colour promising doom to those who gazed into it for long.
On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.
Booting Linux off of a Google Drive root.
I wanted this to remain self-contained, so I couldn't have a second machine act as a "helper". My mind went immediately to FUSE, which lets a userspace program act as a filesystem driver (with cooperation from the kernel).
I just had to get FUSE programs installed in the Linux kernel initramfs and configure networking. How bad could it be?
The Linux boot process is, technically speaking, very funny. Allow me to pretend I understand for a moment1:
1. The firmware does its thing, finds the bootloader (or, in my case, a unified EFI image), and runs it.
2. The kernel gets loaded into memory along with the initramfs, a small, temporary root filesystem that lives entirely in RAM.
3. The kernel runs the init script inside the initramfs, whose whole job is to find the real root filesystem, mount it, and hand control over to it.
4. The real init (systemd, here) takes over and starts userspace.
As strange as the third step may seem, it's very helpful! We can mount a FUSE filesystem in that step and boot normally.
The initramfs needs to have both network support as well as the proper FUSE binaries. Thankfully, Dracut makes it easy enough to build a custom initramfs.
I decide to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works, as opposed to something like Alpine.
$ git clone https://github.com/dracutdevs/dracut
$ podman run -it --name arch -v ./dracut:/dracut docker.io/archlinux:latest bash
In the container, I installed some packages (including the linux package because I need a functioning kernel), compiled dracut from source, and wrote a simple module script in modules.d/90fuse/module-setup.sh:
#!/bin/bash

# Tell dracut whether this module can be included: it can, as long as the
# FUSE userspace tools exist on the build host.
check() {
    require_binaries fusermount fuseiso mkisofs || return 1
    return 0
}

# No dependencies on other dracut modules.
depends() {
    return 0
}

# Copy the FUSE binaries (and their libraries) into the generated initramfs.
install() {
    inst_multiple fusermount fuseiso mkisofs
    return 0
}
That's it. That's all the code I had to write. Buoyed by my newfound confidence, I powered ahead, building the EFI image.
$ ./dracut.sh --kver 6.9.6-arch1-1 \
--uefi efi_firmware/EFI/BOOT/BOOTX64.efi \
--force -l -N --no-hostonly-cmdline \
--modules "base bash fuse shutdown network" \
--add-drivers "target_core_mod target_core_file e1000" \
--kernel-cmdline "ip=dhcp rd.shell=1 console=ttyS0"
$ qemu-kvm -bios ./FV/OVMF.fd -m 4G \
-drive format=raw,file=fat:rw:./efi_firmware \
-netdev user,id=network0 -device e1000,netdev=network0 -nographic
...
...
dracut Warning: dracut: FATAL: No or empty root= argument
dracut Warning: dracut: Refusing to continue
Generating "/run/initramfs/rdsosreport.txt"
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
To get more debug information in the report,
reboot with "rd.debug" added to the kernel command line.
Dropping to debug shell.
dracut:/#
Hacker voice I'm in. Now to enable networking and mount a test root. I have already extracted an Arch Linux root into an S3 bucket running locally, so this should be pretty easy, right? I just have to manually set up networking routes and load the drivers.
dracut:/# modprobe fuse
dracut:/# modprobe e1000
dracut:/# ip link set lo up
dracut:/# ip link set eth0 up
dracut:/# dhclient eth0
dhcp: PREINIT eth0 up
dhcp: BOUND setting up eth0
dracut:/# ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
dracut:/# s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
dracut:/# ls /sysroot
bin dev home lib64 opt root sbin sys usr
boot etc lib mnt proc run srv tmp var
dracut:/# switch_root /sysroot /sbin/init
switch_root: failed to execute /lib/systemd/systemd: Input/output error
dracut:/# ls
sh: ls: command not found
Honestly, I don't know what I expected. Seems like everything is just... gone. Alas, not even tab completion can save me. At this point, I was stuck. I had no idea what to do. I spent days just looking around, poking at the switch_root source code, all for naught. Until I remembered a link Anthony had sent me: How to shrink root filesystem without booting a livecd. In there, there was a command called pivot_root that switch_root seems to call internally. Let's try that out.
dracut:/# logout
...
[ 430.817269] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100 ]---
...
dracut:/# cd /sysroot
dracut:/sysroot# mkdir oldroot
dracut:/sysroot# pivot_root . oldroot
pivot_root: failed to change root from `.' to `oldroot': Invalid argument
Apparently, pivot_root is not allowed to pivot roots if the root being switched is in the initramfs. Unfortunate. The Stack Exchange answer tells me to use switch_root, which doesn't work either. However, part of that answer sticks out to me:
initramfs is rootfs: you can neither pivot_root rootfs, nor unmount it. Instead delete everything out of rootfs to free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs with the new root (cd /newmount; mount --move . /; chroot .), attach stdin/stdout/stderr to the new /dev/console, and exec the new init.
Would it be possible to manually switch the root without a specialized system call? What if I just chroot?
...
dracut:/# mount --rbind /sys /sysroot/sys
dracut:/# mount --rbind /dev /sysroot/dev
dracut:/# mount -t proc /proc /sysroot/proc
dracut:/# chroot /sysroot /sbin/init
Explicit --user argument required to run as user manager.
Oh, I need to run the chroot command as PID 1 so Systemd can start up properly. I can actually tweak the initramfs's init script and just put my startup commands in there, and replace the switch_root call with exec chroot /sysroot /sbin/init.
I put this in modules.d/99base/init.sh in the Dracut source after the udev rules are loaded, and bypassed the root variable checks earlier.
modprobe fuse
modprobe e1000
ip link set lo up
ip link set eth0 up
dhclient eth0
ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
mount --rbind /sys /sysroot/sys
mount --rbind /dev /sysroot/dev
mount -t proc /proc /sysroot/proc
I also added exec chroot /sysroot /sbin/init at the end instead of the switch_root command.
Rebuilding the EFI image and...
I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought't've stopped me, right?
Nobody stopped me, so I kept going.
I log in with the very secure password root as root, and it unceremoniously drops me into a shell.
[root@archlinux ~]# mount
s3fs on / type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
...
[root@archlinux ~]#
At last, Linux booted off of an S3 bucket. I was compelled to share my achievement with others—all I needed was a fetch program to include in the screenshot:
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
core.db failed to download
error: failed retrieving file 'core.db' from geo.mirror.pkgbuild.com : Could not resolve host: geo.mirror.pkgbuild.com
warning: fatal error from geo.mirror.pkgbuild.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.rackspace.com : Could not resolve host: mirror.rackspace.com
warning: fatal error from mirror.rackspace.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.leaseweb.net : Could not resolve host: mirror.leaseweb.net
warning: fatal error from mirror.leaseweb.net, skipping for the remainder of this transaction
error: failed to synchronize all databases (invalid url for server)
[root@archlinux ~]#
Uh, seems like DNS isn't working, and I'm missing dig and other debugging tools.
Wait a minute! My root filesystem is on S3! I can just mount it somewhere else with functional networking, chroot in, and install all my utilities!
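A rough sketch of that detour, reusing the s3fs options from earlier (the /mnt/s3root mount point is just a name I picked, and s3fs still needs its credentials set up the usual way):
laptop# mkdir -p /mnt/s3root
laptop# s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /mnt/s3root
laptop# mount --rbind /dev /mnt/s3root/dev
laptop# mount --rbind /sys /mnt/s3root/sys
laptop# mount -t proc /proc /mnt/s3root/proc
laptop# cp /etc/resolv.conf /mnt/s3root/etc/resolv.conf    # so DNS works inside the chroot
laptop# chroot /mnt/s3root /bin/bash    # then pacman -S whatever tools are missing
Anything installed this way lands straight in the bucket, so it's there on the next boot.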
Some debugging later, it seems like systemd-resolved doesn't want to run because it Failed to connect stdout to the journal socket, ignoring: Permission denied. I'm not about to try to debug systemd because it's too complicated and I'm lazy, so instead I'll just use Cloudflare's.
[root@archlinux ~]# echo "nameserver 1.1.1.1" > /etc/resolv.conf
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
core is up to date
extra is up to date
...
[root@archlinux ~]# fastfetch
I look around, making sure that nobody had tried to stop me. My window was intact, my security system had not tripped, the various canaries I had set up around the house had not been touched. I was safe to continue.
I was ready to have it run on Google Drive.
There's already a project that does Google Drive over FUSE for me: google-drive-ocamlfuse. Thankfully, I have a Google account lying around that I haven't touched in years, ready to go! I follow the instructions, accept the terms of service I didn't read, create all the oauth2 secrets, enable the APIs, install google-drive-ocamlfuse from the AUR into my Arch Linux VM, patch some PKGBUILDs (it's been a while), and lo and behold! I have mounted Google Drive! Mounting Drive and a few very long rsync runs later, I have Arch Linux on Google Drive.
Just kidding, it's never that easy. Here's a non-exhaustive list of problems I ran into:
- Symlinks, especially relative ones, don't really work on Google Drive (and there are a lot of them in /usr/lib)
- rsync tripping over files it can't copy (stuff that lives under /proc and isn't mounted, or stuff that just hasn't copied over yet)
With how many problems there are with symlinks, I have half a mind to change the FUSE driver code to just create a file that ends in .internalsymlink to fix all of that, Google Drive compatibility be damned.
But, I have challenged myself to do this without modifying anything important (no kernel tweaking, no FUSE driver tweaking), so I'll just have to live with it and manually create the symlinks that rsync fails to make, with a hacky sed command applied to the rsync error logs.
In the meantime, I added the token files generated from my laptop into the initramfs, as well as the Google Drive FUSE binary and SSL certificates, and tweaked a few settings2 to make my life slightly easier.
...
inst ./gdfuse-config /.gdfuse/default/config
inst ./gdfuse-state /.gdfuse/default/state
find /etc/ssl -type f -or -type l | while read file; do inst "$file"; done
find /etc/ca-certificates -type f -or -type l | while read file; do inst "$file"; done
...
It's nice to see that timestamps kinda work, at least. Now all that's left is to wait for the agonizingly slow boot!
chroot: /sbin/init: File not found
Perhaps they did not bother to stop me because they knew I would fail.
I know the file exists since, well, it exists, so why is it not found? Simple: Linux is kinda weird and if the binary you call depends on a library that's not found, then you'll get "File not found".
dracut:/# ldd /sysroot/bin/bash
linux-vdso.so.1 (0x00007e122b196000)
libreadline.so.8 => /usr/lib/libreadline.so.8 (0x00007e122b01a000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007e122ae2e000)
libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007e122adbf000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007e122b198000)
However, these symlinks don't actually exist! Remember how earlier we noted that relative symlinks don't work? Well, that's come back to bite me. The Kernel is looking for files in /sysroot inside /sysroot/sysroot. Luckily, this is an easy enough fix: we just need to have /sysroot linked to /sysroot/sysroot without links:
dracut:/# mkdir /sysroot/sysroot
dracut:/# mount --rbind /sysroot /sysroot/sysroot
Now time to boot!
It took five minutes for Arch to rebuild the dynamic linker cache, another minute per systemd unit, and then, nothing. The startup halted in its tracks.
[ TIME ] Timed out waiting for device /dev/ttyS0.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
Guess I have to increase the timeout and reboot. In /etc/systemd/system/dev-ttyS0.device, I put:
[Unit]
Description=Serial device ttyS0
DefaultDependencies=no
Before=sysinit.target
JobTimeoutSec=infinity
Luckily, it did not take infinite time to boot.
I'm so close to victory I can taste it! I just have to increase another timeout. I set LOGIN_TIMEOUT to 0 in /etc/login.defs in Google Drive, and tried logging in again.
Thankfully, there's a cache, so subsequent file reads aren't nearly as slow.
Here I am, laurel crown perched upon my head, my chimera of Linux and Google Drive lurching around.
But I'm not satisfied yet. Nobody had stopped me because they want me to succeed. I have to take this further. I need this to work on real hardware.
Fortunately for me, I switched servers and now have an extra laptop with no storage just lying around! A wonderful victim3 for my test!
There are a few changes I'll have to make, first among them the network drivers: real hardware doesn't use QEMU's emulated e1000. All I need is the r8169 driver for my ethernet port, and let's throw a Powerline adapter into the mix, because it's not going to impact the performance in any way that matters, and I don't have an ethernet cord that can reach my room.
I build the unified EFI file, throw it on a USB drive under /EFI/BOOT, and stick it in my old server. Despite my best attempts, I couldn't figure out what the modprobe directive is for the laptop's built-in keyboard, so I just modprobed hid_usb and used an external keyboard to set up networking.
This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
Nice thing is, I can just grab the screenshot4 from Google Drive and put it here!
Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.
If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.
Unfortunately, I don't know what to do next with this. Maybe I should install Nix?
Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.
2024-04-02 04:10:00
As some of you may know, I go to college in the midwestern United States. This means that the land here is pretty flat, and my classes are like a mile from my apartment. Why would I want to walk back and forth multiple times a day when I could do it in style?
Of course, I am not the only one beset by this plight. Popular solutions include:
These are all valid options, and do not include some of the more esoteric solutions I have seen around campus. After all, Purdue is an engineering school, and midwestern engineering is the best kind of engineering. Of course, I entered college with delusions of grandeur. I had been watching too much engineering YouTube recently, so I had an idea of what I would need in terms of motor controllers, brushless motors, LiPo batteries, and so on.
Yeah, I never ended up doing anything with that budget scooter from Wal·Mart, other than ride it around for the first month and a half of my college career, burn out the wheels, and sell it for a few bucks to someone else.
Thus begins my journey into skating.
Many many years ago, in a state far far away, little Ersei saw the Cool Older Kids (they were like ten years old at the time) on their new RipStiks. For those out of the know, a RipStik is a caster board, so two pivoting wheels on two joined twisty boards that you can move by twisting your lower body.
Years later, I got my wish and became the sickest, most badical person on the block. Buoyed by my ego, I went down the fairly-steep hill in my neighbourhood, hit a pebble, and tumbled a solid ten feet.
Luckily, I landed halfway in the grass and only got a few scrapes (and I was wearing a helmet).
This exemplified the largest issue with my dreams of skating around as a means of transport: the wheels were unforgiving, and the roads were brutal. Of course, I ignored all of those warning signs and continued with my dreams.
I was back home in October, and I had come with an idea: I would take my old, decrepit, and waterlogged skateboard that my parents bought for me a decade ago to soothe my jealousy of the older kids, and make it work for me. In previous years, it had been used exclusively as a seat for the swing I hung from a sturdy tree around the time I got it.
My friend, Merrick, tried to teach me how to skate all those years ago, but I was too scared to get on the skateboard. His voice lives on in my head, echoing the one bit of knowledge that has survived the test of time: "bigger wheels means you go faster". I will never forget that.
I put bigger wheels I cannibalized from my old RipStik on the skateboard and left everything else as-is. Eagle-eyed readers will notice a crack in the plastic. This will come back to haunt me later.
As I got back to college after whatever break happened during October 11, 2022, I got to learning how to skate. After a few days and one helmet, I got the hang of it and started using it to get around campus.
Soon afterward, I hit something in the road, fell on my face, and messed up my hand and wrist1. I refuse to upload a picture.
Naturally, this did not stop me, and I continued to skate around.
This skateboard was trying to kill me. There is no other explanation. A few of my friends who had been skating for years tried out the abomination I made and collectively agreed to never touch it again. And yet I continued, as skateboard parts are expensive.
However, I was flush with money from graduation, and a few Amazon gift cards later, I decided to overhaul my skateboard. I spent a few bucks putting on new grip tape (badly) as the original board had practically none left. I got some secondhand trucks, nice bearings, and big squishy wheels. For safety.
And wow was this a lot better. The only part I kept around was the deck, since the Bumblebee decal had since become iconic2.
I did continue to eat asphalt on occasion as I got better at skateboarding. Here is a non-exhaustive list of the lessons I've learned so far:
These lessons were written in blood. Don't worry, I jumped off of the skateboard before I crashed into the person who started crossing the street without looking, and the car driven by someone on their phone turning into the bike lane3.
A great artist blames everyone else, I guess.
No, since this skateboard I've put together is too bottom-heavy to do any kind of tricks. I can do a manual, and jump curbs, but that's about it.
A small wrench thrown into my plans for tricks is that my skateboard deck is apparently really small. It's a child's deck, since, y'know, it was originally bought for a child (me). It also seems like new decks are expensive, so whenever this one breaks I'll upgrade to a properly-sized one.
Ignoring the surface level arguments that I can throw out, like the "convenience" of being able to carry it around, or how "cool" skating looks, or how "low-maintenance"4 a skateboard is, the reason I like skating is that it is fun.
In the end, why do anything if it's not enjoyable, either now or for delayed gratification? I can't count the number of nights I was feeling terrible, and skating outside for a few miles made me feel a lot better. Skateboarding is enjoyable, much like writing. There is always a risk to be taken with activities, and I've gotten good enough that I no longer worry about falling off my skateboard and getting hurt.
There's a certain amount of enjoyment that you can derive from anything, and a good balance of enjoyment and need is necessary. An activity that is pure enjoyment with no need (Hermitcraft) must be balanced out by something that is all need and no enjoyment (homework). Skateboarding lies somewhere in between, as an activity that you need to do, but also enjoy doing. It balances itself. Spending a few hours aimlessly skating around and enjoying the weather is offset by the need to get to class on time.
I'm not alone in enjoying skateboarding. There are plenty of people around campus who get around with longboards and skateboards, or even something weirder.
That brings me to hobbies. Hobbies ride that line between work and pleasure, a mix of "need" and "want". Take selfhosting as an example—what started out as learning and fun has evolved into a "need", in which I now need to keep my DNS server up, my RSS reader, my photo backups, and so on. But that "need" does not make it bad! Quite the contrary, it balances out the "want" of selfhosting, and makes it, in my opinion, ultimately more enjoyable.
This can be used in the opposite direction too. With the rise of "hustle culture", and the rampant "monetize your hobby", it swings the balance in the other direction. What was once enjoyable is now work, and it becomes unsatisfying and sad.
That does bring into question going into the technology industry as someone who is already pursuing computers as a hobby, much like myself. In high school, computers were a hobby for me. I was not graded on how well I could do computer science, but in college I am.
The scale has shifted, and computer science and programming is not nearly as fun anymore. Staying consistent with my argument, however, I have found other activities to balance it out, like skateboarding, making jewelry, and writing, all things that can not be monetized (or if they can, they won't make enough money to be worthwhile). However, this is not sustainable, as every hobby requires time, and time is finite. Adding more work and expecting to balance it will not succeed the moment the time required for work is more than the time required for enjoyment.
Please don't turn your hobby into a job. It's not worth it.
Gladly! My skateboard is a 7.75-inch-wide deck with 70 millimeter 78A Sector 9 Nineballs wheels (they're pretty worn, and I wanna try harder wheels to learn to slide), Bones Reds steel bearings (maybe I'll upgrade to ceramic if I want to do more wet-weather riding), and Paris 129 millimeter Street TKP trucks (lubricated with bar soap). The only original parts are the deck and the screws holding the trucks on, but those are rusted through, and I need to replace them.
It's developing a nasty razor tail, and the board is beginning to chip a little at the back, mostly since I dropped it.
I avoid riding it if water has pooled outside, but it's fine to ride when the roads are merely wet, since the bearings are properly lubricated and sealed and the wheels are soft enough to keep their grip.
I don't wear anything special when skateboarding, just jeans, a hoodie/beanie if it's not warm enough, and gloves to protect my hands if it's cold or if I am scared of falling off that day. I just wear my normal shoes, but that's a bad idea since the grip has disappeared in like two months.
The skateboard is a bit high off of the ground and the wheels are pretty soft, so it does take more energy to skate around compared to normal skateboards or dropped longboards5. The height does help when going off of curbs, so that's nice.
Still have questions about my skateboarding hobby? Feel free to contact me.
This was written for April Cools, and I've been meaning to write, so here y'all go.
I've written the word skate so many times that it's beginning to look weird.
Second time I've messed up my wrist, the first was falling off of my bike like a decade ago. ↩
People actually recognize my skateboard around campus because of that and people come up to me at one in the morning in a Five Guys to comment on it. I do like the attention. ↩
This was a strange story, but I had not five minutes prior fallen off my skateboard. If I had not, then the timing would be perfect for me to hit the car. The person coming the opposite direction on their electric skateboard was not nearly so lucky, but they managed to dodge the car as well, and was not badly hurt. ↩
Once every month or so, I open up the ball bearings, clean them out, and put new lubricant in them. ↩
A longboard that has the wheels mounted higher than the rest of the deck so the deck sits closer to the ground. ↩
2024-02-26 23:45:00
After years of yearning, longing, and pining (let's not forget pining), I finally got myself an actual server. Or rather, a friend of mine got sick of listening to me whine about my old, terrible server, and gave me a spare that he cannibalized for parts (but was still very good and usable).
I am now the proud owner of an HPE ProLiant DL360 Gen9.
This story starts a few months back, with the Purdue Linux Users Group (of which I am an officer) deciding to put together a mini-datacenter for students to use. It's nothing too fancy, really just a 12U rack sitting in a room with a handful of static IPs from Purdue University. I like the rack, it's a nice source of white noise for when I need a good study space.
I'm the server second-from-the-bottom! My friend has the top rack mount server (and the precarious hard-drives), and my other friend has the bottom-most server.
I got the server, set up the BIOS configurations, and thought to myself, "What the hey, might as well just take the drive out of my old server and put it in the new one!"
I made the mistake of appending that line of thought with the ill-fated words, "What could go wrong?"
It is the afternoon of Valentine's Day. I have a meeting in less than an hour, and I have blocked out time later for Valentine's Day Activities™. There are but a few scant hours that day to work on the server. I have less than an hour to work on the server.
Of course, I chose this time to move the drive over.
I skateboard the half-odd mile to my dorm, grab my laptop charger (as I neglected to take it with me in the morning), break it down into the two cables, and stuff it into my jacket. With trepidatious fingers, I shut off my laptop-server, fumble with the screwdriver for too long, and extract the expensive SSD I put in there after my last drive failure, proprietary laptop caddy and all. I wrap the drive in a paper bag to hopefully insulate it against my winter jacket's static, lock up my room, and skateboard as fast as I can back to the datacenter.
The adrenaline had taken over my body. I unwrapped the drive, praying that it was intact, and removed the four screws holding it into the Dell laptop caddy. A glance at my phone tells me I have less than thirty minutes before I have to go to my meeting.
I look at the screws I extracted from the drive caddy, and the horror of realization dawns on me—these screws look too short to be used in the server's drive caddies. The panic sets in. My hands are unsteady; I can't align the screws and the drive. I try to put the screws in anyway, out of desperation, and three go in. The fourth stubbornly refuses.
Three out of four is good enough for me. Less than twenty minutes left.
I put the caddy into the first drive bay, suffer through the terribly long boot time of enterprise hardware, and open the one-time boot menu.
I had only the option to boot from the network. The drive was not visible.
With the fifteen minutes I had left, I booted from an Arch Linux USB drive I had lying around, hoping that it wouldn't cause issues (foreshadowing), and ran fdisk -l to see whether the drive had survived the journey or was dead.
Thankfully, my data was intact. My mind is racing—what to do now? Perhaps I have to reinstall Grub? And so I chroot'd into the drive, pulled up the documentation for rebuilding Grub, and was promptly interrupted by my phone telling me that I had to leave now for the meeting otherwise I would be late.
I leave for the meeting, show up too early (I could have worked on the server!), and return to my server an hour and a half later. At this point the downtime had reached about two hours.
I now had just a few more hours before my class. I thought to myself, "I can do this. I have time. I just need to approach this rationally."
In my mind I had narrowed it down to a few possibilities:
- Grub was broken and needed to be reinstalled
- The server couldn't find the EFI file
- Some weird hardware incompatibility was hiding the drive
After an hour of fumbling around with the Arch Linux ISO and running into weird incompatibility issues with the differing installed Kernel versions, I decided to give up and use the Rocky Linux ISO to recover the Rocky Linux install I had on my server. It took me another hour of reading through forums and paywalled documentation to realize that reinstalling Grub was as easy as dnf reinstall grub2-efi grub2-efi-modules.
And yet, the server refused to detect the drive on boot.
I moved on to the next item on my list: the server couldn't find the EFI file. The issue was, efibootmgr showed that the boot menu entry already existed. The server found the boot entry just fine, after it had already booted. Changing the boot order in efibootmgr did nothing—the server refused to detect the SSD on boot.
There's no small amount of disappointment in my mind as I start to fear that my shiny old-new server is broken. I had already updated the iLO1, but the BIOS updates were hidden behind the HPE support contract. I do not have a contract. That did not stop me from finding the files and updating the BIOS, however.
Now that I was no longer running firmware from 2015, I booted the server, checked the boot menu, and the SSD still was not visible.
There was only one thing left on the list: some weird hardware incompatibility. You see, this server has an integrated RAID card for redundant storage at the hardware level, and it was configured in passthrough mode. From tinkering around in the server's guts, I knew that there were additional SAS ports on the motherboard. The backplane was connected to the RAID card, so I swapped the connection over to the onboard SAS ports.
The boot menu showed my SSD. This was it, no more downtime.
I boot into the server, go to log in to set up networking, and get hit with ersei: no shell: permission denied.
Oh great, another problem. I don't know for sure what caused it, but my bet is that the Arch Linux chroot messed with the SELinux labels on the drive. Booting into the recovery Rocky Linux USB and creating a /.autorelabel file fixed it.
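For reference, the fix itself is tiny. A sketch, assuming the rescue environment mounts the installed system at /mnt/sysroot (the exact mount point depends on the rescue image):
rescue# touch /mnt/sysroot/.autorelabel
rescue# reboot
The empty file tells SELinux to relabel the entire filesystem on the next boot, after which it gets deleted automatically.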
We were in! I set the public IPs, started the systemd services2, connected to the proxy VM, and loaded up my website on my phone.
It's up! My server has successfully moved! After sorting through the hundred or so Zabbix notifications and ensuring that everything was really back up, I was finally done!
See, now I have a server that isn't horribly weak. Naturally, I have to move everything over. I started with the networking and stopped using the proxy. For the first time in years, I finally had a public IP address for my website! I moved my DNS from Cloudflare to Porkbun (marginally more expensive) and started using Hurricane Electric as my nameserver. Why? Because Cloudflare doesn't let you set custom nameservers, and I wanted to run my own.
I moved the Minecraft servers in Oracle Cloud over to the local machine (the CPU graph in Zabbix doesn't look like it's at zero anymore; it's marginally above it now!) and shut off the Oracle Cloud VMs after the DNS changes had fully propagated. I moved my Minio from the singular SSD to the pool of drives that came with the server by making a new Minio instance and moving the buckets over with mc mirror (I now have erasure coding and some form of redundancy, yay!).
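For the curious, the Minio move boils down to a couple of commands once both instances are registered as mc aliases (the alias names, endpoints, and bucket name here are placeholders):
$ mc alias set oldminio http://127.0.0.1:9000 OLD_ACCESS_KEY OLD_SECRET_KEY
$ mc alias set newminio http://127.0.0.1:9001 NEW_ACCESS_KEY NEW_SECRET_KEY
$ mc mirror oldminio/my-bucket newminio/my-bucket
Repeat the mirror for each bucket, point everything at the new endpoint, and the old instance can be retired.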
So that brings us to now. Believe it or not, I'm glossing over a bunch of minor issues because they really aren't as funny or interesting (mostly just mucking around in iLO or BIOS) and could just be a list of bullet points.
Oh yeah. You see, I did not get the server on Valentine's Day. I got it a few days beforehand. Of course, I needed to get the environment all set up. Beyond just cleaning out the hardware, updating the iLO firmware, and setting the proper BIOS configuration options, I had to do some networking.
The rack's networking is set up like so: there is one upstream Gigabit network connection to the wall. We are allocated static IPv4 addresses and SLAAC'd IPv6 addresses. Anything that wants a static IP address connects to that network (via a network switch). There is an EdgeRouter X that has NAT enabled for connections that do not require a public address (or IPv6, we'll get to that later).
The main things connected to the EdgeRouter's NAT'd network are the iLO and iDRAC cards of various servers. Earlier, I had set up port forwarding on a high, random port, but this was not ideal, as anybody could plausibly access the management console of my server (although it is password-protected). After poking around in documentation and blog posts of varying age and quality, I installed ZeroTier on the EdgeRouter and set up ethernet bridging so that I could access the internal network from anywhere, as long as I was connected to the ZeroTier network. Despite my best efforts to build a newer ZeroTier package for the EdgeRouter, I gave up and decided to live with the older version, which worked well enough.
However, I could not, for the life of me, figure out how to pass a public IPv6 address through to the network behind the EdgeRouter. It was tempting to install OpenWRT on the EdgeRouter instead, but that would probably cause more problems than it would solve.
In the end, I gave up and resolved myself to a life of no IPv6 on the internal network.
I broke the network for everybody a few times.
To those whose Mastodon instances went down, I apologize.
Here's what happened: the EdgeRouter only has five total ports, of which one is used for the upstream WAN connection, leaving four. We were reaching the point where more than four things wanted to plug into the EdgeRouter, so one had to be disconnected every now and then. We were timesharing the last remaining ethernet port on the EdgeRouter. This was not ideal. The large network switch, however, had 24 ports, of which fewer than half were used.
I had the genius idea to split the switch into two VLANs: the internal network and the external network. The ports on the right-hand side of the switch were for the internal network (connected to the EdgeRouter), and the left-hand side was connected to the WAN.
Here is a non-exhaustive list of what went wrong:
I am never touching that switch again. On the plus side, all the potential failure modes are documented now!
Server rails are a fucking mess. It took weeks to find compatible rails for my server, and Lily's server is using the wrong rails and is hanging down slightly. I eventually gave up looking for the right rails and decided that two left rails worked just fine (surprisingly well, actually, though the server ears did not lock in on one side, which was fine).
I had to trawl through literal buckets of server rails down at the Purdue surplus store to find the "matching" pairs. I ended up buying four left rails, because I could not find the matching right rails.
I wonder who has them.
I have plenty of ideas for what to do next. In the time it took me to put this post together, I have already completed the following:
At the time of writing, I still have the following planned:
I cannot wait to write about how strange running an authoritative and redundant DNS server is. See you all in the next one.
Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.
I've also moved to my own Fediverse instance: @[email protected].