2025-03-12 10:15:00
The worst "street-cred" I have is that I've been using tiling window managers for thirty-five percent of my life: five years with Sway and two with i3. As the realization of those numbers (and my age) dawns upon me, an irresistible urge wells up in my chest, threatening to overwhelm me. I try to tamp it down, but the urge is too strong—I must Give My Opinion.
This may be worse than finding grey hairs.
I switched to Wayland before it was cool1, so a lot of stuff was broken, and I got used to it being broken, much like my entire Linux-on-modern-laptop experience. I was fine with Sway, since I had gotten used to the workflow over the years. After all, it was what all the cool kids online were using, and I was too young to make good decisions. I went about my life, going through most of high school and all of college with a tiling window manager, dismissing alternatives as straying from the One True Path set forth by anonymous forum-goers.
Until about a month ago.
Sway broke me (emotionally) with a click-and-drag issue: selecting text and dragging the selection (a pretty bog-standard thing people do with their computers) somehow changed so that the selection kept going after you released the mouse button.
My decades of muscle memory stopped working—I felt lost, adrift on a rough sea, the hot sun bearing down on me. Would I (the bug) ever be rescued (fixed)? Only time would tell, but I was getting desperate.
At first, I thought I could handle it and someone would quickly fix the bug. Days turned into weeks, and I was losing my mind. The Sway IRC was silent to my pleas for help, and I had developed a Pavlovian response to clicking on text to highlight it: a burst of panic in my chest as I dreaded the mouse continuing to drag after I had let go.
Naturally, instead of figuring out which library made a breaking change and spending four hours running git bisect, I decided to throw nearly a decade of muscle memory and workflow refinements out the window. I was getting bored of Sway anyway. Let's switch to Niri!
For those unaware, Niri is a scrollable-tiling window manager: each workspace is an infinitely-wide strip you can scroll side-to-side on. It's easier to watch their official demo video than to try to explain it with words (you don't have to watch the whole thing).
It was new, dangerous, risky, and most of all, really cool. I just had to try it, transporting me back to my youth distro-hopping and window-manager-hopping(?) with reckless abandon.
It seems to be a recurring theme, throwing myself into a new productivity-altering technology in March when I should be doing more important things instead.
It wasn't as bad this time around! Within a few hours, I had a setup that worked fine enough. Within a week, I had Niri working better than Sway, and I was greatly enjoying the changes (read: improvements) it brought.
Opening a window does not alter other windows: I can keep my focus and Firefox doesn't scroll to another dimension if I open a terminal in its vicinity.
Unlike Sway, Niri supports per-window screensharing, as well as "blackout window from appearing in screen sharing". I've been streaming my homework and it's much nicer without needing to worry about an email notification from my bank showing up in the top corner.
Niri's built-in screenshot tool is really nice, especially compared to Sway's recommended grim+slurp.
I was so excited about Niri that I tried my hand at adding a feature to its IPC2 for something or other, and I greatly enjoyed it! Unlike Sway/wlroots, Niri/Smithay are written in Rust and are surprisingly accessible to hack on.
I genuinely can't see myself going back to a traditional tiling window manager: Niri brings too many improvements to my workflow.
Traditional tiling window managers have a side effect of forcing you to be as efficient as possible with your window layout. There is an additional cognitive load incentivizing you to optimize for the wrong thing: minimizing window reflows. If you don't find yourself constantly swapping between fullscreen and non-fullscreen views and running out of workspaces, you don't have very many windows open. Don't even get me started on tabbed/stacked layouts with nested containers, the least ergonomic Band-Aid™ for the space issue I've ever seen.
After many, many years of optimizing for the wrong thing with Sway, Niri blesses me with the realization that I can have the speed of a traditional tiling window manager without the space limitations.
On Sway, I often had eleven workspaces open. Soon after I switched to a tiling window manager, I started running out of space and added shortcuts for workspaces 11-20. I drilled it into myself to close windows when I was done with them, often losing my flow when I came back to the projects I had closed, all to save imaginary space I feel should be infinite. With Niri, I can have three large projects open, various chat apps, a YouTube video, and three classes' worth of schoolwork, and never use more than five workspaces. The same setup would have me spilling into workspace fifteen on Sway, and I would quickly get confused and forget where I put my math textbook, switching between each workspace until I found the right one, often the very last one I checked!
Wow, I did not realize how much repressed anger I had at traditional tiling window managers until now.
Given the variety of screen sizes3 and improved processing power, I do not think that the traditional tiling window manager ought to be the power-user workflow of choice. It artificially limits space, forces content reflows, and does not work well with nonstandard monitor layouts.
If you are using Sway or another Wayland traditional tiling window manager, you should try Niri. Right now. My configurations are published on Sourcehut if you want to have a Sway-like experience with my keybindings.
Go on then, what are you waiting for?
Thoughts? Comments? Want to hire me? Feel free to get in touch!
I'm also on Bluesky and the Fediverse, if you're into me making bad puns.
The main reason I switched was because of mixed DPI. When you have a 4K monitor and a normal FHD monitor of about the same physical size, the pixels are a lot "denser" on the 4K monitor (~4 times as many in the same area, so twice as dense along each axis). As such, a window with a set pixel width that looks like one handspan on the 4K monitor will look like two handspans on the FHD monitor. This is annoying, since one display will have small windows and the other will have large ones. This can be fixed by "scaling" windows so that they take up about the same physical space. X11's implementation was a hacky mess and did not work for me (it has since gotten better), but Wayland supported it as a core feature. X11 also struggles with fractional scaling, or scaling windows in non-integer multiples (e.g. 1.6 for a 1440p screen). Wayland is much better at this. ↩
Inter-process communication, so I can send messages to Niri from a program (i.e. move a column around, blank the screen, and so on). ↩
A scrollable-tiling window manager is a match made in heaven for an ultrawide monitor, unlike a traditional tiling window manager. Sway would fill up the entire ultrawide with a new window, but Niri's model allows for the tiling to happen much more naturally and with a better use of the ultrawide's space. Now I only need to get an ultrawide monitor to try Niri on... ↩
2024-12-17 12:16:00
I may be a little full of myself, publishing two server updates and five actual blog posts. Alas, college is hard and I did not really publish anything big this fall semester (though I have worked on some fancy projects that are sitting half-completed in my drafts folder). Here are all of the small projects and other miscellaneous items that didn't warrant their own blog posts, but I worked on anyway.
Starting off with Bluesky! I got Bluesky FOMO so I spun up my own PDS (thanks to TheShadowEevee for telling me about the proper paths I had to reverse proxy in Nginx), and now I post anything non-incriminating at @ersei.net. The requirements to become my follower are practically nonexistent compared to my Fediverse account, especially since Bluesky does not have follow requests yet.
So far, I think it's okay. It does have a really big bot problem, and I've been blocking-on-sight as fast as I can. Other than that, I think it's nice. I don't like how every post is trivially public, so anything even mildly personal is going to stay on Fedi. It does have a slick UI, but there are a few extremely minor bugs on the iOS app that bother me. Maybe later I can run my own labeller or aggregator as a fun little sidequest, but that's a project for another day.
In my last update post, I mentioned setting up my own authoritative DNS server to run my infrastructure. While I spun up an instance on a VPS, Purdue was unhappy with my request to unblock port 53/udp, despite the security measures I took to prevent reflection attacks. Months of back-and-forth led to an escalation in the ticket, and another refusal from even higher up.
Unfortunately, after much consideration and deliberation from our security team, we are going to go ahead and deny this request based on the information we have received and in line with our policies.
Let us know if you have questions, otherwise, we will consider this resolved from our side.
While I could have just had the server solely on some VPSs, that wasn't what I was envisioning, and I ended up giving up on the project, even after I had built a replicated PostgreSQL database backing PowerDNS with DDoS protection and rate limiting. I had to settle for running my own DoH server instead. What's Purdue going to do, block port 443? (Please don't)
I don't like ads, and the best way I've found to block ads in Safari on my iPhone is to use a custom DNS server to block requests made to the likes of ads dot google dot com (I shan't speak its name, lest it appear). Running the DoH server took an incredible amount of tinkering over the past year, especially since the specification requires DoH to run over HTTP/2 and TLS, which meant that no DoH server implementation fit all of my criteria. It was always one thing or another, since most implementations insist on handling TLS themselves, stealing port 443 away from Nginx. I ran the DoH server on a VPS until I was tired of not running everything myself, so I had to figure out how to run DoH behind Nginx.
I settled on using Unbound to handle the DoH queries, as it ran fine without specifying a TLS certificate to use. I configured it to disable DNSSEC1 so I could modify DNS responses, and pointed the resolver at my locally-running instance of DNSCrypt-Proxy that ran the ads filtering and other forwarding junk that I wanted server-wide and not just over DoH.
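For reference, the relevant Unbound settings look roughly like this. This is a sketch, not my exact config: the listen port and the DNSCrypt-Proxy address are placeholders.

```
server:
    # Serve plain-HTTP/2 DoH on this port; Nginx terminates TLS in front.
    interface: 127.0.0.1@4932
    https-port: 4932
    http-endpoint: "/dns-query"
    http-notls-downstream: yes
    # Drop the validator module so responses can be rewritten (no DNSSEC).
    module-config: "iterator"

# Hand everything to the local DNSCrypt-Proxy instance for filtering.
forward-zone:
    name: "."
    forward-addr: 127.0.0.1@5353
```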
Unfortunately, this did not solve all of my problems with ads on my phone, as YouTube uses the same domain to serve ads as it does to serve video. Fortunately, my friend informed me about the incredible YouTube Rehike project. Rehike is a locally-hosted piece of software that pretends to be YouTube, much like Invidious, but meant to run at the youtube.com domain so that it can grab full-resolution video files from googlevideo.com without eating through all the server's bandwidth2.
Because I already had Unbound set up to intercept DNS requests, I added a few lines to forward YouTube requests to my server:
local-zone: "www.youtube.com" redirect
local-data: "www.youtube.com A 128.210.6.106"
local-data: "www.youtube.com AAAA 2607:ac80:303:102:638a:ba7f:2013:b0c"
local-zone: "m.youtube.com" redirect
local-data: "m.youtube.com A 128.210.6.106"
local-data: "m.youtube.com AAAA 2607:ac80:303:102:638a:ba7f:2013:b0c"
Might as well set up Rimgo to avoid using the horrid Imgur frontend, and likewise Redlib for Reddit. To ensure that these connections did not complain about HTTPS, I put together my own CA and issued my own certificates for YouTube, Imgur, and the rest of the sites I was overriding following Armin Reiter's tutorial.
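Following that tutorial, creating the CA and a leaf certificate boils down to a handful of openssl invocations. This is a hedged sketch: the file names, subject names, and validity periods here are made up, not the ones I actually used.

```shell
# Hypothetical file names; adjust to taste.
# 1. Create the private CA.
openssl genrsa -out rootca.key 4096
openssl req -x509 -new -key rootca.key -sha256 -days 3650 \
    -subj "/CN=My Private CA" -out rootca.crt

# 2. Issue a certificate for an overridden domain.
openssl genrsa -out youtube.key 2048
openssl req -new -key youtube.key -subj "/CN=www.youtube.com" -out youtube.csr
printf 'subjectAltName=DNS:www.youtube.com,DNS:m.youtube.com\n' > san.ext
openssl x509 -req -in youtube.csr -CA rootca.crt -CAkey rootca.key \
    -CAcreateserial -days 825 -sha256 -extfile san.ext -out youtube.crt
```

Any device that trusts rootca.crt will then accept the spoofed youtube.com without HTTPS warnings.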
The second half of this was pointing my phone to my DoH server, which is something iOS has supported for a while now. I put together a management profile for my phone, which is in the only good configuration language today3.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PayloadDisplayName</key>
<string>erseiDoHsr0</string>
<key>PayloadIdentifier</key>
<string>net.ersei.doh</string>
<key>PayloadUUID</key>
<string>BFEB188D-39FC-4455-9061-4C3FB34A432E</string>
<key>PayloadDescription</key>
<string>Ersei DoH</string>
<key>PayloadRemovalDisallowed</key>
<false/>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadType</key>
<string>Configuration</string>
<key>PayloadContent</key>
<array>
<dict>
<key>PayloadDisplayName</key>
<string>Ersei DoH</string>
<key>PayloadType</key>
<string>com.apple.dnsSettings.managed</string>
<key>PayloadIdentifier</key>
<string>com.apple.dnsSettings.managed.4E9BA4B7-FD73-4858-AD6D-F4976EC88389</string>
<key>PayloadUUID</key>
<string>230D9056-D82A-4AEA-953D-F44519C17D9C</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>ProhibitDisablement</key>
<false/>
<key>DNSSettings</key>
<dict>
<key>DNSProtocol</key>
<string>HTTPS</string>
<key>ServerAddresses</key>
<array>
<string>128.210.6.106</string>
<string>2607:ac80:303:102:638a:ba7f:2013:b0c</string>
</array>
<key>ServerURL</key>
<string>https://doh.ersei.net/dns-query</string>
</dict>
</dict>
</array>
</dict>
</plist>
I signed this configuration with my CA certificate using openssl smime -sign -signer erseinet-rootca.crt -inkey erseinet-rootca.key -nodetach -outform der -in doh.mobileconfig -out doh-signed.mobileconfig, imported both the CA certificate and the configuration file into iOS, and it worked!
Mostly.
The main issue is that sometimes iOS decides to ignore your DoH server and just doesn't use it. You have to go into settings to disable and reenable your DoH config to have your phone properly do DNS again. This is better than the previous solution, which was to use a faux-VPN provided by the DNSCloak iOS app. That app conflicted with ZeroTier, but this DoH solution does not conflict!
There's just one last problem, and perhaps it's the most cursed part of this endeavour. Unbound only supports listening on HTTP/2 (even if unencrypted). Nginx does not support reverse-proxying to HTTP/2, but it does support reverse-proxying to gRPC, which is close enough to HTTP/2 that it doesn't matter.
location / {
grpc_pass grpc://localhost:4932; # Unbound on HTTP/2
grpc_read_timeout 86400; # iOS tries to keep DoH connections alive
grpc_connect_timeout 75s;
}
But hey, if it works, it works. I don't see ads on Safari anymore (unless iOS unilaterally decides to stop using my DoH server occasionally), which is nice. Unfortunately, apps have a lot more control over their networking stack than websites do, so they can pin DNS queries (or use their own DoH servers) and bypass my adblocking. Good thing I refuse to install apps unless I really have to, I guess.
A big issue I faced with my new server was that it was good. Too good. I expected too much from it and its paltry(‽) 64GB of RAM. That was too little to run my Nix Hydra CI VM (many tens of GB from compiling big programs), Minecraft servers, ZFS cache, and Matrix at the same time. Fortunately for me, I got some RAM from another very good friend (thank you) so I can avoid the alerting emails I was getting stating that "95% of RAM is used".
I've begun a slow migration to Grafana/Prometheus/Loki for monitoring, hopefully relying less on Zabbix for things Zabbix isn't really meant to monitor. This gives me the ability to ingest my Nginx logs and perform queries on them, so I can see how many unique IPs access a blog post over time in a nice graph. Really, setting that up was in response to my Google Drive post hitting the news cycle for some reason, and I wanted to see the numbers go up.
Speaking of résumés, I am looking for a job/internship for the summer of 2025, so if you know someone who's hiring and feel like I'd be a good fit, please contact me.
According to Can I use's page on MathML, there are enough people using browsers that support MathML that I can move my math rendering plugin from KaTeX to Temml and render MathML exclusively. Not only does this look nicer across browsers, but it is also more consistent and works properly in RSS readers! I also finally figured out Grav's renderer mechanism, so I no longer need to write ugly shortcodes to render math and can use the standard-ish $$math$$ syntax instead. While that standardized-syntax version is not released yet (I'm still stress-testing it on my website, making sure it behaves properly with other plugins), you can grab the latest source on Sourcehut.
Likewise, I modified the server-side highlighting plugin to not use shortcodes either, and instead use the standard triple-backtick syntax for syntax highlighting. That plugin is not quite as visible to you, reader, but it's really nice for me when I'm writing these posts.
The modified highlight plugin still needs some work before I can publish it as a fork (namespace changes, documentation, copyright, etc.), but the development version can also be downloaded from Sourcehut.
Both of these modified plugins are running on my website, and if you see any issues, feel free to contact me in the usual places so I can fix them.
This was a fun little project to do while I was procrastinating on schoolwork and everything else. I've known for the past few years that a lot of people have 88×31 buttons on their websites for a retro feel. I wanted to get in on that (much like how I wanted to get in on Bluesky), so I reached out to someone to commission a button. That fell through, so I ignored the whole button fiasco until I had a burst of inspiration to make my own.
I wanted a simple button: my website's title and my domain in front of some wires with electrons crawling across them. I had been watching some TodePond recently, so cellular automata were on my mind. The problem reminded me a lot of Wireworld, so all I had to do was make a static 88×31 button in GIMP or something, then run it through a program to simulate the electrons it detects!
I'll spare you the development story, but the TL;DR was that I wrote the first version to not count neighbours if they are diagonal. The code was fine, but I wanted to make the cellular automata Turing-complete, so I modified the bad code I had to check diagonals as well.
Reader, the code was horrible. If you want to make fun of it, you can see the earlier version here. I quickly wised up, and against my better judgement, I rewrote the whole thing, added loop detection, and various other goodies to make an actual cool button.
It took a lot of optimization and redesigns to get to this point, and I think I'm quite happy with it. Of course, it's still like 80KB and accounts for most of the bandwidth on my home page, but I think it needs all 540-some frames so it loops properly.
Send me your 88×31 button and we can link to each other's websites, if you want. I'm not sure how I feel about hotlinking, since I might update the button later for more efficiency or tweak the design slightly, but for now, go for it. If your website has a lot of traffic, you might want to use a local copy that you periodically sync instead.
The current good source and GIMP source files are also available on Sourcehut. Feel free to customize it to your liking.
Inspired by Jim Nielsen's blog post on using iOS shortcuts to deploy his blog, I implemented something similar to post a note from my phone via the share menu. It calls a webhook on my server with a password, which triggers the creation of a file with the note contents; a background daemon then periodically refreshes the notes and pushes the RSS feeds if needed. It has helped me post there more, but evidently not enough.
All I have to do is hit the "share" button, and then press "Note post", fill out the relevant information, and it's transmitted to all of you who've subscribed to the notes. It's like social media, but with instant notifications with WebSub (which I also set up) and RSS. Someone should make a decentralized, federated social media system based on this technology...
It's not the best experience, since sometimes I need to reference the thing I'm posting to write the note itself, but I can do that in the text editor app.
As for the RSS system, I trigger a WebSub event upon every Git pull my server performs with a simple Git hook, so that people with compatible RSS readers can get new articles instantly instead of waiting for the next refresh. If you don't care because you pull the RSS feed every five minutes, shame on you.
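The hook itself can be tiny. Here's a sketch of such a hook; the hub and feed URLs are placeholders, and the hub.mode=publish form POST is the common PubSubHubbub/WebSub publishing convention rather than anything specific to my setup.

```shell
#!/bin/sh
# Git hook: ping the WebSub hub so subscribers refetch the feed.
# Both URLs below are placeholders.
HUB_URL="https://websub.example.org/"
FEED_URL="https://blog.example.net/feed.atom"

curl --silent --show-error -X POST "$HUB_URL" \
    --data-urlencode "hub.mode=publish" \
    --data-urlencode "hub.url=$FEED_URL"
```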
Speaking of RSS, RSS has been superseded by Atom feeds on this website permanently. The feed links have linked to the Atom feeds forever, but some people got a hold of the RSS feeds instead. I didn't want to continue maintaining two different feed templates, so RSS feeds were phased out over the course of a week, and those using RSS-only feeds have gotten a notification that they should've used Atom instead. It seems like the migration has gone well, with just one person still pulling the RSS feed and ignoring the permanent redirect. Hopefully my feed hasn't broken silently in their reader. If this is you, you should use Atom feeds instead.
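On the Nginx side, a phase-out like this can be as small as a permanent redirect. A hypothetical sketch (my actual feed paths differ):

```nginx
# Redirect the legacy RSS feed to the Atom feed with a 301,
# so well-behaved readers update their stored URL.
location = /blog.rss {
    return 301 /blog.atom;
}
```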
That's probably all that's noteworthy that I've been up to recently. I might update this post if something I forgot comes to mind. Post-publish update count: 1.
Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.
It's appalling that you can just... disable a security measure and intercept DNS requests. Isn't that what DNSSEC is meant to do? We should really enforce using DNSSEC. ↩
Invidious is meant to run under a separate domain (such as invidious.example.com), but the content security policy of googlevideo.com (where the actual video chunks are hosted) only allows requests for HD video if the originator is youtube.com. Invidious gets around this by having the server download and stitch together the video chunks, then send those to the client from the Invidious server, eating a lot of bandwidth (and being slow). However, because Rehike is set up so that youtube.com is overridden to point to Rehike instead of Google, a web browser using Rehike can access the HD video streams directly without violating the security policy. ↩
It's a type-safe configuration language, what more could you want? ↩
2024-11-13 02:00:00
I swear, I'm surrounded by people who want to watch me waste my time. I was telling the Purdue Linux Users Group about the wonders of RSS and how great it is, but my friend overheard the conversation and said, jokingly, "You should do Bad Apple, over RSS". I whined, complained, claimed it would be "too easy" and that "I need a harder project". Alas, my harder projects have remained stagnant for a few weeks now, eschewed for schoolwork.
And so I ran Bad Apple on RSS.
The moment I heard "Bad Apple" and "RSS" in the same sentence, I thought of a big feed that contains every frame as a feed entry. But that would be cheating, so my alternative plan formed fairly quickly: an RSS feed that updates to show a new frame every time it is requested. Writing the backend to generate the RSS feed was pretty easy. All I needed was to keep track of a query parameter and link it against an integer representing the frame, using PHP's APCu cache as a basic key/value store.
<?php
header('Content-Type: application/atom+xml; charset=utf-8'); // Return an Atom file
header('Content-Disposition: filename="ba.xml"'); // Download as an XML
$q = $_SERVER['QUERY_STRING']; // Get the query parameter (the bit after the question mark)
if ($q == "") { // If there is no query, don't return anything
die();
}
$v_pat = "/^[a-f0-9]{16}$/i"; // Verify that the query parameter is a 16-character hexadecimal string
if (preg_match($v_pat, $q) != 1) {
die();
}
$frame = 1; // Initial frame value
// Load frame if it exists, and update it
if (apcu_exists($q)) {
$frame = apcu_fetch($q);
apcu_store($q, $frame + 1);
} else {
apcu_add($q, 1);
}
$frame = min($frame, 6572); // Cap frame at 6572
// Return the Atom feed
?>
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Bad Apple RSS</title>
<link href="https://ba.ersei.net/ba" />
<link rel="self" href="https://ba.ersei.net/ba.php?<?php echo $q;?>" />
<subtitle>Bad Apple as an RSS feed</subtitle>
<updated><?php echo date("c");?></updated>
<author>
<name>Ersei</name>
</author>
<id>https://ba.ersei.net/ba.php?<?php echo $q;?></id>
<entry>
<title>Bad Apple Frame <?php echo str_pad($frame, 4, "0", STR_PAD_LEFT);?></title>
<id>https://ba.ersei.net/img/<?php echo $frame;?></id>
<updated><?php echo date("c");?></updated>
<published><?php echo date("c");?></published>
<link href="https://ba.ersei.net/img/<?php echo $frame;?>"/>
<content type="html">
<![CDATA[
<p><img src="https://ba.ersei.net/img/<?php echo $frame;?>.jpg"></p>
]]>
</content>
</entry>
</feed>
There seems to be a bug where the first frame shows up twice, but I don't care enough to figure out why that happens. All that matters is that it works. Now I just need to patch a feed reader to request the RSS feed 30 times a second...
I must've muttered that phrase dozens of times throughout this project. Despite the beautiful sensibilities of PHP, I could find no such reprieve in Javascript. I just needed an RSS reader that could handle refreshing articles smoothly every thirty-three milliseconds. I could've used a good RSS reader like FreshRSS or NewsFlash, but FreshRSS's reload model doesn't mesh with the "30 times a second" target I was going for, and I couldn't get NewsFlash to compile on NixOS, so I decided to go with something Javascript-y, as it would probably refresh how I wanted it to.
This was a mistake. Before I could stop myself, I dove into the code of Fluent Reader, the first Javascript-written desktop RSS reader that dredged itself up from the depths of my mind.
Compiling Fluent Reader from scratch led me to discover that it has not been updated in thirteen months, and that installing Electron through NPM doesn't work on NixOS. I'm sure this will be fine.
I built Fluent Reader, replaced its Electron binary with my own, and launched it. I added my feed, and lo and behold, the first frame of "Bad Apple" emerges from the Microsoft-themed abyss.
Great! I locate the "refresh feeds" button, and tap my laptop touchpad as furiously as my fingers would allow me. One small problem, however.
The frames are out of order? I thought about it for a moment and realized that the Atom format returns date/times accurate only to the nearest second. When multiple feed entries arrive in the same second, the frames are sorted in alphabetical order instead of by the time they were received.
Simple enough fix. I just had to find the code where the feed entries are sorted and invert the sort by title. How hard could it be?
It took me thirty minutes to figure out how to sort the output. Maybe it's just because I'm bad at React. Maybe it's because nothing in this codebase is documented. Maybe it's because the universe is telling me that this is a stupid idea and that I'm stupid for even trying.
My fingers can tap-tap the refresh button fast enough for the video to kind of play now. Unfortunately, I get tired after a hundred frames and I'm too inconsistent.
Being a "good programmer", I decide to automate it. I'm already familiar with the code; I just need to add a setInterval that triggers the fetchItems function, all controlled with a nice toggle button.
My misplaced confidence is becoming something of a cliché. But! It's not all my fault! Fluent Reader uses something called React class components, which, if you follow the link, you're greeted with a marvellous alert telling you to please, for the love of god, stop using class components.
It took me hours to figure out how to add one button that toggles the setInterval. Nothing was allowed to interact with anything else. I had to pass in app state where there definitely should not have been app state passed. But once I figured out that I had to replace all of the minified vendored font files, my button finally worked!
If you would like to see the modifications I made to Fluent Reader, I uploaded my changes on Sourcehut.
Now I just had to show it off.
It took two hours to bend React to my will. I just had to record my screen playing the video, sync it to some music, and then become famous among a niche group of internet-dwellers.
The cliché continues, but it's still not my fault that my laptop thermal throttles at the slightest provocation. I connect my laptop to my server over Ethernet, and let the RSS reader rip. It resulted in a six and a half minute recording. For reference, Bad Apple is meant to only be four minutes or so. My CPU was pinned, Electron was struggling1, and neither my network bandwidth nor my server was the issue. Easy enough, I'll just scale the entire video by a constant factor to speed it up.
There's those accursed phrases again: "easy enough" and "just". When will I learn that it's never that easy? The recording was running at different speeds throughout.
I ended up spending a few hours setting up a couple dozen or so keyframes to synchronize my RSS Bad Apple with the actual video. This was made harder by the fact that I hadn't actually watched Bad Apple all the way through, so I kept getting confused about where I was (that may have been my mistake), and the synchronization seemed to really mess with Kdenlive. I overlaid the real video on one of the little RSS boxes and watched closely to see if anything got out of sync. If it did, I added a keyframe to pull that part back into sync. I repeated this process until everything was properly synchronized.
If I didn't know the lyrics to Bad Apple going into this project, I certainly do know now.
In case that wasn't bad enough, YouTube wanted me to now give them my phone number to set my own thumbnail and a facial scan to add a clickable link to this blog post in the description.
I figured it out though. Here's your video. I hope you enjoy it.
Questions? Thoughts? Concerns? Pure unbridled anger at the fact that this is the post I publish after months? Feel free to contact me!
I'm taking down ba.ersei.net in the meantime, just so that people don't try it out on my website.
It was probably because Electron was running on Wayland. Running it under Xwayland, it performed a lot better, but because my laptop has a HiDPI screen and fractional scaling on Xwayland is still broken on Sway (and I didn't have the foresight to turn down the scaling beforehand), I recorded it in pure Wayland mode. ↩
2024-10-30 06:40:00
This is meant as a tutorial on how to use a VPS to get a public IPv4 address for self-hosting. Often, people want to run a server out of a college dorm that doesn't give them a public IPv4 address, or out of a house behind CGNAT.
It's a simple solution and an excellent alternative to software like Rathole or Cloudflare Tunnel, because this solution passes the correct connecting IP addresses through, transparently. If you look at your webserver access logs while using software like Rathole or Cloudflare Tunnel, all the connections arrive from 127.0.0.1, not from the real client address.
As a quick introduction, we will create a Wireguard connection between the server without an IP address and a virtual machine in the cloud. Only incoming traffic will go through the cloud VM. All outbound traffic will continue as normal. This means that there will be no latency for normal outbound internet use. The latency will only appear when someone accesses your website.
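Concretely, the tunnel is a point-to-point Wireguard link between the two machines. A minimal sketch of the VPS side, assuming the 192.168.77.0/24 subnet used in the firewall rules later in this post (the keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the cloud VM
[Interface]
Address = 192.168.77.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# The machine at home / in the dorm.
PublicKey = <home-server-public-key>
AllowedIPs = 192.168.77.2/32
```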
This is not an endorsement, but Oracle Cloud and Google Cloud both have generous free tiers and will give you a static IPv4 address with one virtual machine. Pick a region that is geographically close to where your server is. The specs of the VM are not important—this is extremely lightweight. The most important component is bandwidth, as the bandwidth of this machine will become the bandwidth of the incoming connections. This VPS must be running a modern Linux distribution.
Verify that you have nftables installed on the VPS by running nft --version. If it is not installed, do so. Occasionally the nft command is kept in /usr/sbin, and may not be in the PATH of a non-root user. Additionally, some distributions may come with alternative firewall software, such as firewalld or ufw. Please ensure those are uninstalled.
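A quick preflight on the VPS might look like this (a sketch; the firewalld and ufw checks assume systemd, and the units simply won't exist if those packages aren't installed):

```
cloudvm# nft --version
cloudvm# systemctl status firewalld
cloudvm# systemctl status ufw
```

If either of the last two commands reports a running service, remove that package before continuing.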
Install Wireguard on both machines. For Debian-based distributions, this will look like sudo apt install wireguard, and for Fedora-based distributions, sudo dnf install wireguard-tools.
On the cloud provider, open the port 51820/UDP in the cloud firewall. Instructions vary by provider.
Then, on the cloud virtual machine, create the file /etc/nftables/proxy.nft:
table ip nat {
    chain PREROUTING {
        type nat hook prerouting priority dstnat; policy accept;
        meta l4proto tcp tcp dport 80 dnat to 192.168.77.2:80
        meta l4proto tcp tcp dport 443 dnat to 192.168.77.2:443
    }
    chain INPUT {
        type nat hook input priority 100; policy accept;
    }
    chain POSTROUTING {
        type nat hook postrouting priority srcnat; policy accept;
    }
    chain OUTPUT {
        type nat hook output priority -100; policy accept;
    }
}
Also create the file /etc/nftables/main.nft:
# Sample configuration for nftables service.
# Load this by calling 'nft -f /etc/nftables/main.nft'.
#
# Note about base chain priorities:
# The priority values used in these sample configs are
# offset by 20 in order to avoid ambiguity when firewalld
# is also running which uses an offset of 10. This means
# that packets will traverse firewalld first and if not
# dropped/rejected there will hit the chains defined here.
# Chains created by iptables, ebtables and arptables tools
# do not use an offset, so those chains are traversed first
# in any case.

# drop any existing nftables ruleset
flush ruleset

# a common table for both IPv4 and IPv6
table inet nftables_svc {

    # protocols to allow
    set allowed_protocols {
        type inet_proto
        elements = { icmp, icmpv6 }
    }

    # interfaces to accept any traffic on
    set allowed_interfaces {
        type ifname
        elements = { "lo" }
    }

    # TCP services to allow
    set allowed_tcp_dports {
        type inet_service
        elements = { ssh, http, https, 51820 }
    }

    # UDP services to allow
    set allowed_udp_dports {
        type inet_service
        elements = { 51820 }
    }

    # this chain gathers all accept conditions
    chain allow {
        ct state established,related accept
        meta l4proto @allowed_protocols accept
        iifname @allowed_interfaces accept
        tcp dport @allowed_tcp_dports accept
        udp dport @allowed_udp_dports accept
    }

    # base-chain for traffic to this host
    chain INPUT {
        type filter hook input priority filter + 20
        policy accept
        jump allow
        reject with icmpx type port-unreachable
    }
}
include "/etc/nftables/proxy.nft"
This firewall rule will NOT close SSH access. If you have publicly available SSH, that is a bad idea, and you should adjust allowed_tcp_dports to not include SSH. This default configuration will only pass through HTTP and HTTPS. Adjust allowed_tcp_dports to allow your TCP port, and allowed_udp_dports to allow your UDP port. In the first file, follow the example HTTP/HTTPS configuration to forward another port. Keep in mind that this port forwarding takes priority! If you have SSH open to the VPS and you try forwarding SSH, you WILL lose SSH access!
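As a concrete (hypothetical) example, forwarding a TCP service on port 25565 to the home server would mean touching both files: adding the port to the allow set in /etc/nftables/main.nft and a dnat rule to /etc/nftables/proxy.nft.

```
# in /etc/nftables/main.nft:
elements = { ssh, http, https, 51820, 25565 }

# in /etc/nftables/proxy.nft, inside chain PREROUTING:
meta l4proto tcp tcp dport 25565 dnat to 192.168.77.2:25565
```

Remember to also open 25565/TCP in the cloud provider's firewall.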
Add the line include "/etc/nftables/main.nft"; at the end of the file /etc/nftables.conf (the quotes and the semicolon are important), and then restart the firewall (and ensure it persists across reboots):
cloudvm# systemctl enable nftables
cloudvm# systemctl restart nftables
Finally, enable IP forwarding and make it persist across reboots:
cloudvm# sysctl -w net.ipv4.ip_forward=1
cloudvm# echo net.ipv4.ip_forward = 1 >> /etc/sysctl.conf
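Without IP forwarding, the kernel will silently drop the DNAT'd packets instead of passing them down the tunnel. You can confirm the setting took effect with:

```
cloudvm# sysctl net.ipv4.ip_forward
```

which should print net.ipv4.ip_forward = 1.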
First, set up the Wireguard keys. On the cloud VM, run this command as root:
cloudvm# wg genkey | tee privatekey | wg pubkey > publickey
Keep these generated files (privatekey, publickey) in a safe place. Repeat this generation command on the other machine.
Now, create the file /etc/wireguard/wg0.conf on the cloud VM:
[Interface]
Address = 192.168.77.1/24
ListenPort = 51820
PrivateKey = [FIRST_GENERATED_PRIVATE_KEY]
[Peer]
PublicKey = [THE_PUBLIC_KEY_GENERATED_ON_THE_OTHER_MACHINE]
AllowedIPs = 192.168.77.2/32
PersistentKeepalive = 30
Create the file /etc/wireguard/wg0.conf on the other machine:
[Interface]
PrivateKey = [PRIVATE_KEY_GENERATED_ON_THIS_MACHINE]
Address = 192.168.77.2/32
Table = 123
PreUp = ip rule add from 192.168.77.2 table 123 priority 456
PostDown = ip rule del from 192.168.77.2 table 123 priority 456
[Peer]
PublicKey = [PUBLIC_KEY_GENERATED_ON_THE_OTHER_MACHINE]
Endpoint = [IP_ADDRESS_OF_THE_CLOUD_VM]:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 30
Then start and persist the Wireguard tunnel on both machines:
cloudvm# systemctl enable --now [email protected]
server# systemctl enable --now [email protected]
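Once both ends are up, a quick sanity check (using the addresses, table number, and rule priority from the configs above) confirms that the tunnel and the policy routing are in place:

```
cloudvm# wg show wg0
server# ping -c 3 192.168.77.1
server# ip rule show
server# ip route show table 123
```

wg show should report a recent handshake on both ends, and the ip rule output should include the from 192.168.77.2 lookup 123 rule added by PreUp.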
That should be all you need to have a public static IPv4 address when self-hosting in an environment where you don't have any such address. If you have any questions, feel free to contact me. Please try fixing your own problem before asking me for help, though. If you do ask me for help, please be as descriptive as possible and tell me the troubleshooting steps you've taken. I'll ignore the cry for help otherwise.
2024-07-01 21:20:00
Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.
Like all good projects, this began with an Idea.
My mind reached out and grabbed wispy tendrils from the æther, forcing the disparate concepts to coalesce. The Mass gained weight in my hands, taking on a dark, swirling colour that promised doom to those who gazed into it for too long.
On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.
Booting Linux off of a Google Drive root.
I wanted this to remain self-contained, so I couldn't have a second machine act as a "helper". My mind went immediately to FUSE—a framework that lets a userspace program act as a filesystem driver (with cooperation from the kernel).
I just had to get FUSE programs installed in the Linux kernel initramfs and configure networking. How bad could it be?
The Linux boot process is, technically speaking, very funny. Allow me to pretend I understand for a moment1:
1. The firmware loads and runs a bootloader (or a unified EFI image).
2. The bootloader loads the kernel along with the initramfs, a small in-memory root filesystem.
3. The init inside the initramfs mounts the real root filesystem and switches over to it.
4. The real init (here, systemd) brings up the rest of the system.
As strange as the third step may seem, it's very helpful! We can mount a FUSE filesystem in that step and boot normally.
The initramfs needs to have both network support as well as the proper FUSE binaries. Thankfully, Dracut makes it easy enough to build a custom initramfs.
I decided to build this on top of Arch Linux because it's relatively lightweight and I'm familiar with how it works, as opposed to something like Alpine.
$ git clone https://github.com/dracutdevs/dracut
$ podman run -it --name arch -v ./dracut:/dracut docker.io/archlinux:latest bash
In the container, I installed some packages (including the linux package, because I need a functioning kernel), compiled dracut from source, and wrote a simple module script in modules.d/90fuse/module-setup.sh:
#!/bin/bash

check() {
    require_binaries fusermount fuseiso mkisofs || return 1
    return 0
}

depends() {
    return 0
}

install() {
    inst_multiple fusermount fuseiso mkisofs
    return 0
}
That's it. That's all the code I had to write. Buoyed by my newfound confidence, I powered ahead, building the EFI image.
$ ./dracut.sh --kver 6.9.6-arch1-1 \
--uefi efi_firmware/EFI/BOOT/BOOTX64.efi \
--force -l -N --no-hostonly-cmdline \
--modules "base bash fuse shutdown network" \
--add-drivers "target_core_mod target_core_file e1000" \
--kernel-cmdline "ip=dhcp rd.shell=1 console=ttyS0"
$ qemu-kvm -bios ./FV/OVMF.fd -m 4G \
-drive format=raw,file=fat:rw:./efi_firmware \
-netdev user,id=network0 -device e1000,netdev=network0 -nographic
...
...
dracut Warning: dracut: FATAL: No or empty root= argument
dracut Warning: dracut: Refusing to continue
Generating "/run/initramfs/rdsosreport.txt"
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.
To get more debug information in the report,
reboot with "rd.debug" added to the kernel command line.
Dropping to debug shell.
dracut:/#
Hacker voice I'm in. Now to enable networking and mount a test root. I had already extracted an Arch Linux root into an S3 bucket running locally, so this should be pretty easy, right? I just have to manually set up networking routes and load the drivers.
dracut:/# modprobe fuse
dracut:/# modprobe e1000
dracut:/# ip link set lo up
dracut:/# ip link set eth0 up
dracut:/# dhclient eth0
dhcp: PREINIT eth0 up
dhcp: BOUND setting up eth0
dracut:/# ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
dracut:/# s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
dracut:/# ls /sysroot
bin dev home lib64 opt root sbin sys usr
boot etc lib mnt proc run srv tmp var
dracut:/# switch_root /sysroot /sbin/init
switch_root: failed to execute /lib/systemd/systemd: Input/output error
dracut:/# ls
sh: ls: command not found
Honestly, I don't know what I expected. Seems like everything is just... gone. Alas, not even tab completion can save me. At this point, I was stuck. I had no idea what to do. I spent days just looking around, poking at the switch_root source code, all for naught. Until I remembered a link Anthony had sent me: How to shrink root filesystem without booting a livecd. In there, there was a command called pivot_root that switch_root seems to call internally. Let's try that out.
dracut:/# logout
...
[ 430.817269] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100 ]---
...
dracut:/# cd /sysroot
dracut:/sysroot# mkdir oldroot
dracut:/sysroot# pivot_root . oldroot
pivot_root: failed to change root from `.' to `oldroot': Invalid argument
Apparently, pivot_root is not allowed to pivot roots if the root being switched is in the initramfs. Unfortunate. The Stack Exchange answer tells me to use switch_root, which doesn't work either. However, part of that answer sticks out to me:
initramfs is rootfs: you can neither pivot_root rootfs, nor unmount it. Instead delete everything out of rootfs to free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs with the new root (cd /newmount; mount --move . /; chroot .), attach stdin/stdout/stderr to the new /dev/console, and exec the new init.
Would it be possible to manually switch the root without a specialized system call? What if I just chroot?
...
dracut:/# mount --rbind /sys /sysroot/sys
dracut:/# mount --rbind /dev /sysroot/dev
dracut:/# mount -t proc /proc /sysroot/proc
dracut:/# chroot /sysroot /sbin/init
Explicit --user argument required to run as user manager.
Oh, I need to run the chroot command as PID 1 so systemd can start up properly. I can actually tweak the initramfs's init script, put my startup commands in there, and replace the switch_root call with exec chroot /sysroot /sbin/init.
I put this in modules.d/99base/init.sh in the Dracut source, after the udev rules are loaded, and bypassed the root variable checks earlier.
modprobe fuse
modprobe e1000
ip link set lo up
ip link set eth0 up
dhclient eth0
ip route add default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
s3fs -o url=http://192.168.2.209:9000 -o use_path_request_style fuse /sysroot
mount --rbind /sys /sysroot/sys
mount --rbind /dev /sysroot/dev
mount -t proc /proc /sysroot/proc
I also added exec chroot /sysroot /sbin/init at the end instead of the switch_root command.
Rebuilding the EFI image and...
I sit there, in front of my computer, staring. It can't have been that easy, can it? Surely, this is a profane act, and the spirit of Dennis Ritchie ought't've stopped me, right?
Nobody stopped me, so I kept going.
I log in as root with the very secure password root, and it unceremoniously drops me into a shell.
[root@archlinux ~]# mount
s3fs on / type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
...
[root@archlinux ~]#
At last, Linux booted off of an S3 bucket. I was compelled to share my achievement with others—all I needed was a fetch program to include in the screenshot:
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
core.db failed to download
error: failed retrieving file 'core.db' from geo.mirror.pkgbuild.com : Could not resolve host: geo.mirror.pkgbuild.com
warning: fatal error from geo.mirror.pkgbuild.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.rackspace.com : Could not resolve host: mirror.rackspace.com
warning: fatal error from mirror.rackspace.com, skipping for the remainder of this transaction
error: failed retrieving file 'core.db' from mirror.leaseweb.net : Could not resolve host: mirror.leaseweb.net
warning: fatal error from mirror.leaseweb.net, skipping for the remainder of this transaction
error: failed to synchronize all databases (invalid url for server)
[root@archlinux ~]#
Uh, seems like DNS isn't working, and I'm missing dig and other debugging tools.
Wait a minute! My root filesystem is on S3! I can just mount it somewhere else with functional networking, chroot in, and install all my utilities!
Some debugging later, it seems like systemd-resolved doesn't want to run because it "Failed to connect stdout to the journal socket, ignoring: Permission denied". I'm not about to try to debug systemd because it's too complicated and I'm lazy, so instead I'll just use Cloudflare's DNS.
[root@archlinux ~]# echo "nameserver 1.1.1.1" > /etc/resolv.conf
[root@archlinux ~]# pacman -Sy fastfetch
:: Synchronizing package databases...
core is up to date
extra is up to date
...
[root@archlinux ~]# fastfetch
I look around, making sure that nobody had tried to stop me. My window was intact, my security system had not tripped, the various canaries I had set up around the house had not been touched. I was safe to continue.
I was ready to have it run on Google Drive.
There's already a project that does Google Drive over FUSE for me: google-drive-ocamlfuse. Thankfully, I have a Google account lying around that I haven't touched in years, ready to go! I follow the instructions, accept the terms of service I didn't read, create all the oauth2 secrets, enable the APIs, install google-drive-ocamlfuse from the AUR into my Arch Linux VM, patch some PKGBUILDs (it's been a while), and lo and behold! I have mounted Google Drive! Mounting Drive and a few very long rsync runs later, I have Arch Linux on Google Drive.
Just kidding, it's never that easy. Here's a non-exhaustive list of problems I ran into:
- symlinks don't work (breaking, for example, much of /usr/lib)
- software trips over things that are missing (stuff that lives in /proc and isn't mounted, or stuff that just hasn't copied over yet)
With how many problems there are with symlinks, I have half a mind to change the FUSE driver code to just create a file that ends in .internalsymlink to fix all of that, Google Drive compatibility be damned.
But, I have challenged myself to do this without modifying anything important (no kernel tweaking, no FUSE driver tweaking), so I'll just have to live with it and manually create the symlinks that rsync fails to make, with a hacky sed command over the rsync error logs.
In the meantime, I added the token files generated from my laptop into the initramfs, as well as the Google Drive FUSE binary and SSL certificates, and tweaked a few settings2 to make my life slightly easier.
...
inst ./gdfuse-config /.gdfuse/default/config
inst ./gdfuse-state /.gdfuse/default/state
find /etc/ssl -type f -or -type l | while read file; do inst "$file"; done
find /etc/ca-certificates -type f -or -type l | while read file; do inst "$file"; done
...
It's nice to see that timestamps kinda work, at least. Now all that's left is to wait for the agonizingly slow boot!
chroot: /sbin/init: File not found
Perhaps they did not bother to stop me because they knew I would fail.
I know the file exists since, well, it exists, so why is it not found? Simple: Linux is kinda weird, and if the binary you call depends on an interpreter or library that can't be found, the error you get is "File not found" for the binary itself.
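This misleading error is easy to reproduce without an initramfs (a toy demonstration; /tmp/demo.sh is a made-up path): give a script a shebang pointing at an interpreter that doesn't exist, and executing it fails with "file not found" even though the script itself is plainly there.

```shell
# Create a script whose interpreter doesn't exist.
cat > /tmp/demo.sh <<'EOF'
#!/nonexistent/interpreter
echo hi
EOF
chmod +x /tmp/demo.sh

# Executing it fails: the kernel reports ENOENT for the missing
# *interpreter*, not for the script we actually invoked.
/tmp/demo.sh || echo "exec failed, but the file exists:"
ls /tmp/demo.sh
```

The same thing happens when exec'ing a dynamically linked binary whose ld-linux interpreter can't be resolved, which is exactly the situation inside this half-broken root.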
dracut:/# ldd /sysroot/bin/bash
linux-vdso.so.1 (0x00007e122b196000)
libreadline.so.8 => /usr/lib/libreadline.so.8 (0x00007e122b01a000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007e122ae2e000)
libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007e122adbf000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007e122b198000)
However, these symlinks don't actually exist! Remember how earlier we noted that relative symlinks don't work? Well, that's come back to bite me. The kernel is looking for the files from /sysroot inside /sysroot/sysroot. Luckily, this is an easy enough fix: we just need to make /sysroot appear at /sysroot/sysroot without using links:
dracut:/# mkdir /sysroot/sysroot
dracut:/# mount --rbind /sysroot /sysroot/sysroot
Now time to boot!
It took five minutes for Arch to rebuild the dynamic linker cache, another minute per systemd unit, and then, nothing. The startup halted in its tracks.
[ TIME ] Timed out waiting for device /dev/ttyS0.
[DEPEND] Dependency failed for Serial Getty on ttyS0.
Guess I have to increase the timeout and reboot. In /etc/systemd/system/dev-ttyS0.device, I put:
[Unit]
Description=Serial device ttyS0
DefaultDependencies=no
Before=sysinit.target
JobTimeoutSec=infinity
Luckily, it did not take infinite time to boot.
I'm so close to victory I can taste it! I just have to increase another timeout. I set LOGIN_TIMEOUT to 0 in /etc/login.defs in Google Drive, and tried logging in again.
Thankfully, there's a cache, so subsequent file reads aren't nearly as slow.
Here I am, laurel crown perched upon my head, my chimera of Linux and Google Drive lurching around.
But I'm not satisfied yet. Nobody had stopped me because they wanted me to succeed. I have to take this further. I need this to work on real hardware.
Fortunately for me, I switched servers and now have an extra laptop with no storage just lying around! A wonderful victim3 for my test!
There are a few changes I'll have to make, the biggest one being swapping out the e1000 driver. All I need is the r8169 driver for my ethernet port, and let's throw a Powerline adapter into the mix, because it's not going to impact the performance in any way that matters, and I don't have an ethernet cord that can reach my room.
I build the unified EFI file, throw it on a USB drive under /BOOT/EFI, and stick it in my old server. Despite my best attempts, I couldn't figure out what the modprobe directive is for the laptop's built-in keyboard, so I just modprobed hid_usb and used an external keyboard to set up networking.
This is my magnum opus. My Great Work. This is the mark I will leave on this planet long after I am gone: The Cloud Native Computer.
Nice thing is, I can just grab the screenshot4 from Google Drive and put it here!
Despite how silly this project is, there are a few less-silly uses I can think of, like booting Linux off of SSH, or perhaps booting Linux off of a Git repository and tracking every change in Git using gitfs. The possibilities are endless, despite the middling usefulness.
If there is anything I know about technology, it's that moving everything to The Cloud is the current trend. As such, I am prepared to commercialize this for any company wishing to leave their unreliable hardware storage behind and move entirely to The Cloud. Please request a quote if you are interested in True Cloud Native Computing.
Unfortunately, I don't know what to do next with this. Maybe I should install Nix?
Thoughts? Comments? Opinions? Feel free to share (relevant) ones with me! Contact me here if you want.
2024-04-02 04:10:00
As some of you may know, I go to college in the midwestern United States. This means that the land here is pretty flat, and my classes are like a mile from my apartment. Why would I want to walk back and forth multiple times a day when I could do it in style?
Of course, I am not the only one beridden by this plight. Popular solutions include:
These are all valid options, and do not include some of the more esoteric solutions I have seen around campus. After all, Purdue is an engineering school, and midwestern engineering is the best kind of engineering. Of course, I entered college with delusions of grandeur. I had been watching too much of engineering YouTube recently, so I had an idea of what I would need in terms of motor controllers, brushless motors, LiPo batteries, and so on.
Yeah, I never ended up doing anything with that budget scooter from Wal·Mart, other than ride it around for the first month and a half of my college career, burn out the wheels, and sell it for a few bucks to someone else.
Thus begins my journey into skating.
Many many years ago, in a state far far away, little Ersei saw the Cool Older Kids (they were like ten years old at the time) on their new RipStiks. For those not in the know, a RipStik is a caster board: two pivoting wheels under two joined twisty boards that you move by twisting your lower body.
Years later, I got my wish and became the sickest, most badical person on the block. Buoyed by my ego, I went down the fairly-steep hill in my neighbourhood, hit a pebble, and tumbled a solid ten feet.
Luckily, I landed halfway in the grass and only got a few scrapes (and I was wearing a helmet).
This exemplified the largest issue with my dreams of skating around as a means of transport: the wheels were unforgiving, and the roads were brutal. Of course, I ignored all of those warning signs and continued with my dreams.
I was back home in October, and I had come with an idea: I would take my old, decrepit, and waterlogged skateboard that my parents bought for me a decade ago to soothe my jealousy of the older kids, and make it work for me. In previous years, it had been used exclusively as a seat for the swingset I hung from a sturdy tree around the time I got it.
My friend, Merrick, tried to teach me how to skate all those years ago, but I was too scared to get on the skateboard. His voice lives on in my head, echoing the one bit of knowledge that has survived the test of time: "bigger wheels means you go faster". I will never forget that.
I put bigger wheels I cannibalized from my old RipStik on the skateboard and left everything else as-is. Eagle-eyed readers will notice a crack in the plastic. This will come back to haunt me later.
When I got back to college after whatever break happened during October 11, 2022, I got to learning how to skate. After a few days and one helmet, I got the hang of it and started using it to get around campus.
Soon afterward, I hit something in the road, fell on my face, and messed up my hand and wrist1. I refuse to upload a picture.
Naturally, this did not stop me, and I continued to skate around.
This skateboard was trying to kill me. There is no other explanation. A few of my friends who had been skating for years tried out the abomination I made and collectively agreed to never touch it again. And yet I continued, as skateboard parts are expensive.
However, I was flush with money from graduation, and a few Amazon gift cards later, I decided to overhaul my skateboard. I spent a few bucks putting on new grip tape (badly) as the original board had practically none left. I got some secondhand trucks, nice bearings, and big squishy wheels. For safety.
And wow was this a lot better. The only part I kept around was the deck, since the Bumblebee decal had since become iconic2.
I did continue to eat asphalt on occasion as I got better at skateboarding. Here is a non-exhaustive list of the lessons I've learned so far:
These lessons were written in blood. Don't worry, I jumped off of the skateboard before I crashed into the person who started crossing the street without looking, and the car driven by someone on their phone turning into the bike lane3.
A great artist blames everyone else, I guess.
No, since this skateboard I've put together is too bottom-heavy to do any kind of tricks. I can do a manual, and jump curbs, but that's about it.
A small wrench thrown into my plans for tricks is that my skateboard deck is apparently really small. It's a child's deck, since, y'know, it was originally bought for a child (me). It also seems like new decks are expensive, so whenever this one breaks I'll upgrade to a properly-sized one.
Ignoring the surface-level arguments that I can throw out, like the "convenience" of being able to carry it around, or how "cool" skating looks, or how "low-maintenance"4 a skateboard is, the reason I like skating is that it is fun.
In the end, why do anything if it's not enjoyable, either now or for delayed gratification? I can't count the number of nights I was feeling terrible, and skating outside for a few miles made me feel a lot better. Skateboarding is enjoyable, much like writing. There is always a risk to be taken with activities, and I've gotten good enough that I no longer worry about falling off my skateboard and getting hurt.
There's a certain amount of enjoyment that you can derive from anything, and a good balance of enjoyment and need is important. An activity that is pure enjoyment with no need (Hermitcraft) must be balanced out by something that is all need and no enjoyment (homework). Skateboarding lies somewhere in between, as an activity that you need to do but also enjoy doing. It balances itself. Spending a few hours aimlessly skating around and enjoying the weather is offset by the need to get to class on time.
I'm not alone in enjoying skateboarding. There are plenty of people around campus who get around with longboards and skateboards, or even something weirder.
That brings me to hobbies. Hobbies ride that line between work and pleasure, a mix of "need" and "want". Take selfhosting as an example—what started out as learning and fun has evolved into a "need", in which I now need to keep my DNS server up, my RSS reader, my photo backups, and so on. But that "need" does not make it bad! Quite the contrary, it balances out the "want" of selfhosting, and makes it, in my opinion, ultimately more enjoyable.
This can be used in the opposite direction too. With the rise of "hustle culture", and the rampant "monetize your hobby", it swings the balance in the other direction. What was once enjoyable is now work, and it becomes unsatisfying and sad.
That does bring into question going into the technology industry as someone who is already pursuing computers as a hobby, much like myself. In high school, computers were a hobby for me. I was not graded on how well I could do computer science, but in college I am.
The scale has shifted, and computer science and programming are not nearly as fun anymore. Staying consistent with my argument, however, I have found other activities to balance it out, like skateboarding, making jewelry, and writing: all things that cannot be monetized (or if they can, they won't make enough money to be worthwhile). However, this is not sustainable, as every hobby requires time, and time is finite. Adding more work and expecting to balance it will stop working the moment the time work requires exceeds the time left for enjoyment.
Please don't turn your hobby into a job. It's not worth it.
Gladly! My skateboard is a 7.75 inch wide deck with 70 Millimeter 78A Sector 9 Nineballs wheels (they're pretty worn, and I wanna try harder wheels to learn to slide), Bones Reds steel bearings (maybe I'll upgrade to ceramic if I want to do more wet-weather riding), and Paris 129 Millimeter Street TKP trucks (lubricated with bar soap). The only original parts are the deck and the screws holding the trucks on, but those are rusted through, and I need to replace them.
It's developing a nasty razor tail, and the board is beginning to chip a little at the back, mostly since I dropped it.
I avoid riding it if water has pooled outside, but it's fine to ride when the ground is merely wet, since the bearings are properly lubricated and sealed, and the wheels are soft enough to keep their grip.
I don't wear anything special when skateboarding, just jeans, a hoodie/beanie if it's not warm enough, and gloves to protect my hands if it's cold or if I am scared of falling off that day. I just wear my normal shoes, but that's a bad idea since the grip has disappeared in like two months.
The skateboard is a bit high off of the ground and the wheels are pretty soft, so it does take more energy to skate around compared to normal skateboards or dropped longboards5. The height does help when going off of curbs, so that's nice.
Still have questions about my skateboarding hobby? Feel free to contact me.
This was written for April Cools, and I've been meaning to write, so here y'all go.
I've written the word skate so many times that it's beginning to look weird.
Second time I've messed up my wrist, the first was falling off of my bike like a decade ago. ↩
People actually recognize my skateboard around campus because of that and people come up to me at one in the morning in a Five Guys to comment on it. I do like the attention. ↩
This was a strange story, but I had not five minutes prior fallen off my skateboard. If I had not, then the timing would be perfect for me to hit the car. The person coming the opposite direction on their electric skateboard was not nearly so lucky, but they managed to dodge the car as well, and was not badly hurt. ↩
Once every month or so, I open up the ball bearings, clean them out, and put new lubricant in them. ↩
A longboard whose deck is mounted lower than the trucks, so that it sits closer to the ground. ↩