Blog of Jonas Hietala

A writer, developer and wannabe code monkey.

Planning my Kubernetes homelab

2026-05-05 08:00:00

The Kubernetes iceberg.

If I had to describe my homelab setup with an analogy, it would be me on a unicycle balancing plates in both hands, or maybe a leaking barrel that I try to patch up with duct tape.

I’ve also been Kubernetes-curious so I decided to completely redesign my homelab, centered around Kubernetes. It was a bit painful but at least it fulfilled my need for procrastination very well.

Overarching goals

I’ve got three goals with the setup:

  1. Declarative, reproducible, and automated

    The big goal is to have everything declarative in a single git repository and to easily be able to bootstrap from nothing to a fully working setup.

    I want to use Infrastructure as Code to create the Kubernetes cluster and GitOps to populate it with all my services automatically from the repo. It should be really easy to make a change; I want to move away from having to ssh into the right machine and manually do stuff.

  2. Backups, backups, backups

    While a proper GitOps setup means that infrastructure and configuration files are inherently backed up, a proper backup setup is still crucial.

    Ask me how I know.
    No, please don’t.

    I haven’t had a proper (as in working) backup solution for years and this time I should have it from the start.

  3. Documentation

    What if I could document my setup, so future me has a chance to understand what’s happening? Writing documentation is boring, so I’ll write some blog posts instead.

I’m a bit skeptical that I can fulfill all three goals, but if I manage 2/3 or even 1/3 it’s still a big win compared to my old setup.

Kubernetes, too complicated?

It’s a fair question; the most common critique of Kubernetes is that it’s just too complicated (especially for a homelab). Discussions online are filled with comments such as:

Kubernetes has to be the most complex software I’ve ever tried to learn. I eventually gave up and decided to stick with simple single machine docker-compose deployments.

“Let’s use Kubernetes!”
Now you have 8 problems

So why would I choose Kubernetes?

Because, for whatever reason, Kubernetes is very popular, and for every comment complaining about complexity you have comments extolling its virtues:

I was skeptical about Kubernetes but I now understand why it’s popular. The alternatives are all based on kludgy shell/Python scripts or proprietary cloud products.

Kubernetes is the biggest quality-of-life improvement I’ve experienced in my career

Having experienced the single machine docker-compose deployments, the kludgy shell scripts, and the proprietary cloud products, I think I need to use Kubernetes myself to be able to form an opinion on it.

And in some ways, isn’t experimentation a core part of homelabbing?

Tech stack

There are many tech choices for this kind of setup and most of them are reasonable. I don’t know if my choices are; some were made because they sounded cool, others because I simply had to pick one.

Here’s a list of some of the choices I made, which we’ll set up in this series:

  • Talos Linux for Kubernetes nodes.

    The coolest way to run Kubernetes. Lightweight and secure, what’s not to like?

  • Terraform to provision VMs on Proxmox and to initialize Talos Linux.

  • Cilium for proxying, CNI, load balancer, and Gateway API provider.

    I opted for Cilium as it’s one dependency replacing several alternatives (such as kube-proxy, MetalLB, and Traefik, which I was leaning towards at first). The Gateway API is the new thing you “should” use instead of Ingress, and I wanted to try it out.

  • ArgoCD for GitOps.

    If it were purely for myself, FluxCD might have been the better, simpler choice, but we might use ArgoCD at work and I don’t want to deal with two separate systems at the moment.

  • Renovate to keep dependencies up-to-date.

  • CloudNativePG for Postgres on Kubernetes.

    I’ll also set up TimescaleDB, although we won’t use it in this series. It’s just to prepare for the future migration of long-term statistics from Home Assistant.

  • Longhorn on NVMe drives for persistent storage.

    Data is backed up using VolSync and Restic.

  • Sanoid, Syncoid and Kopia for backup archive management.

    Backups are snapshotted and stored on ZFS, then encrypted and shipped off-site to Backblaze for cloud storage. Backups from Longhorn and Postgres arrive at ZFS via Garage, a self-hosted S3 service.

  • Authentik as an identity provider and single-sign-on platform.

    It’s nice to not have to login manually everywhere.
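Since the Gateway API replaces Ingress in this stack, it might help to see what routing traffic to a service looks like in that style. A minimal HTTPRoute attached to a Cilium-provided Gateway could look roughly like this; the names, namespaces, and hostname are all hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: blog
  namespace: blog
spec:
  parentRefs:
    - name: gateway          # the Cilium-managed Gateway (hypothetical name)
      namespace: gateways
  hostnames: ["blog.example.com"]
  rules:
    - backendRefs:
        - name: blog         # backing Service and port (hypothetical)
          port: 80
```

Unlike an Ingress, the Gateway (infrastructure) and the routes (application config) are separate resources, which fits a GitOps repo split nicely.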

Huh. Displayed like this it looks like a lot, but fear not! It’ll be worth it in the end.

In the next part we’ll start by creating VMs and getting a Kubernetes cluster up and running.
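The overall flow the goals above imply, from empty hardware to a self-populating cluster, can be sketched as shell. This is an outline rather than the actual scripts, so the commands are wrapped in a dry-run helper that only prints them; the directory layout and node IP are hypothetical:

```shell
#!/bin/bash
set -euo pipefail

# Dry-run wrapper: print each step instead of executing it.
# Swap the echo for "$@" to actually run the commands.
run() { echo "+ $*"; }

run terraform -chdir=infra apply -auto-approve  # provision Proxmox VMs running Talos
run talosctl bootstrap --nodes 10.0.0.10        # bootstrap the Talos control plane (hypothetical IP)
run talosctl kubeconfig                         # fetch the cluster's kubeconfig
run kubectl apply -k bootstrap/argocd           # install ArgoCD; GitOps syncs everything else
```

After the last step ArgoCD takes over and reconciles the rest of the services from the git repository, which is the whole point of goal 1.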

From GitHub to Codeberg/Forgejo

2026-04-28 08:00:00

Respect your users and their confidence in you, “Microsoft” GitHub.

After years of waffling around I finally bit the bullet and migrated away from GitHub onto Codeberg and a private Forgejo instance. If Codeberg is good enough for Gentoo then it’s good enough for me.

What’s the problem with GitHub?

One part of my GitHub aversion is that I’m against the big American tech corporations for ideological reasons. I’d like to reduce my use of and dependence on Google/Facebook/Apple/Microsoft/Amazon etc. where I can, and moving away from GitHub fits that goal nicely.

The other reason is GitHub’s enshittification. GitHub has been slow and slightly buggy for years and it’s not getting better. They push out badly planned features while shipping this kind of code in the GitHub Actions runner:

#!/bin/bash
SECONDS=0
while [[ $SECONDS != $1 ]]; do
    :
done

(This apparently broke Zig and caused them to leave for Codeberg.)

You may not like it but this is what peak vibe coding looks like

I know it’s a snarky comment, but with a CEO who says “embrace AI or get out”, it’s hard to resist.
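For reference, the loop quoted above busy-waits, pegging a CPU core at 100% for the entire duration. The conventional fix is a plain sleep, sketched here as a tiny function:

```shell
#!/bin/bash
# Wait for $1 seconds by sleeping instead of busy-looping,
# which leaves the CPU idle for the whole duration.
wait_seconds() {
  sleep "$1"
}

wait_seconds 1
```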

There’s empirical data to back up GitHub’s unreliability; just check out these uptime logs (taken 2026-04-27 from third party sites since the official status page predictably lies):

Screenshot from https://mrshu.github.io/github-statuses/
Screenshot from https://damrnelson.github.io/github-historical-uptime/

They don’t call it “Microslop” for nothing.

Self-hosted + managed

Codeberg is based on Forgejo, which is great to self-host. I’ve had it running for a few weeks while playing with my homelab and it feels exceptionally fast. The web UI is super responsive and I frequently have to double-check that I actually pushed, because it finishes so quickly.

I would love to have that speed and privacy for all my repositories, but I’ve got some that I want to be public (the source for this site, for example). I considered a few different setups:

  1. Sync back changes to GitHub via Forgejo’s built-in GitHub sync?

    (Keeping GitHub active would defeat the point a little though.)

  2. Sync changes from my Forgejo instance to Codeberg?

    (Maybe annoying to manage multiple repos?)

  3. Only use Codeberg?

    (I’d lose speed and privacy for my private repos.)

  4. Expose my Forgejo instance running in my homelab?

    (The internet is a scary place.)

  5. Setup a public Forgejo on my Hetzner VPS?

    (I’d still have to protect it and manage traffic.)

In the end I decided to use Codeberg for my public-facing repositories and Forgejo as my main interface (for both public and private repos).

Some of my public repos are close to read-only (this site’s source for instance), so I’ve set up a mirror where Forgejo pushes changes to Codeberg automatically. Pulling changes back from Codeberg into Forgejo is more awkward; I could script it, but pull requests from others are rare enough that I can handle them manually. Other repos (such as tree-sitter-djot) are left alone as they’re more collaborative in nature and I can’t be bothered to keep two sources in sync.
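Forgejo’s push mirrors handle this syncing server-side, configured per repository in the web UI. As an illustration of the same idea done client-side, plain git can also update both forges with a single `git push` by giving the origin remote two push URLs (the repository and URLs here are made up for the demo):

```shell
# Create a throwaway demo repository in a temporary directory.
repo=$(mktemp -d)
cd "$repo"
git init -q

# Fetch from the primary forge as usual.
git remote add origin https://forgejo.example.com/jonas/blog.git

# A remote may carry several push URLs; `git push origin` then updates all of them.
git remote set-url --add --push origin https://forgejo.example.com/jonas/blog.git
git remote set-url --add --push origin https://codeberg.org/jonas/blog.git

# List the configured push URLs (no network access needed).
git remote get-url --push --all origin
```

Note that adding the first `--push` URL replaces the implicit one, which is why the Forgejo URL is repeated explicitly before adding Codeberg.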

Is it good?

Yes, both Codeberg and Forgejo are very good. They are snappy and speedy and there are no features I miss from either GitHub or GitLab (and plenty I’m glad to avoid—getting AI shoved into every crevice for instance).
(Yes, I used an em-dash on purpose.)

At the moment Codeberg is admittedly having periods of pretty bad performance. They’ve been under a DDoS attack for quite some time, which has been frustrating.

The migration

The migration wasn’t difficult, just a bit repetitive.

For private repositories I just deleted them from GitHub and pushed them to Forgejo.

Public repositories had a few more steps:

  1. Push them to Forgejo

  2. Push them to Codeberg

  3. Add a header redirecting to Codeberg
  4. Archive them on GitHub

A work week one bag travel

2026-03-10 08:00:00

Life begins at the end of your comfort zone.

Neale Donald Walsch

I’m lucky to have a job where I can work remotely, as it allows me to live in a small community with no tech jobs anywhere close. It does require me to travel a few weeks per year to the office, but I don’t mind that much as I appreciate minor doses of socializing occasionally.

I recently spent five nights on a trip with only a single backpack and it was a surprisingly great experience.

How I used to travel

I’ve previously used these two bags for my trips.

I’ve had these work trips for years and I didn’t put too much thought into how to travel. Like most people I simply filled a suitcase that I checked in together with a backpack that I brought on the airplane.

I didn’t quite know how much to pack, so I always packed a little more than I needed. For example, if I was away 5 nights I brought 7 pairs of underwear (in case disaster struck twice). When I returned I always had a bunch of unused clothes, but that felt better than having too little.

Because I had so much space I could bring a lot of things: my own pillow, a handful of books, gigantic boardgames, and I still had space left to bring back lots of gifts.

What bags?

I’ve had two backpacks that I used to travel with:

  1. Datsusaru Battlepack Core

    It’s a really sturdy backpack that’s great to bring to BJJ training, but it’s very heavy and it’s not ideal as a traveling bag.

  2. M-Tac Backpack Urban Line

    I bought this bag recently but I wasn’t too impressed. It was too small for a traveling bag and the zipper broke after just a few trips.

I’ve had a few suitcases that broke down, but I can’t remember their brands. Most recently I’ve been using the Samsonite Essens 69 cm, which has held up great so far.

One bag travel

I’ve been spending five nights away 4–5 times a year on business trips. It’s not a crazy amount but also not negligible, so I figured it was worth trying to optimize them a bit.

Enter one bag travel.

While I was reasonably comfortable during my travels, there are a few things that intrigued me about one bag travel:

  1. I wouldn’t have to roll around a big suitcase.
  2. The trip would be more streamlined without having to check in luggage and wait for it during flights.
  3. I wouldn’t have to worry about lost luggage (although to be fair, that hasn’t happened to me so far).
  4. No more trying to find space for my suitcase on the train, or worrying that someone might grab it and walk off the train without me noticing.

In short, the actual trip would be more convenient and less worrisome…
If I could make it fit.

The packing list

All the stuff I packed into a bag for a week of travel.
  • Water bottle

    I later decided not to bring it.

  • 5 pairs of socks and underwear

  • 3 t-shirts, 1 long-sleeve t-shirt

  • 1 slightly thicker long-sleeve shirt

  • Training gear for Submission Wrestling

    Shorts, rashguard, spats, knee pads, and mouth guard.

  • 1 pair of pants

  • Mobile phone charger

  • A Framework 13 laptop + charger

  • Laptop–headphone cable

    I also brought the Sony WH-1000X M3 noise canceling headphones that I wore during the trip.

  • Remarkable 2

  • A bag for dirty clothes

  • Toothbrush & toothpaste

  • Power bank

  • Vitamins & medicine

  • Nail clippers, tape, Whoop body holder

  • Earbuds & sleep mask

  • A book—the first two books in the Murderbot Diaries series.

Things to wear

Clothes I wore during the travel.

I saw the advice that the clothes you travel in are also very important. They have a point.

  • Pants with large pockets with zippers.

    I could use the pants the whole week if I wanted to.

  • A Houdini hoodie.

    Really warm and cozy, should last the whole week.

  • A thin jacket.

    Paired with the hoodie it provides a decent enough protection against the weather. I’m not going on a hiking trip; I’ll be moving between the office, the hotel, and restaurants.

    It’s also thin enough so I can fit it into the fully packed backpack.

  • A cap and gloves.

    I’m traveling in Sweden, it’s still fairly cold here.

What I wish I brought

  1. It was a bit too cold on some days. A warmer jacket, long johns, or a nudge would’ve been good.
  2. The next Murderbot book. The first was fantastic! I had to visit a local bookstore, where I bought Lock In. Loved that too.

Gearing up

Before I could try out one bag travel I had to get a new backpack. I ended up with the Fyro Levo 30L backpack mostly because the creator had a bunch of cool videos about bags… As shipping was expensive I also got their packing cubes, the retractable key leash, and a 1L sling (that I didn’t use).

This isn’t a review of their stuff; there are probably better options out there but I’m too inexperienced to say. Maybe I should’ve gotten the 36L backpack, and the sling was an unnecessary purchase, but other than that I’ve been very happy with the Fyro products.

Packing in

Will it fit?

All clothes are packed into the packing cubes: one cube with the Submission Wrestling training clothes, one with underwear, and the largest with the rest of the clothes.
The packing cubes fit into the main compartment perfectly.
Medicine, earplugs, mouth guard, and other stuff went into these pockets in the main compartment.
The laptop and Remarkable go in the back. The electronics compartment is a little tight with the charger, power bank, and some other stuff, but it closes.
The quick access compartment in the front with toothpaste and mobile charger.

One of the biggest critiques against the bag is that the pockets here are so loose that things might fall out. When the bag is fully packed this has not been my experience—almost the reverse: fully packed, they’re a bit too tight, but empty they’re too loose.

Keys and travel documents in the “passport” pocket right next to the back.
The fully packed bag.

I did fit everything I wanted to bring. Although I skipped the water bottle it would’ve fit into the water bottle holder on the side. Maybe I’ll bring it next time.

The fully packed bag weighed in at 7.3 kg.

On the road

I managed to push down my jacket, gloves, cap, and book into the front pocket.

I do love the large front compartment.

The Levo has a small loop to hang up the bag in the bathroom. Nice.
With some force I could fit the bag under the seat in front of me on the train.
The bag under the seat in the domestic SAS flight.

I was a bit worried that the bag wouldn’t fit under the front seat but it worked out well. I didn’t take a picture of it but it fit in the overhead compartment during the flight too.

Although the extra space of the 36L backpack would’ve been greatly appreciated I fear that it might have made the bag too large to fit under the seat. In the end I think I prefer the convenience of having the bag under the seat over the extra 6L of space.

From hotel to office

Bag without packing cubes.

The bag was also used to transport the laptop and other electronics between the hotel and office. I was worried that a huge bag would be a chore to bring but that’s been a non-issue. The only minor problem is that the bag doesn’t stand by itself without the packing cubes.

Pros and cons

One’s destination is never a place, but a new way of seeing things.

Henry Miller

Overall I’ve really enjoyed my one bag traveling experience. I managed to fit everything into a single bag and the trip was a lot easier and more streamlined.

There are however some downsides that bother me quite a bit:

  • I couldn’t bring Luthier, my new favorite boardgame, on the trip.

    My work trips have been one of the best ways to reduce my boardgame list-of-shame (bought and unplayed boardgames). Or to play my favorite games again.

  • There’s not enough space to bring back gifts, such as clothes or LEGO.

    Then again, maybe I don’t need to always bring back gifts? Or maybe small gifts are good enough?

  • I can’t go nuts in the local book stores.

    Last time I brought back five new fantasy and sci-fi books. This time I bought a small book that I could fit at the back above the laptop.

I haven’t yet decided whether I’ll continue with one bag travel or start checking in a suitcase again. Either way, it’s been eye-opening how much you can do with a regular backpack.

2025 in review

2026-01-03 08:00:00

It’s time for a yearly review again. Time flies.

Nerdy things I enjoyed

  • I read a few books this year.

    The whole Dungeon Crawler Carl series was absolutely amazing and it quickly jumped up to one of my favorite series of all time.

    I can’t be held accountable for everything I’ve ever said to a stripper.

    Princess Donut
  • Board games are great.

    I still don’t play nearly as much as I’d like to, but my kids are quickly growing up and to my eternal joy they’re all interested in board games! Isidor (8) has been loving Radlands, Loke (5) likes Chronicles of Avel, and Freja (3) loves Dragon’s Breath. My personal pick is Luthier; I think it might be my absolute favorite game right now.

  • Computer games have made a small comeback.

    As my kids have been gaming more I’ve regained a little bit of interest in gaming too. I’ve been playing a little Core Keeper and Beyond All Reason with Isidor and it felt really fun.

  • Migrated my Neovim config to Fennel.

    Eh, it was fun to play with a new language.

Things I accomplished

  • I wrote 12 blog posts, not great, not terrible.

  • I managed to push back my fantasy writing ambitions to the future.

    It’s important as I tend to get stuck on things, preventing me from making progress in other parts. I really don’t have the time or energy to try to write a fantasy book right now, but making my brain see that wasn’t easy.

  • 2025 was the first full year running my company.

    I’m still consulting for the same company I started out with and I enjoy it. It feels good to say that I haven’t blown it completely just yet.

  • Switched to GrapheneOS.

    I took a big step forward in de-Googling and to increase my personal security by moving to GrapheneOS. I’m super happy about it.

  • Built my second 3D-printer, a VORON 0.

    My VORON 0.

    It’s good! It’s fun! And I’m already planning my next printer…

  • We migrated away from Sonos in our kitchen.

    We got tired of buggy vendor lock-in and now we use my own Music Assistant based setup. If things break we can now blame me instead of Sonos.

  • I built my first server rack to host a new homelab server.

    A server rack with homelab stuff.

    I still need to clean up the cables and write a blog post about it.

Tentative plans/goals/wishes for 2026

  1. Go on some roller coasters with the family.

    We’re planning to go to Liseberg in Göteborg and I know the kids will love it.

  2. Lose weight and build muscle.

    I’ve been having a rough year training- and health-wise. Not being able to keep a consistent strength training routine, inconsistent sleep, and bouts of depression have made me gain too much weight again.

    Getting it sorted should (must) be a high priority this year.

  3. Get our own training space for our martial arts club.

    I’ve been wanting to get our own place for quite a while and I feel that we need to make it happen sooner rather than later.

  4. Develop a company product MVP.

    While I enjoy the consulting work I do at the moment my goal is to one day have a project of my own that can earn me money. I have a couple of ideas I want to pursue and I was going to do it in 2025 but I didn’t get far enough for that.

  5. Migrate my homelab to the recently built server and expand on it.

    I have this newly built server that’s just sitting idle. I need to migrate my old services to it and I’ve got ideas for lots of stuff I want to put there.

3D printer repairing and modding

2025-12-02 08:00:00

I’ve had my VORON Trident for 2 years and I’ve run it for 2600 hours. Overall I’m happy with the printer but I’ve been itching to make some more mods to it. Having finally finished the VORON 0 (with mods) I now have a backup printer I can use to rescue myself when I screw up.

The printer was starting to crap out: a leadscrew was grinding down again, the chamber thermistor had stopped working, and PLA was clogging up the Rapido hotend again. It was time for a bit of a rebuild.

The plan

Besides fixing the printer I also wanted to prepare for a multi-color solution such as the Box Turtle and make some quality of life changes.

  1. Replace the problematic leadscrew with a replacement part I received from LDO and replace the POM nuts on the other leadscrews.

  2. Install the Inverted electronics mod.

    I’ve been using the RockNRoll mod to give access to the electronics compartment by tilting the printer backwards. The Inverted electronics mod would instead allow me to lift the bottom plate to access the electronics compartment and I want to do it before installing a Box Turtle on top of the printer.

  3. Replace the Stealthburner with the Jabberwocky toolhead.

    This introduced a series of changes:

    1. Move to an umbilical setup with the Nitehawk36 toolboard.

    2. Use sensorless homing to get rid of the Y drag chain.

    3. Replace TAP with the Beacon probe.

    4. Finally, install the Jabberwocky.

Replacing the POM nuts

I’ve had issues before where one of the POM nuts was ground down, and I felt it was happening again. The printer didn’t completely fail like before, but it was sometimes producing really bad first layers in that same corner and the Z probe was occasionally failing to adjust Z tilt.

I replaced all three POM nuts together with the whole lead screw (I got a new one sent to me by LDO the first time it failed but I hadn’t installed it yet).

Dust on the lead screw and all three nuts show signs of damage, although the leftmost is clearly worse off.

This is apparently a common problem with some LDO kits that have coated lead screws. I still have two of the old ones that I may have to replace in the future.

Inverted electronics

I’ve been looking at the Inverted electronics mod even before finishing my Trident printer. But it wasn’t possible with the Print It Forward service I used to print parts for my first 3D printer, and after the printer was completed I didn’t feel the need to redo the wiring again.

Wiring before ripping it all out.
Underside of the printer with the inverted rails installed.
Installing the first components on the rails.
The electronics are reinstalled and up and running. This is not the final configuration, just a snapshot of when I got it running.
The electronics with cables cleaned up a little.

Overall it was surprisingly easy to reinstall all the electronics. It was made easier by the move to umbilical and a single USB cable to the toolhead as it removed quite a bit of wiring:

Heap of things I removed from the printer when moving to umbilical and sensorless homing.

One issue I had with the mod is that the cutouts for the Z motors were a bit large, leaving gaps through which stray filament or heat can escape. I tried to cover them up by placing some foam tape around the motors:

The Z motor mounts have a gap between them and the electronics cover. I tried to fill them in with foam tape from below.

Why the Jabberwocky?

I’ve been wanting to replace the Stealthburner toolhead for a long time:

  1. The cooling for PLA is quite bad.
  2. PLA has a tendency to clog (seems like a decently common problem with Rapido and Stealthburner).
  3. Resolving a clog when it happens is a pain in the ass.
  4. It lacks a filament sensor and a cutter (for multi-color).

But what to choose? There are quite a few interesting toolheads I considered:

  1. Dragon Burner

    I use the Dragon Burner in my VORON 0 and using the same toolhead is boring.

  2. Archetype

    A pretty fun toolhead and I was considering the Mjölnir version. It does require you to flip your XY joints to hang upside down and I couldn’t find a filament sensor or filament cutter for it, so I ended up skipping it.

  3. XOL

    XOL seems like a very well regarded and mature option with tons of support. It boasts much better cooling for PLA, which is one of the main reasons I want to migrate away from the Stealthburner.

  4. A4T-toolhead

    A4T seems similar to XOL, while having even better cooling and a slightly simpler assembly. It would also make use of the Dragon hotend I’ve got lying here, gathering dust.

  5. Jabberwocky

    An all-in-one toolhead solution with filament sensors and a filament cutter that seems to have some quality of life features I think I’d really enjoy:

    Flip up Extruder. Probably an industry first, a tool-less easy to access toolhead design so that one can access the blade or the filament path for servicing and troubleshooting. This allows a user, in the event of hopefully a rare problem during a filament changing print the ability to access the filament path to clear it of issues and continue with a print job.

The A4T-toolhead is interesting but the (supposedly) easier maintenance and multi-color consistency of the Jabberwocky really appealed to me.

Building the Jabberwocky

The bottom of the extruder with a piece of filament sticking through.

I struggled a bit to get the filament to load/unload consistently by hand. I rebuilt the toolhead, but in the end I believe I just didn’t have enough grip on the filament to guide it past the gears and down into the bottom hole.

The bottom part of the toolhead with fans installed.
The back with Nitehawk36 but without the hotend installed.
The front but without the cover for the upper LED. (I forgot to print it before the printers went uncooperative.)

Beacon wiring

Beacon is installed.

Most of the wiring came as-is except for the cable between the Beacon and the Nitehawk36. I got the Nitehawk36 side of the cable pre-made in the Nitehawk36 kit but I had to pin the Beacon side myself.

The colors of the wires in the cable were all over the place, but there’s a description on the PCBs of both the Nitehawk36 and the Beacon, so I just had to take care to match them. I also referenced the Nitehawk36 documentation and the Beacon documentation.

The wire between the Beacon and Nitehawk36.

Cutter installation woes

I had real difficulties installing the blade into the blade holder. There was some filament in the hole (likely due to poor print tuning) and I managed to break the holder when I tried to install the blade:

I broke the blade holder when I tried to force in the blade.
The lower part of the extruder where the blade will cut the filament.

As I didn’t have a working printer when it broke I had to make it work without the filament cutter initially. Luckily I didn’t break anything crucial…

Software setup

I had to make some software changes but luckily they were quite straightforward:

  1. Use sensorless homing.

    I just followed the VORON documentation.

  2. Setup the Nitehawk36 toolboard.

    LDO has setup instructions and the Jabberwocky GitHub contains klipper settings.

  3. Setup Beacon for Z offset and mesh calibration.

    Their quickstart documentation was fast and easy. I did not setup Beacon Contact; maybe I’ll get to it one day.

What’s next?

The printer is finally printing again!

After months of not having a working 3D printer I’ve gotten renewed energy to play around with the printer again. I’ve got some loose plans for some mods to make on this printer:

… Or maybe something else? Who knows!

Packing Neovim with Fennel

2025-10-29 08:00:00

I’ve got lots of stuff to do but I ended up rewriting my Neovim config instead.

… anyone can do any amount of work, provided it isn’t the work he is supposed to be doing at that moment.

Robert Benchley, in Chips off the Old Benchley, 1949

My partner Veronica is amazing in that she’ll listen to my bullshit and random whims (or at least pretend to). That’s a big benefit of having a blog: I get an outlet for rambling about my weird projects and random fixations, sparing Veronica’s sanity a little.

I know that Veronica won’t be impressed by another Neovim config rewrite (even when done in Lisp!) so I’ll simply write a big blog post about it.

The rewrite

I wanted to rewrite my Neovim configuration in Fennel (a Lisp that compiles to Lua) and while doing so I wanted to migrate from lazy.nvim to Neovim’s new built-in package manager vim.pack.

This included bootstrapping Fennel compilation for Neovim; replicating missing features from lazy.nvim such as running build scripts and lazy loading; modernizing my LSP and treesitter config; and trying out some new interesting plugins.

Please see my new Neovim config here.

Why Fennel?

Lua has been a fantastic boon to Neovim and it’s a significant improvement over Vimscript, yet I can’t help but raise an eyebrow when I hear people describe Lua as a great language. It’s definitely great at being a simple and fast embeddable language but the language itself leaves me wanting more.

Fennel doesn’t solve every issue, as some of Lua’s quirks bleed through, but it should make things a little nicer. I particularly like the destructuring, the more functional programming constructs, macros for convenient DSLs, and the amazing threading (pipe) operator.
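As a tiny taste (my own example, not taken from the config), destructuring and the `->` threading macro look like this in Fennel:

```fennel
;; Destructure a table into locals, then thread a value
;; through a chain of calls with the `->` macro.
(let [{: x : y} {:x 3 :y 4}]
  (-> (+ (* x x) (* y y))
      (math.sqrt)
      (print)))  ; prints 5.0
```

The threaded form expands to `(print (math.sqrt (+ (* x x) (* y y))))`, so deeply nested calls read top-to-bottom instead of inside-out.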

But the biggest reason is that I’m simply a bit bored and trying out new programming languages is fun.

Why vim.pack?

I don’t rewrite my config often. But when I do, I do it properly.

Ancient Neovim wisdom

Folke, maker of many popular plugins such as lazy.nvim and snacks.nvim, recently had a ~5 month break from working on his plugins. Of course, they continued to work and anyone working on open source projects can (and should) take a break whenever they want.

But it exemplifies that core Neovim features will likely be better maintained than standalone plugins and should probably be preferred (if they provide the features you need).

While Neovim’s built-in plugin manager is a work in progress and still a bit too simplistic for my needs I wanted to try it out.

Structuring the config

If you’ve got a small configuration, having it all inside a single init.lua is probably fine. Somehow I’ve gathered almost 6k lines of Lua code under ~/.config/nvim, so having it all in one file isn’t that appealing.

I first wanted to separate the configuration into a core/plugin split, where non-plugin configuration happens in core and plugin configuration lives under plugin. However, to support lazy loading with a single call to vim.pack.add I decided to go back to letting the files under plugin/ return plugin specs, like lazy.nvim does for you.

With Fennel support under fnl/ this is how my configuration is structured:

init.lua ; Minimal bootstrap to load Fennel files
fnl ; All Fennel source in the `fnl/` folder
├── config
│   ├── init.fnl ; Loaded by `init.lua` and loads the rest
│   ├── colorscheme.fnl
│   ├── keymaps.fnl
│   ├── lsp.fnl ; Config may reference plugins
│   └── ...
├── macros.fnl ; Custom Fennel macros goes here
└── plugins
   ├── init.fnl ; Loads everything under `plugins/`
   ├── appearance.fnl
   ├── coding.fnl
   └── ...
lua ; Lua stuff is still loaded transparently
ftplugin
└── djot.lua ; nvim-thyme doesn't load `ftplugin/`

It's not a perfect system as I'd ideally want plugins/ to only add packages while I configure the plugins in config/. But some plugins use lazy loading, making it more convenient to do the configuration together with the plugin spec.

Bootstrapping Fennel

There are a handful of different plugins that allow you to easily write your Neovim config in Fennel. I ended up choosing nvim-thyme because it's fast (it hooks into require and only compiles on demand) and it allows you to mix Fennel and Lua source files.

nvim-thyme contains installation instructions for lazy.nvim and it references a bootstrapping function to run git to manually clone packages. But we’re going to use vim.pack and it makes the bootstrap a bit cleaner:

vim.pack.add({
  -- Fennel environment and compiler.
  "https://github.com/aileot/nvim-thyme",
  "https://git.sr.ht/~technomancy/fennel",
  -- Gives us some pleasant fennel macros.
  "https://github.com/aileot/nvim-laurel",
  -- Enables lazy loading of plugins.
  "https://github.com/BirdeeHub/lze",
}, { confirm = false })

(I added lze to my bootstrapping too as I’ll use it later when adding lazy loading support, it was simpler having it in the bootstrap.)

nvim-thyme also instructs us to override require() calls (so it can compile on demand) and to set up the cache path (where it'll store the compiled Lua files):

-- Override package loading so thyme can hook into `require` calls
-- and generate lua code if the required package is a fennel file.
table.insert(package.loaders, function(...)
  return require("thyme").loader(...)
end)
-- Setup the compile cache path for thyme.
local thyme_cache_prefix = vim.fn.stdpath("cache") .. "/thyme/compiled"
vim.opt.rtp:prepend(thyme_cache_prefix)

And now we’re ready to write the rest of the config with Fennel!

-- Load the rest of the config with transparent fennel support.
require("config")

Now we can continue in Fennel, with fnl/config.fnl or fnl/config/init.fnl:

;; Load all plugins
(require :plugins)
;; Load the other config files
(require :config.colorscheme)
(require :config.keymaps)
(require :config.lsp)
;; etc...

There’s one last thing we should do to make the bootstrap complete: we should call :ThymeCacheClear when nvim-laurel or nvim-thyme changes. The recommended way is to use the PackChanged event, with something like this:

vim.api.nvim_create_autocmd("PackChanged", {
  callback = function(event)
    local name = event.data.spec.name
    if name == "nvim-thyme" or name == "nvim-laurel" then
      require("thyme").setup()
      vim.cmd("ThymeCacheClear")
    end
  end,
  group = vim.api.nvim_create_augroup("init.lua", { clear = true }),
})
vim.pack.add(...)

But if we for example force an update of nvim-laurel (by deleting it with vim.pack.del({"nvim-laurel"}) and restarting Neovim) we get this error:

Error in /home/tree/code/nvim-conf/init.lua..PackChanged Autocommands for "*":
Lua callback: /home/tree/code/nvim-conf/init.lua:12: module 'thyme' not found:
no field package.preload['thyme']
cache_loader: module 'thyme' not found
cache_loader_lib: module 'thyme' not found
no file './thyme.lua'
...

There is no ordering guarantee for the packages, so PackChanged for nvim-laurel may run before thyme has been loaded. I worked around this with a variable that I check after vim.pack.add, which guarantees that all packages have been added before we try to require a package:

local rebuild_thyme = false
vim.api.nvim_create_autocmd("PackChanged", {
  callback = function(event)
    local name = event.data.spec.name
    if name == "nvim-thyme" or name == "nvim-laurel" then
      rebuild_thyme = true
    end
  end,
  group = vim.api.nvim_create_augroup("init.lua", { clear = true }),
})
vim.pack.add(...)

table.insert(package.loaders, function(...)
  return require("thyme").loader(...)
end)
local thyme_cache_prefix = vim.fn.stdpath("cache") .. "/thyme/compiled"
vim.opt.rtp:prepend(thyme_cache_prefix)
require("thyme").setup()

-- Rebuild thyme cache after `vim.pack.add` to avoid dependency issues
-- and to make sure all packages are loaded.
if rebuild_thyme then
  vim.cmd("ThymeCacheClear")
end

Building a convenient plugin management system

I wanted to migrate to vim.pack but it’s missing a few key features from lazy.nvim:

  • It can’t automatically require all files under a directory.
  • There’s no lazy loading support.
  • It can’t run build scripts (such as make after install or update).

I could’ve given up and gone back to lazy.nvim but that just wouldn’t do.

Source pack specs from files

I want to be able to create a file under plugins/, have it return a vim.pack.Spec, and have it automatically added. This is similar to the structured plugins approach of lazy.nvim.

To build this I first list all files under plugins/ like so:

;; List all files, with absolute paths.
(local paths (-> (vim.fn.stdpath "config")
(.. "/fnl/plugins/*")
(vim.fn.glob)
(vim.split "\n")))

This uses Fennel's -> threading macro, Fennel's version of the pipe operator. It's one of my favorite features of Elixir and I was stoked to discover that Fennel has it too. (Fennel actually has even more power with the ->>, -?>, and -?>> operators!)
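For reference, `(-> x (f a) (g b))` expands to `(g (f x a) b)`: each result is threaded in as the first argument of the next call. So the pipeline above is just a flatter way of writing the nested version:

```fennel
;; The threaded pipeline above, written inside-out without `->`:
(local paths (vim.split (vim.fn.glob (.. (vim.fn.stdpath "config")
                                         "/fnl/plugins/*"))
                        "\n"))
```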

Now we need to loop through and transform the paths to relative paths and evaluate the files to get our specs. (I’m using accumulate to explicitly build a list instead of collect as we’ll soon expand on it):

;; Make the paths relative to plugins and remove extension, e.g. "plugins/snacks"
;; and require those packages to get our pack specs.
(local specs (accumulate [acc [] _ abs_path (ipairs paths)]
               (do
                 (local path (string.match abs_path "(plugins/[^./]+)%.fnl$"))
                 (if (and path (not= path "plugins/init"))
                     (do
                       (local mod_res (require path))
                       (table.insert acc mod_res))
                     acc)
                 acc)))

Now we can populate specs from files under plugins/, for example this one which returns a single spec:

{:src "https://github.com/romainl/vim-cool"}

But I also want to be able to return multiple specs:

[{:src "https://github.com/nvim-lua/popup.nvim"}
{:src "https://github.com/nvim-lua/plenary.nvim"}]

To support this we can match on the return value to see if it’s a list, and then loop and insert each spec in the list, otherwise we do as before:

(local specs (accumulate [acc [] _ abs_path (ipairs paths)]
               (do
                 (local path (string.match abs_path "(plugins/[^./]+)%.fnl$"))
                 (if (and path (not= path "plugins/init"))
                     (do
                       (local mod_res (require path))
                       (case mod_res
                         ;; Flatten return if we return a list of specs.
                         [specs]
                         (each [_ spec (ipairs mod_res)]
                           (table.insert acc spec))
                         ;; Can return a string or a single spec.
                         _
                         (table.insert acc mod_res))
                       acc)
                     acc))))

Now all that’s left is to call vim.pack.add with our list of specs and our plugins are now automatically added from files under plugins/:

(vim.pack.add specs {:confirm false})

Lazy loading with lze

lze is a nice and simple plugin to add lazy-loading to vim.pack.

We’ve already added it as a dependency in our init.lua so all we need to do is modify the load parameter to vim.pack.add like so:

;; Override loader when adding to let lze handle lazy loading
;; when specified via the `data` attribute.
(vim.pack.add specs {:load (fn [p]
                             (local spec (or p.spec.data {}))
                             (set spec.name p.spec.name)
                             (local lze (require :lze))
                             (lze.load spec))
                     :confirm false})

Now we can specify lazy loading via the data parameter in our specs:

{:src "https://github.com/romainl/vim-cool"
 :data {:event ["BufReadPost" "BufNewFile"]}}

It relies on wrapping configuration under data but that’s annoying, so let’s simplify things a little.

Simplified specifications

The idea here is to transform the specs before we call vim.pack.add.

We can do it easily when we collect our specs by calling the transform_spec function:

(local specs (accumulate [acc [] _ abs_path (ipairs paths)]
               (do
                 (local path (string.match abs_path "(plugins/[^./]+)%.fnl$"))
                 (if (and path (not= path "plugins/init"))
                     (do
                       (local mod_res (require path))
                       (case mod_res
                         [specs]
                         (each [_ spec (ipairs mod_res)]
                           (table.insert acc (transform_spec spec)))
                         _
                         (table.insert acc (transform_spec mod_res)))
                       acc)
                     acc))))

I want transform_spec to transform this:

{:src "https://github.com/romainl/vim-cool"
 :event ["BufReadPost" "BufNewFile"]}

Into this:

{:src "https://github.com/romainl/vim-cool"
 :data {:event ["BufReadPost" "BufNewFile"]}}

By storing keys other than src, name, and version under a data table. This is what I came up with:

(λ transform_spec [spec]
  "Transform a vim.pack spec and move lze arguments into `data`"
  (case spec
    {}
    (do
      ;; Split keys to vim.pack and rest into `data`.
      (local pack_args {})
      (local data_args {})
      (each [k v (pairs spec)]
        (if (vim.list_contains [:src :name :version] k)
            (tset pack_args k v)
            (tset data_args k v)))
      (set pack_args.data data_args)
      pack_args)
    ;; Bare strings are valid vim.pack specs too.
    other
    other))

Another quality of life feature I’d like is to make it simpler to call setup functions. lazy.nvim again does this well and it’s pretty convenient.

For example, this is how it looks with lze when adding an after hook and calling a setup function:

{:src "https://github.com/A7Lavinraj/fyler.nvim"
 :on_require :fyler
 :after (λ []
          (local fyler (require :fyler))
          (fyler.setup {:icon_provider "nvim_web_devicons"
                        :default_explorer true}))}

What if we could instead do:

{:src "https://github.com/A7Lavinraj/fyler.nvim"
 :on_require :fyler
 :setup {:icon_provider "nvim_web_devicons" :default_explorer true}}

But this is just data and we can transform the second case to the first one fairly easily. In the transform_spec function:

(λ transform_spec [spec]
  "Transform a vim.pack spec and move lze arguments into `data`
  and create an `after` hook if `setup` is specified."
  (case spec
    {}
    (do
      ;; Split keys to vim.pack and rest into `data`.
      (local pack_args {})
      (local data_args {})
      (each [k v (pairs spec)]
        (if (vim.list_contains [:src :name :version] k)
            (tset pack_args k v)
            (tset data_args k v)))
      (λ after [args]
        ;; Call `setup()` functions if needed.
        (when spec.setup
          (local pkg (require spec.on_require))
          (pkg.setup spec.setup))
        ;; Load user specified `after` if it exists.
        (when spec.after
          (spec.after args)))
      (set data_args.after after)
      (set pack_args.data data_args)
      pack_args)
    ;; Bare strings are valid vim.pack specs too.
    other
    other))

How do we figure out the package name to require (since it may differ from the path)? lazy.nvim has a bunch of rules to try to figure this out automatically but I chose to be explicit. lze uses the on_require argument so it can load on a require call (on (require :fyler) for example), which seems like a good idea to reuse.

And just to prevent me from making mistakes, I added a sanity check:

;; `:setup` needs to know what package to require,
;; therefore we use `:on_require`
(when (and spec.setup (not spec.on_require))
  (error (.. "`:setup` specified without `on_require`: "
             (vim.inspect spec))))
(λ after [args]
  ;; ...

Build scripts via PackChanged events

There’s one last feature I really want from lazy.nvim and that’s to automatically run build scripts after a package is installed or updated.

I basically want to specify this in my specs:

{:src "https://github.com/eraserhd/parinfer-rust"
 :build ["cargo" "build" "--release"]}

Again, we’ll rely on PackChanged for this:

;; Before `vim.pack.add` to capture changes.
(augroup! :plugin_init (au! :PackChanged pack_changed))

The above code uses macros from nvim-laurel to define an autocommand that calls the pack_changed function. That function runs the build step when the package is updated or installed:

(λ pack_changed [event]
  (when (vim.list_contains [:update :install] event.data.kind)
    (execute_build event.data))
  ;; Return false to not remove the autocommand.
  false)

(λ execute_build [pack]
  ;; `?.` will prevent crashing if any field is nil.
  (local build (?. pack :spec :data :build))
  (when build
    (run_build_script build pack)))

To run the scripts I use vim.system with some simple printing:

(λ run_build_script [build pack]
  (local path pack.path)
  (vim.notify (.. "Run `" (vim.inspect build) "` for " pack.spec.name)
              vim.log.levels.INFO)
  (vim.system build {:cwd path}
              (λ [exit_obj]
                (when (not= exit_obj.code 0)
                  ;; If I use `vim.notify` it errors with:
                  ;; vim/_editor.lua:0: E5560: nvim_echo must not be called in a fast event context
                  ;; Simply printing is fine I guess, it doesn't have to be the prettiest solution.
                  (print (vim.inspect build) "failed in" path
                         (vim.inspect exit_obj))))))

This will now allow us to run build scripts like cargo build --release or make after a package is installed or updated. It’s a bit too basic as there’s no visible progress bar for long running builds (Rust, I’m looking at you!) and it doesn’t handle build errors that well but it works well enough I guess.

But what about user commands or requiring a package? For example with nvim-treesitter you’d want to run :TSUpdate after an update, something like this:

{:src "https://github.com/nvim-treesitter/nvim-treesitter"
 :version :main
 :build #(vim.cmd "TSUpdate")}

Let’s try it by allowing functions in the build parameter (and bare strings because why not):

(λ execute_build [pack]
  (local build (?. pack :spec :data :build))
  (when build
    (case (type build)
      ;; We can specify either "make" or ["make"]
      "string" (run_build_script [build] pack)
      "table" (run_build_script build pack)
      ;; Run a callback instead.
      "function" (call_build_cb build pack))))

(λ call_build_cb [build pack]
  (vim.notify (.. "Call build hook for " pack.spec.name) vim.log.levels.INFO)
  (build pack))

If we run this though it doesn’t work:

Error in /home/tree/code/nvim-conf/init.lua..PackChanged Autocommands for "*":
Lua callback: vim/_editor.lua:0: /home/tree/code/nvim-conf/init.lua..PackChanged Autocommands for "*"..script nvim_exec2() called
at PackChanged Autocommands for "*":0, line 1: Vim:E492: Not an editor command: TSUpdate

The problem is that PackChanged is run before the pack is loaded. Maybe we could work around this by calling packadd ourselves, but that would shortcut the lazy loading. In this instance we'd like to run TSUpdate after the pack is loaded, but only if it's been updated or installed, so we don't run it after every restart.

What I did was introduce an after_build parameter to the spec that’s run after load if a PackChanged event was seen before:

{:src "https://github.com/nvim-treesitter/nvim-treesitter"
 :version :main
 :after_build #(vim.cmd "TSUpdate")}

Then in plugins/init.fnl I track changed packs in a global variable vim.g.packs_changed that's updated on PackChanged like so:

;; Capture packs that are updated or installed.
(g! :packs_changed {})

(λ set_pack_changed [name event]
  ;; Maybe there's an easier way of updating a table global...?
  (var packs vim.g.packs_changed)
  (tset packs name event)
  (g! :packs_changed packs))

(λ pack_changed [event]
  (when (vim.list_contains [:update :install] event.data.kind)
    (local pack event.data)
    (set_pack_changed pack.spec.name event)
    (execute_build pack))
  false)

Then we'll call after_build from the after hook we set up before:

(λ transform_spec [spec]
  (case spec
    {}
    (do
      ;; Split keys to vim.pack and rest into `data`.
      ;; ...
      (λ after [args]
        (local pack_changed_event (. vim.g.packs_changed args.name))
        (set_pack_changed args.name false)
        (when spec.setup
          (local pkg (require spec.on_require))
          (pkg.setup spec.setup))
        ;; Run `after_build` scripts if a `PackChanged` event
        ;; was run with `install` or `update`.
        (when (and spec.after_build pack_changed_event)
          (spec.after_build args))
        (when spec.after
          (spec.after args)))
      (set data_args.after after)
      (set pack_args.data data_args)
      pack_args)
    other
    other))

With this we can finally specify build actions such as these:

{:build "make"
 :build ["cargo" "build" "--release"]
 :after_build #(vim.cmd "TSUpdate")}

Some Fennel examples

You've already seen what Fennel code looks like, but what about configuration with Fennel? One of the downsides of moving my configuration from Vimscript to Lua was that simple things such as setting options or basic keymaps became more verbose.

So how does Fennel compare for the simpler, more declarative stuff?

Options

" Vimscript
set relativenumber
set clipboard^=unnamed,unnamedplus
set backupdir=~/.config/nvim/backup
let mapleader=" "

-- Lua
vim.opt.relativenumber = true
vim.opt.clipboard:append({ "unnamed", "unnamedplus" })
vim.opt.backupdir = vim.fn.expand("~/.config/nvim/backup")
vim.g.mapleader = [[ ]]

;; Fennel (with nvim-laurel macros)
(set! :relativenumber true)
(set! :clipboard + ["unnamed" "unnamedplus"])
(set! :backupdir (vim.fn.expand "~/.config/nvim/backup"))
(g! :mapleader " ")

With nvim-laurel macros I think Fennel is decent. Slightly better than Lua but not as convenient as Vimscript.

Keymaps

-- Lua
local map = vim.keymap.set
map("n", "<localleader>D", vim.lsp.buf.declaration,
  { silent = true, buffer = buffer, desc = "Declaration" }
)
map("n", "<leader>ep", function() find_org_file("projects") end,
  { desc = "Org projects" }
)
map("n", "]t", function()
  require("trouble").next({ skip_groups = true, jump = true })
end, {
  desc = "Trouble next",
  silent = true,
})

;; Fennel
(bmap! :n "<localleader>D" vim.lsp.buf.declaration
       {:silent true :desc "Declaration"})
(map! :n "<leader>ep" #(find_org_file "projects")
      {:desc "Org projects"})
(map! :n "]t" #(do
                 (local trouble (require :trouble))
                 (trouble.next {:skip_groups true :jump true}))
      {:silent true :desc "Trouble next"})

Not a huge difference to be honest. I like the #(do_the_thing) shorthand Fennel has for anonymous functions. Having to (sometimes) split up require and method calls on separate lines in Fennel is annoying.

Overriding highlight groups

One example that was a big step up with Fennel is overriding highlight groups. I'm using melange, which is a fantastic and underrated color scheme, but I've collected a fair bit of overrides for it.

In Lua you use nvim_set_hl to add an override, for example like this:

vim.api.nvim_set_hl(0, "@symbol.elixir", { link = "@label" })

Doing this 100 times gets annoying, so I made an override table to accomplish the job:

local overrides = {
  { name = "@symbol.elixir", val = { link = "@label" } },
  { name = "@string.special.symbol.elixir", val = { link = "@label" } },
  { name = "@constant.elixir", val = { link = "Constant" } },
  -- And around 100 other overrides...
}
for _, v in pairs(overrides) do
  vim.api.nvim_set_hl(0, v.name, v.val)
end

In Fennel with the hi! macro this all becomes as simple as:

(hi! "@symbol.elixir" {:link "@label"})
(hi! "@string.special.symbol.elixir" {:link "@label"})
(hi! "@constant.elixir" {:link "Constant"})

Autocommands

Here are some autocommands to enable cursorline only in the currently active window (while skipping buffers such as the dashboard):

-- Lua
local augroup = vim.api.nvim_create_augroup
local autocmd = vim.api.nvim_create_autocmd
local group = augroup("my-autocmds", { clear = true })
autocmd({ "VimEnter", "WinEnter", "BufWinEnter" }, {
  group = group,
  callback = function(x)
    if string.len(x.file) > 0 then
      vim.opt_local.cursorline = true
    end
  end,
})
autocmd("WinLeave", {
  group = group,
  callback = function()
    vim.opt_local.cursorline = false
  end,
})

;; Fennel
(augroup! :my-autocmds
  (au! [:VimEnter :WinEnter :BufWinEnter]
       #(do
          (when (> (string.len $1.file) 0)
            (let! :opt_local :cursorline true))
          false))
  (au! :WinLeave #(do
                    (let! :opt_local :cursorline false)
                    false)))

Plugin specs

One thing I like more in Lua compared to Fennel is how readable tables are. The Fennel formatter fnlfmt might be partly to blame as it has a tendency to use very little whitespace. Regardless, I prefer this Lua code:

return {
  "https://github.com/stevearc/conform.nvim",
  { src = "https://github.com/mason-org/mason.nvim", dep_of = "mason-lspconfig.nvim" },
  { src = "https://github.com/neovim/nvim-lspconfig", dep_of = "mason-lspconfig.nvim" },
  "https://github.com/mason-org/mason-lspconfig.nvim",
  {
    src = "https://github.com/nvim-treesitter/nvim-treesitter",
    version = "main",
    after = function()
      vim.cmd("TSUpdate")
    end,
  },
}

Over this corresponding Fennel code:

["https://github.com/stevearc/conform.nvim"
 {:src "https://github.com/mason-org/mason.nvim" :dep_of :mason-lspconfig.nvim}
 {:src "https://github.com/neovim/nvim-lspconfig" :dep_of :mason-lspconfig.nvim}
 "https://github.com/mason-org/mason-lspconfig.nvim"
 {:src "https://github.com/nvim-treesitter/nvim-treesitter"
  :version :main
  :after #(vim.cmd "TSUpdate")}]

To me the Lua code is for some reason easier to read.

Similarly I don’t have a problem with this lazy.nvim spec:

return {
  "folke/snacks.nvim",
  priority = 1000,
  lazy = false,
  opts = {
    indent = {
      indent = {
        enabled = true,
        char = "┆",
      },
      scope = {
        enabled = true,
        only_current = true,
      },
    },
    scroll = {
      animate = {
        duration = { step = 15, total = 150 },
      },
    },
    explorer = {},
  },
}

But with this new Fennel spec I use, even though it's simpler in some ways, it's harder for me to quickly see which table the keys belong to:

{:src "https://github.com/folke/snacks.nvim"
 :on_require :snacks
 :lazy false
 :setup {:indent {:indent {:enabled true :char "┆"}
                  :scope {:enabled true :only_current true}}
         :scroll {:animate {:duration {:step 15 :total 150}}}
         :explorer {}}}

Maybe it’s something you’ll get used to?

Notable plugin updates

Neovim is moving quickly and I’ve had a bit of catching up to do in the plugin department. I won’t bore you with an exhaustive list; just a few highlights.

Native undotree

I've been using undotree for a long time and it's excellent. This feature was recently merged into Neovim:

;; It's optional so we need to use packadd to activate the plugin:
(vim.cmd "packadd nvim.undotree")
;; Then we can add a keymap to open it:
(map! :n "<leader>u" #(: (require :undotree) :open {:command "topleft 30vnew"}))

Simplified LSP config

Neovim routinely gets shit on for LSPs being so hard to set up. Yes, it could probably be easier, but Neovim has recently made some changes to streamline LSP configuration and it's not nearly as involved as it used to be.

Here's how my base config looks:

(require-macros :macros)

;; Convenient way of installing LSPs and other tools.
(local mason (require :mason))
(mason.setup)

;; Convenient way of automatically enabling LSPs installed via Mason.
(local mason-lspconfig (require :mason-lspconfig))
(mason-lspconfig.setup {:automatic_enable true})

;; Show diagnostics as virtual lines on the current line.
;; It's pretty cool actually, you should try it out.
(vim.diagnostic.config {:virtual_text false
                        :severity_sort true
                        :virtual_lines {:current_line true}})

;; I like inlay hints.
(vim.lsp.inlay_hint.enable true)

(augroup! :my-lsps
  (au! :LspAttach
       (λ [_]
         (local snacks (require :snacks))
         (bmap! :n "<localleader>D" snacks.picker.lsp_declarations
                {:silent true :desc "Declaration"})
         (bmap! :n "<localleader>l"
                #(vim.diagnostic.open_float {:focusable false})
                {:silent true :desc "Diagnostics"})
         ;; etc

I also use nvim-lspconfig but it doesn’t do anything magical (anymore). It’s basically a collection of LSP configs, so I don’t have to fill my config with things like this:

(vim.lsp.config "expert"
                {:cmd ["expert"]
                 :root_markers ["mix.exs" ".git"]
                 :filetypes ["elixir" "eelixir" "heex"]})
(vim.lsp.enable "expert")

If you don’t want to change the keymaps (Neovim comes with defaults that I personally dislike) or customize specific LSPs then there’s not that much left. Mason is also totally optional and if you want to manage your LSPs outside of Neovim you can totally do that. The only thing missing is autocomplete, which blink.cmp provides out of the box.
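As a sketch (not from my config), a blink.cmp spec in the format from earlier might look like this; the version range and keymap preset are assumptions taken from blink.cmp's README, so adjust as needed:

```fennel
{:src "https://github.com/Saghen/blink.cmp"
 ;; Release tags ship a prebuilt fuzzy matcher, avoiding a Rust build step.
 :version (vim.version.range "1.*")
 :on_require "blink.cmp"
 :setup {:keymap {:preset "default"}}}
```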

Automatically install and enable treesitter grammars

Another thing that has changed since my last config overhaul is that nvim-treesitter has been rewritten and is now a much simpler plugin. The new version lives on the main branch, the old archived one on master, and the rewrite contains a bunch of breaking changes.

For example, it no longer supports installing and activating grammars automatically. I think I saw a plugin for that somewhere but here’s some Fennel code that sets it up:

(require-macros :macros)
(local nvim-treesitter (require :nvim-treesitter))

;; Ignore auto install for these filetypes:
(local ignored_ft [])

(augroup! :treesitter
  (au! :FileType
       (λ [args]
         (local bufnr args.buf)
         (local ft args.match)
         ;; Auto install grammars unless explicitly ignored.
         (when (not (vim.list_contains ignored_ft ft))
           (: (nvim-treesitter.install ft) :await
              (λ []
                ;; Enable highlight only if there's an installed grammar.
                (local installed (nvim-treesitter.get_installed))
                (when (and (vim.api.nvim_buf_is_loaded bufnr)
                           (vim.list_contains installed ft))
                  (vim.treesitter.start bufnr))))))))

If you use nvim-treesitter-textobjects (which you should) remember to migrate to the main branch there too.

Some new fun plugins

  1. fyler.nvim, edit a file explorer like a buffer

    oil.nvim is a great plugin that allows you to manage files by simply editing text. fyler.nvim takes it to the next level by combining it with a tree-style file explorer.

  2. blink.cmp, faster autocomplete

    I’ve been using nvim-cmp as my completion plugin but I migrated to blink.cmp as it’s faster and more actively maintained. It’s too bad that it broke my custom nvim-cmp source for my blog but it wasn’t too hard to migrate.

  3. snacks.nvim, a better picker

    telescope.nvim has been a solid picker but it's no longer actively developed, and snacks.nvim is the replacement I settled on.

    I tried fff.nvim for file picking but surprisingly it felt really slow compared to snacks.nvim. fzf-lua is another great alternative that I haven’t given enough attention to.

  4. grug-far.nvim, global query replace

    I’ve been happy with Neovim’s regular %s/foo/bar for single files (aided by search-replace.nvim for easy population). But query replace in multiple files has always felt lacking. I used to use telescope.nvim to populate the quickfix window and then use replacer.nvim to make it editable, updating multiple files.

    It worked but was a bit annoying so now I’m trying grug-far.nvim as a more “over engineered” solution. I haven’t used it that long to say for sure but I’m hopeful.

Ending thoughts

It would be better to gradually evolve your Neovim config over time instead of doing these large rewrites. But afterwards it feels pretty good as I can once more try to claim with a straight face that I know what’s in my configuration and what it’s doing.

The vim.pack migration was more painful than I had expected. It’s still an experimental nightly feature and it’s missing a lot of nice features that lazy.nvim has. I’ll keep using vim.pack as I think I’ve gotten it to a state of good enough but I’m looking forward to vim.pack becoming more feature complete.

Fennel is fun to write in and I will keep using it where I can. To be honest though, for basic configuration I was expecting Fennel to make a bigger difference than it did. It’s nicer for sure but it’s nothing revolutionary.

Then again, it's the little things in life that matter.