I started out with PHP using the Kohana Framework and I still have fond memories of their excellent documentation. Although I had figured out how to create a website, it never graduated to a real blog.
2024-11-26 08:00:00
This is the keyboard layout I’m using for my custom keyboard that I generated, printed, and hand-wired. It’s a minimalistic keyboard of 35 keys and features an integrated trackball on the right-hand side.
The keyboard layout started out as a direct copy of the T-34 keyboard layout, with some small modifications compared to the 34-key keyboard that T-34 was designed for:
While the layout has diverged since then, the design philosophy from the original T-34 post still holds true, and I recommend reading it as it may explain why the layout looks the way it does.
I use quite a number of special features for the keys and I’ve tried to color-code them according to the legend above.
Layers are super important for smaller keyboards and I use them a ton.
Z and Q, together with a bunch of other keys, are on combos.
F2, F12 and FUN are just extras and aren’t in a comfortable enough position to warrant anything more common.
When I want to write Swedish I activate this layer that replaces ()_ with åäö, or I use combos from any layer.
I typically use combos to output symbols (following the same layout pattern as the symbols layer).
The symbols layer is mostly used to roll symbol pairs like {} or #[.
Some common symbol sequences (like ->, !=, or ```) exist as combos and others as long press.
While I can activate the number layer persistently (using leader sequences) I typically use combos for single digits (like 0), or NUMWORD for larger numbers (like 1984).
NUMWORD makes the number layer smart, so it will deactivate when certain keys are pressed.
It’s used to type numbers in text or code and for relative movement in Vim, where 17J would move 17 lines down and then turn off the number layer. Jumping directly to a line in Vim with 12G is also made convenient.
If I want to enter the layer without it turning off I can either use leader sequences to activate it persistently or hold the NUMWORD combo (hold both thumbs). The layer won’t release until both thumb keys are released, so Space can be tapped with the left thumb without leaving the number layer.
@u is there to easily activate macros in Vim. For example, 7@u in the number layer would run the u macro 7 times and then turn off NUMWORD.
DPI can be lowered and raised at runtime.
Gui-W, Gui-E and Gui-R are used to switch between monitors and Gui-J/Gui-K to switch windows in xmonad.
Shift + Left Mouse can be used to drag, Ctrl + A to select, and Ctrl + C to copy (on long press).
Back/Fwd mouse buttons go backwards and forwards in history.
Ctrl + arrow is used to switch windows in Vim.
The layer is activated by holding Space and then holding the right thumb key (WNAV).
This layer exists for the rare occasions I want to use all the arrow keys with the left hand instead of the right.
This is used for all window and workspace management in xmonad. Some common operations are also on the navigation layer.
Auto shift works and can be used to send a window to another workspace (Gui + Shift + 2).
This is purely to enable window switching using Alt-Tab and Ctrl-Alt-Tab, without releasing Alt.
The dead keys add diacritics to any letter. For example, to get é you can use the dead key ´ then e, and the operating system will merge them together. (É also exists as a combo.)
I typically use long press for shift and combos for other modifiers; this layer is a fallback for when those aren’t enough (the layer is mostly used for Right Alt).
Combos are another fantastic tool that I (ab)use a lot. Simply put, a combo lets you press multiple keys at once to act as another key—very useful for smaller layouts.
These combos are made by keys next to each other, either horizontally (pressed with two fingers) or vertically (pressed with one finger in the middle of two keys).
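For reference, defining combos in QMK is pleasantly compact. This isn’t my actual keymap, just a minimal sketch of the API (it assumes COMBO_ENABLE = yes in rules.mk, and the key positions and the SAVE_VIM keycode are made up for the example):

#include QMK_KEYBOARD_H
#include "keymap_swedish.h"

enum custom_keycodes {
    SAVE_VIM = SAFE_RANGE, // Custom keycode handled in process_record_user.
};

// Each combo lists the keys that trigger it, terminated by COMBO_END.
const uint16_t PROGMEM escape_combo[] = {SE_T, SE_H, COMBO_END};
const uint16_t PROGMEM save_vim_combo[] = {SE_S, SE_T, SE_H, SE_K, COMBO_END};

combo_t key_combos[] = {
    COMBO(escape_combo, KC_ESC),     // Two neighboring keys act as Escape.
    COMBO(save_vim_combo, SAVE_VIM), // A 4-key combo for a custom keycode.
};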
Escape activates the symbols layer, allowing me to output [] easily.
vsplit splits a window vertically in Vim, hsplit splits it horizontally, and Close Window closes a window in Vim (<C-w>q).
Clear resets all states: sets the base layer, releases modifiers, stops CAPSWORD and NUMWORD, and clears other persistent states.
Ctrl + Shift + M is the shortcut to mute/unmute in Teams.
SWE activates the Swedish layer, and if prefixed with ()_ it will replace that with åäö and vice versa. So for example if I typed hall( I would press SWE to get hallå, with the Swedish layer activated.
Ctrl W is used to close tabs in Firefox.
Save Vim is a 4-key combo that saves the buffer in Vim.
These split combos use the ring and index fingers.
T + A once activates CAPSWORD, tapping again makes it persistent (CAPS LOCK), and a third tap deactivates CAPS LOCK.
Space + E activates NUMWORD and tapping them again activates the number layer persistently.
The repeat key works with the above, making them easier to double-tap.
_ and -.
I have a bunch of 2-key thumb + key combos:
The logic here is that same-side thumb + key = symbol and opposite-side thumb + key = digit, following the placements of the numbers, symbols and Swedish layers. They’re used when I want to type a single character without having to activate a layer first.
I have similar combos for the function keys.
The keycode QK_BOOT enters boot mode for the microcontroller connected via USB, making it easy to update the keymap on the keyboard.
These two 5-key combos (one for each half) are almost impossible to trigger accidentally while being easily accessible.
While layers and combos are the two main features I use, QMK has a lot of other nifty features (and you can roll your own implementations too).
Most keys have a different behaviour when tapped compared to a long press. Most commonly I use this to produce shifted keys (called auto shift).
So tapping the A key will output a as normal, and on a long press A will appear instead.
There are a bunch of special cases as well (many on top of combos):
Tap | Long press |
---|---|
_ < > / \ # | Double, e.g. __ |
" ' = ` 0 . | Triple, e.g. """ |
| & = | Double with spaces, e.g. || |
! | != (with spaces) |
? | {:?} |
# | {:#?} |
% | %{} |
( [ { | Close and move cursor between |
@ | @u (paired with qu combo for Vim macro execution) |
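My implementation is tied into my own tap/long-press handling, but QMK’s Auto Shift “custom shifted values” hooks can express the same idea. A rough sketch (the key choices are illustrative, and AUTO_SHIFT_ENABLE = yes is assumed):

#include QMK_KEYBOARD_H
#include "keymap_swedish.h"
#include "sendstring_swedish.h" // So SEND_STRING respects the Swedish layout.

bool get_custom_auto_shifted_key(uint16_t keycode, keyrecord_t *record) {
    switch (keycode) {
        case SE_UNDS: // Long press doubles the key.
        case SE_EXLM: // Long press sends " != ".
            return true;
        default:
            return false;
    }
}

void autoshift_press_user(uint16_t keycode, bool shifted, keyrecord_t *record) {
    switch (keycode) {
        case SE_UNDS:
            if (shifted) {
                SEND_STRING("__");
            } else {
                tap_code16(SE_UNDS);
            }
            break;
        case SE_EXLM:
            if (shifted) {
                SEND_STRING(" != ");
            } else {
                tap_code16(SE_EXLM);
            }
            break;
    }
}

void autoshift_release_user(uint16_t keycode, bool shifted, keyrecord_t *record) {
    // Nothing is registered above, so there is nothing to release.
}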
I use the combo l + ) as the leader key.
This will wait for a sequence of key presses (in contrast to combos where keys must be pressed at the same time).
I use this with mnemonics for rarely used outputs:
Leader sequence | Action |
---|---|
l + ) , c | Caps lock |
l + ) , s | Swedish input in Linux (mapped in xmonad) |
l + ) , t , n | Toggle Number layer |
l + ) , t , s | Toggle Symbols layer |
l + ) , t , f | Toggle Function layer |
l + ) , t , c | Toggle Caps lock escape swap |
l + ) , Esc | Ctrl Shift Escape |
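My leader key is a custom implementation triggered by the l + ) combo, but QMK’s built-in Leader Key feature expresses the same kind of sequences. A sketch (the _NUM layer name and the chosen sequences are just examples):

void leader_end_user(void) {
    if (leader_sequence_one_key(SE_C)) {
        tap_code(KC_CAPS);            // Caps lock
    } else if (leader_sequence_two_keys(SE_T, SE_N)) {
        layer_invert(_NUM);           // Toggle the number layer
    } else if (leader_sequence_one_key(KC_ESC)) {
        tap_code16(C(S(KC_ESC)));     // Ctrl Shift Escape
    }
}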
CAPSWORD is a “smart caps lock”. It works like a regular caps lock, except it automatically turns off after certain keys are typed (most commonly space).
It will not turn off on letters, numbers, _, -, Backspace and the Repeat keys.
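My CAPSWORD is a custom implementation, but QMK’s built-in Caps Word has a hook that captures exactly this rule. A sketch adapted from QMK’s documentation (the custom Repeat key is left out here):

bool caps_word_press_user(uint16_t keycode) {
    switch (keycode) {
        // Keycodes that continue CAPSWORD, with shift applied.
        case KC_A ... KC_Z:
            add_weak_mods(MOD_BIT(KC_LSFT)); // Apply shift to the next key.
            return true;

        // Keycodes that continue CAPSWORD, without shifting.
        case KC_1 ... KC_0:
        case KC_BSPC:
        case SE_MINS:
        case SE_UNDS:
            return true;

        default:
            return false; // Deactivate CAPSWORD.
    }
}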
NUMWORD is a “smart layer”. It’s similar to CAPSWORD, except it activates and then turns off the numbers layer instead of caps lock.
It will not turn off on these keys: 0-9 % / + * - _ . , : = x Backspace Enter and the Repeat keys.
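NUMWORD is likewise custom, but the core of a “smart layer” is small. A sketch of the idea (not my exact code; _NUM is an assumed layer name and the flag is set wherever NUMWORD is activated):

static bool num_word_active = false;

static void num_word_task(uint16_t keycode) {
    if (!num_word_active) return;

    switch (keycode) {
        // Keys that keep NUMWORD going.
        case KC_1 ... KC_0: // digits
        case SE_PERC:
        case SE_SLSH:
        case SE_PLUS:
        case SE_ASTR:
        case SE_MINS:
        case SE_UNDS:
        case SE_DOT:
        case SE_COMM:
        case SE_COLN:
        case SE_EQL:
        case SE_X:
        case KC_BSPC:
        case KC_ENT:
            break;
        default:
            // Anything else turns the number layer off again.
            num_word_active = false;
            layer_off(_NUM);
    }
}

bool process_record_user(uint16_t keycode, keyrecord_t *record) {
    if (record->event.pressed) {
        num_word_task(keycode);
    }
    return true;
}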
The repeat key simply repeats the previous key. So to type fall I can type f a l Repeat, using four different fingers instead of pressing l twice. It can also repeat things like Ctrl-c or Delete, and unlike regular keys that use auto shift, the Repeat key can be held.
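QMK has since grown a built-in Repeat Key, but the basic idea is easy to sketch: remember the last keycode and replay it. (REPEAT is a custom keycode here, and in practice this logic shares process_record_user with everything else.)

enum custom_keycodes {
    REPEAT = SAFE_RANGE,
};

static uint16_t last_keycode = KC_NO;

bool process_record_user(uint16_t keycode, keyrecord_t *record) {
    if (keycode == REPEAT) {
        // Register/unregister so that holding Repeat holds the repeated key.
        if (last_keycode != KC_NO) {
            if (record->event.pressed) {
                register_code16(last_keycode);
            } else {
                unregister_code16(last_keycode);
            }
        }
        return false; // Swallow the Repeat key itself.
    }

    if (record->event.pressed) {
        last_keycode = keycode;
    }
    return true;
}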
The trackball is normally configured to move the mouse as a regular trackball.
There are different modes that alter the behavior of the trackball:
Space is held (the mouse moves slower when the navigation layer is active).
MOD combo is held (the mouse moves faster).
SYM combo is held.
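The mode switching itself boils down to scaling the sensor report depending on what is held. A sketch of the slow mode (the _NAV layer name and the divisor are made up, and a real implementation would also accumulate the rounding remainder):

report_mouse_t pointing_device_task_user(report_mouse_t mouse_report) {
    if (layer_state_is(_NAV)) {
        // Slow the mouse down while the navigation layer is active.
        mouse_report.x /= 4;
        mouse_report.y /= 4;
    }
    return mouse_report;
}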
Read the T-34 series for the design process and motivations of my other keyboard layout (it’s the same layout with minor refinements and additions).
See the post Building my ultimate keyboard for how I designed and built the keyboard I’m using this layout with.
For implementation details and the most up-to-date reference check out the layout’s QMK source code.
Copied the T-34 layout and adapted it for the new keyboard by adding a mouse layer, removing the shortcut layer, and changing the activation of the specials layer.
Moved - to an angled combo, moved the WIN key to the top row, and moved % to the home row and ! to the bottom row.
Reworked the mouse layer and used a more advanced triggering mechanism to be more explicit about when the layer is turned on and off.
Moved - back to its original position and placed % on the angled combo.
This makes it easier to type - and _ for the languages that use kebab-case.
Reworked the navigation layer to keep the original positions for PgUp, PgDn, and Tab. To allow this I moved the mouse click to the index finger and demoted up/down to the top row.
2024-11-26 08:00:00
What comes to mind when you see the description “the ultimate keyboard”?
There are many keyboards in this world; here are some that might fit the “ultimate” moniker:
Some even have “ultimate” in their name, although I’ll assert that they’re far from ultimate.
Any man who must say, “I am the King”, is no true king.
I’ll go one step further to say that no keyboard is universally the ultimate because it’s impossible to agree on how to rank different keyboards. For example, while I personally prefer a split keyboard, you might not. Some people have very long fingers and some have very short fingers, making some layouts more preferable. Others may not even have 10 fingers (or both hands), requiring more drastic modifications.
If an ultimate keyboard exists, it differs from person to person. This is my attempt to build my ultimate keyboard.
To me, the ultimate keyboard should have these features:
Should be split to support a more natural typing position.
Really the biggest ergonomic leap in my opinion.
Customized for my own fingers and typing eccentricities.
Column stagger, curvatures and tenting are features I think I want but they need to be tuned, probably by trial-and-error. The position of the thumb keys is another sticking point that the other keyboards I’ve tried have failed to get just right.
Have an integrated trackball or trackpad.
This way I don’t have to move my hand so far and I can free up some valuable desk space. It shouldn’t be operated with my thumb due to my RSI.
Contain the keys I need but no more.
I like smaller keyboards and I’ve been very happy with my custom keyboard layout that only has 34 keys. Some modifications are fine of course but for the most part I want to be able to use the same layout on both the Ferris and my new keyboard.
Having looked around, I probably want something similar to a Dactyl / Dactyl Manuform (many variants exist). They’re keyboards you generate from parameters (such as the number of rows and columns and the amount of curvature). I’ve always wanted to try one and now, with a 3D printer, I can.
When looking for a generator I stumbled upon the Cosmos keyboard configurator and I want to gush about it a little because it’s excellent.
It’s excellent because it allows a clueless sod like me to configure a keyboard the way I want to and it has an impressive feature list:
An Expert mode that allows you to customize anything via JavaScript.
Export to .stl for easy printing or .step that you can import to CAD.
Here’s a small snippet of how the code in Expert mode might look:
// (The numbers here are placeholders; see the linked Cosmos
//  configuration below for the actual values.)
const curvature = {
  curvatureOfColumn: 15,
  curvatureOfRow: 5,
  spacingOfRows: 18, // 18x19 Choc spacing
  spacingOfColumns: 19,
  arc: 0,
};

/**
 * Useful for setting a different curvature
 * for the pinky keys.
 */
const pinkyCurvature = {
  ...curvature,
  curvatureOfColumn: 15,
};

/**
 * The plane used to position the upper keys.
 * It's rotated by the tenting and x rotation
 * then translated by the z offset.
 */
const upperPlane = new Trsf()
  // `20` specifies the tenting angle.
  .rotate(20, [0, 0, 0], [0, 1, 0])
  .rotate(5, [0, 0, 0], [1, 0, 0])
  .translate(0, 0, 10);
The entire state of the keyboard is also stored in the url, so I can easily share my config by including a link: Cosmos reference of the final keyboard configuration. (Barring any breaking changes in the tool of course…)
Even with a keyboard configurator I needed a way to start. I already have a layout that I really like so I wasn’t starting from nothing. These were the important parts going into the design process:
A 3x5 grid with 1-2 thumb keys (in practice one thumb key is enough).
If you question why I want to build such a small keyboard I’ll redirect you to the discussion in The T-34 keyboard layout post.
Integrated trackball on the right-hand side.
Choc switches.
One of the major decisions with a keyboard is what kind of switch to use. While MX-style switches are the most common I personally really love Choc switches for a couple of reasons:
While a low profile switch is more important for a flat keyboard, not a tented and curved one like I’m building now, the flatter keycaps and the switches being closer together is crucial for pressing two keys with one finger:
The low-actuation force is also more comfortable to me as it helps reduce the strain on my fingers, and makes combos (pressing several switches at once) generally more pleasant.
A 3D printer isn’t enough on its own; to build a working keyboard you also need a bunch of hardware:
Two microcontrollers.
I got the Liatris microcontroller as it has enough pins to connect a trackball sensor and it supports QMK.
Switches
What kind of Choc switch should I use?
Linear, tactile, or clicky?
Exactly how heavy should they be?
Should they be silent?
I wasn’t sure so I ordered a sampling of different switches to try.
For the final keyboard I used the Ambients silent Nocturnal (linear / 20gf) switches, where the deciding factor was getting as light a switch as possible. (I’ve previously used modded 15gf switches, which were even better, but I couldn’t find a way to buy them.)
Keycaps
Keycaps aren’t only for looking cool. A convex keycap for the thumb button instead of the standard concave one makes it much more comfortable:
I also got keycaps for the index row with these small homing notches to help my fingers more easily find the home row.
A pair of TRRS connectors and a TRRS cable.
A Trackball with a matching sensor.
I decided to pick up this PMW3389 sensor (it was recommended in the keyboard configurator) and a red 34 mm trackball from Amazon.
Filament for the 3D printed pieces.
I ended up settling on the PolyTerra PLA Army Purple for the case but I used a bunch of different filament during the prototype phase.
Diodes, screws, heatset inserts, and cable to do the wiring.
When you’re trying to design something like a custom keyboard I think you need to go through a bunch of trial-and-error until you find something that fits.
Here’s a short rundown of some of the significant revisions I went through, mostly to illustrate that it’s very much an iterative process.
For my first print I mostly wanted to print it out and test how a keyboard with a standard curvature felt. I also wanted to try to place a trackball somewhere.
I ended up removing a regular thumb key (I’ve used two thumb keys with my keyboard layout) to make it fit and I added a “mouse thumb key” that I plan to use as a left mouse button when I’m operating the trackball.
It was tricky to place the trackball as I wanted to operate it with my index + middle finger, not my thumb.
Another tweak I made was to reduce the spacing between the keys to be closer to the Choc spacing. Choc spacing seems to be 18.6 x 17.6 mm, but I used 19 x 18 mm spacing—the attraction to round numbers is real.
Most of the keys on the keyboard felt fine but I had one major annoyance: I have a habit of using the ring finger to press the top right key instead of the pinky but with the curvature on the keyboard this just wasn’t possible anymore.
You might wonder, why don’t I just create a new habit and use the pinky as you’re supposed to? The simple answer is that I hate it. To my fingers that feels beyond terrible and I’d rather remove the key and only have two keys in the outermost column. As it happens, pressing the key with my ring finger (on a flat keyboard) feels good so I’d rather adjust the key than remove it.
I also added an extra mouse thumb key and lowered the pinky column a bit.
Pressing p with my ring finger feels great.
Pressing the normal thumb key feels awful because the mouse thumb keys are in the way when I relax my hand.
Adjustments made:
Although I said I wanted a 3x5 grid, the generator had an easy option to include a small bottom row with 2 extra keys (for the ring and middle finger) that I wanted to try out for the left side. They’re… okay, I guess. Not crazy uncomfortable but not quite comfortable enough that I want to have common keys there.
At this point the Beta V3 of the configurator is out, and with it several improvements, most notably:
Both halves can be configured at the same time.
Can go between the Advanced and Expert tabs! WOW!
I had to manually keep track of the JavaScript changes I made, and update them manually if I wanted to make a change in the UI… But no more!
I had to redo most of the configuration and I think I made some minor changes that I didn’t keep track of, but I made two larger ones:
When I started this project Cosmos only supported a single type of trackball mount: roller bearings. They worked quite poorly for me as the ball was spinning well in one direction but poorly in others.
Luckily new options were added and as I’m writing this there are four different ways you can mount the trackball:
Because I was burned with the bad experience (and I didn’t want to rebuild the keyboard yet again) I made small prototypes of the three different options:
The BTUs had the least friction and it felt really easy to spin the ball but they were also distressingly loud. The static ball bearings had more friction than the BTUs and less than the roller bearings while being completely silent, so I chose to go with the ball bearings.
While they don’t feel nearly as good as the Kensington SlimBlade they’re decent enough. I try not to use the mouse that much and having the trackball so much closer is worth it compared to having a separate trackball unit besides the keyboard.
After having used the keyboard for real I realized that the three keys dedicated to mouse buttons would have to go. There were two major issues with them:
So I had them removed and I rewired the right half for the 3rd time. Sigh.
I think the lesson is that it’s not enough to print a prototype and press switches pretending to type, you have to build and use the keyboard a bunch before you can evaluate some of the design decisions.
While the case is the biggest and most important part of this kind of keyboard, there are a few other parts I had to print to complete the keyboard.
The wrist rests didn’t come with any sort of attachment to the case, so they just always drifted away. I tried to combat this by gluing magnets inside the case and outside the wrist rest, making them stick together just enough to stay together during normal use, while being easily removable.
Despite my efforts, I haven’t been using the printed rests as I reverted to the ”squishy” ones I’ve used before:
The printed felt too uncomfortable and I couldn’t find an angle I liked more than the gel rests. Oh well.
I use a printed holder to fasten the microcontroller to the case.
I had to manually make a hole to make the Boot button accessible, which was easily accomplished when slicing the model.
One problem with the Ferris was that it would sometimes slip on the table. I counteracted this by using an old Netrunner playmat but I wanted another solution.
The keyboard is generated with a bottom plate that’s used to hide and protect the internals. I printed it in TPU, a flexible and rubbery material, that gives enough grip to stay relatively still when I’m typing.
One of the first things you need to do when wiring up a custom keyboard is to plan out a matrix. I guess you could wire every switch directly to the controller too, but that’s not feasible if you have a larger number of keys, so the usual thing is to use a matrix.
What a matrix means is that you wire together all keys in a row and connect that to a pin on the controller, and do the same with the columns.
It might look something like this:
You should also use diodes in the matrix (for either rows or columns, I chose the rows). Pay attention to the diode direction.
The wiring is horrible, I know.
I only lost one microcontroller due to a short… With my wiring prowess I consider that a success!
Controller pin | Connection |
---|---|
1 | Handedness (VCC on the left keyboard and GND on the right) |
2 | TRRS data |
3, 4, 5, 6, 7 | Matrix columns |
20, 22, 26, 27 | Matrix rows |
13 (CS1) | Trackball SS |
14 (SCK1) | Trackball SCK |
15 (TX1) | Trackball MOSI |
16 (RX1) | Trackball MISO |
The QMK CLI has the qmk new-keyboard command that helps you get started.
I couldn’t get the generated template to work for me, so I copied settings from an existing keyboard with RP2040 support.
I’ll try to hit on the most important parts of the config, take a look at the source code for all details.
The folder structure for the keyboard looks like this:
cybershard
├── keyboard.json
├── rules.mk
├── halconf.h
├── mcuconf.h
└── keymaps
└── default
├── config.h
├── keymap.c
├── rules.mk
└── ...
(Cybershard is the name I eventually settled on for the keyboard.)
The most important part is keyboard.json that defines (almost) everything we need for a new keyboard in QMK.
First you need to set the processor, bootloader, and usb values. The Liatris microcontroller uses the RP2040 MCU, and I just picked some vendor and product identifiers:
{
    "processor": "RP2040",
    "bootloader": "rp2040",
    "usb": {
        // The identifiers here are placeholders; pick your own.
        "vid": "0xFEED",
        "pid": "0x0001",
        "device_version": "0.0.1"
    }
}
Then we need to define the matrix (with the pins we soldered) and the layout (how we’ll configure the keymap in keymap.c):
    "matrix_pins": {
        // We need to use a `GP` prefix for the pins.
        "cols": ["GP3", "GP4", "GP5", "GP6", "GP7"],
        "rows": ["GP20", "GP22", "GP26", "GP27"]
    },
    "layouts": {
        "LAYOUT": {
            "layout": [
                // (The matrix coordinates below are illustrative;
                //  they depend on how you wired the matrix.)
                // First physical row
                { "matrix": [0, 0], "x": 0, "y": 0 },
                { "matrix": [0, 1], "x": 1, "y": 0 },
                { "matrix": [0, 2], "x": 2, "y": 0 },
                { "matrix": [0, 3], "x": 3, "y": 0 },
                { "matrix": [0, 4], "x": 4, "y": 0 },
                // Second row
                { "matrix": [1, 0], "x": 0, "y": 1 },
                { "matrix": [1, 1], "x": 1, "y": 1 },
                { "matrix": [1, 2], "x": 2, "y": 1 },
                { "matrix": [1, 3], "x": 3, "y": 1 },
                { "matrix": [1, 4], "x": 4, "y": 1 },
                // etc...
            ]
        }
    }
}
Note that we can pick whatever physical pins we want as we can move around and configure them in software.
The LAYOUT macro is what we use in keymap.c to define our keymap. When defining it we can choose to skip certain keys and reorganize it to be easier to define; for example, there’s no switch at 0,0 in my keyboard so I skip that.
The above LAYOUT can then be used like this:
// (_BASE is whatever you call your base layer.)
[_BASE] = LAYOUT(
    SE_J,    SE_C, SE_Y, SE_F, SE_P,
    SE_R,    SE_S, SE_T, SE_H, SE_K,
    SE_COMM, SE_V, SE_G, SE_D, SE_B,
    SE_A,    SE_B,
    // Thumb keys
    FUN_CLR, MT_SPC
),
With the above setup we should be able to flash the keyboard by first entering the boot loader and running:
qmk flash -kb cybershard -km default
Now the process of updating the firmware is quite nice and unless I screw up I don’t need to connect another keyboard to do it.
Run qmk flash (it will wait until it finds a flashable target).
Press the QK_BOOT combo (the keyboard becomes unresponsive).
To get the split keyboard feature to work I had to set the SERIAL_DRIVER option in rules.mk:
SERIAL_DRIVER = vendor
And add the split configuration to keyboard.json and modify the LAYOUT macro:
    // (Pin numbers follow the table above; the exact JSON keys may
    //  differ slightly between QMK versions.)
    "split": {
        "enabled": true,
        "handedness": {
            // The pin that signals if the current controller is the left (high)
            // or right (low) controller.
            "pin": "GP1"
        },
        // The TRRS data pin.
        "soft_serial_pin": "GP2",
        "matrix_pins": {
            // We can override the pins for the right controller.
            // Note that GP26 and GP27 are swapped compared to the left side
            // due to a mistake I made when soldering.
            "right": {
                "cols": ["GP3", "GP4", "GP5", "GP6", "GP7"],
                "rows": ["GP20", "GP22", "GP27", "GP26"]
            }
        },
        "transport": {
            "sync": {
                // We need to sync the matrix state to allow combos, mods, and
                // other stuff to work.
                "matrix_state": true
            }
        }
    },
    "layouts": {
        "LAYOUT": {
            "layout": [
                // The rows 0 to 3 specify rows on the left side and
                // 4 to 7 the rows on the right side.
                // These 5 keys are the first row on the left side.
                { "matrix": [0, 0], "x": 0, "y": 0 },
                { "matrix": [0, 1], "x": 1, "y": 0 },
                { "matrix": [0, 2], "x": 2, "y": 0 },
                { "matrix": [0, 3], "x": 3, "y": 0 },
                { "matrix": [0, 4], "x": 4, "y": 0 },
                // These 5 keys are the first row on the right side.
                { "matrix": [4, 0], "x": 8, "y": 0 },
                { "matrix": [4, 1], "x": 9, "y": 0 },
                { "matrix": [4, 2], "x": 10, "y": 0 },
                { "matrix": [4, 3], "x": 11, "y": 0 },
                { "matrix": [4, 4], "x": 12, "y": 0 },
                // etc..
            ]
        }
    }
}
The LAYOUT macro is just a function with many arguments, but with the right order it can be formatted to look similar to the physical keyboard. For example, this is how the base layer of my keyboard could look:
[_BASE] = LAYOUT(
    // Left side                      // Right side
    SE_J,    SE_C, SE_Y, SE_F, SE_P,  SE_X,    SE_W, SE_O,    SE_U,    SE_DOT,
    SE_R,    SE_S, SE_T, SE_H, SE_K,  SE_M,    SE_N, SE_A,    SE_I,    REPEAT,
    SE_COMM, SE_V, SE_G, SE_D, SE_B,  SE_SLSH, SE_L, SE_LPRN, SE_RPRN, SE_UNDS,
    // The extra two keys on the left side
    SE_MINS, SE_PLUS,
    // Left thumb keys                // Right thumb key
    FUN_CLR, MT_SPC,                  SE_E
),
It took a long time for me to get the trackball working (admittedly, mostly because I soldered the pins wrong). There’s quite a lot of documentation for QMK but curiously enough I didn’t find anything that covered the whole setup. I arrived here by trial and error, trying to piece together parts from other keyboards into a setup that worked for me.
First we need to create the files halconf.h and mcuconf.h (they go in the same folder as keyboard.json) to enable the SPI driver:
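The files themselves are tiny; the usual QMK/ChibiOS override pattern looks like this (check the layout’s source code for the exact contents):

// halconf.h: enable the SPI subsystem.
#pragma once

#define HAL_USE_SPI TRUE

#include_next <halconf.h>

// mcuconf.h: enable the SPI1 peripheral on the RP2040.
#pragma once

#include_next <mcuconf.h>

#undef RP_SPI_USE_SPI1
#define RP_SPI_USE_SPI1 TRUE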
And enable the pointing device with the pmw3389 device driver in rules.mk:
POINTING_DEVICE_ENABLE = yes
POINTING_DEVICE_DRIVER = pmw3389
Now we need to add the sensor pins to config.h:

// SPI1, matching mcuconf.h
#define SPI_DRIVER SPID1
// The pin connections from the pmw3389 sensor
#define PMW33XX_CS_PIN GP13
#define SPI_SCK_PIN GP14
#define SPI_MOSI_PIN GP15
#define SPI_MISO_PIN GP16
This should be enough to get the sensor going, but because we have a split keyboard we need to set that up too:
// The trackball is on the right
#define SPLIT_POINTING_ENABLE
#define POINTING_DEVICE_RIGHT
There are some additional tweaks that I had to play with to make the trackball work well:
// (The values below are illustrative.)
// The trackball is quite sensitive to how
// large the liftoff distance should be.
#define PMW33XX_LIFTOFF_DISTANCE 0x02
// Sets the mouse resolution, up to 16000.
#define PMW33XX_CPI 1600
// The directions were messed up, this fixes it.
#define POINTING_DEVICE_INVERT_X
#define POINTING_DEVICE_INVERT_Y
With that the trackball moves the mouse as expected.
As I struggled to get the trackball working I tried to use the debug output. I’ll include it here for completeness’ sake:
Enable the console in rules.mk:
CONSOLE_ENABLE = yes
Enable pointing device debugging in config.h:
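If I remember QMK’s option name correctly, that’s a single define:

#define POINTING_DEVICE_DEBUG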
Turn on debugging in keymap.c:

void keyboard_post_init_user(void) {
    debug_enable = true;
    debug_mouse = true;
}
And then run qmk console from the command line.
No.
This keyboard is certainly the most comfortable keyboard I’ve used but it’s not close to being an “ultimate” keyboard. Here are a few things that might improve it:
The trackball still isn’t nearly as comfortable as the Kensington SlimBlade.
Maybe a keyboard with a larger trackball would be better?
The extra keys on the left side are barely useful.
It’s not a big deal, maybe I can find some usage for them, but to me having barely useful keys feels wrong.
There are more extra features I feel an ultimate keyboard should have.
The keyboard I’ve built is nice… But it’s still just a normal keyboard with a trackball. Maybe a vibration sensor, a display, or even some LEDs? A smart knob with software-configurable endstops and detents would really add some weight to the moniker of an ultimate keyboard.
It’s hard to know how good the keyboard is before I’ve put it through extensive use, and to do that I need to settle on a keyboard layout for the keyboard. I’ve already designed a layout for a 34-key keyboard that should be fairly straightforward to adapt but I still need to figure out how to add mouse keys and what to do with the “extra” keys on the left-hand side.
Check out The current Cybershard layout for how the keyboard layout is coming along.
2024-10-31 08:00:00
I find that AI can help significantly with doing plumbing, but it has no problems with connecting the pipes wrong. I need to double and triple check the updated code - or fix the resulting errors when I don’t do that.
I’ve been skeptical of the AI craze that’s been going on in the developer community. It’s a useful tool but some people behave like large swaths of developers will be replaced by AI tomorrow.
I don’t understand the hype as my experience has been quite different, yet I’ve struggled to pinpoint why. In this post I’ll try to explain what I think is the fundamental problem I have with letting an AI generate code for me.
I realized what my problem with AI is when I read this comment on Hacker News (emphasis mine):
My theory is the willingness to baby sit and the modality. I’m perfectly fine telling the tool I use its errors and working side by side with it like it was another person. At the end of the day it can belt out lines of code faster than I, or any human, can and I can review code very quickly so the overall productivity boost has been great.
It’s true that I’m not fond of pair programming but the key issue is that I can’t review code quickly. On the contrary I’m quite bad at looking at an unknown piece of code and verify that it’s correct.
This isn’t a problem of mine that’s unique to programming. I’ve been quite good at math (relatively speaking) since I was a child and I breezed through university math (where I took as many math courses as I could get my hands on).
Despite my relative skills I always got marks against me during tests and exams. They weren’t caused by my lack of understanding but by small mistakes like writing numbers wrong. Mistakes that I tried hard to correct; I started to double-check and triple-check my work but they were still slipping through.
I realized that when I was first solving the problem I was focused. I was in the zone and I could keep the problem in my head while I worked.
But when I went back to verify my work my brain wouldn’t engage in the same way. I was trying to but I couldn’t get into the zone. The problem was Done™ and it was like my brain had disengaged. If I was looking at myself in a third-person view I’m sure my eyes would glaze over.
When we write code or solve math problems I think we build up a mental model of the problem we’re trying to solve and the system we’re interacting with; what a variable name signifies, what effects a function call might have, and how pieces of information relate to one another.
This mental model is crucial when reading code or solving math problems and if it’s missing we need to rebuild it. I think this is what happened when I had finished my math problems: when I was finished I dropped the model, so coming back to it was a struggle.
The same is true when reviewing code; you’ll be much more effective when reviewing small changes to a code base you’re familiar with because you already have a mental model of the surrounding systems. It becomes harder when you’re reviewing larger changes, or reviewing changes in an unfamiliar code base, because you have more gaps in your mental model.
Maybe it’s a skill issue but I find it much more difficult to find errors in code others write (or I myself wrote a while ago) than to find errors while I’m developing the code. I get the same “eye glazes over” feeling as when I went back to verify my math problems. I’m slow, I know I’m not doing a good job, and it’s a struggle.
I truly wonder how other people review code in a productive way. Sometimes I feel I need to run, change, and test the code to understand it… But that’s time consuming especially as the amount of code increases. Trusting your fellow developers seems like a necessity.
Some are enamored with how great AI code generation is. And to be sure, compared to just a few years ago it’s unbelievably good. But would I trust the code as much as I’d trust a co-worker? Absolutely not.
In my experience an AI is at best as good as a new developer, often much worse, and sometimes outright horrible. (And no, I don’t blindly trust a new developer. I don’t trust myself either.) At least I can be reasonably sure that other developers test or run their code before I need to look at it.
Relying on AI is like copy pasting code from Stack Overflow: useful but you cannot trust it. While the code may look good on a surface level, it’s often subtly wrong in ways that even a Stack Overflow answer usually isn’t. Hallucinating a non-existent library function or adding an extra argument is quite common.
This is mostly fine for short snippets where it’s easy to run and test the code, but the problem becomes significant when you rely on AI for larger pieces of code.
The crux of the matter is that I’m much more productive when I’m programming than when I’m reviewing code. With most current AI tools it feels like I’m reviewing code more than programming and that’s a bad trade for me to make.
While you’re writing code you’re continually building up your mental model but when you let an AI generate the code you still need to do the hard work of building your mental model.
I don’t think writing code is the most important thing you’re doing while programming—it’s building a mental model of the system you’re building.
Ever felt that it would be faster to just code something yourself than to gently guide a junior developer through a problem? That’s how I feel when I shepherd an AI, with the difference that teaching a junior programmer is an investment while the AI won’t learn no matter how many times you interact with it.
I need to clarify that while I’m skeptical towards the current AI hype I find some AI tools useful in various contexts.
For programming I’m a heavy user of Kagi’s quick answer functionality that uses AI to summarize the search results and gives you references so you can drill down further if you need to. I use it many times a day to answer questions like:
It’s not bulletproof but the combination of good search results (way better than Google in my opinion) with AI’s summarizing ability is absolutely fantastic.
AI dev tools are useful, I just haven’t seen the incredible productivity boost that some say exist. Maybe they are working on different problems in different contexts than I am, have different standards, or just are better at utilizing them than I am?
Because, surely, it would be way too simple to dismiss the productivity claims as people evaluating the tools by how useful they may become instead of how useful they are right now?
2024-10-08 08:00:00
I’ve been a fan of Home Assistant for a while now; it’s a great platform for home automation with its beginner friendly and feature rich UI, support for a ton of different devices and integrations, and a bunch of ways to create automations.
But there’s no engine for writing automations in Elixir that I could find; this post addresses this fatal weakness.
Specifically, in this post I’ll go through:
Ever since I started with home automation I’ve thought that it would be a great match for the concurrency model that Elixir uses. You’ll have all sorts of automations running concurrently, reacting to different triggers, waiting for different actions, and interacting with each other; something I think Elixir excels at.
Now, there are many options for writing automations for Home Assistant that already work well, the biggest reason I wanted to use Elixir is because I like it. That Elixir happens to be a good fit for home automation is just a bonus.
I’ve tried to write automations via the Home Assistant UI (meh), using YAML configuration (hated it), visual programming with Node-RED (I want real programming), and in Python using Pyscript (pretty good). In the end I simply enjoyed writing automations in Elixir more.
The very first thing we need to solve is how to get data from Home Assistant and how to call services (now called actions).
Home Assistant has a websocket API and a REST API that we can use to implement our engine. As we can get entity states and call services over the websocket there’s no need to bother with the REST API for our example.
I used WebSockex to set up the websocket connection to Home Assistant. Here’s a tentative start that connects and receives a message:
defmodule Haex.WebsocketClient do
  use WebSockex
  require Logger

  # Adjust to your Home Assistant instance
  @url "ws://homeassistant.local:8123/api/websocket"

  def start_link(_opts) do
    WebSockex.start_link(@url, __MODULE__, %{}, name: __MODULE__)
  end

  @impl true
  def handle_frame({:text, msg}, state) do
    case Jason.decode(msg) do
      {:ok, msg} ->
        Logger.debug("Received: #{inspect(msg)}")
        handle_msg(msg, state)

      {:error, error} ->
        Logger.warning("Failed to decode frame: #{inspect(error)}")
        {:ok, state}
    end
  end

  defp handle_msg(msg, state) do
    Logger.warning("Unhandled message: #{inspect(msg)}")
    {:ok, state}
  end
end
As with all concurrent services in Elixir, WebSockex should be started in a supervision tree. Under the main Application supervisor works well:
defmodule Haex.Application do
  @moduledoc false

  use Application

  @impl true
  def start(_type, _args) do
    children = [Haex.WebsocketClient]
    Supervisor.start_link(children, strategy: :one_for_one)
  end
end
If we run this then Home Assistant will send us a message upon connection:
[warning] Unhandled message: %{"ha_version" => "2024.10.0", "type" => "auth_required"}
This means we need to authenticate using a long lived access token.
Reading the websocket API documentation, we should respond with an auth message:
defp handle_msg(%{"type" => "auth_required"}, state) do
  token = Application.fetch_env!(:haex, :access_token)

  reply =
    Jason.encode!(%{
      type: :auth,
      access_token: token
    })

  {:reply, {:text, reply}, state}
end
It’s prudent to fetch secrets from environment variables at runtime in runtime.exs:

# The name of the environment variable is up to you.
config :haex, access_token: System.fetch_env!("HAEX_ACCESS_TOKEN")
And now we get another unhandled message, telling us our auth succeeded:
[warning] Unhandled message: %{"ha_version" => "2024.10.0", "type" => "auth_ok"}
After authenticating we can tell Home Assistant that we’d like to subscribe to all state changes in the system (so we can write automations that trigger on a state change).
I’m lazy so I send the subscription message when I’m handling (ignoring) the auth_ok message:

defp handle_msg(%{"type" => "auth_ok"}, state) do
  reply = Jason.encode!(%{id: 1, type: :subscribe_events, event_type: :state_changed})
  {:reply, {:text, reply}, state}
end
With this up we’ll get another acknowledgment that our subscribe command succeeded (matching id: 1):
[warning] Unhandled message: %{"id" => 1, "result" => nil, "success" => true, "type" => "result"}
And we start receiving state changed messages:
[warning] Unhandled message: %{"event" => %{"context" => %{"id" => "01J9DK3CN0CEEWGCV1139HTC11", "parent_id" => nil, "user_id" => nil}, "data" => %{"entity_id" => "sensor.vardagsrum_innelampor_switch_power", "new_state" => %{"attributes" => %{"device_class" => "power", "friendly_name" => "Vardagsrum innelampor switch Power", "state_class" => "measurement", "unit_of_measurement" => "W"}, "context" => %{"id" => "01J9DK3CN0CEEWGCV1139HTC11", "parent_id" => nil, "user_id" => nil}, "entity_id" => "sensor.vardagsrum_innelampor_switch_power", "last_changed" => "2024-10-05T05:40:36.640422+00:00", "last_reported" => "2024-10-05T05:40:36.640422+00:00", "last_updated" => "2024-10-05T05:40:36.640422+00:00", "state" => "4.6"}, "old_state" => %{"attributes" => %{"device_class" => "power", "friendly_name" => "Vardagsrum innelampor switch Power", "state_class" => "measurement", "unit_of_measurement" => "W"}, "context" => %{"id" => "01J9DK37CMJBDFK7M5VGYJ1CZG", "parent_id" => nil, "user_id" => nil}, "entity_id" => "sensor.vardagsrum_innelampor_switch_power", "last_changed" => "2024-10-05T05:40:31.252863+00:00", "last_reported" => "2024-10-05T05:40:31.252863+00:00", "last_updated" => "2024-10-05T05:40:31.252863+00:00", "state" => "4.5"}}, "event_type" => "state_changed", "origin" => "LOCAL", "time_fired" => "2024-10-05T05:40:36.640422+00:00"}, "id" => 1, "type" => "event"}
[warning] Unhandled message: %{"event" => %{"context" => %{"id" => "01J9DK3CQ27BWBX0R9MAP5SRM9", "parent_id" => nil, "user_id" => nil}, "data" => %{"entity_id" => "sensor.dishwasher_plug_voltage", "new_state" => %{"attributes" => %{"device_class" => "voltage", "friendly_name" => "Dishwasher plug Voltage", "state_class" => "measurement", "unit_of_measurement" => "V"}, "context" => %{"id" => "01J9DK3CQ27BWBX0R9MAP5SRM9", "parent_id" => nil, "user_id" => nil}, "entity_id" => "sensor.dishwasher_plug_voltage", "last_changed" => "2024-10-05T05:40:36.706679+00:00", "last_reported" => "2024-10-05T05:40:36.706679+00:00", "last_updated" => "2024-10-05T05:40:36.706679+00:00", "state" => "232.5"}, "old_state" => %{"attributes" => %{"device_class" => "voltage", "friendly_name" => "Dishwasher plug Voltage", "state_class" => "measurement", "unit_of_measurement" => "V"}, "context" => %{"id" => "01J9DK37THDW13GTP09KXNMG0Q", "parent_id" => nil, "user_id" => nil}, "entity_id" => "sensor.dishwasher_plug_voltage", "last_changed" => "2024-10-05T05:40:31.697304+00:00", "last_reported" => "2024-10-05T05:40:31.697304+00:00", "last_updated" => "2024-10-05T05:40:31.697304+00:00", "state" => "232.18"}}, "event_type" => "state_changed", "origin" => "LOCAL", "time_fired" => "2024-10-05T05:40:36.706679+00:00"}, "id" => 1, "type" => "event"}
...
At this point I’d like to take a step back and plan ahead a little. We have our state changed events but how should we send them to the automations we’ll write?
One option might be to let WebsocketClient loop over all automations and call them directly:
for automation <- automations do
automation.state_changed(msg)
end
end
But that’s not very flexible.
We’d have to keep the automations list updated, and what about other services that might want to subscribe to state changes but aren’t automations?
Instead I opted to use Phoenix.PubSub, a publisher/subscriber service that can broadcast messages throughout your application.
First we’ll need to start an instance in our supervision tree (called Haex.PubSub):
@impl true
def start(_type, _args) do
  children =
    [
      {Phoenix.PubSub, name: Haex.PubSub},
      Haex.WebsocketClient
    ]

  Supervisor.start_link(children, strategy: :one_for_one)
end
Then we can broadcast messages to anyone who cares to listen:
# (The helper and topic names here are illustrative.)
defp broadcast_state_changed(event) do
  Phoenix.PubSub.broadcast(
    Haex.PubSub,
    "state_changed",
    {:state_changed,
     %{
       entity_id: event["entity_id"],
       new_state: event["new_state"],
       old_state: event["old_state"]
     }}
  )
end
If a service wants to receive the messages it subscribes to the same topic:

Phoenix.PubSub.subscribe(Haex.PubSub, "state_changed")
There’s one key component left and that’s how to call a service / execute an action.
You call a service by sending this type of message over the websocket:
# This message turns on a light.
# (The entity id and color are examples.)
%{
  id: 2,
  type: :call_service,
  domain: :light,
  service: :turn_on,
  target: %{
    entity_id: "light.bedroom_lamp"
  },
  service_data: %{
    color_name: "red",
    brightness: 100
  }
}
You’ll then receive a successful result message corresponding to the id of the message. You’re supposed to correlate the ids of the messages you send and receive, but it’s not central to this post so I’ll gloss over that implementation detail.
I decided to create automations as regular GenServers that subscribe to triggers and then do stuff. An automation might look something like this:
# (The module, topic and message names here are illustrative.)
defmodule Haex.Automations.Example do
  use GenServer

  alias Phoenix.PubSub

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts)
  end

  @impl true
  def init(state) do
    PubSub.subscribe(Haex.PubSub, "tick")
    {:ok, state}
  end

  @impl true
  def handle_info({:tick, _time}, state) do
    # Do something at a specific time
    {:noreply, state}
  end
end
If you’re unfamiliar with GenServers, the gist is that a GenServer is an isolated process that receives messages and should be started in a supervision tree.
In the above example we subscribe to a channel and then receive messages with the handle_info callback.
(The messages are generated from state_changed messages for the entity sensor.time that’s updated every minute.)
It’s finally time for the ultimate expression of home automation: controlling a light source.
Gentlemen I am now about to send a signal from this laptop through our local ISP racing down fiber-optic cable at the speed of light to San Francisco, bouncing off a satellite in geosynchronous orbit to Lisbon Portugal where the data packets will be handed off to submerge transatlantic cables terminating in Halifax Nova Scotia, and transferred across the continent via microwave relays back to our ISP and the XM receiver attached to this…
Lamp.
Jokes aside, controlling a light is great because it’s easy to start with (turn on/off), you’ll get to see results in the real world (the light changes color), and you can increase the complexity if you want (create a sunrise alarm, use circadian lighting, flash during a fire alarm, etc).
Let’s ease into an automation by turning on a light on a specific time:
defmodule Haex.Automations.BedroomLight do
  use GenServer

  alias Phoenix.PubSub
  alias Haex.Light

  # This is the Home Assistant entity I want to control.
  # (The entity id, topic, time and color below are illustrative values.)
  @entity "light.bedroom_lamp"

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts)
  end

  @impl true
  def init(state) do
    PubSub.subscribe(Haex.PubSub, "tick")
    {:ok, state}
  end

  @impl true
  def handle_info({:tick, time}, state) do
    # Note that time only ticks every minute so seconds will always be zero.
    if time == ~T[06:30:00] do
      Light.turn_on(@entity, color_name: "red", brightness_pct: 80, transition: 10)
    end

    {:noreply, state}
  end
end
That was easy. Let’s try something a bit more interesting: a wake-up sequence.
Specifically I’d like to gradually change the brightness and color of the light from a dim red to a bright, white light.
We could hardcode it with something like this:
cond do
time == ->
Light.turn_on(@entity, brightness_pct: 10, color_name: , transition: 450)
time == ->
Light.turn_on(@entity, brightness_pct: 70, color_name: , transition: 450)
time == ->
Light.turn_on(@entity, brightness_pct: 80, color_name: , transition: 450)
time == ->
Light.turn_on(@entity, brightness_pct: 100, kelvin: 2700, transition: 450)
true ->
nil
end
end
But that’s not flexible if, for example, we want the start time to be configurable via the UI in the future. While we’re refactoring, let’s try to implement the transitions using a message passing approach:
@impl true
if time == do
send(self(), :transition_sunrise)
else
end
end
Here we use send() to send the message :transition_sunrise to ourselves, and we set :light_state in the GenServer state so it keeps track of which transition we should perform.
This message is again handled by handle_info:
case set_sunrise_light(state) do
:done ->
# We've reached our last transition.
->
# We still have transitions left to handle,
# send another :transition_sunrise message after 10 minutes,
# repeating the loop.
Process.send_after(self(), :transition_sunrise, 10 * 60 * 1000)
end
end
The function set_sunrise_light sets the light depending on the current state and returns :done when we’ve set the last transition.
Pay attention to the Process.send_after call, where we send another :transition_sunrise message but with a delay, continuing the recursion until we’ve handled all transitions.
I’m not thrilled about the implementation of set_sunrise_light but here it is:
transitions =
[
[brightness_pct: 10, color_name: , transition: 450],
[brightness_pct: 70, color_name: , transition: 450],
[brightness_pct: 80, color_name: , transition: 450],
[brightness_pct: 100, kelvin: 2700, transition: 450]
]
# Transform the list into a map with index => transition.
# Yes, it's a shoddy imitation of an array.
|> Enum.with_index()
|> Map.new(fn -> end)
last_state = Enum.count(transitions) - 1
=
if sunrise_state >= last_state do
else
end
Light.turn_on(@entity, light_opts)
next_transition
end
I’d like to add the ability to abort the sunrise alarm by turning off the lamp. It’s fairly straightforward:
Subscribe to a state change:
PubSub.subscribe(Haex.PubSub, <> @entity)
(I use a simplified message instead of the raw state_changed message we’ve seen before.)
Change the state if we’re in a sunrise:
end
end
We still have a :transition_sunrise message that will arrive later but the fallback handle_info will ignore it.
If we implement a snooze or restart for our sunrise this may become a problem.
What we’ve done so far works but the structure isn’t ideal.
The leftover :transition_sunrise message bothers me, and what if we want to implement another light transition, either for a bedtime routine or for another light?
Then we’d have to re-implement a large portion of the automation, which isn’t my idea of fun.
We can break out the code into another GenServer, let’s call it LightTransition, and let it keep track of the transitions, letting us focus on the more interesting parts of automation writing.
if time == do
=
LightTransition.start_link(
entity_id: @entity,
transitions: [
[brightness_pct: 10, color_name: , transition: 450],
[brightness_pct: 70, color_name: , transition: 450],
[brightness_pct: 80, color_name: , transition: 450],
[brightness_pct: 100, kelvin: 2700, transition: 450]
]
)
state =
state
|> Map.put(:light_state, :sunrise)
|> Map.put(:transition, transition_pid)
We start our transition using start_link, foregoing the supervision tree as it doesn’t make sense to have the transition without the automation.
We also keep track of the transition’s process id in the state, which we can use to stop the transition if needed:
GenServer.stop(state.transition)
LightTransition itself is fairly straightforward when we don’t have to keep track of the transition state:
use GenServer
alias Haex.Light
GenServer.start_link(__MODULE__, opts)
end
@impl true
send(self(), :transition)
end
@impl true
case state.transitions do
[] ->
[light_opts | rest] ->
Light.turn_on(state.entity_id, light_opts)
timer = Process.send_after(self(), :transition, light_opts.transition)
end
end
end
With this in place we can support pause and resume by using Process.read_timer() and Process.cancel_timer():
@impl true
time_left = Process.read_timer(timer)
Process.cancel_timer(timer)
state =
state
|> Map.put(:time_left, time_left)
|> Map.delete(:timer)
end
Light.turn_on(state.entity_id, state.last)
timer = Process.send_after(self(), :transition, time_left)
state =
state
|> Map.put(:timer, timer)
|> Map.delete(:time_left)
end
I think things turned out pretty well in the end.
So far we only have a sunrise alarm, but it’s easy to imagine more features that our humble lamp could support:
on/off
quickly. Should only end when you turn off the light.
While you could implement them all as separate automations, the more you add the harder it gets to keep them from interfering with each other. You wouldn’t want your sexy time to be interrupted would you?
An alternative is to use a state machine to track the different states, making the state transitions more explicit. Our automation is already a simple state machine and it’s fairly easy to add more states and more functionality to it.
An automation is just an Elixir GenServer, so the same strategies for testing a GenServer apply here too. I’ll start with the test I want to write, and we’ll work backwards to make it work:
test , % do
# Start the sunrise by sending a time message to the automation.
send(server, )
# Assert that we'll eventually receive the sunrise transitions.
assert eventually(fn ->
[
%,
%,
%,
%,
%
] =
WebsocketClientCollector.get_messages(
get_service_data: true
)
end)
end
The first thing we’ll need to do is to start the GenServer so we can start interacting with it. We don’t need a supervision tree so we can start it directly and send it to the test:
setup _opts do
= BedroomLight.start_link([])
%
end
test , % do
# ...
end
I like to test against isolated GenServers as it allows parallel testing and it reduces the risk of contamination from other parts of the application.
If we run this test we’ll notice that the automation will only output the first sunrise transition. What gives?
Remember this line?
Process.send_after(self(), :transition_sunrise, 10 * 60 * 1000)
It says that we’ll continue the sunrise transition after 10 minutes. Nobody wants to wait that long for a test to finish…
To get around this I added an option to the automation so that we can override the delay to 1 millisecond during the test:
setup opts do
opts = Map.put_new(opts, :transition_time, 1)
= BoysRoofLight.start_link(opts)
%
end
# And in the automation:
transition_time = state[:transition_time] || 10 * 60 * 1000
Process.send_after(self(), :transition_sunrise, transition_time)
I don’t like modifying code just to make tests work but in this case I think it’s a reasonable workaround.
I want to touch on the eventually helper that I think is super useful when testing processes in Elixir. It comes in handy whenever I want to wait for a message to be delivered or wait for a process to reach a certain state. Here it is:
# (The default timeout value here is arbitrary.)
def eventually(func, timeout \\ 1_000) do
  # Use Task to be able to timeout the execution.
  task = Task.async(fn -> _eventually(func) end)
  Task.await(task, timeout)
end

defp _eventually(func) do
  try do
    if func.() do
      # Return true so we can use it in an `assert` statement.
      true
    else
      Process.sleep(10)
      _eventually(func)
    end
  rescue
    # Rescue so we don't have to bother with proper matches etc
    # inside the predicate function.
    _ ->
      Process.sleep(10)
      _eventually(func)
  end
end
Careful use of checkpoints in our tests, where we wait for a state to be fulfilled, is much preferable over sprinkling Process.sleep() in our tests, hoping that the race conditions will go away.
The last thing we need is to capture outgoing websocket messages. In fact we also need to block the websocket connection, because as it is now the full application will run when we run the tests, including connecting to our Home Assistant instance and starting to receive state changed events.
We can do this by replacing the websocket client during tests. The application config is a good place for these settings:
# In the regular config:
config :haex,
  ws_client: Haex.WebsocketClient

# And in the test config:
config :haex,
  ws_client: WebsocketClientCollector
Then when we send a message we delegate to the proper client:

# (Function names here are illustrative.)
def send(data) do
  ws_client().send(data)
end

defp ws_client do
  Application.fetch_env!(:haex, :ws_client)
end
All WebsocketClientCollector does is collect sent messages by process id and return a list of them:
use GenServer
GenServer.call(__MODULE__, )
end
GenServer.call(__MODULE__, )
end
# Skipped the implementation ...
end
With this our test for the sunrise alarm should pass!
Tests in an asynchronous and concurrent system—where messages don’t arrive immediately and where services interact with each other—can be very annoying to deal with, as it’s easy to introduce race conditions where a test sometimes fails.
Consider this test where we’ll test that the sunrise is aborted if the light is turned off in the middle:
@tag transition_time: 10
test , % do
send(server, )
assert eventually(fn ->
:sunrise = BedroomLight.get_state(server)
end)
# Should stop the sunrise
send(server, )
assert eventually(fn ->
:day == BedroomLight.get_state(server)
end)
assert Enum.count(WebsocketClientCollector.get_messages(server)) == 1
end
Even though it appears we’re avoiding race conditions by waiting for the automation to change its internal state in the two eventually blocks, this test may still fail on occasion.
The issue is that on the last line we’re testing that we only received a single sunrise transition.
But we set a transition time of 10 milliseconds in the @tag, and sometimes the messages arrive in such a way that the automation manages to transition twice.
To add some leeway in our test we might try to change the condition to < 4 and to increase the transition time…
We already have a working home automation engine that can be used as-is to control our home. But there are a couple of features that are missing and would enhance the system, for example:
Cron style support.
We can add cron-like scheduling to our automations using libraries such as Quantum or Oban.
A simpler API for simpler automations.
While GenServers are great in many ways they’re a bit verbose for simple automations.
I took inspiration from AppDaemon’s listen_state for a simpler API:
# This automation turns on a ledstrip behind my monitors when the plug power
# is above 180, which happens when I turn on my three monitors.
listen_state(
,
fn ->
Light.turn_on(, color_temp: 220, brightness_pct: 40)
end,
gt: 180
)
listen_state is implemented by—you guessed it—a GenServer. listen_state registers a trigger callback together with some trigger conditions within the GenServer, then the server calls the callbacks whenever the conditions are met.
This way we don’t need to mess with the internals of a GenServer and can use a declarative approach to create simpler automations.
Querying entity states.
Sometimes we want to only execute an automation if an entity has a specific value, for example:
if is_on() do
# Trigger doorbell
end
I support this with the States GenServer that holds the state of every entity in Home Assistant. At startup it fetches all states and uses the state changed event we’ve seen before to keep it in sync.
Generate automation entities.
I want to be able to enable and disable the automations in the system.
I’ve been manually creating input_boolean.<automation>_enabled entities, but our automation engine could create these automatically.
We could keep track of when the automation was last triggered and display the internal state of automations for debugging purposes.
To set states (and create entities) we need to use the REST API.
There’s probably a bunch of things I haven’t yet realized that I need, but at the moment I’m really happy with writing my home automations in Elixir.
2024-10-05 08:00:00
I recently bought the Eight Sleep Pod 4—a smart mattress cover that tracks your heart rate, HRV, snoring, and cools or warms the mattress during the night. There’s a lot to like about the mattress but in the end I opted to return it.
This post describes my experience with the Pod 4.
The Eight Sleep mattress is really expensive but that’s not all—it’s a mattress with a subscription! I hated it when Oura introduced a subscription for their ring, and I hate the world that led us to a mattress with a subscription.
So why bother with the ridiculous pricing?
Because sleep is important.
What would 60 or even 30 minutes of extra sleep per day be worth? Or maybe the same amount of sleep but better? For me, as a parent of young kids that wakes up way too early, the answer is that it would be worth a lot.
That’s why I was able to look past the price and give Eight Sleep a chance.
There are a bunch of things I like about the mattress and a bunch of things I didn’t like about it. The cons outweigh the pros for me but not by much; if my circumstances were a little different I might have kept it.
Sleep generally improved.
I didn’t get the promised +30 minutes of extra sleep but anecdotally it was a positive change.
The mattress could get very hot and cold.
I was worried that the mattress wouldn’t be able to get cold enough, but it could get really cold.
Tap control on the side works very well.
It was very easy to tap the side of the bed to increase or decrease the temperature (at least on my side; the other side of the bed is flush against the wall).
The separate sections of the bed are excellent.
Although our kids slept with us, the two sections worked well for us.
It’s a cool gadget—I like gadgets.
There’s no way to connect it to Home Assistant.
I like home automation and I’ll freely admit that if I could’ve connected the bed to Home Assistant I would’ve kept it, everything else be damned.
Using it as a presence sensor and being able to track the temperature of the bed and create my own automations would be glorious.
But alas, Eight Sleep keeps all the data to themselves and wants you to pay for the expensive subscription for the privilege of controlling your own mattress.
I sleep parts of nights, or even whole nights, in my kids’ bed.
To benefit from this kind of mattress you need to sleep on it, which I didn’t always do.
The app is a black box.
I was severely disappointed in the app as it doesn’t provide any insight into what the Autopilot is doing, making you question if it does anything at all or if it’s just empty AI marketing.
There’s no history of the temperature adjustments during the night.
You can’t look back at the night and see your own or the Autopilot’s temperature adjustments. My own adjustments aren’t even saved, so the temperature settings for the next night are a guessing game.
Eight Sleep claims that Autopilot is making adjustments but for all I know it’s not doing anything.
The “Autopilot has reduced your X by Y%” messages feel made up.
I didn’t have the Pod 4 Ultra that can elevate the bed, so how can the Autopilot reduce my snoring during the night? Sometimes I didn’t sleep the whole night in the bed, yet Autopilot claims it improved my deep sleep by 20%?
I’ll give them the benefit of the doubt and say it’s probably bad statistics rather than regular old bullshit… But how can you tell?
Most importantly, it did not get partner approval.
I gave it a shot, but after a few weeks I decided to use the generous 30-day free return to send back the Pod and get a refund (you throw away the mattress).
It wasn’t the smoothest ride, but customer service did a decent enough job. I had lots of trouble with the pickup, although that was probably an issue with the shipping company rather than Eight Sleep.
I work from home so it wasn’t that big of a deal, although it was a bit stressful.
Still, the free return is great and it might be the biggest reason to try Eight Sleep. In the future, when the kids get older and if someone reverse engineers the next generation of the Pod to connect it to Home Assistant, I might give it a try again.
2024-09-25 08:00:00
Time flies when you’re having fun.
Before you know it, your little babies have started school, you celebrate the 30th anniversary of Jurassic Park, and that little blog you started has now been going for 15 years.
15 years is a long time; longer than I’ve been waiting for Winds of Winter, and that wait has felt like an eternity. How did I—who frequently abandon projects for the next shiny thing—manage to continue this blog for so long?
I’m as surprised as anyone but I’ve tried to make a retrospective of how this may have happened.
I started this blog because I wanted to create a bunch of fast game prototypes and I wanted somewhere I could write about my plans and, ultimately, the games.
You see, I was a budding programmer and I wanted to learn how to program by making a game. Not a simple game like Tetris—that would be way too sensible—no, I wanted to make a big RTS game, like StarCraft or Supreme Commander. And to do that you needed a game engine.
So I got stuck developing my engine with truly groundbreaking features such as:
F2 to update variables (such as unit speed) without having to recompile.
Ctrl, Shift, and right click behavior.
… But, embarrassingly, I didn’t have anything even resembling a game, and with the development speed I had I doubt I’d be finished even today.
I’d gotten stuck in the Game Engine Trap, and I hated it.
Then I found The Experimental Gameplay Project (of World of Goo fame) that promoted the idea that you should be able to create a game prototype in just 7 days. That sounded like the perfect cure against the Game Engine Trap, so I created this blog to document my progress.
While the blog fulfilled its initial purpose as I developed around a dozen game prototypes that got me out of the Game Engine Trap (and that gave me a small “game engine” library at the end), I soon started writing about other things.
There are a number of reasons I continued to blog:
I enjoy writing.
I realize now that the biggest reason I blog is that I enjoy the writing process. I can’t put my finger on why, I just generally like it.
This isn’t always true though and I’ve had years where I’ve barely written anything at all (2022 for example). Sometimes I’ve had to force myself to write something.
I guess the motivation ebbs and flows sometimes.
Writing helps me think more clearly and helps me flesh out ideas.
The act of writing something down helps me find errors in my thinking and helps me consider different viewpoints. Rewriting the text you’ve written has a similar benefit to refactoring your code; your thoughts will be more polished afterwards.
Publishing something forces me to do better.
If I’m going to put something out there I’m going to re-read and rework my text/code/ideas more than if I had kept it for myself. (Even if nobody will read your posts, the mere act of putting something out there has this effect I think.)
For example, my custom keyboard layout wouldn’t have been nearly as well-developed if I hadn’t published it for everyone to see.
Being more thoughtful about how I write is something I’ve become more cognizant of as the years have gone by. My first posts were little more than a stream of thoughts, while the larger posts I gravitate towards today have gone through multiple revisions and rewrites before I publish them.
The blog is a place to document my personal projects.
Over the years I’ve done other projects, such as building a 3D printer and writing a book. It’s nice to have a place where I can write about them.
Looking at a log of things I’ve done makes me feel better.
I’ve been doing a small yearly review where I try to list the highlights of the past year. It’s been super helpful for me as it helps counteract the depressing feeling that nothing has happened and that I haven’t done anything.
Doing a yearly review of some sort is a practice I highly recommend everyone to try, and of course you don’t have to publish it for everyone to see.
I enjoy developing the blog as a project that exclusively solves my problems.
Programming is my biggest hobby and I can’t see myself ever stopping. The blog is a great project as it’s something that exists only for me so I can rewrite, refactor, and add whatever silly features I want and I only have myself to answer to. It’s a nice feeling.
Blogging helps me become a better writer, which in turn helps me become a better developer.
I think communicating well is an important and underrated part of being an effective software developer. Writing well is a skill that can be developed by practice, and maintaining a blog is a pretty good way to practice I’d say.
It’s important to point out that it’s not external feedback that has kept me going all these years. Yes, of course, it’s nice to get the occasional email with compliments, but that’s just a bonus.
I keep this blog for me to write, not necessarily for others to read.
Many of these kinds of retrospectives contain graphs of views over time or lists of the most popular posts, but I’m not showing them to you because I can’t—I don’t keep any statistics whatsoever.
I don’t really care—and I don’t want to care—about how many readers I have or what posts are and aren’t popular. I worry that if I add statistics to the blog it’ll change from an activity I perform for the activity’s sake, to an exercise in hunting clicks where I write for others instead of for myself.
If I were chasing views I would certainly not have continued to blog for as long as I have, and I’d have missed out on the many benefits I’ve gotten from the blog.
One of the reasons I’ve been blogging so long is that I’ve been able to play around with the tech stack of the blog. I’ve changed the tech stack a number of times; from choosing languages I wanted to learn, to a boring setup that “just works”, and back again.
I started out with PHP using the Kohana Framework and I still have fond memories of their excellent documentation. Although I had figured out how to create a website, it never graduated to a real blog.
Then I moved on to rewrite the site in Perl using Mojolicious. I’m not sure my efforts ever resulted in anything tangible, but I remember it was fun to play around with.
I stumbled upon the idea of using a static site for my blog and therefore abandoned Perl for Jekyll, a popular static site generator at that time.
I believe it was a smart choice because it helped me start writing, instead of jerking around with cool tech.
Eventually, I grew tired of the boring backend that just got the job done and in my quest to learn Haskell I replaced the generator with Hakyll, another static site generator with a pretty neat DSL.
The earliest Git commit on record. I’m fairly sure I used Git before this point (I abandoned SVN for my games in 2009).
Sadly, I never truly graduated from the “throw shit at the wall until it sticks” stage of my Haskell journey, which is why I barely added any features to the blog for many years.
Having outgrown existing solutions, I decided to join the Rewrite in Rust club (or is it a cult?).
Religious weirdness aside, having complete control of the site generator made it fun again to tinker and add small features.
Honestly though, my favorite piece of technology on the blog is CSS. I just really like to spend time to fiddle with the design and to make small tweaks here and there. I do use Sass but 95% is just plain CSS.
Modern CSS is honestly great.
Almost by accident I started using Djot instead of Markdown to write my posts. I couldn’t find a Tree-sitter grammar for Djot so I created one.
I’m in the process of connecting the site generator to Neovim to provide autocomplete, diagnostics, jumping between posts, and other cool features.
There’s lots of potential for spending tons of time in this swamp but these IDE-like features really elevate the writing experience.
At the moment the blogging software is a whole project in and of itself (by design; it’s a fun project to tinker with).
[Visualization: word count of every post plotted over time, loosely grouped by post type.]
It probably comes as no surprise that my posts have changed a lot since I started the blog. I made the above visualization that counts the words of each post and plots them on a time axis, together with a loose grouping by post type.
I have two main takeaways:
The posts have grown larger and more ambitious.
In the beginning I treated the blog almost like a Twitter/X feed with short updates on my game making progress. Now I spend weeks or even months slowly working away on a post until I feel it’s interesting and polished enough to publish.
As my interests have changed, so has the focus of my posts.
I only write about my hobbies or things that I’m interested in at that moment, so it’s only natural that the theme of the posts has changed. Gaming related posts have given way to more programming and the occasional meat-space related project.
I almost find it obvious that the blog has changed so much during the 15 years of its existence; of course my posts would grow more ambitious as my writing matured, and I’d obviously start gravitating away from games towards other projects.
Naturally, it’s just a lie I tell myself with the benefit of hindsight.
Predicting the future is impossible and I have no idea what the blog will look like 15 years from now. While it feels like I’ll keep blogging the same way, it would be foolish to claim that as a fact.
Sometimes it’s best to stop worrying and just enjoy the ride.