2024-10-12 03:54:06
When I started wood carving, the only sharpening method I remembered was from seeing my mother use some kind of smooth broken stone that she passed over the length of the knife blade before sacrificing a chicken.
I also remember seeing my father use a very coarse stone wheel placed on a motor shaft which threw many sparks when he sharpened some large axe for splitting wood.
I had neither of those around anymore in my rented place in the city, so I jumped headfirst into the mind-numbing and sometimes esoteric art of getting a sharp blade.
The first blade type I had to sharpen was for my BeaverCraft carving knives.
They fortunately came with a strop, basically a plywood base in the form of a paddle, with leather stuck to it on both sides, and a green waxy bar. Unfortunately I had no idea what to do with it.
Stropping is, at the most basic level, dragging the blade back and forth on a semi-hard surface (like leather). Clean leather won’t make your blade sharper though, and that’s why you have that green bar which you need to rub onto the leather. It contains a fine abrasive that slowly removes tiny amounts of steel from the very tip of your blade to make it sharper.
Reading about stropping and watching videos about it gets into a lot of debate on:
And so on. It’s hard to find much exact science on this, everyone figures it out as they go, and experts share their beliefs based on their extensive experience.
I settled on stropping after every 15 minutes of constant carving, but I had no idea what I was doing or if I got any result.
That’s one thing I learned that would have relieved me of much frustration at the start: find a way to test the outcome of your motions, otherwise you’ll just keep repeating things that don’t work.
And it also applies to programming: so many times I’ve seen colleagues writing code with fear and uncertainty, not knowing whether what they did would have the expected result, simply because they didn’t know how to test how the code would work.
For example, they were backend programmers who only knew how to run their backend Python server, but had no idea how to run the whole stack to check if the front end worked correctly with their changes.
Or app devs who didn’t know how to use a local database, and instead always feared release day because of how their changes would impact the production db.
Nowadays I know how to test for sharpness.
If I want a basic working blade I test if the knife cuts printer paper easily and without tearing it. If I want a carving blade, I check if it pops hairs off my arm easily. And I check that often so I don’t do hundreds of blade passes for nothing.
Anyway, back to my carving knives. For a long time I did a pretty bad job at keeping them sharp.
I would carve for a while, then feel the need to push too hard into the blade, or get tear-out in the wood. So I’d move the blade back and forth on the strop a few dozen times, notice a slight improvement when getting back to carving, watch it go away after just a few cuts, then return to stropping a thousand times for nothing.
Stropping is not sharpening, and after so many cuts those blades needed serious sharpening which a piece of leather doesn’t do.
Well, actually, stropping is sharpening, but it’s a very very fine kind of sharpening. It’s similar to using a super fine grit stone, something between 14000 and 100000 grit.
The green paste is a super fine abrasive that does remove metal from the apex of the blade, but so little at a time that I would need to do thousands of passes to get the same result that a coarse grit sharpening stone would do in a few passes.
I started looking into sharpeners, and because “stones” felt like something old which only my mother used because she didn’t have any better method, I would look into “modern” sharpeners like:
I tried a few of those expensive methods, ruined a few blades with them, wasted days on reading reviews and watching tutorials… none of those methods felt like a deterministic way to get a sharp blade every time.
The precision sharpening rig is probably the closest to that, but I don’t want to waste precious workbench space on that. Also, I can’t take it with me when traveling.
Once I got into larger projects, I discovered how to use wood planes (which seem to be thought of as antiquated in Romania; manual woodworking is dead here). I also discovered they need frequent sharpening, and their blade shape seemed perfect for a flat-surface sharpening method like, you know, a stone.
Every woodworker on YouTube showed how to sharpen planes on stones so I thought I should probably get one eventually. But which type?
There are whetstones, natural stones, ceramic stones, oiled stones, diamond stones, you name it.
After trying a ceramic stone that needed to be soaked in water for 5 minutes before any sharpening and constantly wetted every few passes — after making a wet mess, chipping the stone, and dreading sharpening because I had to keep the stone in water — I started looking into drier methods.
Diamond stones seemed to be more up my alley: a modern method based on an old, tried and true idea.
Whetstones and oiled stones are great as well, I just wanted something with less maintenance and more resistant to abuse.
At this point I got sick of wasting money on this hobby, so I got the cheapest set of Chinese diamond plates online. It came in a set of 4 thin steel plates with foam backing, each plate having a different grit: 400, 600, 1000 and 1200. Let’s not get into grit standards, it’s probably FEPA but who knows.
And what do you know, it worked, it was easy to use, cheap and no special instructions needed.
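As a rough sanity check on what those grit numbers mean, here’s a small lookup of the approximate FEPA-F average particle sizes for the grits in that set. These are published ballpark values; a cheap no-name plate may not follow any standard at all, so treat this as illustrative only:

```python
# Approximate average particle sizes (microns) for FEPA-F grit numbers.
# Values are rounded from the FEPA F tables; cheap plates can deviate
# wildly from any standard, so this is only a sanity check.
FEPA_F_MICRONS = {
    400: 17.3,
    600: 9.3,
    1000: 4.5,
    1200: 3.0,
}

def finer_than(grit_a: int, grit_b: int) -> bool:
    """True if grit_a has a smaller average particle size than grit_b."""
    return FEPA_F_MICRONS[grit_a] < FEPA_F_MICRONS[grit_b]

if __name__ == "__main__":
    for grit, microns in sorted(FEPA_F_MICRONS.items()):
        print(f"F{grit}: ~{microns} µm")
```

The takeaway: going from the 400 plate to the 1200 plate shrinks the average scratch size by roughly a factor of six, and stropping compound (around 0.5 µm) is another order of magnitude finer still.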
I also got lucky in finding the OUTDOORS55 YouTube channel and the Science of Sharp website, which do microscope analysis of blades and sharpening methods.
They’re getting as close as possible to actually knowing what the heck happens when you move the blade on top of a stone, leather, denim or any type of material people have tried sharpening on.
This helped me cut through the bullshit fast and get to a simple working method of getting any blade from rusty to shaving sharp in a few minutes.
At this point I had these diamond plates scattered on my workbench, I had a leather strop, a green stropping compound bar that painted my fingers green every time I used it, a round diamond file for my hook knives (because you can’t sharpen the inside of curved blades on a flat stone) and a piece of flexible leather for stropping those hook knives.
It worked, but it was a mess. And I also work on the go a lot: I always have some wood project at my parents’ house, carve a branch on a walk in the woods, or need to sharpen both a kitchen knife indoors and a chisel outdoors. I wanted to simplify this, but could not find any ready-made product I wanted.
So I got to doing what I know best: half-assing an improvised product that works for me but that I’d be ashamed to show anyone.
I cut a piece of 18mm thick beech wood into the shape and size of a diamond plate, then stuck a 600 grit plate on one side, and the leather on the other side.
This not only got the stone and strop higher up from the table which helped a lot with sharpening long knives, but also put them in the same package which I could carry with me.
I called this a sharpening block in my mind.
To keep the block from slipping while I moved the blade over it, I cut a piece from a non-slip silicone baking mat. Even the small pressure of the blade makes the silicone adhere well to both the block and whatever surface it’s sitting on.
I still didn’t know what to do if I needed a coarser/finer grit, what to do about the green waxy mess of the stropping compound, and how to carry the hook knife sharpeners.
This summer, my sister-in-law visited from Italy and asked if I could make a coffee table for her. Timing was tight because I had to make it in a week so she could transport it from Romania back to Italy by car.
I had no time to find wood slabs, so I got two oak wood panels from a big-box store, glued them one on top of the other to make the table-top thicker, and figured I’d find some table legs afterwards.
I found some 15cm diameter smooth beech logs at a firewood seller nearby. Turns out they were leftovers from a veneer factory, where a machine peels a really long, thin slice off the log automatically, leaving the core as smooth as a turned piece.
The table turned out pretty nice and solid, her children love sitting on top of it or hiding under it with their toys. I guess it could also be used for holding coffee cups, they didn’t get a chance to try that yet.
So this left me with some 36mm thick oak wood that I thought could make for a better sharpening block.
This time, I glued two coarse and fine grit plates back to back (either 400/1000 or 600/1200) and embedded some neodymium magnets inside the wood to keep the diamond steel plates firmly attached but able to flip easily.
For the strop, I didn’t have any leather left and it was crazy expensive, so I looked into alternatives.
You can basically use any semi-hard porous surface for a strop, even plain thick cardboard. It just needs to hold the fine particles of the compound and not flex too much when pressing the blade into it. People seem to use leather, denim, cotton, felt, cardboard, balsa wood etc.
After building my workbench, and the leg vise for it which needed some rubberized cork for the vise faces, I found myself with 3m² of that rubberized cork on my hands because I could only buy it in bulk here.
I tested the cork for stropping and I was amazed to see it’s even better than leather:
So I cut and glued a 3mm thick piece of that cork on the other side of the oak base and loaded it with pink corundum stropping compound.
Yes, this is the time I discovered the even more esoteric world of stropping, lapping and honing compounds which are not green bars of waxy stuff. We’ll get to that.
The 36mm thick oak gave me enough space to drill 20mm diameter holes with a Forstner bit, where I could place a wooden dowel lined with that same cork for honing the hook knives. Another hole was for the stropping compound.
I finally reached an all-in-one sharpening block, super stable, with coarse enough grit for getting a bad blade into shape fast, fine grit for sharpening, enough strop for a few years and a way to sharpen recurves and hook knives.
It was a bit too thick though…
Most people get a green bar of compound and use it all their life and never think about it. But nooo, I had to do research and see what that compound contains and why is it so waxy and is there better stuff?
My understanding is that the green stuff is made of very fine particles of chromium oxide, which is also used as a green pigment in cosmetics and painting, and which is why it makes everything in your life green when touched.
I had a Dialux Vert bar, and searching for its material safety data sheet surfaces the following:
Composition/information on ingredients:
mixture of fatty acids and paraffins, aluminum oxide, chromium oxide
So it is made of very fine particles of chromium and aluminum oxide suspended in paraffin wax.
Does that mean it can be melted and poured into cylindrical molds? Yes it does! Adding some green flakes to a small cup of walnut oil lets the mix be melted in a microwave oven.
Any type of oil works — I tested sunflower oil, olive oil, coconut oil, mineral oil etc.; you just need some kind of non-solid fat to make the paste less viscous when melted.
After lining the 20mm hole with wax paper and pouring the melted mix, it solidified into a green cylinder with a chapstick-like consistency that I could easily get out and apply to the cork.
In the last decade, diamond pastes and sprays started becoming popular for stropping.
Diamond powder was widely used for lapping gemstones for a long time, and diamond paste was being used in the dental industry for polishing implants. Eventually people figured they can suspend the very fine diamonds in a sprayable emulsion, advertise it for stropping and ask a ton of money for it.
Diamond sprays are not accessible here in Romania, but I was able to get some lapping pastes of 3 micron and 0.25 micron particle size to test.
For reference, chromium oxide green polishing compound is usually formulated with 0.5 micron particle size, but that doesn’t mean that all particles are of that size. That’s more like the average particle size, which gives you an idea of what kind of polish to expect from it.
I admit, I like diamonds. They’re easy to apply by squirting the paste from a syringe, I can throw it in a bag without making a green mess, it does seem to cut metal very slightly faster and lasts a bit longer between applications. But a tiny syringe that lasts 3 months costs more than a green bar that lasts me years. I can’t make this compromise.
There are people that bought the diamond powder directly and made their own pastes and emulsions but it all seems too complicated for little gain.
I did try some cheap Chinese AliExpress diamond pastes, but for the life of me I can’t figure out how they calculate the grit and particle size. I bought the finest I could find, which is listed as W0.5 micron mesh. I don’t know what that means, but it doesn’t get to a shaving sharp blade easily.
Looking into other abrasives, there’s another loved stropping compound: cubic boron nitride or CBN emulsions.
It’s impossible to buy that here, but it’s way too expensive anyway and I’m pretty sure the difference would be marginal.
People tout that CBN and diamonds are actually necessary for harder steels, but I’ve yet to see that tested. My guess is that it’s just people’s way of justifying the purchase of a new and expensive toy, we’re all guilty of that.
Wikipedia says that Cr₂O₃ has a hardness of 8 to 8.5 Mohs, which is far higher than plain steel at 4 Mohs and even higher than tungsten at 7.5 Mohs. If it can scratch tungsten, I’d say it can hone a steel knife.
I eventually found a pale pink compound that has the same fine grit as the green one, but without the color problem.
It’s made from corundum, which is aluminum oxide without the chromium. It’s harder than the green stuff (at 9 Mohs), doesn’t leave its color everywhere, and it’s dirt cheap. I bought the Lea Chromax brand and I’m very happy with it so far. I paid €10 for a huge 800g brick that will last me a lifetime.
I break small chalk-like pieces from it that I pocket or leave throughout the house to ensure I always have some on hand. It applies easily to both cork and leather and doesn’t flake off or stick to the blade.
I gifted sharpening blocks along with pieces of stropping compound to many friends and relatives and the pink Chromax block still looks as large as when I bought it.
I am working on a way to merge the all-in-one quality of the thick oak block with the pocketability of the thin beech block.
I also got my hands on some narrow 20mm wide diamond plates of very fine 3000 grit. I find them useful for when I need a very sharp blade for doing finishing cuts on a spoon or when planing dense wood. I’d like to integrate this plate somehow in the block.
It could maybe be glued to the side of a 20mm thick block of wood, and used handheld instead of on a table.
I did try that on the beech block I already had, and it seems to work nicely. It’s not that hard to sharpen freehand and handheld as long as it’s just for doing the last fine honing on an already sharp blade.
The wax paper method for pouring melted compound is also not ideal. I’m thinking that pouring the compound into a chapstick tube would make it easier to use, and be thinner so it can fit inside a hole in the wood block.
I also have these round diamond files which are great for sharpening round blades, serrated knives, drill bits, even some rip cut saws. They come as a double-ended rod with a conical file on one end and a round + flat file on the other, and they’re meant to be placed in a pencil-like holder.
I cut the rod in half with an angle grinder, as I can’t fit the whole length of the rod inside the wood block. With only half of it, I can drill a 6mm hole in the block to embed the file.
I can’t fit the cork-lined wooden rod for honing though. I only have 3mm thick cork for now, which is way too thick for this use case.
There’s this thing called Nanocloth, which is like a thin microfiber cloth specifically for CBN and diamond emulsions. It’s hella expensive and I would never buy such a thing, but it gave me an idea: I could try wrapping some thin felt cloth around a 10mm diameter wooden dowel. It should hopefully fit in the block and hold enough compound for stropping.
And I guess that’s it, that would be my ideal sharpening method:
And all that in a cheap package that should cost less than $30 to assemble.
I’m still working on it. I prepared some new 18mm oak blocks and I’ll update this post with the results.
2024-04-28 18:01:45
Some of you might remember the legendary comment of Eric Diven on a Docker CLI issue he opened years ago:
@solvaholic: Sorry I missed your comment of many months ago. I no longer build software; I now make furniture out of wood. The hours are long, the pay sucks, and there’s always the opportunity to remove my finger with a table saw, but nobody asks me if I can add an RSS feed to a DBMS, so there’s that :-)
I say legendary because it has over 9000 reactions and most are positive. There’s a reason why so many devs resonate with that comment.
A lot of us said at some point things like “I’m gonna throw my laptop out the window and start a farm”. Even my last team leader sent me a message out of the blue saying “I think I’ll run a bar. I want to be a bartender and listen to other people’s stories, not figure out why protobuf doesn’t deserialize data that worked JUST FINE for the past three years”.
You know the drill, sometimes the world of software development feels so absurd that you just want to buy a hundred alpacas, sell some wool socks and forget about solving conflicts in package.json for the rest of your life.
I went through those stages too: when the Agile meetings at my last job got so absurd that we were asked to estimate JIRA task time in T-shirt sizes, I decided to quit that comfy, well-paying job for the uncertainty of making a living from macOS apps. I had only one app, it didn’t even work on the latest Apple Silicon chips and was making $0, so I really took a bet with it.
Recently, when people started coming to me with so many unrealistic and absurd expectations and demands about what my apps should do, I started wondering whether it would be possible to leave software development for a more physical trade.
Most of my pre-college time was spent on things I didn’t want to do.
I had a bit of childhood, but then I started going to school 6 hours per day, with 1-2 hours spent on commute after 5th grade. I only liked the 10-minute breaks between classes where I played basketball or practiced parkour.
Every day after I came back from school, I had to work in agriculture, either out in the field with crazy winds and sun and UV radiation, or inside a 100-meter long greenhouse where it’s either a 50°C sauna or a muddy rainforest. I was very bad at every job I was given, but it’s what my parents did for a living and I had to help them, no questions asked.
The few hours that remained, usually very late at night, tired both physically and mentally, I spent practicing acoustic guitar, doing bodybuilding exercises, writing bad poetry or drawing graphite portraits.
I almost never did homework or memorized whatever had to be memorized for the next day of school. I just couldn’t justify spending those few hours I had left on even more stuff I did not want to do.
When I found my liberty in college, hundreds of kilometers away from my parents, it’s like something clicked. I suddenly became incapable of doing work that I found meaningless.
Failing classes became acceptable, quitting jobs was something I did with little remorse if I felt I wasn’t helping anyone with the work I was assigned, and bureaucracy became a disease I had to avoid at all costs.
I still washed the dishes though. Cleaning and other “chores” never felt meaningless for some reason.
… was a chess board and piece set. With magnets inside them. Where the pieces look nothing like ordinary chess pieces.
I was trying to get the pieces to snap into place in a satisfying way, and make sure the game stays that way when kids or dogs inevitably bump the table where the board sits.
You know how Magnus Carlsen always adjusts his pieces so meticulously before a game? Well I have half of that obsession as well so I wanted to avoid doing that.
I started with a cheap but hefty pine board which I rounded with a lot of sandpaper. Then I asked my wife to help me colour in the darker squares because I’m pretty bad at colouring inside the edges (both literally and figuratively). We used some wood floor markers for that and the colour seems to be holding well.
Most chess board builds you see on YouTube are done by gluing squares of different wood species with alternating colors, but I had neither the skill nor the tools to do that.
Then I drilled holes for the super strong neodymium magnets from the underside of the board, having to get really close to the top side without passing through. I failed on two squares, but some wood putty took care of that.
I spent a few sunny days on the balcony sculpting the pieces with a badly sharpened knife and my Dremel. This was quite satisfying; there’s something really nice about seeing a nondescript rectangle take the shape of a little horse in your hands. I mean knight, but in Romanian that piece is called “horse”, and I really don’t see any knight there.
Regarding the design, I got some inspiration after seeing these modernist chess sets, which not only looked beautiful in my eyes, but also had these geometric shapes that didn’t need that much sculpting to replicate. I found ready-to-buy spheres and cubes of wood at a craft shop around me (which took care of pawns and rooks), and the rest were carved out of rectangles and cones of wood.
Two Octobers ago, a Romanian music band called Subcarpați was holding a free “make a Kaval with your own hands” course, where a flute artisan taught the basics of his trade for a week.
The Kaval or “caval” is a long flute with 5 holes and a distinct lower register where notes can sound melancholic and coming from far away, as opposed to the thin cheerful sound of the small shepherd flute.
Ever since I bought my first Kaval, I always wanted to learn how to build one myself. It’s one of those trades where there’s very little info on the internet, so it feels almost mystical compared to what I’m used to in programming. I would also have the chance to walk home with the finished flute, so of course I went to the course.
I loved the fact that we worked in teams of two, and that everything had to be done by hand with no power tools. Even the long bore through the 70cm branch of elder tree had to be done with a hand drill, taking turns to rest our hands.
The artisan had been a shepherd himself since childhood, and taught himself with a lot of trial and error about how to build good sounding flutes and how to make the holes so that the flute stays in tune. But he didn’t know why the holes should be at those specific distances or why the wood tube should be of that specific length for each scale.
I wanted to know those things, because I had an idea of making a universal Kaval that can play in any scale.
You see, if you want to play on top of songs in various scales, you need a Kaval made for each specific scale. So you’ll need an A minor flute, and a B minor one and a C minor one and so on, for a total of 12 different flute lengths.
I eventually found info on how a flute works by thinking about it as an open or closed tube where the vibrating air creates nodes and antinodes that should coincide with the hole position. At the moment I’m still studying this and working towards my “universal flute” goal.
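That open-tube model can be sketched in a few lines of code. This is only the idealized textbook approximation — it ignores end corrections, bore diameter and the embouchure — so the lengths it produces are starting points for experimentation, not finished flute dimensions:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20°C

def open_tube_length(freq_hz: float) -> float:
    """Length (m) of an idealized open-open tube whose fundamental is freq_hz.

    For an open tube the fundamental wavelength is twice the tube length,
    so L = v / (2f). Real flutes need end corrections, so this is only
    a rough starting point.
    """
    return SPEED_OF_SOUND / (2.0 * freq_hz)

def scale_lengths(root_freq_hz: float, semitones: int = 12) -> list[float]:
    """Effective tube lengths for each semitone above a root note.

    Each semitone multiplies the frequency by 2^(1/12), which divides the
    required effective length by the same factor — roughly where a tone
    hole would need to "shorten" the tube to.
    """
    return [open_tube_length(root_freq_hz * 2 ** (n / 12))
            for n in range(semitones + 1)]

if __name__ == "__main__":
    # A4 = 440 Hz wants an ideal open tube of roughly 39 cm
    print(f"A4 tube: {open_tube_length(440):.3f} m")
```

The awkward property for a “universal flute” falls out of the math: every semitone of the root note demands a slightly different effective tube length, which is exactly why you’d otherwise need 12 differently sized flutes.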
For the past 10 years I lived in rented apartments, usually at the 3rd or 4th story with no access to a courtyard. I was never able to get used to that, given that all my childhood I lived and played in a 2000m² courtyard, on a road where there were more slow horse carriages than noisy cars.
This year I moved into a rented house with a tiny but welcoming garden and a bit of paved courtyard, and only now do I notice the effect this has had on my mind and behaviour.
I develop macOS apps for a living, and there are some unhealthy things in this field that piled up over the years. I get a lot of messages in a demanding and negative tone, and because walking outside the apartment meant unbearable car noise, obnoxious smells and zero privacy, I always defaulted to simply acting on the feedback, putting up with it and working long hours into the night, instead of going for a walk to calm down.
A few months ago, the most absurd demands started coming up for my apps: things like “why does your app not control the volume of my <weird sound device>? why don’t you just do it, people pay you for it” when the app in question is Lunar, an app for controlling monitor brightness, not sound devices.
Or “why do you disable your apps from working on Windows?”, or “make Clop compress text and copy it to clipboard” (where Clop is my app that automatically compresses copied images, videos and PDFs, I have no idea what compressing text even means in that context).
But this time, I was able to simply walk out the front door, grab a branch of beech wood and, because I remembered my wife saying we forgot to pack the French rolling pin when moving, take out my pocket knife and start carving a simple rolling pin for her. It was so liberating to be able to just ignore those messages for a while and do something with my hands.
I understand that those people don’t know better, and they would have no idea that there’s no checkbox where you can choose whether an app works on macOS, Windows or Linux. I understand how if the app does something with audio volume or compression, some think that it should do everything related to those workloads, even if it’s completely outside the scope of the app.
But the combination of the negative tone and getting message after message, some people being so persistent that they insist on sending me those messages through all possible mediums (email, Discord, Twitter, contact form, they’ll find me everywhere), makes it hard to just ignore them.
There’s also this oily smell of AI and machine learning in the tech atmosphere, where I no longer feel relevant and I seem to have stopped caring about new tech when I noticed that 8 in 10 articles are about some new LLM or image generation model. I guess I like the smell of wood better.
I know I’m privileged to even have the choice of what to do with my time. I got lucky when I chose a computer science university at the right time, which allowed me to progress towards a huge semi-passive income in the last 10 years. That doesn’t mean I didn’t work my ass off, but luck plays a huge role too.
I got “lucky” to have my mind traumatised into some kind of OCD-like state where I hate leaving a thing unfinished. So I plow through exhaustion, skip meals, miss house chores and annoy dear people around me because I know “I just need to fix this little thing” and I’ll finish this app/feature/task I started. Even though I also know there’s no real deadline and I can leave it half-finished for now.
But even if it sounds annoying for a person like me to whine about how I don’t feel good or I feel burnt out, the privilege doesn’t negate the feelings. The regression to the norm will make everyone, rich or poor, get used to the status quo and complain about every thing that’s just a little worse than their current state. That’s happiness and sadness in a nutshell.
I’m also vaguely aware that software dev as we know it is about to disappear soon, and I got tired of learning the newest thing just to have it replaced next year. I got tired of back pain and chronic finger pain from so many hours of sitting and typing, I’d rather have pain from work that also builds some muscle.
And I got so tired of everything being online, immaterial, ephemeral and lonely, like indie development tends to be.
This house we rented is small and the owners had to fit the bedroom upstairs. I really don’t like climbing stairs up and down, especially when I have to let my dog out three times per night. So we gave up a room and started furnishing our own bedroom downstairs.
I didn’t want to buy bedside tables for the price of the bed itself, so I thought I could maybe make my own. I’m not yet skilled enough to build my own bed though, so we bought that.
One day, while walking with my dog, I noticed that some trees were getting trimmed in the vicinity of our house and there were a lot of white birch branches on the side of the road. I said “why not?”, grabbed some branches and walked like a lunatic with white long sticks dangling up and down and a black dog zig-zagging left and right, all the way home.
I had another small pine panel left from that chess project so I started thinking about the simplest way to turn what I have into a bedside table.
I used low-grit sandpaper to give the board some nice round corners because I love squircles, swallowed about a spoonful of sawdust because I couldn’t find any breathing mask left, criss-crossed 4 branches in a way that would give a stable base, and screwed them to the underside of the board with long wood screws.
The legs would wobble around though, so I drilled small 3mm holes into each branch where they met in the middle, and weaved a florist wire through them to keep the table steady.
After I showed the bedside table to a friend of mine, he said he also needed a laptop table for those mornings when he’d rather not get out of bed. I wanted to say that’s not very healthy, but what came out instead was “sure thing, I’ll do it!”. Oh well…
I still had the large desk top I glued up from smaller beech boards, on which I worked for the past 4 years. It was sitting unused, so I cut part of it and built this cute thing:
You’ll notice three defining features that every laptop table should have:
To tell the truth, all those are side effects of me drilling holes where there should be no hole, and dropping the board on the ground multiple times because my workbench was not large enough. All the things that could go wrong, went wrong with this table.
I hid the defects by turning them into features.
The whole truth actually is that the table looks nothing like what I planned. I bought these nice hidden brass cylindrical hinges to make the table foldable. That way, you could fold the sides flat inside and use it as some kind of armchair desk if you wanted.
I wasn’t able to drill the correctly sized or positioned holes for the hinges because I still lack a lot of knowledge and skill in working with wood. So after losing my temper with the frickin’ hinges that still didn’t fit after a full day of drilling and chiseling, I glued the sides and inserted 2 trusty long wood screws per side, which I patched with a glue gun that made the screw holes look like eyes.
After I also carved the handles, the table grew kind of a personality of its own, as you can see above.
Why didn’t I do some wood joints, like a dovetail instead of ugly screws and glue?
Because I had no idea they existed. Also, I wasn’t able to fit a simple hinge, I would have probably never finished this table if I tried learning wood joinery on it.
This reminds me of how, whenever I did pair programming with a colleague and noticed them doing some “nonoptimal” action, I would say:

Why don’t you just use ripgrep instead of sifting through all these files?

Because they don’t know it exists, stupid. Or because they just want to get this thing done and move on; they don’t grep files all day like you do.
But in my ignorance, I seem to have chosen a good enough joining method. As you can see in this wood joinery comparison, 5cm (2 inch) screws can hold more than 50kg (110lbs) of force, and I used even longer screws, so I think it’s going to hold a 3kg laptop just fine.
Oh right, forgot about this little detail.. I also added a cork pocket for holding a notebook, tablet, phone etc. which I lined with a microfiber cloth on the inside for strength and sewn to the wood with that leftover alpaca wool for style.
While we were stuck in the apartment during the 2020 pandemic, my wife and I bought a lot of stuff that we thought would help us learn new things and start new hobbies. I thought I was going to build smart LED lighting all my life and my wife would become a professional wool knitter. We were losing our minds, for sure.
So now we were stuck with crates of stuff we hadn’t used in years, and didn’t want to start unpacking them around the house. The clutter that followed the pandemic tired our minds just as much as the lockdown itself.
We dumped the crates on an unused stairway spot, and I thought that a bookshelf as large as that spot would clear the clutter.
But I could not find any bookshelf that large, certainly not for cheap. So I traced a few lines in Freeform, took some measurements, and ordered a bunch of large pine boards and a ton of long screws.
I also ordered the cheapest portable workbench I could find ($30) that had a vise, so I could stop making sawdust inside.
A few days later, I got to sawing the shelves with my cheap Japanese pull saw I bought from Lidl years ago.
Hint: hand sawing a long wood board with no skill will certainly leave you with a crooked edge. Stacking 5 boards one on top of the other will still end up crooked.
Uhm, I guess the hint is: buy a track saw, or make sure the crooked edge isn’t visible.
My wife helped a lot with measuring and figuring out where to drill holes and place the screws, while my dog inspected the work regularly to make sure the defects were hidden correctly.
It took two days of screwing.. erm.. driving screws, I mean. But in the end we got the result we wanted! And I got sores in my right arm for days; driving those long screws is harder than I thought.
In the thumbnail of this post you can see the current “workbench” I use, which is basically that $30 vise workbench I bought for the bookshelf, with the top of my previous “coding desk” attached in the front.
In the image you can see (bottom-left to top, then right):
I also own some no-name chisels that work well enough and some card scrapers that I still struggle sharpening.
The only power tools I have are a Makita drill and a random orbit sander on which I did spend some money, an old circular saw I found in that old shed (it was good enough to cut miters on that laptop table) and a Dremel I use rarely because I don’t like its power cord. I prefer battery powered tools.
Our dog Cora loves sitting at the window, growling at old people and barking at children passing by. Yeah, she’s terrified of children for some reason.
But the window sill is not wide enough, and her leg kept falling with a “clang” on the radiator below. So I widened it by placing two glued-up pine boards, which I had planed and smoothed beforehand, on top of the radiator.
This is when I learned that a hand plane is not some antique tool that nobody uses anymore, but a quite versatile piece that can easily smooth grain where I would otherwise waste 5 sheets of sandpaper and choke on sawdust.
I still had to let the heat radiate somehow, so I drilled large holes with a Forstner bit, but I also blew out the grain fibers on the underside because I had no idea this could happen. Turns out there is a simple solution for drilling large holes without ripping the fibers: drill from one side only until the tip of the bit pokes through, then flip the board and finish the hole from the other side.
We also wanted to sit with Cora, and there was not much space between the bed and the radiator, so I built a narrow bench. I used another two pine boards of the same size, but this time glued them edge to edge to create a wider board.
For the legs: the tree trimming continued near us, and one day I found some thick cherry branches which I brought home, scraped the bark off them, then attached them to the bench with screws driven from the top side.
I was ok with a rustic look so I didn’t spend much on finishing, patching holes, or even proper wood drying. I did use the hand plane to chamfer the edges though, I love taking those thin continuous wood shavings from the edge.
We recently visited my parents and loved how the grass had finally started growing in the spots where the renovation of their house and courtyard was finished and no longer spewing cement dust. It was an abnormally sunny April, and I wanted to chat with them over coffee outside in the early morning before they started the field work, but there was nowhere to set the coffee down outside.
First world problems right? If you read about The tail end, you might already understand why a trivial thing like coffee time with my parents feels so important to me.
So one day, while walking on a gravel road near their house, I noticed one neighbour had these huge logs of beech that were recently cut. I thought that would be easy to turn into a small exterior coffee table, so I went to ask if I could buy one.
Well, I kind of had to yell “HELLO!” at their gate because I didn’t know their name, and did that a few times until a seemingly sleepy old man appeared at the front door (it was 5 PM) asking what I wanted. I asked how much he’d want for one of those logs, but he just said to take one, no money needed. Ok, there’s no point in insisting. I chose a wide enough but not too wide log, because these things are heavy and I wasn’t sure I could lift it, and rolled it slowly back home.
I didn’t have my usual tools at my parents’ house, so I improvised. I found a battered cleaver which my dad used for chopping kindling for the barbecue. I sharpened it as well as I could, then used a hammer to roll a burr on the back of the cleaver that I could use for scraping.
Beech has such smooth, hard wood under the bark that it didn’t even need sanding. I used my dad’s power planer to smooth out the top and make a quasi-flat surface, then finished it with some walnut oil and it was (almost) ready!
Because the wood was so green, it was certain that it would crack and roughen as it dried. So I cut a groove and wrapped a flat iron band around the top to keep it from moving too much. The bottom can expand as much as it wants; I’m actually quite curious to watch the table morph throughout the summer as we use it.
Because we were born in villages that aren’t that far apart, my wife and I always visit both our parents in the same trip. This time, when I got to my parents-in-law, I took a stroll through their little orchard. They added new trees this year! I can’t wait to taste the large apricots.
What struck me as odd about the orchard was that there was no patch of grass to lie on. They like digging up the soil every year and leaving it like that: an arid-looking patch of land made of dry dirt boulders. I thought a bench would be a good solution, and what do you know, there was an old broken door thrown in the firewood pile just outside the orchard that had the perfect length and width for a bench.
I forgot to take a photo of the door, but it looked kind of like this one, only worse and with a large rhomboid ◊ hole at the top.
I got to work immediately, dismantling the door piece by piece and pulling out nail after nail (they really liked their nails in the old days). I was left with two long and narrow wooden boards, a pile of rotten wood and two pocketfuls of rusted nails.
I sawed off the broken ends of the boards, then used my father-in-law’s power planer to remove the old gray wood from the top, bottom and sides to get to the fresh wood below. There were a lot of holes and valleys, so I had to scrape them by hand with sandpaper rolled around a screwdriver. This took a few more days than I expected, but I eventually got two cleanish boards of.. fir? pine? No idea.
I used a velcro sandpaper attachment for the battery powered drill to sand out the rotten sides and give the boards a curvy and smooth live edge.
For the legs, I stole some more firewood from their pile, where I found some thick branches of unidentified species that were roughly the same length. Stripping the bark with an axe made them look good enough, so I screwed them to the four corners of the board. The bench was wobbly with just the legs, so I braced it sideways by adding shorter and thinner branches of more unidentified wood between the legs and the center of the board.
I had to do something with the rhomboid ◊ hole, so I filled it with a square 4-by-4 salvaged from a recently dismantled shed, and now the bench has 5 legs. Instead of sawing the leg to size, I left it protruding above the bench and placed another thick salvaged board on top of it to serve as an arm rest, or coffee table, or a place for the bowl of cherries.
For the finish, I charred the bench and the bottom of the legs to get a honey-brown aspect and make the wood water resistant. I put on a very thin layer of whatever wood lacquer I found in my in-laws’ shed, just for durability, because I don’t like glossy wood.
We don’t have much space on the current eating table, so I built a two-shelf stand where we can place the ever-present water filter jug and the glasses, freeing up some of the center space.
It’s incredible how strong just a few screws can be.
I thought I should finally do something about the kavals always rolling around on some table or couch throughout the house, so I made a stand from long thin wood boards glued edge to edge, and finished it with sunflower oil to give it a golden/orange colour.
This way I can always expand it by adding more boards to the side if I want to add more flutes.
I need to sharpen blades almost daily, be it the pocket knife, axe, plane blade or chisels. So I made a custom sharpening block with the perfect tools for my sharpening technique.
It has a $5 diamond plate with 600
grit on one side and a $5 leather strop (a piece of leather belt might work just as well) on the other side. I attached the leather with two small screws at the top so I can take it out easily if I need a flexible strop for my carving gouge for example. It is loaded with diamond paste which can be found for cheap at gemstone cutting online stores (the knife-specific pastes are a lot more expensive and I’m not sure why).
To be honest, a $0.5 green compound (chromium oxide) works just as well for stropping; that’s what I used before and still use for my detail carving knives. It gives a smoother edge than the diamond, the disadvantages being that it needs to be re-applied to the leather more often and that you need a few more blade passes to get the same result. The diamonds seem to cut faster, but really not much faster.
I went through all the phases with sharpening tools. I’ve used water stones, natural stones, ceramic stones, pull-through carbide sharpeners (don’t use these), powered belt sharpeners, wheel sharpeners.
Aside from the pull-through sharpeners and the steel rods, all the others work just as well with the right technique. I settled on the diamond plate because they’re cheap, stay flat, need zero maintenance, and can cut through any type of metal. Paired with a leather strop, for me it’s the simplest way to sharpen.
I recommend this OUTDOORS55 video for a no-bullshit sharpening tutorial and the Science of Sharp blog if you’re curious what the different sharpening techniques do to an edge under a microscope.
2023-08-08 15:14:26
It was the spring of 2014, over 9 years ago, just 6 months into my first year of college, when my Computer Architecture teacher stopped in the middle of an assembly
exercise to tell us that Bitdefender is hiring juniors for Malware Researcher positions.
I had no idea what that was, but boy, did it sound cool…
I fondly remember how at that time we weren’t chasing high salaries and filtering jobs by programming languages and frameworks. We just wanted to learn something.
As students, we needed money as well of course, but when I got the job for
1750 lei
(~€350), I suddenly became the richest 18 year old in my home town, so it wasn’t the top priority.
And we learnt so much in 2 years.. obscure things like AOP, a lot of x86 assembly, reverse engineering techniques which dumped us head first into languages like Java, .NET, ActionScript? (malware authors were creative).
But most of all, we did tons of Python scripting, and we loved every minute of it. It was my first time getting acquainted with fast tools like Sublime Text and FAR Manager. Coming from Notepad++ and Windows Explorer, I felt like a mad hacker with the world at my fingertips.
I’m known as a macOS app dev nowadays, but 9 years ago I actually started by writing petty Python scripts, which spurred the obsessive love I have nowadays for clean, brace-free code and indentation-based languages.
What does all that have to do with static websites though?
Well, 5 years ago, when I launched my first macOS app, I found myself needing to create a simple webpage to showcase the app and at the very least, provide a way to download it.
And HTML I did not want to write. The XML like syntax is something I always dreaded, so overfilled with unnecessary </>
symbols that make both writing and reading much more cumbersome. I wanted Python syntax for HTML so I went looking for it.
I went through pug…
|
|
pretty, but still needs ()
for attributes, and I still need braces in CSS and JS
then haml…
|
|
even more symbols: %
, :
, =>
and /
for self-closing tags
…and eventually stumbled upon Slim and its Python counterpart: Plim
|
|
ahhh.. so clean!
Here’s how that example would look if I had to write it as HTML:
|
|
not particularly hard to read, but writing it would need a lot of Shift-holding and tag-repeating
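To show why this style appeals to me, here’s a toy sketch of the idea behind these indentation-based languages. This is not Plim’s actual parser, just a minimal illustration where each line is “tag text” and deeper indentation means a nested element:

```python
# Toy illustration of indentation-based markup (NOT the real Plim parser):
# each line is "tag text", deeper indentation means a nested element.
def to_html(src: str) -> str:
    out, stack = [], []  # stack holds (indent, tag) of currently open elements
    for line in src.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        tag, _, text = line.strip().partition(" ")
        while stack and stack[-1][0] >= indent:  # close siblings / deeper tags
            out.append(f"</{stack.pop()[1]}>")
        out.append(f"<{tag}>{text}")
        stack.append((indent, tag))
    while stack:  # close everything still open
        out.append(f"</{stack.pop()[1]}>")
    return "".join(out)

print(to_html("html\n  body\n    h1 Hello"))
# → <html><body><h1>Hello</h1></body></html>
```

The real languages add attributes, filters and inline code on top, but the core win is the same: the indentation *is* the closing tag.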
The thing I like most about Plim, and why I stuck with it, is that it can parse my other favorite symbol-hating languages without additional configuration:
- stylus inside style tags
- coffeescript inside script tags

Here’s a more complex example to showcase the above features (might require sunglasses):
an example of writing an HDR page section, similar to the one on lunar.fyi
|
|
And best of all, there is no crazy toolchain, bundler or dependency hell involved. No project structure needed, no configuration files. I can just write a contact.plim
file, compile it with plimc
to a readable contact.html
and have a /contact
page ready!
So that’s how it went with my app: I wrote a simple index.plim
, dropped it on Netlify and went on with my day.
The app managed to get quite a bit of attention, and while I kept developing it, for the next 4 years the website remained the same heading - image - download button single page. It was only a side project after all.
Working for US companies from Romania paid good money, but it was so tiring to get through 3h of video meetings daily: standups, syntax nitpicking in PR reviews, SCRUM bullshit, JIRA, task writing, task assigning, estimating task time in T-shirt sizes??
In April 2021 I finally got tired of writing useless code and selling my time like it was some grain silo I could always fill back up with even more work…
I bet on developing my app further. Since my college days, I have always chosen the work that helps me learn new concepts. At some point I had to accept that I had learnt enough and had to start sharing. This time I really wanted to write software that helped people, and I was willing to spend my savings on it.
A more complete app also required a more complete presentation website, but the styling was getting out of hand. You would think that with flexbox
and grids
, you can just write vanilla CSS these days, but just adding a bit of variation requires constant jumping between the CSS and HTML files.
A presentation page is usually only 10% HTML markup. The rest is a ton of styling and copy text, so I wanted to optimize my dev experience for that.
There’s no “go to definition” on HTML .classes
or #ids
because their styles can be defined ✨anywhere✨. So you have to Cmd-F like a madman and be very rigorous about your CSS structure.
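That missing “go to definition” can at least be faked with a small script. A hedged sketch (the css/ folder default is an assumption for illustration, not a universal convention):

```python
# Crude "go to definition" for CSS classes: scan a stylesheet folder and
# return (file, line) for each place a class appears to be defined.
# The default "css" directory is an assumption for this sketch.
import re
from pathlib import Path

def find_class(name: str, css_dir: str = "css") -> list:
    hits = []
    # match ".name" followed by a delimiter, so "center" won't match ".center-x"
    pattern = re.compile(rf"\.{re.escape(name)}[\s,{{.:#]")
    for css in Path(css_dir).rglob("*.css"):
        for lineno, line in enumerate(css.read_text().splitlines(), 1):
            if pattern.search(line):
                hits.append((str(css), lineno))
    return hits
```

It won’t understand preprocessors or minified bundles, but for a handful of hand-written stylesheets it beats Cmd-F in every file.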
The controversial but very clever solution to this was Tailwind CSS: a large collection of short predefined classes that mostly style just the property they hint at.
For example in the first code block I had to write a non-reusable 5-line style to center the body contents.
|
|
With Tailwind, I would have written the body
tag like so:
|
|
That might not seem like much; some would argue that it’s even less readable than the CSS version. Can’t I just define a .center class that I can reuse?
Well, think about a few things:
- .md:flex-row.flex-col is what you would write in Tailwind.
- .dark:bg-white.bg-black looks simple enough.
- .shadow.hover:shadow-xl creates a lift off the page effect on hover by making the shadow larger.
- .blur.active:blur-none un-blurs an element when you click on it.
- .bg-red-500.text-white sets white text on saturated red; red-100 is less saturated, towards white, while red-900 is darker, towards black.

Sure, long lines of classes might not be so readable, but neither are long files of CSS styling. At least the Tailwind classes are right there at your fingertips, and you can replace a -lg with an -xl to quickly fine-tune your style.
So many people obsess over the size of their JS or CSS, but fail to realize that the bulk of their page is unnecessarily large and not well compressed images.
Of course, I was one of those people.
For years, my app’s website had a screenshot of its window as an uncompressed PNG, loading slowly from top to bottom and chugging the user’s bandwidth.
I had no idea, but screenshots and screen recordings are often up to 10x larger than their visually indistinguishable compressed counterparts.
I even wrote an app to fix that since I’m constantly sending screenshots to people and was tired of waiting for 5MB images to upload in rapid chats.
It’s called Clop if you want to check it out.
Yes, just like that famous ransomware; it wasn’t that famous back when I named the app.
I needed a lot more images to showcase the features of an app controlling monitor brightness and colors, so I had to improve on this.
Delivering the smallest image necessary to the user is quite a complex endeavour: resizing, optimizing, and encoding to webp, avif or JPEG XL for the smallest file size.

I did so much of that work manually in the past… thankfully nowadays I have imgproxy to do the encoding, optimization and resizing for me.
I just have to write the srcset, for which I defined Plim and Python functions to do the string wrangling for me.
|
|
|
|
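For illustration, here’s the kind of string wrangling such a helper does. The /img/w:&lt;width&gt; URL scheme below is invented for this sketch; imgproxy’s real URL format differs:

```python
# Hypothetical srcset helper: build the srcset attribute value for an
# image served through a resizing proxy. The /img/w:<width> URL scheme
# is made up for illustration; it is not imgproxy's actual format.
def srcset(path: str, widths=(320, 640, 1280, 1920)) -> str:
    return ", ".join(f"/img/w:{w}{path} {w}w" for w in widths)

print(srcset("/screenshot.png", (640, 1280)))
# → /img/w:640/screenshot.png 640w, /img/w:1280/screenshot.png 1280w
```

The template then just calls the helper inside the img tag’s attributes, and the browser picks the smallest candidate that still looks sharp on the user’s screen.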
After 2 weeks of editing the page, Cmd-Tab to the browser, Cmd-R to refresh, I got really tired of this routine.
I worked with Next.js before on Noiseblend and loved how each file change automatically gets refreshed in the browser. Instantly and in-place as well, not a full page refresh. I got the same experience when I worked with React Native.
There should be something for static pages too, I thought. Well it turns out there is, it’s called LiveReload and I had to slap my forehead for not searching for it sooner.
After installing the browser extension, and running the livereloadx --static
file watcher, I got my hot reloading dev experience back.
Actually now that I think about it, Hugo has super fast hot reloading, how does it accomplish that? Yep, turns out Hugo uses LiveReload as well.
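The mechanism is simple at heart. Here’s a polling sketch of the file-watching half; LiveReload itself uses native filesystem events and notifies the browser over a websocket, so this is just the idea, not its implementation:

```python
# Minimal polling file watcher: detect files whose mtime changed since
# the previous scan. Real tools use native FS events; this is just the idea.
import time
from pathlib import Path

def scan(directory: str, mtimes: dict) -> list:
    """Return files changed since the previous scan; update the mtime cache."""
    changed = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        if path in mtimes and mtimes[path] != mtime:
            changed.append(path)
        mtimes[path] = mtime
    return changed

def watch(directory: str, on_change, poll_seconds: float = 0.5):
    mtimes: dict = {}
    scan(directory, mtimes)  # baseline snapshot
    while True:
        time.sleep(poll_seconds)
        for path in scan(directory, mtimes):
            on_change(path)  # e.g. recompile the page, tell the browser to reload
```

Hook on_change up to the compiler and a browser-reload signal and you have the whole edit-save-refresh loop automated.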
After releasing the new app version, many things broke, as expected.
People tried to reach me in so many ways: GitHub issues, personal email, through the app licensing provider, even Facebook Messenger. I had no idea that providing an official way of contact would be so vital.
And I had no idea how to even do it. A contact form needs, like, a server to POST
to, right? And that server needs to notify me in some way, and then I have to respond to the user in some other way… sigh
I thought about those chat bubbles that a lot of sites have, but I used them on Noiseblend and did not like the experience. Plus I dislike seeing them myself, they’re an eyesore and a nuisance obscuring page content and possibly violating privacy.
After long searches, not sure why it took so long, I stumbled upon Formspark: a service that gives you a link to POST
your form to, and they send you an email with the form contents. The email will contain the user’s address in ReplyTo
so I can just reply normally from my mail client.
|
|
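Under the hood, that’s all a contact form is: one POST request with the field values. A sketch of the equivalent request in Python; the form ID in the URL is a placeholder, not a real endpoint:

```python
# What a Formspark-style form submission boils down to: a single POST
# with URL-encoded fields. The form ID in the URL is a placeholder.
from urllib.parse import urlencode
from urllib.request import Request

fields = {"email": "user@example.com", "message": "The app broke, help!"}
req = Request(
    "https://submit-form.com/your-form-id",
    data=urlencode(fields).encode(),
    method="POST",
)
# urlopen(req) would actually send it; the service then emails me the
# fields, with the user's address in ReplyTo.
```

No server of my own, no database; the browser’s plain form element does exactly this without any JavaScript.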
None, I guess. I just hope that the prolific but solo Formspark dev doesn’t die or get kidnapped or something.
It’s not. Really. It’s crazy what I had to go through to get to a productive setup that fits my needs.
One could say I could have spent all that time writing vanilla HTML, CSS and JS and would have had the same result in the same amount of time. I agree, if time were all that mattered.
But for some people (like me), feeling productive, seeing how easy it is to test my ideas, and how code seems to flow from my fingertips at the speed of thought is what decides whether I’ll ever finish and publish something, or lose my patience and fall back to comfort zones.
Having to write the same boilerplate code over and over again, constant context switching between files, jumping back into a project after a few days and not knowing where everything was in those thousand-line files.. these are all detractors that will eventually make me say “f••k this! at least my day job brings money”.
So many JS frameworks were created in the name of reusable components, but they all failed for me.
I mean sure, I can “npm install” a React calendar, and I am now “reusing” and not “reimplementing” the hard work of someone better than me at calendar UIs. But just try to stray away a little from the happy path that the component creator envisioned, and you will find it is mind-bendingly hard to bend the component to your specific needs.
You might raise a Github issue and the creator will add a few params so you can customize that specific thing, but so will others with different and maybe clashing needs. Soon enough, that component is declared unwieldy and too complex to use, the dev will say “f••k this! I’d rather do furniture” and someone else will come out and say: here’s the next best thing in React calendar libraries, so much simpler to use than those behemoths!
I never had this goal in mind but unexpectedly, the above setup is generic enough that I was able to extract it into a set of files for starting a new website. I can now duplicate that folder and start changing site-specific bits to get a new website.
Here are the websites I’ve done using this method:
And the best thing I remember is that for each website I published a working version, good looking enough, with a contact page and small bandwidth requirements, in less than a day.
How does this solve the problem of straying away from the happy path? Well, this is not an immutable library residing in node_modules, or a JS script on a CDN. It is a set of files I can modify to the site’s needs.
There is no high wall to jump (having to fork a library, figuring out its unique build system etc.) or need to stick to a specific structure. Once the folder is duplicated, it has its own life.
For those interested, here is the repo containing the current state of my templates: github.com/alin23/plim-website
I don’t recommend using it, it’s possible that I’m the only one who finds it simple because I know what went into it. But if you do, I’d love to hear your thoughts.
Weirdly, this website I’m writing on is not made with Plim. At some point I decided to start a personal website, and I thought it probably needs a blog-aware site builder.
At the time, I didn’t know that RSS is an easily templatable XML file, and that all I need for a blog is to write Markdown.
I remember trying Gatsby and not liking the JS ecosystem around it. Jekyll was my second choice with Github Pages, but I think I fumbled too much with ruby
and bundle
to get it working and lost patience.
Both problems stemmed from my lack of familiarity with their ecosystems, but my goal was to write a blog, not learn Ruby and JS.
Hugo seemed much simpler, and it was also written in Go and distributed as a standalone binary, which I always like for my tools.
I marveled at Hugo’s speed, loved the fact that it supports theming (although it’s not as simple as it sounds) and that it has a lot of useful stuff built-in like syntax highlighting, image processing, RSS generator etc. But it took me sooo long to understand its structure.
There are many foreign words (to me) in Hugo: archetypes, taxonomies, shortcodes, partials, layouts, categories, series. Unfortunately, by the time I realized that I don’t need the flexibility that this structure provides, I had already finished this website and written my first article.
I also used a theme that uses the Tachyons CSS framework, for which I can never remember the right class to use. I thought about rewriting the website in Plim but converting everything to Tailwind or simple CSS would have been a lot of work.
I eventually started writing simple Markdown files for my notes, and have Caddy convert and serve them on the fly. Helps me write from my phone and not have to deal with Git and Hugo.
I still keep this for longform content, where a laptop is usually needed anyway.
2023-01-17 21:16:13
You just got a large, Ultrawide monitor for your MacBook. You hook it up and marvel at the amount of pixels.
You notice you never use the MacBook built-in display anymore, and it nags you to have it in your lower peripheral vision.
Closing the lid is not an option because you still use the keyboard and trackpad, maybe even the webcam and TouchID from time to time. So you try things:
- Brightness Down.. again, I can live with that
- Cmd + Brightness Down
Why isn’t there a way to actually disable this screen?
Because a lot of users of my 🌕 Lunar app told me about their grievances with not being able to turn off individual displays in software, I went down the rabbit hole of display mirroring and automated all of the above.
Now someone can turn off and on any display at will using keyboard shortcuts, and can even automate the above MacBook + monitor workflow to trigger when an external monitor gets connected and disconnected.
But it still nags me that macOS can somehow disable the internal screen completely, yet we’re stuck with this zero-brightness-mirroring abomination.
When closing the MacBook lid while a monitor is still connected, the internal screen disappears from the screen list and the external monitors remain available.
This function is called clamshell mode in the laptop world. Congratulations, your $3000 all-in-one computer is now just an SoC with some USB-C ports. Ok, you also get the speakers and the inefficient cooling system.
In the pre-chunky-MacBook-Pro-with-notch era, the lid was detected as closed using magnets in the lid and some hall-effect sensors. So you could trick macOS into thinking the lid was closed by simply placing two powerful magnets at its sides.
With the new 2021 design, the MacBook has a hinge sensor that can detect not only whether the lid is closed, but also the angle of closing. Magnets can’t trick ’em anymore.
But all these sensors will probably just trigger some event in software, where a handler will decide if the display should be disabled or not, and call some disableScreenInClamshellMode
function.
So where is that function, and can we call it ourselves?
Since Apple Silicon, most userspace code lives in a single file called a DYLD Shared Cache. Since Ventura, that is located in a Cryptex (a read-only volume) at the following path:
/System/Cryptexes/OS/System/Library/dyld/dyld_shared_cache_arm64e
Since that file is mostly an optimised concatenation of macOS Frameworks, we can extract the binaries using keith/dyld-shared-cache-extractor:
|
|
Let’s extract the exported and unexported symbols in text format to be able to search them easily using something like ripgrep.
I’m using /usr/bin/nm
with fd’s -x
option to take advantage of parallelisation. I like its syntax more than parallel
’s since it has integrated interpolation for the basename/dirname of the argument (note the {/}
)
|
|
Searching for clamshell
gives us interesting results. The most notable is this one inside SkyLight:
|
|
SkyLight.framework
is what handles window and display management in macOS, and it usually exports enough symbols that we can use it from Swift, so I’m inclined to follow this path.
Let’s see if the internet has anything for us. I usually search for code on SourceGraph as it has indexed some large macOS repos with dyld dumps. Looking for RequestClamshellState
gives us something far more interesting though:
Looks like Apple open sourced the power management code, nice! It even has recent ARM64 code in there, are we that lucky?
Here’s an excerpt of something relevant to our cause:
|
|
So it’s instantiating an SLSDisplayPowerControlClient
then calling its requestStateChange
method. SLS
is a prefix related to SkyLight (probably standing for SkyLightServer), let’s see if we have that code in our version of the framework.
I prefer to do that using Hopper and its Read File From DYLD Cache feature which can extract a framework from the currently in-use cache:
Ok the class and methods are there, let’s look for what uses them. Since it’s most likely a daemon handling power management, I’ll look for it in /System/Library
.
And it looks like powerd
is what we’re looking for, containing exactly the code that we saw on SourceGraph.
|
|
To link and use SLSDisplayPowerControlClient
we need some headers, as Swift doesn’t have the method signatures available.
Looking for SLSDisplayPowerControlClient
on SourceGraph gives us more than we need.
Let’s create a bridging header so that Swift can link to Objective-C symbols, and a Swift file to where we’ll try to replicate what powerd
does.
|
|
|
|
|
|
To compile the binary using swiftc
we have to point it to the location of SkyLight.framework which is located at /System/Library/PrivateFrameworks
.
We then tell it to link the framework using -framework SkyLight
and import our bridging header. Then we run the resulting binary.
I prefer to run this using entr
to watch the files for changes. With the code editor on the left and the terminal on the right, I can iterate and try things faster by just editing and saving the file, then watch the output on the right.
|
|
Well.. it’s not working. The error is not helpful at all, there’s nothing on the internet related to it.
|
|
Maybe the system log has something for us. One can check that using Console.app but I prefer looking at it in the Terminal through the /usr/bin/log
utility.
|
|
Something from AMFI about the binary signature. CMS stands for Cryptographic Message Syntax, which is what codesign
adds to a binary when it signs it with a certificate.
|
|
I have Gatekeeper disabled and I’m running the binary from a terminal that’s added to the special Developer Tools section of Security & Privacy, so this shouldn’t cause any problems.
I checked just to be sure, and signing it with my $100/year Apple Developer certificate gets rid of the CMS blob
error but doesn’t change anything in the result.
Some system capabilities can only be accessed if the binary has been signed by Apple and has specific entitlements. Checking powerd’s entitlements gives us something worrying.
The binary seems to use com.apple.private.* entitlements. This usually means that some APIs will fail if the required entitlements are not present.
We can try to add the entitlements ourselves. We just need to create a plist file and use it in codesign:
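The entitlements file is an ordinary plist; here’s a sketch with a placeholder key, since the exact com.apple.private.* entitlements powerd holds aren’t reproduced in this post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- hypothetical placeholder: substitute the entitlements that
         `codesign -d --entitlements - /usr/libexec/... (powerd's path)` prints -->
    <key>com.apple.private.example-entitlement</key>
    <true/>
</dict>
</plist>
```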
Sign the binary with entitlements and run it:
Looks like we’re getting killed instantly. The log stream shows AMFI is doing that because we’re not Apple and we’re not supposed to use that entitlement.
What’s this AMFI exactly and why is it telling us what we can and cannot do on our own device?
The acronym stands for Apple Mobile File Integrity, and it’s the process enforcing code signatures at the system level.
By default, the OS locks these private APIs because if we were able to use them, malware or a bad actor would be able to as well. With them locked by default, malware authors are deterred from trying to use these APIs on targets of lower importance, since doing so would usually require a 0-day exploit.
In the end it’s just another layer of security, and in the rare case someone needs to bypass it, Apple provides a way to do it. The process involves disabling System Integrity Protection and adding amfi_get_out_of_my_way=1 as a boot arg.
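For reference, the commands are roughly these (a sketch; csrutil has to be run from the Recovery environment):

```shell
# 1. reboot into Recovery and run:
csrutil disable
# 2. back in macOS, set the boot-arg:
sudo nvram boot-args="amfi_get_out_of_my_way=1"
# 3. reboot for the boot-arg to take effect
```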
I don’t recommend doing this as it puts you at great risk: the system volume is no longer read-only, and code signatures are no longer enforced.
I only keep this state for research that I do in short periods of time, then turn SIP back on for normal day to day usage.
In case you need to revert the above changes:
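The revert is the mirror image (again a sketch; csrutil runs from Recovery):

```shell
sudo nvram -d boot-args   # delete the AMFI boot-arg
# then reboot into Recovery and run:
csrutil enable            # turn SIP back on
```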
Unfortunately, even after disabling AMFI, we’re still encountering the CoreGraphicsError 1004. Granted, AMFI is not complaining about the entitlements anymore: they’re accepted and the binary is not SIGKILLed.
But we still can’t get into clamshell mode using just software.
If you haven’t heard of it, Frida is this awesome tool that lets you inject code into already running processes, hook functions by name (or even by address), observe how and when they’re called, check their arguments and even make your own calls.
Let me share with you another macOS boot arg that I like:
This one enables code injection. Now we can use Frida to hook the SkyLight power control methods to see how they are called as we close and open the lid:
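A quick way to get that visibility is frida-trace with an Objective-C method matcher; a sketch, where the class glob is my assumption based on the symbols named in this post:

```shell
# trace every SLSDisplayPowerControlClient method called inside powerd
sudo frida-trace -n powerd -m "*[SLSDisplayPowerControlClient *]"
```

frida-trace generates an editable handler stub per matched method, which is also where the Memory.readPointer probing later in the post would go.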
We got our confirmation at least: powerd is indeed calling SLSDisplayPowerControlClient.requestStateChange(2) when closing the lid.
Let’s check what happens when we try to call that method in Clamshell.swift.
We first add the line readLine(strippingNewline: true) at the top of the Clamshell.swift file to make the binary wait for us to press Enter. This way we have a running process that we can attach to with Frida.
Everything looks the same, seems that we’re not looking deep enough.
The request method seems to access the service property, which is an SLSXPCService. XPC Services are what macOS uses for low-level interprocess communication.
A process can expose an XPC Service using a label (e.g. com.myapp.RemoteControlService) and listen to incoming requests; other processes can connect to it using the same label and send requests.
The system handles the routing part. And the authentication part.
Looks like an XPC Service can also be restricted to specific code signing requirements. Is it possible that this is what we’re running into here?
Let’s trace the SLSXPCService methods as well using Frida:
Great! or not?
I’m not sure if I should be happy that we found that our clamshell request doesn’t work because we don’t have an XPC connection, or if I should be worried that this means we won’t be able to make this work with SIP enabled.
I guess it’s time to go deeper to find out.
Now that we have access to Frida, we can use the handy xpcspy tool to sniff the XPC communication of powerd.
I’m thinking maybe we can find the endpoint name of the XPC listener, connect to it, and send a raw message directly, instead of relying on SkyLight to do that.
So we have name = (anonymous), listener = false, pid = 30630.
An anonymous listener, can it get even worse? The PID coincides with WindowServer --daemon, so it’s definitely the message we’re also trying to send. But with an anonymous listener, we’re stuck relying on SkyLight’s exported code to reach it.
I guess we need to go back to doing some old-school assembly reading.
After renaming some sub-procedures in Hopper, looking at the graph reveals the different code paths that powerd and Clamshell are taking through SLSXPCService.reinitConnection.
In powerd’s case, the enabled and connected properties are true, so reinitConnection can reuse the existing connection. In Clamshell’s case, connection.enabled, connected and autoreconnect are false, and the call bails out with a CGError. Had the check returned true, it would go on the right-side code path, which requires that the values at offsets 0x20 and 0x28 are non-zero.
Adding some Memory.readPointer calls inside __handlers__/SLSXPCService/reinitConnection.js shows us what SkyLight is expecting to see at 0x20 and 0x28:
Two NSMallocBlocks right after the OS_xpc_connection and the OS_dispatch_queue_serial properties.
Judging by the contents of SLSXPCService.h, those are the closures for errorBlock and notificationBlock:
I’m inching closer to the good code path but I seem to never get there.
So here’s what I did so far in Clamshell.swift before calling requestClamshellState:
After calling requestClamshellState, the code crashes with SIGSEGV inside createNoSenderRecvPairWithQueue:errorHandler:eventHandler: because it branches to the 0x0 address.
Unfortunately I’m a bit lost here. I’ll take a break and hope that the solution comes in a dream or on a long walk like in those mythical stories.
The article is already longer than I’d be inclined to read so if anyone reaches this point, congrats, you have the patience of a monk.
If there are better ways to approach a problem like this one, I’d be glad to hear about it through the contact form.
I’m not always happy to learn that I’ve wasted 4 days on a problem that could have been solved in a few hours with the right tools, but at least I’ll learn how not to bore people with write-ups on rudimentary tasks next time.
2022-08-03 02:01:04
Not really, no. Not without annoying workarounds and a confusing user experience.
Another email, another annoyed user: Firefox not loading websites when launched through rcmd! It works when launched from Alfred.. Please fix ASAP!! I’m gonna fix this Firefox issue once and for all!
Launch Xcode, open the rcmd project, check the launchApp function code: it’s just a NSWorkspace.open call on Firefox.app. What does Alfred do differently?
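That call is roughly the following; a sketch, not rcmd’s actual source:

```swift
import AppKit

// open Firefox the same way a Finder double-click would
let url = URL(fileURLWithPath: "/Applications/Firefox.app")
NSWorkspace.shared.open(url)
```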
Disassemble Alfred.app in Hopper, look for NSWorkspace.open, of course it’s there, it’s the exact same thing.
Try open /Applications/Firefox.app in a terminal, it works, websites load as expected.
Breakpoint on launchApp, check the debugger again, let’s be rigorous: what am I really calling open on?
The argument is /System/Volumes/Data/Applications/Firefox.app, which is just a symlink to /Applications/Firefox.app, right? .. or was it the other way around? Anyway, let’s just try it for the sake of it, I’m desperate.
Run open /System/Volumes/Data/Applications/Firefox.app, huh?? no websites load? THAT WAS IT?!
Add path.replacingOccurrences(of: "/System/Volumes/Data", with: ""), build, run, hold Right Command, press F, Firefox launches and holy cow everything works!!
I don’t even care why anymore, let’s just release this fix on the App Store.
And while I’m at it, why not try to add that window switching capability that people have been asking about?
I remember something about Accessibility permissions not being available in the sandbox, but I just used an App Store app that was able to request the permissions so there has to be a way, how hard could it be?
Well it turns out it’s pretty darn hard, and I’m still working on this window switching thing to this day.. sigh.. let me tell you about it.
There’s an important distinction between switching windows and switching apps on the Mac. As opposed to Microsoft Windows, where you just Alt-Tab through .. well, windows, on macOS you Command-Tab through apps by default. When an app with multiple windows is focused, Command-backtick will cycle through the windows of that app.
Six years ago I was a Windows power user, and when I got my first Mac, Command Tabbing through apps felt very weird. Suddenly I was closing all windows of Sublime but its icon was still there in the Command Tab list, or I would minimize Chrome and focusing its icon didn’t unminimize it. The app vs window distinction just didn’t exist in my mind.
Now, after 6 years, the macOS way of Command-Tabbing through apps and cycling their windows with Command-backtick feels a lot more intuitive.
Of course it might just be the power of habit, after all I was able to be just as productive with the Windows way in the past ¯\_(ツ)_/¯
The app centric approach is nice but having to switch between 10 different apps at a time gets annoying fast.
Pressing Tab 5 times in a row to get to the app I want could be categorized as a first world problem and I should just get used to it. But doing that 50 times a day and having to always visually check if I chose the right icon, tends to break my flow of thinking, and makes me get tired faster because of all the context switching.
That’s the main reason I created rcmd, to switch apps without thinking about switching apps.
My right thumb rests nicely on the Right Command key, and I barely use that easy-to-reach key. So I turned it into a dedicated app switching key.
I decided to dynamically assign each app the first letter of its name so that I don’t have to try to remember what key did I assign to Xcode?. I just hold Right Command and press X without any mental effort, because I know I have no other app starting with X.
And if I forgot that Xcode is not already running (or if it crashes in the background like it sometimes does), rcmd launches it automatically (since I clearly wanted it running if I tried to focus it).
Xcode is a happy case though. I have so many apps starting with S that I decided custom assignments might be a better fit for that. I left Sublime Text on the S key since it’s my most used app, and then assigned mnemonic keys for the others:
- O for Soulver
- P for Spotify
- E for Sketch (because K is taken by the Kitty terminal)
- B for Safari browser
- the rest I reach with rcmd-rshift-s (it’s good enough for me as I rarely have those open)
Often I need to check the status of an app briefly and then get back to what I was doing.
That’s why I added the Hide action in rcmd.
Now I just hold Right Command and press K to check the Kitty terminal, then, without lifting any finger, press K again to hide it and get back to what I was doing.
This also allows the system to activate App Nap for the hidden app and put it into a lower energy usage state until I need it again.
Unfortunately yes, there are many cases where an app might have a lot of windows open:
- App Expose: Command-Tab allows pressing the ↓ Down Arrow key with the app icon selected, to expose all the windows of that app for visual selection.
- Command-backtick: this native macOS hotkey will cycle through the windows of the current app, but we’re back to square one where you have to visually analyze each window to see if you got the right one in focus.
- Alt-Tab: this is a really nice open source app which replicates the Microsoft Windows way of selecting windows by thumbnails.
- Contexts.co: a fuzzy searcher for window titles. I’ve used it in the past and it was definitely faster than the rest, but it still required more key presses than I wanted.
- Stage Manager: the new addition in macOS Ventura, which in its current state is just discoverable Spaces.
It’s a sunny day in Brașov, I’m on my balcony taking in the sun, testing and perfecting XDR Brightness to make working in direct sunlight easier on my MacBook 14” while also rewriting parts of the Lunar UI in SwiftUI.
I’ve already written a lot of SwiftUI boilerplate in my other projects, so I’m mostly copy pasting stuff between Sublime Text windows. I also have three Sublime windows with disassembled macOS private frameworks to look for the hidden functions I need to improve the XDR Brightness curve and responsiveness.
Juggling with all these windows suddenly became very frustrating.
Why can’t I focus exactly the window I want with one hotkey just like I focus apps with rcmd?
I’m probably going to have the same set of windows for the next few days, I know the names of the projects I have open in them, I could use the first letter of the project name to reference a specific window.
The Right Command key is taken, but right beside it stands another rarely used key: the Right Option key (ralt for short).
I want to be able to press ralt-r to focus the Sublime window containing the rcmd project, ralt-l to focus the Lunar project, ralt-v for the Volum project, ralt-p to get to the PrivateFrameworks folder, and so on.
The plan seems simple enough: map Right Option + letter to some focusWindow function.
It’s not like the above hasn’t been done before, there are plenty of window switcher and snap/resize examples on macOS, some of them even open source.
One window snapping tool is even on the App Store: Magnet
But why are there no window switchers on the App Store?
Well, for app switching, Apple provides a really nice API to enumerate and activate running apps without needing any intrusive permissions: NSRunningApplication
Finding Xcode and focusing it:
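A sketch of what that looks like:

```swift
import AppKit

// enumerate running apps, find Xcode by its bundle id, and focus it
if let xcode = NSWorkspace.shared.runningApplications
    .first(where: { $0.bundleIdentifier == "com.apple.dt.Xcode" }) {
    xcode.activate(options: [.activateIgnoringOtherApps])
}
```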
But there’s no such thing for enumerating the windows of those running apps. All of the apps that work with app windows, need to tap into the Accessibility API, the one that gives you full access to extract and modify the contents of everything visible and invisible.
And so, window enumeration becomes possible by fetching the array of UI elements under the AXWindows attribute of an app.
But since a window is just like any other UI element, there’s no focus or activate method, so how do these apps manage to focus a window?
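The enumeration part, sketched with the Accessibility C API (it only returns anything once the app has been granted Accessibility permissions):

```swift
import ApplicationServices

// fetch the AXWindows attribute of a running app, by pid
func windows(ofPid pid: pid_t) -> [AXUIElement] {
    let app = AXUIElementCreateApplication(pid)
    var value: CFTypeRef?
    let err = AXUIElementCopyAttributeValue(app, kAXWindowsAttribute as CFString, &value)
    guard err == .success, let list = value as? [AXUIElement] else { return [] }
    return list
}
```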
Take a look at this nice and intuitive snippet extracted from yabai:
Even though I knew that key window means focused window in macOS terminology, it still took me a while to land on this code and start believing that it really focuses a window.
In the end, what that code does is message passing to the SkyLight private framework, the one that handles macOS window management, the Dock, Spaces and a ton of other stuff. I’m guessing someone sneaked in a VM debugger or looked through the assembly code to find the right bytes to send.
Ok, enumeration and focusing is doable, what else do we need? Right, Accessibility permissions. Here comes the biggest hurdle.
You don’t.
On macOS, an app can run either inside or outside the sandbox.
App Store apps can only run inside the sandbox, and within that, an app can’t ask for Accessibility permissions. The API for that just throws a silent error and does nothing.
But then how does Magnet do it, and a few other apps as well like Peek or PopClip for example?
Turns out, these apps have a special exception from Apple, mostly because they were on the App Store before the sandbox became mandatory: objective c - How to use Accessibility with sandboxed app? - Stack Overflow
I can barely get my apps to not be rejected by the App Store reviewers, I’m not going to get an exception just so that rcmd can focus specific windows. So now what?
I thought, if there was an app running outside the sandbox and listening for rcmd’s listWindows and focusWindow commands, I might be able to get this working.
I remembered Hammerspoon having really complete window management support, and it being scriptable with Lua made it the perfect choice.
HTTP would probably be overkill for this, I knew Hammerspoon had an inter-process communication (IPC) API built-in so I tried to use that.
Well nope, the sandbox doesn’t allow that.
What about the hs CLI that Hammerspoon provides? I knew that you could send arbitrary IPC messages using that, right?
Nope again, any process run by a sandboxed app inherits that sandbox’s limitations.
Ok fine, HTTP it is! Thankfully Hammerspoon provides an HTTP server and I just need to register a callback and make it listen on a port. Since we’ve already reached this madness, let’s go straight to websockets.
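The Hammerspoon side can be sketched roughly like so; the port, path, and message format here are my assumptions, not rcmd’s actual protocol:

```lua
-- ~/.hammerspoon/init.lua (sketch)
local server = hs.httpserver.new()
server:setPort(8377) -- assumed port
server:websocket("/rcmd", function(msg)
  if msg == "listWindows" then
    -- reply with one "id <tab> title" line per window
    local lines = {}
    for _, w in ipairs(hs.window.allWindows()) do
      lines[#lines + 1] = w:id() .. "\t" .. w:title()
    end
    return table.concat(lines, "\n")
  end
  -- e.g. "focusWindow 1234"
  local id = tonumber(msg:match("^focusWindow (%d+)$"))
  local w = id and hs.window.get(id)
  if w then w:focus() return "ok" end
  return "error"
end)
server:start()
```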
Alright, this seems to work. I can connect to the Hammerspoon websocket, get all windows, and focus windows by their IDs.
Now how do I explain to rcmd users that in order to focus windows, they need to install a separate app and place Lua scripts inside the ~/.hammerspoon directory?
The App Store guidelines explicitly forbid an app from installing another app or binary to enhance its capabilities.
2.4.5 Apps distributed via the Mac App Store have some additional requirements to keep in mind:
(iv) They may not download or install standalone apps, kexts, additional code, or resources to add functionality or significantly change the app from what we see during the review process.
So I can’t install Hammerspoon automatically (it would be a bad idea anyway, this is malware behavior), but I can try to automate most of the stuff and present it as a 1-button install action.
So I wrote a function to download Hammerspoon.zip, unzip it in a temporary folder, move it to /Applications, write init.lua and rcmd.lua inside the ~/.hammerspoon directory, launch Hammerspoon, and wait for the websocket to be available.
The user only has to click an Install window switcher button, no big deal.
You see, when a sandboxed app downloads a file, the system automatically adds the com.apple.quarantine extended attribute to the file.
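You can see and clear the attribute with xattr (paths here are illustrative):

```shell
# inspect the quarantine flag added to the download
xattr -p com.apple.quarantine ~/Downloads/Hammerspoon.zip
# clearing it recursively works from a normal shell, but not from sandboxed code
xattr -cr /Applications/Hammerspoon.app
```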
This means that macOS GateKeeper will prevent you from launching any downloaded app or running any binary directly from code.
Even if the user tries to launch the downloaded app manually afterwards, it will still fail with the App can’t be opened error.
No amount of xattr -cr Hammerspoon.app will fix this if run from the sandbox.
Great. Scrap the download and install part, split the button into two buttons:
I’ve streamlined this process as much as the sandbox allows me, and after giving the app to some beta testers, every single one of them found it so confusing that they said they would not use it.
And who can blame them, I myself find it too convoluted whenever I test it.
Yes, surprisingly. It passed App Review without a single rejection.
I hid the feature behind a red Try experimental window switching button to deter support emails on the subject, but it’s there for anyone to try and use.
After the initial setup, it actually works pretty reliably, and the websocket connection to Hammerspoon is so fast that I don’t ever notice this happens over the network. It feels like a native window switcher to me.
But I wasn’t able to create a seamless experience like I did for app switching.
Oh well, at least I solved my own problem and can get back to what I was doing.
One month later.
2022-02-05 01:26:36
Update: I finally found a way to go over the limit in Lunar v5.5.1
Exactly 3 months and a day after placing an order through a Romanian Apple reseller, I finally got my 14-inch M1 Max.
Well, actually.. I first got the wrong configuration (base model instead of CTO), had to return it to them after wasting a day on migrating my data to it, they sent my money back by mistake, had to pay them again, and after many calls and emails later the correct laptop arrived.
As soon as these devices were in the hands of users, requests started coming in for Lunar to provide an option to get past the 500-nit limit for everyday usage.
Over the last week I tried my best to figure out how to do this, but it’s either impossible to raise the nits limit from userspace, or I just don’t have the necessary expertise.
I’ll share some details that I found while reverse engineering my way through the macOS part that handles brightness.
I first started by playing this HDR test video (open it in latest Chrome or Safari for best results): hdr-test-pattern.webm
Which resulted in a blinding white at 1600 nits:
This generated the following logs in Console.app:
After setting the display brightness to max, I could see in the logs that SDR (Standard Dynamic Range) was being capped at 400 nits:
Shining a flashlight directly into the Ambient Light Sensor allowed SDR to jump up to 500 nits:
Since Big Sur, macOS transitioned from having the frameworks on disk as separate binaries to having a single file containing all the system libraries, called a dyld_shared_cache.
- New in macOS Big Sur 11.0.1, the system ships with a built-in dynamic linker cache of all system-provided libraries. As part of this change, copies of dynamic libraries are no longer present on the filesystem. Code that attempts to check for dynamic library presence by looking for a file at a path or enumerating a directory will fail. Instead, check for library presence by attempting to dlopen() the path, which will correctly check for the library in the cache. (62986286)
Searching for keywords from the above logs surfaced only the dyld cache as expected.
I used dyld-shared-cache-extractor to drop the separate binaries on disk, then did another search there.
This surfaced QuartzCore as the single place where that string could be found.
After looking through the QuartzCore binary with Ghidra and finding some iOS headers for it on limneos.net, I created a sample Swift project to try to use some of the exported functions from it: monitorpanel - main.swift
Based on some open-sourced iOS jailbreak tweaks, I noticed that developers used the CAWindowServer class to interface with the display and HID components directly. The class was available here so I tried to do the same on macOS.
Unfortunately, CAWindowServer.serverIfRunning always returns nil, and while CAWindowServer.server(withOptions: nil) returns a seemingly valid server, all external displays are forcefully disconnected when that server is created.
Using the below code, I succeeded in producing the commitBrightness log line in Console, but nothing really changed.
code from main.swift
commitBrightness log line
While looking through Ghidra, I noticed that QuartzCore eventually calls into CoreBrightness functions to increase the nits limit, so I took a look at the exported symbols of that binary.
Unfortunately, the possibly useful symbols are not exported, and trying to link against them results in an undefined symbols error.
Adding the private symbols in the CoreBrightness.tbd file doesn’t help in this case.
I knew from previous work on window management that the SkyLight framework is closely related to the WindowServer, so I took a look at that too.
SkyLight exports a lot of symbols, and fortunately I had a good example on how to use them inside yabai, a macOS window manager similar to i3 and bspwm.
But again, nothing useful is exported.
The function kSLSBrightnessRequestEDRHeadroom seemed promising, but I always got a SIGBUS when trying to call it. I can’t find its implementation so I don’t know what parameters I should pass. I just guessed the first one could be a display ID.
As one Hacker News user pointed out, kSLSBrightnessRequestEDRHeadroom is actually a constant. And of course it is! It has the usual k prefix.. how did I miss that?
While discussing this matter with István Tóth, the developer of BetterDummy, he came up with an interesting idea.
The idea: create a CGVirtualDisplay with the same size as the built-in display, capture the built-in screen with CGDisplayStream, then tone map and stream that video to the virtual display.
The streaming part already works in the latest Beta of BetterDummy and seems pretty fast as well. But adding tone mapping might cause this to be too resource intensive to be used.
I think linking can be done against private symbols using memory offsets; I remember doing something like that 8 years ago at BitDefender, while trying to use the unexported _decrypt and _generate_domain methods of some DGA malware.
But the dyld_shared_cache model of macOS is something new to me and I don’t have enough knowledge to be able to do that right now.
If someone has any idea how this can be achieved, I’d be glad if you could send me a hint through the Contact page.
2021-12-04 01:28:39
Let’s set the stage first. So, it’s Tuesday night and I’m Command-Tab-ing my way through 10 different apps, some with 3-4 windows, while trying to patch bugs in Lunar faster than the users can submit the reports. I’m definitely failing.
I feel my brain pulsing and my ring finger going numb on the Tab key. I stop switching apps and just stare at the Xcode window, containing what I knew was Swift code but looked like gibberish right now.
“Feels like burnout” I think. Wasn’t that what I ran away from when I quit my job to make apps for a living?
I heard a joke recently:
It’s probably only funny for a small group of workaholics, but the reality of those words struck me in the middle of the hysterical laughter I was trying to stop.
Why am I still developing this app?
Why am I adding all the features the users are asking for, then deal with the flood of frustrated emails saying “what an overcomplicated stupid app, I just want to make my screen brighter!!”, then try to hide advanced features to make it simpler, then get assaulted with the confused “I can’t change volume anymore fix this ASAP!!!” because UI changes can very easily introduce bugs by simply forgetting to bind a slider to a value, then get back to scotch taping broken parts slower than the users can report them?
Those features should have probably been their own independent app.
I start to feel my fingers again, press Command-Tab once more, and while looking at the list of app icons I realise something.
Maybe pressing Tab 4-5 times while visually assessing if the selected app icon is the one I want to focus isn’t the best solution for this kind of workflow.
So what does my brain do when I feel burnt out? Gives me ideas for even more apps…
That’s how the idea of rcmd began. We have two Command keys on a Mac keyboard, and the right hand side one is almost never used. What if I use it exclusively for switching apps?
When I used Windows for reverse engineering malware, I liked switching apps using Win + Number, where the number meant the position of the app icon in the taskbar. I didn’t like counting apps however.
Using the app name felt the most natural. I remembered using Contexts for a while, which provides a Spotlight like search bar for fuzzy searching your running apps. But that needed a bit more key presses than I wanted (that is 1) and more attention than I wanted to give (which is none).
My idea sounded a bit simpler: Right Command + the first letter of the app name.
So simple that people were offended by it…
I pitched this idea to Ovidiu Rusu, a very good friend of mine, who surprisingly seemed to have the same need as me. We created the first prototype in about a week (icons and graphics take so much time…) and started using it in our day to day work to see if it made sense.
In less than a day, rcmd became so ingrained in our app switching that we got incredibly annoyed when we had to quit the app for recompiling and debugging. We just kept pressing Right Command-X and staring at the screen like complete idiots, not understanding why Xcode wasn’t being focused.
What most people overlook when they have a simple idea is that 80% of the effort goes into handling edge cases that are not visible in the original idea.
Just for this simple app we had to solve the following problems:
This last question is what led me to write this article. It turned out we needed to do quite a few hacks if we wanted to publish this app in the App Store.
Every app that is submitted to the App Store must be compiled to run within a sandbox. This means that the app will run in a container which will have the same structure as your home directory, but with mostly empty folders.
The sandbox also limits what APIs you can use, and which system components you can communicate with.
The de facto way of reacting to Right Command + some other key is to monitor all key events (yes, just like a keylogger) and discard events that don’t contain the Right Command modifier flag.
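The monitoring approach looks roughly like this; a sketch, and it needs Accessibility permissions, which is exactly the problem:

```swift
import AppKit

// global key-down monitor: see every keypress, keep only right-Command ones
let rightCommandMask: UInt = 0x10 // NX_DEVICERCMDKEYMASK from IOLLEvent.h
NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
    guard event.modifierFlags.rawValue & rightCommandMask != 0 else { return }
    // handle the app-switching hotkey here
    print("rcmd + keyCode \(event.keyCode)")
}
```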
Easy peasy, right? Well no, because that’s not allowed on the App Store.
To use that API you need to first request Accessibility Permissions from the user. Those permissions are prohibited inside the Sandbox, because with those permissions, an app would be able to do all kinds of nasty stuff:
Those are perfectly reasonable things in the context of assistive software, because you need the computer to do stuff for you when you aren’t able to use a keyboard or a mouse/trackpad.
And you need the computer to read out text from other apps, or show choice buttons which you can trigger with your voice.
But for rcmd’s use case, we’re restricted to APIs that don’t require these permissions. APIs so old that 64-bit wasn’t even a thing when they launched, and which require passing C function pointers instead of our beloved, powerful Swift closures.
That’s the Carbon API and it goes a little something like this:
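The Carbon incantation, roughly (a sketch with error handling omitted; the hotkey signature is arbitrary):

```swift
import Carbon

// register Command+R as a global hotkey
var hotKeyRef: EventHotKeyRef?
let hotKeyID = EventHotKeyID(signature: OSType(0x5243_4D44), id: 1) // 'RCMD', arbitrary
RegisterEventHotKey(UInt32(kVK_ANSI_R), UInt32(cmdKey), hotKeyID,
                    GetEventDispatcherTarget(), 0, &hotKeyRef)

// the handler must be a C function pointer: no capturing Swift closures,
// so any state has to live in globals or be threaded through the userData pointer
var spec = EventTypeSpec(eventClass: OSType(kEventClassKeyboard),
                         eventKind: UInt32(kEventHotKeyPressed))
InstallEventHandler(GetEventDispatcherTarget(), { _, _, _ in
    // hotkey pressed
    return noErr
}, 1, &spec, nil, nil)
```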
Not as pretty as the NSEvent method, but it does the job. Kind of.
You see, that beautiful code macaroni above only lets us listen to Any Command + R, not specifically the Right Command. There’s no way to pass something like rightCmdKey into RegisterEventHotKey.
A workaround I found for this was to listen for flagsChanged events and keep a global flag that is true when there’s a rightCommand modifier and false otherwise.
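In code, the workaround is roughly this sketch:

```swift
import AppKit

// Carbon hotkeys can't distinguish left from right Command,
// so track the right-Command state ourselves via flagsChanged
var rcmdHeld = false
NSEvent.addGlobalMonitorForEvents(matching: .flagsChanged) { event in
    rcmdHeld = event.modifierFlags.rawValue & 0x10 != 0 // NX_DEVICERCMDKEYMASK
}
// the Carbon hotkey handler then no-ops unless rcmdHeld is true
```

Crucially, watching only modifier-flag changes this way doesn’t require the Accessibility permissions that full key monitoring does.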
Doing this reminded me of the days I worked with Rust, and how wonderfully impossible a task like this would be. I don’t think I’m touching it again, I like my global atomic booleans.
Now the weirdest limitation hits. There’s no way to discard a hotkey event and forward it back to the system so it can pass it to the next handler.
Say I register Command-C and I only want to do something when Right Command is held. If I do nothing when Left Command is held, then you can’t copy text anymore using Command-C.
I tried returning the inappropriately named OSStatus(eventNotHandledErr), but the event still doesn’t return to the handler chain.
At this point we seriously considered dropping the App Store idea and just going the self publishing route.
But just thinking what we would have to do for that triggered something akin to PTSD.
Here’s a list with what I can remember off the top of my head from Lunar:
Finding yet another workaround seemed much easier.
Thankfully it really was easy. It turns out that RegisterEventHotKey is plenty fast. So fast that we were able to register the hotkeys only when Right Command was being held, and unregister them when the key was released.
Now rcmd was ready for publishing on the App Store.
There was one little thing that bothered me though. I usually keep 4-5 separate projects open in Sublime Text, each with its own window. Because of the sandbox, there’s no way to get a list of windows for an app and, say, focus a specific one, or cycle between them.
But I found a little gem while I was customising my fork of yabai, a way to trigger Exposé for a single app:
We decided to show Exposé if, for example, you press rcmd-s while Sublime Text is already focused. It was good enough for us.
Not for the App Store reviewer though.
I knew private and undocumented APIs are frowned upon on the App Store. But I had no idea they would guarantee a rejection.
I like breaking the norm with my creations. Some of them will be flukes, some will be criticised into oblivion, but a small number of them might turn out to be something a lot of people wanted but didn’t know they needed.
rcmd is one of those things: a bit quirky, unique in its approach, and incredibly useful for a specific group of people.
That is also its weak point though. It’s hard to communicate this usefulness without being able to try the app first. But as it turns out, the App Store doesn’t provide any support for creating a free XX-day trial before buying an app.
Free trials for non-subscription apps have been allowed since mid-2018 on the App Store, and are supposed to be implemented using in-app purchases. Unfortunately, this approach has a lot of inconveniences which are very well detailed in this article: Ersatz Free Trials | Bitsplitting.org
These are the biggest shortcomings for my case:
I tried a few dozen apps on the App Store and I couldn’t find a single one offering a free trial for a non-subscription purchase using the above method.
Having to pay upfront steers away a lot of potential users, but given all that bad UX, we decided not to implement any free trial and just sell the app for a fair one-time price.
3 years ago, I would have probably chosen to make the app open source and give it away for free, just like I did with Lunar.
I would have thought:
I’m making a ton of money at this company, what I would get by selling a small app would be peanuts anyway.
Only recently did I realise that this approach kept me dependent on having a job where I click-clack useless programs 8 hours a day, getting only 1-2 hours after work for my projects and sacrificing my health and sanity in the process.
In my whole 7-year career as a professional API Glue Technician and experienced Wheel Reinventer, I never did anything remotely as useful as even the simplest app I can code and publish in 2 months right now. At those companies, most of my work was scrapped anyway when the yearly redesign period came.
So I’d rather have those peanuts please.
Now, with so many limitations, I think we can take a fair guess at why most indie developers choose to distribute their app outside the App Store.
Here are some of the apps I find most useful, and what I think is the main reason for them not being in the App Store:
The app’s main functionality (searching the filesystem) needs Full Disk Access permissions which are not allowed inside the sandbox.
It also uses Accessibility Permissions for auto-expanding snippets and other custom workflows.
Capturing and responding to all kinds of keyboard and trackpad events needs Accessibility Permissions.
The app also encapsulates the older BetterSnapTool utility for snapping windows to a grid. Resizing windows requires the same permissions.
Reacting to and changing keyboard input in realtime needs a special keyboard driver which is only allowed by Apple on a case by case basis. You have to request DriverKit entitlements from Apple, and they have to deem you worthy of those entitlements.
Needless to say, they won’t give hardware driver entitlements for a software app mimicking a keyboard.
Full Disk Access is probably the biggest requirement here.
Of course, there are other code editors on the App Store, like BBEdit, but they have this initial phase where you have to manually give them access to your / (root) directory.
Compared to Sublime Text’s launch and edit instantly first time experience, I feel this is a bit annoying. I’m pretty sure this confuses a lot of first time users, and they will probably blame the developer, not knowing that this is the only way to access files from the Sandbox.
Resizing windows, listening for global trackpad gestures, detecting titlebars, moving windows to other spaces/screens. All of these need Accessibility Permissions.
There’s even an FAQ for that on their page:
Why is Swish not on the App Store?
Apple only allows sandboxed apps on the App Store. Swish needs to perform low-level system operations which prevent it from being sandboxed. Read more here.
As outlined in their 2017 article, Moving from Mac App Store, the sandbox limitation is the primary reason they left the App Store.
Their Screen Recording feature has three very useful functions:
Honestly, I’m not sure about this one. The App Store is full of image editors and graphic content creation tools.
I think the unique pricing model is something they would have a hard time implementing on the App Store.
The unique pricing model of Sketch
They actually have an App Store edition, but it’s severely limited.
Sharing things between the host and the VM is probably the largest functionality affected by the sandbox.
They provide a table with everything that’s missing in their App Store version of the app: KB Parallels: What is the difference between Parallels Desktop App Store Edition and Standard Edition?
Low-level communication with monitors is only possible by using a lot of private and reverse-engineered APIs (IOKit, DisplayServices, IOAVService etc.)
Accessibility Permissions are also needed for listening and reacting to brightness and volume key events.
Because of the sandbox, the lite App Store version of Lunar only supports software dimming and can only react to F1/F2 keys.
I think the free trial limitation is the only thing keeping such a self-contained app outside the App Store.
Soulver is incredibly complex and useful in its functionality, but I don’t think too many people would splurge $35 on a notepad-calculator app without trying it first. It deserves every single dollar of that price, that I can say for sure.
2021-07-16 23:39:37
One lazy evening in November 2020, I watched how Tim Cook announced a fanless MacBook Air with a CPU faster than the latest 16 inch MacBook, while my work-provided 15 inch 2019 MacBook Pro was slowly frying my lap and annoying my wife with its constant fan noise.
I had to get my hands on that machine. I also had the excuse that users of my app couldn’t control their monitor brightness anymore, so I could justify the expense easily in my head.
So I got it! With long delays and convoluted delivery schemes because living in a country like Romania means incredibly high prices on everything Apple.
This already starts to sound like those happy stories about seeing how awesome M1 is, but it’s far from that.
This is a story about how getting an M1 made me quit my job, bang my head against numerous walls to figure out monitor support for it and turn an open source app into something that I can really live off without needing a “real job”.
I develop an app called Lunar that can adjust the real brightness, contrast and volume of monitors by sending DDC commands through the Mac GPU.
On Intel Macs this worked really well because macOS had some private APIs to find the framebuffer of a monitor, send data to it through I²C, and best of all, someone has already done the hard part in figuring this out in this ddcctl utility.
M1 Macs came with a different kernel, very similar to the iOS one. The previous APIs weren’t working anymore on the M1 GPU: the IOFramebuffer was now an IOMobileFramebuffer and the IOI2C* functions weren’t doing anything.
All of a sudden, I was getting countless emails, Twitter DMs and GitHub issues about how Lunar doesn’t work anymore on macOS Big Sur (most M1 users thought the OS upgrade was causing this, disregarding the fact that they were now using hardware and firmware never before seen on the Mac).
This was also a reality check for me. Without analytics, I had no idea that Lunar had so many active users!
It was the last day of November. Winter was already coming. Days were cold and less than 10km away from my place you could take a walk through snowy forests.
snowy forests in Răcădău (Braşov, Romania)
But I was fortunate, as I had my trusty 2019 MacBook Pro to keep my hands warm while I was cranking out code at my day job that would be obsolete in less than 6 months.
Just as the day turned into evening, the delivery guy called me about a laptop: the custom-configured M1 MacBook Pro that cost as much as 7 junior developers’ monthly salaries had arrived!
After charging the laptop to 100%, I started the installation of my enormous Brewfile and left it on battery as an experiment. Meanwhile I kept working on the 2019 MacBook because my day job was also a night job when deadlines got tight.
Before I went to sleep, I wanted to test Lunar just to get an idea of what happens on M1. I launched it through Rosetta and the app window showed up as expected, every UI interaction worked normally but DDC was unresponsive. The monitor wasn’t being controlled in any way. I just hoped this was an easy fix and headed to bed.
So it turns out the I/O structure is very different on M1 (more similar to iPhones and iPads than to previous Macs). There’s no IOFramebuffer that we can call IOFBCopyI2CInterfaceForBus on. There’s now an IOMobileFramebuffer in its place that has no equivalent function for getting an I²C bus from it.
After days of sifting through the I/O Registry trying to find a way to send I²C data to the monitor, I gave up and tried to find a workaround.
I realized I couldn’t work without Lunar being functional. I went back to doing the ritual I had to do in the first days I got my monitor and had no idea about DDC:
One specific comment was becoming prevalent among Lunar users:
QuickShade works for me on M1. Why can’t Lunar work?
QuickShade uses a black overlay with adjustable opacity to lower the image brightness. It can work on any Mac because it doesn’t depend on some private APIs to change the brightness of the monitor.
it also makes colors look more washed out in low brightness
Actually, unlike Lunar, QuickShade doesn’t change the monitor brightness at all.
QuickShade simulates a lower brightness by darkening the image using a fullscreen click-through black window that changes its opacity based on the brightness slider. The LED backlight of the monitor and the brightness value in its OSD stay the same.
This is by no means a bad critique of QuickShade. It is a simple utility that does its job very well. Some people don’t even notice the difference between an overlay and real brightness adjustments that much so QuickShade might be a better choice for them.
I thought: that isn’t what Lunar set out to do, simulating brightness. But at the same time, a lot of users depend on this app, and if it could at least do that, people would be just a bit happier.
So I started researching how the brightness of an image is perceived by the human eye, and read way too much content about the Gamma factor.
Here’s a very good article about the subject: What every coder should know about Gamma
I noticed that macOS has a very simple way to control the Gamma parameters, so I said why not? Let’s try to implement brightness and contrast approximation using the Gamma table:
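The idea behind the Gamma approximation can be sketched in a few lines. This is Python rather than the app’s actual code, and the table size and default gamma are my assumptions, not Lunar’s: decode each level through the display gamma, scale the linear light the way a dimmer backlight would, then re-encode. A real implementation would hand the resulting table to a platform API (CGSetDisplayTransferByTable on macOS).

```python
def brightness_gamma_table(brightness: float, gamma: float = 2.2, size: int = 256):
    """Approximate a backlight change by rescaling the display transfer table.

    Only the math is shown here; the table would normally be passed to a
    platform API such as CGSetDisplayTransferByTable.
    """
    table = []
    for i in range(size):
        x = i / (size - 1)              # normalized input level in [0, 1]
        linear = x ** gamma             # decode: gamma-encoded -> linear light
        dimmed = linear * brightness    # scale linear light, like a backlight would
        table.append(dimmed ** (1 / gamma))  # re-encode for the display
    return table
```

Since (x^γ · b)^(1/γ) = x · b^(1/γ), the whole table ends up being the identity ramp multiplied by a constant, which is why this trick is cheap enough to apply on every brightness change.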
Of course this needed weeks of refactoring because the app was not designed to support multiple ways of setting brightness (as it usually happens in every single-person hacked up project).
And there were so many unexpected issues, like, why does it take more than 5 seconds to apply the gamma values?? ლ(╹◡╹ლ)
It seems that the gamma changes become visible only on the next redraw of the screen. And since I was using the builtin display of the MacBook to write the code and the monitor was just for observing brightness changes, it only updated when I became too impatient and moved my cursor to the monitor in anger.
Now how do I force a screen redraw to make the gamma change apply instantly? (and maybe even transition smoothly between brightness values)
Just draw something on the screen ¯\_(ツ)_/¯
I chose to draw a (mostly hidden) blinking yellow dot when a gamma transition happens, to force screen redraw.
Now I was prepared to release a new version of Lunar with the Gamma approximation thing as a fallback for M1. But as it happens, one specific user sent me an email about how he managed to change the brightness of his monitor from a Raspberry Pi connected to the HDMI input, while the active input was still set to the MacBook’s USB-C.
I had already explored this idea, as I have numerous Pis lying around, but I couldn’t get it working at all. I started writing a condescending reply about how I had already tried this, how it would never work, and how he probably just has a monitor that happens to support this, so it wouldn’t apply to other users.
But then… I realized what I was doing and started pressing backspace backspace backspace… and all the while I was remembering how the best features of Lunar were in fact ideas sent by users and I should stop thinking that I know better.
Instead, I started asking questions:
I probably asked the right questions because the reply was exactly what I needed to get this working right away.
After 30 minutes of downloading the latest Raspberry Pi OS with the full desktop environment, flashing it, updating to a beta firmware version, and setting the right values in /boot/config.txt, the Pi was able to send DDC requests using ddcutil while the monitor was rendering the MacBook desktop.
I couldn’t let this slip away, so I started implementing a network based DDC control for the next version of Lunar:
/:monitor/:control/:value
I established from the start that the local network latency and HTTP overhead were negligible compared to the DDC delay, so I didn’t have to look into more complex solutions like USB serial, websockets or MQTT.
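Lunar’s actual Pi-side code isn’t shown here, but a minimal sketch of what could sit behind that /:monitor/:control/:value route looks like this. Everything beyond the route shape is my assumption: the port, the control-to-VCP-code mapping, and shelling out to ddcutil for the actual DDC write.

```python
import re
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# /:monitor/:control/:value, e.g. /1/brightness/75
ROUTE = re.compile(r"^/(?P<monitor>\d+)/(?P<control>\w+)/(?P<value>\d+)$")

# DDC/CI VCP codes for the controls mentioned in the article
VCP_CODES = {"brightness": "0x10", "contrast": "0x12", "volume": "0x62"}

def parse_route(path):
    """Split the route into (monitor, control, value), or None if malformed."""
    m = ROUTE.match(path)
    if not m or m["control"] not in VCP_CODES:
        return None
    return int(m["monitor"]), m["control"], int(m["value"])

class DDCHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = parse_route(self.path)
        if parsed is None:
            self.send_error(404)
            return
        monitor, control, value = parsed
        # hand the request to ddcutil, which does the I²C write over the Pi's HDMI port
        subprocess.run(["ddcutil", "--display", str(monitor),
                        "setvcp", VCP_CODES[control], str(value)])
        self.send_response(200)
        self.end_headers()

def serve(port: int = 8080):
    """Run until killed; the Mac would then hit e.g. http://pi.local:8080/1/brightness/75."""
    HTTPServer(("0.0.0.0", port), DDCHandler).serve_forever()
```

Plain HTTP with GET keeps the Pi side trivial, which fits the observation above that network overhead is dwarfed by the DDC delay itself.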
Even though side projects are such a praised thing in the software development world, I can’t recommend doing one like this.
It was very hard doing all of the above in the little time I had after working 9+ hours fullstack at a US company (which was also going through 2 different transitions: being bought by a conglomerate and merging with another startup).
I owe a lot to my manager there, I wouldn’t have had the strength to do what followed without his encouraging advice and always present genuine smile.
One day, he told me that he finally started working on a bugfix for a long-standing problem in our gRPC gateway. He confessed that it was the first time in two months he found the time to write some code (the thing he actually enjoyed), between all the meetings and video calls. 10 minutes later, another non-US based team needed his help and his coding time got filled with scheduled meetings yet again. That is the life of a technical manager.
Now that Lunar was working on M1 and the Buy me a Coffee donations showed that people find value in this app, I thought it was time to stop doing what I don’t like (working for companies on products that I never use) and start doing what I always seemed to like (creating software which I also enjoy using, and share it with others).
So on April 1st I finished my contract at the US company, and started implementing a licensing system in Lunar.
Sounds simple, right? Well, it’s far from that. Preparing a product for the purpose of selling it took me two whole months, and more energy than I put into 4 months of experimenting with Gamma and DDC on M1 (yeah, that was the fun part). This part of the journey is the hardest, and not fun at all.
My take from this is: if you’re at the start of selling your work, choose a payment or licensing solution that requires the least amount of work, no matter how expensive it may seem at first.
I went with Paddle for Lunar because of the following reasons:
Even with that, I made the mistake to choose a licensing system that wasn’t natively supported by Paddle and that made me dig into a 2-month rabbit hole of licensing servers.
I wanted the system that Sketch has: a one-time payment for an unlimited license, that also includes 1 year of free updates.
After a successful launch in June, most users were happy with the Gamma solution, and some even tried the Raspberry Pi method: Lunar.app - a way for M1 Macs to control 3rd Party Monitor’s Brightness and Contrast - Hardware - MPU Talk
One user was still persistent in looking for I²C support though. Twice he tried to bring to my attention a way to use I²C on M1, and the second time he finally succeeded.
His GitHub comment on the M1 issue for Lunar sparked new hope among users, and some of the more technical ones started experimenting with the IOAVServiceReadI2C and IOAVServiceWriteI2C functions.
Because of my shallow understanding of the DDC specification at the time, I couldn’t get a working proof of concept in the first few tries.
I didn’t know exactly what chipAddress and dataAddress were for.
I knew from my experiments with ESP32 and Arduino boards that I²C is in fact a serial bus, which means you can communicate with more than one device from the same 2 pins of the main device by chaining the secondary devices.
That possibility brings the requirement of a chip address which the main device should send over the wire to reach a specific device from that chain.
chaining sensor boards through I²C
In the DDC standard, the secondary device is the monitor, and it has the chip address 0x37.
The EDID chip is located at the address 0x50, which is what we have in @zhuowei’s EDID reading example.
But then what is the dataAddress?
No idea, but thankfully someone reverse engineered the communication protocol and found this to always be 0x51.
After some trial and error, user @tao-j discovered the above details and managed to finally change the brightness from his M1 MacBook.
Unfortunately, this was just the beginning, as the Mac Mini supports more than one monitor and it’s not clear which monitor we’re controlling when calling IOAVServiceCreate().
I found a way to get each monitor’s specific AVService by iterating the I/O Kit registry tree and looking for the AppleCLCD2 class. To know which AppleCLCD2 belonged to what monitor, I had to cross-reference identification data returned by CoreDisplay_DisplayCreateInfoDictionary with the attributes of the registry node.
With that convoluted logic, I managed to get DDC working on Mac Mini as well, but only on the Thunderbolt 3 port. The HDMI port still doesn’t work for DDC, and no one knows why.
In the end, DDC on M1 was finally working in the same way it worked on Intel Macs!
Some quirks are still bothering the users of Lunar though:
For the moment these seem to be hardware problems and I’ll just have to keep responding to the early morning support emails no matter how obvious I make it that these are unsolvable.
I left these at the end because the details may bore most people but they might still be useful for a very small number of readers.
All monitors have a powerful microprocessor inside whose purpose is to receive video data over multiple types of connections and to create images from that data through the incredibly tiny crystals of the panel.
That same microprocessor dims or brightens a panel of LEDs behind that panel of crystals based on the Brightness value that you can change in the monitor settings using its physical buttons.
Because the devices that connect to the monitor need to know stuff about its capabilities (e.g. resolution, color profile etc), there needs to be a language known by both the computer and the monitor so that they can communicate.
That language is called a communication protocol. The protocol implemented inside the processors of most monitors is called Display Data Channel or DDC for short.
To allow for different monitor properties to be read or changed from the host device, VESA created the Monitor Control Command Set (or MCCS for short) which works over DDC.
MCCS is what allows Lunar and other apps to change the monitor brightness, contrast, volume, input etc.
I²C is a wire protocol, which basically specifies how to translate electrical pulses sent over two wires into bits of information.
DDC specifies which sequences of bits are valid, while I²C specifies how a device like the monitor microprocessor can get those bits through wires inside the HDMI, DisplayPort, USB-C etc. cables.
macOS doesn’t block volume, it simply doesn’t implement any way for you to change the volume of a monitor.
Windows actually only changes the software volume: if your monitor’s real volume is at 50%, Windows can only lower that in software, so you’ll hear anything between 0% and 50%. If you check the monitor OSD, you’ll see that the monitor’s volume value always stays at 50%.
Now macOS could probably do that as well, so that at least we’d have a way to lower the volume. But it doesn’t.
So if you want to change the real volume of the monitor on Mac, Lunar can do that.