2026-04-27 22:00:58

When it comes to software developers, there are a few distinct types. There is, for example, the extroverted, chatty type, who is always out there sharing the latest and newest libraries and projects with everyone, and who is very much into bouncing ideas off others, regardless of whether those others know what they’re talking about. Then there is the introverted loner, who prefers to tackle programming challenges by bouncing things around inside their own mind and going on long walks to mull things over before committing to anything significant.
This leads to interesting scenarios when it comes to management-enforced ‘optimization’ strategies like Pair Programming. This approach involves two developers sharing the same computer and keyboard, theoretically doubling effective output by some metric, but realistically often leaving at least one side feeling pretty miserable and disconnected unless you put two of the chatty types together.
As a certified introverted loner developer, the idea of using an LLM chatbot as a coding assistant naturally triggers unpleasant flashbacks to hours of forced, awkward pair ‘programming’. However, maybe using an LLM chatbot could be more pleasant, since you can skip the whole awkward socializing bit. To give it a fair shake, I put together a little experiment to see whether LLM-based coding assistants are something that I could come to appreciate, unlike pair programming.
Any good experimental setup features clear goals and parameters that define what will be tested and what the expectations are. Obviously I come into this whole experiment from a somewhat negative angle, so to make it easy I’ll be picking two fairly straightforward scenarios for the LLM to assist with: configuring the clock speed on an STM32F411 MCU using plain CMSIS, and porting some C++ networking code from my NymphRPC library to Ada.
These are topics that I’m fairly familiar and comfortable with, so that I know what questions I have here, and what I’m roughly expecting as output. I’ll be treating the chatbot for the most part as I would use StackOverflow or nag people on IRC, with my main fear being that it’ll be expecting pleasantries from me instead of brutal and cold professionalism. Ideally it’ll be a step above me hurling profanities at a search engine for clearly willfully misunderstanding what I am looking for.
My expectations are that it’ll have some answers for me for the questions I have about how to do certain aspects of the tasks, and may even produce half-way usable code that I can fairly easily understand and double-check using my usual documentation references.
This just leaves one big question: which LLM chatbot to pick, and how the heck any of it is supposed to work, since I have avoided these things like the proverbial plague.
Although I am aware that everyone who is into LLM-assisted programming seems to like to promote LLMs like Claude, I’d ideally not be signing up for yet another service. This pretty much just leaves GitHub Copilot, which I already have access to. I have written about this particular LLM chatbot quite a bit since it was introduced, with my generally negative feelings towards these tools increasingly backed up by research.
Biased I may be, but to be a true scientist you have to be able to set aside your biases for an experiment and accept reality in the face of new evidence. Thus, with all biases and doubts firmly pushed aside in favor of the aforementioned cold professionalism, let’s get down to brass tacks.
My pet project for STM32-related programming has for a while been my Nodate project, involving the use of the CMSIS standard headers and the macros defined therein in order to write things ranging from start-up to running the Dhrystone benchmark and deciphering the various flavors of real-time clocks.
Much of this work entails digging through datasheets, reference manuals and piles of reference code, as well as throwing queries at search engines to see what potentially useful results percolate out of that particular resource. Coming across the trials and tribulations of fellow STM32 developers in forum threads and the like can be both heartening and disheartening, but all of it tends to condense into something that you can use to progress in the project.
Perhaps ironically, the moment I tried to use the chatbot in the browser I got an error, with the GitHub status page indicating that some of their systems were down, including those for Copilot.

This raises another interesting point: regardless of whether an LLM chatbot makes for a good programming partner, a human partner doesn’t generally randomly keel over or become unresponsive in the midst of trying to do some work together. If they do, however, that’s absolutely a medical emergency and you should call 911, 112, or your local equivalent emergency number stat.

Anyway, after waiting for services to be restored, I was eventually able to ask the chatbot how to properly set the clock speed on an STM32F411 MCU, having previously been tripped up by the need to set the regulator voltage scaling (VOS) in the power control register (PWR_CR). This is a power-saving feature that has to be adjusted before the MCU can hit its higher, and clearly power-wasting, clock speeds.
Shockingly, the chatbot happily spits out ST HAL code and ignores the ‘CMSIS’ bit, although you could maybe argue that the ST HAL uses CMSIS inside. But then so does Arduino code for many MCUs.
To its credit, it does mention in a ‘Key CMSIS Requirements’ list that you need to set PWR_REGULATOR_VOLTAGE_SCALE1, but without any further detail on where to set it. There is also the tiny detail that this isn’t even a CMSIS macro; that would be PWR_CR_VOS, which sets both bits for the full range.
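For reference, the bare-CMSIS sequence being asked for is short. The sketch below is not the chatbot’s output: the bit definitions match the CMSIS device header for the STM32F411, but the registers themselves are mocked as plain integers here so the logic can be shown (and checked) off-target.

```c
#include <stdint.h>

/* Bit definitions as found in the CMSIS device header for the
   STM32F411, reproduced so this example is self-contained. */
#define RCC_APB1ENR_PWREN   (1UL << 28)               /* PWR interface clock enable */
#define PWR_CR_VOS_Pos      14U
#define PWR_CR_VOS          (0x3UL << PWR_CR_VOS_Pos) /* VOS[1:0] field */

/* Mock registers standing in for RCC->APB1ENR and PWR->CR so this
   compiles and runs on a host machine; on target these are volatile
   memory-mapped registers. */
static uint32_t mock_rcc_apb1enr;
static uint32_t mock_pwr_cr;

/* Enable the PWR peripheral clock, then select voltage scale mode 1
   (VOS = 0b11), which the F411 requires before running at 100 MHz. */
void set_voltage_scale1(void)
{
    mock_rcc_apb1enr |= RCC_APB1ENR_PWREN;
    mock_pwr_cr = (mock_pwr_cr & ~PWR_CR_VOS) | PWR_CR_VOS;
}
```

On real hardware the two assignments target RCC->APB1ENR and PWR->CR, and the VOS bits must be set before the PLL is configured for the higher clock.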
Fortunately we can do the digital equivalent of smacking the chatbot upside the head and tell it to do what we actually asked: provide the real CMSIS version. Doing so results in another gobsmacking moment when it happily spits out code that doesn’t bother to include the CMSIS headers, but simply copies every single used struct definition and more into the code itself, bloating it up massively:

This is of course very annoying when it should have used the #define macros, and it clearly can generate include statements, based on its inclusion of <cstdint>. But the absolutely deadly sin here is that this code isn’t even functional for an STM32F411, as can be observed here:

I’m not entirely sure where it got the PWR_CR_VOS_SCALE1 thing from, with asking a friendly search engine leading to just a handful of results, one of which is for an STM32F407 that runs at 168 MHz max. This is hilarious in light of the comments right above the code. It makes you wonder what example code it pilfered from.
At this point I could probably continue to pick at this generated code, but suffice it to say that my confidence level in its generated code and overall output hovers somewhere between ‘low’ and ‘bottom of a black hole’. I’m more than happy to flip this particular table, rage quit, and not lose what remains of my sanity.
Although I had intended to also do some fun porting to Ada together with my buddy Copilot of some C++ networking code in my NymphRPC remote procedure call library, I found my nerves to be sufficiently frayed and the bouts of near-hysterical laughter out of sheer disbelief worrisome enough to abort this attempt.
I also do not feel that it’d do much more than hammer home the point that GitHub Copilot doesn’t make for a good pair programming partner, nor a good programming tool, search engine, or much of anything else. When the only thing it got me was having to check its output for very obvious errors and shaking my head in disbelief when I found them, it beggars belief that anyone would voluntarily use it.
When we also get reports that the use of such LLM chatbots is likely to degrade human cognition and critical thinking skills, not to mention the worrisome prospect of cognitive surrender, then it’s probably best to avoid these chatbots altogether.
I also agree generally with Advait Sarkar et al. in their 2022 paper that you cannot really do pair programming as such with an LLM chatbot, but that it offers something different. Something that’s very different from using a search engine and digesting various articles and forum posts along with reference material into something new.
Thus, after using an LLM chatbot for some coding ‘assistance’ I’ll be happily scurrying back to my boring references and yelling invectives at search engines.
2026-04-27 19:00:04

Although the game Pizza Tycoon – known as Pizza Connection in Europe – probably doesn’t ring a bell for many folk, this 1994 DOS title is special enough for [cowomaly] to write an open source engine to bring it into the modern age as Pizza Legacy. Along the way, some questions popped up, such as how to animate the little cars that you see driving around in the simulated city and how the heck this was done back in the day on a 25 MHz 386 CPU.
On today’s GHz+, multi-core CPUs, we can just brute-force shovel pixels, sprites, and even 3D models around without a second thought while dedicating an entire core to pathfinding and other algorithms. Naturally, the original game developers had no such luxury. To understand how this animation was originally achieved, [cowomaly] had to dive into the assembly code of the original game.
The original algorithm was very simple: each road tile has at least one direction associated with it, so that a car that is on such a tile knows which direction it can travel, essentially creating a grid of one-way roads. When there’s a crossing, a random direction is picked, with the extra rule that you cannot do two consecutive turns in the same direction, presumably to keep cars from going around in circles.
Meanwhile, collision detection is simply a matter of checking the list of cars for a potential collision and not moving said car if one is found. This check is also optimized to take the road directions and one-way nature into account, with a 10-tick wait if there’s a blockage. Amusingly, this seems to enable the formation of brief traffic jams, adding to the feeling of realism.
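The write-up doesn’t reproduce the reconstructed routines themselves, but the rules as described can be sketched out. The type and function names below are illustrative rather than taken from the original disassembly; directions are encoded clockwise so that a ‘turn’ is just a difference modulo four.

```c
#include <stdlib.h>

/* One-way exit directions, encoded clockwise. */
typedef enum { DIR_N, DIR_E, DIR_S, DIR_W } dir_t;

/* Classify the turn implied by going from one heading to another:
   +1 = right turn, -1 = left turn, 0 = straight on (or U-turn). */
int turn_of(dir_t from, dir_t to)
{
    int d = ((int)to - (int)from + 4) % 4;
    if (d == 1) return 1;
    if (d == 3) return -1;
    return 0;
}

/* At a crossing, pick a random exit from the tile's allowed set
   (a bitmask of 1 << dir), but forbid two consecutive turns in the
   same rotational sense, which keeps cars from circling a block. */
dir_t pick_direction(unsigned allowed, dir_t heading, int last_turn)
{
    dir_t candidates[4];
    int n = 0;
    for (int d = 0; d < 4; d++) {
        if (!(allowed & (1u << d)))
            continue;
        int t = turn_of(heading, (dir_t)d);
        if (t != 0 && t == last_turn)
            continue;                /* same-sense turn: not allowed */
        candidates[n++] = (dir_t)d;
    }
    if (n == 0)
        return heading;              /* no legal exit: keep heading */
    return candidates[rand() % n];
}

/* Collision check: a car only moves onto the target tile if no other
   car already occupies it; otherwise it stalls (for 10 ticks in the
   original, which is what produces the brief traffic jams). */
typedef struct { int x, y; } car_t;

int tile_occupied(const car_t *cars, int ncars, int self, int x, int y)
{
    for (int i = 0; i < ncars; i++)
        if (i != self && cars[i].x == x && cars[i].y == y)
            return 1;
    return 0;
}
```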
Although not a perfect algorithm and with some small bugs due to unchecked conditions with collisions, it’s hard to deny that the effect is very natural car movement, something that games like Sim City likely used as well.
2026-04-27 16:00:00

We use CAD packages in our 3D work, and it’s likely that many of us have become annoyed by the limitations of controlling the view of a 3D object using a 2D interface, our mouse. Joystick-like 3D controllers exist for this purpose, but [David Liu] found them inconvenient. He tried a trackball, but that didn’t improve matters. His response was to take the trackball and change the way it controlled the software, turning it from the equivalent of a ball rolling over a surface to a ball representing the object on the screen itself. He can turn and rotate the object intuitively just by moving the ball.
He started with a Kensington off-the-shelf trackball and adapted its electronics and handy twin optical sensors such that it worked in the required fashion. There was a lot of iterating and tuning to get the control feeling right, but he’s ended up with a peripheral that replaces both mouse and 3D joystick, and leaves the other hand free for those keyboard shortcuts.
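We don’t know the details of [David Liu]’s firmware, but the geometry of why two optical sensors suffice is worth sketching. Assume, hypothetically, one sensor viewing the ball along the X axis and the other along the Y axis: the surface velocity at a point p on a ball spinning with angular velocity ω is ω × p, so each sensor’s 2-D optical flow reads off two components of ω, and between them all three rotation axes are covered.

```c
typedef struct { double x, y, z; } vec3;

/* Recover the ball's angular velocity from the two sensors' optical
   flow. Sensor A looks along +X and reports flow in (y, z); sensor B
   looks along +Y and reports flow in (x, z). r is the ball radius.
   From v = w x p:
     at p = (r,0,0): flow = ( w.z * r, -w.y * r )   [A's (y, z)]
     at p = (0,r,0): flow = (-w.z * r,  w.x * r )   [B's (x, z)]  */
vec3 omega_from_flows(double a_y, double a_z,
                      double b_x, double b_z, double r)
{
    vec3 w;
    (void)b_x;           /* redundant: also measures -w.z * r */
    w.x =  b_z / r;
    w.y = -a_z / r;
    w.z =  a_y / r;
    return w;
}
```

The redundant reading of the z component from both sensors could even serve as a sanity check: disagreement between the two flows would indicate slippage or a miscalibrated sensor.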
He’s making a go of it as a product called the Rotatrix, which is definitely worth a look. But we know the Hackaday community, and we’re sure this will have given some of you ideas as to other new ways to control your CAD models. Here’s to a new era of useful peripherals!
2026-04-27 13:00:20

“Lights, camera, action!” might have been the call when recording back in the day, but for an awesome three-dimensional viewing experience, you might try yelling “Mist, Mirrors, Laser!” and following in the footsteps of [Ancient]’s latest adventure in voxel displays, which is also embedded below.
He starts with a naive demonstration: take a laser projector and toss an image into a flat cloud of mist. That demonstrates that yes, the mist does resolve an image, and that the viewing angle is very poor; that is, brightness drops off sharply when you’re out of line with the projector. In this case, that’s a good thing! It means more angles can be projected into that mist for a three-dimensional, hologram-like effect.
The optical train gets folded up, probably to make this fit on a tabletop: first, an array of flat mirrors in front of the projector splits the image from the projector into multiple viewpoints, which are each bounced to a second flat mirror that sends the image into the fog bank.
Some might call the resulting image a hologram; others might complain that that’s technically something totally different, and that this volumetric display is just all smoke and mirrors. We can hope that [Ancient] sees fit to share more details, like the software stack needed to generate the video feed, though it’s likely using a version of the same software as his last volumetric display, which used the same laser but whose point cloud was made from a bubblegram rather than an actual cloud. With a lot more points, though, the resolution is amazing in comparison, at the cost of appearing fuzzy at the edges. Unfortunately, we do not see the display run DOOM in this demo, as one of his previous projects did.
This video is more of a demo than a how-to, but it’s a heck of an impressive demo. If you don’t feel like watching the assembly, jump right to 9:00 to be impressed. It comes across a lot better on video than in the screenshot.
2026-04-27 10:00:21

It’s been three weeks since the Artemis II crew returned to Earth, and while the mission might be over for Reid Wiseman, Victor Glover, Christina Hammock Koch, and Jeremy Hansen, the work is only just beginning for engineers back at NASA. In a blog post earlier this week, the space agency went over the preliminary post-mission assessments of the spacecraft and its ground support equipment, and detailed some of the work that’s currently taking place as preparations begin for Artemis III.
During Artemis I, higher than expected damage was noted on both the Orion’s heat shield and the Space Launch System (SLS) launch pad. But according to NASA, the changes implemented after that first mission seem to have prevented similar issues this time around. The post also explains that reusable components of the Orion spacecraft, such as the avionics and the crew seats, are already in the process of being removed from Integrity so they can be installed in the next capsule on the production line.
While watching the live stream of the Artemis mission is the closest most of us will ever get to experiencing spaceflight, that doesn’t mean you can’t explore the solar system from the comfort of your own home — or more specifically, your browser. [Sani Huttunen] has created an incredible web-based solar system simulator that lets you explore our celestial neighborhood throughout different periods of time. You can tour the moons of Jupiter, see how the planets aligned on the date of your birth, and even check in on the Voyager probes. There are some very valid reasons to be skeptical about software moving to the web, but we’ve got to admit, this is a very slick demonstration of just how far modern browsers have come.
Speaking of how far things have come, are you ready for a car without a rear window? Polestar certainly hopes so, as their latest model does away with such quaint concepts. The glass panel in the roof ends right around the back headrests, and while the rear of the vehicle does open up for storage, the hatch is completely solid. In place of the traditional mirror, there’s a “high resolution” 1480 x 320 display that shows the feed from a rear-mounted camera.
No, that’s not a typo. At a time when smartphones are shipping with 2K displays, should the driver want to see what’s going on behind their $70,000+ USD electric vehicle, they’re limited to seeing it at a vertical resolution below that of VGA. We’d make a joke about Polestar offering up a “Rearview+” upgrade down the line that would give the driver a higher resolution view, but honestly, it’s getting a little too close to reality to be funny.
If that last one has you wishing for a reminder of simpler times, how about some new software for using the iconic Wii Remote as an input device? The Wii and its revolutionary controllers may be turning 20 later this year, but that hasn’t stopped the dedicated fans. This new wrapper provides accelerometer calibration, infrared tracking, and the ability to remap the Wii Remote’s buttons and create key combos. If you do something cool with it, we’d love to hear about it.
Finally, on the other end of the input spectrum, some details leaked out this weekend about Valve’s upcoming Steam controller — namely, the fact that it will cost players $99 at release. As reported by VICE, a hands-on review of the controller by TechyTalk was accidentally published early on YouTube, providing the public with pricing info ahead of an official announcement.
At first blush, this might seem like a lot of money to pay for a game controller, but it’s actually within striking distance of the sticker price on the standard controllers on the Xbox and PlayStation consoles. Perhaps more critically, it’s around half the price of the official “premium” controller offerings available for the aforementioned systems. Is it really any wonder that we’ve got cars without rearview mirrors when folks are putting down 200 bucks for a fancy PlayStation controller?
See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.
2026-04-27 07:00:00

For those first venturing into sailing, it can be overwhelming since the experience is thick with jargon and skills that don’t often show up in life ashore. With endless choices, including monohulls versus catamarans, fiberglass versus wood, fractional versus masthead rigs, and sloops versus ketches, a new sailor risks doing something like single-handing a staysail schooner when they should have started on a Bermuda-rigged dinghy without a spinnaker. Luckily, there are some shortcuts to picking up the hobby, like the venerable Sunfish or Hobie ships. It’s also possible to build a simple sailing vessel completely out of materials from a local hardware store, as [Cumberland Rover] has been demonstrating.
[Cumberland Rover] has a number of homemade vessels under his belt, ranging from various kayaks to rowboats. His latest project is a 12-foot rowboat with the option to add a mast and sail. The hull is made from two 1×12 pieces of lumber, bent around a frame and secured. Plywood makes up the bottom, and a few seats finish out the build. He’s also using standard hardware to fasten everything together, which helps with maintenance. That came in handy when he recently added some height to the bow of the boat to improve its seaworthiness.
For sailing, the mast is made out of two pieces of 2x lumber glued together and then worked into a more cylindrical shape. It’s unstayed, reducing complexity, and although he broke one in extremely high winds, it is more than strong enough for most of his sailing. The ship is gaff-rigged, with a square sail hoisted up the mast by a wooden spar. All of these design choices make it quick and easy to set the sail up when the wind is good or pack it away fast when it’s time to row.
Although there are paid plans available on his website, the methods used in the video show how simple it can be to get into rowing or sailing at minimal cost. You’ll still want to learn the basics of sailing before taking one of these out into open water. DIY speedboats are possible and accessible as well, but there’s the added complexity of a motor to think about, as well as the registration requirements that often accompany powered craft.