2026-03-10 04:00:54

What hardware hacker doesn’t have a soft spot for transparent cases? While they may have fallen out of mainstream favor, they have an undeniable appeal to anyone with an interest in electronic or mechanical devices. Which is why the Orbigator built by [wyojustin] stands out among similar desktop orbital trackers we’ve seen.
Conceptually, it’s very similar to the International Space Station tracking lamp that [Will Dana] built in 2025. In fact, [wyojustin] cites it specifically as one of the inspirations for this project. But unlike that build, which saw a small model of the ISS moving across the surface of the globe, here a transparent globe is rotated around the internal mechanism. This not only looks gorgeous, but solves a key problem with [Will]’s design: there’s no trailing servo wiring to keep track of.
For anyone who wants an Orbigator of their own, [wyojustin] has done a fantastic job of documenting the hardware and software aspects of the build, and all the relevant files are available in the project’s GitHub repository.
The 3D printable components were created with OpenSCAD, the firmware, which runs on a Raspberry Pi Pico 2 and calculates the current position of the ISS, is written in MicroPython, and the PCB was designed in KiCad. Incidentally, we noticed that Hackaday alum [Anool Mahidharia] appears to have been lending a hand with the board design.
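The project’s MicroPython firmware does the real orbital math, but as a rough illustration of what a subsatellite-point calculation involves, here’s a stdlib-only Python sketch using a simplified circular-orbit model. The inclination and period constants are our own approximations, not values from the Orbigator firmware, and a real tracker would propagate up-to-date TLE data instead:

```python
import math

# Simplified circular-orbit ground-track model. These constants are
# illustrative assumptions; real trackers propagate current TLE data.
INCLINATION = math.radians(51.6)    # approximate ISS inclination
PERIOD_S = 92.9 * 60                # approximate orbital period, seconds
EARTH_ROT = 2 * math.pi / 86164     # Earth's sidereal rotation rate, rad/s

def subsatellite_point(t, lon_asc_node=0.0):
    """Latitude/longitude (degrees) of the point under the satellite,
    t seconds after crossing the ascending node at lon_asc_node (rad)."""
    u = 2 * math.pi * t / PERIOD_S                    # argument of latitude
    lat = math.asin(math.sin(INCLINATION) * math.sin(u))
    # Longitude along the orbital plane, minus Earth's rotation since t=0
    dlon = math.atan2(math.cos(INCLINATION) * math.sin(u), math.cos(u))
    lon = lon_asc_node + dlon - EARTH_ROT * t
    lon = (lon + math.pi) % (2 * math.pi) - math.pi   # wrap to [-180, 180)
    return math.degrees(lat), math.degrees(lon)

# A quarter-orbit after the ascending node, latitude peaks at the
# orbital inclination.
print(subsatellite_point(PERIOD_S / 4))
```

Since only the `math` module is used, the same approach runs under MicroPython on a Pico; it drifts quickly without real orbital elements, which is why serious trackers rely on fresh TLEs.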
As much as we love these polished orbital trackers, we’ve seen far more approachable builds if you don’t need something so elaborate. If you’re more interested in keeping an eye out for planes and can get your hands on a pan-and-tilt security camera, it’s even easier.
2026-03-10 02:30:00

While it might not be comprehensive, [Bret.dk] recently posted a retrospective titled “Every Single Board Computer I Tested in 2025.” The post covers 15 boards from 8 different companies. The cheapest board was $42, but the high-end topped out at $590.
We like the structure of the post. The boards are grouped into an under-$50 category, another for $50 to $100, and a final group for everything north of $100. Then there’s some analysis of what RAM prices are doing to the market, and commentary about CIX P1, Qualcomm, RISC-V, and more.
You get the idea that the post is only summarizing experiences with each board, and, for the intended purpose, that’s probably a good thing. On the other hand, many of the boards have full reviews linked, so be sure to check them out if you want more details. The Arduino Q didn’t fare well in review, nor did the BeagleBoard Green Eco. But the surprise was newcomer CIX. Their SoC powers two entries, one from Radxa and the other from Orange Pi. In both cases, performance was surprisingly good. There are some concerns with tooling and a few hiccups with things like power consumption, but if those were fixed, the CIX chips could be showing up more often.
[Bret’s] post is very informative. We’d be interested to hear whether you disagree with any of his assessments or have a favorite SBC that didn’t make his list. Let us know in the comments. Of course, there are other boards out there, but you can see that development tools and support often differentiate products more than just raw computing power.
2026-03-10 01:00:57

A friend of mine has been a software developer for most of the last five decades, and has worked with everything from 1960s mainframes to the machines of today. She recently tried AI coding tools to see what all the fuss is about, as a helper to her extensive coding experience rather than as a zero-work vibe coding tool. Her reaction stuck with me; she referenced her grandfather who had been born in rural America in the closing years of the nineteenth century, and recalled him describing the first time he saw an automobile.

We are living amid a wave of AI slop and unreasonable hype, so it’s an easy win to dunk on LLMs, but as the whole thing climbs towards the peak of inflated expectations on the Gartner hype cycle, perhaps it’s time to look forward. The current AI hype is inevitably going to crash and burn, but what comes afterwards? The long tail of the plateau of productivity will contain those applications in which LLMs are a success, but what will they be? We have yet to hack together a working crystal ball, but perhaps we can gaze into the future anyway.
To most of the population, AI, which for them mostly means ChatGPT, is a magic tool that can write stuff for them, and make them look smart when they’re not asking it to draw a picture of a cat doing something human. It has replaced a search engine for many people, and become a confidante to many others to the extent that the phrase “Chatbot psychosis” has entered the lexicon.

Having a tool that can write anything you ask it to has of course unleashed that AI slop; whether it’s a useless web page or an equally useless report at your employer, we’re all acquiring the skill of spotting fake content. There are some people who have predicted the demise of human writers as a result, but though the chatbots can do a pretty good job of copying a writer’s style, I do not share that view. By the time we’ve reached that long plateau, there will be an enhanced value in content written by meatbags, because the consumer will have evolved a hair-trigger response to slop, so rest assured, Hackaday will not succumb.
If I have a prediction for those chatbots, it’s that they will mirror previous booms and crashes: the circular economic illusion between chipmakers and AI companies will inevitably derail, and, like search engines in the early 2000s, most of them will not survive.
My software developer friend sees a future for the LLM as a productivity aid in her coding, but where do I, as a writer and Hackaday scribe, see them going? It’s something I’ve given quite some thought to, and my conclusion is much less all-encompassing. The privacy aspect of sharing your innermost thoughts, business decisions, or whatever other valuable stuff with a third party will inevitably catch up with the LLM industry, whether through an unscrupulous data sharing deal or an LLM revealing things it shouldn’t to others. I thus think that the most ubiquitous LLMs in our future will be much more local, with less reliance on those power-hungry datacentres. I can’t predict all their applications, but I’m going to give a couple of examples in the here and now which have caught my attention.
The first example comes from my experience outside Hackaday, over a long career in the publishing and documentation industry. Many organisations have huge libraries of information on their intranets that are commercially sensitive enough that they can’t leave the site for processing by an external AI company. Imagine documentation, product specifications, and the like. There’s already a thriving industry of intranet search and retrieval products in this space, and the AI companies naturally want a piece of it too. I can see a future in which a local LLM equivalent of those old yellow Google Search rack servers provides an intelligent interface to those troves of data, without the danger of leaks, or of going off piste.
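As a toy illustration of the retrieval step such a system might sit on top of, here’s a stdlib-only Python sketch. The documents, scoring scheme, and function names are all invented for the example; production systems would use embeddings and hand the retrieved text to a locally hosted model as prompt context:

```python
import re
from collections import Counter

# Toy keyword-overlap retrieval over a made-up intranet corpus.
# A real system would use embeddings and a local LLM; this only
# shows the "find relevant documents on-site" step.
DOCS = {
    "spec-widget": "Widget X7 operating voltage is 3.3 V, max current 200 mA.",
    "hr-policy": "Holiday requests must be submitted two weeks in advance.",
    "widget-errata": "Widget X7 rev B fixes the 3.3 V regulator brown-out bug.",
}

def tokens(text):
    """Lowercase term counts for crude overlap scoring."""
    return Counter(re.findall(r"[a-z0-9.]+", text.lower()))

def retrieve(query, k=2):
    """Return the k document IDs sharing the most terms with the query."""
    q = tokens(query)
    scored = sorted(DOCS, key=lambda d: -sum((tokens(DOCS[d]) & q).values()))
    return scored[:k]

# The retrieved text would then be pasted into the local LLM's prompt,
# so answers stay grounded in documents that never leave the site.
print(retrieve("what voltage does widget x7 run at"))
```

The point of keeping both retrieval and generation on local hardware is exactly the one above: nothing sensitive ever crosses the site boundary.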

The second comes from both a 1980s British TV sit-com, and from the LLM projects we’re starting to see here at Hackaday. In short, I think that appliances you can talk to will find their way into the consumer market, and nowhere will be safe from the Red Dwarf Talkie Toaster.
Jokes about maniacal kitchen appliances aside, we are now at the point at which the latest Raspberry Pi can just about run a functioning speech-based chatbot. Give it a few more years of microprocessor and microcontroller development, and the current cost of a Pi with an accelerator board will drop to the few dollars it takes for a high-end microcontroller to do the same task.
I see it as inevitable that there will be a class of chip that will be offered out of the box with some kind of LLM capability, and that in no time the most unlikely of appliances will have personalities. It will inevitably be annoying, but out of that will come a few that might be useful.
So along with my software developer friend I’ve tried to move beyond my writer’s disdain for the very obvious negative side of the LLM bubble, and look ahead to a future when using a chatbot is no longer thought to make you look smart. In a few years’ time an LLM will be one of those things that’s just there, and what form will it take? Like that early-20th-century American who looked at a car and saw it was going to shape the future, I know I’m looking at something that’s going to remain with me whether I like it or not. I’ve speculated on how that might happen in a couple of ways above, but what about you? Are the agents which are the darling of the AI crowd at the moment going to take over our lives? Or will it be something else? As always, the comments are below.
2026-03-09 23:30:00

Regular Hackaday readers will no doubt be familiar with the work of Matthew Alt, AKA [wrongbaud]. His deep-dive blog posts break down hardware hacking and reverse engineering concepts in an engaging way, with practical examples that make even the most complex of topics approachable.
But one of the problems with having a back catalog of written articles is making sure they remain accessible as time goes on. (Ask us how we know.) Without some “algorithm” at play that’s going to kick out the appropriate article when it sees you’re interested in sniffing SPI, there needs to be a way to filter through the posts and find what’s relevant. Which is why the new “Roadmap” feature that [wrongbaud] has implemented on his site is so handy.
At the top of the page you’ll find [wrongbaud]’s recommended path for new players: it starts with getting your hardware and software together, and moves through working with protocols of varying complexity until it ends up at proper techno wizardry like fault injection.
Clicking any one of these milestones calls up the relevant articles — beginners can step through the whole process, while those with more experience can jump in wherever they feel comfortable. There are also buttons that let you filter articles by topic, so, for example, you can pull up anything related to I2C or SPI.
Further down the page, there’s a helpful “Common Questions” section that gives you a brief overview of how to accomplish various goals, such as identifying an unknown UART baud rate or extracting the contents of an SPI flash chip.
Based on the number and quality of the articles, [wrongbaud]’s site has always been on our shortlist of must-see content for anyone looking to get started with hardware hacking, and we think this new interface is going to make it even more useful for beginners who appreciate a structured approach to learning.
2026-03-09 22:00:00

Cryptography is a funny thing. Supposedly, if you do the right kind of maths to a message, you can send it off to somebody else, and as long as they’re the only one that knows a secret little thing, nobody else will be able to read it. We have all sorts of apps for this, too, that are specifically built for privately messaging other people.
Only… sometimes just having such an app is enough to get you in trouble. Even the garbled message itself could be evidence against you, even if your adversary can’t read it. Enter The Guardian. The UK-based media outlet has deployed a rather creative and secure way of accepting private tips and information, one which seeks to provide heavy cover for those writing in with the hottest scoops.
There are plenty of encrypted messaging apps out there, of greater or lesser value. Ultimately, though, they all have a similar flaw. If you have one of these ultra-secure apps on your phone, or malicious authorities capture you sending lots of messages to such a server, it can be somewhat obvious that you’re doing something worth hiding. You might not be—you might just have a penchant for keeping your fantasy football submissions under wraps. Regardless, using heavily-encrypted messaging systems can put a bit of a beacon on you, at a time when you might be hoping to stay as unobtrusive as possible.

It’s this precise problem that The Guardian and developers at the University of Cambridge hoped to solve with the CoverDrop messaging system. It’s designed specifically for users of news apps to be able to make confidential submissions to journalists without leaving a telltale trail of evidence that could reveal their actions. It’s intended to be suitable for implementation by a wide range of news agencies if so desired, as laid out in the project white paper.
The CoverDrop system uses multiple techniques to not just encrypt messages, but hide whether or not any messaging is happening in the first place. The key is that CoverDrop is integrated into every copy of the Guardian’s news app out there, and each app sends small amounts of encrypted information to the system at regular intervals. Most of the time, this is just meaningless text with no information content whatsoever.

That is, unless somebody has a message to send to a journalist. In that case, the message and the source’s public key are encrypted with the journalist’s public key, packaged up, and sent in such a way that it appears fundamentally no different to any other garbage message being sent to the CoverDrop servers. Both real and cover messages are encrypted the same way, have the same length, and are sent at the same times, so anyone monitoring network traffic won’t be able to tell the difference.
At the receiving end, CoverDrop’s secure servers remove an initial layer of encryption to filter the real messages out from the cover messages. These are then provided to journalists via a dead drop delivery system, which pads the still-encrypted real messages with some cover messages to ensure the drops are always the same size. In the event a dead drop contains a message for a given journalist, they can decrypt it with their private key, since it was encrypted to their public key in the first place. Since the messages also include the source’s public key, replies can be sent back in a similarly secure fashion.
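None of CoverDrop’s actual wire format is reproduced here, but the fixed-size framing idea is easy to sketch. In this stdlib-only Python example (the constants and function names are ours, and the public-key encryption layer is deliberately omitted), real messages are padded out to a fixed frame size while cover traffic is pure random noise of the same size:

```python
import secrets
import struct

# Illustrative constant, not CoverDrop's real frame size: every frame
# on the wire is exactly this many bytes, real or cover.
MESSAGE_SIZE = 512

def frame_real(message: bytes) -> bytes:
    """Length-prefix a real message and pad it with random bytes.
    In the real system this frame would then be encrypted to the
    journalist's public key before transmission."""
    body = struct.pack(">H", len(message)) + message
    if len(body) > MESSAGE_SIZE:
        raise ValueError("message too long for one frame")
    return body + secrets.token_bytes(MESSAGE_SIZE - len(body))

def frame_cover() -> bytes:
    """A cover frame: random noise, indistinguishable by size."""
    return secrets.token_bytes(MESSAGE_SIZE)

def unframe(frame: bytes) -> bytes:
    """Recover the original message from a real frame."""
    (length,) = struct.unpack(">H", frame[:2])
    return frame[2:2 + length]

real = frame_real(b"tip: check the server logs")
cover = frame_cover()
print(len(real) == len(cover) == MESSAGE_SIZE)
```

In the real system every frame is additionally encrypted before it leaves the phone, so padded real frames look just as random on the wire as cover frames; the server tells them apart only by whether decryption succeeds.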

As for on-device security, the system is designed to reveal as little as possible about whether it has been used for secure messaging or not. Message storage vaults used by the app are encrypted, maintained at a constant size, and modified at regular intervals whether covert messages are being sent or not. Unless the decryption passphrase is known, there is no obvious evidence that the app has been used to send any messages at all.
For those eager to implement the system, or merely audit its functionality, the CoverDrop codebase is available on GitHub. Providing a secure and deniable method of submitting sensitive tips is desirable for many newsrooms, which could lead to wider adoption, or to similar systems popping up elsewhere. Of course, no system is absolutely secure, but a messaging system that focuses on more than just simple encryption will be a boon to those looking to communicate with less fear of surveillance or retribution.
2026-03-09 19:00:05

When Asteroid 2024 YR4 was first discovered, it created a bit of a kerfuffle when it was reported it had a couple-percent chance of hitting the Earth in 2032. At 60 meters (196 feet) across, this would have been in the “city killer” class that nobody really wants to see make landfall, so NASA and the ESA scrambled all assets to refine its trajectory in time to do something about it. Amongst those assets was the James Webb Space Telescope (JWST), which is now reporting it will miss both us and our moon.

We reported that JWST was being tapped for this task over a year ago, when the main concern was still whether YR4 might hit Earth or not. An Earth impact was fairly quickly ruled out as the window narrowed to include only Earth’s moon, and concern shifted to excitement. A city killer striking Earth is obviously bad news. The same thing happening to the Moon is a chance to do science — and 2032 would have been plenty of time to get assets in place to observe the impact.
Unfortunately for the impact-curious, JWST was able to narrow down the trajectory further — and we’ve now gone from up to a 4% chance of hitting Luna to a sure miss of 20,000 km or more.
As this game of cosmic billiards we call a solar system continues, it’s only a matter of time before Earth or her moon is struck by another object. Unless we can deflect it, that is — NASA and partnering agencies have been testing how to do that.