Hackaday

From 8086 to Z80: Building a NASM-Inspired SDK for 8-Bit Retro Computing

2026-03-17 23:30:43

Assembler syntax is a touchy subject, with many a flamewar having raged over e.g. Intel vs AT&T style syntax. So when [Humberto Costa] recently acquired an MSX system for some fun retro-style ASM programming, he was dismayed to see that the available Z80 assemblers did not support the syntax of his favorite ASM tool, NASM. Thus was born the HC SDK project, which seeks to bring more NASM to the Z80, 8085, and a slew of other processors.

There’s both a project site and a GitHub repository, from which both source and pre-compiled releases can be obtained. Supported host platforms are macOS, Windows, OpenBSD, FreeBSD, and Linux, while currently supported targets are the 8080, 8085, 8086, and Z80. Support for the 6502 is in progress.

The Netwide Assembler (NASM) targets only the x86 architecture and is one of the most popular assemblers for Linux on x86. It uses a variant of the Intel ASM syntax, contrasting strongly with the GNU Assembler (GAS), which uses AT&T syntax. Of course, in an ironic twist of fate, NASM now also supports AT&T syntax and vice versa, albeit with some subtle gotchas.

Regardless, if ASM for these retro architectures is your thing, then the HC SDK may be worth checking out. [Humberto] also says that he’s looking at adding higher-level language support to make it a more complete development environment for these old systems and new takes on them.

Thanks to [Albert Wolf] for the tip.

Fictional Moon: Reality TV and SciFi Don’t Mix

2026-03-17 22:00:42

It is a safe bet that nearly all Hackaday readers like to at least imagine what it would be like to build and live in an orbital station, on the moon, or on another planet. Moon bases and colonies show up all the time in fictional writing and movies, too. For the Hackaday crowd, some of these are plausible, and others are — well — a bit fanciful. However, there’s one fictional moonbase that we think might have been too realistic: Moonbase 3.

View of the base from above.

If that didn’t ring a bell, we aren’t surprised. The six-episode series was a co-production between Twentieth Century Fox and the BBC that aired in 1973. To make matters worse, after the initial airings in the UK, Australia, and New Zealand, the video master tapes were wiped. Until 1993, there were no known copies of the show, but then one turned up at a US television station.

The show had many links to Dr. Who and, in fact, if you think the spacesuits look familiar, they made later appearances in two Dr. Who episodes.

These spacesuits would later show up in Dr. Who.

Consider the year 1973. Four years earlier, the US went to the moon after essentially starting from scratch ten years earlier. The show was set in the far future of 2003, so it is easy to imagine that a lot would happen in the next 30 years. Sadly, that wasn’t the case, but you can hardly blame the writers.

The premise was that there were five moonbases, each with a number. The US and Russia had Moonbases 1 and 2. The Europeans had the titular Moonbase 3. China and Brazil had the final two moonbases.

The goal of the Moonbase was to conduct scientific research on materials such as foamed metals and exotic fuels. Of the six episodes, the final one is amazing and redeems the rest of the series. Overall, though, the show is competent but nothing special. But as I mentioned, it is almost too realistic.

The Realism

The show had a real science advisor, BBC science correspondent [James Burke], later of “Connections” fame, so things looked mostly good. The NASA-like chatter is realistic, and the characters talk to computers in nouns and verbs like the Apollo computers used, which didn’t turn out to be especially accurate in the far future of 2003. The producers’ aim was to make a realistic program and stay away from “bug-eyed monsters.” One episode did at least hint at monsters, but, in the end, it turned out to be a false alarm.

The tech isn’t amazingly realistic, but none of it is just crazy fantasy either. But the true realism — and the part that might have prevented it from being a big hit like Star Trek or Dr. Who — was the story content itself.

Another day at the (lunar) office.

Most of the stories show people in slightly futuristic-looking offices talking about how to maintain their funding from Earth. If you’ve ever worked on a government project, you know this is probably the most realistic thing you could do on a show like this. It is also tedious and boring.

Sure, there are stories about psychological stress, accidents (which, of course, threaten funding), and erratic scientists. There’s a Mr. Scott-like engineer who needs rescuing by a Russian — heady stuff for 1973. But the thread through it all is worrying about budget cuts or a shutdown order.

That said, none of the episodes are especially bad, either. The first episode, “Departure and Arrival,” has the old director leaving and a new director arriving, which makes it handy to introduce everyone to the audience. The other episodes were filmed in a different order than the airing order, so it doesn’t hurt much to skip around, but we’d suggest saving the last episode for last.

We don’t write TV dramas, but we imagine the same could be said of most genres. If you made a realistic show about the police force, the fire department, and a hospital emergency room, too much realism would probably be a real drag. No one wants to see the department have mandatory safety training or check hoses for defects. There might be some excitement, but the ratio of excitement to mundanity is probably pretty lopsided toward the boring.

Some of what the show predicted came true: Russia and the US would cooperate in space. The moon did have ice. But like most shows of its era, it missed the boat on things like personal communication, flat screens, and other modern tech.

Unrealistic

Not that it is all realistic. For some reason, the low gravity on the moon is only apparent outside the Moonbase, but there doesn’t seem to be any artificial gravity. The model work leaves something to be desired, and while you can excuse it as quaint, other shows of the same time or earlier did better.

To build drama, the characters had to make mistakes. A lot of them. “Oh! I ran out of oxygen!” “Drat! My spaceship was throwing an error, but it fixed itself, and now it’s back!” Things like that. It is hard to imagine that, given the hostile environment and the cost of a base like this, the people would be so careless.

The final episode features a scientific project that’s hard to imagine, but I won’t say more because I don’t want to spoil the best episode.

Of course, there are plenty of technical errors if you consider what really happened in 2003, but you can forgive those.

Your Favorite

I don’t mean to pan the show. You should hang in there for episode six. I don’t recommend skipping right to it, either. It may not become your favorite moonbase, but the show is highly watchable. You can find a few copies of the entire series on YouTube. There are also a few copies on Archive.org.

What’s your favorite fictional moonbase? We wish some of the planned moonbases had become real, but alas, they, too, were fictions. While not a moonbase, the Great Moon Hoax was fictional, even though it claimed to be factual.

FullSpectrum is Like HueForge for 3D Models, but Bring Your Toolchanger

2026-03-17 19:00:42

Two test towers, showing the palette potential of three (R, B, Y) filaments.

Full-color 3D printing is something of a holy grail, if nothing else because of how much it impresses the normies. We’ve seen a lot of multi-material units over the past few years, and with Snapmaker’s U1 and the Prusa XL it looks like tool changers are coming back into vogue. Just in time, [Radoux] has a fork of OrcaSlicer called FullSpectrum that brings HueForge-like color mixing to tool-changing printers.

The hook behind FullSpectrum is very simple: stacking thin layers of colors, preferably with semi-translucent filament, allows for a surprising degree of mixing. The towers in the image above have only three colors: red, blue, and yellow. It’s not literally full-spectrum, but you can generate surprisingly large palettes this way. You aren’t limited to single-layer mixes, either: A-A-B repeats and even arbitrary patterns of four colors are possible, assuming you have a four-head tool changing printer like the Snapmaker U1 this is being developed for.
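As a rough sketch of why this works, each thin semi-translucent layer can be modeled as an alpha-compositing step over the layers below it. This is purely illustrative (the per-layer opacity value and the compositing model here are our assumptions, not FullSpectrum’s actual color math):

```python
# Toy model of layer-stack color mixing: each thin semi-translucent
# layer is composited over whatever shows through from below.

def stack_color(layers, alpha=0.5, base=(255, 255, 255)):
    """Estimate the perceived color of a stack of filament layers.

    layers: bottom-to-top list of (r, g, b) filament colors
    alpha:  per-layer opacity (hypothetical; real filaments vary widely)
    base:   color showing through the bottom of the stack
    """
    color = base
    for layer in layers:  # composite each layer over what is below it
        color = tuple(alpha * l + (1 - alpha) * c for l, c in zip(layer, color))
    return tuple(round(c) for c in color)

RED, BLUE, YELLOW = (255, 0, 0), (0, 0, 255), (255, 255, 0)

# A plain alternation and an A-A-B repeat give noticeably different purples:
print(stack_color([RED, BLUE]))       # blue-leaning purple
print(stack_color([RED, RED, BLUE]))  # a different mix from the same spools
```

With only three spools, every distinct ordered stack is another entry in the palette, which is where the surprisingly large color count comes from.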

FullSpectrum is in fact a fork of Snapmaker’s fork of OrcaSlicer, which is itself forked from Bambu Slicer, which forked off of PrusaSlicer, which originated as a fork of Slic3r. Some complain about the open-source chaos of endless forking, but you can see in that chain how much innovation it gets us — including this technique of color mixing by alternating layers.

[Wombly Wonders] shows the limits of this in his video: you really want layer heights of 0.08 mm to 0.12 mm, as the standard 0.2 mm height introduces striping, particularly with opaque filaments. Depending on the colors and the overhang, you might get away with it, but thinner layers are generally going to be a safer bet. Fully translucent filaments can blend a little too well at the edges, but the HueForge community — which we’ve covered previously — already has a good handle on characterizing translucency, and we’ll likely see a lot of that knowledge applied to FullSpectrum as time goes on.

Now, you could probably use this technique with a multi-material unit (MMU), but tool-changing printers are where it is going to shine, because they’re so much faster at it. With the right tool-changer, it’s actually faster to run off a model mixing colors from the cyan-yellow-magenta color space than it is to print the same model with the exact colors needed loaded on an MMU. That’s unexpected, but [Wombly] demonstrates it in his video with a chicken that’s listed on Bambu’s MakerWorld as taking nineteen hours, printed in under seven.

Could this be the killer app that pushes tool-change printers into the spotlight? Maybe! Tool-changing printers are nothing new, after all. We’ve even seen it done with a delta, and there are lots of other DIY options if you don’t fancy buying the big Prusa. If you’ve been lusting after such a beast, though, you might finally have your excuse.

Polyphonic Tunes On The Sharp PC-E500

2026-03-17 16:00:22

If you’re a diehard fan of the chiptune scene, you’ve probably heard endless beautiful compositions on the Nintendo Game Boy and Commodore 64, and a few phat FM tracks from the Segas of later years. What the scene has yet to see is a breakout artist ripping hot tracks on the Sharp PC-E500. If you wanted to be that artist, though, you’d probably find use in this 3-voice music driver for the ancient 1993 mini-PC.

This comes to us from [gikonekos], who dug up the “PLAY3” code from the Japanese magazine “Pocket Computer Journal” published in November 1993. Over on GitHub, the original articles have been scanned, and the assembly source code for the PLAY3 driver has been reconstructed. There’s also documentation of how the driver actually works, along with verification against RAM dumps from actual Sharp PC-E500 hardware. The driver itself runs as a machine code extension to the BASIC interpreter on the machine. The “PLAY” command can then be used to specify a string of notes to play at a given tempo and octave. Polyphony is simulated using time-division sound generation, with output via the device’s rather pathetic single piezo buzzer.
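The time-division trick itself is easy to sketch. This toy model (in Python, not a transcription of PLAY3’s actual machine code; the slice length and sample rate are arbitrary) cycles a single 1-bit output between the voices, so each drives the buzzer for a short slice and the ear hears all three notes at once:

```python
# Illustrative sketch of time-division polyphony on a single 1-bit buzzer:
# rapidly rotate which voice's square wave controls the output.

def render(voice_freqs, duration=0.01, sample_rate=44100, slice_len=32):
    """Return a list of 0/1 buzzer states mixing several square waves."""
    samples = []
    n = int(duration * sample_rate)
    for i in range(n):
        voice = (i // slice_len) % len(voice_freqs)  # whose turn is it?
        f = voice_freqs[voice]
        # Square wave for the currently active voice at this sample
        phase = (i * f / sample_rate) % 1.0
        samples.append(1 if phase < 0.5 else 0)
    return samples

# A C-major triad shared across one piezo:
buzz = render([261.63, 329.63, 392.00])
```

The slicing happens far faster than the ear can follow, which is why the result sounds like a chord rather than a fast arpeggio, though the duty-cycle juggling is also why a lone piezo doing this never sounds terribly punchy.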

It’s very cool to see this code preserved for the future. That said, don’t expect to see it on stage at the next Boston Bitdown or anything—as this example video shows, it’s not exactly the punchiest chiptune monster out there. We’ll probably stick to our luscious fake-bit creations for now, while Nintendo hardware remains the bedrock of the movement.

A Voltage Regulator Before Electronics

2026-03-17 13:00:15

Did you ever wonder how the mechanical voltage regulator — that big black box wired up to the generator on a car from the ’60s or before — worked? [Jonelsonster] has some answers.

For most people in 2026, an old car perhaps means one from the 20th century, now that vehicles from the 1990s and 2000s have become the beloved jalopies of sallow youths with a liking for older cars and a low budget. But even a 1990s vehicle is modern in terms of its technology, because a computer controls the show. It has electronic fuel injection (EFI), an anti-lock braking system (ABS), closed-loop emissions control, and the like.

Go back in time to the 1970s, and you’ll find minimal electronics in the average car. The ABS is gone, and the closest thing you might find to EFI is an electronic ignition, where the points in the distributor have been replaced with a simple transistor, or perhaps an electronic voltage regulator on the alternator. Much earlier than that, everything was mechanical, be that the ignition or that regulator.

The video below the break shows a pair of units, seemingly from 1940s tractors. They would have had a DC generator: a spinning coil with a commutator and brushes, in a magnetic field provided by another coil. These things weren’t particularly powerful by today’s standards, and sometimes their charging could be a little lackluster, but they did work. We get to see how, as he lifts the lid off to reveal what looks like a set of relays.

We’re shown the functions of each of the three coils with the aid of a lab power supply: there’s a reverse-current relay that disconnects the generator if the battery tries to power it, an over-current relay that disconnects the field coil if the current is too high, and an over-voltage relay that does the same for voltage. The regulation comes down to the magnetic characteristics, and while it’s crude, it does the job.
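The decision logic of those three relays can be sketched in a few lines. The thresholds below are hypothetical round numbers for a 6 V system, and a real regulator vibrates its contacts rather than making one clean decision, so treat this as an illustration of the roles, not a model of the electromechanics:

```python
# Toy model of the three relay functions in a mechanical regulator:
# reverse-current cut-out, plus field disconnect on over-current or
# over-voltage. Thresholds are made-up round numbers, not measured values.

def regulator(gen_volts, batt_volts, charge_amps,
              v_max=7.5, i_max=20.0):
    """Return (battery_connected, field_connected) for a 6 V system."""
    # Reverse-current relay: open if the battery would drive the generator
    battery_connected = gen_volts > batt_volts
    # Over-current and over-voltage relays both open the field circuit,
    # collapsing the generator's output until things settle down
    field_connected = charge_amps <= i_max and gen_volts <= v_max
    return battery_connected, field_connected

# Normal charging: both contacts stay closed
print(regulator(7.0, 6.3, 12.0))
# Engine idling, generator output sags: battery gets disconnected
print(regulator(5.0, 6.3, 0.0))
```

In the real device, each "decision" is a coil pulling a sprung contact, and the field relay chatters open and closed many times a second, which is what actually does the regulating.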

We remember European devices with two coils and no field terminal, but the principle is the same. There is never a dull moment when you own an all-mechanical car.

Ternary RISC Processor Achieves Non-Binary Computing Via FPGA

2026-03-17 10:00:44

You would be very hard pressed to find any sort of CPU or microcontroller in a commercial product that uses anything but binary to do its work. And yet, other options exist! Ternary computing involves using trits with three states instead of bits with two. It’s not popular, but there is now a design available for a ternary processor that you could potentially get your hands on.

The device in question is called the 5500FP, as outlined in a research paper from [Claudio Lorenzo La Rosa]. Very few ternary processors exist, and little effort has ever been made to fabricate such a device in real silicon. However, [Claudio] explains that it’s entirely possible to implement a ternary logic processor based on RISC principles by using modern FPGA hardware. The impetus to do so is the perceived benefit of ternary computing—notably, that with three states, each “trit” can store more information than regular old binary “bits.” Beyond that, the use of a “balanced ternary” system, based on logical values of -1, 0, and 1, allows storing both negative and positive numbers without a wasted sign bit, and allows numbers to be negated trivially by inverting all trits together.
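That negation trick is easy to demonstrate. Here’s a quick sketch of balanced-ternary conversion (the encoding scheme in general, not the 5500FP’s internal representation):

```python
# Balanced ternary uses trit values -1, 0, +1; negating a number is
# just flipping every trit, with no sign bit needed.

def to_balanced_ternary(n):
    """Return n as a list of trits (-1, 0, 1), least significant first."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3            # remainder in {0, 1, 2}
        if r == 2:           # a 2 becomes -1 with a carry into the next trit
            r = -1
            n += 1
        trits.append(r)
        n //= 3
    return trits

def from_balanced_ternary(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

eight = to_balanced_ternary(8)      # [-1, 0, 1], i.e. 9 - 1
negated = [-t for t in eight]       # negate by inverting every trit
assert from_balanced_ternary(negated) == -8
```

Compare that with two’s complement, where negation takes an invert plus an add, and where the asymmetric range means one value has no negatable counterpart.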

The research paper does a good job of outlining the basis of this method of computing, as well as the mode of operation of the 5500FP processor. For now, it’s a 24-trit device operating at 20 MHz, but the hope is that in the future it will be possible to move to custom silicon to improve performance and capability. Further development of ternary computing hardware could lead to parts with higher information density and lower power consumption, both highly useful in this day and age, when improvements to conventional processor designs are ever harder to find.

Head over to the Ternary Computing website if you’re intrigued by the Ways of Three and want to learn more. We perhaps don’t expect ternary computing to take over any time soon, given the Soviets didn’t get far with it in the 1950s. Still, the concept exists and is fun to contemplate if you like the mental challenge. Maybe you can even start a rumor that the next iPhone is using an all-ternary processor and spread it across a few tech blogs before the week is out. Let us know how you get on.