Hackaday serves up Fresh Hacks Every Day from around the Internet. Our playful posts are the gold-standard in entertainment for engineers and engineering enthusiasts.
RSS preview of Blog of Hackaday

Python Comes to the Arduino Uno Q

2026-04-16 07:00:46

MicroPython is a well-known and easy-to-use way to program microcontrollers in Python. If you’re using an Arduino Uno Q, though, you’re stuck without it. [Natasha] saves the day by bringing us a subset reimplementation of the machine module for the Arduino Uno Q.

In the past, microcontrollers were primarily programmed in C, but as MicroPython’s popularity has grown over the years, it has become more and more common for introductory microcontroller programming to be done in Python. Python, of course, is generally considered more beginner-friendly than C. [Natasha] presumably wanted to teach this way using an Uno Q, but the usual MicroPython APIs weren’t available. And so, in true hacker fashion, they simply made their own library to implement the most important bits of the familiar API. It currently implements a subset of the machine module: Pin, PWM, ADC, I2C, SPI and UART. While not complete, this certainly has the potential to make the Uno Q easier to use for those familiar with MicroPython.
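To give a flavour of what such a shim involves, here is a toy sketch of a MicroPython-compatible Pin class. This is not [Natasha]’s actual code; the backend dictionary merely stands in for whatever GPIO interface the Uno Q’s host side really exposes.

```python
# Toy sketch of a MicroPython-style machine.Pin shim. The `backend`
# dictionary is a stand-in for a real host-side GPIO interface.
class Pin:
    OUT = "out"
    IN = "in"

    def __init__(self, number, mode=IN, backend=None):
        self.number = number
        self.mode = mode
        # each Pin writes through to whatever backend it was given
        self._backend = backend if backend is not None else {}

    def value(self, v=None):
        """MicroPython-style accessor: no argument reads, an argument writes."""
        if v is None:
            return self._backend.get(self.number, 0)
        self._backend[self.number] = 1 if v else 0

backend = {}
led = Pin(13, Pin.OUT, backend)
led.value(1)   # drive the pin high
```

The point of the MicroPython API is exactly this kind of terseness: construct a Pin, then read or write it with value(), with PWM, ADC and the rest following the same pattern.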

Hacking Fermentation for Infinite Pickles from Pass-thru Bioreactor

2026-04-16 04:00:42

Home-fermented foods are great– they’re healthier, more flavourful, and cheaper than store-bought alternatives. What they aren’t is convenient: you need to prep a big batch of veggies, let it sit, and then you have to store the excess pickles. If you’re not careful, you end up with ancient, over-fermented pickles at the bottom of the crock, or worse– run out of pickles! Surely a fate worse than death. [Cody] at Cody’s Lab has a solution: a continuous-flow fermentation process that keeps just the right supply of pickles coming at all times. Our grandmothers who kept a crock for months in the cold room or root cellar might be confused, but this hack brings pickles into the Just-In-Time framework of the 21st century.

Specifically, this is for lactic acid fermentation, the type that gets you kosher dills, sauerkraut and kimchi, along with a whole mess of other tangy, tasty vegetable treats. Vinegar pickles are a whole other thing. It’s done in a brine, as lactic acid bacteria are salt tolerant in a way that most things that would rot your food and/or make you sick are not. You can reuse the brine over and over, which is what [Cody] is doing: he crafts a U-shaped crock out of old glass bottles and a couple of pickle jars. He cuts the jars into angled pipe segments that are held together with aquarium sealant, which is apparently food safe. It holds water and looks surprisingly good, in that it isn’t hideous.

The bioreactor gets loaded up with veggies on one end, plus lots of salt and spices to taste, plus some cultured brine from an old batch to kickstart everything. The starter isn’t necessary; it just gets things going faster. The initial packing is the hardest part: after filling it the first time, one needs only press new veggies in at one end while removing tasty treats at the other. A special packing tool [Cody] made helps with that, but he plans on adding a larger feed side. Thanks to that kickstart, the pickles were ready to try after about a week– which means his tube is a bit long for his desired dwell time. If you like your pickles more heavily fermented, though, you might prefer this size.

This may be the first time pickles have been featured on Hackaday without turning them into LEDs. We’ve featured plenty of fermentation projects, with automation to help make the best brew or a build for better tempeh, but not a lot of vegetables.

Thanks to [cam72cam] for the tip!

A Look at Full Spectrum 3D Printing

2026-04-16 02:30:51

Many modern desktop 3D printers include the ability to print in multiple colors. However, this typically only works with a few colors at a time, and the more colors you can use, the higher the machine’s cost and complexity. Now, a recent technique allows printers to mix new colors by overlaying thin sheets of different filaments. [YGK3D] looks at how it works in a recent video.

In the early days of 3D printing, there were several competing approaches. You could have separate extruders, each with a different color. Some designs used a single extruder and switched between different filaments on demand. Others melted different filaments together in the hot end.

One advantage of the hotends that melted different materials is that you could make different colors by adjusting the feed rates of the plastics. However, that has its own problems with maintaining flow rate, and you can’t really use multiple material types. But using single or multiple hotends that take one filament at a time means you can only handle as many colors as you have filaments. You can’t mix, say, white and black to get gray.

Using Full Spectrum, you can define virtual filaments, and the software figures out how to approximate the color you want by using thin layers of different colors. The results are amazing. While this technically could work on any printer, in reality, a filament-switching printer will create a ton of waste to mix colors, and a single-filament machine will drive you batty manually swapping filament.
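The underlying idea can be sketched as a brute-force search over short stacks of translucent sublayers. Everything below is an illustrative assumption rather than the actual Full Spectrum algorithm: the palette, the single per-sublayer opacity value, and the simple alpha-compositing model.

```python
from itertools import product

# Hypothetical filament palette (RGB, 0-1) and per-sublayer opacity;
# real values depend on the material and the printed layer thickness.
FILAMENTS = {
    "cyan":    (0.0, 0.7, 0.9),
    "magenta": (0.9, 0.0, 0.6),
    "yellow":  (1.0, 0.9, 0.0),
    "white":   (1.0, 1.0, 1.0),
}
ALPHA = 0.45  # assumed opacity of one thin translucent sublayer

def composite(stack, base=(1.0, 1.0, 1.0)):
    """Alpha-composite sublayers bottom-up over a base color."""
    color = base
    for name in stack:
        layer = FILAMENTS[name]
        color = tuple(l * ALPHA + c * (1 - ALPHA) for l, c in zip(layer, color))
    return color

def best_stack(target, depth=3):
    """Brute-force the sublayer stack whose composite lands closest to target."""
    best, best_err = None, float("inf")
    for n in range(1, depth + 1):
        for stack in product(FILAMENTS, repeat=n):
            c = composite(stack)
            err = sum((a - b) ** 2 for a, b in zip(c, target))
            if err < best_err:
                best, best_err = stack, err
    return best, composite(best)

stack, approx = best_stack((0.5, 0.5, 0.5))  # aim for a mid gray
```

The "virtual filament" then boils down to a recipe of sublayer orderings, which is why translucent material matters: with opaque filament, only the topmost sublayer would contribute.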

So you probably really need a tool changer and translucent plastic. You can see the difference in the test article when using opaque filament vs translucent ones. At low layer heights, four filament colors can give you 39 different colors. At more common layer heights, you may have to settle for 24 different colors.

One issue is that the top and bottom surfaces don’t color well. However, a new plugin that adds texture to the surfaces may help overcome that problem.

We looked at Full Spectrum earlier, but development continues. If you are still trying to get a handle on your filament-switching printer, we can help.

AI For The Skeptics: Attempting To Do Something Useful With It

2026-04-16 01:00:50

There are some subjects which, as a writer, you know need to be written about, but at the same time you feel it necessary to steel yourself for the inevitable barrage of criticism once your work reaches its audience. The latest of these is AI, or more specifically the current enthusiasm for Large Language Models, or LLMs. On one side we have the people who’ve drunk a little too much of the Kool-Aid and are frankly a bit annoying on the subject, while on the other we have those who are infuriated by the technology. Given the tide of low-quality AI slop to be found online, we can see the latter group’s point.

This is the second in what may become an occasional series looking at the subject from the perspective of wanting to find the useful stuff behind the hype; what is likely to fall by the wayside, and what as yet unheard of applications will turn this thing into something more useful than a slop machine or an agent that might occasionally automate some of your tasks correctly. In the previous article I examined the motivation of that annoying Guy In A Suit who many of us will have encountered who wants to use AI for everything because it’s shiny and new, while in this one I’ll try to do something useful with it myself.

What Is An LLM Good At Doing, And What Can It Do For Me?

A screen grab of the BBC News website on April 2nd 2026, showing news from the war in the Persian Gulf.
In turbulent times such as these, news analysis tools can deliver useful insights that aren’t readily visible.

There is plenty of fun to be had in pointing out that AI is good at making low quality but superficially impressive content, and pictures of people who won the jackpot when they were handing out extra fingers. But given an LLM to talk to, why not name a task it can do really well?

I had this chat with a friend of mine, and I agree with him that these things are excellent at summarising information. This is partly what has Guy In A Suit excited because it makes him feel smart, but as it happens I have a real world task at which that might just be useful.

In the past I have occasionally written about a long-time side interest of mine, the computational analysis of news data. I have my own functional but rather clunky software suite for it, and the whole thing runs day in, day out on a Raspberry Pi here in my office. As part of this, over the last couple of decades I’ve tried to tackle quite a few different computational challenges, and one which has eluded me is sentiment analysis. Using a computer to scan a piece of text and work out how positive or negative it is towards a particular subject is especially useful when it comes to news analysis, and since it’s a specialist instance of summarising information, it might be suitable for an LLM.

Sentiment analysis appears at first sight to be easy, but it’s one of those things where the further you descend into it, the more labyrinthine it gets. It’s very easy to rate a piece of text against a list of positive and negative words and give it a positivity score, for example, but it becomes much more difficult once you take into account the context of what is being said. It becomes necessary to perform part-of-speech and object analysis in order to work out what is being said in relation to whom, and then compute a more nuanced score based upon that. The code quickly becomes a quagmire in trying to perform a task that’s easy for a human, and though I have tried, I have never really managed to crack it.

By contrast, an LLM is good at analysing context in a piece of text, and can be instructed in natural language by means of a prompt. I can even tell it how I want the results, which in my case would be a simple numerical index rather than yet more text. It’s almost sounding as though I have the means for a GetSentimentAnalysis(subject,text) function.

First, Find Your LLM

Finding an LLM is as easy as firing up ChatGPT or similar for most people, but given my point of view, I’d prefer to run one that isn’t sitting on a large data-slurping company’s cloud servers. I need a local LLM, and for that I am pleased to say the path is straightforward. I need two things: the model itself, which is the collection of processed data, and an inference engine, which is the software required to perform queries upon it. In reality this means installing the inference engine, and then instructing it to pick up the model from its repository.

There are several choices to be found when it comes to an open source inference engine, and among them I use Ollama. It’s a straightforward-to-use piece of software that provides a ChatGPT-compatible API for programming and has a simple text interface, and perhaps most importantly, it’s in the repositories for my distro, so installing it is particularly easy. Running ollama serve got me the API on http://localhost:11434, I went for the Llama3.2 model as suitable for a workaday laptop by typing ollama pull llama3.2, and I was ready to go. Typing ollama run llama3.2:latest got me a chat prompt in a terminal. It’s shockingly simple, and I can now generate hallucinatory slop in my terminal or by passing bits of JSON to the API endpoint.

In Which I Become A Prompt Engineer

There are a few things amid the AI hype, I have to admit, that get my goat. One of them is the job description “Prompt engineer”. I’m not one of those precious engineers who gets offended at heating engineers using the word “engineer”, but maybe there are limits when “writer” is much closer to the mark. Anyway, if anyone wants to pay me scads of money to write clear English instructions as an engineer with the bit of paper to prove it I am right here, having written the following for my sentiment analyser.

I am going to ask you to perform sentiment analysis on a piece of text, 
where your job is to tell me whether the sentiment towards the subject 
I specify is positive or negative. You will return only a number on a 
linear scale starting at +10 for fully positive, decreasing as positivity 
decreases, through 0 for neutral, and decreasing further as negativity 
increases, to -10 for fully negative. Please do not return any extra notes. 
Please perform sentiment analysis on only the following text, towards 
( put the subject of your query here ):

There are enough guides to using the API that it’s not worth making another one here, but passing this to the API is a simple enough process. On a six-year-old ThinkPad that’s also running the usual software of a working Hackaday writer it’s not especially fast, taking around twenty seconds to return a value. I’ve been trying it with the text of BBC News articles covering global events, and I can say that for relatively little work I’ve created an effective sentiment analyser. It will compute sentiment for multiple people mentioned in an article, and it will return 0 as a neutral value for people who don’t appear in the source text.
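For the curious, here is roughly what such a call looks like using only the Python standard library against Ollama’s /api/generate endpoint. The function name echoes the hypothetical GetSentimentAnalysis(subject, text) from earlier, and it assumes the model obeys the prompt and returns a bare number.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

PROMPT_TEMPLATE = (
    "I am going to ask you to perform sentiment analysis on a piece of text, "
    "where your job is to tell me whether the sentiment towards the subject "
    "I specify is positive or negative. You will return only a number on a "
    "linear scale starting at +10 for fully positive, decreasing as positivity "
    "decreases, through 0 for neutral, and decreasing further as negativity "
    "increases, to -10 for fully negative. Please do not return any extra notes. "
    "Please perform sentiment analysis on only the following text, towards "
    "{subject}:\n\n{text}"
)

def get_sentiment_analysis(subject, text, model="llama3.2"):
    """Ask a local Ollama instance to score sentiment, returning an int in [-10, 10]."""
    payload = json.dumps({
        "model": model,
        "prompt": PROMPT_TEMPLATE.format(subject=subject, text=text),
        "stream": False,  # wait for the whole reply rather than a token stream
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["response"].strip()
    return int(reply)  # the prompt asks for a bare number
```

In practice you may want a little defensive parsing around that final int(), since even a well-prompted model occasionally returns stray text.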

Wow! I Did Something Useful With It!

So in this piece I’ve taken a particularly annoying problem I’ve faced and failed at in the past, identified it as something an LLM might deliver on, and in a surprisingly short time, come up with a working solution. I am of course by no means the first person to use an LLM for this particular task. If you want, you can use it as an effective but slow and energy-intensive sentiment analyser, but maybe that’s not the point here.

What I’m trying to demonstrate is that the LLM is just another tool, like your pliers. Just like your pliers it can do jobs other than the ones it was designed for, but some of them it’s not very good at and it’s certainly not the tool to replace all tools. If you identify a task at which it’s particularly good though, then just like your pliers it can do a very effective job.

I wish some people would take the above paragraph to heart.

A 6502 All in the Data

2026-04-15 23:30:05

Emulating a 6502 shouldn’t be that hard on a modern computer. Maybe that’s why [lasect] decided to make it a bit harder. The PG_6502 emulator uses PostgreSQL. All the CPU resources are database tables, and all opcodes are stored procedures. Huh.

The database is pretty simple. The pg6502.cpu table has a single row that holds the registers. Then there is a pg6502.mem table that has 64K rows, each representing a byte. There’s also a pg6502.opcode_table that stores information about each instruction. For example, the 0xA9 opcode is an immediate LDA and requires two bytes.

The pg6502.op_lda procedure grabs that information and updates the tables appropriately. In particular, it will load the next byte, increment the program counter, set the accumulator, and update the flags.
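To get a feel for the approach without installing PostgreSQL, here’s a miniature of the same idea in Python using the built-in sqlite3 module. The table layout loosely mirrors [lasect]’s, but everything else is simplified to a single immediate-mode LDA.

```python
import sqlite3

# Sketch of the PG_6502 idea using SQLite: registers live in a one-row
# table, memory is one row per byte, and an opcode is a routine that
# works purely through SQL statements.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE cpu (pc INTEGER, a INTEGER, flag_z INTEGER, flag_n INTEGER);
    INSERT INTO cpu VALUES (0x0600, 0, 0, 0);
    CREATE TABLE mem (addr INTEGER PRIMARY KEY, val INTEGER);
""")
db.executemany("INSERT INTO mem VALUES (?, 0)", [(a,) for a in range(0x10000)])

def op_lda_imm():
    """LDA #imm (0xA9): load the byte after the opcode into A, set Z/N flags."""
    (pc,) = db.execute("SELECT pc FROM cpu").fetchone()
    (operand,) = db.execute("SELECT val FROM mem WHERE addr = ?", (pc + 1,)).fetchone()
    db.execute(
        "UPDATE cpu SET a = ?, pc = pc + 2, flag_z = ?, flag_n = ?",
        (operand, int(operand == 0), int(operand >= 0x80)),
    )

# Poke LDA #$42 into memory at the program counter, then "execute" it.
db.execute("UPDATE mem SET val = 0xA9 WHERE addr = 0x0600")
db.execute("UPDATE mem SET val = 0x42 WHERE addr = 0x0601")
op_lda_imm()
```

A dispatcher loop that reads the opcode at the program counter and calls the matching routine is all that separates this toy from a full fetch-execute cycle.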

Honestly, we’ve wondered why more people don’t use databases instead of the file system for structured data, but for us, this may be a bit much. Still, it is undoubtedly unique, and if you read SQL, you have to admit the logic is quite clear.

We can’t throw stones. We’ve been known to do horrible emulators in spreadsheets, which is arguably an even worse idea. We aren’t the only ones.

A Tale of Cheap Hard Drives and Expensive Lessons

2026-04-15 22:00:46

When it comes to electronic gadgets, I’m a sucker for a good deal. If it’s got a circuit board on the inside and a low enough price tag on the outside, you can be pretty sure I’ll be taking it home with me. So a few years ago, when I saw USB external hard drives on the shelf of a national discount chain for just $10, I couldn’t resist picking one up. What I didn’t realize at the time however, was that I’d be getting more in the bargain than just some extra storage space.

It’s a story that I actually hadn’t thought of for some time — it only came to mind recently after reading about how the rising cost of computer components has pushed more users to the secondhand market than ever before. That makes the lessons from this experience, for both the buyer and the seller, particularly relevant.

What’s in the Box?

It wasn’t just the low price that attracted me to these hard drives; it was also the stated capacity. They were listed as 80 GB, which is an unusually low figure to see on a box in 2026. Obviously nobody is making 80 GB drives these days, so given the price, my first thought was that it would contain a jerry-rigged USB flash drive. But if that was the case, you would expect the capacity to be some power of two.

Upon opening up the case, what I found inside was somehow both surprising and incredibly obvious. The last thing I expected to see was an actual spinning hard drive, but only because I lacked the imagination of whoever put this product together. I was thinking in terms of newly manufactured, modern hardware. Instead, this drive was nearly 20 years old, and must have been available for pennies on the dollar, since these drives were presumably just collecting dust in a warehouse somewhere.

Or at least, that’s what I assumed. After all, surely nobody would have the audacity to take a bunch of ancient used hard drives and repackage them as new products…right?

Certified Pre-Owned

Once I saw that the drive inside the enclosure was older than both of my children, I got curious about its history. Especially given the scuff marks and dirt on the drive itself. A new old stock drive from 2008 is one thing, but if this drive actually had any time on the clock, that’s a very different story. Forget the implications of selling used merchandise as new — if the drive has seen significant use, even $10 is a steep price.

Fortunately, we can easily find out this information through Self-Monitoring, Analysis, and Reporting Technology (SMART). Using the smartctl tool, we can get a readout of all the drive’s SMART parameters and figure out what we’re dealing with:

Well, now we know why these things are so cheap. According to the SMART data, this particular drive has gone through 9,538 power cycles and accumulated a whopping 31,049 hours of total powered on time. I’ll save you the math, that’s a little over 3.5 years.
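For anyone who wants to check the math anyway, it’s a one-liner:

```python
power_on_hours = 31_049                        # SMART attribute 9: total powered-on time
print(round(power_on_hours / (24 * 365), 2))   # prints 3.54 (years of continuous operation)
```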

The term “used” barely covers it: this drive has been beat to hell.

Buried Treasure

It’s a fair bet that anyone finding themselves regularly reading Hackaday possesses an inquisitive mind. So at this point, I’m willing to bet you’re wondering the same thing I did: if this drive has been used for years, could it still contain files from its previous life?

Obviously it was formatted before getting boxed up and put back on the shelf. But frankly, anyone who’s unscrupulous enough to pass off decades-old salvaged drives as new probably isn’t putting in the effort to make sure said drives are securely wiped.

I was willing to bet that the drive went through nothing more than a standard quick format, and that even a simplistic attempt at file recovery would return some interesting results. As it so happens, “Simplistic Attempt” is basically my middle name, so I fired up PhotoRec and pointed it at our bargain drive.

It only took a few minutes before the file counters started jumping, proving that no effort was made to properly sanitize the drive before repackaging it. So not only is this drive old and used, but it still contains information from wherever it was for all those years. If it came from an individual’s personal computer, the information could be private in nature. If it was a business machine, the files may contain valuable proprietary data.

In this case, it looks to be a little of both. I didn’t spend a lot of time poring over the recovered files, but I spot checked enough of them to know that there’s somebody in China who probably wouldn’t be too happy to know their old hard drive ended up on the shelf in an American discount store.

For one thing we’ve got hundreds of personal photographs, ranging from vacation shots to formal portraits.

The pictures show fun in the sun, but the DOC and PDF files are all business. I won’t reveal the name of the company this individual worked for, but I found business proposals for various civil engineering projects within the Minhang District of Shanghai worth millions of dollars.

Once is Happenstance…

I know what you’re wondering, Dear Reader. If the first drive I pulled off the shelf happened to have a trove of personal and professional information on it, what are the chances that it would happen again? Perhaps it was a fluke, and the rest of the drives would be blank.

That’s an excellent question, and of course we can’t make a determination either way with only a single point of data. Which is why I went back the next day and bought three more drives.

Right off the bat, it’s worth noting that no two drives are actually the same. Two are Western Digital and two are Fujitsu, but none of them have the same model number. The keen-eyed reader will also note that one of the drives is 100 GB, but it has been partitioned to 80 GB to match the others.

Three of the drives were manufactured in 2008, and one is from 2007. I won’t go through the SMART data for each one, but suffice it to say that each drive has several thousand hours on the clock. Although for what it’s worth, the first drive is the lifetime leader by far.

In terms of file recovery, each drive gave up several gigabytes worth of data. In addition to the one we’ve already looked at, two more were clearly the primary drives in Windows boxes, and each contained a mix of personal data and technical documents such as AutoCAD drawings, datasheets, bills of materials, and schematics. Given their contents, I would guess the drives came from off-lease computers that were used by engineering firms.

The fourth drive was different. It contained more than 32 GB worth of Hollywood movies, the most recent of which was released in 2010. I imagine this drive came out of somebody’s media center. Now I haven’t sailed the high seas, as it were, since my teenage years, but even if I had wanted to add these titles to my ill-gotten trove of films, it was a non-starter. Given the time period in which they were downloaded, most of them were below DVD resolution.

Plus, they were all dubbed in Chinese. Not exactly my idea of a movie night.

A Cautionary Tale

Admittedly, given that they were being sold in a home electronics chain-store, the likelihood that these drives would be purchased by somebody with the means to extract any meaningful data from them isn’t very high. But since you’re reading this, you know the chances clearly aren’t zero. I didn’t have any malicious intent, but the same can’t necessarily be said for others.

So what can we take away from this? To start with, if you’re planning on selling or giving away any of your old drives, make sure they are properly wiped. In the dusty past, the recommendation would have been to use the Linux-based Darik’s Boot and Nuke (DBAN) live CD, but the project was acquired back in 2012 and development was halted a few years later. Luckily, the GPLv2 tool that DBAN actually ran against the drive was forked and is now available as nwipe.

But as mentioned earlier, I get the impression that these drives were from businesses that unloaded their old machines. In that case, the users can’t really be blamed, as they wouldn’t have been able to wipe the drives even if they knew ahead of time their work computers were getting swapped out. But they certainly could have made an effort to keep their personal data off of company property. It’s one thing to have some corporate secrets stolen down the line, but you don’t want pictures of your kids to be in the mix.

In short, nobody cares about what happens with your personal data more than you do, so make sure it doesn’t get away from you. Otherwise some bargain-hunting nerd might be pawing through it in a few years.