RSS preview of the Hackaday blog

Haiku No Longer x86-Only: Now Boots On ARM In QEMU

2026-04-16 10:00:57

Ever since it was called OpenBeOS, Haiku has targeted the x86 platform. That makes good sense: it’s hard enough maintaining a niche system on ubiquitous hardware. But x86 isn’t the only game in town anymore. Apple’s doing very well on ARM, Linux runs on oodles of ARM SBCs, and even Windows uh, exists, on that architecture, so why not Haiku? That’s what [smrobtzz] figured, and thanks to his work you can now run Haiku on ARM, in QEMU.

There’s no image available as yet — you still need to bootstrap your own from a working system, and ironically that system cannot be Haiku. [smrobtzz] apparently used macOS, which makes sense as his ultimate goal is to go where only Asahi Linux has gone before and boot Haiku on his M1 MacBook. There had been previous efforts to get Haiku going on Raspberry Pi hardware, which seems logical considering how lightweight the operating system is, but they’re apparently nowhere near booting either. QEMU is a good start.

Interestingly, according to the ports page, Haiku is “functional” on both RISC-V QEMU and the now-discontinued HiFive Unmatched SBC. We don’t seem to have covered it, but that milestone happened five years ago. Given how most RISC-V boards currently available are a bit slow for modern desktop Linux, Haiku would likely be a breath of fresh air. The BeOS-descended system might be single user, but it’s snappy.

We reported a couple of years back that Haiku was daily-drivable on x86, and it’s only gotten better since then, assuming you choose the right hardware. Hardware support is always the hard part about alternative OSes, but Haiku users are absolutely spoiled compared to fans of MorphOS, which still only runs on G4 or G5 PowerPC, and even then only on some hardware.

Has Python Come To The Arduino Uno?

2026-04-16 07:00:46

MicroPython is a well-known and easy-to-use way to program microcontrollers in Python. If you’re using an Arduino Uno Q, though, you’re stuck without it. [Natasha] saves the day by bringing us a subset reimplementation of machine for the Arduino Uno Q.

In the past, microcontrollers were primarily programmed in C, but as MicroPython’s popularity has grown over the years, it has become more and more common for introductory microcontroller programming to be done in Python. Python, of course, is generally considered more beginner-friendly than C. [Natasha] presumably wanted to teach this way using an Uno Q, but the usual MicroPython APIs weren’t available. And so, in true hacker fashion, they simply made their own library to implement the most important bits of the familiar API. It currently covers a subset of the machine module: Pin, PWM, ADC, I2C, SPI and UART. While not complete, this certainly has the potential to make the Uno Q easier to use for those familiar with MicroPython.
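To give a flavour of what that familiarity buys you, here’s a minimal sketch of driving an LED the MicroPython way. It assumes [Natasha]’s library mirrors the standard machine.Pin and machine.PWM names; the pin numbers are placeholders, not anything specific to the Uno Q.

from machine import Pin, PWM  # assumes the library mirrors MicroPython's naming
import time

led = Pin(13, Pin.OUT)        # placeholder pin number, adjust for your board
led.value(1)                  # classic blink: on...
time.sleep(0.5)
led.value(0)                  # ...and off

pwm = PWM(Pin(9))             # fade instead of blink
pwm.freq(1000)
for duty in range(0, 65536, 4096):
    pwm.duty_u16(duty)        # MicroPython-style 16-bit duty cycle
    time.sleep(0.05)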

Hacking Fermentation: Unlimited Pickle Production With A Flow-Through Bioreactor

2026-04-16 04:00:42

Home-fermented foods are great; they’re healthier, more flavourful, and cheaper than store-bought alternatives. What they aren’t is convenient: you need to prep a big batch of veggies, let it sit, and then you have to store the excess pickles. If you’re not careful, you end up with ancient, over-fermented pickles at the bottom of the crock, or worse, run out of pickles! Surely a fate worse than death. [Cody] at Cody’s Lab has a solution: a continuous-flow fermentation process that keeps just the right supply of pickles coming at all times. Our grandmothers who kept a crock for months in the cold room or root cellar might be confused, but this hack brings pickles into the Just-In-Time framework of the 21st century.

Specifically, this is lactic acid fermentation, the type that gets you kosher dills, sauerkraut, and kimchi, along with a whole mess of other tangy, tasty vegetable treats. Vinegar pickles are a whole other thing. It’s done in a brine, as the lactic acid bacteria are salt-tolerant in a way that most of the things that would rot your food and/or make you sick are not. You can reuse the brine over and over, which is what [Cody] is doing: he crafts a U-shaped crock out of old glass bottles and a couple of pickle jars. He cuts the jars into angled pipe segments that are held together with aquarium sealant, which is apparently food safe. It holds water and looks surprisingly good, in that it isn’t hideous.

The bioreactor gets loaded up with veggies on one end, plus lots of salt and spices to taste, plus some cultured brine from an old batch to kickstart everything. The starter isn’t necessary; it just gets things going faster. The initial packing is the hardest: after filling it the first time, one need only press new veggies in at one end while removing tasty treats at the other. A special packing tool [Cody] makes helps with that, but he plans on adding a larger feed side. Thanks to that kickstart, the pickles were ready to try after about a week, which means his tube is a bit long for his desired dwell time. If you like your pickles more fermented, then this size might suit you.

This may be the first time pickles have been featured on Hackaday without turning them into LEDs. We’ve featured plenty of fermentation projects, with automation to help make the best brew or a build for better tempeh, but not a lot of vegetables.

Thanks to [cam72cam] for the tip!

A Look At Full Spectrum 3D Printing

2026-04-16 02:30:51

Many modern desktop 3D printers include the ability to print in multiple colors. However, this typically only works with a few colors at a time, and the more colors you can use, the higher the machine’s cost and complexity. Now, a recent technique allows printers to mix new colors by overlaying thin sheets of different filaments. [YGK3D] looks at how it works in a recent video.

In the early days of 3D printing, there were several competing approaches. You could have separate extruders, each with a different color. Some designs used a single extruder and switched between different filaments on demand. Others melted different filaments together in the hot end.

One advantage of the hotends that melted different materials is that you could make different colors by adjusting the feed rates of the plastics. However, that has its own problems with maintaining flow rate, and you can’t really use multiple material types. But using single or multiple hotends that take one filament at a time means you can only handle as many colors as you have filaments. You can’t mix, say, white and black to get gray.

Using Full Spectrum, you can define virtual filaments, and the software figures out how to approximate the color you want by using thin layers of different colors. The results are amazing. While this technically could work on any printer, in reality, a filament-switching printer will create a ton of waste to mix colors, and a single-filament machine will drive you batty manually swapping filament.

So you probably really need a tool changer and translucent plastic. You can see the difference in the test article when using opaque filaments versus translucent ones. At low layer heights, four filament colors can give you 39 different colors. At more common layer heights, you may have to settle for 24 different colors.
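To get a feel for the idea (though not for Full Spectrum’s actual math), here’s a toy Python sketch that treats each translucent filament as a partially opaque layer and brute-forces a short stack whose composited colour comes closest to a target. The filament colours, the per-layer opacity, and the simple alpha-compositing model are all illustrative assumptions; real translucent filament doesn’t blend quite this neatly.

from itertools import product

# Hypothetical translucent filaments: an (R, G, B) colour plus a per-layer opacity
FILAMENTS = {
    "cyan":    ((0, 160, 210), 0.35),
    "magenta": ((200, 0, 120), 0.35),
    "yellow":  ((230, 200, 0), 0.35),
    "white":   ((250, 250, 250), 0.35),
}

def composite(stack, base=(255, 255, 255)):
    # Alpha-composite thin layers bottom-up over a base colour
    r, g, b = base
    for name in stack:
        (fr, fg, fb), a = FILAMENTS[name]
        r, g, b = a * fr + (1 - a) * r, a * fg + (1 - a) * g, a * fb + (1 - a) * b
    return (r, g, b)

def closest_stack(target, depth=4):
    # Brute-force every stack of up to `depth` layers and keep the best match
    best, best_err = None, float("inf")
    for n in range(1, depth + 1):
        for stack in product(FILAMENTS, repeat=n):
            err = sum((c - t) ** 2 for c, t in zip(composite(stack), target))
            if err < best_err:
                best, best_err = stack, err
    return best

print(closest_stack((120, 60, 140)))  # find a stack for a purple-ish "virtual filament"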

One issue is that the top and bottom surfaces don’t color well. However, a new plugin that adds texture to the surfaces may help overcome that problem.

We looked at Full Spectrum earlier, but development continues. If you are still trying to get a handle on your filament-switching printer, we can help.

AI For The Skeptics: Trying It Out For Practical Use

2026-04-16 01:00:50

There are some subjects which, as a writer, you know need to be written about, but at the same time you feel it necessary to steel yourself for the inevitable barrage of criticism once your work reaches its audience. Of these, the latest is AI, or more specifically the current enthusiasm for Large Language Models, or LLMs. On one side we have the people who’ve drunk a little too much of the Kool-Aid and are frankly a bit annoying on the subject, while on the other we have those who are infuriated by the technology. Given the tide of low-quality AI slop to be found online, we can see the latter group’s point.

This is the second in what may become an occasional series looking at the subject from the perspective of wanting to find the useful stuff behind the hype: what is likely to fall by the wayside, and what as-yet-unheard-of applications will turn this thing into something more useful than a slop machine or an agent that might occasionally automate some of your tasks correctly. In the previous article I examined the motivation of that annoying Guy In A Suit whom many of us will have encountered, who wants to use AI for everything because it’s shiny and new, while in this one I’ll try to do something useful with it myself.

What Is An LLM Good At Doing, And What Can It Do For Me?

A screen grab of the BBC News website on April 2nd 2026, showing news from the war in the Persian Gulf.
In turbulent times such as these, news analysis tools can deliver useful insights that aren’t readily visible.

There is plenty of fun to be had in pointing out that AI is good at making low quality but superficially impressive content, and pictures of people who won the jackpot when they were handing out extra fingers. But given an LLM to talk to, why not name a task it can do really well?

I had this chat with a friend of mine, and I agree with him that these things are excellent at summarising information. This is partly what has Guy In A Suit excited because it makes him feel smart, but as it happens I have a real world task at which that might just be useful.

In the past I have occasionally written about a long-time side interest of mine, the computational analysis of news data. I have my own functional but rather clunky software suite for it, and the whole thing runs day in, day out on a Raspberry Pi here in my office. As part of this, over the last couple of decades I’ve tried to tackle quite a few different computational challenges, and one which has eluded me is sentiment analysis. Using a computer to scan a piece of text and work out how positive or negative it is towards a given subject is particularly useful when it comes to news analysis, and since it’s a specialist instance of summarising information, it might be suitable for an LLM.

Sentiment analysis appears at first sight to be easy, but it’s one of those things where the further you descend into it, the more labyrinthine it gets. It’s very easy to rate a piece of text against a list of positive and negative words and give it a positivity score, for example, but it becomes much more difficult once you take into account the context of what is being said. It becomes necessary to perform part-of-speech and object analysis in order to work out what is being said in relation to whom, and then compute a more nuanced score based upon that. The code quickly becomes a quagmire in trying to perform a task that’s easy for a human, and though I have tried, I have never really managed to crack it.

By contrast, an LLM is good at analysing context in a piece of text, and can be instructed in natural language by means of a prompt. I can even tell it how I want the results returned, which in my case would be a simple numerical index rather than yet more text. It almost sounds as though I have the means for a GetSentimentAnalysis(subject,text) function.

First, Find Your LLM

Finding an LLM is as easy as firing up ChatGPT or similar for most people, but coming at this from the point of view I do, I’d prefer to run one that isn’t sitting on a large data-slurping company’s cloud servers. I need a local LLM, and for that I am pleased to say the path is straightforward. I need two things: the model itself, which is the collection of processed data, and an inference engine, which is the software required to perform queries upon it. In reality this means installing the inference engine, and then instructing it to pick up the model from its repository.

There are several choices to be found when it comes to an open source inference engine, and among them I use Ollama. It’s a straightforward piece of software that provides a ChatGPT-compatible API for programming and has a simple text interface, and perhaps most importantly it’s in the repositories for my distro, so installing it is particularly easy. ollama serve got me the API on http://localhost:11434, I went for the Llama3.2 model as suitable for a workaday laptop by typing ollama pull llama3.2, and I was ready to go. Typing ollama run llama3.2:latest got me a chat prompt in a terminal. It’s shockingly simple, and I can now generate hallucinatory slop in my terminal or by passing bits of JSON to the API endpoint.

In Which I Become A Prompt Engineer

There are a few things amid the AI hype, I have to admit, that get my goat. One of them is the job description “Prompt engineer”. I’m not one of those precious engineers who gets offended at heating engineers using the word “engineer”, but maybe there are limits when “writer” is much closer to the mark. Anyway, if anyone wants to pay me scads of money to write clear English instructions as an engineer with the bit of paper to prove it, I am right here, having written the following for my sentiment analyser.

I am going to ask you to perform sentiment analysis on a piece of text, 
where your job is to tell me whether the sentiment towards the subject 
I specify is positive or negative. You will return only a number on a 
linear scale starting at +10 for fully positive, decreasing as positivity 
decreases, through 0 for neutral, and decreasing further as negativity 
increases, to -10 for fully negative. Please do not return any extra notes. 
Please perform sentiment analysis on only the following text, towards 
( put the subject of your query here ):

There are enough guides to using the API that it’s not worth making another one here, but passing this to the API is a simple enough process. On a six-year-old ThinkPad that’s also running the usual software of a working Hackaday writer it’s not especially fast, taking around twenty seconds to return a value. I’ve been trying it with the text of BBC News articles covering global events, and I can say that for relatively little work I’ve created an effective sentiment analyser. It will compute sentiment for multiple people mentioned in an article, and it will return 0 as a neutral value for people who don’t appear in the source text.
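For anyone who wants to wire that up themselves, here’s a minimal Python sketch of the idea: the prompt above goes to Ollama’s local /api/generate endpoint and the single number comes back out. The endpoint, the llama3.2 model name, and the “return only a number” contract are as described above; the function name, error handling, and example call are just my own illustration, not the exact code behind this article.

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

PROMPT = ("I am going to ask you to perform sentiment analysis on a piece of text, "
          "where your job is to tell me whether the sentiment towards the subject "
          "I specify is positive or negative. You will return only a number on a "
          "linear scale starting at +10 for fully positive, decreasing as positivity "
          "decreases, through 0 for neutral, and decreasing further as negativity "
          "increases, to -10 for fully negative. Please do not return any extra notes. "
          "Please perform sentiment analysis on only the following text, towards "
          "{subject}:\n\n{text}")

def get_sentiment(subject, text):
    # Build the request body; stream=False gives one complete JSON reply
    body = json.dumps({
        "model": "llama3.2",
        "prompt": PROMPT.format(subject=subject, text=text),
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    # The model should return just a number; fall back to neutral if it rambles
    try:
        return int(reply["response"].strip())
    except ValueError:
        return 0

print(get_sentiment("pickles", "Home-fermented pickles are delicious."))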

Wow! I Did Something Useful With It!

So in this piece I’ve taken a particularly annoying problem I’ve faced in the past and failed at, identified it as something at which an LLM might deliver, and in a surprisingly short time, come up with a working solution. I am of course by no means the first person to use an LLM for this particular task. If you want, you can use it as an effective but slow and energy-intensive sentiment analyser, but maybe that’s not the point here.

What I’m trying to demonstrate is that the LLM is just another tool, like your pliers. Just like your pliers it can do jobs other than the ones it was designed for, but some of them it’s not very good at and it’s certainly not the tool to replace all tools. If you identify a task at which it’s particularly good though, then just like your pliers it can do a very effective job.

I wish some people would take the above paragraph to heart.

The 6502: It’s All In The Data

2026-04-15 23:30:05

Emulating a 6502 shouldn’t be that hard on a modern computer. Maybe that’s why [lasect] decided to make it a bit harder. The PG_6502 emulator uses PostgreSQL. All the CPU resources are database tables, and all opcodes are stored procedures. Huh.

The database is pretty simple. The pg6502.cpu table has a single row that holds the registers. Then there is a pg6502.mem table that has 64K rows, each representing a byte. There’s also a pg6502.opcode_table that stores information about each instruction. For example, the 0xA9 opcode is an immediate LDA and requires two bytes.

The pg6502.op_lda procedure grabs that information and updates the tables appropriately. In particular, it will load the next byte, increment the program counter, set the accumulator, and update the flags.
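If the SQL is hard to picture, here’s a rough stand-in using Python’s built-in sqlite3 rather than [lasect]’s actual PostgreSQL schema: a one-row register table, a 64K-row memory table, and an immediate-mode LDA that loads the next byte, bumps the program counter, and updates the zero and negative flags, as described above. The table and column names here are my own guesses, not the real pg6502 definitions.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cpu (pc INTEGER, a INTEGER, flag_z INTEGER, flag_n INTEGER)")
db.execute("CREATE TABLE mem (addr INTEGER PRIMARY KEY, val INTEGER)")
db.execute("INSERT INTO cpu VALUES (?, 0, 0, 0)", (0x0600,))      # reset state
db.executemany("INSERT INTO mem VALUES (?, ?)",
               [(a, 0) for a in range(65536)])                    # 64K rows, one per byte
db.executemany("UPDATE mem SET val = ? WHERE addr = ?",
               [(0xA9, 0x0600), (0x42, 0x0601)])                  # LDA #$42 at the reset vector

def op_lda_immediate():
    # Rough equivalent of pg6502.op_lda for immediate addressing
    (pc,) = db.execute("SELECT pc FROM cpu").fetchone()
    (operand,) = db.execute("SELECT val FROM mem WHERE addr = ?", (pc + 1,)).fetchone()
    db.execute("UPDATE cpu SET a = ?, pc = pc + 2, flag_z = ?, flag_n = ?",
               (operand, int(operand == 0), int(operand >= 0x80)))

op_lda_immediate()
print(db.execute("SELECT a, pc FROM cpu").fetchone())             # accumulator 0x42, PC past the 2-byte op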

Honestly, we’ve wondered why more people don’t use databases instead of the file system for structured data, but for us, this may be a bit much. Still, it is undoubtedly unique, and if you read SQL, you have to admit the logic is quite clear.

We can’t throw stones. We’ve been known to do horrible emulators in spreadsheets, which is arguably an even worse idea. We aren’t the only ones.