2026-04-16 02:30:51

Many modern desktop 3D printers include the ability to print in multiple colors. However, this typically only works with a few colors at a time, and the more colors you can use, the higher the machine’s cost and complexity. A recent technique, though, allows printers to mix new colors by overlaying thin layers of different filaments. [YGK3D] looks at how it works in a recent video.
In the early days of 3D printing, there were several competing approaches. You could have separate extruders, each with a different color. Some designs used a single extruder and switched between different filaments on demand. Others melted different filaments together in the hot end.
One advantage of the hotends that melted different materials is that you could make different colors by adjusting the feed rates of the plastics. However, that has its own problems with maintaining flow rate, and you can’t really use multiple material types. But using single or multiple hotends that take one filament at a time means you can only handle as many colors as you have filaments. You can’t mix, say, white and black to get gray.
Using the technique, called Full Spectrum, you can define virtual filaments, and the software figures out how to approximate the color you want by using thin layers of different colors. The results are amazing. While this technically could work on any printer, in reality a filament-switching printer will create a ton of waste to mix colors, and a single-filament machine will drive you batty with manual filament swaps.
So you probably really need a tool changer and translucent plastic. You can see the difference in the test article when using opaque filament vs translucent ones. At low layer heights, four filament colors can give you 39 different colors. At more common layer heights, you may have to settle for 24 different colors.
One issue is that the top and bottom surfaces don’t color well. However, a new plugin that adds texture to the surfaces may help overcome that problem.
We looked at Full Spectrum earlier, but development continues. If you are still trying to get a handle on your filament-switching printer, we can help.
2026-04-16 01:00:50

There are some subjects which, as a writer, you know need to be written about, but at the same time you feel it necessary to steel yourself for the inevitable barrage of criticism once your work reaches its audience. The latest of these is AI, or more specifically the current enthusiasm for Large Language Models, or LLMs. On one side we have the people who’ve drunk a little too much of the Kool-Aid and are frankly a bit annoying on the subject, while on the other we have those who are infuriated by the technology. Given the tide of low-quality AI slop to be found online, we can see the latter group’s point.
This is the second in what may become an occasional series looking at the subject from the perspective of wanting to find the useful stuff behind the hype: what is likely to fall by the wayside, and what as-yet-unheard-of applications will turn this thing into something more useful than a slop machine or an agent that might occasionally automate some of your tasks correctly. In the previous article I examined the motivation of that annoying Guy In A Suit, who many of us will have encountered, and who wants to use AI for everything because it’s shiny and new; in this one I’ll try to do something useful with it myself.

There is plenty of fun to be had in pointing out that AI is good at making low quality but superficially impressive content, and pictures of people who won the jackpot when they were handing out extra fingers. But given an LLM to talk to, why not name a task it can do really well?
I had this chat with a friend of mine, and I agree with him that these things are excellent at summarising information. This is partly what has Guy In A Suit excited because it makes him feel smart, but as it happens I have a real world task at which that might just be useful.
In the past I have occasionally written about a long-time side interest of mine, the computational analysis of news data. I have my own functional but rather clunky software suite for it, and the whole thing runs day in day out on a Raspberry Pi here in my office. As part of this over the last couple of decades I’ve tried to tackle quite a few different computational challenges, and one which has eluded me is sentiment analysis. Using a computer to scan a particular piece of text, and work out how positive or negative it is towards a particular subject is particularly useful when it comes to working with news analysis, and since it’s a specialist instance of summarising information, it might be suitable for an LLM.
Sentiment analysis appears at first sight to be easy, but it’s one of those things which the further you descend into it, the more labyrinthine it gets. It’s very easy to rate a piece of text against a list of positive and negative words and give it a positivity score, for example, but it becomes much more difficult once you take into account the context of what is being said. It becomes necessary to perform part-of-speech and object analysis in order to work out what is being said in relation to whom, and then compute a more nuanced score based upon that. The code quickly becomes a quagmire in trying to perform a task that’s easy for a human, and though I have tried, I have never really managed to crack it.
By contrast, an LLM is good at analysing context in a piece of text, and can be instructed in natural language by means of a prompt. I can even tell it how I want the results, which in my case would be a simple numerical index rather than yet more text. It’s almost sounding as though I have the means for a GetSentimentAnalysis(subject,text) function.
Finding an LLM is as easy as firing up ChatGPT or similar for most people, but given my point of view, I’d prefer to run one that’s not sitting on a large data-slurping company’s cloud servers. I need a local LLM, and for that I am pleased to say the path is straightforward. I need two things: the model itself, which is the collection of processed data, and an inference engine, which is the software required to perform queries upon it. In reality this means installing the inference engine, and then instructing it to pick up the model from its repository.
There are several choices to be found when it comes to an open source inference engine, and among them I use Ollama. It’s a straightforward piece of software that provides a ChatGPT-compatible API for programming and has a simple text interface, and perhaps most importantly it’s in the repositories for my distro, so installing it is particularly easy. ollama serve got me the API on http://localhost:11434, I went for the Llama3.2 model as suitable for a workaday laptop by typing ollama pull llama3.2, and I was ready to go. Typing ollama run llama3.2:latest got me a chat prompt in a terminal. It’s shockingly simple, and I can now generate hallucinatory slop in my terminal or by passing bits of JSON to the API endpoint.
There are a few things amid the AI hype, I have to admit, that get my goat. One of them is the job description “Prompt engineer”. I’m not one of those precious engineers who gets offended at heating engineers using the word “engineer”, but maybe there are limits when “writer” is much closer to the mark. Anyway, if anyone wants to pay me scads of money to write clear English instructions as an engineer with the bit of paper to prove it I am right here, having written the following for my sentiment analyser.
I am going to ask you to perform sentiment analysis on a piece of text, where your job is to tell me whether the sentiment towards the subject I specify is positive or negative. You will return only a number on a linear scale starting at +10 for fully positive, decreasing as positivity decreases, through 0 for neutral, and decreasing further as negativity increases, to -10 for fully negative. Please do not return any extra notes. Please perform sentiment analysis on only the following text, towards ( put the subject of your query here ):
There are enough guides to using the API that it’s not worth making another one here, but passing this to the API is a simple enough process. On a six-year-old ThinkPad that’s also running the usual software of a working Hackaday writer it’s not especially fast, taking around twenty seconds to return a value. I’ve been trying it with the text of BBC News articles covering global events, and I can say that for relatively little work I’ve created an effective sentiment analyser. It will compute sentiment for multiple people mentioned in an article, and it will return 0 as a neutral value for people who don’t appear in the source text.
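For the curious, the round trip looks something like the sketch below: a minimal Python take on that GetSentimentAnalysis(subject, text) function, built on Ollama’s standard /api/generate endpoint. The prompt wording is abridged from the one above, and the score-parsing helper is my own defensive guesswork for a model that doesn’t quite follow instructions.

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Abridged version of the prompt given earlier in the article.
PROMPT_TEMPLATE = (
    "Perform sentiment analysis on the following text, towards {subject}. "
    "Return only a number on a linear scale from +10 for fully positive, "
    "through 0 for neutral, to -10 for fully negative. No extra notes.\n\n{text}"
)

def build_request(subject, text, model="llama3.2"):
    """Assemble the JSON payload Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": PROMPT_TEMPLATE.format(subject=subject, text=text),
        "stream": False,  # return one JSON object instead of a token stream
    }

def extract_score(reply):
    """Pull the first signed integer out of the model's reply, clamped to +/-10."""
    match = re.search(r"[+-]?\d+", reply)
    if match is None:
        return 0  # treat an unparseable reply as neutral
    return max(-10, min(10, int(match.group())))

def get_sentiment(subject, text):
    """Full round trip against a locally running Ollama server."""
    payload = json.dumps(build_request(subject, text)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return extract_score(json.load(resp)["response"])
```

The clamping and the neutral fallback are there because, instructions or not, an LLM will occasionally editorialise instead of returning a bare number.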
So in this piece I’ve taken a particularly annoying problem I’ve faced in the past and failed at, identified it as something at which an LLM might deliver, and in a surprisingly short time, come up with a working solution. I am of course by no means the first person to use an LLM for this particular task. If you want you can use it as an effective but slow and energy intensive sentiment analyser, but maybe that’s not the point here.
What I’m trying to demonstrate is that the LLM is just another tool, like your pliers. Just like your pliers it can do jobs other than the ones it was designed for, but some of them it’s not very good at and it’s certainly not the tool to replace all tools. If you identify a task at which it’s particularly good though, then just like your pliers it can do a very effective job.
I wish some people would take the above paragraph to heart.
2026-04-15 23:30:05

Emulating a 6502 shouldn’t be that hard on a modern computer. Maybe that’s why [lasect] decided to make it a bit harder. The PG_6502 emulator uses PostgreSQL. All the CPU resources are database tables, and all opcodes are stored procedures. Huh.
The database is pretty simple. The pg6502.cpu table has a single row that holds the registers. Then there is a pg6502.mem table that has 64K rows, each representing a byte. There’s also a pg6502.opcode_table that stores information about each instruction. For example, the 0xA9 opcode is an immediate LDA and requires two bytes.
The pg6502.op_lda procedure grabs that information and updates the tables appropriately. In particular, it will load the next byte, increment the program counter, set the accumulator, and update the flags.
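To make that logic concrete without reproducing the actual SQL, here is the same immediate-mode LDA step sketched in Python, with the single-row pg6502.cpu table and the 64K-row pg6502.mem table stood in for by a dict and a list. The field names are our own; the real stored procedure will differ in detail.

```python
# Stand-ins for the pg6502.cpu single-row table and the 64K-row pg6502.mem table.
cpu = {"a": 0, "pc": 0x0600, "flag_z": False, "flag_n": False}
mem = [0] * 65536
mem[0x0600], mem[0x0601] = 0xA9, 0x42   # LDA #$42

def op_lda_immediate(cpu, mem):
    """What pg6502.op_lda is described as doing for opcode 0xA9: fetch the
    operand byte, advance the program counter, load the accumulator, and
    update the zero and negative flags."""
    operand = mem[(cpu["pc"] + 1) & 0xFFFF]   # byte after the opcode
    cpu["pc"] = (cpu["pc"] + 2) & 0xFFFF      # immediate LDA is two bytes
    cpu["a"] = operand
    cpu["flag_z"] = operand == 0              # zero flag
    cpu["flag_n"] = bool(operand & 0x80)      # negative flag is bit 7
    return cpu
```

In the PostgreSQL version, each of those assignments becomes an UPDATE against the cpu row, which is exactly why the SQL reads so clearly.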
Honestly, we’ve wondered why more people don’t use databases instead of the file system for structured data but, for us, this may be a bit much. Still, it is undoubtedly unique, and if you read SQL, you have to admit the logic is quite clear.
We can’t throw stones. We’ve been known to do horrible emulators in spreadsheets, which is arguably an even worse idea. We aren’t the only ones.
2026-04-15 22:00:46

When it comes to electronic gadgets, I’m a sucker for a good deal. If it’s got a circuit board on the inside and a low enough price tag on the outside, you can be pretty sure I’ll be taking it home with me. So a few years ago, when I saw USB external hard drives on the shelf of a national discount chain for just $10, I couldn’t resist picking one up. What I didn’t realize at the time however, was that I’d be getting more in the bargain than just some extra storage space.
It’s a story that I actually hadn’t thought of for some time — it only came to mind recently after reading about how the rising cost of computer components has pushed more users to the secondhand market than ever before. That makes the lessons from this experience, for both the buyer and the seller, particularly relevant.
It wasn’t just the low price that attracted me to these hard drives, it was also the stated capacity. They were listed as 80 GB, which is an unusually low figure to see on a box in 2026. Obviously nobody is making 80 GB drives these days, so given the price, my first thought was that it would contain a jerry-rigged USB flash drive. But if that was the case, you would expect the capacity to be some power of two.
Upon opening up the case, what I found inside was somehow both surprising and incredibly obvious. The last thing I expected to see was an actual spinning hard drive, but only because I lacked the imagination of whoever put this product together. I was thinking in terms of newly manufactured, modern, hardware. Instead, this drive was nearly 20 years old, and must have been available for pennies on the dollar since they were presumably just collecting dust in a warehouse somewhere.
Or at least, that’s what I assumed. After all, surely nobody would have the audacity to take a bunch of ancient used hard drives and repackage them as new products…right?
Once I saw that the drive inside the enclosure was older than both of my children, I got curious about its history. Especially given the scuff marks and dirt on the drive itself. A new old stock drive from 2008 is one thing, but if this drive actually had any time on the clock, that’s a very different story. Forget the implications of selling used merchandise as new — if the drive has seen significant use, even $10 is a steep price.
Fortunately, we can easily find out this information through Self-Monitoring, Analysis, and Reporting Technology (SMART). Using the smartctl tool, we can get a readout of all the drive’s SMART parameters and figure out what we’re dealing with:
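As a rough illustration, a few lines of Python can pull the interesting raw values out of a smartctl -A attribute table. The two sample rows below follow smartctl’s usual column layout, with raw values matching the figures from this drive; they are a stand-in, not the drive’s full report.

```python
# Abbreviated smartctl -A attribute rows in the standard ten-column layout.
# The raw values (last column) match the figures quoted from this drive.
SAMPLE = """\
  9 Power_On_Hours          0x0032   058   058   000    Old_age   Always       -       31049
 12 Power_Cycle_Count       0x0032   091   091   000    Old_age   Always       -       9538
"""

def parse_attributes(report):
    """Map SMART attribute names to their raw values."""
    attrs = {}
    for line in report.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID; skip headers and blanks.
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = int(fields[9])
    return attrs

attrs = parse_attributes(SAMPLE)
years_powered_on = attrs["Power_On_Hours"] / (24 * 365)
```

Run against a real drive, you would feed it the output of smartctl -A on the device instead of the sample text.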
Well, now we know why these things are so cheap. According to the SMART data, this particular drive has gone through 9,538 power cycles and accumulated a whopping 31,049 hours of total powered-on time. I’ll save you the math: that’s a little over 3.5 years.
The term “used” barely covers it; this drive has been beat to hell.
It’s a fair bet that anyone finding themselves regularly reading Hackaday possesses an inquisitive mind. So at this point, I’m willing to bet you’re wondering the same thing I did: if this drive has been used for years, could it still contain files from its previous life?
Obviously it was formatted before getting boxed up and put back on the shelf. But frankly, anyone who’s unscrupulous enough to pass off decades-old salvaged drives as new probably isn’t putting in the effort to make sure said drives are securely wiped.
I was willing to bet that the drive went through nothing more than a standard quick format, and that even a simplistic attempt at file recovery would return some interesting results. As it so happens, “Simplistic Attempt” is basically my middle name, so I fired up PhotoRec and pointed it at our bargain drive.
It only took a few minutes before the file counters started jumping, proving that no effort was made to properly sanitize the drive before repackaging it. So not only is this drive old and used, but it still contains information from wherever it was for all those years. If it came from an individual’s personal computer, the information could be private in nature. If it was a business machine, the files may contain valuable proprietary data.
In this case, it looks to be a little of both. I didn’t spend a lot of time poring over the recovered files, but I spot checked enough of them to know that there’s somebody in China who probably wouldn’t be too happy to know their old hard drive ended up on the shelf in an American discount store.
For one thing we’ve got hundreds of personal photographs, ranging from vacation shots to formal portraits.
The pictures show fun in the sun, but the DOC and PDF files are all business. I won’t reveal the name of the company this individual worked for, but I found business proposals for various civil engineering projects within the Minhang District of Shanghai worth millions of dollars.
I know what you’re wondering, Dear Reader. If the first drive I pulled off the shelf happened to have a trove of personal and professional information on it, what are the chances that it would happen again? Perhaps it was a fluke, and the rest of the drives would be blank.
That’s an excellent question, and of course we can’t make a determination either way with only a single point of data. Which is why I went back the next day and bought three more drives.
Right off the bat, it’s worth noting that no two drives are actually the same. Two are Western Digital and two are Fujitsu, but none of them have the same model number. The keen-eyed reader will also note that one of the drives is 100 GB, but it has been partitioned to 80 GB to match the others.
Three of the drives were manufactured in 2008, and one is from 2007. I won’t go through the SMART data for each one, but suffice it to say that each drive has several thousand hours on the clock. Although for what it’s worth, the first drive is the lifetime leader by far.
In terms of file recovery, each drive gave up several gigabytes worth of data. In addition to the one we’ve already looked at, two more were clearly the primary drives in Windows boxes, and each contained a mix of personal data and technical documents such as AutoCAD drawings, datasheets, bills of materials, and schematics. Given their contents, I would guess the drives came from off-lease computers that were used by engineering firms.
The fourth drive was different. It contained more than 32 GB worth of Hollywood movies, the most recent of which was released in 2010. I imagine this drive came out of somebody’s media center. Now I haven’t sailed the high seas, as it were, since my teenage years, but even if I had wanted to add these titles to my ill-gotten trove of films, it was a non-starter. Given the time period they were downloaded in, most of them were below DVD resolution.
Plus, they were all dubbed in Chinese. Not exactly my idea of a movie night.
Admittedly, given that they were being sold in a home electronics chain-store, the likelihood that these drives would be purchased by somebody with the means to extract any meaningful data from them isn’t very high. But since you’re reading this, you know the chances clearly aren’t zero. I didn’t have any malicious intent, but the same can’t necessarily be said for others.
So what can we take away from this? To start with, if you’re planning on selling or giving away any of your old drives, make sure they are properly wiped. In the dusty past, the recommendation would have been to use the Linux-based Darik’s Boot and Nuke (DBAN) live CD, but the project was acquired back in 2012 and development was halted a few years later. Luckily, the GPLv2 tool that DBAN actually ran against the drive was forked and is now available as nwipe.
But as mentioned earlier, I get the impression that these drives were from businesses that unloaded their old machines. In that case, the users can’t really be blamed, as they wouldn’t have been able to wipe the drives even if they knew ahead of time their work computers were getting swapped out. But they certainly could have made an effort to keep their personal data off of company property. It’s one thing to have some corporate secrets stolen down the line, but you don’t want pictures of your kids to be in the mix.
In short, nobody cares about what happens with your personal data more than you do, so make sure it doesn’t get away from you. Otherwise some bargain-hunting nerd might be pawing through it in a few years.
2026-04-15 19:00:00

The modern web is a major pain to use without a password manager app. However, using such a service requires you to entrust your precious secrets to a third party. They could also be compromised, and then you really are in trouble. You could manage passwords with local software or even a notebook, but that adds cognitive load. You could use the same password across multiple sites to reduce the load, but that would be unwise. Now, however, with the HIPPO system, there is another way.
HIPPO is implemented as a browser extension paired with a central server. The idea is not to store any password anywhere, but to compute them on the fly from a set of secrets. One secret at the server end, and one the user supplies as a passphrase. This works via an oblivious pseudorandom function (OPRF) protocol. Details from the linked site are sparse, but we think we’ve figured it out from other sources.
First, the user-supplied master password is hashed with the site identifier (i.e., the domain), blinded with a random number, and then processed using an OPRF, likely built on an elliptic-curve cryptographic scheme. This ensures the server never receives the raw password. Next, the server applies its own secret key via a Pseudorandom Function (PRF) and sends the result back to the client; obviously, its private key is never sent raw either. The client then removes the blinding factor (using the same random number it chose when sending) from the returned value, producing a site-specific, high-entropy secret value that the extension passes to a Key Derivation Function (KDF), which formats it into a form suitable for use as a password. Finally, the extension auto-fills the password into the website form, ready to send to the site you want to access. This password is still unique per site and deterministic, which is how this whole scheme can replace a password database. Neat stuff!
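To make that flow concrete, here is a deliberately simplified sketch of the blind/evaluate/unblind round trip, using plain modular exponentiation rather than whatever elliptic-curve construction HIPPO actually uses. Everything here (the group, the hash-to-group step, the stand-in KDF) is our assumption for illustration, not HIPPO’s real protocol.

```python
import base64
import hashlib
import math
import secrets

# Toy parameters: a large well-known prime used as a multiplicative group.
# This sketches the OPRF round trip; it is NOT HIPPO's actual construction,
# and a real deployment would use a proper elliptic-curve OPRF.
P = 2**255 - 19

def hash_to_group(master_password, site):
    """Client: hash master password plus site identifier to a group element."""
    digest = hashlib.sha256(f"{master_password}|{site}".encode()).digest()
    return int.from_bytes(digest, "big") % P or 1

def blind(element):
    """Client: raise the element to a random invertible exponent r."""
    while True:
        r = secrets.randbelow(P - 2) + 1
        if math.gcd(r, P - 1) == 1:   # r must be invertible mod the group order
            return pow(element, r, P), r

def server_evaluate(blinded, server_key):
    """Server: apply its secret key without ever learning the element."""
    return pow(blinded, server_key, P)

def unblind(evaluated, r):
    """Client: strip the blinding, leaving element**server_key mod P."""
    return pow(evaluated, pow(r, -1, P - 1), P)

def derive_password(master_password, site, server_key):
    """Full round trip: deterministic output despite the random blind."""
    element = hash_to_group(master_password, site)
    blinded, r = blind(element)
    evaluated = server_evaluate(blinded, server_key)   # the only server-side step
    secret = unblind(evaluated, r)
    raw = hashlib.sha256(secret.to_bytes(32, "big") + site.encode()).digest()
    return base64.b64encode(raw)[:16].decode()         # stand-in KDF/formatter
```

Note that the server only ever sees the blinded value, and the client only ever sees the server key’s effect, never the key itself, which is the whole point of the exercise.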
The advantage of this scheme is that there’s no vault to compromise, no storage requirement, and it generates a strong password for each unique site, meaning no password reuse and a low chance of brute-force cracking. The obvious flaw is that it creates a single point of failure (the HIPPO service) and shifts the risk of compromise from vault cracking to attacks on the master password, the server itself, or its secret key. It’s an interesting idea for sure, but it doesn’t directly manage 2FA, which is a layer you’d want to add on top to ensure adequate security overall. And of course, it’s not a real, live service yet, but when (or if) it becomes one, we’ll be sure to report back.
Confused by all this? Why not dig into this article first? Or maybe you fancy a DIYable hardware solution?
2026-04-15 16:00:12

Although toasters should be among the most boring appliances in a household – with perhaps just a focus on making their toasting more deterministic rather than somewhere between ‘still frozen’ and ‘charcoal’ – somehow companies keep churning out toasters that just add very confusing ‘smart’ features. Of course, if a toaster adds a big touch screen and significant processing power, you may as well run DOOM on it, as was [Aaron Christophel]’s reflexive response.
While unboxing the Aeco Toastlab Elite toaster, [Aaron] is positively dumbfounded that they didn’t also add WiFi to the thing. Although on the bright side, that should mean no firmware updates being pushed via the internet. During the disassembly it can be seen that there’s an unpopulated pad for a WiFi chip and an antenna connection, making it clear that the PCB is a general purpose PCB that will see use in other appliances.
The SoC is marked as a K660L and paired with an external flash chip. Dumping the firmware is very easy, thanks to a highly accessible UART that spits out a ‘Welcome to ArtInChip Luban-Lite’ message. After some reverse-engineering, the SoC turned out to be a rebranded RISC-V-based ArtInChip D133CxS, with a very usable SDK from the manufacturer. From there it was easy enough to get DOOM to run, with the bonus feature of needing to complete a level before the toaster will give the slice back.