2026-04-08 19:00:01

“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the Orange Catholic Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.
Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions and — except for the necessary “brain-only” control group — given access to a rigged LLM to help them answer. It was rigged in that it would give wrong answers 50% of the time, which, while a higher error rate than most LLMs, is a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.
The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.
Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.
Thanks to [Monika] for the tip!
2026-04-08 16:00:56

In a move that’s no doubt going to upset and confuse many, Espressif has released its newest microcontroller — the ESP32-S31. The confusing part here is that the ESP32-S series was always the one based on Tensilica Xtensa LX7 cores, while the ESP32-C series was the one using RISC-V cores.
That said, if one looks at it as a beefier -S3 MCU, it does have some appealing upgrades. The most obvious improvements are WiFi 6 support, as well as Bluetooth Classic and LE 5.4, including LE Audio. There is also Thread and Zigbee support for those who are into such things.
The Ethernet MAC got a bump from the 100 Mbit RMII MAC in previous MCUs and is now gigabit-rated, while the number of GPIOs is significantly higher at 60, versus 45 on the -S3. On the RAM side, things are mostly the same, except for DDR PSRAM support, with octal SPI offering up to 250 MHz compared to 80 MHz on the -S3.
On the CPU side, the up-to-320 MHz RISC-V cores are likely to be about as powerful as the 240 MHz LX7 cores in the -S3, going by the per-clock (IPC) performance of the ESP32-C series. Overall it does seem like a pretty nice MCU; it’s just confusing that a RISC-V part ended up in the traditionally Xtensa-based -S series. When this MCU will be available for sale isn’t known yet, with only samples available to select customers.
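As a rough back-of-the-envelope check on that clock-versus-IPC claim, single-core throughput scales as clock × IPC. The IPC figures below are pure assumptions for illustration, not Espressif numbers:

```go
package main

import "fmt"

func main() {
	// Assumed instructions-per-clock figures -- illustrative only,
	// not measured or published values.
	const (
		riscvIPC = 0.75 // hypothetical ESP32-C-class RISC-V core
		lx7IPC   = 1.0  // hypothetical Xtensa LX7 core
	)
	// Relative single-core throughput ~ clock (MHz) x IPC.
	s31 := 320.0 * riscvIPC
	s3 := 240.0 * lx7IPC
	fmt.Printf("-S31: %.0f, -S3: %.0f, ratio: %.2f\n", s31, s3, s31/s3)
}
```

With these assumed values the two come out dead even, which is the point: a higher clock can mask a lower-IPC core, so “about as powerful” is plausible until real benchmarks appear.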
2026-04-08 13:00:53

A computer does one thing at a time, even if it feels like it’s doing multiple things at once. In reality, it’s just switching between tasks very quickly. But a VLIW (Very Long Instruction Word) computer is different. Today, [Asianometry] tells us about VLIW computing and its history.
Processors have multiple functional units; for example, you might have separate units for addition, multiplication, and division. But because the processor runs one instruction at a time, these units tend to spend a large amount of time idle. VLIW aims to address this inefficiency by reinventing what an instruction means. Instead of telling the whole processor what to do, a VLIW instruction tells each functional unit what to do, all at once. Sounds good, right? Well, that was the easy part.
The hard part? Compiling a program for a VLIW computer so that it can actually make use of all the functional units at once; after all, the efficiency promise only holds if the extra activity makes up for the larger instruction words that must be fetched. That is the compiler’s job: VLIW compilers reschedule the operations in the program, converting sequential code into groups of parallel operations that are then packed into the titular very long instruction words.
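To make the packing step concrete, here is a toy sketch of that idea (not a real compiler pass): a hypothetical two-unit machine with one adder and one multiplier, and a greedy policy that puts an operation into the current instruction word unless the unit is taken or a data dependency forces a new word. Both the machine model and the policy are assumptions for illustration:

```go
package main

import "fmt"

// Op is one primitive operation: Dst = A <op> B, needing one functional unit.
type Op struct {
	Unit string // functional unit required, e.g. "add" or "mul"
	Dst  string
	A, B string
}

// bundle greedily packs a sequential stream of ops into VLIW-style
// instruction words: an op joins the current word only if its functional
// unit is still free and none of its operands are produced in that word.
func bundle(ops []Op) [][]Op {
	var words [][]Op
	var cur []Op
	usedUnits := map[string]bool{}
	written := map[string]bool{}
	flush := func() {
		if len(cur) > 0 {
			words = append(words, cur)
			cur = nil
			usedUnits = map[string]bool{}
			written = map[string]bool{}
		}
	}
	for _, op := range ops {
		if usedUnits[op.Unit] || written[op.A] || written[op.B] || written[op.Dst] {
			flush() // structural or data hazard: start a new word
		}
		cur = append(cur, op)
		usedUnits[op.Unit] = true
		written[op.Dst] = true
	}
	flush()
	return words
}

func main() {
	// t1 = a+b and t2 = c*d are independent and use different units,
	// so they share one word; t3 = t1+t2 depends on both and must wait.
	ops := []Op{
		{"add", "t1", "a", "b"},
		{"mul", "t2", "c", "d"},
		{"add", "t3", "t1", "t2"},
	}
	for i, w := range bundle(ops) {
		fmt.Printf("word %d: %v\n", i, w)
	}
}
```

Real VLIW compilers go much further (trace scheduling, software pipelining, speculative motion across branches), but the core tension is already visible here: sequential code only fills the wide words when the compiler can prove operations are independent.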
[Asianometry] goes into detail about this, the history, and more in the video after the break.
P.S.: For the sake of the video and article, we’re ignoring the existence of modern concept of out-of-order CPUs; they did not exist in the time period which [Asianometry] is talking about.
2026-04-08 10:00:29

Although everyone’s favorite Linux overlord [Linus Torvalds] has been musing on dropping Intel 486 support for a while now, it would seem that the time has finally come. In a Linux patch submitted by [Ingo Molnar], the first concrete step is taken by removing support for the i486 in the build system. With this patch now accepted into the ‘tip’ branch, no i486-compatible image can be built any more once the change works its way into the release branches, starting with kernel 7.1.
No mainstream Linux distribution currently supports the 486 CPU, so the impact should be minimal, and there has been plenty of warning. We covered the topic back in 2022 when [Linus] first floated the idea, as well as in 2025 when more mutterings from the side of [Linus] were heard, but no exact date was offered until now.
It remains to be seen whether 2026 is really the year when Linux says farewell to the Intel 486, as it did to the Intel 386 back in 2012. We cannot really imagine that there’s a lot of interest in running modern Linux kernels on CPUs that are probably older than the average Hackaday reader, but we could be mistaken.
Meanwhile, there are people modding Windows XP to run on the Intel 486, opening the prospect that modern Windows might make it onto these systems instead of Linux, in the ultimate twist of irony.
2026-04-08 07:00:45

When LG left the smartphone market, quite a number of strange devices were left behind. While some, like the Wing, made it to consumers, others did not. The strangest of these would have to be their rollable phone concept: a device which would expand by unrolling a portion of the screen like a scroll. It never made it to market, but one managed to make its way to [JerryRigEverything’s] workbench, and we are fortunate enough to see the insides of this strange device.
There are a few interesting tidbits about the device before even opening it up. Very clearly this phone was ready to be sold, with a tidy user interface for expanding the display, and even animated wallpapers that expand with it. The display, when rolled onto the back of the device, sits behind a glass cover to keep it protected from debris, and it can be used to take selfies with the larger sensors of the rear-facing cameras. You can also see a bit of the track that the screen rolls on, hinting at what lies inside.

One doesn’t have to get far into a teardown of this phone to find more. A tiny brush hides in the curved corner of the screen rolling mechanism, to keep debris out of the pocket the screen sits inside. This also gives a better look at the aforementioned track system, which guides the display around the corner and keeps it stable and secure.
Further inside, you can see the mechanism which allows the phone to unfurl. Two rather small but powerful DC motors driving a rack and pinion move the phone to its full-sized state. A number of spring-loaded arms provide stability to the mechanism, preventing racking. The mechanism is surprisingly strong, able to push a number of books out of its way. However, if its movement is resisted, the phone will display a warning that you might damage it.
Tearing down a phone that doesn’t exist is not terribly useful, so the focus was very much on the mechanism, with no detours or destructive disassembly. However, if destructive reverse engineering is what you’re here for, make sure to check out this teardown of a smart LEGO brick next!
2026-04-08 04:00:26

When you’re programming microcontrollers, you’re likely to think in C if you’re old-school, Rust if you’re trendy, or Python if you want it done quick and have resources to spare. What about Go? The programming language, not the game. That’s an option too, with TinyGo now supporting over 100 different dev boards, along with WebAssembly.
We covered TinyGo back in 2019, but they were just getting started at that point, targeting the Arduino and BBC micro:bit boards. They’ve grown that list to include everything from most of Adafruit’s fruitful suite of offerings to ESP32s and even the Nintendo Game Boy Advance. So now you can go program go in Go so you can play go on the go.
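Code for these boards looks like ordinary Go: on real hardware TinyGo’s machine package gives you calls like machine.LED.Configure(machine.PinConfig{Mode: machine.PinOutput}) plus High() and Low(). Since a host PC has no GPIO, here is a minimal host-runnable sketch that mocks a pin with that same High/Low shape; the Pin type, its log field, and blink are our own illustrative inventions, not TinyGo API:

```go
package main

import "fmt"

// Pin mocks the shape of a TinyGo machine-package pin
// (Configure/High/Low); it just records state transitions
// so the sketch runs on a host without hardware.
type Pin struct {
	state bool
	log   []bool
}

func (p *Pin) Configure() {}
func (p *Pin) High()      { p.state = true; p.log = append(p.log, true) }
func (p *Pin) Low()       { p.state = false; p.log = append(p.log, false) }

// blink toggles the pin n times, as a firmware main loop would
// (minus the time.Sleep calls you'd have on real hardware).
func blink(p *Pin, n int) {
	p.Configure()
	for i := 0; i < n; i++ {
		p.High()
		p.Low()
	}
}

func main() {
	led := &Pin{}
	blink(led, 3)
	fmt.Println(led.log) // six recorded transitions, alternating high/low
}
```

Swap the mock for the real machine package and add sleeps, and you have the classic blinky, which is exactly the kind of program TinyGo cross-compiles for those 100-plus boards.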
The biggest drawback, and an absolute dealbreaker for a lot of applications, is the lack of wireless connectivity support. Claiming to support the ESP8266 while not allowing one to use WiFi is a bit of a stretch, considering that’s the whole raison d’être of that particular chip, but it’s usable as a regular microcontroller at least.
They’ve now implemented garbage collection, a selling point for those who like Go, though they admit it’s slower in TinyGo than in its larger cousin and won’t work on AVR chips or in WebAssembly. It’s still not complete Go, however, so just as we reported in 2019, you won’t be able to compile all the standard library packages you might be used to. More of them work than did back then, though, so progress has been made!
Still, knowing how people get about programming languages, this will please the Go fanatics out there. Others might prefer to go FORTH and program their Arduinos, or to wear out their parentheses keys with LISP. The more the merrier, we say!