2026-04-14 01:00:18

Cognitive processes are not something that we generally pay much attention to until something goes wrong, but they cover everything from ingesting sensory information, through processing and recalling it, to any decisions made on the basis of such internal deliberation.
Within that context there has also long been a struggle between those who feel that it’s fine for humans to rely on available technologies to make tasks like information recall and calculations easier, and those who insist that a human should be perfectly capable of doing such tasks without any assistance. Plato argued that reading and writing hurt our ability to memorize, and for the longest time it was deemed inappropriate for students to even consider taking one of those newfangled digital calculators into an exam, while now we have many arguing that using an ‘AI’ is the equivalent of using a calculator.
At the root of this conundrum lies the distinction between that which enhances and that which hampers human cognition. When does one merely offload tasks to a device or object, and when does one harm one’s own cognition?
Cognitive offloading is the practice of shifting cognitive tasks to external aids, and it is thought to make learning complex tasks easier. In contrast to rote memorization of facts like dates of events and formulas, if we consider books to be an external memory storage device, then we can offload such precise memorization to their pages and only require of students that they can efficiently find information and judge it on its merits.
An often misquoted anecdote here pertains to Albert Einstein, who was once asked why he couldn’t cite the speed of sound from memory. To this he responded with a curt:
[I do not] carry such information in my mind since it is readily available in books. …The value of a college education is not the learning of many facts but the training of the mind to think.
With this statement Einstein makes a clear case for the benefits of cognitive offloading in the sense that rote memorization does not enhance one’s cognition. Similarly, the ability to solve complicated equations and sums without so much as the use of pen and paper is fairly irrelevant when a slide rule and a digital calculator can offload all that work. As a benefit these devices tend to be more precise, faster and very accessible.
It is still important to have an intuitive feeling for whether a calculation is in the expected range, and one should never assume that what is written in a book is the absolute truth. That in a nutshell is the key difference between cognitive offloading and cognitive surrender. If you have entered a series of values into your calculator, the result seems off and you re-type them to be sure, that’s cognitive offloading.
If, however, you accept the outcome of such a calculation, or a text as written without a second thought, that constitutes surrendering an essential part of your cognitive processes to an external source. If we thus replace ‘calculator’ in this context with ‘LLM chatbot’ or an ‘AI summary’, the same caveat applies. Perhaps more so as at least a calculator is fully deterministic and can be proven to be mathematically correct.
So if that’s the case, and modern-day ‘AI’ isn’t really what it’s often cracked up to be, why would a presumably intelligent human being end up accepting their outputs like the literal gospel?
A recent study (DOI link) by Steven D. Shaw and Gideon Nave of the University of Pennsylvania investigated the prevalence of cognitive surrender in the context of LLM chatbots, looking for instances where users are seen to blindly accept the generated answers.
In this study, Shaw et al. had three groups of volunteers take a standardized test, during which one group had to rely purely on their own wits, the second group could use an LLM chatbot which gave correct answers, while a third group also had access to this chatbot, but for them it gave wrong answers.

Perhaps unsurprisingly, the test subjects used the chatbot quite a lot when available, with predictable results. In the ‘tri-system theory of cognition’ that Shaw et al. propose in the paper, the external cognitive system (‘System 3’) is that of the chatbot, whose output is clearly being accepted verbatim by a significant part of the test subjects. If said chatbot output is correct, this is great, but when it’s not, the test results massively suffer.
Where this is worrisome outside of such self-contained tests is that people are exposed to endless amounts of faulty LLM-generated text, such as the ‘AI summaries’ that search engines love to put front and center these days. Back in 2024, for example, Avram Piltch over at Tom’s Hardware compiled an amusing collection of such faulty outputs, some of which are easier to spot than others.
Ranging from the health effects of eating nose pickings, to the speed difference between USB 3.2 Gen 1 and USB 3.0, to classics like adding Elmer’s glue to pizza sauce, it’s generally possible to find where on the internet a ridiculous claim was scraped from for the LLM’s dataset, while other types of faulty output are simply due to an LLM not possessing any intelligence, or essentials like a grasp of context.
Meanwhile other types of output are clearly confabulations, a fact which ought to be obvious to any intelligent human being, and yet it seems that so much of it passes whatever sniff test occurs within the cognitive capabilities of the average person.

In the generally accepted model of cognitive decision making we see two internal systems: the first is the fast, intuitive and emotion-driven system. The second is the deliberate and analytical system, which tends to take a backseat to the first system in general, but could be said to be checking the homework of the first.
Although psychology is hardly an exact science, in the scientific fields of systems neuroscience and cognitive neuroscience we can find evidence for how decisions are made in the primate brain – including those of humans – with various cortices involved in the decision-making process. Fascinating here is the activity observed in the parietal cortex where a decision is not only formed, but also apparently assigned a degree of confidence.
Lesions in the anterior cingulate cortex (ACC) have been linked to impaired decision making and the emergence of impulse control issues, as the ACC appears to be instrumental in error detection. Issues in the ACC are thus more likely to result in faulty or flawed decisions and judgements passing by uncorrected. Incidentally, the ACC was found to be heavily affected by environmental tetraethyl lead contamination, underpinning the theory that leaded gasoline was responsible for a surge in crime until this additive was discontinued.
Knowing this, we can thus say with a fairly high degree of confidence that the concept of human cognition is very much determined by the physical wiring in the pinkish-white goo that constitutes our brains. A good demonstration of this is the effect of ethanol on the brain, as well as the intense cravings that accompany addictions.

Abnormal activity in the ACC has for example been associated with alcohol addiction, with an implant suggested to adjust said neural activity as detailed in a 2020 Neurotherapeutics study by Sook Ling Leong et al. In this study, eight treatment-resistant alcoholics had electrodes inserted into part of their ACC to provide direct stimulation, leading to a self-reported 60% drop in cravings.
As ethanol can freely pass through the blood-brain barrier, it is free to start binding with GABA receptors and induce the release of dopamine along with a range of other neurological effects that initially induce a feeling of relaxation and well-being, but also suppress activity in various cortices, including the ACC. Effectively ethanol thus reduces one’s cognitive prowess and with it the ability to recognize flawed decisions.
From this we can thus deduce that activity in the ACC is not only essential for decision-making, but it also illustrates how the pinkish goop in our skulls is a fascinating biochemistry and neurochemistry experiment in which the addition or subtraction of certain substances and poking it with electrodes can induce a wide variety of cognitive outcomes.
Experiments aside, we started our lives off with the baseline that we were born with (‘nature’) and the various neuroplastic alterations made as we grew up (‘nurture’), which along the way led to various cognitive outcomes that we may or may not regret as adults. This leaves us free to learn from our mistakes and do better in as far as neuroplasticity allows.
It’s often said that the most valuable skill in life, one that we tend to lose as we mature out of innocent childhood, is the incessant asking of ‘Why?’. By questioning everything and wanting to know everything, we not only display curiosity, but also nurture the cognitive skills of our brain. If instead our environment pushes back against this, it can harm the development of such cognitive skills, even if the pushback doesn’t rise to the level of childhood trauma.
As a certified ‘nerdy kid’ back in the day who went through all the motions of being bullied, shoved into proverbial lockers and other types of physical abuse at school for having the nerve to like books, science and other ‘nerdy’ things that involved being curious, it’s hard not to feel the social pressure to simply comply and not question things. As an adult such social pressure only gets worse, with skills like critical thinking generally discouraged.
Of course, said critical thinking is exactly what we need when confronted with new technologies and the temptation to simply surrender that cognitive burden instead of asking questions. Yet when cognitive surrendering can have real consequences that may affect not just your own life but also those of others, it’s pretty much a basic survival skill to arm yourself against it.
In a world where things like politics, idols, religion, and advertising exist, the rise of this purported ‘AI’ in the form of LLM-based chatbots with their often very convincingly human-like and authoritative outputs seems to have hit the same weaknesses that unscrupulous religious leaders and scammers exploit, with sometimes tragic consequences.
Although it’s clear that believing some factual misinformation generated by a chatbot is a far cry from deciding to take fatal actions based on a dialog with said chatbot, it also highlights the importance of retaining your critical thinking skills. Although we often like to think otherwise, people aren’t fully rational beings whose cognitive processes belong completely to themselves.
Answering the question of when we harm our own cognition, it would seem that while we can generally trust a calculator, an LLM-based chatbot is not nearly as reliable or benign. Caution and awareness of the risk of cognitive surrendering are thus well-warranted.
2026-04-13 23:30:00

Moon missions are hot again for the first time since the space race. While the previous period had us land on the big lunar rock, the missions of tomorrow have us living on it. The initial problem of landing in one piece has been solved, but there are many more puzzles to solve. One major issue of living in the vacuum of space is the lack of breathable air, because, ya know, it’s space.
This brings us to today, where [Blue Origin] has announced a prototype method of turning Moon dust into the valuable gas we call oxygen. [Blue Origin] hasn’t posted much about the actual process behind this feat, terming the system “Air Pioneer”. What we do know is that it requires melting the regolith and then passing a current through the melt to liberate oxygen from its rocky prison.
While some publications on this matter have been calling this a first in its entirety, this isn’t entirely true. NASA has worked on this technology for the past couple of years under the name “Gaseous Lunar Oxygen from Regolith Electrolysis”, or GaLORE. What [Blue Origin] has done, however, is complete the task under a for-profit motive. Perhaps this can introduce the drive needed to accelerate the development of the tech? (If anyone knows any more detail about the Blue Origin system, please let us know.)
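For a rough sense of scale, molten-regolith electrolysis obeys Faraday’s law: the charge passed through the melt fixes how much oxygen can be liberated. Here’s a back-of-the-envelope sketch; the current, runtime, and efficiency figures are illustrative assumptions, not numbers from either GaLORE or [Blue Origin]:

```python
# Back-of-the-envelope oxygen yield from molten regolith electrolysis,
# via Faraday's law. All inputs below are illustrative assumptions.

F = 96485            # Faraday constant, C per mol of electrons
M_O2 = 32.0          # molar mass of O2, g/mol
z = 4                # electrons per O2 molecule (2 O^2- -> O2 + 4 e-)

current_a = 100.0    # assumed cell current, amperes
hours = 24.0         # assumed runtime
efficiency = 0.5     # assumed Faradaic efficiency (side reactions waste charge)

charge = current_a * hours * 3600          # total charge, coulombs
mol_o2 = efficiency * charge / (z * F)     # moles of O2 released
grams_o2 = mol_o2 * M_O2

print(f"{grams_o2 / 1000:.2f} kg of O2 per day")
```

Even at a healthy 100 A, that works out to only a few hundred grams of oxygen per day, which is why scaling up the power budget is the real engineering challenge here.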
Private spaceflight is certainly an exciting and quickly moving field in nearly all regards. It’s important to see how far we have come from the initial moon missions. If you want to check out some of the wackier lessons from that era, be sure to read up on the fight for moon cockroaches!
2026-04-13 22:00:13

A crew lives on a station in a hostile environment. Leaving that environment requires oxygen tanks and specialized gear to deal with pressure differentials. A space station? Nah. A base built on the ocean floor. The US Navy was interested in such a base in the 1960s, and bases like this are a staple of science fiction. But today, we see more space stations than underwater bases. Have you ever wondered why?
Diving deep underwater is a tricky business. At a certain depth, the pressure forces gas like nitrogen to dissolve into your body. By itself, this isn’t a problem, but when you ascend, it is a big problem. If the gas all comes out at the same time, you get bubbles, which can cause decompression sickness, commonly called the bends. The exact problems vary, but the bends often cause extreme joint pain, fatigue, or a rash. Sometimes people die.
While you might think of the bends as a deep-sea diver’s problem, it can also happen in airplanes and outer space. Any time you go from high pressure to low pressure quickly, you are subject to decompression sickness. Depending on what you are doing, there are different ways to mitigate the problem. For diving, traditionally, you simply don’t surface too quickly.
You dive, do your work, and then head towards the surface, stopping at preset stops to let the pressure equalize gradually. Physics is a bear, though. The longer you stay at a given depth, the longer you have to decompress.
That means you rapidly reach a point of diminishing returns. Suppose you dive to the ocean floor. You spend an hour working. Then you have to spend, say, eight hours gradually rising to the surface. That makes extended operations at significant depth impractical.
George Bond was thinking about all this and had an interesting idea. It is true that, in general, the longer you stay down, the more gas your body absorbs. But it is also true that, eventually, your tissues saturate, and then you don’t absorb any more.
So the counterintuitive insight was not to send a diver down and then back up repetitively. Instead, you keep the diver under pressure for the entire job. Then, once, at the end, you decompress. This is known today as saturation diving.
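The saturation effect Bond exploited falls out of the classic Haldanean tissue model: each tissue compartment approaches the ambient inert-gas pressure exponentially, with a characteristic half-time. A minimal sketch of one compartment follows; the 60-minute half-time and the pressures are illustrative values, not a real dive table:

```python
# Single-compartment Haldane model: tissue inert-gas tension approaches
# ambient pressure exponentially, so loading flattens out over time --
# the basis of saturation diving.

import math

def tissue_pressure(p0, p_ambient, minutes, half_time=60.0):
    """Inert-gas tension in one compartment after `minutes` at depth.

    p0         -- starting tissue tension (atm)
    p_ambient  -- inert-gas partial pressure at depth (atm)
    half_time  -- compartment half-time in minutes (illustrative value)
    """
    k = math.log(2) / half_time
    return p_ambient + (p0 - p_ambient) * math.exp(-k * minutes)

# Start saturated at the surface (0.79 atm of N2), descend to ~4x that.
surface, depth = 0.79, 4.0 * 0.79
for t in (60, 360, 1440):   # 1 hour, 6 hours, 24 hours
    print(t, round(tissue_pressure(surface, depth, t), 2))
```

After a day the compartment has, for all practical purposes, reached ambient pressure; staying down longer adds no further decompression obligation, which is exactly the saturation-diving insight.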
This leads to a new problem: If you plan to send a diver down to the ocean floor for a week, they can’t just hang out in a wetsuit the whole time. They need somewhere to eat, rest, and all the other things you need to do when you aren’t working. They need a base.
It still isn’t as simple as it might seem. There are problems with oxygen toxicity, the effort to breathe under pressure, and other issues. But these are largely solvable.
George Bond did experiments under the project name “Genesis,” where animals and, eventually, people were subjected to high pressures for extended durations. At roughly the same time, Edwin Link (a familiar name if you know about flight simulators) and famed diver Jacques Cousteau were experimenting with long-term saturation diving as well.
As part of a larger plan, Link experimented with placing one person at a modest depth for a day, and Cousteau had a two-person team at greater depths.
The Navy decided to run some experiments to see if Bond’s ideas would work in reality. They started the “man in the sea” experiments that deployed three prototype “sealabs” that were far more ambitious than previous commercial projects.

In 1964, off the coast of Bermuda, the Navy placed an ambient-pressure cylinder 192 feet down. An umbilical connected the habitat to the surface. You’d think the station would be full of air, but high pressures of nitrogen can cause other health problems, so, instead, the divers had a helium and oxygen mix.
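The gas mix follows directly from Dalton’s law of partial pressures: ambient pressure at depth multiplies each gas fraction, so the oxygen fraction has to be cut way down to keep its partial pressure in a safe band. A rough sketch for Sealab I’s depth; the target partial pressure below is an assumed, typical habitat value, not the Navy’s actual specification:

```python
# Dalton's law sketch: choose an O2 fraction for a heliox mix so the
# partial pressure of O2 at depth stays near a safe target.

def ambient_ata(depth_ft):
    """Absolute pressure in atmospheres at a seawater depth (33 ft ~ 1 atm)."""
    return 1.0 + depth_ft / 33.0

def o2_fraction(depth_ft, target_ppo2=0.35):
    """O2 fraction that yields the target O2 partial pressure (assumed target)."""
    return target_ppo2 / ambient_ata(depth_ft)

depth = 192  # Sealab I depth in feet
print(f"ambient: {ambient_ata(depth):.1f} ata, O2 fraction: {o2_fraction(depth):.1%}")
```

At almost seven atmospheres, the habitat atmosphere ends up only around five percent oxygen; breathed at the surface, the same mix would be dangerously hypoxic, which is one reason habitat atmospheres need such careful management.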
The crew of four was supposed to stay submerged for several weeks. However, an approaching storm cut their stay to only 11 days. Still, the experiment was a success.
It also brought up several problems. If you’ve taken a hit of helium, you know it makes your voice squeaky, which can make it difficult to communicate with other people. More important, though, is that helium is a good conductor of heat. Divers get cold fast hanging out in a helium-rich atmosphere.
You can see a video from the Navy in 1965 describing the program below.
As a side note, former astronaut Scott Carpenter was set to be the fifth person in Sealab I, but a scooter accident in Bermuda bumped him from the roster.
In 1965, the Navy tried again with Sealab II off the coast of La Jolla in California at a depth of around 200 feet. This time, Scott Carpenter made the trip.

Sealab II was more complex with demonstration tasks and a planned mission length of up to 30 days. For a long trip like this, the same problems arise as you’d have in a space station. Carbon dioxide needs scrubbing, and oxygen levels need control. Humidity and corrosion are constant problems. Equipment noise affects people over the long term.
The new habitat was twice as large as Sealab I. There were heaters, hot showers, and refrigeration. The idea was to have a crew that rotated every 15 days, but Carpenter spent 30 days inside.
The Navy also tried to train a bottlenose dolphin — Tuffy — to act as a helper to the crew with mixed results. While the mission, overall, was a success, there were issues with the crew feeling isolated and confined, along with sleep problems due to noise and lights.
Famously, President Lyndon Johnson was to speak to Carpenter after his 30-day stay and called while Carpenter was in a decompression chamber full of helium. The resulting confusion among telephone operators is pretty funny, as you can see in the video below.
The next and final attempt to submerge a crew was Sealab III in 1969. At a depth of about 600 feet — 200 feet beyond the normal planned operation depth — the Sealab III mission reused the Sealab II module, refurbished and upgraded. Five teams of nine divers were scheduled to spend 12 days each in the habitat.
At such a depth, problems magnify and margins for error all but disappear. The Navy was already stretched thin in Vietnam, and Sealab III had a difficult time getting not just off the ground, but under the sea. The project was late and overbudget. Work got sloppy, and corners got cut. When the habitat developed a helium leak, four divers volunteered to repair it in place, but failed on their first attempt.
A second attempt had the divers taking amphetamines to stay awake, which went predictably wrong. A diver, Berry Cannon, died. At the time, it was chalked up to improper setup of his rebreather, although a more modern investigation speculates that he may have been electrocuted. Either way, it was enough to end the program. The Navy gave up on the program and focused on other undersea programs, such as submarines. If there are any undersea bases, they are too secret for us to know about them.
You can see a Navy video showing the progress of Sealab III before the accident below. Unfortunately, the audio track isn’t present, so it isn’t always clear what the message is.
You might wonder why someone didn’t continue this work. We don’t have underwater bases, farms, mines, or hotels. Why not? It is true, of course, that the Navy continued to use limited saturation diving for certain, sometimes clandestine, purposes.
Well, the answer is complicated. The Navy’s work on Sealab directly created the tech and techniques used every day by saturation divers around the world, many of whom maintain underwater petroleum production equipment. However, that’s very specialized, and even then, a modern remote vehicle is a better choice for many tasks. Just like space is a harsh place to live, so is the ocean floor. Everything corrodes and leaks.
Now, we build space stations, and the day of the station on the ocean floor will either never come or will be in the future. But regardless, the technology developed by these pioneers will inform human undersea operations for the foreseeable future. Meanwhile, robots are cheaper and more effective for nearly any task. Still, there are times when only a human will do.
2026-04-13 19:00:00

After some water intrusion apparently killed one of [electronupdate]’s Amazon Blink Gen 3 cameras he took this opportunity to do a full teardown and analysis of all the major components. Spread across its three PCBs there are no fewer than two wireless ICs and a custom ASIC for all the major processing. There’s also a blog post with easy-to-ogle pictures.
The most basic PCB is effectively just a PCB antenna for the Silicon Labs EZR32 IC on the main PCB, which maintains the ~915 MHz connection with the central hub. The other smaller PCB is a bit surprising in that it contains a Cypress CYW43438 Wi-Fi b/g/n and BT 5.1 chip. This would seem to be used for the setup process, but considering that the camera otherwise communicates through the central hub, it is a bit of a mystery what exactly it is used for.
Finally, the main PCB contains all the major parts, with the custom Amazon Immedia ASIC that’s an integral part of this very low-power camera. Given that two AA cells are enough to run the camera for about two years, using off-the-shelf parts probably wasn’t good enough without some serious customization.
As for why this outdoors-rated camera failed after a few years in the outdoors, the reason appears to be water intrusion via the speaker opening. Why a camera needs a speaker and not just a microphone is left as an exercise for the reader, but maybe it could be useful for yelling at the local kids to get off your darn lawn?
2026-04-13 16:00:00

Of all the remote-control vehicles one can build, a submarine is possibly the hardest: if something goes wrong with almost any other vehicle, it’s easy to recover and repair, but a submarine is a very different affair. This nearly lost [James] of [ProjectAir] his latest project, a 2.7-meter long RC submarine, but it survived to make a few test sails.
Before building the full version, [James] made a test prototype. These submarines use large syringes as ballast tanks, pulling water in and out of the submarine body. The plungers are driven by a lead screw, and have a linear potentiometer for feedback. This can be wired in the same way as a servo motor, making it compatible with the RC controller. The controller receives its signal from an antenna in a buoy tethered to the submarine. Since initial tests worked well, [James] moved on to the full-scale model.
This was made out of radially-arranged acrylic tubes, with all but the top tube left open to the water. At the back of the submarine there were servo-actuated fins and a propeller, which would allow it to steer, ascend, and descend underwater. To waterproof the servo motors, [James] sealed them as much as possible, then filled them with oil. The other water-exposed electronics were either potted in epoxy or coated with a waterproofing compound. During testing, the submarine descended without issue, but was reluctant to resurface. Most of the external components had been 3D printed, and water infiltrated the infill below a certain depth. [James], however, managed to recover it before it was permanently lost, and managed to make a few other dives at a very limited depth.
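The ‘wired in the same way as a servo motor’ arrangement described earlier is just a position feedback loop: read the linear potentiometer, compare against the commanded plunger position, and drive the lead-screw motor to close the error. A minimal proportional-control sketch follows; the function names, gains, and the crude plant model are hypothetical, not from [James]’s actual build:

```python
# Proportional position loop for a syringe ballast tank: the linear
# potentiometer provides plunger position feedback, and the lead-screw
# motor is driven to close the error -- the same scheme a hobby servo
# uses internally. Gains and the plant model are illustrative.

def servo_step(target, position, kp=2.0, max_drive=1.0):
    """One control step: return motor drive (-1..1) steering toward `target`."""
    error = target - position
    drive = kp * error
    return max(-max_drive, min(max_drive, drive))

# Simulated plunger: each step, the drive moves the plunger a little.
position = 0.0
target = 0.8   # commanded plunger position (0 = empty, 1 = full syringe)
for _ in range(50):
    drive = servo_step(target, position)
    position += 0.05 * drive   # crude plant model for the lead screw

print(round(position, 2))   # settles near the 0.8 target
```

Because the potentiometer closes the loop locally, the RC receiver only ever has to send a target position, exactly as it would to an off-the-shelf servo.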
On the other end of the spectrum from an RC submarine, we’ve also seen a rubber band-powered submarine. We’ve also seen a smaller, but more dive-ready RC submarine.
Thanks to [H Hack] for the tip!
2026-04-13 13:00:00

Friends, there will likely come a time in your life when you have trouble sleeping. When this happens, it may behoove you to do some writing, any kind of writing. But consider that a physical journal will force you to turn past pages you’ve already filled, which may leave you deflated if you happen to read them.
So the answer lies in a sort of journalistic deposit box. That’s basically what we have here. [Simon Shimel]’s Bee Write Back writerdeck was inspired by sleepless nights, so you know it’s effective. The form factor is so great for [Simon], in fact, that he has developed more apps and functions for it, including a Claude client.
Inside is a Raspberry Pi Zero 2w, and input comes from an Air40 keyboard with quite awesome low-profile key caps. The display is a 5.5″ AMOLED, which leaves just enough room for a pair of the cutest bees ever. Be sure to check out the short video below, along with the build guide (PDF), and head over to GitHub for the full details.
Want to go even smaller and BYOK? Here’s a cheap writerdeck with an e-ink display.
Thanks to [Kaushlesh Chandel] for the tip!