2026-04-13 22:00:13

A crew lives on a station in a hostile environment. Leaving that environment requires oxygen tanks and specialized gear to deal with pressure differentials. A space station? Nah. A base built on the ocean floor. The US Navy was interested in such a base in the 1960s, and bases like this are a staple of science fiction. But today, we see more space stations than underwater bases. Have you ever wondered why?
Diving deep underwater is a tricky business. At depth, the increased pressure forces gases like nitrogen to dissolve into your blood and tissues. By itself, this isn’t a problem, but ascending is another matter. If the dissolved gas comes out of solution too quickly, it forms bubbles, which cause decompression sickness, commonly called the bends. The exact symptoms vary, but the bends often cause extreme joint pain, fatigue, or a rash. Sometimes people die.
While you might think of the bends as a deep-sea diver’s problem, it can also happen in airplanes and outer space. Any time you go from high pressure to low pressure quickly, you are subject to decompression sickness. Depending on what you are doing, there are different ways to mitigate the problem. For diving, traditionally, you simply don’t surface too quickly.
You dive, do your work, and then head towards the surface, stopping at preset stops to let the pressure equalize gradually. Physics is a bear, though. The longer you stay at a given depth, the longer you have to decompress.
That means you rapidly reach a point of diminishing returns. Suppose you dive to the ocean floor. You spend an hour working. Then you have to spend, say, eight hours gradually rising to the surface. That makes extended operations at significant depth impractical.
George Bond was thinking about all this and had an interesting idea. It is true that, in general, the longer you stay down, the more gas your body absorbs. But it is also true that, eventually, your tissues saturate, and then you don’t absorb any more.
So the counterintuitive insight was not to send a diver down and back up over and over. Instead, you keep the diver under pressure for the entire job and decompress just once, at the end. This is known today as saturation diving.
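If you want a feel for why that works, the classic single-compartment gas-loading model is enough. The sketch below is purely an illustration, not the Navy’s actual decompression math; the half-time and dive profile are arbitrary example numbers.

```python
import math

# A minimal sketch of the single-compartment (Haldane-style) gas-loading model,
# just to show the saturation effect. This is NOT the Navy's actual decompression
# math; the half-time and dive profile below are arbitrary example numbers.

def tissue_tension(initial_atm, ambient_atm, half_time_min, minutes):
    """Dissolved-gas tension in one tissue compartment after `minutes` at constant depth."""
    k = math.log(2) / half_time_min
    return initial_atm + (ambient_atm - initial_atm) * (1 - math.exp(-k * minutes))

# Example: a diver saturated at the surface (~0.79 atm of nitrogen) drops to a
# depth where the inspired nitrogen pressure is roughly 4.7 atm, tracked by a
# slow 240-minute compartment.
for hours in (1, 4, 8, 24, 72):
    p = tissue_tension(0.79, 4.7, half_time_min=240, minutes=hours * 60)
    print(f"{hours:3d} h at depth -> tissue tension ~{p:.2f} atm")
```

The tension creeps toward the ambient value and then flattens out: after a day or so, the diver owes roughly the same decompression whether they stay down another hour or another month, which is exactly the bet saturation diving makes.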
This leads to a new problem: If you plan to send a diver down to the ocean floor for a week, they can’t just hang out in a wetsuit the whole time. They need somewhere to eat, rest, and all the other things you need to do when you aren’t working. They need a base.
It still isn’t as simple as it might seem. There are problems with oxygen toxicity, the effort to breathe under pressure, and other issues. But these are largely solvable.
George Bond did experiments under the project name “Genesis,” where animals and, eventually, people were subjected to high pressures for extended durations. At roughly the same time, Edwin Link (a familiar name if you know about flight simulators) and famed diver Jacques Cousteau were experimenting with long-term saturation diving as well.
As part of a larger plan, Link experimented with placing one person at a modest depth for a day, and Cousteau had a two-person team at greater depths.
The Navy decided to run some experiments to see if Bond’s ideas would work in reality. They started the “man in the sea” experiments that deployed three prototype “sealabs” that were far more ambitious than previous commercial projects.

In 1964, off the coast of Bermuda, the Navy placed an ambient-pressure cylinder 192 feet down. An umbilical connected the habitat to the surface. You’d think the station would be full of air, but nitrogen at high partial pressure causes problems of its own, like narcosis, so instead the divers breathed a helium and oxygen mix.
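Some back-of-envelope partial-pressure math, using the usual rule of thumb of roughly one extra atmosphere per 33 feet of seawater, shows why plain air was never an option down there. The 0.4 atm oxygen target below is just an example figure, not the Navy’s spec.

```python
# Rough partial pressures at Sealab I's depth. The 33-ft-per-atmosphere rule
# of thumb and the 0.4 atm oxygen target are illustrative, not the Navy's specs.
depth_ft = 192
ambient_atm = 1 + depth_ft / 33          # roughly 6.8 atm inside the habitat

ppN2_air = 0.79 * ambient_atm            # ~5.4 atm of nitrogen: serious narcosis
ppO2_air = 0.21 * ambient_atm            # ~1.4 atm of oxygen: toxic over long stays

# With heliox you pick the oxygen fraction to hit a safe partial pressure and
# let helium, which is not narcotic, make up the rest.
o2_fraction = 0.4 / ambient_atm          # only about 6% oxygen in the mix

print(f"ambient {ambient_atm:.1f} atm | air ppN2 {ppN2_air:.1f} atm | "
      f"air ppO2 {ppO2_air:.1f} atm | heliox O2 fraction {o2_fraction:.1%}")
```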
The crew of four was supposed to stay submerged for several weeks. However, an approaching storm cut their stay to only 11 days. Still, the experiment was a success.
It also brought up several problems. If you’ve ever taken a hit of helium, you know it makes your voice squeaky, which can make it difficult to communicate. More importantly, helium is a good conductor of heat, so divers get cold fast hanging out in a helium-rich atmosphere.
You can see a video from the Navy in 1965 describing the program below.
As a side note, former astronaut Scott Carpenter was set to be the fifth person in Sealab I, but a scooter accident in Bermuda bumped him from the roster.
In 1965, the Navy tried again with Sealab II off the coast of La Jolla in California at a depth of around 200 feet. This time, Scott Carpenter made the trip.

Sealab II was more complex with demonstration tasks and a planned mission length of up to 30 days. For a long trip like this, the same problems arise as you’d have in a space station. Carbon dioxide needs scrubbing, and oxygen levels need control. Humidity and corrosion are constant problems. Equipment noise affects people over the long term.
The new habitat was twice as large as Sealab I. There were heaters, hot showers, and refrigeration. The idea was to have a crew that rotated every 15 days, but Carpenter spent 30 days inside.
The Navy also tried to train a bottlenose dolphin — Tuffy — to act as a helper to the crew with mixed results. While the mission, overall, was a success, there were issues with the crew feeling isolated and confined, along with sleep problems due to noise and lights.
Famously, President Lyndon Johnson was to speak to Carpenter after his 30-day stay and called while Carpenter was in a decompression chamber full of helium. The resulting confusion among telephone operators is pretty funny, as you can see in the video below.
The next and final attempt to submerge a crew was Sealab III in 1969. At a depth of about 600 feet — 200 feet beyond the normal planned operation depth — the Sealab III mission reused the Sealab II module, refurbished and upgraded. Five teams of nine divers were scheduled to spend 12 days each in the habitat.
At such a depth, problems magnify and margins for error all but disappear. The Navy was already stretched thin in Vietnam, and Sealab III had a difficult time getting not just off the ground, but under the sea. The project was late and over budget. Work got sloppy, and corners got cut. When the habitat developed a helium leak, four divers volunteered to repair it in place, but failed on their first attempt.
A second attempt had the divers taking amphetamines to stay awake, which went predictably wrong. A diver, Berry Cannon, died. At the time, it was chalked up to improper setup of his rebreather, although a more modern investigation speculates that he may have been electrocuted. Either way, it was enough to end Sealab. The Navy shifted its focus to other undersea work, such as submarines. If there are any undersea bases today, they are too secret for us to know about.
You can see a Navy video showing the progress of Sealab III before the accident below. Unfortunately, the audio track isn’t present, so it isn’t always clear what the message is.
You might wonder why someone didn’t continue this work. We don’t have underwater bases, farms, mines, or hotels. Why not? It is true, of course, that the Navy continued to use limited saturation diving for certain, sometimes clandestine, purposes.
Well, the answer is complicated. The Navy’s work on Sealab directly created the tech and techniques used every day by saturation divers around the world, many of whom maintain underwater petroleum production equipment. However, that’s very specialized, and even then, a modern remote vehicle is a better choice for many tasks. Just like space is a harsh place to live, so is the ocean floor. Everything corrodes and leaks.
Now we build space stations, and the day of the station on the ocean floor either will never come or still lies somewhere in the future. Either way, the technology developed by these pioneers will inform human undersea operations for years to come. Meanwhile, robots are cheaper and more effective for nearly any task. Still, there are times when only a human will do.
2026-04-13 19:00:00

After water intrusion apparently killed one of [electronupdate]’s Amazon Blink Gen 3 cameras, he took the opportunity to do a full teardown and analysis of all the major components. Spread across its three PCBs are no fewer than two wireless ICs and a custom ASIC that handles all the major processing. There’s also a blog post with easy-to-ogle pictures.
The most basic PCB is effectively just a printed antenna for the Silicon Labs EZR32 IC on the main board, which maintains the ~915 MHz connection with the central hub. The other, smaller PCB is a bit surprising: it carries a Cypress CYW43438 Wi-Fi b/g/n and BT 5.1 chip. This would seem to be used for the setup process, but considering that the camera also relies on a central hub, it’s a bit of a mystery what exactly it’s for.
Finally, the main PCB holds all the major parts, including the custom Amazon Immedia ASIC that’s integral to this very low-power camera. Given that two AA cells are enough to run the camera for about two years, off-the-shelf parts probably weren’t going to cut it without some serious customization.
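A quick back-of-envelope calculation shows how tight that power budget is; the cell capacity below is an assumed figure for lithium AAs in series, not a number from the teardown.

```python
# Average current implied by "two AA cells, about two years". The capacity is
# an assumption (lithium AAs in series share one cell's capacity), not a
# figure from [electronupdate]'s teardown.
capacity_mah = 3000                    # assumed usable capacity of the pair
hours = 2 * 365 * 24                   # about 17,500 hours in two years

avg_current_ua = capacity_mah / hours * 1000
print(f"average draw budget: ~{avg_current_ua:.0f} microamps")
```

Something in the neighborhood of 170 microamps on average, which is why the camera has to spend nearly all of its life asleep, and goes a long way toward explaining the appeal of a purpose-built ASIC over a stack of general-purpose chips.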
As for why this outdoors-rated camera failed after a few years outside, the culprit appears to be water intrusion via the speaker opening. Why a camera needs a speaker and not just a microphone is left as an exercise for the reader, but maybe it’s useful for yelling at the local kids to get off your darn lawn?
2026-04-13 16:00:00

Of all the remote-control vehicles one can build, a submarine is possibly the hardest: if something goes wrong with almost any other vehicle, it’s easy to recover and repair, but a submarine is a very different affair. This nearly cost [James] of [ProjectAir] his latest project, a 2.7-meter-long RC submarine, but it survived to make a few test sails.
Before building the full version, [James] made a test prototype. These submarines use large syringes as ballast tanks, pulling water into and out of the submarine body. The plungers are driven by a lead screw and have a linear potentiometer for feedback, so each one can be wired up like a servo motor and driven straight from the RC controller. The receiver gets its signal from an antenna in a buoy tethered to the submarine, since radio doesn’t travel far underwater. With the initial tests working well, [James] moved on to the full-scale model.
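Conceptually, each syringe behaves like an oversized hobby servo: read the potentiometer, compare it to the commanded position, and drive the lead-screw motor until the error disappears. The sketch below simulates that loop; the gains, deadband, and plant model are made up for illustration and are not [ProjectAir]’s actual firmware.

```python
# A minimal sketch of the servo-style feedback the syringe ballast emulates:
# drive the lead-screw motor until the potentiometer reads the commanded
# position. Names, gains, and the simulated plant are illustrative only.

def motor_command(setpoint, pot_reading, kp=4.0, deadband=0.01):
    """Return a motor drive level in [-1, 1] from the position error."""
    error = setpoint - pot_reading
    if abs(error) < deadband:        # close enough: stop, like a hobby servo
        return 0.0
    return max(-1.0, min(1.0, kp * error))

# Crude simulation: plunger position responds to the motor at a fixed rate.
position = 0.0                       # 0 = syringe empty, 1 = full of water
setpoint = 0.7                       # "dive" command from the RC receiver
for step in range(200):
    drive = motor_command(setpoint, position)
    position += 0.01 * drive         # lead screw moves the plunger a little
    if drive == 0.0:
        break
print(f"settled at {position:.2f} after {step} steps")
```

Packaging the feedback this way is what lets the ballast plug into a spare channel on the receiver just like any other servo.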
The full-scale hull was made from radially arranged acrylic tubes, with all but the top tube left open to the water. At the back of the submarine were servo-actuated fins and a propeller, which let it steer, ascend, and descend underwater. To waterproof the servo motors, [James] sealed them as much as possible, then filled them with oil. The other water-exposed electronics were either potted in epoxy or coated with a waterproofing compound. During testing, the submarine descended without issue but was reluctant to resurface: most of the external components had been 3D printed, and below a certain depth water forced its way into the infill, costing the sub its buoyancy. [James] managed to recover it before it was permanently lost, and made a few more dives at a very limited depth.
On the other end of the spectrum from an RC submarine, we’ve seen a rubber band-powered submarine, as well as a smaller but more dive-ready RC craft.
Thanks to [H Hack] for the tip!
2026-04-13 13:00:00

Friends, there will likely come a time in your life when you have trouble sleeping. When this happens, it may behoove you to do some writing, any kind of writing. But consider that a physical journal will force you to turn past pages you’ve already filled, which may leave you deflated if you happen to read them.
So the answer lies in a sort of journaling deposit box, and that’s basically what we have here. [Simon Shimel]’s Bee Write Back writerdeck was inspired by sleepless nights, so you know it’s effective. The form factor is so great for [Simon], in fact, that he has developed more apps and functions for it, including a Claude client.
Inside is a Raspberry Pi Zero 2 W, and input comes from an Air40 keyboard with quite awesome low-profile keycaps. The display is a 5.5″ AMOLED, which leaves just enough room for a pair of the cutest bees ever. Be sure to check out the short video below and the accompanying build guide (PDF), and head over to GitHub for the full details.
Want to go even smaller and BYOK? Here’s a cheap writerdeck with an e-ink display.
Thanks to [Kaushlesh Chandel] for the tip!
2026-04-13 10:00:00

Although the list of uses for a 2009-era Mac Mini isn’t very long, running new and upcoming operating systems like Haiku on one would seem to be an interesting option. This is what [The Phintage Collector] recently took a swing at, using both the 2024 Beta 5 release and a current nightly build. The focus was mostly on the 32-bit build, as this has binary compatibility with BeOS applications, but the 64-bit version of Haiku was of course also installed.
One of the main issues with these Mac systems is that they use EFI for the BIOS, so you’re condemned to either take your chances with the always glitchy CSM ‘classic BIOS’ mode, or make Haiku and EFI get along. While for the 64-bit version of Haiku this wasn’t too much of a struggle, the 32-bit version ran into the problem that the 64-bit EFI BIOS really doesn’t like 32-bit software. After a while, the 32-bit version of Haiku was abandoned, to be revisited later.
With the 64-bit version a lot of things just work, though audio refused to cooperate even with a USB dongle, and there’s no hardware acceleration for graphics, so gaming isn’t really going to happen either. The upside is that as a test system for 64-bit Haiku, such a Mac Mini isn’t too crazy a choice, it being just an Intel system with an Apple-flavored EFI BIOS.
If you’re into giving it a shot yourself, the video description page contains a lot of resources to consult.
2026-04-13 07:00:32

At this point, we’ll assume you already know that four humans took a sightseeing trip around the Moon and made their triumphant return to Earth on Friday. Even if you somehow avoided hearing about it through mainstream channels, we kept a running account of the mission’s highlights stuck to the front page of the site for the ten days that the crew was in space.
On the assumption that you might be a bit burned out with space news at this point, we won’t bring it up in this post… other than to point out that excitement for the lunar flyby has driven the number of simultaneous players of Kerbal Space Program to its highest count ever — nearly 20,000 armchair astronauts spent this weekend trying to cobble together their own rocket in honor of the Artemis II mission.
With so many folks focused on the Moon it would be the perfect time for a company to sneak out some bad news, which is perhaps why Amazon picked this week to announce they would be dropping support for Kindles released before 2012. Presumably there aren’t too many first and second generation Kindles still out there in the wild, but the 2012 cutoff does mean the first iteration of the Paperwhite will be one of the devices being put out to pasture come May 20th.
Amazon says the pre-2012 Kindles that are currently in user’s hands will still function, but they’ll no longer be able to purchase or download new books. The bigger issue is that you won’t be able to register these older devices after May. So if you have to factory reset your own Kindle, or want to buy one on the second hand market that’s already been wiped, you won’t be able to link it to your account to download books you’ve purchased.
Frankly, the idea that Amazon will no longer have their nose in these devices doesn’t bother us one bit; if anything, it sounds like an improvement over the status quo. If you own one of the devices in question, now would be a fantastic time to download Calibre and start managing your own offline ebook library. In fact, even if your Kindle is new enough to dodge this change, you should still download it. Seriously, just use Calibre.
On the subject of software, an entry for XChat has recently popped up on Apple’s App Store. No, not that XChat. Instead of connecting to your favorite IRC server, the new mobile app will let you send messages to… whoever it is that’s still actively using X, formerly Twitter. Confusingly, there’s also an XChat on the Google Play Store, but that appears to be something else altogether.
Finally, we’ve been seeing a lot of chatter online this weekend about France ditching Windows and switching over to Linux. While we applaud any mainstream push towards open source software, it’s worth digging into the details for this one. The directive says that the Interministerial Digital Directorate (DINUM) will be switching its desktop machines over to Linux, but that only represents a few hundred machines.
The experience gained during this roll-out will help shape a larger scale migration in the future, with the rest of the government asked to come up with a migration plan before the end of the year. When those other agencies, and the thousands of machines they use, will actually be penguin-powered is not clear. It’s possible they could come back and say a full migration would take a decade to complete.
So it’s certainly a step in the right direction, but it will likely be quite some time before any significant part of France’s infrastructure is divorced from the Redmond giant.
See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.