2026-03-05 03:30:50

This week Jonathan chats with Philippe Humeau about CrowdSec! That company created a Web Application Firewall as an Open Source project, and now runs it as a Multiplayer Firewall. What does that mean, and how has it worked out as a business concept? Watch to find out!
Did you know you can watch the live recording of the show right on our YouTube Channel? Have someone you’d like us to interview? Let us know, or have the guest contact us! Take a look at the schedule here.
Direct Download in DRM-free MP3.
If you’d rather read along, here’s the transcript for this week’s episode.
Theme music: “Newer Wave” Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
2026-03-05 02:00:17


If you’ve used Linux for a long time, you know that we are spoiled these days. Getting a new piece of hardware back in the day was often a horrible affair, requiring custom kernels and lots of work. Today, it should be easier. The default drivers on most distros cover a lot of ground, kernel modules make adding drivers easier, and dkms can automate the building of modules for specific kernels, even if it isn’t perfect.
So ordering a cheap WiFi dongle to improve your old laptop’s network connection should be easy, right? Obviously, the answer is no or this would be a very short post.
The USB dongle in question is a newish TP-Link Archer TX50U. It is probably perfectly serviceable for a Windows computer, and I got a “deal” on it. Plugging it in caused it to show up in the list of USB devices, but no driver attached to it, nor were any lights on the device blinking. Bad sign. Pro tip: lsusb -t will show you what drivers are attached to which devices. If you see a device with no driver, you know you have a problem. Use -tv if you want a little more detail.
The lsusb output shows the device as a Realtek, so that tells you a little about the chipset inside. Unfortunately, it doesn’t tell you exactly which chip is in use.
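If you want to script around this, the vendor:product pair is easy to pull out of the lsusb output with sed. The line below is a stand-in, not this dongle’s verbatim output, but it uses the 37ad:0103 IDs the article’s adapter reports:

```shell
# A representative lsusb line -- a stand-in, not the dongle's verbatim output.
line='Bus 001 Device 004: ID 37ad:0103 TP-Link 802.11ax WLAN Adapter'

# Extract the vendor:product pair that a driver would have to match.
ids=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]*:[0-9a-f]*\).*/\1/p')
echo "$ids"   # 37ad:0103
```

That pair is exactly what gets compared against the driver’s ID table later on.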

My first attempt to install a Realtek driver from GitHub failed because it was for what turned out to be the wrong chipset. But I did find info that the adapter had an RTL8832CU chip inside. Armed with that nugget, I found [morrownr] had several versions, and I picked up the latest one.
Problem solved? Turns out, no. I should have read the documentation, but, of course, I didn’t. So after going through the build, I still had a dead dongle with no driver or blinking lights.
Then I decided to read the file in the repository that tells you what USB IDs the driver supports. According to that file, the code matches several Realtek IDs, an MSI device, one from Sihai Lianzong, and three from TP-Link. All of the TP-Link devices use the 35B2 vendor ID, and the last two of those use device IDs of 0101 and 0102.
Suspiciously, my dongle uses 0103 but with a vendor ID of 37AD. Still, it seemed like it would be worth a shot. I did a recursive grep for 0x0102 and found a table that sets the USB IDs in os_dep/linux/usb_intf.c.
Of course, since I had already installed the driver, I had to change the dkms source, not the download from GitHub. That was, on my system, in /usr/src/rtl8852cu-v1.19.22-103/os_dep/linux/usb_intf.c. I copied the 0x0102 line and changed both IDs so there was now a 0x0103 line, too:
{USB_DEVICE_AND_INTERFACE_INFO(0x37ad, 0x0103, 0xff, 0xff, 0xff), .driver_info = RTL8852C},
/* TP-Link Archer TX50U */
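The copy-and-tweak edit above can also be scripted. This sketch applies it to a scratch file holding one representative table entry (the real table lives in os_dep/linux/usb_intf.c, and the real file has many more lines around it):

```shell
# Scratch copy of one ID-table entry; the real file is os_dep/linux/usb_intf.c.
f=$(mktemp)
cat > "$f" <<'EOF'
{USB_DEVICE_AND_INTERFACE_INFO(0x35b2, 0x0102, 0xff, 0xff, 0xff), .driver_info = RTL8852C},
EOF

# Duplicate the 0x0102 entry, swapping in the new vendor and device IDs.
newline=$(sed -n 's/0x35b2, 0x0102/0x37ad, 0x0103/p' "$f")
printf '%s\n' "$newline" >> "$f"

grep -c 'USB_DEVICE_AND_INTERFACE_INFO' "$f"   # 2
```

After the edit, the table carries both the stock 35b2:0102 entry and the new 37ad:0103 one, so either device will bind.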
Now it was a simple matter of asking dkms to rebuild and reinstall the driver. Blinking lights were a good sign and, in fact, it worked and worked well.
If you haven’t used DKMS much, it is a reasonable system that can rebuild drivers for specific Linux kernels. It basically copies each driver and version to a directory (usually /usr/src) and then has ways to build them against your kernel’s symbols and produce loadable modules.
The system also maintains a build/install state database in /var/lib. A module is “added” to DKMS, then “built” for one or more kernels, and finally “installed” into the corresponding location for use by that kernel. When a new kernel appears, DKMS detects the event — usually via package manager hooks or distribution-specific kernel install triggers — and automatically rebuilds registered modules against the new kernel headers. The system tracks which module versions are associated with which kernels, allowing parallel kernel installations without conflicts. This separation of source registration from per-kernel builds is what allows DKMS to scale cleanly across multiple kernel versions.
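That add/build/install cycle maps directly onto dkms subcommands. Here’s a minimal sketch, using this driver’s source directory name as the module/version pair — that’s an assumption from this article’s /usr/src path, so verify yours with `dkms status`:

```shell
# DKMS lifecycle for an out-of-tree driver. The module/version here assumes
# the source sits in /usr/src/rtl8852cu-v1.19.22-103, as in this article.
dkms_refresh() {
  local mod="rtl8852cu/v1.19.22-103"
  dkms remove  "$mod" --all || true  # drop any stale builds first
  dkms add     "$mod"                # register the source tree with DKMS
  dkms build   "$mod"                # compile against the running kernel
  dkms install "$mod"                # place the .ko under /lib/modules/...
}

# Only meaningful (and needing root) on a machine that actually has dkms:
if command -v dkms >/dev/null; then dkms_refresh; else echo "dkms not installed"; fi
```

A `dkms autoinstall` run, usually triggered by kernel package hooks, does the build/install step for every registered module on a new kernel.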
If you didn’t use DKMS, you’d have to manually rebuild kernel modules every time you did a kernel update. That would be very inconvenient for important things like video drivers.
Of course, not everything is rosy. The NVIDIA drivers, for example, often depend on internal kernel APIs that are prone to change in future Linux kernels. So one day, you get a kernel update, reboot, and you have no screen. DKMS is the first place to check. You’ll probably find it has some errors when building the graphics drivers.
Your choices are to look for a new driver, see if you can patch the old driver, or roll back to a previous working kernel. Sometimes the changes are almost trivial, like when an API changes names. Sometimes they are massive changes and you really do want to wait for the next release. So while DKMS helps, it doesn’t solve all problems all the time.
I skipped over the part of turning off secure boot because I was too lazy to add a signing key to my BIOS. I’ll probably go back and do that later. Probably.
You have to wonder why this is so hard. There is already a way to pass options to a module. It seems like you might as well let a user jam a USB ID in. Sure, that wouldn’t have helped for the enumeration case, but it would have been perfectly fine by me if I had just had to use a modprobe or insmod parameter to make the card work. Even though I’m set up for rebuilding kernel modules and kernels, many people aren’t, and it seems silly to force them to recompile for a minor change like this.
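For what it’s worth, the USB core does expose exactly such a knob for loaded drivers: a writable new_id file in sysfs. Whether it works for a given out-of-tree driver is another matter, since the driver may key behavior off per-ID driver_info; newer kernels let you name a reference ID to copy that from. A hedged sketch — the driver directory name below is a guess, so list /sys/bus/usb/drivers/ for the real one:

```shell
# Hypothetical sysfs path -- the directory name depends on how the driver
# registered itself; list /sys/bus/usb/drivers/ to find the real one.
drv=/sys/bus/usb/drivers/rtl8852cu

if [ -w "$drv/new_id" ]; then
  # Format: vendor product class ref_vendor ref_product. The trailing pair
  # asks the kernel to reuse the driver_info of an ID the driver already knows.
  echo "37ad 0103 ff 35b2 0102" > "$drv/new_id"
  note="dynamic ID added"
else
  note="driver not loaded (or not running as root)"
fi
echo "$note"
```

The catch, of course, is that the driver has to be loaded and close enough to correct for the new ID — which wasn’t a given here.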
Of course, another fun answer would be to have vendors actually support their devices for Linux. Wouldn’t that be nice?
You could write your own drivers if you have sufficient documentation or the desire to reverse-engineer the Windows drivers. But it can take a long time. User-space drivers are a little less scary, and some people like using Rust.
What’s your Linux hardware driver nightmare story? We know you have one. Let us hear about it in the comments.
2026-03-05 00:30:45

Last summer we took a look at FreeDOS as part of the Daily Drivers series, and found a faster and more complete successor to the DOS of old. The sojourn into the 16-bit OS wasn’t perfect though, as we couldn’t find drivers for the 2010-era network card on our newly DOS-ified netbook. Here’s [Inkbox] following the same path, and bringing with it a fix for that networking issue.
The video below is an affectionate look at the OS alongside coding a TRON clone in assembler, and it shows a capable environment within the limitations of the 16-bit mode. The modern laptop here can’t emulate a BIOS as it’s UEFI only, and after trying a UEFI-to-BIOS emulator with limited success, he hits on a different approach. With just enough Linux to support QEMU, he has a lightweight and extremely fast x86 BIOS platform with the advantage of legacy emulation of network cards and the like.
The point of Daily Drivers is wherever possible to use real hardware and not an emulator, as it’s trying to be the machine you’d use day to day. But we can see in a world where a BIOS is no longer a thing it becomes ever more necessary to improvise, and this approach is better than just firing up an emulator from a full-fat Linux desktop. If you fancy giving it a try, it seems less painful than the route we took.
You can read our look at FreeDOS 1.4 here.
FreeDOS logo: Bas Snabilie for the FreeDOS Project, CC BY 2.5.
2026-03-04 23:00:26

In their recent announcement, NASA has made official what pretty much anyone following the Artemis lunar program could have told you years ago — humans won’t be landing on the Moon in 2027.
It was always an ambitious timeline, especially given the scope of the mission. It wouldn’t be enough to revisit the Moon in a spidery lander that could only hold two crew members and a few hundred kilograms of gear like in the 60s. This time, NASA wants to return to the lunar surface with hardware capable of setting up a sustained human presence. That means a new breed of lander that dwarfs anything the agency, or humanity for that matter, has ever tried to place on another celestial body.
Unsurprisingly, developing such vehicles and making sure they’re safe for crewed missions takes time and requires extensive testing. The simple fact is that the landers, being built by SpaceX and Blue Origin, won’t be ready in time to support the original Artemis III landing in 2027. Additionally, development of the new lunar extravehicular activity (EVA) suits by Axiom Space has fallen behind schedule. So even if one of the landers had been ready to fly, the crew wouldn’t have the suits they need to actually leave the vehicle and work on the surface.
But while the Artemis spacecraft and EVA suits might be state of the art, NASA’s revised timeline for the program is taking a clear step back in time, hewing closer to the phased approach used during Apollo. This not only provides their various commercial partners with more time to work on their respective contributions, but critically, provides an opportunity to test them in space before committing to a crewed landing.
Given its imminent launch, there are no changes planned for the upcoming Artemis II mission. In fact, had there not been delays in getting the Space Launch System (SLS) rocket ready for launch, the mission would have already flown by now. Given how slow the gears of government tend to turn, one wonders if the original plan was to announce these program revisions after the conclusion of the mission. The launch is currently slated for April, but could always slip again if more issues arise.

At any rate, the goals for Artemis II have always been fairly well-aligned with its Apollo counterpart, Apollo 8. Just like the 1968 mission, this flight is designed to test the crew capsule and collect real-world experience in the vicinity of the Moon, but without the added complexity of attempting a landing. Although now, as then, the decision to test the crew capsule without its lander wasn’t made purely out of an abundance of caution.
As originally envisioned, Apollo 8 would have seen both the command and service module (CSM) and the lunar module (LM) tested in low Earth orbit. But due to delays in LM production, it was decided to fly the completed CSM without a lander on a modified mission that would put it into orbit around the Moon. This would give NASA an opportunity to demonstrate the critical translunar injection (TLI) maneuver and gain experience operating the CSM in lunar orbit — tasks which were originally scheduled to be part of the later Apollo 10 mission.
In comparison, Artemis II was always intended to be flown with only the Orion crew capsule. NASA’s goal has been to keep the program relatively agnostic when it came to landers, with the hope being that private industry would furnish an array of vehicles from which the agency could choose depending on the mission parameters. The Orion capsule would simply ferry crews to the vicinity of the Moon, where they would transfer over to the lander — either by docking directly, or by using the Lunar Gateway station as a rallying point.
There’s no lander waiting at the Moon for Artemis II, and the fate of Lunar Gateway is still uncertain. But for now, that’s not important. On this mission, NASA just wants to demonstrate that the Orion capsule can take a crew of four to the Moon and bring them back home safely.
For Artemis III, the previous plan was to have the Orion capsule mate up with a modified version of SpaceX’s Starship — known in NASA parlance as the Human Landing System (HLS) — which would then take the crew down to the lunar surface. While the HLS contract did stipulate that SpaceX was to perform an autonomous demonstration landing before Artemis III, the aggressive nature of the overall timeline made no provision for testing the lander with a crew onboard ahead of the actual landing attempt — a risky plan even in the best of circumstances.

The newly announced timeline resolves this issue not only by delaying the actual Moon landing until 2028, when it will take place during Artemis IV, but by turning Artemis III into a test flight of the lander in the relative safety of low Earth orbit in 2027. The crew will lift off from Kennedy Space Center and rendezvous with the lander in orbit. Once docked, the crew will practice maneuvering the mated vehicles and potentially perform an EVA to test Axiom’s space suits.
This new plan closely follows the example of Apollo 9, which saw the CSM and LM tested together in Earth orbit. At this point in the program, the CSM had already been thoroughly tested, but the LM had never flown in space or had a crew onboard. After the two craft docked, the crew performed several demonstrations, such as verifying that the mated craft could be maneuvered with both the CSM and LM propulsion systems.
The two craft then separated, and the LM was flown independently for several hours before once again docking with the CSM. The crew also performed a brief EVA to test the Portable Life Support System (PLSS) which would eventually be used on the lunar surface.

While the Artemis III and Apollo 9 missions have a lot in common, there’s at least one big difference. At this point, NASA isn’t committing to one particular lander. If Blue Origin gets their hardware flying before SpaceX, that’s what they’ll go with. There’s even a possibility, albeit remote, that they could test both landers during the mission.
After the success of Apollo 9, there was consideration given to making the first landing attempt on the following mission. But key members of NASA such as Director of Flight Operations Christopher C. Kraft felt there was still more to learn about operating the spacecraft in lunar orbit, and it was ultimately decided to make Apollo 10 a dress rehearsal for the actual landing.
The CSM and LM would head to the Moon, separate, and go through the motions of preparing to land. The LM would begin its descent to the lunar surface, but stop at an altitude of 14.4 kilometers (9 miles). After taking pictures of the intended landing site, it would return to the CSM and the crew would prepare for the return trip to Earth. With these maneuvers demonstrated, NASA felt confident enough to schedule the history-making landing for the next mission, Apollo 11.
But this time around, NASA will take that first option. Rather than do a test run out to the Moon with the Orion capsule and attached lander, the plan is to make the first landing attempt on Artemis IV. This is partially because we now have a more complete understanding of orbital rendezvous and related maneuvers in lunar orbit. But also because by this point, SpaceX and Blue Origin should have already completed their autonomous demonstration missions to prove the capabilities of their respective landers.
At this point, the plans for anything beyond Artemis IV are at best speculative. NASA says they will work to increase mission cadence, which includes streamlining SLS operations so the megarocket can be launched at least once per year, and work towards establishing a permanent presence on the Moon. But of course none of that can happen until these early Artemis missions have been successfully executed. Until then it’s all just hypothetical.
While Apollo was an incredible success, one can only follow its example so far. Despite some grand plans, the program petered out once it was clear the Soviet Union was no longer in the game. It cemented NASA’s position as the preeminent space agency, but the dream of exploring the lunar surface and establishing an outpost remained unfulfilled. With China providing a modern space rival, and commercial partners rapidly innovating, perhaps Artemis may be able to succeed where Apollo fell short.
2026-03-04 20:00:30

Phase-coherent lasers are crucial for many precision tasks, including timekeeping. Here on Earth the most stable optical oscillators are used in e.g. atomic clocks and many ultra-precise scientific measurements, such as gravitational wave detection. Since these optical oscillators use cryogenic silicon cavities, it’s completely logical to take this principle and build a cryogenic silicon cavity laser on the Moon.
In the pre-print article by [Jun Ye] et al., the researchers go through the design parameters and construction details of such a device in one of the permanently shadowed regions (PSRs) of the Moon, as well as the applications for it. This would include the establishment of a very precise lunar clock, optical interferometry and various other scientific and telecommunication applications.
Although these PSRs are briefly called ‘cold’ in the paper’s abstract, this is fortunately quickly corrected, as the right term is ‘well-insulated’. These PSRs on the lunar surface never get to warm up, as there is no atmosphere to carry thermal energy into them, and the Sun’s warm rays never pierce their darkness either. Thus, with some radiators to shed what little thermal energy the system generates and the typical three layers of thermal shielding, it should stay very much cryogenic.
Add to this the natural vacuum on the lunar surface, with PSRs even escaping the solar wind’s particles, and maintaining a cryogenic, ultra-high vacuum inside the silicon cavity should be a snap, with less noise than on Earth. Whether we’ll see this deployed to the Moon any time soon remains to be seen, but with various manned missions and even Moon colony plans in the works, this could be just one of the many technologies to be deployed on the lunar surface over the next few decades.
2026-03-04 17:00:32

You may or may not be reading this on a smartphone, but odds are that even if you aren’t, you own one. Well, possess one, anyway — it’s debatable if the locked-down, one-way relationships we have with our addiction slabs count as ownership. [LuckyBor], aka [Breezy], on the other hand — fully owns his 4G smartphone, because he made it himself.
OK, sure, it’s only rocking a 4G modem, not 5G. But with an ESP32-S3 for a brain, that’s probably going to provide plenty of bandwidth. It does what you expect from a phone: thanks to its SIMCom A7682E modem, it can call and text. The OV2640 Arducam module allows it to take pictures, and yes, it surfs the web. It even has features certain flagship phones lack, like a 3.5 mm audio jack, and with its 3.5″ touchscreen, the ability to fit in your pocket. Well, once it gets a case, anyway.

This is just an alpha version, a brick of layered modules. [LuckyBor] plans on fitting everything into a slimmer form factor with a four-layer PCB that will also include an SD-card adapter, and will open-source the design at that time, both hardware and software. Since [LuckyBor] has also promised the world documentation, we don’t mind waiting a few months.
It’s always good to see another open-source option, and this one has us especially chuffed. Sure, we’ve written about postmarketOS and other Linux options like Nix, and someone even put the Rust-based Redox OS on a phone, but those are still on the same potentially-backdoored commercial hardware. That’s why this project is so great, even if its performance is decidedly weak compared to flagship phones that have as much horsepower as some of our laptops.
We very much hope [LuckyBor] carries through with the aforementioned promise to open source the design.