2026-03-30 19:00:34

The basic principle of radar systems is simple enough: send a radio signal out, and measure the time it takes for a reflection to return. Given the abundant sources of RF signals – television signals, radio stations, cellular carriers, even Wi-Fi – that surround most of us, it’s not even necessary to transmit your own signal. This is the premise of passive radar, which uses ambient RF transmissions as its source of illumination to form an image. The RF signal doesn’t even need to come from a terrestrial source, as [Jean-Michel Friedt] demonstrated with a passive radar illuminated by the NISAR radar-imaging satellite (pre-print paper).
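That delay measurement is, at its core, a cross-correlation between a reference copy of the illuminating signal and the surveillance channel. A minimal sketch in Python, with made-up signal values standing in for real SDR samples:

```python
import random

def cross_correlate_delay(reference, surveillance):
    """Find the lag at which the surveillance channel best matches
    the reference channel -- the echo's delay in samples."""
    n = len(reference)
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(surveillance) - n + 1):
        score = sum(reference[i] * surveillance[lag + i] for i in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pseudo-random "chirp-like" reference signal...
random.seed(42)
ref = [random.uniform(-1, 1) for _ in range(256)]

# ...and a surveillance channel: the same signal attenuated,
# delayed by 37 samples, and buried in noise.
true_delay = 37
surv = [0.2 * random.uniform(-1, 1) for _ in range(512)]
for i, s in enumerate(ref):
    surv[true_delay + i] += 0.5 * s

print(cross_correlate_delay(ref, surv))  # recovers the 37-sample delay
```

Multiply the recovered sample delay by the sample period and the speed of light and you have a range difference – the rest of passive radar is doing this well, at scale, against noise.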
NISAR is a synthetic-aperture radar satellite jointly built by NASA and ISRO, and it completes a pass over the world every twelve days. It uses an L-band chirp radar signal, which can be picked up with GNSS antennas. One antenna points up towards the satellite, and has a ground plane blocking the signal from directly reaching the second antenna, which picks up reflections from the landscape under observation. Since the satellite would illuminate the scene for less than a minute, [Jean-Michel] had to predict the moment of peak intensity, and achieved an accuracy of about three seconds.
The signals themselves were recorded with an SDR and a Raspberry Pi. High-end, high-resolution SDRs such as the Ettus B210 gave the best results, but an inexpensive homebuilt MAX2771-based SDR also produced recognizable images. This setup won’t be providing any particularly detailed images, but it did accurately show the contours of the local geography – quite a good result for such a simple setup.
If you’re more interested in tracking aircraft than surveying landscapes, check out this ADS-B-synchronized passive radar system. Although passive radar doesn’t require a transmitter license, that doesn’t mean it’s free from legal issues, as the KrakenSDR team can testify.
2026-03-30 16:00:03

Can you charge Li-ion cells that have built-in USB-C charging ports without taking them out of the device they power? While the answer would seem to be an unequivocal ‘yes’, [Colin] recently found out that doing so could easily have destroyed the device they were to be installed in.
After being tasked with finding a better way to keep the electronics of some exercise bikes powered than simply swapping the C cells all the time, [Colin] was led to consider using these Li-ion cells in such a manner. Fortunately, rather than just sticking the whole thing together and calling it a day, he decided to take some measurements to satisfy some burning safety questions.
As it turns out, at least the cells that he tested – charged via a lead with twin USB-C connectors on a single USB-A plug – have all of their negative terminals and USB-C grounds connected together. Since the cells are installed in a typical series configuration in the device, charging them both at once would likely have shorted out one of the cells – making for an interesting outcome indeed. Although you can of course use a separate USB-C lead and charger per cell, it’s still somewhat disconcerting to run them without any kind of electrical isolation.
In this regard, the suggestion by some commenters to use NiMH cells and trickle-charge them in situ – much like garden solar lights do – might be one of the least crazy solutions.
2026-03-30 13:00:32

Anyone who has ever played Nintendo 64 games is probably familiar with the way that large worlds in these games got split up into many loading zones. Another noticeable aspect is the limited drawing distance, which is why even a large open area such as Ocarina of Time’s Hyrule Field has many features that limit how far you can actually see, such as hills and a big farming homestead in the center. Yet as [James Lambert] demonstrates in a recent video, it’s actually possible to create an open world on the N64, including large drawing distances.
As explained in the video, the drawing distance is something that the developer controls, and may want to restrict to hit certain performance goals. In effect the developer sets where the far clipping plane sits, beyond which items are no longer rendered. Of course, there are issues with just pushing the far clipping plane out, as the N64 only has a 15-bit Z-buffer: with so little depth precision you get ‘Z-fighting’, where the render order becomes an issue as it’s no longer clear what is in front of what.
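To get a feel for why that precision runs out, here’s a toy Python calculation – the near and far plane distances are arbitrary, and the simple 1/z mapping stands in for the N64’s actual depth format:

```python
def depth_value(z, near=1.0, far=10_000.0, bits=15):
    """Map an eye-space distance z to a quantized 1/z-style depth value."""
    d = (1 / near - 1 / z) / (1 / near - 1 / far)  # 0 at near plane, 1 at far
    return int(d * (2**bits - 1))

# Near the camera, half a metre of separation is easily resolved...
print(depth_value(10.0), depth_value(10.5))      # two distinct depth values

# ...but far away, surfaces a full kilometre apart collide, and the
# renderer can no longer tell which one is in front: Z-fighting.
print(depth_value(8000.0), depth_value(9000.0))  # the same depth value
```

With only 32,768 distinct depth values stretched over such a deep scene, distant surfaces quantize to the same value and flicker against each other – which is exactly why just pushing the far plane out doesn’t work.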
One fix is to push the near clipping plane further away from the player, but this comes with its own share of issues. Ergo [James] fixed it by doing two render passes: first all the far-away objects with Z-buffer disabled, and then all the nearby objects. These far-away objects can be rendered back-to-front with low level-of-detail (LoD), so this is relatively fast and also saves a lot of RAM, as the N64 is scraping by in this department at the best of times.
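A software sketch of that two-pass approach, with invented one-dimensional ‘objects’ standing in for real 3D geometry:

```python
# Each "object" covers a span of screen columns at some depth.
far_objects  = [("mountain", range(0, 30), 9000), ("hills", range(10, 20), 6000)]
near_objects = [("tree", range(12, 15), 40), ("fence", range(5, 25), 60)]

WIDTH = 30
color_buf = ["sky"] * WIDTH
depth_buf = [float("inf")] * WIDTH

# Pass 1: far objects via the painter's algorithm -- sorted back-to-front
# and simply overwriting pixels, with the Z-buffer left untouched.
for name, span, depth in sorted(far_objects, key=lambda o: -o[2]):
    for x in span:
        color_buf[x] = name

# Pass 2: near objects with Z-buffer testing enabled, so nearby geometry
# sorts correctly among itself and covers the far pass where it overlaps.
for name, span, depth in near_objects:
    for x in span:
        if depth < depth_buf[x]:
            color_buf[x] = name
            depth_buf[x] = depth

print(color_buf[0], color_buf[13], color_buf[7])  # mountain tree fence
```

Note that the far pass needs its objects sorted back-to-front since pixels are simply overwritten, while the near pass can be drawn in any order because the Z-buffer arbitrates per pixel – and only the nearby geometry consumes the scarce depth precision.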
In the video the full details of this rendering approach, as well as a new fog rendering method, are explained, with the code and such available on GitHub for those who wish to tinker with it themselves. [James] and friends intend to develop a full game using this engine as well, so that’s definitely something to look forward to.
2026-03-30 10:00:41

Although generative language models have found little widespread, profitable adoption outside of putting artists out of work and giving tech companies an easy scapegoat for cutting staff, their underlying technology remains a fascinating area of study. Stepping back to the more innocent time of the late 2010s, before the cultural backlash, we could examine these models in their early stages. Or, we could see how even older technology handles these machine learning algorithms in order to understand more about their fundamentals. [Damien Boureille] has put a 60s-era IBM as well as a PDP-11 to work training a transformer algorithm in order to take a closer look at it.
The task [Damien] chose for such old hardware is training his transformer to reverse a list of digits. This is a trivial problem for something like a Python program, but much more difficult for a transformer. The model relies solely on self-attention and a residual connection. To fit within the 32 KB memory limit of the PDP-11, it employs fixed-point arithmetic and lookup tables to replace computationally expensive functions. Training is optimized with hand-tuned learning rates and stochastic gradient descent, achieving 100% accuracy in 350 steps. In practical terms, this means he was able to get the training time down from hours or days to around five minutes.
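The fixed-point-and-lookup-table trick deserves a closer look. Here’s a minimal sketch of the general idea in Python – the Q8.8 format and table size are our own choices for illustration, not necessarily what [Damien] used:

```python
import math

FRAC_BITS = 8            # Q8.8 fixed point: value = raw / 256
ONE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * ONE))

# Precompute exp() once over [-4, +4) at 1/256 resolution, so no
# expensive transcendental math is needed during training itself.
EXP_LUT = [to_fixed(math.exp((i - 4 * ONE) / ONE)) for i in range(8 * ONE)]

def fixed_exp(x_fixed):
    # Clamp into the table's range, then just index into it.
    return EXP_LUT[max(0, min(len(EXP_LUT) - 1, x_fixed + 4 * ONE))]

def fixed_softmax(scores_fixed):
    """Softmax over fixed-point attention scores using integer math only."""
    exps = [fixed_exp(s) for s in scores_fixed]
    total = sum(exps)
    return [e * ONE // total for e in exps]

weights = fixed_softmax([to_fixed(0.5), to_fixed(2.0), to_fixed(-1.0)])
print([w / ONE for w in weights])  # close to the float softmax [0.175, 0.786, 0.039]
```

On a machine without floating-point hardware, the shifts, integer multiplies, and table lookups above map directly onto cheap integer instructions, which is what makes training feasible in minutes rather than days.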
Not only does a project like this help understand these tools, but it also goes a long way towards demonstrating that not every task needs a gigawatt datacenter to be useful. In fact, we’ve seen plenty of large language models and other generative AI running on computers no more powerful than an ESP32 or, if you need slightly more computing power, on consumer-grade PCs with or without GPUs.
2026-03-30 07:00:36

Whether it’s a new couch or a rare piece of hardware picked up on eBay, we all know what it feels like to eagerly await a delivery truck. But the CERN researchers involved in a delivery earlier this week weren’t transporting anyone’s Amazon Prime packages, they were hauling antimatter.
Moving antimatter, specifically antiprotons, via trucks might seem a bit ridiculous. But ultimately CERN wants to transfer samples between various European laboratories, and that means they need a practical and reliable way of getting the temperamental stuff from point A to B. To demonstrate this capability, the researchers loaded a truck with 92 antiprotons and drove it around for 30 minutes. Of course, you can’t just put antiprotons in a cardboard box; the experiment utilized a cryogenically cooled magnetic containment unit that they hope will eventually be able to keep antimatter from rudely annihilating itself on trips lasting as long as 8 hours.
Speaking of deliveries, anyone building a new computer should be careful when ordering components. Shady companies are looking to capitalize on the currently sky-high prices of solid-state drives by counterfeiting popular models, and according to the Japanese site AKIBA PC Hotline, there are some examples in the wild that would fool all but the most advanced users. They examine a bootleg drive that’s a nearly identical replica of the Samsung 990 PRO — the unit and its packaging are basically a mirror image of the real deal, the stated capacity appears valid, and it even exhibits similar performance when put through a basic benchmark test.
But while the drive’s sequential read and write speeds are within striking distance of the official numbers from Samsung, things start to fall apart when doing random speed tests or performing real-world operations. It took the fake drive over 25 minutes to write a 370 GB file, while the authentic one ripped through the same file in less than four – true write speeds of 261 MB/s and 1,861 MB/s, respectively.
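Those figures are consistent if you read the file size in binary gigabytes (GiB) and the speeds in decimal MB/s – an assumption on our part, but one that makes the arithmetic line up:

```python
FILE_BYTES = 370 * 2**30          # 370 GiB, assuming binary units

def minutes_to_write(speed_mb_per_s):
    """Time to write the test file at a given sustained speed (decimal MB/s)."""
    return FILE_BYTES / (speed_mb_per_s * 1_000_000) / 60

print(round(minutes_to_write(261), 1))   # fake drive:  ~25.4 minutes
print(round(minutes_to_write(1861), 1))  # real drive:   ~3.6 minutes
```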
Luckily, you don’t have to time how long it takes to dump 100+ GB of data onto the drive just to see if it’s legitimate: Samsung offers a tool that can communicate with the drive and determine whether it’s an original or not. If they don’t already, we imagine other manufacturers will roll out similar capabilities in an effort to combat these sophisticated clones.
Of course, computers aren’t the only things in our modern world that are impacted by the rising prices of memory and flash storage. On Friday, Sony announced that they would be implementing higher prices across their PlayStation line starting this week to compensate for what they call “pressures in the global economic landscape.”
Starting April 2nd (presumably they didn’t want consumers to think this was a joke), the base model PS5 will be bumped up to $649.99 in the US and €649.99 in Europe, while the PS5 Pro will be set at an eye-watering 899.99 in both currencies. Admittedly we’ve done absolutely no research to support this, but surely that must make the latter system the most expensive home game console in history by a considerable margin. In comparison, Microsoft’s top-of-the-line Xbox Series X is currently priced at $799, though the model with the smaller 1 TB drive is still available for $649.
One might think that the skyrocketing cost of memory would force developers to take a lesson from the early days of computing, and usher in a new era of highly optimized code that manages to do more with less. That would be nice. Instead, we now have DOOM rendered in the browser using CSS.
As Niels Leenheer explains in the write-up, the original goal was to have the entire game running in CSS. But he quickly ran into issues trying to implement the game logic. So he settled for letting Claude port the open source C code for the base game over to JavaScript, which freed him up to work on doing the graphics in CSS.
If you’re interested in web development it’s a fascinating look at how far the modern browser can be pushed, and even if you don’t, it’s a surprisingly smooth way to play the classic shooter without having to install anything.
Lastly, the public is finally getting some information about the health scare aboard the International Space Station that triggered the first-ever medical evacuation from the orbiting laboratory back in January. As we predicted in our previous coverage, NASA was unwilling to put personal information about one of their astronauts on the public record, and has remained tight-lipped about the situation. So it was Crew-11 Pilot Mike Fincke himself who decided to not only come forward as the individual who experienced the issue, but to detail what he went through in an interview with the Associated Press.
So what happened? Well, nobody is quite sure yet. Fincke says he was eating dinner the night before he was scheduled to go on a spacewalk outside the Station, and suddenly realized he couldn’t speak. His crewmates realized he was in distress, and contacted medical personnel at Mission Control on his behalf. Testing performed both on the Station and back on Earth has yet to provide any explanation for the episode. It lasted approximately 20 minutes, and he’s experienced no issues since. Space is kinda crazy like that sometimes.
See something interesting that you think would be a good fit for our weekly Links column? Drop us a line, we’d love to hear about it.
2026-03-30 04:00:22

Although GNSS systems like GPS have made pin-pointing locations on Earth’s sphere-approximating surface significantly easier and more precise, it’s always possible to go a bit further. The latest innovation involves strapping laser retroreflector arrays (LRAs) to newly launched GPS satellites, enabling ground-based lasers to accurately determine the distance to these satellites.
Similar to the retroreflector array that was left on the Moon during the Apollo missions, these LRAs will be most helpful with scientific pursuits, such as geodesy. This is the science of studying Earth’s shape, gravity and rotation over time, which is information that is also incredibly useful for Earth-observing satellites.
Laser ranging is also essential for determining the geocentric orbit of a satellite, which enables precise calibration of altimeters and increases the accuracy of long-term measurements. Now that the newly launched GPS III SV-09 satellite is operational, this means more information for NASA’s geodesy project, and increased accuracy for GPS measurements as more of its still-to-be-launched satellites are equipped with LRAs.
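The ranging itself is conceptually simple: time a laser pulse’s round trip to the retroreflector and back, then halve it. A quick back-of-the-envelope sketch, using the roughly 20,200 km altitude of the GPS constellation:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def range_from_round_trip(t_seconds):
    """One-way distance recovered from a timed laser round trip."""
    return C * t_seconds / 2

# For a satellite roughly 20,200 km overhead, the round trip is ~135 ms.
altitude_m = 20_200_000
round_trip = 2 * altitude_m / C
print(f"{round_trip * 1000:.1f} ms")                         # ~134.8 ms
print(f"{range_from_round_trip(round_trip) / 1000:.0f} km")  # recovers ~20200 km
```

The hard part is the clock: every nanosecond of timing error corresponds to about 15 cm of range error, which is why nanosecond-class timestamping is what makes centimetre-level ranging of these satellites possible.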