2026-03-13 07:00:05

Air hockey is one of those sports that’s both incredibly fun and incredibly frustrating, since playing it by yourself is a rather lonely and unfulfilling experience. This is where an air hockey playing robot like the one by [Basement Builds] could come in handy. After all, once you’ve finished building an air hockey table from scratch, how hard could it be to make a robot that merely moves a paddle around to hit the puck?
An air hockey table is indeed not terribly complicated, being mostly just a chamber with lots of small holes in the top through which air is pushed. This creates the thin air layer on which the puck appears to float, allowing for super-fast movement. For this part countless chamfered holes were drilled to get smooth airflow, with an inline 12 VDC duct fan providing up to 270 CFM (~7.6 m³/minute).
Initially the robot used a CoreXY gantry configuration, which proved unreliable and rather cumbersome, so it was replaced with two motors, each connected to its own gearbox. These set the paddle’s position by changing the geometry of the arms. Interestingly, the gearboxes use TPU for their gears to absorb impacts and improve endurance, as pure PLA gears ended up falling apart.
The position of the puck is tracked by an overhead camera, from which a Python script – using the OpenCV library and running on a PC – works out how the arms should move; those moves are then executed by Arduino C++ code running on a board attached to the robot. All of this is available on GitHub, which, as the video makes clear, is basically cheating, since you don’t get to enjoy all the trigonometry, physics calculations, and debugging fun yourself.
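For a taste of what the vision side of such a build involves, here’s a minimal OpenCV puck tracker in Python. To be clear, this is just a sketch and not [Basement Builds]’s actual code; the colour thresholds, camera index, and the simple largest-blob approach are all assumptions:

```python
import cv2
import numpy as np

# Assumed HSV range for a red puck; real values depend on puck colour and lighting
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # overhead camera; the index is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Threshold in HSV space to isolate the puck, then clean up the mask
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Treat the largest blob as the puck and report its centre in pixel coordinates
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        puck = max(contours, key=cv2.contourArea)
        (x, y), _radius = cv2.minEnclosingCircle(puck)
        print(f"puck at pixel ({x:.0f}, {y:.0f})")  # this is what would feed the arm maths

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In a real build those pixel coordinates would still need to be mapped onto table coordinates and turned into arm angles before anything gets sent to the Arduino, which is where the trigonometry fun comes in.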
2026-03-13 04:00:27

Sound! It’s a thing you hear, more so than something you see with your eyes. And yet, it is possible to visualize sound with various techniques. [PlasmatronX] demonstrates this well, using a special scanning technique to visually capture the sound field inside an acoustic levitation device.
If you’re unfamiliar, acoustic levitation devices like this use ultrasound to create standing waves that can hold small, lightweight particles in mid-air. The various nodes of the standing wave are where particles will end up hovering. [PlasmatronX] was trying to calibrate such a device, but it proved difficult without being able to see what was going on with the sound field. Hence, the desire to image it!
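To put a number on that spacing: assuming the 40 kHz ultrasonic transducers these levitators typically use, the wavelength in air works out to roughly 343 m/s ÷ 40 kHz ≈ 8.6 mm, so adjacent nodes (and therefore the hovering particles) sit about 4.3 mm apart, half a wavelength from one another.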
Imaging the sound field was achieved with a Schlieren optical setup, which can capture variations in air density as changes in brightness in an image. Normally, Schlieren imaging only works in a two-dimensional slice. However, [PlasmatronX] was able to lean on computed tomography techniques to create a volumetric representation of the sound field in 3D. He refers to this as “computerized acoustical tomography.” Images were captured of the acoustic levitation rig from different angles using the Schlieren optics rig, and then the images were processed in Python to recreate a 3D image of the sound field.
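As a rough sketch of how that reconstruction step can work (this isn’t [PlasmatronX]’s actual pipeline; the file names, angle count, and use of scikit-image’s filtered back projection are assumptions), each horizontal row of the Schlieren frames, taken across all rotation angles, forms a sinogram that can be back-projected into one slice of the volume:

```python
import numpy as np
import imageio.v3 as iio
from skimage.transform import iradon

# Hypothetical stack of grayscale Schlieren frames, one per rotation angle
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
frames = np.stack([iio.imread(f"schlieren_{i:03d}.png").astype(float)
                   for i in range(len(angles))])   # shape: (angles, rows, cols)

# Reconstruct the volume one horizontal slice at a time: the same image row
# across every angle forms the sinogram for that slice
slices = []
for row in range(frames.shape[1]):
    sinogram = frames[:, row, :].T                  # iradon expects (detector, angles)
    slices.append(iradon(sinogram, theta=angles, filter_name="ramp"))

volume = np.stack(slices)                           # 3D sound field, stacked by height
print(volume.shape)
```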
We’ve seen some other entertaining applications of computed tomography techniques before, like inspecting packets of Pokemon cards. Video after the break.
2026-03-13 02:30:01

The early history of colour TV had several false starts, of which perhaps one of the most interesting might-have-beens was the CBS field-sequential system. This was a rival to the nascent system that would become NTSC; instead of encoding red, green, and blue all at once for each pixel, it sent them in sequential frames.
The Korean War halted colour TV development for its duration in the early 1950s, and by the end of hostilities NTSC had matured into what we know today, so field-sequential colour became a historical footnote. But what if it had survived? [Nicole Express] dives into this alternative history, with a look at how a field-sequential 8-bit home computer might have worked.
The CBS system had a much higher line frequency in order to squeeze in those extra frames without lowering the overall frame rate, so given the clock speeds of the 8-bit era it rapidly becomes obvious that a field-sequential computer would be restricted to a lower pixel resolution than its NTSC cousin. The fantasy computer under discussion leans heavily on the Apple II, and the clock scheme of that machine gets an in-depth look.
While it would have been possible with the faster memory chips of the day to achieve a higher resolution, the conclusion is that the processor itself wasn’t up to matching the required speed. So the field-sequential computer would end up with wide pixels. After a look at a Breakout clone and how a field-sequential Atari 2600 might have worked, there’s a conclusion that field-sequential 8-bit machines would not be as practical as their NTSC cousins. From where we’re sitting we’d expect them to have used dedicated field-sequential CRT controller chips to take away some of the heartache, but such fantasy silicon really is pushing the boundaries.
Meanwhile, although field-sequential broadcast TV never made it, we do have field-sequential TV here in 2026, in the form of DLP projectors. We’ve seen their spinning filter disks in a project or two.
1950 CBS color logo: Archive.org, CC0.
2026-03-12 23:30:50

Classic Mac OS was prized for its clean, accessible GUI when it first hit the scene in the 1980s. Back then, developers hadn’t even conceived of all the weird gewgaws that would eventually be shoehorned into modern operating systems, least of all AI agents that seem to be permeating everything these days. And yet! [SeanFDZ] found a way to cram Claude or other AI agents into the vintage Mac world.
The result of [Sean]’s work is AgentBridge, a tool for interfacing modern AI agents with vintage Mac OS (7-9). AgentBridge itself runs as an application within Mac OS. It works by reading and writing text files in a shared folder, which can also be accessed by Claude or whichever AI agent is in use. AgentBridge takes commands from its “inbox”, executes them via the Mac Toolbox, and then writes outputs to its “outbox” where they can be picked up and processed by the AI agent. The specifics of how the shared folder works are up to you—you can use a network share, a shared folder in an emulation environment, or just about any other setup that lets the AI agent and AgentBridge access the same folder.
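The exact command format is documented in the AgentBridge repository, but the modern side of the bridge boils down to plain file I/O. As a purely hypothetical sketch (the folder paths and command syntax below are invented for illustration and are not AgentBridge’s real format), the agent-facing half could look like this:

```python
import time
from pathlib import Path

SHARE = Path("/Volumes/MacShare")            # hypothetical shared folder
INBOX, OUTBOX = SHARE / "inbox", SHARE / "outbox"

def send_command(name: str, body: str) -> str:
    """Drop a command file for the vintage Mac and wait for AgentBridge's reply."""
    (INBOX / f"{name}.txt").write_text(body)
    reply = OUTBOX / f"{name}.txt"
    while not reply.exists():                # poll until the Mac side writes a result
        time.sleep(1)
    return reply.read_text()

# Made-up example command; the real syntax is defined by AgentBridge itself
print(send_command("job-001", "LIST FOLDER Macintosh HD:"))
```

Anything that can read and write files in that folder, from a network share to an emulator’s shared directory, can drive the bridge the same way.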
It’s hard to imagine any mainstream use cases for having a fleet of AI-controlled Macintosh SE/30s. Still, that doesn’t mean we don’t find the concept hilarious. Meanwhile, have you considered the prospect of artificial intelligence running on the Commodore 64?
2026-03-12 22:00:44

Released in 2016, Pokemon Go quickly became a worldwide phenomenon. Even folks who weren’t traditionally interested in the monster-taming franchise were wandering around with their smartphones out, on the hunt for virtual creatures that would appear via augmented reality. Although the number of active users has dropped over the years, it’s estimated that more than 50 million users currently log in and play every month.
From a gameplay standpoint, Go is brilliant. Although the Pokemon that players seek out obviously aren’t real, searching for them closely approximates the in-game experience that the franchise has been known for since its introduction on the Game Boy back in 1996.
But now, instead of moving a character through a virtual landscape in search of the elusive “pocket monsters”, players find them dotted throughout the real world. To be successful, players need to leave their homes and travel to where the Pokemon are physically located — which often happens to be a high-traffic area or other point of interest.
As a game, it’s hard to imagine Pokemon Go being a bigger success. At the peak of its popularity, throngs of players were literally causing traffic jams as they roamed the streets in search of invisible creatures. But what players may not have realized as they scanned the world around them through the game was that they were helping developer Niantic build something even more valuable.
The game has used augmented reality (AR) to bring the world of Pokemon to life since day one, but it wasn’t until the fall of 2020 that Niantic introduced AR Mapping. With this new feature, players could scan real-world locations and objects by walking around them while the software captured images from their smartphone’s camera. This was presented to the player as “Field Research”, and once completed, it would unlock various rewards in the game.
For those with a technical mindset, the implications of this are immediately obvious. Through the Research system, Niantic could direct Pokemon Go players anywhere they wished. Once the imagery from these Research scans was uploaded, it could be used to create detailed 3D models through the use of photogrammetry. The more players that perform Field Research on a particular location, the more accurate the results.
If Niantic wanted to create a 3D model of a statue in a park or the front of a building, they simply needed to assign it a Field Research task and the players would rush out to collect the data. Forget Google’s Street View — rather than sending a camera-laden car out once every year or so to grab new images, Niantic could sit back while millions of players uploaded high resolution pictures of the world around them in exchange for in-game trinkets that have no physical value.
In the tech world there’s a common saying: “If something is free, you’re the product.”
The idea being that if you’re using some service without paying for it, there’s an excellent chance that the company providing said service is somehow making money off of the situation. So for example when a user looks up a particular topic with a search engine, they can be presented with contextually appropriate advertisements. By selling this ad space to companies, the search engine provider generates a profit for each “free” search performed by its users. The personal relevancy offered by such bespoke advertisements can be more effective than traditional TV or print ads, which in turn means the search engine provider can charge a premium for them.
Just as in our hypothetical search engine example, Pokemon Go is offered up to players on Android and iOS free of charge. To date, it’s been downloaded by over a billion total users. To make the game financially viable, Niantic eventually needed to find a way to turn all those free downloads into a revenue stream.
The answer is Niantic Spatial. This spin-off company was announced in March of 2025, and offers a Visual Positioning System (VPS) created in part using the photogrammetry data collected by Pokemon Go. Through this service Niantic Spatial offers centimeter-scale positioning for millions of high-traffic locations all over the globe, even in areas where GPS may be inaccurate.
Earlier this week, Niantic Spatial announced they had entered into an agreement with Coco Robotics to provide VPS for their fleet of delivery robots. Images captured by the robot’s onboard cameras can be fed into the VPS to provide a more accurate position than is possible with GPS, even in the best of conditions. This is particularly important for a robot that not only needs to navigate an ever-changing urban landscape, but must arrive at a precise location to successfully complete its delivery.
At this point, you may be thinking to yourself that this all seems a bit shady. Can Niantic really take the data that was provided to them by Pokemon Go players and spin that off into a commercial venture that monetizes it? Of course they can, because that’s precisely what players agreed to when they installed the game.
Section 5.2 of the Niantic Terms of Service, titled “Rights Granted by You – AR Content”, states that the company retains wide-ranging rights over anything that users upload through the AR functions of their products.
In short, not only can Niantic do anything they want with player submitted data, but they can pass that freedom on to other entities as they see fit. So while Coco Robotics didn’t even exist when the AR Mapping feature was added to Pokemon Go, all of the imagery that players captured since that time — plus any images that they continue to capture — is fair game.
In the end, it’s unlikely that many players will lose any sleep over the fact that they have unwittingly been collecting training data to help robots more effectively deliver pizzas. But it’s also not hard to imagine a scenario in which that data ends up getting licensed out for some purpose they aren’t comfortable with.
If that happens, their options may be limited. A reading of Niantic’s Privacy Policy would seem to indicate that uploaded AR imagery is anonymized during processing, and as such doesn’t need to be treated the way personally identifiable information would be. Players can therefore opt out of uploading additional data going forward, but can’t remove what’s already been pushed into the system.
Regardless of whether or not this situation impacts you directly, it’s an important cautionary tale in an interconnected world where more and more of what users do online is tracked, filtered, processed, and sold off to the highest bidder. Perhaps something to keep in mind before clicking “I Agree.”
2026-03-12 19:00:06

While working on a project that involved super-thin prints, [Julius Curt] came up with selective ironing, a way to put designs on the top surface of a print without adding any height.
For those unfamiliar, ironing is a technique in filament-based 3D printing that uses the extruder to smooth out top surfaces after printing them. The hot nozzle makes additional passes across a top surface, extruding a tiny amount in the process, which smooths out imperfections and leaves a much cleaner surface. Selective ironing is nearly the same process, but applied only in a certain pattern instead of across an entire surface.

While conceptually simple, actually making it work was harder than expected. [Julius] settled on a combination of computer-aided design (CAD) work to define the pattern and a post-processing script. More specifically, one models the desired pattern onto the object in CAD as a one-layer-tall feature. The script then strips that layer back out while applying the modified ironing pattern in its place. In this way, the pattern is defined in CAD without actually adding any height to the printed object. You can see it in action in the video, embedded below.
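The actual script is in the GitHub repository linked below, but as a very rough illustration of what this kind of post-processing involves (the layer-marker detection, flow factor, and Z adjustment here are simplified assumptions, not [Julius]’s implementation), a sketch might look like this:

```python
import re

LAYER_HEIGHT = 0.2   # assumed height of the one-layer pattern feature
IRON_FLOW = 0.10     # assumed ironing flow, as a fraction of normal extrusion

with open("model.gcode") as f:
    lines = f.readlines()

# Assumes PrusaSlicer-style ";LAYER_CHANGE" markers and relative extrusion (M83);
# the final layer is taken to be the pattern modelled in CAD
last = max(i for i, line in enumerate(lines) if line.startswith(";LAYER_CHANGE"))
body, pattern = lines[:last], lines[last:]

ironed = []
for line in pattern:
    # Drop the pattern layer back down so the nozzle rides on the surface below it
    line = re.sub(r"Z(\d+\.?\d*)",
                  lambda m: f"Z{float(m.group(1)) - LAYER_HEIGHT:.3f}", line)
    # Scale extrusion way down so plastic is smeared into the surface, not stacked on it
    line = re.sub(r"E(-?\d+\.?\d*)",
                  lambda m: f"E{float(m.group(1)) * IRON_FLOW:.5f}", line)
    ironed.append(line)

with open("model_ironed.gcode", "w") as f:
    f.writelines(body + ironed)
```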
We’ve seen some interesting experiments in ironing 3D prints, including non-planar ironing and doing away with the ironing setting altogether by carefully tuning slicer settings so it is not needed. Selective Ironing is another creative angle, and we can imagine it being used to embed a logo or part number as easily as a pattern.
Selective Ironing is still experimental, but if you find yourself intrigued and would like to give it a try, head over to the GitHub repository where you’ll find the script as well as examples to try out.