2026-02-24 23:00:35

In science fiction, the use of gunpowder-based weapons is generally portrayed as something from a savage past, with technology having long since moved on to more civilized types of destructive weaponry, involving lasers, microwaves, and electromagnetism. Instead of messy detonating powder, energy weapons are used to near-instantly deposit significant amounts of energy into the target, and railguns enable the delivery of projectiles at many times the speed of sound using nothing but the raw power of electricity and some creative physics.
Of course, the reason that we don’t see sci-fi weapons deployed everywhere has arguably less to do with today’s levels of savagery in geopolitics and more with the fact that physical reality is a very harsh mistress, who strongly frowns upon such flights of fancy.
Similarly, the Lorentz force that underlies railguns is extremely simple and effective, but scaling it up to weapons-grade dimensions results in highly destructive forces that demolish the metal rails and other components of the railgun after only a few firings. Will we ever be able to fix these problems, or are railguns and similar sci-fi weapons forever beyond our grasp?

The simplest way to think about a railgun is as a linear motor. At its core it consists of two parallel conductors — the rails — with an armature that slides along these rails as it conducts current between them. This also makes it the equivalent of a homopolar motor, which was the first type of electric motor to be demonstrated.
In the photo on the right you can see a basic example of such a motor, with the neodymium magnet providing the magnetic field and the single wire the current that interacts with that field. Using the right-hand rule that was hammered into our heads during high-school physics classes, we can thus deduce that we get a net force.
With this hand-held demonstration the screw will rotate when current is passed through the wire. For stand-alone homopolar motors with the magnet on the battery’s negative terminal and a conductor loosely placed on the positive terminal while touching the magnet, the Lorentz force will cause the wire to rotate around the battery.

We can visualize this interaction between the current-carrying wire (I), the magnetic field (B) and the resulting force vector (F) in such a homopolar motor fairly easily, but how does this work with a railgun?

Rather than a permanent magnet or a complex electromagnet with many windings on each rail, a railgun uses a single current loop. This means that a massive current is pumped through one rail, which induces a sufficiently strong magnetic field.
The projectile, playing the role of the armature, sits inside the generated magnetic field B, with the current I coursing through the armature, resulting in a net force F that pushes it along the rails, reaching a velocity that scales with the strength of B.
Crudely put, the effective speed of a projectile launched by a railgun is thus determined by the applied current, so unlike its close cousin, the coilgun, there is no tricky timing requirement for energizing coils in sequence.
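To put some rough numbers on that relation: the accelerating force on the armature is commonly written as F = ½·L′·I², with L′ the inductance gradient of the rails. Here is a back-of-envelope sketch in Python; the inductance gradient, current, projectile mass, and rail length are all assumed textbook-ish values, not figures from any real design.

```python
# Back-of-envelope railgun kinematics, assuming a constant drive current.
# The accelerating force on the armature is F = 0.5 * L' * I^2, where L' is
# the inductance gradient of the rails (typically around 0.4-0.6 uH/m).
from math import sqrt

L_grad = 0.5e-6   # inductance gradient in H/m (assumed typical value)
current = 2e6     # drive current in A (assumed, weapons-scale)
mass = 3.0        # projectile mass in kg (assumed)
barrel = 10.0     # rail length in m (assumed)

force = 0.5 * L_grad * current**2          # Lorentz force on the armature, N
# With constant force, the work-energy theorem gives 0.5*m*v^2 = F * barrel:
muzzle_velocity = sqrt(2 * force * barrel / mass)

print(f"force: {force/1e3:.0f} kN, muzzle velocity: {muzzle_velocity:.0f} m/s")
```

With these assumed numbers the sketch lands at roughly 2.6 km/s, which is in the same ballpark as the 2-3 km/s muzzle velocities quoted for experimental naval railguns.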
This also provides some hints as to what major obstacles with railguns are, starting with the immense currents that have to be immediately available for a railgun shot of any significant size. If this is somehow engineered around using massive capacitor banks, then you run into the much more significant issues that have so far prevented railguns from being widely deployed.
Most of this comes down to wear and tear, because going fast comes with certain tradeoffs.

Theoretically you can just scale everything up: creating railguns with larger rails and larger armatures that can launch larger projectiles with increasingly faster speeds. This has been the impetus behind various railgun projects across the world, with notable examples being the railguns developed and tested by the US and Japan.
Railguns were invented all the way back in 1917 by French inventor André Louis Octave Fauchon-Villeplée, but the issue of massive electricity consumption kept further research at a fairly low level. Even the tantalizing prospect of a weapon system capable of firing at velocities of more than 2,000 m/s couldn’t bring a railgun into deployment during the time that Nazi Germany was working on its own version.
Ultimately it would take until the 1980s for railgun designs to become practical enough to start testing them for potential deployment at some point in the future, seeing a surge of R&D investment for it and other new weapon systems that could provide an edge during the Cold War and beyond.
Yet despite decades of research by the US military, no viable design has so far appeared, and research has wound down over the past years. Although both China and India are testing their own railgun designs, there are no signs at this point that they have avoided the same issues that caused the US to mostly cease research on this topic.
Only Japan’s railgun research seems to offer a viable design for deployment so far, but its focus is purely defensive: countering ballistic and hypersonic missiles in a close-in role. The caliber is also limited, with the current prototype by Japan’s Ministry of Defense ATLA agency measuring just 40 mm.
In a perfect world with zero friction and spherical cows, railguns would be very simple and straightforward, but as we live in messy reality we have to deal with the implications of sending immense amounts of currents through a railgun barrel. A good primer here can be found in a June 1983 report (archived) by O. Fitch and M. F. Rose at the Dahlgren Naval Surface Weapons Center in Virginia.

Much of this comes down to efficiency as you scale up a basic railgun design. The two main factors are basic ohmic resistance (ER) and system inductance (ES). These two factors limit the kinetic energy (EK) and set the losses (EL) of the system, with the losses being in the form of thermal and other energies.
Reducing these losses is one of the primary points of research, and factors like the rail design and alloys as well as the switching of the current pulses play a role in affecting final efficiency, and with it durability of the railgun’s ‘barrel’.
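As a toy illustration of that energy bookkeeping (the megajoule figures below are invented for illustration, not taken from the report):

```python
# Toy energy bookkeeping for one railgun shot, following the split the 1983
# report makes between kinetic energy (EK) and resistive (ER) / inductive
# (ES) losses. All numbers here are illustrative assumptions.

def shot_efficiency(e_kinetic, e_resistive, e_inductive):
    """Fraction of the input energy that ends up in the projectile."""
    e_total = e_kinetic + e_resistive + e_inductive
    return e_kinetic / e_total

# Example: 10 MJ into the projectile, 20 MJ lost to rail/armature resistance,
# 10 MJ left stored in the system inductance when the projectile exits.
eff = shot_efficiency(10e6, 20e6, 10e6)
print(f"efficiency: {eff:.0%}")  # 25%
```

Anything not in the kinetic-energy term has to go somewhere, and in practice much of it goes into heating and eroding the rails, which is exactly the durability problem discussed below.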
Naturally, that was all the way back in 1983, and a few decades of technical and materials science progress have occurred since then. Or so one might be led to believe, if it weren’t for current research papers striking a rather similar tone, such as a 2021 paper by Hong-bin Xie et al., published in Defence Technology.

This review article covers the common issues of rail gouging, grooving, arc ablation, and other problems, as well as the current rail materials in use today and their performance characteristics.
Many of these issues are somewhat related, as the moving armature rarely maintains a perfect contact with the rails. This results in arcing, localized heating, ablation, and grooving due to thermal softening. All of these effects result in a rapidly degrading rail surface, and higher currents result in more rapid degradation and even worse contact with subsequent shots.
Various rail metal alloys have been or are being tested, including Cu-Cr, Cu-Cr-Zr and Cu/Al2O3, replacing the pure copper rails of the past. None of these alloys can resist the pitting and other wear effects from repeated railgun firings, however. This has pivoted research towards various coatings that could limit wear instead, such as molybdenum (Mo) or tungsten (W).
Fields of research involve electroplating, cold spraying, supersonic plasma spraying and laser cladding, using a wide variety of coatings. The authors note however that these rail coatings have only begun to be investigated, with success anything but assured.

Quite recently railguns have surged to the forefront in the news cycle courtesy of certain ill-informed fantasies that also involve destroyers which identify as battleships. In these feverish battleship dreams, railguns would act as a kind of super-charged version of the 16″ main guns of the Iowa-class, the last active battleships in history.
Instead of 16″ shells that ponderously arc towards their decidedly doomed target, these railguns would instead send a projectile at a zippy 2-3 km/s towards a target. As tempting as this seems, the big issue, as we have seen, is repeatability. The Iowas originally had a barrel life of a few hundred shots before their liner had to be replaced, but this got bumped up to basically ‘infinite’ shots after some changes to their chemical propellant.
A single Mark 7 16″ naval gun fires twice per minute, and this is multiplied by nine if all three turrets are used. The range of projectiles launched included high-explosive, armor-penetrating, and even nuclear shell options, with a range of 39 km (21 nmi) at a leisurely ~800 m/s. To compete with this, a naval railgun would need to be able to keep up a similar firing rate, feature a similar barrel or at least acceptable barrel life, and have a longer range for a similar payload effect.
At this point railguns score pretty poorly on all these counts. Although the range of a railgun projectile falls between that of a missile and that of a Mark 7 naval gun’s projectile, barrel life is still poor, power usage remains very high, and the available projectiles at this point in time rely purely on their kinetic energy to cause harm, limiting their functionality.
Taking all of this into account, it would seem that the Japanese approach using railguns as a very responsive, close-in weapon is extremely sensible. By keeping the design as small-caliber as possible, reducing rail current, and not caring about range as long as you can hit that hypersonic anti-ship missile, they seem to be keeping rail erosion to a minimum.
Since the average missile tends to perform rather poorly after a 40 mm hole appears through it, courtesy of it briefly sharing the same physical space with a tungsten projectile, this might just be the defensive weapon niche that railguns can fill.
2026-02-24 20:00:02

It’s quite the understatement to say that at this point in time we don’t quite understand how even the tiniest brain works. Much of this is due to the sheer complexity and scale of these little biological marvels: with the human brain packing billions of neurons and their associated supportive scaffolding into a few kilograms of gooey pink-white mass, the sheer connectivity density is more than we can reasonably hope to measure in situ. Hence the attempts to recreate small sections of such brains in digital simulations, a process that’s making gradual progress.
Most recently we have been mapping the neurons and their connections in the brain of the humble fruitfly, D. melanogaster. Despite its brain being minuscule, with only about 140,000 neurons and 50 million connections, we’re not quite at the level where we can have a simulated fruitfly brain spark to life. This should probably give us some hints as to the sheer complexity of mapping the human brain, never mind simulating even a small part like a cubic millimeter of the temporal cortex, with about 57,000 cells and 150 million synapses.
Even once you have all the connectome data of such a bit of brain, it’s not like you can just toss it onto a supercomputer and expect a meaningful simulation. All supercomputers today are massively parallel, meaning thousands of networked computers that require the computing task to be split up and all communication between nodes restricted as much as possible to not starve nodes.

In the paper, these challenges are addressed and a model is suggested that should provide the best possible optimization for such a simulation. Both point-to-point and collective communication are investigated on the NVIDIA A100 GPU-equipped supercomputer.
Based on their findings they conclude that the entire 6 MW-rated Leonardo Booster supercomputer with its 3,456 nodes could simulate a model with about 3.5 × 10^13 connections, roughly 10% of that of the human cortex, assuming random connectivity. A more realistic model would feature more directed mapping that could be more efficient.
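A quick back-of-envelope sketch shows the scale involved; note that the bytes-per-synapse figure here is our own assumption for illustration, not a number from the paper:

```python
# Rough memory budget for a model with ~3.5e13 connections, assuming
# (our assumption) about 24 bytes per synapse to hold a weight, a delay,
# and a target-neuron index.
connections = 3.5e13
bytes_per_synapse = 24           # assumed, not from the paper

total = connections * bytes_per_synapse
per_node = total / 3456          # spread over Leonardo Booster's node count

print(f"synapse storage alone: {total/1e12:.0f} TB")
print(f"per node: {per_node/1e9:.0f} GB")
```

Even before any neuron state or communication buffers, that works out to hundreds of gigabytes per node, which is why the connectivity has to be carefully partitioned and inter-node traffic kept to a minimum.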
Regardless of this, their conclusion that an optimal design would be a hybrid, with both point-to-point communication for local spikes and collective communication for long-range communication, seems valid. For now it would seem that simulating an entire human brain is still far beyond the realm of possibility, but we might actually have a shot at simulating the fruitfly brain on a modern supercomputer in the near future.
2026-02-24 17:00:18

If you’ve been to a wedding or a downtown coffee shop in the last 10 years, you’ve probably seen those little lightboxes that are so popular these days. They consist of letters placed on a plastic frame in front of a dim white light, and they became twee about five minutes after your hipster friend first got one. However, they can also make a neat basis for an LED display, as [Folkert van Heusden] demonstrates.
The build is straightforward enough, using daisy chains of 32×8 LED matrix modules, two each for the three rows of the lightbox. This provides for a 24 character textual display, or a total display resolution of 64 x 24 pixels. An ESP8266 is used to command the matrixes, which are run by MAX7219 display controllers. Thanks to the microcontroller’s onboard wireless hardware, the display can be addressed in a number of ways, such as using the LedFX DDP protocol or [Folkert’s] Pixel Yeeter python library. Files are on GitHub for the curious.
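The resolution arithmetic from the build works out neatly; a trivial sketch:

```python
# The lightbox geometry from the article: three rows, each driven by two
# daisy-chained 32x8 MAX7219 matrix modules.
rows = 3
modules_per_row = 2
module_w, module_h = 32, 8

width = modules_per_row * module_w     # 64 px across
height = rows * module_h               # 24 px tall
chars = (width // 8) * rows            # with an 8x8 font: 24 characters
framebuffer = width * height // 8      # 1 bit per pixel: 192 bytes

print(width, height, chars, framebuffer)  # → 64 24 24 192
```

A 192-byte framebuffer is comfortably small, which is part of why an ESP8266 has no trouble driving the whole display over its network interface.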
Quite a few of these exist out in the wild — [Folkert] has built a variety of modded lightboxes over the years with varying internals. The benefit of the lightbox is that it effectively acts as a handy housing for LED matrixes and supporting electronics, while also providing a neat diffuser effect. The lightboxes are also readily wall mountable and generally look more like an intentional piece of signage than most things we might homebrew in the lab.
We’ve featured similar-looking builds before, like this public transit display that was hacked for custom use. If you’re building your own public information boards or other nifty LED displays, don’t hesitate to notify the tipsline!
2026-02-24 14:00:50

The demoscene is still alive and well, and the proof is in this truly awe-inspiring game demo by [daivuk]: a Quake-like “boomer shooter” squeezed into a Windows executable of only 64 kB he calls “QUOD”. We’ve included the full explanation video below, but before you check out all the technical details, consider playing the game. It’ll make his explanations even more impressive.
OK, what’s so impressive? Well, aside from the fact that this is a playable 3D shooter in 64kB, with multiple enemies, multiple levels, oodles of textures, running, jumping et cetera–it’s so Quake-like he’s using TrenchBroom to make the levels. Of course he’s reprocessing them into a more space-efficient, optimized format. Yeah, unlike the famous .kkrieger and a lot of other demos in the 64kB space, this isn’t all procedurally generated. [daivuk] did make his own image editing program for procedurally generated textures, though. Which makes sense: as a PNG, the QUOD logo is probably half the size of the (compressed) executable.
The low-poly models are created in Blender, and all are made symmetric, since having the engine mirror the meshes saves 50% of the vertex data. Blender just exports half of a low-poly mesh; and just as he wrote his own image editor, he has his own bespoke model tool. This allows tiling model elements, as well as handling bones and poses to keyframe the model’s animation.
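The mirroring trick itself is straightforward to sketch; the vertex data below is invented for illustration, and we assume the mesh is symmetric across the x = 0 plane:

```python
# Sketch of the mesh-mirroring trick: store only the vertices with x >= 0
# and rebuild the full mesh at load time by reflecting them across x = 0.
# The vertex format and data here are made up for illustration.

def unmirror(half_vertices):
    """Rebuild a symmetric mesh from its stored half."""
    full = list(half_vertices)
    for x, y, z in half_vertices:
        if x > 0:                      # x == 0 lies on the mirror plane
            full.append((-x, y, z))
    return full

half = [(0.0, 1.0, 0.0), (0.5, 0.0, 0.2), (1.0, 0.0, -0.1)]
full = unmirror(half)
print(full)
```

Vertices on the mirror plane are kept once, so the stored data is just over half the size of the full mesh, which is where the quoted ~50% saving comes from.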
Audio is treated similarly to textures and meshes: built up at runtime from stored data and a layered series of effects. When you realize all the sounds were put together in his sound tool from square and sine waves, it makes it very impressive. He’s also got an old-style tracker to create the music. All of these tools output byte arrays that get embedded directly in the game code.
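A minimal sketch of that kind of runtime synthesis, layering a square wave under a sine and packing the result into a byte array; the sample rate, frequencies, and mix levels are invented for illustration:

```python
# Minimal runtime sound synthesis from sine and square waves, roughly in the
# spirit described; all parameters here are invented for illustration.
from math import sin, pi

RATE = 8000  # samples per second (assumed)

def tone(freq, seconds, square=False):
    """Generate one oscillator's samples in the range [-1.0, 1.0]."""
    n = int(RATE * seconds)
    if square:
        return [1.0 if sin(2*pi*freq*i/RATE) >= 0 else -1.0 for i in range(n)]
    return [sin(2*pi*freq*i/RATE) for i in range(n)]

# Layer a 220 Hz square under a 440 Hz sine, then pack to signed bytes.
mix = [0.5*a + 0.5*b for a, b in zip(tone(440, 0.1), tone(220, 0.1, square=True))]
data = bytes(int(127*s) & 0xFF for s in mix)
print(len(data))  # 800 samples
```

The end result is exactly the kind of byte array that can be embedded directly in the game code, as the article notes all of his tools do.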
The video also gets into some of his optimization techniques; we like his use of a map file, analyzed with a Python tool to find the exact size of game elements and test his optimizations. One thing he notes is that his optimizations are all for space, not for speed. Except, perhaps, for one thing: [daivuk] created a new language and virtual machine for the game, which seems downright extravagant. It actually makes sense, though, as the virtual machine can be optimized for the limits of the game, as he explains starting at about 20 minutes into the video. Apparently it saved a whole 2 kB, which seems like nothing these days but actually let [daivuk] fit an extra level into his 64 kB limit. Sure, it’s still bigger than Quake13k (and how did we never cover that?), but you get a lot more game, too.
So, to recap: [daivuk] didn’t just make a game with an impressively tiny size on disk, he made the entire toolchain, and a language for it to boot. If you think this is overoptimized, check out Wolfenstein in 600 lines of AWK. Of course, in spite of the 1980s file size, this needs modern hardware to run. You can get surprising graphics performance from a fraction of that, like this ATtiny sprite engine.
Thanks to [Keith Olson] for the tip, which probably took up more than 64kB on our tips line.
2026-02-24 11:00:33

The Industrial Revolution was powered by steam, with boilers being a crucial part of each steam engine, yet also one of the most dangerous elements due to the high pressures involved. The five Lancashire boilers at the Claymills Pumping Station are relatively benign in this regard, as they operate at a mere 80 PSI, unlike e.g. high-pressure steam locomotives that can push 200 – 300 PSI. This doesn’t mean that refurbishing one of these boilers is an easy task, or that it doesn’t involve plugging a lot of leaks, as the volunteers at this pumping station found out.
At this Victorian-era pumping station there are a total of five of these twin-flue Lancashire boilers, all about 90 years old following a 1930s-era replacement, and all gradually being brought back into service. The subject of the video is boiler 1, which was last used in 1971, before the pumping station was decommissioned. Boilers 2 and 3 were known to be in pretty bad condition, and a replacement was needed for boiler 5, as it was about to go down for maintenance soon.
Although the basic idea behind a Lancashire boiler is still to boil water to create steam, it’s engineered to do this as efficiently as possible to save fuel. This is why it has two flues where the burning coal deposits its thermal energy, which then goes on to heat the surrounding water. The resulting pressure from the steam also means that there are a lot of safeties to ensure that things do not get too spicy.
Thus after removing lots of scale, grime and general detritus from decades of neglect, these safeties were all inspected and repaired or replaced as needed. Following this it was time for the hydraulic pressure test, which simulates the pressure from steam, but without all the fuss and lethal dangers. This took a few tries, with a number of leaks, issues with old piping and some ominous creaking along the way, but eventually the boiler hit the 1.5x safety margin of 120 PSI and stayed there for the required thirty minutes without further issues.
This clears boiler 1 for its official inspection by a boiler inspector, who will sign off on it being taken back into use, allowing this boiler to resume what it was doing up till that day in 1971 when the pumping station was decommissioned.
2026-02-24 08:00:08

If you want to reverse engineer a PC board, you could do worse than X-ray it. But thanks to [Philip Giacalone], you could just take a photo, load it into PCB Tracer, and annotate the images. You can see a few of a series of videos about the system below.
The tracer runs in your browser. It can let you mark traces, vias, components, and pads. You can annotate everything as you document it, and it can even call an AI model to help generate a schematic from the net list.
This is one of those things that you could do without. Any photo editor could do the same thing. But having the tool aware of what the photo is showing makes life easier. The built-in features are free, but if you use the AI tool, he says it will cost you about a half-dollar per schematic (paid to the AI company).
Even if you don’t think you need to reverse-engineer anything, you may still find this useful if you are trying to understand a board for repair. We’ve had a good Supercon/Remoticon talk about PCB reverse engineering you can watch. If you want to see what a real X-ray of a board looks like, here you go.