Hackaday

Hackaday serves up Fresh Hacks Every Day from around the Internet.

Storing Image Data as Analog Audio

2026-02-13 05:00:47

Ham radio operators may be familiar with slow-scan television (SSTV) where an image is sent out over the airwaves to be received, decoded, and displayed on a computer monitor by other radio operators. It’s a niche mode that isn’t as popular as modern digital modes like FT8, but it still has its proponents. SSTV isn’t only confined to the radio, though. [BLANCHARD Jordan] used this encoding method to store digital images on a cassette tape in a custom-built tape deck for future playback and viewing.

The self-contained device first uses an ESP32 and its associated camera module to take a picture, with a screen that shows the current view of the camera as the picture is being taken. In this way it’s fairly similar to any semi-modern digital camera. From there, though, it starts to diverge from a typical digital camera. The digital image is converted first to analog and then stored as audio on a standard cassette tape, which is included in the module in lieu of something like an SD card.
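The general principle resembles classic SSTV encoding, where pixel brightness maps to an audio frequency, roughly 1500 Hz for black up to 2300 Hz for white, with a 1200 Hz sync pulse marking each scanline. As a rough, hypothetical sketch of that idea (not the project’s actual firmware, and with arbitrary pixel timing), something like the following Python turns an image into a playable WAV file:

import numpy as np
import wave

SAMPLE_RATE = 44100             # audio sample rate in Hz
F_BLACK, F_WHITE = 1500, 2300   # classic SSTV luminance frequency range
F_SYNC = 1200                   # line sync frequency
PIXEL_TIME = 0.0005             # seconds per pixel (arbitrary for this sketch)
SYNC_TIME = 0.005               # seconds per line sync pulse

def tone(freq, duration, phase=0.0):
    """Generate a sine tone, returning samples plus the ending phase so
    consecutive tones stay phase-continuous and don't click."""
    n = int(SAMPLE_RATE * duration)
    t = np.arange(n) / SAMPLE_RATE
    samples = np.sin(phase + 2 * np.pi * freq * t)
    return samples, phase + 2 * np.pi * freq * n / SAMPLE_RATE

def encode_image(pixels):
    """pixels: 2D array of brightness values in 0..255."""
    out, phase = [], 0.0
    for row in pixels:
        sync, phase = tone(F_SYNC, SYNC_TIME, phase)
        out.append(sync)
        for p in row:
            freq = F_BLACK + (F_WHITE - F_BLACK) * (p / 255.0)
            s, phase = tone(freq, PIXEL_TIME, phase)
            out.append(s)
    return np.concatenate(out)

audio = encode_image(np.random.randint(0, 256, size=(64, 96)))
with wave.open("image.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)            # 16-bit samples
    w.setframerate(SAMPLE_RATE)
    w.writeframes((audio * 32767).astype(np.int16).tobytes())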

To view the saved images, the tape is played back and the audio signal captured by an RP2040. It employs a number of methods to ensure that the reconstructed image is faithful to the original, but the final image displays the classic SSTV look that these images tend to have as a result of the analog media. As a bonus feature, the camera can use a serial connection to another computer to offload this final processing step.
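Going the other way is essentially frequency estimation: track the recording’s instantaneous frequency, average it over each pixel-length window, and map the result back to a brightness. The snippet below is a bare-bones sketch under the same assumptions as the encoder above; a real decoder also has to hunt for sync pulses and tolerate tape wow and flutter, none of which is handled here.

import numpy as np
from scipy.signal import hilbert

def decode_line(samples, n_pixels, sample_rate=44100,
                f_black=1500, f_white=2300):
    """Recover one scanline's pixel brightnesses from its audio samples."""
    # Instantaneous frequency from the analytic signal; this works even for
    # the very short per-pixel windows a plain FFT peak would struggle with.
    phase = np.unwrap(np.angle(hilbert(samples)))
    inst_freq = np.diff(phase) * sample_rate / (2 * np.pi)

    window = len(inst_freq) // n_pixels
    pixels = []
    for i in range(n_pixels):
        freq = np.median(inst_freq[i * window:(i + 1) * window])
        level = (freq - f_black) / (f_white - f_black)   # 0.0 (black) .. 1.0 (white)
        pixels.append(int(np.clip(level, 0.0, 1.0) * 255))
    return pixels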

We’ve been seeing a number of digital-to-analog projects lately, and whether that’s as a result of nostalgia for the 80s and 90s, as pushback against an increasingly invasive digital world, or simply an ongoing trend in the maker space, we’re here for it. Some of our favorites are this tape deck that streams from a Bluetooth source, applying that classic cassette sound, and this musical instrument which uses a cassette tape to generate all of its sounds.

Exploring Homebrew on the Pokémon Mini

2026-02-13 03:30:57

Originally only sold at the Pokémon Center New York in late 2001 for (inflation adjusted) $80, the Pokémon Mini would go on to see a release in Japan and Europe, but never had more than ten games produced for it. Rather than Game Boy-like titles, these were distinct mini games that came on similarly diminutive cartridges. These days it’s barely remembered, but it can readily be used for homebrew titles, as [Inkbox] demonstrates in a recent video.

Inside the device is an Epson-manufactured 16-bit S1C88 processor that runs at 4 MHz and handles basically everything, including video output to the monochrome 96×64 pixel display. System RAM is 4 kB of SRAM, which is enough for the basic games that it was designed for.

The little handheld system offered up some capabilities that even the full-sized Game Boy couldn’t match, such as a basic motion sensor in the form of a reed relay. There’s also 2 MB of ROM space directly addressable without banking.

Programming the device is quite straightforward, not only because of the very accessible ISA, but also because of the readily available documentation and toolchain. This enables development in C, but in the video, assembly is used for the added challenge.

Making the screen tiles can be done in an online editor that [Inkbox] also made, and the game can be tested in an emulator before creating a custom cartridge that uses an RP2040-based board to play it on real hardware. Although it’s a fairly obscure gaming handheld, the Pokémon Mini seems like a delightful little system to tinker with and make more games for.
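As a rough idea of what such a tile-conversion step involves, the hypothetical Python sketch below slices a 1-bit image into 8×8 tiles and packs each into bytes. It uses a simple row-major, MSB-first packing purely for illustration and does not reproduce the Pokémon Mini’s actual tile byte layout.

from PIL import Image   # assumes Pillow is available

TILE = 8   # tile dimension in pixels

def pack_tile(img, x0, y0):
    """Pack one 8x8 tile into 8 bytes: one byte per row, MSB = leftmost pixel."""
    data = bytearray()
    for y in range(TILE):
        byte = 0
        for x in range(TILE):
            if img.getpixel((x0 + x, y0 + y)):   # nonzero pixel = set bit
                byte |= 0x80 >> x
        data.append(byte)
    return bytes(data)

def image_to_tiles(path):
    """Convert a monochrome image into a list of packed 8x8 tiles."""
    img = Image.open(path).convert("1")   # force 1-bit monochrome
    width, height = img.size
    return [pack_tile(img, tx, ty)
            for ty in range(0, height, TILE)
            for tx in range(0, width, TILE)]

# e.g. a full 96x64 screen becomes 12 x 8 = 96 tiles:
# tiles = image_to_tiles("title_96x64.png")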

The Death of Baseload and the End of Similar Grid Clichés

2026-02-13 02:00:02

Anyone who has spent any amount of time around people who are really interested in energy policy will have heard proclamations that ‘baseload is dead’, along with the sorting of energy sources by parameters like their levelized cost of energy (LCoE) and merit order. Another thing one may have noticed is that this is also an area where debates and arguments can get pretty heated.

The confusing thing is that depending on where you look, you will find wildly different claims. This raises many questions, not only about where the actual truth lies, but also about the fundamentals. Within a statement like ‘baseload is dead’ lie a lot of unanswered questions, such as what baseload actually is, and why it has to die.

Upon exploring these topics we quickly drown in terms like ‘load-following’ and ‘dispatchable power’, all of which are part of a healthy grid, but which to the average person sound as logical and easy to follow as a discussion on stock trading, with a similar level of mysticism. Let’s fix that.

Loading The Bases

Baseload is the lowest continuously expected demand, which sets the minimum required amount of power generating capacity that needs to be always online and powering the grid. Hence the ‘base’ part, and thus clearly not something that can be ‘dead’, since this base demand is still there.
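As a toy illustration of this definition, the sketch below takes an invented hourly demand series and treats its minimum as the baseload; the numbers are made up and only serve to show the relationship between baseload, peak, and the flexible band in between.

# Hourly demand in MW over one day (invented numbers)
demand_mw = [22000, 21500, 21000, 21200, 22800, 26000,
             31000, 34500, 36000, 35500, 34000, 33000,
             32500, 32000, 33500, 36500, 39000, 41000,
             40000, 37500, 33000, 29000, 25500, 23000]

baseload = min(demand_mw)        # the always-present demand
peak = max(demand_mw)            # the highest demand of the day
flexible_band = peak - baseload  # what load-following and peaker plants cover

print(f"baseload: {baseload} MW, peak: {peak} MW, flexible band: {flexible_band} MW")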

The claim that ‘baseload is dead’ comes from the idea that with the new types of generation we are adding today, we no longer need special baseload generators. After all, if the entire grid and the connected generators can respond dynamically to any demand change, then you do not need to keep special baseload plants around, as they have become obsolete.

Example electrical demand “Duck Curve” using historical data from California. (Credit: ArnoldRheinhold)

A baseload plant is what we traditionally call a power plant designed to run at or close to 100% output for as long as it can, usually between refueling and/or maintenance cycles. These are generally thermal plants, powered by coal or nuclear fuel, as this makes the most economical use of their generating capacity and thus makes for the cheapest form of dispatchable power on the grid.

With only dispatchable generators on the grid, this was all very predictable, with any peaks handled by dedicated load-following and peaking power plants. This all changed when large-scale solar and wind generators were introduced, and with them the duck curve was born.

As both the sun and wind are generally more prevalent during the day, and these generators are not usually curtailed, everything else, from thermal power plants to hydroelectric plants, suddenly has to throttle back. Doing so ruins the economics of these dispatchable power sources, which is a big part of why the distorted claim that ‘baseload is dead’ gets made.
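The duck curve itself is nothing more exotic than demand minus VRE output, the so-called net load that the remaining dispatchable fleet has to follow. A toy calculation with invented round numbers shows where the midday belly and the steep evening ramp come from:

# Hourly demand and solar output in GW (invented round numbers)
demand = [22, 21, 21, 22, 24, 27, 31, 34, 35, 34, 33, 33,
          32, 32, 33, 35, 38, 41, 40, 37, 33, 29, 26, 23]
solar  = [ 0,  0,  0,  0,  0,  1,  3,  6,  9, 11, 12, 13,
          13, 12, 11,  9,  6,  3,  1,  0,  0,  0,  0,  0]

# Net load is what the dispatchable fleet actually has to supply
net_load = [d - s for d, s in zip(demand, solar)]

steepest_ramp = max(net_load[h + 1] - net_load[h] for h in range(23))
print("midday minimum net load:", min(net_load), "GW")
print("steepest hourly upward ramp:", steepest_ramp, "GW/h")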

Chaos Management

The Fengning pumped storage power station in north China’s Hebei Province. (Credit: CFP)

Suffice it to say that having the entire grid adapt to PV solar and wind farms – whose output can and will fluctuate strongly over the course of the day – is not an incredibly great plan if the goal is to keep grid costs low. Not only can these forms of variable renewable energy (VRE) only be curtailed, and not ramped up, they also add thousands of kilometers of transmission lines and substations to the grid due to the often remote areas where they are installed, adding to the headache of grid management.

Although curtailing VRE has become increasingly common, this inability to be dispatched is a threat to the stability of the national grids of countries that have focused primarily on VRE build-out, not only due to general variability in output, but also because of “anticyclonic gloom”: times when poor solar conditions are accompanied by a lack of wind for days on end, also called ‘Dunkelflaute’ if you prefer a more German flair.

What we realistically need are generators that are dispatchable – i.e. are available on demand – and can follow the demand – i.e. the load – as quickly as possible, ideally in the same generator. Basically the grid controller has to always have more capacity that can be put online within N seconds/minutes, and have spare online capacity that can ramp up to deal with any rapid spikes.

Although a lot is being made of grid-level storage that can soak up excess VRE power and release it during periods of high demand, there is no economical form of such storage that can also scale sufficiently. Thus countries like Germany end up paying surrounding countries to accept their excess power, even if they could technically turn all of their valleys into pumped hydro installations for energy storage.

This makes it incredibly hard to integrate VRE into an electrical grid without simply hard-curtailing it whenever it cuts into online dispatchable capacity.

Following Dispatch

Essential to the health of a grid is the ability to respond to changes in demand. This is where we find the concept of load-following, which also includes dispatchable capacity. At its core this means a power generator that – when pinged by the grid controller (transmission system operator, or TSO) – is able to spin its power output up or down. For each generator the response time and adjustment curve are known to the TSO, so that these factors can be taken into account.
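In code form, the essence of a known ‘adjustment curve’ is just a ramp-rate limit: the TSO can only count on a unit moving toward a new setpoint as fast as its ramp rate allows. The ramp rates below are illustrative placeholders, not data for any specific plant.

def follow_load(current_mw, target_mw, ramp_mw_per_min, minutes):
    """Output actually reachable after `minutes`, limited by the unit's ramp rate."""
    max_change = ramp_mw_per_min * minutes
    change = max(-max_change, min(max_change, target_mw - current_mw))
    return current_mw + change

# An agile gas turbine vs. a sluggish thermal unit, asked to go from 200 to 400 MW:
print(follow_load(200, 400, ramp_mw_per_min=20, minutes=5))  # reaches 300 MW
print(follow_load(200, 400, ramp_mw_per_min=3, minutes=5))   # only reaches 215 MW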

European-wide grid oscillations prior to the Iberian peninsula blackout. (Credit: Linnert et al., FAU, 2025)

The failure of generators to respond as expected, or their suddenly dropping output, can have disastrous effects, particularly on the frequency and thus the voltage of the grid. During the 2025 Iberian peninsula blackout, for example, oscillations driven by PV solar farms destabilized the grid until a substation tripped, presumably due to low voltage, and a cascade failure subsequently rippled through the grid. A big reason for this is the inability of current VRE generators to generate or absorb reactive power, an issue that could be fixed with so-called grid-forming converters, but at significant extra cost to the VRE generator owners, as this would add local energy storage requirements such as batteries.

Typically generators are divided into types that prefer to run at full output (baseload), can efficiently adjust their output (load follow) or are only meant for times when demand outstrips the currently available supply (peaker). Whether a generator is suitable for any such task largely depends on the design and usage.

This is where, for example, a nuclear plant is a better fit than a coal plant or gas turbine, as having either of the latter idling burns a lot of fuel with nothing to show for it. Running at full output is efficient for a coal plant, but rather expensive for a gas turbine, which makes gas turbines mostly suitable for load-following and peaker duty, as they can ramp up fairly quickly.

A nuclear plant, on the other hand, can be designed in a number of ways, either optimized for full output or capable of load-following, as is the case in nuclear-heavy countries like France, whose pressurized water reactors (PWRs) use so-called ‘grey control rods’ to finely tune the reactor output and thus provide very rapid and precise load-following capability.

Overview of the thermal energy transfer in the Natrium reactor design. (Source: TerraPower)

There’s now also a new category of nuclear plant designs that decouple the reactor from the steam turbine by using intermediate thermal storage. The TerraPower Natrium design – currently under construction – uses liquid sodium to cool the reactor and molten salt in the secondary (non-nuclear) loop as a thermal store, allowing this thermal energy to be used on demand instead of directly feeding a steam turbine.

This kind of design theoretically allows for very rapid load-following, while giving the connected reactor all the time in the world to ramp its output up or down, or even power down for a refueling cycle, limited only by how fast the thermal energy can be converted into electrical power, or used for e.g. district heating or industrial process heat.
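A toy energy-balance sketch makes the decoupling concrete: the reactor feeds the salt store at a steady thermal rate while the turbine draws from it at whatever rate the grid needs, as long as the store neither runs dry nor overflows. The figures below are invented and are not TerraPower’s numbers.

STORE_MAX_MWH_TH = 5000   # thermal storage capacity in MWh_th (made up)
REACTOR_MW_TH = 840       # steady reactor thermal output in MW_th (made up)

def step_hour(store_mwh, turbine_draw_mw_th):
    """Advance the salt store by one hour and return the new fill level."""
    store_mwh += REACTOR_MW_TH - turbine_draw_mw_th
    return max(0, min(STORE_MAX_MWH_TH, store_mwh))

store = 2500
for hour, draw in enumerate([500, 500, 700, 1200, 1400, 900]):
    store = step_hour(store, draw)
    print(f"hour {hour}: turbine draw {draw} MW_th, store at {store} MWh_th")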

Although grid-level storage in the form of pumped hydro is very efficient for buffering power, it cannot be used in many locations, and alternatives like batteries are too expensive to be used for anything more than smoothing out rapid surges in demand. All of which reinforces the case for much cheaper and more versatile dispatchable power generators.

Grid Integration

No power generator on the grid can be treated as a stand-alone unit, as each kind of generator comes with its own implications for the grid. This fact is conveniently ignored when the so-called Levelized Cost of Energy (LCoE) metric is used to call VRE the ‘cheapest’ of all types of generators. Although it is true that VRE has no fuel costs and relatively low maintenance costs, the problem is that most of its costs are not captured in the LCoE metric.

What LCoE doesn’t capture is whether a generator is dispatchable or not, as a dispatchable generator will be needed whenever a non-dispatchable one cannot produce due to clouds, night, heavy snow cover, no wind, or overly strong wind. Also not captured in LCoE are the additional costs incurred from having the generator connected to the grid, from having to run and maintain transmission lines to remote locations, to the cost of compensating for grid frequency oscillations and the like.
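For reference, the standard LCoE calculation is simply discounted lifetime costs divided by discounted lifetime generation, which is exactly why none of these system-level factors show up in it. A short sketch with placeholder inputs:

def lcoe(capex, annual_opex, annual_fuel, annual_mwh, years, discount_rate):
    """Levelized cost of energy: discounted costs / discounted generation, per MWh."""
    costs = capex            # year-0 investment
    energy = 0.0
    for t in range(1, years + 1):
        discount = (1 + discount_rate) ** t
        costs += (annual_opex + annual_fuel) / discount
        energy += annual_mwh / discount
    return costs / energy

# Hypothetical plant: $2B capex, $50M/yr O&M, $30M/yr fuel,
# 7,000 GWh/yr over 40 years at a 7% discount rate
print(round(lcoe(2e9, 50e6, 30e6, 7_000_000, 40, 0.07), 2), "$/MWh")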

Levelized cost of operation of various technologies. (Credit: IEA, 2020)

Ultimately these can be summarized as ‘system integration costs’, and they are significantly tougher to nail down firmly, as well as highly variable depending on the grid, the power mix, and other factors. Correspondingly, the cost of electricity from various sources is hotly debated, but the consensus is to use either the Levelized Avoided Cost of Energy (LACE) or the Value-Adjusted LCoE (VALCoE), both of which do take these external factors into account.

Energy value by technology relative to average wholesale electricity price in the European Union in the Stated Policies Scenario. (Credit: IEA, 2020)

As addressed in the linked IEA article on VALCoE, an implication of this is that the value of VRE drops as its share of the grid increases. This can be seen in the graph above, based on 2020-era EU energy policies; the graphs for the US and China differ somewhat, but China’s also shows the strong drop in the value of PV solar, while wind power is likewise less affected.

A Heated Subject

It is unfortunate that energy policy has become a subject of heated political and ideological furore, as it should really be just as boring as any other administrative task. Although the power industry has largely tried to stay objective in this matter, it is unfortunately subject to the influence of both politicians and investors. This has led to pretty amazing and breakneck shifts in energy policy in recent years, such as Belgium deciding to phase out nuclear power and replace it with multiple gas plants, only to later decide not to phase out its existing nuclear plants after all, and even to look at building new ones.

Similarly, the US has seen, and continues to see, heated debates on energy policy that occasionally touch upon objective truth. Unfortunately for all involved, power grids do not care about personal opinions or preferences, and picking the wrong energy policy will inevitably lead to consequences that can cost lives.

In that sense, it is very harmful that cornerstones of a healthy grid such as baseload, reactive power handling, and load-following are being chipped away at by limited metrics such as LCoE and by strong opinions on certain types of power technologies. If we cared more about a stable grid than about ‘being right’, then all VRE generators would, for example, be required to use grid-forming converters, and TSOs could finally breathe a sigh of relief.

Bash via a Transpiler

2026-02-13 00:30:22

It is no secret that we often use and abuse bash to write things that ought to be in a different language. But bash does have its attractions. In the modern world, it is practically everywhere. It can also be very expressive, but perhaps hard to read.

We’ve talked about Amber before, a language that is made to be easier to read and write, but transpiles to bash so it can run anywhere. The FOSDEM 2026 conference featured a paper by [Daniele Scasciafratte] that shows how to best use Amber. If you prefer slides to a video, you can read a copy of the presentation.

For an example, here’s a typical Amber script. It compiles fully to a somewhat longer bash script:


import * from "std/env"

fun example(value: Num = 1) {
    if 1 > 0 {
        let numbers = [value, value]
        let sum = 0
        loop i in numbers {
            sum += numbers[i]
        }
        echo "it's " + "me"
        return sum
    }
    fail 1
}

echo example(1) failed {
    echo "What???"
    is_command("echo")
}

The slides have even more examples. The language seems somewhat Python-like, and you can easily figure out most of it from reading the examples. While bash is nearly universal, the programs a script might use may not be. If you have it installed, the Amber code will employ bshchk to check dependencies before execution.

According to the slides, zsh support is on the way, too. Overall, it looks like it would be a great tool if you have to deploy with bash or even if you just want an easier way to script.

We’ve looked at Amber before. Besides, there are a ton of crazy things you can do with bash.

Practice Makes Perfect: The Wet Dress Rehearsal

2026-02-12 23:00:24

If you’ve been even casually following NASA’s return to the Moon, you’re likely aware of the recent Wet Dress Rehearsal (WDR) for the Artemis II mission. You probably also heard that things didn’t go quite to plan: although the test was ultimately completed and the towering Space Launch System (SLS) rocket was fully loaded with propellant, a persistent liquid hydrogen leak and a few other incidental issues led the space agency to delay further testing for at least a month while engineers make adjustments to the vehicle.

This constitutes a minor disappointment for fans of spaceflight, but when you’re strapping four astronauts onto more than five million pounds of propellants, there’s no such thing as being too cautious. In fact, there’s a school of thought that says if a WDR doesn’t shake loose some gremlins, you probably weren’t trying hard enough. Simulations and estimates only get you so far; the real thing is always more complex, and there’s bound to be something you didn’t account for ahead of time.

Do Not Pass Go

So what exactly is a Wet Dress Rehearsal? In the most basic of terms, it’s a practice launch where everyone involved does everything exactly the way they would on a real launch, except when the countdown hits zero, nothing actually happens.

It’s the final test of the vehicle and the ground support systems, the last check of fit and function before launch. But there’s also a logistical element. In other words, it’s not just a test of whether or not the vehicle can be fully fueled, it’s also a verification of how long that process takes. Many of the operations performed in the WDR would have already been tested in isolation, but this may be the first, and only, time to practice running them concurrently with all of the other elements of the countdown.

A real-time graphic displayed propellant load status during the Wet Dress Rehearsal live stream.

There’s also the human element. Hundreds of individuals have a part to play as the clock ticks down to zero, from the team in mission control to the driver of the astronaut transport vehicle. This is where the Wet Dress Rehearsal truly earns its name. In a sense, launching a rocket is a bit like a theater production: every player needs to not only have their individual role memorized, but also to work together effectively with the larger ensemble on the big night.

Although a WDR is meant to simulate an actual launch as closely as possible, the rules are slightly different. If the rocket was actually going to be released there are other variables to contend with, such as the launch window, which is the period of time in which the rocket can actually leave the pad to reach its intended orbit. On a real launch, a delay significant enough to keep the vehicle from lifting off during its pre-determined launch window would generally result in an automatic abort. There is no such constraint for a rehearsal however, which gives teams more flexibility to conduct tests and repair work.

It should be noted that the Artemis II astronauts were not aboard the vehicle for the recent WDR, although ground teams did simulate the process of loading the crew into the Orion capsule. This is partly for the safety of the astronauts should something go wrong during the rehearsal, but is also due to the fact that the Moon-bound crew is kept in quarantine until the actual launch day to reduce the likelihood they will get sick during the mission.

Light the Fires

As mentioned above, for the purposes of the Wet Dress Rehearsal, nothing actually happens when the launch clock hits zero. It’s a test of the pre-launch activities, so actually starting up the engines isn’t part of the exercise.

But of course, testing the engines is an important aspect of launch preparation as well. Such a test is generally referred to as a static fire, where the engines are briefly run while the vehicle is physically held down so as not to leave the pad. Operationally, a wet dress rehearsal could proceed directly into a static fire. On the other hand, a full WDR is not required to perform a static fire.

An RS-25 engine during a test run.

While static fire tests are common for modern rockets such as the Falcon 9, NASA has decided not to conduct them during the Artemis I and II missions. The SLS rocket uses lightly modified RS-25 Space Shuttle Main Engines (SSMEs), which are not only flight proven, but were individually tested before integration with the vehicle. There is also an argument to be made that a full-up static fire on the SLS, like the Space Shuttle before it, isn’t truly possible as the vehicle’s Solid Rocket Boosters (SRBs) can only be ignited once.

The Artemis I rocket did however conduct what NASA calls a Green Run back in 2021. This saw the first stage of the SLS fire its four RS-25 engines for eight minutes to simulate an orbital launch. The first attempt at the Green Run saw the engines shut down prematurely, but they did run for the full duration in a subsequent test.

Although such a test wasn’t conducted for Artemis II, and isn’t expected for any of the future SLS rockets, NASA is preparing for a Green Run test of the Exploration Upper Stage (EUS). This is an upgraded second stage for the SLS which is intended to support more ambitious missions after the Artemis III landing, although the timeline and status of those missions are tenuous at best.

The Road to the Moon

According to NASA’s latest update, the issues during the Artemis II Wet Dress Rehearsal have pushed the testing campaign back until at least March, at which point they will run a second WDR. But that certainly doesn’t mean it will be the last.

While admittedly no two missions are the same, Artemis I went through four WDRs before it flew. Even then, the last one was aborted before the countdown was completed. Interestingly it was a hydrogen leak that caused that final rehearsal to be cut short, indicating that it may be a more dynamic problem than NASA realized at the time.

Even if the second WDR for Artemis II goes off without a hitch next month, that doesn’t mean the actual launch won’t be hit with its own delays due to technical glitches, poor weather, or any one of a myriad of other possible issues. Getting a rocket off the ground is never easy, and it only gets harder when there are humans onboard and the destination is farther than anyone has flown since the 1970s. An almost endless number of things need to go exactly right before we’ll see Artemis II lift off the pad, but when it goes, you definitely won’t want to miss it.

Linking Electric Vehicles to Better Air Quality

2026-02-12 20:00:39

Although on its face the results seem obvious, a recent study by [Sandrah Eckel] et al. on the impact of electric cars in California is interesting from a quantitative perspective. What percentage of ICE-only cars do you need to replace with either full electric or hybrid cars before you start seeing an improvement in air quality?

A key part of the study was the use of the TROPOMI instrument, part of the European Sentinel-5 Precursor satellite. This can measure trace gases and aerosols in the atmosphere, both of which directly correlate with air quality. The researchers used historical TROPOMI data from 2019 to 2023 in the study, combining this data with vehicle registrations in California and accounting for confounding factors, such as a certain pandemic grinding things to a halt in 2020 and massively improving air quality.

Although establishing direct causality is hard using only this observational data, the researchers did show that the addition of 200 electric vehicles appears to be correlated with an approximately 1.1% drop in measured atmospheric NO2. This nitrogen oxide is poisonous, and fatal if inhaled in large quantities. It’s also one of the pollutants that result from combustion, formed when nitrogen from the air combines with oxygen at high temperatures.

Estimated adjusted associations of annual vehicle registration counts and annual average NO2 in California from longitudinal linear mixed effects models (Sandrah Eckel et al., 2026)
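For the curious, the kind of longitudinal linear mixed-effects model named in the caption can be sketched in a few lines with statsmodels; the data file and column names (county, year, ev_count, no2) here are hypothetical placeholders rather than the study’s actual variables.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-by-year panel with columns: county, year, ev_count, no2
df = pd.read_csv("county_year_panel.csv")

# Fixed effects for EV registrations and year, random intercept per county
model = smf.mixedlm("no2 ~ ev_count + year", data=df, groups=df["county"])
result = model.fit()
print(result.summary())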

Considering the massive negative impact of nitrogen dioxide on human health, any reduction here is naturally welcome. Of course, this substance is only one of the many pollutants generated by cars. We are also seeing a lot of fine particulate matter (PM2.5) generated from car tires, with a significant share of microplastics coming from this source alone.

Add to this the environmentally toxic additive 6PPD, which goes into tires along with compounds such as carbon black, all of which help tires last longer and resist the likes of UV radiation and ozone exposure. While 6PPD isn’t necessarily directly harmful to humans, the PM2.5 pollution definitely is. As for carbon black and the other additives, they’re still the subject of ongoing research.

One of the things that makes statistics interesting is the nuance that comes from understanding the subject matter. Without that, the adage of ‘lies, damned lies, and statistics’ applies, with spurious correlations often promoted out of ignorance or for unsavory purposes.

In the case of this study by [Sandrah Eckel] et al., it would seem that they did their due diligence, and the correlation makes sense objectively, in that swapping ICE cars for non-ICE cars should improve air quality. That said, as the tires of electric vehicles tend to wear faster due to their heavier weight, it remains to be seen whether the switch is a net positive overall.