MoreRSS

site iconIEEE SpectrumModify

IEEE is the trusted voice for engineering, computing, and technology information around the globe. 
Please copy the RSS to your reader, or quickly subscribe to:

Inoreader Feedly Follow Feedbin Local Reader

Rss preview of Blog of IEEE Spectrum

Why AI Keeps Falling for Prompt Injection Attacks

2026-01-21 21:00:02



Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.

LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot wont tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It wont accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”

AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.

If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.

Human Judgment Depends on Context

Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.

So lets return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them youre filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.

Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.

Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.

Pixel art of a fast-food restaurant with a drive-thru, burger, cup, and trees.Humans detect scams and tricks by assessing multiple layers of context. AI systems do not. Nicholas Little

Why LLMs Struggle With Context and Judgment

LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context, they only reference it.

While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.

This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.

There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. Theres a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.

The Limits of AI Agents

Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.

Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.

We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant.

We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.

The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving themworld models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.

Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while its going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.

From Vietnam Boat Refugee to Reliability Engineering Scholar

2026-01-21 03:00:03



Hoang Pham has spent his career trying to ensure that some of the world’s most critical systems don’t fail, including commercial aircraft engines, nuclear facilities, and massive data centers that underpin AI and cloud computing.

A professor of industrial and systems engineering at Rutgers University in New Brunswick, N.J., and a longtime volunteer for IEEE, Pham, an IEEE Life Fellow, is internationally recognized for advancing the mathematical foundations of reliability engineering. His work earned him the IEEE Reliability Society’s Engineer of the Year Award in 2009. He was recognized for helping to shape how engineers model risk in complex, data-rich systems.

Hoang Pham


Employer

Rutgers University in New Brunswick, N.J.

Job title

Professor of industrial and systems engineering

Member grade

Life Fellow

Alma maters

Northeastern Illinois University, in Chicago; University of Illinois at Urbana-Champaign; and SUNY Buffalo.

The discipline that defines his career was forged long before equations, peer-reviewed journals, or keynote speeches. It began on an overcrowded fishing boat in 1979 when he was fleeing Vietnam after the war, when survival as one of the country’s “boat people” depended on endurance, luck, and the fragile reliability of a vessel never meant to carry so many lives. Like thousands of others, he fled from his war-torn country after the fall of Saigon, which was controlled by communist North Vietnamese forces.

To mark the 50th anniversary of the fall of Saigon in 1975, Pham and his son Hoang Jr.—a Rutgers computer science graduate turned filmmaker—produced Unstoppable Hope, a documentary about Vietnam’s boat people. The film tells the stories of a dozen refugees who, like Pham, survived perilous escapes and went on to build successful lives in the United States.

Growing up during the Vietnam War

Pham was born in Bình Thuận, Vietnam. His parents had only a little formal education, having grown up in the 1930s, when schooling was rare. To support their eight children, his parents ran a factory making bricks by hand. Despite their limited means, his parents held an unshakable belief that education was the surest path to a better life.

From an early age, Pham gravitated toward mathematics. Computers were scarce, but numbers and logic came naturally to him. He imagined becoming a teacher or professor and gradually began thinking about how mathematics could be applied to practical problems—how abstract reasoning might improve daily life.

His intellectual curiosity unfolded amid frequent danger. He grew up during the Vietnam War, when dodging gunfire in his province was routine. The 1968 Tet Offensive exposed the full scale of the conflict, making it clear that violence was not an interruption to life but a condition of it.

Pham recalls that after the Communist takeover of South Vietnam in 1975, conditions worsened dramatically. Families without ties to the new government, especially those who operated small businesses, found it increasingly dangerous to work, study, or apply for jobs, he says. People began vanishing. Many attempted to escape by boat, knowing the risks: imprisonment if caught or potentially death at sea.

A successful escape

In June 1979, at the height of Vietnam’s typhoon season, Pham’s mother made an agonizing decision. She placed Pham, then 18 years old, onto a small, overcrowded fishing vessel in the hope that he might reach freedom.

The boat, which was designed to carry about 100 people, departed with 275.

Pham’s 12-day journey was harrowing. He was confined to the lower deck, which was packed so tightly that movement was nearly impossible. Seasickness overwhelmed many passengers, and he remembers losing consciousness shortly after departure. Food was scarce, and safe drinking water was nearly nonexistent. Violent storms battered the vessel, and pirates loomed.

“Every moment felt like a struggle against nature, fate, and internal despair,” Pham says.

The boat eventually washed ashore on a remote island off the Malaysian coast. Arriving at a refugee camp offered little relief; food and clean water were scarce, disease spread rapidly, and nearly everyone—including Pham—contracted malaria. Death came almost nightly.

After two weeks, Malaysian authorities transferred the refugees to a transit camp, where the United Nations provided basic rations. Still, the asylum seekers’ futures remained uncertain. It is estimated by the U.N. Refugee Agency that between 1975 and the early 1990s, roughly 800,000 Vietnamese people attempted to escape by boat. As many as 250,000 did not survive the harrowing journey, the agency estimates.

Starting over with nothing

In January 1980, at age 19, Pham learned that someone in the United States had agreed to sponsor him for entry, he says. He soon boarded an airplane for the first time and landed in Seattle.

His troubles weren’t over, however. He arrived in a city blanketed by snow, wearing thin clothing and carrying only a spare shirt. The frosty weather was not his greatest concern, though. During his first two months, he spent most of his time in a hospital, recovering from malaria and other diseases. And he spoke no English.

Still, Pham—who had been a first-year college student in Vietnam—refused to abandon his goal of becoming a teacher, he says. He enrolled at Lincoln High School in order to gain English proficiency and position himself to enter an American college. One teacher allowed him to test into a calculus class despite his limited English—which he passed.

“That moment told me I could survive here,” Pham says.

Within months, he learned he could attend college on a scholarship. He moved to Chicago in August 1980 to study at the National College of Education, then he transferred to Northeastern Illinois University, also in Chicago, earning bachelor’s degrees in mathematics and computer science in 1982.

Encouraged by mentors, he earned a master’s degree in statistics at the University of Illinois at Urbana-Champaign in 1984, followed by a Ph.D. in reliability engineering at the State University of New York at Buffalo in 1989.

When failure is not an option

Pham’s research direction crystallized in 1988 while searching for a dissertation topic. He was reading the January 1988 issue of IEEE Spectrum and had a flash of inspiration after seeing a classified ad posted by the U.S. Defense Department’s Naval Underwater System Center (now known as the Naval Undersea Warfare Center). The ad asked, “Can your theories solve the unsolvable?” It focused on the reliability of undersea communication and combat decision-making systems.

The ad revealed to him that institutions were actively applying mathematics and statistics to solve engineering problems. Pham says he still keeps a copy of that Spectrum issue in his office.

After completing his Ph.D., he joined Boeing as a senior specialist engineer at its Renton, Wash., facility, working on engine reliability for the 777 aircraft, which was under development.

He worked there for 18 months, then accepted a senior engineering specialist position at the Idaho National Laboratory, in Idaho Falls, where he worked on nuclear systems.

His desire to become an instructor never left him, however. In 1993 he joined Rutgers as an assistant professor of industrial and systems engineering.

Today his research focuses on reliability in modern, data-intensive systems, including AI infrastructure and global data centers.

“The problem now isn’t getting data,” he says. “It’s knowing which data to trust.”

Charting his IEEE journey

Pham joined IEEE in 1985 as a student member and credits the organization with shaping much of his professional life. IEEE provided a platform for scholarship, collaboration, and visibility at critical moments in his career, he says.

He served as associate technical editor of IEEE Communications Magazine from 1992 to 2000, was a guest editor for a special issue on fault-tolerant software in the June 1993 IEEE Transactions on Reliability, and was the program vice chair of the annual IEEE Reliability and Maintainability Symposium in 1994. In 2024 he returned to Vietnam as a plenary speaker at the 16th IEEE/SICE International Symposium on System Integration.

In addition to being named a distinguished professor at Rutgers, he served as chair of the industrial and systems engineering department from 2007 to 2013.

“If my journey holds one lesson,” he says, “it is this: Struggle builds resilience, and resilience makes the extraordinary possible. Even in darkness, perseverance lights the way.”

The Quest to Build a Radio Telescope That Can Hear the Cosmic Dark Ages

2026-01-20 22:00:03



Isolation dictates where we go to see into the far reaches of the universe. The Atacama Desert of Chile, the summit of Mauna Kea in Hawaii, the vast expanse of the Australian Outback—these are where astronomers and engineers have built the great observatories and radio telescopes of modern times. The skies are usually clear, the air is arid, and the electronic din of civilization is far away.

It was to one of these places, in the high desert of New Mexico, that a young astronomer named Jack Burns went to study radio jets and quasars far beyond the Milky Way. It was 1979, he was just out of grad school, and the Very Large Array, a constellation of 28 giant dish antennas on an open plain, was a new mecca of radio astronomy.

But the VLA had its limitations—namely, that Earth’s protective atmosphere and ionosphere blocked many parts of the electromagnetic spectrum, and that, even in a remote desert, earthly interference was never completely gone.

Could there be a better, even lonelier place to put a radio telescope? Sure, a NASA planetary scientist named Wendell Mendell, told Burns: How about the moon? He asked if Burns had ever thought about building one there.

“My immediate reaction was no. Maybe even hell, no. Why would I want to do that?” Burns recalls with a self-deprecating smile. His work at the VLA had gone well, he was fascinated by cosmology’s big questions, and he didn’t want to be slowed by the bureaucratic slog of getting funding to launch a new piece of hardware.

But Mendell suggested he do some research and speak at a conference on future lunar observatories, and Burns’s thinking about a space-based radio telescope began to shift. That was in 1984. In the four decades since, he’s published more than 500 peer-reviewed papers on radio astronomy. He’s been an adviser to NASA, the Department of Energy, and the White House, as well as a professor and a university administrator. And while doing all that, Burns has had an ongoing second job of sorts, as a quietly persistent advocate for radio astronomy from space.

And early next year, if all goes well, a radio telescope for which he’s a scientific investigator will be launched—not just into space, not just to the moon, but to the moon’s far side, where it will observe things invisible from Earth.

“You can see we don’t lack for ambition after all these years,” says Burns, now 73 and a professor emeritus of astrophysics at the University of Colorado Boulder.

The instrument is called LuSEE-Night, short for Lunar Surface Electromagnetics Experiment–Night. It will be launched from Florida aboard a SpaceX rocket and carried to the moon’s far side atop a squat four-legged robotic spacecraft called Blue Ghost Mission 2, built and operated by Firefly Aerospace of Cedar Park, Texas.

Illustration of a four-legged structure with solar panels on the sides on the surface of the moon. In an artist’s rendering, the LuSEE-Night radio telescope sits atop Firefly Aerospace’s Blue Ghost 2 lander, which will carry it to the moon’s far side. Firefly Aerospace

Landing will be risky: Blue Ghost 2 will be on its own, in a place that’s out of the sight of ground controllers. But Firefly’s Blue Ghost 1 pulled off the first successful landing by a private company on the moon’s near side in March 2025. And Burns has already put hardware on the lunar surface, albeit with mixed results: An experiment he helped conceive was on board a lander called Odysseus, built by Houston-based Intuitive Machines, in 2024. Odysseus was damaged on landing, but Burns’s experiment still returned some useful data.

Burns says he’d be bummed about that 2024 mission if there weren’t so many more coming up. He’s joined in proposing myriad designs for radio telescopes that could go to the moon. And he’s kept going through political disputes, technical delays, even a confrontation with cancer. Finally, finally, the effort is paying off.

“We’re getting our feet into the lunar soil,” says Burns, “and understanding what is possible with these radio telescopes in a place where we’ve never observed before.”

Why Go to the Far Side of the Moon?

A moon-based radio telescope could help unravel some of the greatest mysteries in space science. Dark matter, dark energy, neutron stars, and gravitational waves could all come into better focus if observed from the moon. One of Burns’s collaborators on LuSEE-Night, astronomer Gregg Hallinan of Caltech, would like such a telescope to further his research on electromagnetic activity around exoplanets, a possible measure of whether these distant worlds are habitable. Burns himself is especially interested in the cosmic dark ages, an epoch that began more than 13 billion years ago, just 380,000 years after the big bang. The young universe had cooled enough for neutral hydrogen atoms to form, which trapped the light of stars and galaxies. The dark ages lasted between 200 million and 400 million years.


timeline visualization

LuSEE-Night will listen for faint signals from the cosmic dark ages, a period that began about 380,000 years after the big bang, when neutral hydrogen atoms had begun to form, trapping the light of stars and galaxies. Chris Philpot

“It’s a critical period in the history of the universe,” says Burns. “But we have no data from it.”

The problem is that residual radio signals from this epoch are very faint and easily drowned out by closer noise—in particular, our earthly communications networks, power grids, radar, and so forth. The sun adds its share, too. What’s more, these early signals have been dramatically redshifted by the expansion of the universe, their wavelengths stretched as their sources have sped away from us over billions of years. The most critical example is neutral hydrogen, the most abundant element in the universe, which when excited in the laboratory emits a radio signal with a wavelength of 21 centimeters. Indeed, with just some backyard equipment, you can easily detect neutral hydrogen in nearby galactic gas clouds close to that wavelength, which corresponds to a frequency of 1.42 gigahertz. But if the hydrogen signal originates from the dark ages, those 21 centimeters are lengthened to tens of meters. That means scientists need to listen to frequencies well below 50 megahertz—parts of the radio spectrum that are largely blocked by Earth’s ionosphere.

Which is why the lunar far side holds such appeal. It may just be the quietest site in the inner solar system.

“It really is the only place in the solar system that never faces the Earth,” says David DeBoer, a research astronomer at the University of California, Berkeley. “It really is kind of a wonderful, unique place.”

For radio astronomy, things get even better during the lunar night, when the sun drops beneath the horizon and is blocked by the moon’s mass. For up to 14 Earth-days at a time, a spot on the moon’s far side is about as electromagnetically dark as any place in the inner solar system can be. No radiation from the sun, no confounding signals from Earth. There may be signals from a few distant space probes, but otherwise, ideally, your antenna only hears the raw noise of the cosmos.

“When you get down to those very low radio frequencies, there’s a source of noise that appears that’s associated with the solar wind,” says Caltech’s Hallinan. Solar wind is the stream of charged particles that speed relentlessly from the sun. “And the only location where you can escape that within a billion kilometers of the Earth is on the lunar surface, on the nighttime side. The solar wind screams past it, and you get a cavity where you can hide away from that noise.”

How Does LuSEE-Night Work?

LuSEE-Night’s receiver looks simple, though there’s really nothing simple about it. Up top are two dipole antennas, each of which consists of two collapsible rods pointing in opposite directions. The dipole antennas are mounted perpendicular to each other on a small turntable, forming an X when seen from above. Each dipole antenna extends to about 6 meters. The turntable sits atop a box of support equipment that’s a bit less than a cubic meter in volume; the equipment bay, in turn, sits atop the Blue Ghost 2 lander, a boxy spacecraft about 2 meters tall.

A person wearing a hairnet, facemask, and vinyl gloves working on a shiny metal apparatus.

A photo of people wearing hairnets, facemasks, and vinyl gloves working on a shiny metal apparatus.

A person wearing a hairnet, facemask, and vinyl gloves working on a shiny metal apparatus.LuSEE-Night undergoes final assembly [top and center] at the Space Sciences Laboratory at the University of California, Berkeley, and testing [bottom] at Firefly Aerospace outside Austin, Texas. From top: Space Sciences Laboratory/University of California, Berkeley (2); Firefly Aerospace

“It’s a beautiful instrument,” says Stuart Bale, a physicist at the University of California, Berkeley, who is NASA’s principal investigator for the project. “We don’t even know what the radio sky looks like at these frequencies without the sun in the sky. I think that’s what LuSEE-Night will give us.”

The apparatus was designed to serve several incompatible needs: It had to be sensitive enough to detect very weak signals from deep space; rugged enough to withstand the extremes of the lunar environment; and quiet enough to not interfere with its own observations, yet loud enough to talk to Earth via relay satellite as needed. Plus the instrument had to stick to a budget of about US $40 million and not weigh more than 120 kilograms. The mission plan calls for two years of operations.

The antennas are made of a beryllium copper alloy, chosen for its high conductivity and stability as lunar temperatures plummet or soar by as much as 250 °C every time the sun rises or sets. LuSEE-Night will make precise voltage measurements of the signals it receives, using a high-impedance junction field-effect transistor to act as an amplifier for each antenna. The signals are then fed into a spectrometer—the main science instrument—which reads those voltages at 102.4 million samples per second. That high read-rate is meant to prevent the exaggeration of any errors as faint signals are amplified. Scientists believe that a cosmic dark-ages signature would be five to six orders of magnitude weaker than the other signals that LuSEE-Night will record.

The turntable is there to help characterize the signals the antennas receive, so that, among other things, an ancient dark-ages signature can be distinguished from closer, newer signals from, say, galaxies or interstellar gas clouds. Data from the early universe should be virtually isotropic, meaning that it comes from all over the sky, regardless of the antennas’ orientation. Newer signals are more likely to come from a specific direction. Hence the turntable: If you collect data over the course of a lunar night, then reorient the antennas and listen again, you’ll be better able to distinguish the distant from the very, very distant.

What’s the ideal lunar landing spot if you want to take such readings? One as nearly opposite Earth as possible, on a flat plain. Not an easy thing to find on the moon’s hummocky far side, but mission planners pored over maps made by lunar satellites and chose a prime location about 24 degrees south of the lunar equator.

Other lunar telescopes have been proposed for placement in the permanently shadowed craters near the moon’s south pole, just over the horizon when viewed from Earth. Such craters are coveted for the water ice they may hold, and the low temperatures in them (below -240 °C) are great if you’re doing infrared astronomy and need to keep your instruments cold. But the location is terrible if you’re working in long-wavelength radio.

“Even the inside of such craters would be hard to shield from Earth-based radio frequency interference (RFI) signals,” Leon Koopmans of the University of Groningen in the Netherlands, said in an email. “They refract off the crater rims and often, due to their long wavelength, simply penetrate right through the crater rim.”

RFI is a major—and sometimes maddening—issue for sensitive instruments. The first-ever landing on the lunar far side was by the Chinese Chang’e 4 spacecraft, in 2019. It carried a low-frequency radio spectrometer, among other experiments. But it failed to return meaningful results, Chinese researchers said, mostly because of interference from the spacecraft itself.

The Accidental Birth of Radio Astronomy

Sometimes, though, a little interference makes history. Here, it’s worth a pause to remember Karl Jansky, considered the father of radio astronomy. In 1928, he was a young engineer at Bell Telephone Laboratories in Holmdel, N.J., assigned to isolate sources of static in shortwave transatlantic telephone calls. Two years later, he built a 30-meter-long directional antenna, mostly out of brass and wood, and after accounting for thunderstorms and the like, there was still noise he couldn’t explain. At first, its strength seemed to follow a daily cycle, rising and sinking with the sun. But after a few months’ observation, the sun and the noise were badly out of sync.

Black and white photo of a man standing in a field in front of a large structure made of crisscrossing segments and resting on wheels. In 1930, Karl Jansky, a Bell Labs engineer in Holmdel, N.J., built this rotating antenna on wheels to identify sources of static for radio communications. NRAO/AUI/NSF

It gradually became clear that the noise’s period wasn’t 24 hours; it was 23 hours and 56 minutes—the time it takes Earth to turn once relative to the stars. The strongest interference seemed to come from the direction of the constellation Sagittarius, which optical astronomy suggested was the center of the Milky Way. In 1933, Jansky published a paper in Proceedings of the Institute of Radio Engineers with a provocative title: “Electrical Disturbances Apparently of Extraterrestrial Origin.” He had opened the electromagnetic spectrum up to astronomers, even though he never got to pursue radio astronomy himself. The interference he had defined was, to him, “star noise.”

Thirty-two years later, two other Bell Labs scientists, Arno Penzias and Robert Wilson, ran into some interference of their own. In 1965 they were trying to adapt a horn antenna in Holmdel for radio astronomy—but there was a hiss, in the microwave band, coming from all parts of the sky. They had no idea what it was. They ruled out interference from New York City, not far to the north. They rewired the receiver. They cleaned out bird droppings in the antenna. Nothing worked.

Black and white photo of a large triangular structure on a frame, with two people looking up at it.  In the 1960s, Arno Penzias and Robert W. Wilson used this horn antenna in Holmdel, N.J., to detect faint signals from the big bang. GL Archive/Alamy

Meanwhile, an hour’s drive away, a team of physicists at Princeton University under Robert Dicke was trying to find proof of the big bang that began the universe 13.8 billion years ago. They theorized that it would have left a hiss, in the microwave band, coming from all parts of the sky. They’d begun to build an antenna. Then Dicke got a phone call from Penzias and Wilson, looking for help. “Well, boys, we’ve been scooped,” he famously said when the call was over. Penzias and Wilson had accidentally found the cosmic microwave background, or CMB, the leftover radiation from the big bang.

Burns and his colleagues are figurative heirs to Jansky, Penzias, and Wilson. Researchers suggest that the giveaway signature of the cosmic dark ages may be a minuscule dip in the CMB. They theorize that dark-ages hydrogen may be detectable only because it has been absorbing a little bit of the microwave energy from the dawn of the universe.

The Moon Is a Harsh Mistress

The plan for Blue Ghost Mission 2 is to touch down soon after the sun has risen at the landing site. That will give mission managers two weeks to check out the spacecraft, take pictures, conduct other experiments that Blue Ghost carries, and charge LuSEE-Night’s battery pack with its photovoltaic panels. Then, as local sunset comes, they’ll turn everything off except for the LuSEE-Night receiver and a bare minimum of support systems.

Image of the moon's surface, with a closeup of one section. LuSEE-Night will land at a site [orange dot] that’s about 25 degrees south of the moon’s equator and opposite the center of the moon’s face as seen from Earth. The moon’s far side is ideal for radio astronomy because it’s shielded from the solar wind as well as signals from Earth. Arizona State University/GSFC/NASA

There, in the frozen electromagnetic stillness, it will scan the spectrum between 0.1 and 50 MHz, gathering data for a low-frequency map of the sky—maybe including the first tantalizing signature of the dark ages.

“It’s going to be really tough with that instrument,” says Burns. “But we have some hardware and software techniques that…we’re hoping will allow us to detect what’s called the global or all-sky signal.… We, in principle, have the sensitivity.” They’ll listen and listen again over the course of the mission. That is, if their equipment doesn’t freeze or fry first.

A major task for LuSEE-Night is to protect the electronics that run it. Temperature extremes are the biggest problem. Systems can be hardened against cosmic radiation, and a sturdy spacecraft should be able to handle the stresses of launch, flight, and landing. But how do you build it to last when temperatures range between 120 and −130 °C? With layers of insulation? Electric heaters to reduce nighttime chill?

“All of the above,” says Burns. To reject daytime heat, there will be a multicell parabolic radiator panel on the outside of the equipment bay. To keep warm at night, there will be battery power—a lot of battery power. Of LuSEE-Night’s launch mass of 108 kg, about 38 kg is a lithium-ion battery pack with a capacity of 7,160 watt-hours, mostly to generate heat. The battery cells will recharge photovoltaically after the sun rises. The all-important spectrometer has been programmed to cycle off periodically during the two weeks of darkness, so that the battery’s state of charge doesn’t drop below 8 percent; better to lose some observing time than lose the entire apparatus and not be able to revive it.

Lunar Radio Astronomy for the Long Haul

And if they can’t revive it? Burns has been through that before. In 2024 he watched helplessly as Odysseus, the first U.S.-made lunar lander in 50 years, touched down—and then went silent for 15 agonizing minutes until controllers in Texas realized they were receiving only occasional pings instead of detailed data. Odysseus had landed hard, snapped a leg, and ended up lying almost on its side.

Color photo of a metal structure inside an open rocket.  ROLSES-1, shown here inside a SpaceX Falcon 9 rocket, was the first radio telescope to land on the moon, in February 2024. During a hard landing, one leg broke, making it difficult for the telescope to send readings back to Earth.Intuitive Machines/SpaceX

As part of its scientific cargo, Odysseus carried ROLSES-1 (Radiowave Observations on the Lunar Surface of the photo-Electron Sheath), an experiment Burns and a friend had suggested to NASA years before. It was partly a test of technology, partly to study the complex interactions between sunlight, radiation, and lunar soil—there’s enough electric charge in the soil sometimes that dust particles levitate above the moon’s surface, which could potentially mess with radio observations. But Odysseus was damaged badly enough that instead of a week’s worth of data, ROLSES got 2 hours, most of it recorded before the landing. A grad student working with Burns, Joshua Hibbard, managed to partially salvage the experiment and prove that ROLSES had worked: Hidden in its raw data were signals from Earth and the Milky Way.

“It was a harrowing experience,” Burns said afterward, “and I’ve told my students and friends that I don’t want to be first on a lander again. I want to be second, so that we have a greater chance to be successful.” He says he feels good about LuSEE-Night being on the Blue Ghost 2 mission, especially after the successful Blue Ghost 1 landing. The ROLSES experiment, meanwhile, will get a second chance: ROLSES-2 has been scheduled to fly on Blue Ghost Mission 3, perhaps in 2028.

Artist\u2019s rendering of a gray surface with parallel zigzagging lines.  NASA’s plan for the FarView Observatory lunar radio telescope array, shown in an artist’s rendering, calls for 100,000 dipole antennas to be spread out over 200 square kilometers. Ronald Polidan

If LuSEE-Night succeeds, it will doubtless raise questions that require much more ambitious radio telescopes. Burns, Hallinan, and others have already gotten early NASA funding for a giant interferometric array on the moon called FarView. It would consist of a grid of 100,000 antenna nodes spread over 200 square kilometers, made of aluminum extracted from lunar soil. They say assembly could begin as soon as the 2030s, although political and budget realities may get in the way.

Through it all, Burns has gently pushed and prodded and lobbied, advocating for a lunar observatory through the terms of ten NASA administrators and seven U.S. presidents. He’s probably learned more about Washington politics than he ever wanted. American presidents have a habit of reversing the space priorities of their predecessors, so missions have sometimes proceeded full force, then languished for years. With LuSEE-Night finally headed for launch, Burns at times sounds buoyant: “Just think. We’re actually going to do cosmology from the moon.” At other times, he’s been blunt: “I never thought—none of us thought—that it would take 40 years.”

“Like anything in science, there’s no guarantee,” says Burns. “But we need to look.”

NASA Demolishes Historic Test Stands That Built the Space Age

2026-01-18 22:00:01



The thunderous roar that echoed across Huntsville, Alabama, on 10 January wasn’t a rocket launch but something equally momentous: the end of an era. Two massive test stands at Marshall Space Flight Center that helped send humans to the moon collapsed in carefully choreographed implosions, their steel frameworks crumbling in seconds after decades standing as monuments to U.S. spaceflight achievement.

The Dynamic Test Stand and the Propulsion and Structural Test Facility, better known as the T-tower for its distinctive shape, represented more than just obsolete infrastructure. Built in the 1950s and ’60s, these structures witnessed the birth of the space age, serving as proving grounds where engineers pushed the limits of rocket technology and ensured every component could withstand the violence of launch.

T-tower’s Role in Rocket Testing

The T-tower came first, constructed in 1957 by the Army Ballistic Missile Agency before NASA even existed. At just over 50 meters tall, it was designed for static testing, where rockets are fired at full power while restrained and connected to instruments that measure every vibration, temperature spike, and pressure fluctuation. Here, engineers tested components of the Saturn family of launch vehicles under the direction of Wernher von Braun, including the mighty F-1 engines that would eventually power Apollo missions. The tower later proved essential for testing space shuttle solid rocket boosters before being retired in the 1990s.

The Dynamic Test Stand told an even more dramatic story. Built in 1964 and rising over 105 meters above the Alabama landscape, it once stood as the tallest human-made structure in North Alabama. Unlike the T-tower’s static tests, this facility subjected fully assembled Saturn V rockets to the mechanical stresses and vibrations they would experience during actual flight, everything shaking, flexing, and straining just as it would during launch, but without leaving the ground. Engineers couldn’t afford failures once these rockets reached the launchpad at Kennedy Space Center: Saturn V was too powerful, too expensive, and too important to risk.

The stand’s role didn’t end with Apollo. In 1978, it became the first location where engineers integrated all space shuttle elements together: orbiter, external fuel tank, and solid rocket boosters assembled as one complete system. Its final mission came in the early 2000s, when it served as a drop tower for microgravity experiments, a far quieter purpose than its explosive origins.

Both facilities earned designations as National Historic Landmarks in 1985, recognition of their irreplaceable contributions to human spaceflight. That makes their demolition bittersweet but necessary. The structures are no longer safe, and maintaining aging facilities drains resources that could support current missions. Marshall is removing 19 obsolete structures as part of a broader campus transformation, creating a modern, interconnected facility ready for NASA’s next chapter.

“These facilities helped NASA make history. While it is hard to let them go, they’ve earned their retirement. The people who built and managed these facilities and empowered our mission of space exploration are the most important part of their legacy,” said acting Marshall director Rae Ann Meyer in a statement.

NASA has worked to preserve that legacy. Detailed architectural drawings, photographs, and written histories now reside permanently in the Library of Congress. Auburn University created high-resolution digital models using LiDAR and 360-degree photography, capturing the structures in exquisite detail before their destruction. These virtual archives ensure future generations can still appreciate the scale and engineering achievement these towers represented, even after the steel has been cleared away.

Are There Enough Engineers for the AI Boom?

2026-01-17 22:00:01



The AI data center construction boom continues unabated, with the demand for power in the United States potentially reaching 106 gigawatts by 2035, according to a December report from research and analysis company BloombergNEF. That’s a 36 percent jump from the company’s previous outlook, published just seven months earlier. But there are severe constraints in power availability, material, equipment, and—perhaps most significantly—a lack of engineers, technicians, and skilled craftsmen that could turn the data center boom into a bust.

The power grid engineering workforce is currently shrinking, and data center operators are also hurting for trained electrical engineers. Laura Laltrello, the chief operating officer for Applied Digital, says demand has accelerated for civil, mechanical, and electrical engineers, as well as construction management and oversight positions in recent months. (Applied Digital is a data center developer and operator that is building two data center campuses near Harwood, North Dakota, that will require 1.4 GW of power when completed.) The growing demand for skilled workers has forced her company to widen the recruitment perimeter.

“As we anticipate a shortage of traditional engineering talent, we are sourcing from diverse industries,” says Laltrello. “We are finding experts who understand power and cooling from sectors like nuclear energy, the military, and aerospace. Expertise doesn’t have to come from a data center background.”

Growing Demand for Data Center Engineers

For every engineer needed to design, specify, build, inspect, commission, or run a new AI data center, dozens of other positions are in short supply. According to the Association for Computer Operations and Management’s (AFCOM) State of the Data Center Report 2025, 58 percent of data center managers identified multiskilled data center operators as the top area of growth, while 50 percent signaled increasing demand for data center engineers. Security specialists are also a critical need.

Through the next decade, the U.S. Bureau of Labor Statistics projects the need for almost 400,000 more construction workers by 2033. By far the biggest needs are in power infrastructure, electricians, plumbing, and HVAC, and roughly 17,500 electrical and electronics engineers. These categories directly map to the skills required to design, build, commission, and operate modern data centers.

“The challenge is not simply the absolute number of workers available, but the timing and intensity of demand,” says Bill Kleyman, author of the AFCOM report and the CEO of AI infrastructure firm Apolo. “Data centers are expanding at the same time that utilities, manufacturing, renewables, grid infrastructure, and construction are all competing for the same skilled labor pool, and AI is amplifying this pressure.”

Data center developers like Lancium and construction firms like Crusoe face enormous demands to build faster, bigger, and more power-dense facilities. For example, they’re developing the Stargate project in Abilene, Texas, for Oracle and OpenAI. The project has two buildings that went live in October 2025, with another six scheduled for completion by the middle of 2026. The entire AI data center campus, once completed, will require 1.2 GW of power.

Michael McNamara, the CEO of Lancium, says that in one year his company can currently build enough AI data center infrastructure to require 1 GW of power. Big tech firms, he says, want this raised to 1 GW a quarter and eventually 1 GW per month or less.

That kind of ramp-up of construction pace calls for tens of thousands more engineers. The shortage of engineering talent is paralleled by persistent staffing shortages in data center operations and facility management professionals, electrical and mechanical technicians, high-voltage and power systems engineers, skilled HVAC technicians with experience in high-density or liquid cooling, and construction specialists familiar with complex mechanical, electrical, and plumbing (MEP) integration, says Matthew Hawkins, the director of education for Uptime Institute.

“Demand for each category is rising significantly faster than supply,” says Hawkins.

Technical colleges and applied education programs are among the most effective engines for workforce growth in the data center industry. They focus on hands-on skills, facilities operations, power and cooling systems, and real-world job readiness. With so many new data centers being built in Texas, workforce programs are popping up all over that state. One example is the SMU Lyle School of Engineering’s Master of Science in Datacenter Systems Engineering (MS DSE) in Dallas. The program blends electrical engineering, IT, facilities management, business continuity, and cybersecurity. There is also a 12-week AI data center technician program at Dallas College and a similar program at Texas State Technical College near Waco.

“Technical colleges are driving the charge in bringing new talent to an industry undergoing exponential growth with an almost infinite appetite for skilled workers,” says Wendy Schuchart, an association manager at AFCOM.

Vendors and industry associations are actively addressing the talent gap too. Microsoft’s Datacenter Academy is a public-private partnership involving community colleges in regions where Microsoft operates data center facilities. Google supports local nonprofits and colleges offering training in IT and data center operations, and Amazon offers data center apprenticeships.

The Siemens Educates America program has surpassed 32,000 apprenticeships across 32 states, 36 labs, and 72 partner industry labor organizations. The company has committed to training 200,000 electricians and electrical manufacturing workers by 2030. Similarly, the National Electrical Contractors Association (NECA) operates the Electrical Training Alliance; the Society of Manufacturing Engineers (SME) offers ToolingU-SME, aimed at expanding the manufacturing workforce; and Uptime Institute Education programs look to accelerate the readiness of technicians and operators.

“Every university we speak with is thinking about this challenge and shifting its curriculum to prepare students for the future of digital infrastructure,” said Laltrello. “The best way to predict the future is to build it.”

IEEE Medal of Honor Recipient Is Nvidia’s CEO Jensen Huang

2026-01-17 03:00:02



Jensen Huang, founder and CEO of Nvidia, is the 2026 IEEE Medal of Honor recipient. The IEEE honorary member is being recognized for his “leadership in the development of graphics processing units and their application to scientific computing and artificial intelligence.” The news was announced on 6 January by IEEE’s president and CEO, Mary Ellen Randall, at the Consumer Electronics Show in Las Vegas.

Huang helped found Nvidia in 1993. Under his direction, the company introduced the programmable GPU six years later. The device sparked extraordinary advancements that have transformed fields including artificial intelligence, computing, and medicine—influencing how technology improves society.

“[Receiving the IEEE Medal of Honor] is an incredible honor, ” Huang said at the CES event. “I thank [IEEE] for this incredible award that I receive on behalf of all the great employees at Nvidia.”

With a US $2 million prize the award underscores IEEE’s commitment to celebrating visionaries who drive the future of technology for the benefit of humanity.

“The IEEE Medal of Honor is the pinnacle of recognition and our most prestigious award,” Randall said at the event. “[Jensen] Huang’s leadership and technical vision have unlocked a new era of innovation.

“His vision and subsequent development of [Nvidia’s first GPU hardware] is emblematic of the [award].”

Huang’s impact on technology

Huang’s impact has been acknowledged beyond the realm of engineering. He was named as one of the “Architects of AI,” a group of eight tech leaders who were collectively named Time magazine’s 2025 Person of the Year. He was also featured on a 2021 cover of Time magazine, was named the world’s top-performing CEO for 2019 by Harvard Business Review, and was Fortune’s 2017 Businessperson of the Year.

He is also an IEEE–Eta Kappa Nu eminent member.

This year’s IEEE Medal of Honor, along with other high-profile IEEE awards, will be presented during the IEEE Honors Ceremony, to be held in April in New York City. To follow news and updates on IEEE’s most prestigious awards, follow IEEE Awards on LinkedIn.