
Military AI Policy Needs Democratic Oversight

2026-03-08 18:00:03



A simmering dispute between the United States Department of Defense (DOD) and Anthropic has now escalated into a full-blown confrontation, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies or Congress and the broader democratic process?

The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD unrestricted use of its AI systems. When the company refused, the administration moved to designate Anthropic a supply chain risk and ordered federal agencies to phase out its technology, dramatically escalating the standoff.

Anthropic has refused to cross two lines: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. Hegseth has objected to what he has described as “ideological constraints” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a speech at Elon Musk’s SpaceX last month, “We will not employ AI models that won’t allow you to fight wars.”

Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.

Procurement policies

In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can decline to provide them. For example, a coalition of companies has signed an open letter pledging not to weaponize general-purpose robots. That basic symmetry is a feature of the free market.

Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “supply chain risk.” That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms.

Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. Hegseth has declared that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face legal challenges, but it raises the stakes well beyond the loss of a single DOD contract.

AI governance

It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.

The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails.

To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code.

Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of harmful or high-risk tasks, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.

The second issue, opposition to fully autonomous military targeting, is more complex.

The DOD already maintains policies requiring human judgment in the use of force, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.

Reasonable people can disagree about where those lines should be drawn.

But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.

If the U.S. government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.

The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors.

There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens U.S. technological leadership.

The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice.

Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.

Congress is AWOL

The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity.

At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions.

This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion.

The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them.

Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards.

If AI guardrails can be removed through contract pressure, they will be treated as negotiable. However, if they are grounded in law, they can become stable expectations.

Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations.

This article is adapted by the author with permission from Tech Policy Press. Read the original article.

Laser-Based 3D Printing Could Build Future Bases on the Moon

2026-03-07 22:00:02



Through the Artemis Program, NASA hopes to establish a permanent human presence on the Moon in its southern polar region. China, Russia, and the European Space Agency (ESA) have similar plans, all of which involve building bases near the permanently shadowed regions (PSRs), the water-ice-bearing craters that dot the South Pole-Aitken Basin. For these and other agencies, it is vital that these bases be as self-sufficient as possible, since resupply missions cannot be launched regularly and take several days to arrive.

Therefore, any plan for a lunar base must come down to harvesting local resources to meet the needs of its crews as much as possible, a process known as In-Situ Resource Utilization (ISRU). In a recent study, researchers at The Ohio State University (OSU) proposed using a specialized laser-based 3D printing method to turn lunar regolith into hardened building material. According to their findings, this method can produce durable structures that withstand radiation and other harsh conditions on the lunar surface.

The research team was led by Sizhe Xu, a graduate research associate at OSU. He was joined by colleagues from OSU’s Department of Integrated Systems Engineering, Mechanical and Aerospace Engineering, and Materials Science & Engineering. Their paper, “Laser directed energy deposition additive manufacturing of lunar highland regolith simulant,” appeared in the journal Acta Astronautica.

Challenges of Lunar 3D Printing

The importance of ISRU for human exploration has prompted the rapid development of additive manufacturing systems, or 3D printing. These systems have proven effective at fabricating tools, structures, and habitats, reducing dependence on supplies delivered from Earth. Developing such systems for long-duration missions is one of the most challenging aspects of the process, as they must be engineered to operate in the extreme environment on the Moon. This includes the lack of an atmosphere, massive temperature variations, and the ever-present problem of Moon dust.

Scientists use two types of lunar regolith for their experiments and research: Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1). As part of their research, the team used LHS-1, which is rich in basaltic minerals, similar to rock samples obtained by the Apollo missions. They melted this regolith with a laser to produce layers of material and fused them onto a base surface of stainless steel or glass. To assess how well these objects would fare in the lunar environment, the team tested their fabrication process under a range of different environmental conditions.

One thing they noticed was that the fused regolith adhered well to alumina-silicate ceramic, possibly because the two compounds form crystals that enhance heat resistance and mechanical strength. This revealed that the overall quality of the printed material is largely dependent on the surface onto which the regolith is printed. Other environmental factors, such as atmospheric oxygen levels, laser power, and printing speed, also affected the stability of the printed material.
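For readers who want a feel for the knobs involved, a common way to compare laser settings in directed energy deposition is the linear energy density: the laser power divided by the scan speed. The sketch below uses made-up parameter values purely for illustration; they are not the OSU team's settings.

```python
# Minimal sketch: linear energy density in laser directed energy deposition (DED).
# The parameter values below are illustrative placeholders, not the OSU team's settings.

def linear_energy_density(laser_power_w: float, scan_speed_mm_s: float) -> float:
    """Energy delivered per unit track length (J/mm) = power / scan speed."""
    return laser_power_w / scan_speed_mm_s

# Compare a few hypothetical print settings for a regolith-simulant track.
for power_w, speed_mm_s in [(100, 5), (100, 10), (200, 10)]:
    led = linear_energy_density(power_w, speed_mm_s)
    print(f"{power_w:>4} W at {speed_mm_s:>3} mm/s -> {led:5.1f} J/mm")
```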

Where 3D-Printed Material Could Help

Deployed to the Moon’s surface, this process could help build habitats and tools that are strong, resilient, and capable of handling the lunar environment. This has the added benefit of increasing independence from Earth, which is key to realizing long-duration missions on the Moon. In addition to assisting astronauts exploring the Moon in the near future (as part of NASA’s Artemis Program), this technology could also lead to resilient habitats that will enable a long-term human presence on the Moon, Mars, and beyond.

However, there are several unknown environmental factors that could limit the effectiveness of these systems on other worlds, and more data is needed before they can be addressed. In their study, the team suggests that instead of being powered by electricity, future scaled-up versions of their method could rely on solar or hybrid power systems. Nevertheless, the potential for space exploration is clear, and the technology also has applications for life here on Earth. Sarah Wolff, an assistant professor in mechanical and aerospace engineering and a lead author on the study, explained:

There are conditions that happen in space that are really hard to emulate in a simulant. It may work in the lab, but in a resource-scarce environment, you have to try everything to maximize the flexibility of a machine for different scenarios. If we can successfully manufacture things in space using very few resources, that means we can also achieve better sustainability on Earth. To that end, improving the machine’s flexibility for different scenarios is a goal we’re working really hard toward.

As the saying goes, “solving for space solves for Earth.” In environments where materials and resources are limited, laser-based 3D printing is one of several technologies that could support sustainable living. This applies equally to extraterrestrial environments and to regions on Earth experiencing the effects of climate change.

Video Friday: A Robot Hand With Artificial Muscles and Tendons

2026-03-07 00:00:05



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process composed of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.

[ Paper ] via [ SRL ]

Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine.

And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.

[ Boston Dynamics ]

This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path for searching in complex and cluttered environments.

[ DRAGON Lab ]

Thanks, Moju!

OmniPlanner is a unified solution for exploration and inspection-path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.

[ NTNU ]

Thanks, Kostas!

In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multirobot teams under outdoor conditions.

[ FZI ]

Welcome to the future, where there are no other humans.

[ Zhejiang Humanoid ]

This is our latest work on robotic fish, and it’s also the first underwater robot from DRAGON Lab.

[ DRAGON Lab ]

Thanks, Moju!

Watch this one simple trick to make humanoid robots cheaper and safer!

[ Zhejiang Humanoid ]

Gugusse and the Automaton is an 1897 French film by Georges Méliès featuring a humanoid robot in a depiction that’s nearly as realistic as some of the humanoid promo videos we’ve seen lately.

[ Library of Congress ] via [ Gizmodo ]

At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.

[ Agility ]

Kamel Saidi, robotics program manager at the National Institute of Standards and Technology (NIST), on how performance standards can pave the way for humanoid adoption.

[ Humanoids Summit ]

Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now at Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.

[ Waymo Podcast ]

This UPenn GRASP SFI Seminar is by Junyao Shi: “Unlocking Generalist Robots with Human Data and Foundation Models.”

Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets that span tasks, environments, and embodiments, thus limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.

[ UPenn ]

The Millisecond That Could Change Cancer Treatment

2026-03-06 22:00:03



Inside a cavernous hall at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist Walter Wuensch surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.

Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at CERN (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.

CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.” CERN

Radiation therapy has been a cornerstone of cancer treatment since shortly after Wilhelm Conrad Röntgen discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.

FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.

At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.

“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”

The Unlikely Birth of FLASH Therapy

The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at Institut Curie in Orsay, near Paris. Researcher Vincent Favaudon was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared.

Puzzled, Favaudon turned to Marie-Catherine Vozenin, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at Geneva University Hospitals, in Switzerland.

How to Measure Radiation Doses


Radiation therapy uses a variety of units to refer to the amount of energy received by the patient. Here are the main ones under the International System of Units, or SI.

Gray (Gy): A measure of the absorbed dose—that is, how much radiation energy is absorbed by the body. One gray equals 1 joule of radiation energy per kilogram of matter. FLASH delivers a single dose of 40 Gy or more in a fraction of a second. Conventional radiation therapy, by contrast, may deliver a total dose of 40 to 80 Gy but over the course of several weeks.

Sievert (Sv): A measure of the effective dose—that is, the health effects of the radiation, with different types of ionizing radiation (gamma rays, X-rays, alpha particles, and so on) having different effects. One sievert equals 1 joule per kilogram weighted for the biological effectiveness of the radiation and the tissues exposed.


The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.

They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in Science Translational Medicine. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. (The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)
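To put those numbers side by side: dividing dose by delivery time gives the dose rate, which is where FLASH differs most dramatically from conventional therapy. The sketch below assumes a round-number beam-on time for a conventional session, since the article does not specify one.

```python
# Rough dose-rate comparison using figures from the article.
# The conventional session duration (a couple of minutes of beam-on time)
# is an assumed round number for illustration, not a clinical specification.

flash_dose_gy = 10.0          # ultrahigh single dose reported in the 2014 study
flash_time_s = 0.1            # delivered in less than a tenth of a second
conventional_dose_gy = 2.0    # typical dose per daily session
conventional_time_s = 120.0   # assumed ~2 minutes of beam-on time per session

flash_rate = flash_dose_gy / flash_time_s                        # >= 100 Gy/s
conventional_rate = conventional_dose_gy / conventional_time_s   # ~0.017 Gy/s

print(f"FLASH dose rate:        >= {flash_rate:.0f} Gy/s")
print(f"Conventional dose rate: ~{conventional_rate:.3f} Gy/s")
print(f"Ratio:                  ~{flash_rate / conventional_rate:.0f}x")
```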




Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls Billy Loo, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”

But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.

Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.

Adapting Accelerators for FLASH

At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.

“The tumors that we treat are always moving targets,” he says. “That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”

To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.




They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. (Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)

Loo turned to the SLAC National Accelerator Laboratory in Menlo Park, Calif., where physicist Sami Gamal-Eldin Tantawi was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.

Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass.

CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.

What’s the Best Particle for FLASH Therapy?


Even as research on FLASH radiotherapy advances, a central question remains: What kind of particle will deliver it best? The main contenders are electrons, protons, and carbon ions. Each has distinct advantages, limitations, and implications for cost, complexity, and clinical reach.

Electrons—long used to treat surface tumors and to generate X-rays—are light, nimble particles, far easier to control than protons or carbon ions. At low energies, they stop quickly in tissue, but new high-energy systems can drive electrons deeper. Now researchers are working on machines that combine multiple high-energy beams at different angles to let doctors sculpt radiation doses that match the tumor’s shape.

That principle underpins Billy Loo’s PHASER (Pluridirectional High-energy Agile Scanning Electron Radiotherapy) system, developed at Stanford and SLAC and licensed to a startup called TibaRay. An array of high-efficiency linacs generates X-ray beams from many directions at once. Their high output overcomes the inefficiency of electron-to-photon conversion to deliver the dose at FLASH speed. Beam convergence at the tumor and electronic shaping conform the dose in three dimensions, producing uniform coverage with relatively simple infrastructure.

Protons have led the way in early clinical trials, largely because existing proton therapy centers can be adapted to deliver FLASH doses. In 2020, the University of Cincinnati Health launched the first human FLASH trial to use proton beams, to treat cancer that had metastasized to bones. “If I want to be pragmatic, the proton beam is ready to go, so let’s move with what we have,” says Geneva University Hospitals’ Marie-Catherine Vozenin.

Protons can penetrate up to 30 centimeters, reaching deep-seated tumors. But the delivery of protons in a continuous beam limits the dose rates. Also, proton systems are far larger and more expensive than, say, X-ray machines, which will likely constrain their availability to specialized centers.

Carbon ions, used in a handful of elite facilities, offer even higher precision and biological effectiveness compared to electrons and protons. Their Bragg peak—a sudden deposition of energy at a specific depth—makes them appealing for deep or complex tumors. But that unmatched precision comes at a steep price, with each facility costing upward of US $300 million. —T.C.


Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.

At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.

The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.

Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.
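A couple of back-of-the-envelope numbers follow from those figures. The sketch below computes the RF period at 12 gigahertz and the average energy gain if the full 200 MeV were spread over the 20-meter beamline; treating the whole beamline as accelerating length is a simplification, since it also contains magnets and diagnostics.

```python
# Back-of-the-envelope numbers for the CLEAR beamline described above.
# Spreading the energy gain over the full 20-meter beamline is a
# simplification for illustration, not the machine's actual layout.

rf_frequency_hz = 12e9        # field flips polarity 12 billion times per second
rf_period_s = 1.0 / rf_frequency_hz

beam_energy_mev = 200.0       # final beam energy
beamline_length_m = 20.0      # total beamline length, including non-accelerating elements

avg_gradient_mv_per_m = beam_energy_mev / beamline_length_m

print(f"RF period: {rf_period_s * 1e12:.0f} picoseconds per cycle")
print(f"Average energy gain if spread over the full beamline: "
      f"~{avg_gradient_mv_per_m:.0f} MV/m")
```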



Much of this architecture draws directly from the Compact Linear Collider study, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter.

Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.

The power behind the high gradients comes from CERN’s Xboxes, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; if it were continuous, that’s enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.
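The peak numbers sound enormous, but the energy in each burst is modest. Assuming a pulse length of roughly one microsecond, the sketch below works out the energy per pulse and the per-home power implied by the 40,000-homes comparison.

```python
# The pulse-compressor figures above imply surprisingly modest energy per pulse.
# A pulse length of one microsecond is an assumption based on "a microsecond
# burst"; the per-home figure is a rough check of the article's comparison.

peak_power_w = 200e6      # 200 megawatts peak
pulse_length_s = 1e-6     # roughly a microsecond burst

energy_per_pulse_j = peak_power_w * pulse_length_s
homes = 40_000
power_per_home_w = peak_power_w / homes

print(f"Energy per pulse: {energy_per_pulse_j:.0f} J")
print(f"Implied draw per home if the peak were continuous: {power_per_home_w / 1e3:.1f} kW")
```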

According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.

Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.

“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”

To that end, CERN researchers have teamed up with the Lausanne University Hospital (known by its French acronym, CHUV) and the French medical technology company Theryq to design a hospital facility capable of treating large and deep-seated tumors with the very short time scales needed for FLASH and scaled down to fit in a clinical setting.

Theryq’s Approach to FLASH

Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.

“The solution that we are trying to develop here is something which is extremely versatile,” says Ludovic Le Meunier, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.”

Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation. THERYQ

Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, FLASHKNiFE, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.

More recently, Theryq launched FLASHLAB, a compact, 7-MeV platform for radiobiology research.

The company’s most ambitious system, FLASHDEEP, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by Leo Cancer Care, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager Philippe Liger.

FLASH Therapy Moves to Animal Tests

While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.

PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the European X-ray Free-Electron Laser. Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.

At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom]. Top: Frieder Mueller; Bottom: MWFK

“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says Anna Grebinyk, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”

The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.

What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”

Timing matters. At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.

“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”

Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.

FLASH as a Research Tool

Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.

Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”

Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.

High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.

The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.

“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. That’s why people work together.”

This article appears in the March 2026 print issue.

Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)

2026-03-06 19:00:03



Non-terrestrial networks (NTNs) using low earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.

Highlights include:

  • Modeling large satellite constellations
  • Analyzing and visualizing time-varying visibility and link closure
  • Using graphical apps for antenna analysis and RF component design
  • Modeling PAs and digital predistortion
  • Simulating interference effects in communication links
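As one illustration of the kind of link-budget term such a workflow has to capture, the sketch below computes free-space path loss for a LEO satellite pass; the altitude, frequency, and directly overhead geometry are assumed values, not parameters from the webinar.

```python
import math

# Minimal sketch of one link-budget term from the workflow described above:
# free-space path loss (FSPL) between a LEO satellite and a ground terminal.
# The altitude, frequency, and overhead-pass assumption are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * frequency_hz / C)

altitude_m = 550e3        # assumed LEO altitude, satellite directly overhead
frequency_hz = 12e9       # assumed Ku-band downlink frequency

print(f"FSPL at {altitude_m / 1e3:.0f} km, {frequency_hz / 1e9:.0f} GHz: "
      f"{fspl_db(altitude_m, frequency_hz):.1f} dB")
```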

From TV Repairman to Electromagnetic Compatibility Expert

2026-03-06 03:00:03



No one had very high career aspirations for teenager David A. Weston—except for Weston himself. Growing up in London, he scored low on the U.K. national assessment test given to students finishing primary school. The result meant that his next path was either to become a laborer or attend a vocational school to learn a trade.

What Weston really wanted to do was to work as a radio and TV repairman. He was fascinated by how the devices worked. He had taught himself to build an AM radio when he was 15. Even after showing it to his parents and teachers, though, they still didn’t think he was smart enough to pursue his chosen career, he says.

David A. Weston


Employer

EMC Consulting, in Arnprior, Ont., Canada

Job title

Retired consultant

Member grade

Life member

Alma mater

Croydon Technical College, London


So, later that year, the underweight teen got a job on a construction site carrying heavy loads of building materials in a hod, a three-sided wooden trough. The experience convinced him he wasn’t cut out for manual labor.

He eventually earned a certificate in radio and television, the only credential he holds. The lack of academic degrees did not hold him back, though. He went on to become an expert in electromagnetic interference (EMI) and electromagnetic compatibility (EMC).

EMI is unwanted electromagnetic energy that disrupts the operation of electronic devices. EMC is the ability of electronic devices to work correctly in a shared electromagnetic environment without causing interference to, or suffering interference from, nearby devices or signals.

After working for a number of companies, he launched his own business more than 40 years ago: EMC Consulting, in Arnprior, Ont., Canada. The company has helped clients meet EMI and EMC regulatory requirements.

Now 83 years old and retired, the IEEE life member recently self-published his memoir, From a Hod to an Odd EM Wave.

“My memoir is about engineering persistence and human and technical discoveries,” he says. “I wanted to interest a young person, or perhaps a person later in life, in a career in engineering. If I can show that engineering is a personal, human endeavor with exciting opportunities in different fields such as medical, scientific, and the arts, maybe more women would be attracted to it.”

From repairing radios to designing underwater devices

In 1960 Weston enrolled in the radio and electronics program at London’s Croydon Technical College (now Croydon College). The school covered topics from the City and Guilds of London Institute’s radio and television certificate program. He attended classes one day a week for five years while working to put himself through school.

Although his parents and his teachers might not have recognized Weston’s potential, employers did.

He got his first job in 1960, fixing televisions in a small repair shop. Then he helped repair tape recorders. In his spare time, he studied transistors and semiconductors.

Everything he knows, he says, he learned by reading books and research papers, and from on-the-job training.

Later in 1960, he worked as a mechanical examiner for the U.K. Ministry of Aviation, where he calibrated precision meters and potentiometers, which are variable resistors that monitor, control, and measure industrial equipment.


“Engineering is creative. To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”


He left the ministry in 1963 because he found the work boring, he says, and he was hired as a technician with the Medical Research Council’s neuropsychiatric research unit in Carshalton. The institution researches the biological causes of mental illness. His manager was interested in learning about advances in medical electronics and eagerly shared his knowledge with Weston.

One of Weston’s tasks was to build an electroencephalography (EEG) calibrator to measure responses from a patient’s brain activity. The methods used at the time to detect a brain tumor—before MRI machines were developed—involved monitoring the patient’s speech and coordination, followed by taking a biopsy, which was not without danger, he says.

He used an ultrasonic transmitter and receiver to measure the time of transmission to the midline in the brain to determine whether the person had a tumor. If the midline had shifted, it would indicate the presence of a tumor, and a biopsy would be performed to confirm it. The measure of the evoked response in the brain was the only reliable indicator.
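The principle is simple echo ranging: the round-trip time of an ultrasonic pulse, scaled by the speed of sound in tissue, gives the depth of the reflecting midline, and a left-right asymmetry suggests a shift. The sketch below uses textbook values and hypothetical echo times, not figures from Weston’s instrument.

```python
# Rough illustration of the echo-midline measurement described above.
# The speed of sound in soft tissue (~1540 m/s) and the example echo times
# are textbook/illustrative values, not from Weston's instrument.

SOUND_SPEED_TISSUE_M_S = 1540.0

def echo_depth_mm(round_trip_time_us: float) -> float:
    """Depth of a reflecting structure from round-trip echo time (microseconds)."""
    return SOUND_SPEED_TISSUE_M_S * (round_trip_time_us * 1e-6) / 2.0 * 1e3

# Hypothetical echo times measured from the left and right sides of the skull.
left_us, right_us = 98.0, 92.0
d_left, d_right = echo_depth_mm(left_us), echo_depth_mm(right_us)

print(f"Midline depth from left:  {d_left:.1f} mm")
print(f"Midline depth from right: {d_right:.1f} mm")
print(f"Apparent midline shift:   {(d_left - d_right) / 2:.1f} mm")
```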

Weston earned his radio and TV certificate in 1965, leaving the research facility a year later to join Divcon (now part of Oceaneering International), a commercial diving company based in London that developed deep-sea helium diving helmets. Weston helped design a waterproof handheld communication device for divers that could withstand the high pressure in diving bells, the open-bottom pressurized chambers that transported them underwater.

Weston then moved to Hamburg, Germany, in 1969 to work for Plath, an electronics manufacturer. He was tasked, along with other engineers from England, with designing a servo control loop.

“Unfortunately it oscillated so badly when first being turned on that it shook itself to bits,” he says.

He left to work as a senior engineer at Dr. Staiger Mohilo and Co. (now part of Kistler), in Schorndorf, Germany. It manufactured torque sensors, force transducers, and specialized test stand systems. Weston designed a process control computer. He says his boss told him that the controller had to work in close proximity to—and from the same power source as—a nearby machine without interfering with that machine or suffering interference from it.

“I was thus introduced to the idea of electromagnetic compatibility,” he says.

After three years, he left to join the Siemens Mobility train group in Braunschweig, Germany, where he helped develop an electronic train-crossing light controller. The original warning lights on crossing gates used a mercury tube as a switch.

“The concern was the danger to personnel if the tube broke,” he says. “The simple and inexpensive solution was to put the tube in a metal container.”

Weston and his wife decided to leave Germany for Canada in 1975, after their young son began forgetting how to speak English.

Working on the space shuttle and a particle accelerator

His first job in the country was as an engineer for Canadian Aviation Electronics (CAE) in Montreal. CAE helped design the robotic hand controllers for the space shuttle’s remote manipulator system, as well as the simulation systems used to train its astronauts.

The robotic arm, known as Canadarm, was used to deploy, maneuver, and capture payloads for the astronauts. Weston’s engineering team designed the display and control panel as well as the hand controllers located in the shuttle’s flight deck.

“I was attracted to the EMC aspects of the project and avidly studied everything I could on the topic,” he says.

He also helped develop a system that would protect an aircraft’s deployable black box from lightning strikes.

“I used a computer program to analyze the EMI field at close proximity to the black box to predict the lightning current flowing into the aircraft structure,” he says.

While enjoying the warm winter weather during a 1975 visit to a supplier on Long Island, N.Y., he decided he wanted to move his family there and asked whether any companies in the area were hiring. He was told that Brookhaven National Laboratory, in Upton, was, so he applied for a position working on the ring system for the Isabelle proton colliding-beam particle accelerator.

The project, later known as the Colliding Beam Accelerator, was a collaboration between the lab and the U.S. Department of Energy. The proton-proton collider was designed to use advanced superconducting magnets cooled by a massive helium refrigeration system to produce high-energy collisions, with each of the two colliding beams carrying 200 gigaelectron volts (GeV) of energy.

Weston’s Advice for Budding Engineers


  • Follow the field in which you are most interested.
  • Don’t be afraid to work in other countries; it can be a rewarding, enriching experience.
  • Question the results of measurements or analyses. If it doesn’t seem right, it probably isn’t. Look at a similar publication on the same topic for a good correlation.
  • Don’t be too shy to ask simple questions. That’s how we learn and grow.
  • Keep an open mind.

The lab hired him in 1978, and the family moved to Long Island. After a few weeks of meeting with different departments, his boss asked him what kind of work he wanted to do. Weston told him about his idea for designing a device to detect a helium leak, should there ever be one. His machine would monitor the ring’s entire 3,834-meter circumference.

“The danger with increased helium-enriched air is that the oxygen level reduces until the person breathing becomes adversely affected,” he wrote in his memoir. “I found that the speed of the sound of helium increased enough to be detected, but not sufficient enough to cause a person trouble if they were in the tunnel.

“Brookhaven was considering machines that only covered a small area of the ring, but these would be unrealistic because too many machines would be needed, and the cost would have been astronomical.”

Weston’s system included an ultrasonic transmitter, a receiver, a power amplifier, and a preamplifier. It would sound an alarm if the helium content went above a certain level. People in the tunnel would be directed to go to the nearest oxygen-breathing equipment, put on a mask, and immediately evacuate. It was successfully tested.
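The physics the detector relied on is straightforward: sound travels much faster in helium than in air, so even a few percent of helium measurably raises the speed of sound in the tunnel atmosphere. A rough sketch of that relationship, using standard ideal-gas properties and an assumed temperature, is below.

```python
import math

# Sketch of the physics behind the helium leak detector: the speed of sound
# rises as the helium fraction in air increases, so an ultrasonic transit-time
# measurement across the tunnel can flag a leak. Gas properties are standard
# ideal-gas values; the temperature and fractions are illustrative.

R = 8.314       # J/(mol*K), universal gas constant
T = 293.0       # K, assumed tunnel temperature

def sound_speed(helium_fraction: float) -> float:
    """Speed of sound (m/s) in an ideal helium-air mixture by mole fraction."""
    x = helium_fraction
    # Molar heat capacities: helium is monatomic, air is (mostly) diatomic.
    cp = x * 2.5 * R + (1 - x) * 3.5 * R
    cv = x * 1.5 * R + (1 - x) * 2.5 * R
    molar_mass = x * 0.004003 + (1 - x) * 0.02897   # kg/mol
    return math.sqrt((cp / cv) * R * T / molar_mass)

for frac in (0.0, 0.05, 0.10, 0.20):
    print(f"{frac * 100:4.0f}% helium -> {sound_speed(frac):6.1f} m/s")
```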

Weston wrote a report detailing the ultrasonic helium leak detector, but shortly after, he and his wife had to return to Canada in 1978 because they were unable to get additional work permits in the United States.

When he returned to Brookhaven for a visit, his former boss told him the report was well-received. And he shared some news that upset Weston.

“My boss told me he took my report, changed the name on the report to his, did not mention me, and published the report as his,” Weston wrote in his memoir.

But the system was never built. The Isabelle project was canceled in July 1983 due to technical problems with fabricating the superconducting magnets.

Weston got a job working for CAL Corp., an aerospace telecommunications company in Montreal. For the next 14 years, he fixed EMI problems for the company’s products, including its charge-coupled device-based space-qualified cameras, which were designed to be carried aboard a satellite.

In 1992 he realized that nearly all his work involved consulting for the company’s customers, so he decided to start his own agency. CAL generously let him take the clients he worked with, he says.

Weston then conducted EMI analysis and testing and designed EMC systems for companies around the world.

“I always had enough customers and have never had to look for work,” he says. “For me, having my own business was more secure than working for a company.”

He retired in 2022.

IEEE as an educator

To broaden his education, he joined IEEE in 1976 to get access to its research papers and attend its conferences, he says. He is a member of the IEEE Electromagnetic Compatibility Society.

Because he is self-educated, he was “keen to learn as much as possible by reading practical papers published by IEEE,” he says. “I met people at IEEE symposiums and listened to the authors presenting their papers.”

Those included EMC experts such as Life Fellows Lothar O. “Bud” Hoeft, Richard J. Mohr, and Clayton R. Paul, whose papers are published in the IEEE Xplore Digital Library. Several of Weston’s papers are in the library as well.

His book Electromagnetic Compatibility: Methods, Analysis, Circuits, and Measurement references many IEEE papers on data and analysis methods.

“Engineering is creative,” he says. “To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”