2026-02-06 00:00:02

While quantum computers continue to slowly grind towards usefulness, some are pursuing a different approach—analog quantum simulation. This path doesn’t offer complete control of single bits of quantum information, known as qubits—it is not a universal quantum computer. Instead, quantum simulators directly mimic complex, difficult-to-access things, like individual molecules, chemical reactions, or novel materials. What analog quantum simulation lacks in flexibility, it makes up for in feasibility: quantum simulators are ready now.
“Instead of using qubits, as you would typically in a quantum computer, we just directly encode the problem into the geometry and structure of the array itself,” says Sam Gorman, quantum systems engineering lead at Sydney-based start-up Silicon Quantum Computing.
Yesterday, Silicon Quantum Computing unveiled its Quantum Twins product, a silicon quantum simulator, which is now available to customers through direct contract. Simultaneously, the team demonstrated that their device, made up of 15,000 quantum dots, can simulate an often-studied transition of a material from an insulator to a metal, and all the states in between. They published their work this week in the journal Nature.
“We can do things now that we think nobody else in the world can do,” Gorman says.
Though the product announcement came yesterday, the team at Silicon Quantum Computing developed its Precision Atom Qubit Manufacturing process shortly after the startup was founded in 2017, building on academic work that the company’s founder, Michelle Simmons, has led for over 25 years. The underlying technology is a manufacturing process for placing single phosphorus atoms in silicon with sub-nanometer precision.
“We have a 38-stage process,” Simmons says, for patterning phosphorus atoms into silicon. The process starts with a silicon substrate, which is coated with a layer of hydrogen. Then, using a scanning-tunneling microscope, individual hydrogen atoms are knocked off the surface, exposing the silicon underneath. The surface is then dosed with phosphine gas, which adsorbs only in places where the silicon is exposed. With the help of a low-temperature thermal anneal, the phosphorus atoms are then incorporated into the silicon crystal. Finally, layers of silicon are grown on top.
“It’s done in ultra-high vacuum. So it’s a very pure, very clean system,” Simmons says. “It’s a fully monolithic chip that we make with that sub-nanometer precision. In 2014, we figured out how to make markers in the chip so that we can then come back and find where we put the atoms within the device to make contacts. Those contacts are then made at the same length scale as the atoms and dots.”
Though the team is able to place single atoms of phosphorus, they use clusters of ten to fifty such atoms to make up a so-called register for these application-specific chips. These registers act like quantum dots, preserving quantum properties of the individual atoms. The registers are controlled by a gate voltage from contacts placed atop the chip, and interactions between registers can be tuned by precisely controlling the distances between them.
While the company is also pursuing more traditional quantum computing using this technology, they realized they already had the capacity to do useful simulations in the analog domain by putting thousands of registers on a single chip and measuring global properties, without controlling individual qubits.
“The thing that’s quite unique is we can do that very quickly,” Simmons says. “We put 250,000 of these registers [on a chip] in eight hours, and we can turn a chip design around in a week.”
Back in 2022, the team at Silicon Quantum Computing used a previous version of this same technology to simulate a molecule of polyacetylene. The chemical is made up of carbon atoms with alternating single and double bonds, and, crucially, its conductivity changes drastically depending on whether the chain is cut on a single or double bond. In order to accurately simulate single and double carbon bonds, the team had to control the distances of their registers to sub-nanometer precision. By tuning the gate voltages of each quantum dot, the researchers reproduced the jump in conductivity.
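The device itself is far richer than any toy model, but a textbook tight-binding picture of polyacetylene, the Su-Schrieffer-Heeger model, captures why the position of the cut matters. The sketch below is only that textbook illustration, with made-up hopping strengths standing in for double and single bonds; it is not Silicon Quantum Computing’s actual model.

```python
# Minimal SSH-style sketch (illustrative only): a chain of sites with alternating
# strong ("double") and weak ("single") bonds. Hopping values are made up.
import numpy as np

def chain(n_sites, t_double=1.0, t_single=0.6, ends_on_single=True):
    """Open tight-binding chain; ends_on_single=True means the chain was cut
    through double bonds, leaving weak single bonds at both ends."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        is_single = (i % 2 == 0) == ends_on_single
        t = t_single if is_single else t_double
        H[i, i + 1] = H[i + 1, i] = -t
    return H

for ends_on_single, label in [(False, "cut on single bonds"),
                              (True, "cut on double bonds")]:
    energies = np.linalg.eigvalsh(chain(10, ends_on_single=ends_on_single))
    # States near zero energy are mid-gap edge states that let the chain conduct;
    # their presence or absence is the conductivity jump described above.
    print(f"{label}: smallest |E| = {np.abs(energies).min():.3f}")
```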
Now, they’ve demonstrated the quantum twin technology on a much larger problem—the metal-insulator transition of a two-dimensional material. Where the polyacetylene molecule required ten registers, the new model used 15,000. The metal-insulator model is important because, in most cases, it cannot be simulated on a classical computer. At the extremes—in the fully metal or fully insulating phase—the physics can be simplified and made accessible to classical computing. But in the murky intermediate regime, the full quantum complexity of each electron plays a role, and the problem is classically intractable. “That is the part which is challenging for classical computing. But we can actually put our system into this regime quite easily,” Gorman says.
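A back-of-the-envelope count shows why the intermediate regime defeats classical machines. Assuming each lattice site can be empty, spin-up, spin-down, or doubly occupied, as in a Hubbard-type description (our simplifying assumption, not a statement of the team’s exact model), the number of basis states grows as 4 to the power of the number of sites:

```python
# Rough state counting only: 4 possible occupations per site implies 4^N basis states.
from math import log10

for n_sites in (10, 100, 15_000):
    print(f"{n_sites:>6} sites -> ~10^{n_sites * log10(4):.0f} basis states")
# 10 sites is trivial, 100 already dwarfs any classical memory,
# and 15,000 sites is hopelessly beyond brute-force simulation.
```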
The metal-insulator model was a proof of concept. Now, Gorman says, the team can design a quantum twin for almost any two-dimensional problem.
“Now that we’ve demonstrated that the device is behaving as we predict, we’re looking at high-impact issues or outstanding problems,” says Gorman. The team plans to investigate things like unconventional superconductivity, the origins of magnetism, and materials interfaces such as those that occur in batteries.
Although the initial applications will most likely be in the scientific domain, Simmons is hopeful that Quantum Twins will eventually be useful for industrial applications such as drug discovery. “If you look at different drugs, they’re actually very similar to polyacetylene. They’re carbon chains, and they have functional groups. So, understanding how to map it [onto our simulator] is a unique challenge. But that’s definitely an area we’re going to focus on,” she says. “We’re excited at the potential possibilities.”
2026-02-05 03:00:02

MVK Chari, a pioneer in finite element field computation, died on 3 December. The IEEE Life Fellow was 97.
Chari developed a finite element method (FEM) for analyzing nonlinear electromagnetic fields—which is crucial for the design of electric machines. The technique is used to obtain approximate solutions to complex engineering and mathematical problems. It involves dividing a complicated object or system into smaller, more manageable parts, known as finite elements, according to Fictiv.
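As a concrete, if greatly simplified, illustration of that recipe, the sketch below solves a one-dimensional toy problem with linear finite elements: split the domain into small elements, assemble a stiffness matrix and load vector, and solve the resulting linear system. It is meant only to show the mechanics of the method, not the nonlinear electromagnetic formulations Chari developed.

```python
# Toy 1D finite element example: solve -u''(x) = 1 on [0, 1] with u(0) = u(1) = 0
# using piecewise-linear elements on a uniform mesh.
import numpy as np

n_elements = 8
n_nodes = n_elements + 1
h = 1.0 / n_elements                       # element length

K = np.zeros((n_nodes, n_nodes))           # global stiffness matrix
f = np.zeros(n_nodes)                      # global load vector
for e in range(n_elements):                # assemble element by element
    i, j = e, e + 1
    K[np.ix_([i, j], [i, j])] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f[[i, j]] += h / 2.0                   # uniform unit source, shared by the two nodes

interior = slice(1, -1)                    # impose u(0) = u(1) = 0
u = np.zeros(n_nodes)
u[interior] = np.linalg.solve(K[interior, interior], f[interior])

x = np.linspace(0.0, 1.0, n_nodes)
print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))   # agrees with the exact solution x(1-x)/2
```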
As an engineer and technical leader at General Electric in Niskayuna, N.Y., Chari used the tool for end-region analysis of large turbogenerators, starting with 2D models and expanding over time to quasi-2D and 3D.
During his 25 years at GE, he established a team that was developing finite element analysis (FEA) tools for a variety of applications across the company. They ranged from small motors to large MRI magnets.
Chari received the 1993 IEEE Nikola Tesla Award for “pioneering contributions to finite element computations of nonlinear electromagnetic fields for design and analysis of electric machinery.”
Chari attended Imperial College London to pursue a master’s degree in electrical engineering. There he met Peter P. Silvester, a visiting professor of electrical engineering. Silvester, a professor at McGill University in Montreal, was a pioneer in the numerical analysis of electromagnetic fields.
After Chari graduated in 1968, he joined Silvester at McGill as a doctoral student, applying FEM to solve electromagnetic field problems. Silvester applied the method to waveguides, while Chari applied it to saturated magnetic fields.
Chari joined GE in 1970 after earning his Ph.D. in electrical engineering. He climbed the leadership ladder and was a manager of the company’s electromagnetics division when he left in 1995. He joined Rensselaer Polytechnic Institute in Troy, N.Y., as a visiting research and adjunct professor in its electrical, computer, and systems engineering department. Chari taught graduate and undergraduate classes in electric power engineering and mentored many master’s and doctoral students. His strength was nurturing young engineers.
He also conducted research on electric machines and transformers for the Electric Power Research Institute and the U.S. Department of Energy.
In 2008 Chari joined Magsoft Corp., in Clifton Park, N.Y., and conducted advanced work on specialized software for the U.S. Navy until his retirement in 2016.
Chari successfully nominated one of us (Hoole) for elevation to IEEE Fellow at the age of 40. He also helped launch Haran’s career by sending his résumé to GE hiring managers for a position in the company’s applied superconductivity lab.
Chari’s commitment to people came from his family background. His father—M.A. Ayyangar—was known throughout India as a freedom fighter, mathematician, and eventually the speaker of the Indian Parliament’s lower house under Prime Minister Nehru. Chari’s wife, Padma, was a physician in New York.
Coming from such an illustrious family, Chari was at the peak of South Indian (Tamil) society.
Chari would fondly and cheerfully tell us the story behind his name. Around the time of his birth, it was common in Tamil society not to have formal names. He went by the informal “house name” Kannah (a term of endearment for Krishna). When it was time for Chari to start school, an auspicious uncle enrolled him. But Chari had no formal name, so the uncle took it upon himself to give him one. He asked Chari if he would like a long or short name, to which he said long. So the uncle named him Madabushi Venkadamachari.
When Chari moved to North America, he shortened his name to Madabushi V.K.
He could also laugh at himself.
A stellar scientist, he also was a role model, guide, and friend to many of us. We thank God for him.
2026-02-05 01:03:54

From 6 to 22 February, the 2026 Winter Olympics in Milan-Cortina d’Ampezzo, Italy, will feature not just the world’s top winter athletes but also some of the most advanced sports technologies in use today. At the first Cortina Olympics in 1956, the Swiss company Omega—based in Biel/Bienne—introduced electronic ski starting gates and launched the first automated timing tech of its kind.
At this year’s Olympics, Swiss Timing, sister company to Omega under the parent Swatch Group, unveils a new generation of motion analysis and computer vision technology. The new technologies on offer include photofinish cameras that capture up to 40,000 images per second.
“We work very closely with athletes,” says Swiss Timing CEO Alain Zobrist, who has overseen Olympic timekeeping since the winter games of 2006 in Torino. “They are the primary customers of our technology and services, and they need to understand how our systems work in order to trust them.”
Milan-Cortina Olympic officials expect new figure skating tech, which uses high-resolution cameras and AI algorithms tuned to skaters’ routines, to be a key highlight of the games. Omega
Figure skating, the Winter Olympics’ biggest TV draw, is receiving a substantial upgrade at Milano Cortina 2026.
Fourteen 8K-resolution cameras positioned around the rink will capture every skater’s movement. “We use proprietary software to interpret the images and visualize athlete movement in a 3D model,” says Zobrist. “AI processes the data so we can track trajectory, position, and movement across all three axes—X, Y, and Z.”
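Swiss Timing has not published its algorithms, but the geometric core of any multi-camera 3D tracker is triangulation: given calibrated cameras and the same body point located in two or more images, recover its X, Y, and Z coordinates. The sketch below uses two toy camera matrices purely for illustration; it is not Swiss Timing’s code.

```python
# Linear triangulation (direct linear transform) with two toy cameras.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover the 3D point seen at pixel uv1 in camera P1 and uv2 in camera P2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                                     # drop the homogeneous scale

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera shifted 1 m in X

point = np.array([0.3, 1.2, 4.0, 1.0])                      # e.g., a skater's hip, in meters
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]                    # project into each image
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))                        # recovers [0.3, 1.2, 4.0]
```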
The system measures jump heights, air times, and landing speeds in real time, producing heat maps and graphic overlays that break down each program—all instantaneously. “The time it takes for us to measure the data, until we show a matrix on TV with a graphic, this whole chain needs to take less than 1/10 of a second,” Zobrist says.
A range of different AI models helps the broadcasters and commentators process each skater’s every move on the ice.
“There is an AI that helps our computer vision system do pose estimation,” he says. “So we have a camera that is filming what is happening, and an AI that helps the camera understand what it’s looking at. And then there is a second type of AI, which is more similar to a large language model that makes sense of the data that we collect.”
Among the features Swiss Timing’s new systems provide is blade-angle detection, which gives judges precise data to augment their technical and aesthetic decisions. Zobrist says future versions will also determine whether a given rotation is complete: “If the rotation is 355 degrees, there is going to be a deduction.”
This builds on technology Omega unveiled at the 2024 Paris Olympics for diving, where cameras measured distances between a diver’s head and the board to help judges assess points and penalties to be awarded.
At the 2026 Winter Olympics, ski jumping will feature both camera-based and sensor-based technologies to make the aerial experience more immediate and real-time. Omega
Unlike figure skating’s camera-based approach, ski jumping also relies on physical sensors.
“In ski jumping, we use a small, lightweight sensor attached to each ski, one sensor per ski, not on the athlete’s body,” Zobrist says. The sensors broadcast data on a skier’s speed, acceleration, and positioning in the air. The technology also correlates performance data with wind conditions, revealing how environmental factors influence each jump.
High-speed cameras also track each ski jumper. Then, a stroboscopic camera provides body position time-lapses throughout the jump.
“The first 20 to 30 meters after takeoff are crucial as athletes move into a V position and lean forward,” Zobrist says. “And both the timing and precision of this movement strongly influence performance.”
The system reveals biomechanical characteristics in real time, he adds, showing how athletes position their bodies during every moment of the takeoff process. The most common mistake in flight position, over-rotation or under-rotation, can now be detailed and diagnosed with precision on every jump.
This year’s Olympics will also feature a “virtual photo finish,” providing comparison images showing when different sleds crossed the finish line across different runs.
Omega’s cameras will provide virtual photo finishes at the 2026 Winter Olympics. Omega
“We virtually build a photo finish that shows different sleds from different runs on a single visual reference,” says Zobrist.
After each run, composite images show the margins separating performances. However, more tried-and-true technology still generates official results. A Swiss Timing score, he says, still comes courtesy of photoelectric cells, devices that emit light beams across the finish line and stop the clock when broken. The company offers its virtual photo finish, by contrast, as a visualization tool for spectators and commentators.
In bobsleigh, as in every timed Winter Olympic event, the line between triumph and heartbreak is sometimes measured in milliseconds or less. Such precision, Zobrist says, will come from Omega’s Quantum Timer.
“We can measure time to the millionth of a second, so 6 digits after the comma, with a deviation of about 23 nanoseconds over 24 hours,” Zobrist explained. “These devices are constantly calibrated and used across all timed sports.”
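Taken together, those figures imply a strikingly small fractional error. The quick arithmetic below is ours, not Omega’s:

```python
# 23 nanoseconds of drift accumulated over 24 hours, as a fraction of the day.
drift_s = 23e-9
day_s = 24 * 3600
print(f"fractional timing error ≈ {drift_s / day_s:.1e}")   # ~2.7e-13
# At that rate the timer would drift by less than a millisecond per century.
```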
2026-02-03 23:58:27

This paper discusses how RF propagation simulations empower engineers to test numerous real-world use cases in far less time, and at lower costs, than in situ testing alone. Learn how simulations provide a powerful visual aid and offer valuable insights to improve the performance and design of body-worn wireless devices.
2026-02-03 22:00:02

In 1930, a young physicist named Carl D. Anderson was tasked by his mentor with measuring the energies of cosmic rays—particles arriving at high speed from outer space. Anderson built an improved version of a cloud chamber, a device that visually records the trajectories of particles. In 1932, he saw evidence that confusingly combined the properties of protons and electrons. “A situation began to develop that had its awkward aspects,” he wrote many years after winning a Nobel Prize at the age of 31. Anderson had accidentally discovered antimatter.
Four years after his first discovery, he codiscovered another elementary particle, the muon. This one prompted one physicist to ask, “Who ordered that?”

Carl Anderson [top] sits beside the magnet cloud chamber he used to discover the positron. His cloud-chamber photograph [bottom] from 1932 shows the curved track of a positron, the first known antimatter particle. Caltech Archives & Special Collections
Over the decades since then, particle physicists have built increasingly sophisticated instruments of exploration. At the apex of these physics-finding machines sits the Large Hadron Collider, which in 2022 started its third operational run. This underground ring, 27 kilometers in circumference and straddling the border between France and Switzerland, was built to slam subatomic particles together at near light speed and test deep theories of the universe. Physicists from around the world turn to the LHC, hoping to find something new. They’re not sure what, but they hope to find it.
It’s the latest manifestation of a rich tradition. Throughout the history of science, new instruments have prompted hunts for the unexpected. Galileo Galilei built telescopes and found Jupiter’s moons. Antonie van Leeuwenhoek built microscopes and noticed “animalcules, very prettily a-moving.” And still today, people peer through lenses and pore through data in search of patterns they hadn’t hypothesized. Nature’s secrets don’t always come with spoilers, and so we gaze into the unknown, ready for anything.
But novel, fundamental aspects of the universe are growing less forthcoming. In a sense, we’ve plucked the lowest-hanging fruit. We know to a good approximation what the building blocks of matter are. The Standard Model of particle physics, which describes the currently known elementary particles, has been in place since the 1970s. Nature can still surprise us, but it typically requires larger or finer instruments, more detailed or expansive data, and faster or more flexible analysis tools.
Those analysis tools include a form of artificial intelligence (AI) called machine learning. Researchers train complex statistical models to find patterns in their data, patterns too subtle for human eyes to see, or too rare for a single human to encounter. At the LHC, which smashes together protons to create immense bursts of energy that decay into other short-lived particles of matter, a theorist might predict some new particle or interaction and describe what its signature would look like in the LHC data, often using a simulation to create synthetic data. Experimentalists would then collect petabytes of measurements and run a machine learning algorithm that compares them with the simulated data, looking for a match. Usually, they come up empty. But maybe new algorithms can peer into corners they haven’t considered.
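In code, that supervised workflow reduces to training a classifier on simulated signal and background events and then scoring measured events by how signal-like they look. The toy below uses made-up Gaussian features and an off-the-shelf classifier; real LHC analyses are vastly more elaborate.

```python
# Toy supervised search: train on simulation, score "real" events (all data synthetic).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 10_000
background = rng.normal(0.0, 1.0, size=(n, 4))   # simulated Standard Model events
signal = rng.normal(0.6, 1.0, size=(n, 4))       # simulated hypothetical new particle

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])
clf = GradientBoostingClassifier().fit(X, y)

measured = rng.normal(0.0, 1.0, size=(1000, 4))  # stand-in for real detector data
scores = clf.predict_proba(measured)[:, 1]       # how signal-like each event looks
print(f"events with signal score above 0.9: {(scores > 0.9).sum()}")
```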
“You’ve heard probably that there’s a crisis in particle physics,” says Tilman Plehn, a theoretical physicist at Heidelberg University, in Germany. At the LHC and other high-energy physics facilities around the world, the experimental results have failed to yield insights on new physics. “We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t,” Plehn says.

“We have a lot of unhappy theorists who thought that their model would have been discovered, and it wasn’t.”
Gregor Kasieczka, a physicist at the University of Hamburg, in Germany, recalls the field’s enthusiasm when the LHC began running in 2008. Back then, he was a young graduate student and expected to see signs of supersymmetry, a theory predicting heavier versions of the known matter particles. The presumption was that “we turn on the LHC, and supersymmetry will jump in your face, and we’ll discover it in the first year or so,” he tells me. Eighteen years later, supersymmetry remains in the theoretical realm. “I think this level of exuberant optimism has somewhat gone.”
The result, Plehn says, is that models for all kinds of things have fallen in the face of data. “And I think we’re going on a different path now.”
That path involves a kind of machine learning called unsupervised learning. In unsupervised learning, you don’t teach the AI to recognize your specific prediction—signs of a particle with this mass and this charge. Instead, you might teach it to find anything out of the ordinary, anything interesting—which could indicate brand new physics. It’s the equivalent of looking with fresh eyes at a starry sky or a slide of pond scum. The problem is, how do you automate the search for something “interesting”?
The Standard Model leaves many questions unanswered. Why do matter particles have the masses they do? Why do neutrinos have mass at all? Where is the particle for transmitting gravity, to match those for the other forces? Why do we see more matter than antimatter? Are there extra dimensions? What is dark matter—the invisible stuff that makes up most of the universe’s matter and that we assume to exist because of its gravitational effect on galaxies? Answering any of these questions could open the door to new physics, or fundamental discoveries beyond the Standard Model.
“Personally, I’m excited for portal models of dark sectors,” Kasieczka says, as if reading from a Marvel film script. He asks me to imagine a mirror copy of the Standard Model out there somewhere, sharing only one “portal” particle with the Standard Model we know and love. It’s as if this portal particle has a second secret family.
Kasieczka says that in the LHC’s third run, scientists are splitting their efforts roughly evenly between measuring more precisely what they know to exist and looking for what they don’t know to exist. In some cases, the former could enable the latter. The Standard Model predicts certain particle properties and the relationships between them. For example, it correctly predicted a property of the electron called the magnetic moment to about one part in a trillion. And precise measurements could turn up internal inconsistencies. “Then theorists can say, ‘Oh, if I introduce this new particle, it fixes this specific problem that you guys found. And this is how you look for this particle,’” Kasieczka says.
What’s more, the Standard Model has occasionally shown signs of cracks. Certain particles containing bottom quarks, for example, seem to decay into other particles in unexpected ratios. Plehn finds the bottom-quark incongruities intriguing. “Year after year, I feel they should go away, and they don’t. And nobody has a good explanation,” he says. “I wouldn’t even know who I would shout at”—the theorists or the experimentalists—“like, ‘Sort it out!’”
Exasperation isn’t exactly the right word for Plehn’s feelings, however. Physicists feel gratified when measurements reasonably agree with expectations, he says. “But I think deep down inside, we always hope that it looks unreasonable. Everybody always looks for the anomalous stuff. Everybody wants to see the standard explanation fail. First, it’s fame”—a chance for a Nobel—“but it’s also an intellectual challenge, right? You get excited when things don’t work in science.”
Now imagine you had a machine to find all the times things don’t work in science, to uncover all the anomalous stuff. That’s how researchers are using unsupervised learning. One day over ice cream, Plehn and a friend who works at the software company SAP began discussing autoencoders, one type of unsupervised learning algorithm. “He tells me that autoencoders are what they use in industry to see if a network was hacked,” Plehn remembers. “You have, say, a hundred computers, and they have network traffic. If the network traffic [to one computer] changes all of a sudden, the computer has been hacked, and they take it offline.”
Autoencoders are neural networks that start with an input—it could be an image of a cat, or the record of a computer’s network traffic—and compress it, like making a tiny JPEG or MP3 file, and then decompress it. Engineers train them to compress and decompress data so that the output matches the input as closely as possible. Eventually a network becomes very good at that task. But if the data includes some items that are relatively rare—such as white tigers, or hacked computers’ traffic—the network performs worse on these, because it has less practice with them. The difference between an input and its reconstruction therefore signals how anomalous that input is.
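A minimal sketch of that idea, with synthetic event features and a generic network rather than any collaboration’s actual architecture, looks like this: train the autoencoder only on ordinary events, then use each event’s reconstruction error as its anomaly score.

```python
# Autoencoder anomaly detection, sketched with synthetic data (PyTorch).
import torch
from torch import nn

torch.manual_seed(0)
n_features = 16
model = nn.Sequential(                        # encoder: squeeze 16 features down to 3 ...
    nn.Linear(n_features, 8), nn.ReLU(),
    nn.Linear(8, 3), nn.ReLU(),
    nn.Linear(3, 8), nn.ReLU(),               # ... decoder: expand back to 16
    nn.Linear(8, n_features),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

normal_events = torch.randn(5000, n_features)             # stand-in for ordinary collisions
for _ in range(200):                                       # train to reconstruct normal data
    loss = nn.functional.mse_loss(model(normal_events), normal_events)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def anomaly_score(events):
    """Per-event reconstruction error: larger means less like the training data."""
    with torch.no_grad():
        return ((model(events) - events) ** 2).mean(dim=1)

odd_events = torch.randn(5, n_features) + 3.0              # events shifted away from normal
print(anomaly_score(normal_events[:5]))                    # small errors
print(anomaly_score(odd_events))                           # noticeably larger errors
```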
“This friend of mine said, ‘You can use exactly our software, right?’” Plehn remembers. “‘It’s exactly the same question. Replace computers with particles.’” The two imagined feeding the autoencoder signatures of particles from a collider and asking: Are any of these particles not like the others? Plehn continues: “And then we wrote up a joint grant proposal.”
It’s not a given that AI will find new physics. Even learning what counts as interesting is a daunting hurdle. Beginning in the 1800s, men in lab coats delegated data processing to women, whom they saw as diligent and detail oriented. Women annotated photos of stars, and they acted as “computers.” In the 1950s, women were trained to scan bubble chambers, which recorded particle trajectories as lines of tiny bubbles in fluid. Physicists didn’t explain to them the theory behind the events, only what to look for based on lists of rules.
But, as the Harvard science historian Peter Galison writes in Image and Logic: A Material Culture of Physics, his influential account of how physicists’ tools shape their discoveries, the task was “subtle, difficult, and anything but routinized,” requiring “three-dimensional visual intuition.” He goes on: “Even within a single experiment, judgment was required—this was not an algorithmic activity, an assembly line procedure in which action could be specified fully by rules.”

“We are not looking for flying elephants but instead a few extra elephants than usual at the local watering hole.”
Over the last decade, though, one thing we’ve learned is that AI systems can, in fact, perform tasks once thought to require human intuition, such as mastering the ancient board game Go. So researchers have been testing AI’s intuition in physics. In 2019, Kasieczka and his collaborators announced the LHC Olympics 2020, a contest in which participants submitted algorithms to find anomalous events in three sets of (simulated) LHC data. Some teams correctly found the anomalous signal in one dataset, but some falsely reported one in the second set, and they all missed it in the third. In 2020, a research collective called Dark Machines announced a similar competition, which drew more than 1,000 submissions of machine learning models. Decisions about how to score them led to different rankings, showing that there’s no best way to explore the unknown.
Another way to test unsupervised learning is to play revisionist history. In 1995, a particle dubbed the top quark turned up at the Tevatron, a particle accelerator at the Fermi National Accelerator Laboratory (Fermilab), in Illinois. But what if it actually hadn’t? Researchers applied unsupervised learning to LHC data collected in 2012, pretending they knew almost nothing about the top quark. Sure enough, the AI revealed a set of anomalous events that were clustered together. Combined with a bit of human intuition, they pointed toward something like the top quark.

“An algorithm that can recognize any kind of disturbance would be a win.”
That exercise underlines the fact that unsupervised learning can’t replace physicists just yet. “If your anomaly detector detects some kind of feature, how do you get from that statement to something like a physics interpretation?” Kasieczka says. “The anomaly search is more a scouting-like strategy to get you to look into the right corner.” Georgia Karagiorgi, a physicist at Columbia University, agrees. “Once you find something unexpected, you can’t just call it quits and be like, ‘Oh, I discovered something,’” she says. “You have to come up with a model and then test it.”
Kyle Cranmer, a physicist and data scientist at the University of Wisconsin-Madison who played a key role in the discovery of the Higgs boson particle in 2012, also says that human expertise can’t be dismissed. “There’s an infinite number of ways the data can look different from what you expected,” he says, “and most of them aren’t interesting.” Physicists might be able to recognize whether a deviation suggests some plausible new physical phenomenon, rather than just noise. “But how you try to codify that and make it explicit in some algorithm is much less straightforward,” Cranmer says. Ideally, the guidelines would be general enough to exclude the unimaginable without eliminating the merely unimagined. “That’s gonna be your Goldilocks situation.”
In his 1987 book How Experiments End, Harvard’s Galison writes that scientific instruments can “import assumptions built into the apparatus itself.” He tells me about a 1973 experiment that looked for a phenomenon called neutral currents, signaled by an absence of a so-called heavy electron (later renamed the muon). One team initially used a trigger left over from previous experiments, which recorded events only if they produced those heavy electrons—even though neutral currents, by definition, produce none. As a result, for some time the researchers missed the phenomenon and wrongly concluded that it didn’t exist. Galison says that the physicists’ design choice “allowed the discovery of [only] one thing, and it blinded the next generation of people to this new discovery. And that is always a risk when you’re being selective.”
I ask Galison if by automating the search for interesting events, we’re letting the AI take over the science. He rephrases the question: “Have we handed over the keys to the car of science to the machines?” One way to alleviate such concerns, he tells me, is to generate test data to see if an algorithm behaves as expected—as in the LHC Olympics. “Before you take a camera out and photograph the Loch Ness Monster, you want to make sure that it can reproduce a wide variety of colors” and patterns accurately, he says, so you can rely on it to capture whatever comes.
Galison, who is also a physicist, works on the Event Horizon Telescope, which images black holes. For that project, he remembers putting up utterly unexpected test images like Frosty the Snowman so that scientists could probe the system’s general ability to catch something new. “The danger is that you’ve missed out on some crucial test,” he says, “and that the object you’re going to be photographing is so different from your test patterns that you’re unprepared.”
The algorithms that physicists are using to seek new physics are certainly vulnerable to this danger. It helps that unsupervised learning is already being used in many applications. In industry, it’s surfacing anomalous credit-card transactions and hacked networks. In science, it’s identifying earthquake precursors, genome locations where proteins bind, and merging galaxies.
But one difference with particle-physics data is that the anomalies may not be stand-alone objects or events. You’re looking not just for a needle in a haystack; you’re also looking for subtle irregularities in the haystack itself. Maybe a stack contains a few more short stems than you’d expect. Or a pattern reveals itself only when you simultaneously look at the size, shape, color, and texture of stems. Such a pattern might suggest an unacknowledged substance in the soil. In accelerator data, subtle patterns might suggest a hidden force. As Kasieczka and his colleagues write in one paper, “We are not looking for flying elephants, but instead a few extra elephants than usual at the local watering hole.”
Even algorithms that weigh many factors can miss signals—and they can also see spurious ones. The stakes of mistakenly claiming discovery are high. Going back to the hacking scenario, Plehn says, a company might ultimately determine that its network wasn’t hacked; it was just a new employee. The algorithm’s false positive causes little damage. “Whereas if you stand there and get the Nobel Prize, and a year later people say, ‘Well, it was a fluke,’ people would make fun of you for the rest of your life,” he says. In particle physics, he adds, you run the risk of spotting patterns purely by chance in big data, or as a result of malfunctioning equipment.
False alarms have happened before. In 1976, a group at Fermilab led by Leon Lederman, who later won a Nobel for other work, announced the discovery of a particle they tentatively called the Upsilon. The researchers calculated the probability of the signal’s happening by chance as 1 in 50. After further data collection, though, they walked back the discovery, calling the pseudo-particle the Oops-Leon. (Today, particle physicists wait until the chance that a finding is a fluke drops below 1 in 3.5 million, the so-called five-sigma criterion.) And in 2011, researchers at the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment, in Italy, announced evidence for faster-than-light travel of neutrinos. Then, a few months later, they reported that the result was due to a faulty connection in their timing system.
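The five-sigma criterion is just a statement about the tail of a Gaussian distribution, as a quick check confirms:

```python
# One-sided probability of a 5-sigma Gaussian fluctuation.
from scipy.stats import norm

p = norm.sf(5)            # P(Z > 5)
print(p, 1 / p)           # ~2.9e-7, i.e. roughly 1 chance in 3.5 million
```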
Those cautionary tales linger in the minds of physicists. And yet, even while researchers are wary of false positives from AI, they also see it as a safeguard against them. So far, unsupervised learning has discovered no new physics, despite its use on data from multiple experiments at Fermilab and CERN. But anomaly detection may have prevented embarrassments like the one at OPERA. “So instead of telling you there’s a new physics particle,” Kasieczka says, “it’s telling you, this sensor is behaving weird today. You should restart it.”
Particle physicists are pushing the limits of not only their computing software but also their computing hardware. The challenge is unparalleled. The LHC produces 40 million particle collisions per second, each of which can produce a megabyte of data. That’s much too much information to store, even if you could save it to disk that quickly. So the two largest detectors each use two-level data filtering. The first layer, called the Level-1 Trigger, or L1T, harvests 100,000 events per second, and the second layer, called the High-Level Trigger, or HLT, plucks 1,000 of those events to save for later analysis. So only one in 40,000 events is ever potentially seen by human eyes.
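The arithmetic behind those filtering figures is straightforward:

```python
# Rates quoted above: 40 million collisions per second, ~1 MB each, two trigger stages.
collisions_per_s = 40_000_000
l1t_kept_per_s = 100_000        # events per second kept by the Level-1 Trigger
hlt_kept_per_s = 1_000          # events per second kept by the High-Level Trigger

print(f"raw data rate ≈ {collisions_per_s * 1e6 / 1e12:.0f} TB per second at ~1 MB per event")
print(f"fraction of events kept: 1 in {collisions_per_s // hlt_kept_per_s:,}")
```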

“That’s when I thought, we need something like [AlphaGo] in physics. We need a genius that can look at the world differently.”
HLTs use central processing units (CPUs) like the ones in your desktop computer, running complex machine learning algorithms that analyze collisions based on the number, type, energy, momentum, and angles of the new particles produced. L1Ts, as a first line of defense, must be fast. So the L1Ts rely on integrated circuits called field-programmable gate arrays (FPGAs), which users can reprogram for specialized calculations.
The trade-off is that the programming must be relatively simple. The FPGAs can’t easily store and run fancy neural networks; instead they follow scripted rules about, say, what features of a particle collision make it important. In terms of complexity, it’s akin to the instructions given to the women who scanned bubble chambers, not to the women’s brains.
Ekaterina (Katya) Govorkova, a particle physicist at MIT, saw a path toward improving the LHC’s filters, inspired by a board game. Around 2020, she was looking for new physics by comparing precise measurements at the LHC with predictions, using little or no machine learning. Then she watched a documentary about AlphaGo, the program that used machine learning to beat a human Go champion. “For me the moment of realization was when AlphaGo would use some absolutely new type of strategy that humans, who played this game for centuries, hadn’t thought about before,” she says. “So that’s when I thought, we need something like that in physics. We need a genius that can look at the world differently.” New physics may be something we’d never imagine.
Govorkova and her collaborators found a way to compress autoencoders to put them on FPGAs, where they process an event every 80 nanoseconds (less than a ten-millionth of a second). (Compression involved pruning some network connections and reducing the precision of some calculations.) They published their methods in Nature Machine Intelligence in 2022, and researchers are now using them during the LHC’s third run. The new trigger tech is installed in one of the detectors around the LHC’s giant ring, and it has found many anomalous events that would otherwise have gone unflagged.
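The compression the paper describes combines pruning with reduced numerical precision. As a rough illustration of the reduced-precision half of that trade-off, one can quantize a layer’s weights to a small fixed-point format and check how little the layer’s output moves. The snippet below is only a sketch of the idea, not the team’s actual FPGA toolchain.

```python
# Fixed-point quantization of one layer's weights (synthetic data, illustration only).
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.5, size=(16, 8))    # a trained layer's weights (made up)
x = rng.normal(0.0, 1.0, size=(100, 16))        # a batch of event features (made up)

def quantize(w, total_bits=8, frac_bits=6):
    """Round to fixed point: total_bits wide, frac_bits of them fractional."""
    scale = 2 ** frac_bits
    max_int = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), -max_int - 1, max_int) / scale

full = x @ weights
reduced = x @ quantize(weights)
rel_change = np.abs(full - reduced).mean() / np.abs(full).mean()
print(f"mean relative change in layer output: {rel_change:.2%}")   # about a percent here
```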
Researchers are currently setting up analysis workflows to decipher why the events were deemed anomalous. Jennifer Ngadiuba, a particle physicist at Fermilab who is also one of the coordinators of the trigger system (and one of Govorkova’s coauthors), says that one feature stands out already: Flagged events have lots of jets of new particles shooting out of the collisions. But the scientists still need to explore other factors, like the new particles’ energies and their distributions in space. “It’s a high-dimensional problem,” she says.
Eventually they will share the data openly, allowing others to eyeball the results or to apply new unsupervised learning algorithms in the hunt for patterns. Javier Duarte, a physicist at the University of California, San Diego, and also a coauthor on the 2022 paper, says, “It’s kind of exciting to think about providing this to the community of particle physicists and saying, like, ‘Shrug, we don’t know what this is. You can take a look.’” Duarte and Ngadiuba note that high-energy physics has traditionally followed a top-down approach to discovery, testing data against well-defined theories. Adding in this new bottom-up search for the unexpected marks a new paradigm. “And also a return of sorts to before the Standard Model was so well established,” Duarte adds.
Yet it could be years before we know why AI marked those collisions as anomalous. What conclusions could they support? “In the worst case, it could be some detector noise that we didn’t know about,” which would still be useful information, Ngadiuba says. “The best scenario could be a new particle. And then a new particle implies a new force.”

“The best scenario could be a new particle. And then a new particle implies a new force.”
Duarte says he expects their work with FPGAs to have wider applications. “The data rates and the constraints in high-energy physics are so extreme that people in industry aren’t necessarily working on this,” he says. “In self-driving cars, usually millisecond latencies are sufficient reaction times. But we’re developing algorithms that need to respond in microseconds or less. We’re at this technological frontier, and to see how much that can proliferate back to industry will be cool.”
Plehn is also working to put neural networks on FPGAs for triggers, in collaboration with experimentalists, electrical engineers, and other theorists. Encoding the nuances of abstract theories into material hardware is a puzzle. “In this grant proposal, the person I talked to most is the electrical engineer,” he says, “because I have to ask the engineer, which of my algorithms fits on your bloody FPGA?”
Hardware is hard, says Ryan Kastner, an electrical engineer and computer scientist at UC San Diego who works with Duarte on programming FPGAs. What allows the chips to run algorithms so quickly is their flexibility. Instead of programming them in an abstract coding language like Python, engineers configure the underlying circuitry. They map logic gates, route data paths, and synchronize operations by hand. That low-level control also makes the effort “painfully difficult,” Kastner says. “It’s kind of like you have a lot of rope, and it’s very easy to hang yourself.”
The next piece of new physics may not pop up at a particle accelerator. It may appear at a detector for neutrinos, particles that are part of the Standard Model but remain deeply mysterious. Neutrinos are tiny, electrically neutral, and so light that no one has yet measured their mass. (The latest attempt, in April, set an upper limit of about a millionth the mass of an electron.) Of all known particles with mass, neutrinos are the universe’s most abundant, but also among the most ghostly, rarely deigning to acknowledge the matter around them. Tens of trillions pass through your body every second.
If we listen very closely, though, we may just hear the secrets they have to tell. Karagiorgi, of Columbia, has chosen this path to discovery. Being a physicist is “kind of like playing detective, but where you create your own mysteries,” she tells me during my visit to Columbia’s Nevis Laboratories, located on a large estate about 20 km north of Manhattan. Physics research began at the site after World War II; one hallway features papers going back to 1951.
Karagiorgi is eagerly awaiting a massive neutrino detector that’s currently under construction. Starting in 2028, Fermilab will send neutrinos west through 1,300 km of rock to South Dakota, where they’ll occasionally make their existence known in the Deep Underground Neutrino Experiment (DUNE). Why so far away? When neutrinos travel long distances, they have an odd habit of oscillating, transforming from one kind or “flavor” to another. Observing the oscillations of both the neutrinos and their mirror-image antiparticles, antineutrinos, could tell researchers something about the universe’s matter-antimatter asymmetry—which the Standard Model doesn’t explain—and thus, according to the Nevis website, “why we exist.”
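The oscillation physics DUNE will probe is usually introduced through a two-flavor approximation, in which the transition probability depends on the baseline, the neutrino energy, a mass-squared splitting, and a mixing angle. The snippet below evaluates that textbook formula with ballpark parameter values; the experiment’s real analysis involves three flavors plus matter effects, and none of these numbers are DUNE results.

```python
# Textbook two-flavor oscillation probability (illustrative parameter values).
import numpy as np

def oscillation_probability(L_km, E_GeV, delta_m2_eV2, sin2_2theta):
    """P(flavor a -> flavor b) in the standard two-flavor approximation."""
    return sin2_2theta * np.sin(1.267 * delta_m2_eV2 * L_km / E_GeV) ** 2

# Roughly DUNE-like baseline and energy with a ballpark atmospheric mass splitting.
print(oscillation_probability(L_km=1300, E_GeV=2.5,
                              delta_m2_eV2=2.5e-3, sin2_2theta=1.0))
```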
“DUNE is the thing that’s been pushing me to develop these real-time AI methods,” Karagiorgi says, “for sifting through the data very, very, very quickly and trying to look for rare signatures of interest within them.” When neutrinos interact with the detector’s 70,000 tonnes of liquid argon, they’ll generate a shower of other particles, creating visual tracks that look like a photo of fireworks.
Even when not bombarding DUNE with neutrinos, researchers will keep collecting data on the off chance that it captures neutrinos from a distant supernova. “This is a massive detector spewing out 5 terabytes of data per second,” Karagiorgi says, “and it’s going to run constantly for a decade.” They will need unsupervised learning to notice signatures that no one was looking for, because there are “lots of different models of how supernova explosions happen, and for all we know, none of them could be the right model for neutrinos,” she says. “To train your algorithm on such uncertain grounds is less than ideal. So an algorithm that can recognize any kind of disturbance would be a win.”
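Left unfiltered, that data stream would add up to more than a zettabyte over the decade; the arithmetic below is ours, from the quoted 5 terabytes per second:

```python
# Total volume if nothing were filtered out of a 5 TB/s stream over ten years.
tb_per_s = 5
seconds_per_year = 365.25 * 24 * 3600
total_tb = tb_per_s * seconds_per_year * 10
print(f"≈ {total_tb:.1e} TB ≈ {total_tb / 1e9:.1f} zettabytes")
```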
Deciding in real time which 1 percent of 1 percent of data to keep will require FPGAs. Karagiorgi’s team is preparing to use them for DUNE, and she walks me to a computer lab where they program the circuits. In the FPGA lab, we look at nondescript circuit boards sitting on a table. “So what we’re proposing is a scheme where you can have something like a hundred of these boards for DUNE deep underground that receive the image data frame by frame,” she says. This system could tell researchers whether a given frame resembled TV static, fireworks, or something in between.
Neutrino experiments, like many particle-physics studies, are very visual. When Karagiorgi was a postdoc, automated image processing at neutrino detectors was still in its infancy, so she and collaborators would often resort to visual scanning (bubble-chamber style) to measure particle tracks. She still asks undergrads to hand-scan as an educational exercise. “I think it’s wrong to just send them to write a machine learning algorithm. Unless you can actually visualize the data, you don’t really gain a sense of what you’re looking for,” she says. “I think it also helps with creativity to be able to visualize the different types of interactions that are happening, and see what’s normal and what’s not normal.”
Back in Karagiorgi’s office, a bulletin board displays images from The Cognitive Art of Feynman Diagrams, an exhibit for which the designer Edward Tufte created wire sculptures of the physicist Richard Feynman’s schematics of particle interactions. “It’s funny, you know,” she says. “They look like they’re just scribbles, right? But actually, they encode quantitatively predictive behavior in nature.” Later, Karagiorgi and I spend a good 10 minutes discussing whether a computer or a human could find Waldo without knowing what Waldo looked like. We also touch on the 1964 Supreme Court case in which Justice Potter Stewart famously declined to define obscenity, saying “I know it when I see it.” I ask whether it seems weird to hand over to a machine the task of deciding what’s visually interesting. “There are a lot of trust issues,” she says with a laugh.
On the drive back to Manhattan, we discuss the history of scientific discovery. “I think it’s part of human nature to try to make sense of an orderly world around you,” Karagiorgi says. “And then you just automatically pick out the oddities. Some people obsess about the oddities more than others, and then try to understand them.”
Reflecting on the Standard Model, she called it “beautiful and elegant,” with “amazing predictive power.” Yet she finds it both limited and limiting, blinding us to colors we don’t yet see. “Sometimes it’s both a blessing and a curse that we’ve managed to develop such a successful theory.”
2026-02-03 06:00:04

Nonmedical devices that read brainwaves, such as smart headbands, headphones, and glasses, are becoming more popular among consumers. The products claim to make users more productive, creative, and healthier. IEEE Spectrum previewed several of these smart wearables that were introduced at this year’s Consumer Electronics Show (CES) in Las Vegas.
Since the wearable, noninvasive neurotech products aren’t medical devices, they are not subject to the same forms of regulation, which can lead to gaps in safety and data privacy, as well as in oversight of their effects on users’ brains.
UNESCO in November adopted the first global ethical standard for neurotechnologies, establishing guidelines to protect users’ mental privacy, freedom of thought, and human rights. In 2019 the Organisation for Economic Co-operation and Development issued responsible-neurotechnology recommendations. But there are no socio-technical standards for manufacturers to follow.
In response, the IEEE Brain technical community is developing the IEEE P7700 standard: “Recommended Practice for the Responsible Design and Development of Neurotechnologies.”
The proposed standard is being designed to provide a uniform set of definitions and a methodology to assess the ethical and socio-technical considerations and practices regarding the design, development, and use of neurotechnologies, including wearable neurodevices for the brain, says Laura Y. Cabrera, the standard’s working group chair. Cabrera, an IEEE senior member, is an associate professor in the engineering science and mechanics department at Pennsylvania State University in University Park. Her research focuses on the ethical and societal implications of neurotechnologies.
“IEEE P7700 addresses the unique characteristics of the technology and its impact on individuals and society, in particular, as it moves from therapeutic users to a wide variety of consumers,” she says.
The standard is sponsored by the IEEE Society on Social Implications of Technology.
The multilayered complexity of technologies that interface with the brain and nervous system raises particular considerations for those developing them, Cabrera says.
“There may be long-term consequences in our brains with these types of technologies,” she says. “Maybe if they were used for a short period of time, there might not be significant consequences. But what are the effects over time?”
Patients using approved brain-stimulation technology, for example, are told of its risks and benefits, but the long-term effects of headbands to improve students’ attention span aren’t known.
“IEEE P7700 addresses the unique characteristics of the technology and its impact on individuals and society, in particular, as it moves from therapeutic users to a wide variety of consumers.”
IEEE P7700 will address potential risks to individuals and possible negative impacts on society, Cabrera says. That includes creating guardrails to prevent harm, she adds.
The cultural implications of using neurotechnologies that interface with the brain also need to be considered, she says, because people have different views.
“The brain is considered the seed of the self and the organ that orchestrates all our thoughts, behaviors, feelings, and emotions,” she says. “The brain is really central to who we are.”
For the past five years, the IEEE Brain community’s neuroethics committee has been developing a framework to evaluate the ethical, legal, social, and cultural issues that could emerge from use of the technology. The document covers nine types of applications, including those used for wellness.
Because more devices kept entering the market, IEEE Brain decided in 2023 that it was time to begin drafting a standard.
Members of its working group come from Argentina, China, Japan, Italy, Switzerland, and the United States. Participants include developers, engineers, ethicists, lawyers, and social science researchers.
The standard, Cabrera says, will be the first socio-technical standard aimed at fostering the ethical and responsible innovation of neurotechnology that meets societal and community values at an international level. P7700 will include a how-to guide, criteria for evaluating each suggested process, and case studies to help with the interpretation and practical use of the standard, she says.
“Our applied ethical approach uses a responsible research and innovation method to enable developers, researchers, users, and regulators to anticipate and address ethical and sociocultural implications of neurotechnologies, mitigating negative unintended consequences while increasing community support and engagement with innovators,” Cabrera says.
The working group is seeking additional participants to help refine the process, tools, and recommendations.
“There are a variety of people who can contribute their expertise,” she says, “including academics, data scientists, government program leaders, policymakers, lawyers, social scientists, and users.”
Cabrera says she anticipates the standard will be published early next year.
You can register to participate in the standard’s development here.