2026-03-20 21:15:45
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
OpenAI has a new grand challenge: building an AI researcher—a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its “north star” for the next few years.
By September, the company plans to build “an autonomous AI research intern” that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028.
In an exclusive interview this week, OpenAI’s chief scientist, Jakub Pachocki, talked me through the plans. Find out what I discovered.
—Will Douglas Heaven
Over the last decade, we’ve seen scientific interest in psychedelic drugs explode. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. But two studies out earlier this week demonstrate just how difficult it is to study these drugs.
For me, they show just how overhyped these substances have become. Find out why here.
—Jessica Hamzelou
This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Wednesday.
Read more: What do psychedelic drugs do to our brains? AI could help us find out
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI is building a “super app”
It’s merging ChatGPT, a web browser, and a coding tool into a single app. (The Verge)
+ It’s also buying coding startup Astral to enhance its Codex model. (Ars Technica)
+ The moves come amid a cutback on side projects. (WSJ $)
+ OpenAI has lost ground to Anthropic in the enterprise market. (Axios)
2 The US has charged Super Micro’s co-founder with smuggling AI tech to China
Super Micro is third on Fortune’s list of the fastest-growing companies. (Reuters)
+ GenAI is learning to spy for the US military. (MIT Technology Review)
+ The compute competition is shaping the China-US rivalry. (Politico)
3 The DoJ has taken down botnets behind the largest-ever DDoS attack
They had infected more than 3 million devices. (Wired $)
+ The DoJ has also seized domains tied to Iranian “hacktivists.” (Axios)
4 The Pentagon says Anthropic’s foreign workers are a security risk
It cited Chinese employees as a particular concern. (Axios)
+ Anthropic’s moral boundaries have incensed the DoD. (MIT Technology Review)
5 High oil prices could wreck the AI boom, the WTO has warned
Fears are growing of a prolonged energy shock. (The Guardian)
+ We did the math on AI’s energy footprint. (MIT Technology Review)
6 Jeff Bezos is trying to raise $100 billion to use AI in manufacturing
The funds would buy manufacturing firms and infuse them with AI. (WSJ $)
+ Here’s how to fine-tune AI for prosperity. (MIT Technology Review)
7 Signal’s creator is helping to encrypt Meta’s AI
Moxie Marlinspike is integrating his encrypted chatbot, Confer. (Wired $)
+ Meta is also ditching human moderators for AI again. (CNBC)
+ AI is making online crimes easier. (MIT Technology Review)
8 Prediction market Kalshi has raised $1 billion at a $22 billion valuation
That’s double its valuation from December. (Bloomberg $)
+ Arizona’s AG has charged the company with “illegal gambling.” (NPR)
9 Meta isn’t killing Horizon Worlds for VR after all
It’s canceled plans to dump the metaverse app (for now). (CNBC)
10 A US startup is recruiting an “AI bully”
The successful candidate must test the patience of leading chatbots. (The Guardian)
Quote of the day
—Kalshi rival Polymarket unveils its hellish vision for a new bar.
One More Thing

It’s a thought that occurs to every video-game player at some point: what if the weird, hyper-focused state I enter in virtual worlds could somehow be applied to the real one?
For a handful of consultants, startup gurus, and game designers in the late 2000s, this state of “blissful productivity” became the key to unlocking our true human potential. Their vision became the global phenomenon of gamification—but it didn’t live up to the hype.
Instead of liberating us, gamification became a tool for coercion, distraction, and control. Find out why we fell for it—and how we can recover.
—Bryan Gardiner
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ In a landmark legal win for trolling, Afroman has won his diss track case against the police.
+ This LEGO artist remixes standard sets into completely different iconic objects.
+ Ease your search for aliens with these interactive estimates of advanced civilizations.
+ A rare superbloom in Death Valley has been caught on camera.
2026-03-20 19:57:16
OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that this new research goal will be its “North Star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.
There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. This AI researcher (OpenAI says) will be able to tackle problems that are too large or complex for humans to cope with.
Those tasks might be related to math and physics—such as coming up with new proofs or conjectures—or life sciences like biology and chemistry, or even business and policy dilemmas. In theory, you would throw such a tool any kind of problem that can be formulated in text, code, or whiteboard scribbles—which covers a lot.
OpenAI has been setting the agenda for the AI industry for years. Its early dominance with large language models shaped the technology that hundreds of millions of people use every day. But it now faces fierce competition from rival model makers like Anthropic and Google DeepMind. What OpenAI decides to build next matters—for itself and for the future of AI.
A big part of that decision falls to Jakub Pachocki, OpenAI’s chief scientist, who sets the company’s long-term research goals. Pachocki played key roles in the development of both GPT-4, a game-changing LLM released in 2023, and so-called reasoning models, a technology that first appeared in 2024 and now underpins all major chatbots and agent-based systems.
In an exclusive interview this week, Pachocki talked me through OpenAI’s latest vision. “I think we are getting close to a point where we’ll have models capable of working indefinitely in a coherent way just like people do,” he says. “Of course, you still want people in charge and setting the goals. But I think we will get to a point where you kind of have a whole research lab in a data center.”
Such big claims aren’t new. Saving the world by solving its hardest problems is the stated mission of all the top AI firms. Demis Hassabis told me back in 2022 that it was why he started DeepMind. Anthropic CEO Dario Amodei says he is building the equivalent of a country of geniuses in a data center. Pachocki’s boss, Sam Altman, wants to cure cancer. But Pachocki says OpenAI now has most of what it needs to get there.
In January, OpenAI released Codex, an agent-based app that can spin up code on the fly to carry out tasks on your computer. It can analyze documents, generate charts, make you a daily digest of your inbox and social media, and much more. (Other firms have released similar tools, such as Anthropic’s Claude Code and Claude Cowork.)
OpenAI claims that most of its technical staffers now use Codex in their work. You can look at Codex as a very early version of the AI researcher, says Pachocki: “I expect Codex to get fundamentally better.”
The key is to make a system that can run for longer periods of time, with less human guidance. “What we’re really looking at for an automated research intern is a system that you can delegate tasks [to] that would take a person a few days,” says Pachocki.
“There are a lot of people excited about building systems that can do more long-running scientific research,” says Doug Downey, a research scientist at the Allen Institute for AI, who is not connected to OpenAI. “I think it’s largely driven by the success of these coding agents. The fact that you can delegate quite substantial coding tasks to tools like Codex is incredibly useful and incredibly impressive. And it raises the question: Can we do similar things outside coding, in broader areas of science?”
For Pachocki, that’s a clear Yes. In fact, he thinks it’s just a matter of pushing ahead on the path we’re already on. A simple boost in all-round capability also leads to models that can work longer without help, he says. He points to the leap from 2020’s GPT-3 to 2023’s GPT-4, two of OpenAI’s previous models. GPT-4 was able to work on a problem for far longer than its predecessor, even without specialized training, he says.
So-called reasoning models brought another bump. Training LLMs to work through problems step by step, backtracking when they make a mistake or hit a dead end, has also made models better at working for longer periods of time. And Pachocki is convinced that OpenAI’s reasoning models will continue to get better.
But OpenAI is also training its systems to work by themselves for longer by feeding them specific samples of complex tasks, such as hard puzzles taken from math and coding contests, which force the models to learn how to do things like keep track of very large chunks of text and split problems up into (and then manage) multiple subtasks.
The aim isn’t to build models that just win math competitions. “That lets you prove that the technology works before you connect it to the real world,” says Pachocki. “If we really wanted to, we could build an amazing automated mathematician. We have all the tools, and I think it would be relatively easy. But it’s not something we’re going to prioritize now because, you know, at the point where you believe you can do it, there’s much more urgent things to do.”
“We are much more focused now on research that’s relevant in the real world,” he adds.
Right now that means taking what Codex can do with coding and trying to apply that to problem-solving in general. “There’s a big change happening, especially in programming,” he says. “Our jobs are now totally different than they were even a year ago. Nobody really edits code all the time anymore. Instead, you manage a group of Codex agents.” If Codex can solve coding problems (the argument goes), it can solve any problem.
It’s true that OpenAI has had a handful of remarkable successes in the last few months. Researchers have used GPT-5 (the LLM that powers Codex) to discover new solutions to a number of unsolved math problems and punch through apparent dead ends in a handful of biology, chemistry, and physics puzzles.
“Just looking at these models coming up with ideas that would take most PhD weeks, at least, makes me expect that we’ll see much more acceleration coming from this technology in the near future,” Pachocki says.
But Pachocki admits that it’s not a done deal. He also understands why some people still have doubts about how much of a game-changer the technology really is. He thinks it depends on how people like to work and what they need to do. “I can believe some people don’t find it very useful yet,” he says.
He tells me that he didn’t even use autocomplete—the most basic version of generative coding tech—a year ago. “I’m very pedantic about my code,” he says. “I like to type it all manually in vim if I can help it.” (Vim is a text editor favored by many hardcore programmers that you interact with via dozens of keyboard shortcuts instead of a mouse.)
But that changed when he saw what the latest models could do. He still wouldn’t hand over complex design tasks, but it’s a time-saver when he just wants to try out a few ideas. “I can have it run experiments in a weekend that previously would have taken me like a week to code,” he says.
“I don’t think it is at the level where I would just let it take the reins and design the whole thing,” he adds. “But once you see it do something that would take a week to do—I mean, that’s hard to argue with.”
Pachocki’s game plan is to supercharge the existing problem-solving abilities that tools like Codex have now and apply them across the sciences.
Downey agrees that the idea of an automated researcher is very cool: “It would be exciting if we could come back tomorrow morning and the agent’s done a bunch of work and there’s new results we can examine,” he says.
But he cautions that building such a system could be harder than Pachocki makes out. Last summer, Downey and his colleagues tested several top-tier LLMs on a range of scientific tasks. OpenAI’s latest model, GPT-5, came out on top but still made lots of errors.
“If you have to chain tasks together, then the odds that you get several of them right in succession tend to go down,” he says. Downey admits that things move fast, and he has not tested the latest versions of GPT-5 (OpenAI released GPT-5.4 two weeks ago). “So those results might already be stale,” he says.
I asked Pachocki about the risks that may come with a system that can solve large, complex problems by itself with little human oversight. Pachocki says people at OpenAI talk about those risks all the time.
“If you believe that AI is about to substantially accelerate research, including AI research, that’s a big change in the world. That’s a big thing,” he told me. “And it comes with some serious unanswered questions. If it’s so smart and capable, if it can run an entire research program, what if it does something bad?”
The way Pachocki sees it, that could happen in a number of ways. The system could go off the rails. It could get hacked. Or it could simply misunderstand its instructions.
The best technique OpenAI has right now to address these concerns is to train its reasoning models to share details about what they are doing as they work. This approach to keeping tabs on LLMs is known as chain-of-thought monitoring.
In short, LLMs are trained to jot down notes about what they are doing in a kind of scratch pad as they step through tasks. Researchers can then use those notes to make sure a model is behaving as expected. Yesterday OpenAI published new details on how it is using chain-of-thought monitoring in house to study Codex.
“Once we get to systems working mostly autonomously for a long time in a big data center, I think this will be something that we’re really going to depend on,” says Pachocki.
The idea would be to monitor an AI researcher’s scratch pads using other LLMs and catch unwanted behavior before it’s a problem, rather than trying to stop that bad behavior from happening in the first place. LLMs are not understood well enough for us to control them fully.
“I think it’s going to be a long time before we can really be like, okay, this problem is solved,” he says. “Until you can really trust the systems, you definitely want to have restrictions in place.” Pachocki thinks that very powerful models should be deployed in sandboxes, cut off from anything they could break or use to cause harm.
AI tools have already been used to come up with novel cyberattacks. Some worry that they will be used to design synthetic pathogens that could be used as bioweapons. You can insert any number of evil-scientist scare stories here. “I definitely think there are worrying scenarios that we can imagine,” says Pachocki.
“It’s going to be a very weird thing. It’s extremely concentrated power that’s in some ways unprecedented,” says Pachocki. “Imagine you get to a world where you have a data center that can do all the work that OpenAI or Google can do. Things that in the past required large human organizations would now be done by a couple of people.”
“I think this is a big challenge for governments to figure out,” he adds.
And yet some people would say governments are part of the problem. The US government wants to use AI on the battlefield, for example. The recent showdown between Anthropic and the Pentagon revealed that there is little agreement across society about where we draw red lines for how this technology should and should not be used—let alone who should draw them. In the immediate aftermath of that dispute, OpenAI stepped up to sign a deal with the Pentagon instead of its rival. The situation remains murky.
I pushed Pachocki on this. Does he really trust other people to figure it out or does he, as a key architect of the future, feel personal responsibility? “I do feel personal responsibility,” he says. “But I don’t think this can be resolved by OpenAI alone, pushing its technology in a particular way or designing its products in a particular way. We’ll definitely need a lot of involvement from policymakers.”
Where does that leave us? Are we really on a path to the kind of AI Pachocki envisions? When I asked the Allen Institute’s Downey, he laughed. “I’ve been in this field for a couple of decades and I no longer trust my predictions for how near or far certain capabilities are,” he says.
OpenAI’s stated mission is to ensure that artificial general intelligence (a hypothetical future technology that many AI boosters believe will be able to match humans on most cognitive tasks) will benefit all of humanity. OpenAI aims to do that by being the first to build it. But the only time Pachocki mentioned AGI in our conversation, he was quick to clarify what he meant by talking about “economically transformative technology” instead.
LLMs are not like human brains, he says: “They are superficially similar to people in some ways because they’re kind of mostly trained on people talking. But they’re not formed by evolution to be really efficient.”
“Even by 2028, I don’t expect that we’ll get systems as smart as people in all ways. I don’t think that will happen,” he adds. “But I don’t think it’s absolutely necessary. The interesting thing is you don’t need to be as smart as people in all their ways in order to be very transformative.”
2026-03-20 17:00:00
This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin—which is found in magic mushrooms—are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity.
Over the last decade, we’ve seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges. And a lot of the trial results have been underwhelming or inconclusive.
Two studies out earlier this week demonstrate just how difficult it is to study these drugs. And to my mind, they also show just how overhyped these substances have become.
To some in the field, the hype is not necessarily a bad thing. Let me explain.
The two new studies both focus on the effectiveness of psilocybin in treating depression. And they both attempt to account for one of the biggest challenges in trialing psychedelics: what scientists call “blinding.”
The best way to test the effectiveness of a new drug is to perform a randomized controlled trial. In these studies, some volunteers receive the drug while others get a placebo. For a fair comparison, the volunteers shouldn’t know whether they’re getting the drug or placebo.
That is almost impossible to do with psychedelics. Almost anyone can tell whether they’ve taken a dose of psilocybin or a dummy pill. The hallucinations are a dead giveaway. Still, the authors behind the two new studies have tried to overcome this challenge.
In one, a team based in Germany gave 144 volunteers with treatment-resistant depression either a high or low dose of psilocybin or an “active” placebo, which has its own physical (but not hallucinatory) effects, along with psychotherapy. In their trial, neither the volunteers nor the investigators knew who was getting the drug.
The volunteers who got psilocybin did show some improvement—but it was not significantly any better than the improvement experienced by those who took the placebo. And while those who took psilocybin did have a bigger reduction in their symptoms six weeks later, “the divergence between [the two results] renders the findings inconclusive,” the authors write.
Not great news so far.
The authors of the second study took a different approach. Balázs Szigeti at UCSF and his colleagues instead looked at what are known as “open label” studies of both psychedelics and traditional antidepressants. In those studies, the volunteers knew when they were getting a psychedelic—but they also knew when they were getting an antidepressant.
The team assessed 24 such trials to find that … psychedelics were no more effective than traditional antidepressants. Sad trombone.
“When I set up the study, I wanted to be a really cool psychedelic scientist to show that even if you consider this blinding problem, psychedelics are so much better than traditional antidepressants,” says Szigeti. “But unfortunately, the data came out the other way around.”
His study highlights another problem, too.
In trials of traditional antidepressant drugs, the placebo effect is pretty strong. Depressive symptoms are often measured using a scale, and in trials, antidepressant drugs typically lower symptoms by around 10 points on that scale. Placebos can lower symptoms by around eight points.
When a drug regulator looks at those results, the takeaway is that the antidepressant drug lowers symptoms by an additional two points on the scale, relative to a placebo.
But with psychedelics, the difference between active drug and placebo is much greater. That’s partly because people who get the psychedelic drug know they’re getting it and are expecting the drug to improve their symptoms, says David Owens, emeritus professor of clinical psychiatry at the University of Edinburgh, UK.
But it’s also partly because of the effect on those who know they’re not getting it. It’s pretty obvious when you’re getting a placebo, says Szigeti, and it can be disappointing. Scientists have long recognized the “nocebo” effect as placebo’s “evil twin”—essentially, when you expect to feel worse, you will.
The disappointment of getting a placebo is slightly different, and Szigeti calls it the “knowcebo effect.” “It’s kind of like a negative psychedelic effect, because you have figured out that you’re taking the placebo,” he says.
This phenomenon can distort the results of psychedelic drug trials. While a placebo in a traditional antidepressant drug trial improves symptoms by eight points, placebos in psychedelic trials improve symptoms by a mere four points, says Szigeti.
If the active drug similarly improves symptoms by around 10 points, that makes it look as though the psychedelic is improving symptoms by around six points compared with a placebo. It “gives the illusion” of a huge effect, says Szigeti.
So why have those smaller trials of the past received so much attention? Many have been published in high-end journals, accompanied by breathless press releases and media coverage. Even the inconclusive ones. I’ve often thought that those studies might not have seen the light of day if they’d been investigating any other drug.
“Yeah, nobody would care,” Szigeti agrees.
It’s partly because people who work in mental health are so desperate for new treatments, says Owens. There has been little innovation in the last 40 years or so, since the advent of selective serotonin reuptake inhibitors. “Psychiatry is hemmed in with old theories … and we don’t need another SSRI for depression,” he says. But it’s also because psychedelics are inherently fascinating, says Szigeti. “Psychedelics are cool,” he says. “Culturally, they are exciting.”
I’ve often worried that psychedelics are overhyped—that people might get the mistaken impression they are cure-alls for mental-health disorders. I’ve worried that vulnerable people might be harmed by self-experimentation.
Szigeti takes a different view. Given how effective we know the placebo effect can be, maybe hype isn’t a totally bad thing, he says. “The placebo response is the expectation of a benefit,” he says. “The better response patients are expecting, the better they’re going to get.” Tempering the hype might end up making those drugs less effective, he says.
“At the end of the day, the goal of medicine is to help patients,” he says. “I think most [mental health] patients don’t care whether they feel better because of some expectancy and placebo effects or because of an active drug effect.”
Either way, we need to know exactly what these drugs are doing. Maybe they will be able to help some people with depression. Maybe they won’t. Research that acknowledges the pitfalls associated with psychedelic drug trials is essential.
“These are potentially exciting times,” says Owens. “But it’s really important we do this [research] well. And that means with eyes wide open.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
2026-03-19 20:17:02
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful—and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition next week.
The prize will go to the quantum computer that can solve real health care problems that conventional “classical” computers are unable to solve. But there can be only one big winner—if there is a winner at all. Read the full story.
—Michael Brooks
There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Recycling could reduce both the waste and the need to mine new material, but the process is costly, complicated, and not fully efficient.
Find out why it’s such an issue. —Casey Crownhart
This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The FBI has confirmed it’s buying Americans’ location data
Director Kash Patel said it’s led to “valuable intelligence.” (Politico)
+ What AI “remembers” about you is privacy’s next frontier. (MIT Technology Review)
2 The first draft of a federal AI bill has been introduced
It aims to protect “children, creators, conservatives, and communities.” (Engadget)
+ A war is brewing over AI regulation in the US. (MIT Technology Review)
3 Google is pitching itself to the Pentagon as the perfect defense partner
It’s framing its AI as a safe alternative to OpenAI and Anthropic. (NYT $)
+ Here’s where OpenAI’s tech could show up in Iran. (MIT Technology Review)
4 A rogue AI agent at Meta leaked sensitive information to employees
The exposure lasted for hours before it was contained. (The Information $)
+ Don’t let AI agent hype get ahead of reality. (MIT Technology Review $)
5 Sony just removed 135,000 ‘deepfakes’ of its music
Fraudsters were impersonating the label’s artists on streaming services. (BBC)
+ AI works better as a collaborator than a creator. (MIT Technology Review)
6 The EU has backed a ban on nonconsensual sexualized deepfakes
It has reacted to Elon Musk’s Grok chatbot “nudifying” children. (Bloomberg $)
7 Two quantum cryptography pioneers have won the Turing Award
Their encryption method can (theoretically) never be broken. (Quanta)
8 Gamers are disgusted by Nvidia’s new rendering model
They’ve labeled it an “AI slop filter.” (The Verge)
9 The White House has registered the aliens.gov domain
It’s sparked speculation that Trump’s long-awaited UFO disclosure is imminent. (404 Media)
+ Meet the new biologists treating LLMs like ETs. (MIT Technology Review)
10 Silicon Valley has embraced a new buzzword: “taste”
As a USP amid the deluge of AI-driven recommendations. (The New Yorker $)
Quote of the day
—Elizabeth Warren gives her take on the Trump administration allowing Nvidia to sell advanced chips to China.
One More Thing

Last year, Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away. He also suggested that those computers would need Nvidia GPUs to function. But Huang’s predictions miss the mark—both on the timeline and the role his company’s technology will play.
Quantum computing is rapidly converging on utility. And that’s good news, because the hope is that they will be able to perform calculations that no amount of AI or classical computation could ever achieve. Read the full story.
—Peter Barrett
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ A self-described “mad scientist” has powered a car with vape batteries.
+ Someone squeezed an Apple Mac Mini inside a classic LEGO computer.
+ Watch thousands of satellites orbit Earth in real-time with this mesmerizing interactive map.
+ This grilled wall cheese art looks good enough to eat.
2026-03-19 18:51:43
I’m standing in front of a quantum computer built out of atoms and light at the UK’s National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik’s Cube–size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam.
The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I’d be unlikely to get very far, though. It’s small but powerful—and so it’s very valuable. Infleqtion, the Colorado-based company that owns it, is hoping the machine’s abilities will win $5 million next week, at an event to be held in Marina del Rey, California.
Infleqtion is one of six teams that have made it to the final stage of a 30-month-long quantum computing competition called Quantum for Bio (Q4Bio). Run by the nonprofit Wellcome Leap, it aims to show that today’s quantum computers, though messy and error-prone and far from the large-scale machines engineers hope to build, could actually benefit human health. Success would be a significant step forward in proving the worth of quantum computers. But for now, it turns out, that worth seems to be linked to harnessing and improving the performance of conventional (also called classical) computers in tandem, creating a quantum-classical hybrid that can exceed what’s possible on classical machines by themselves.
There are two prize categories. A prize of $2 million will go to any and all teams that can run a significantly useful health care algorithm on computers with 50 or more qubits (a qubit is the basic processing unit in a quantum computer). To win the $5 million grand prize, a team must successfully run a quantum algorithm that solves a significant real-world problem in health care, and the work must use 100 or more qubits. Winners have to meet strict performance criteria, and they must solve a health care problem that can’t be solved with conventional computers—a tough task.
Despite the scale of the challenge, most of the teams think some of this money could be theirs. “I think we’re in with a good shout,” says Jonathan D. Hirst, a computational chemist at the University of Nottingham, UK. “We’re very firmly within the criteria for the $2 million prize,” says Stanford University’s Grant Rotskoff, whose collaboration is investigating the quantum properties of the ATP molecule that powers biological cells.
The grand prize is perhaps less of a sure thing. “This is really at the very edge of doable,” Rotskoff says. Insiders say the challenge is so difficult, given the state of quantum computing technology, that much of the money could stay in Wellcome Leap’s account.
With most of the Q4Bio work unpublished and protected by NDAs, and the quantum computing field already rife with claims and counterclaims about performance and achievements, only the judges will be in a position to decide who’s right.
The idea behind quantum computers is that they can use small-scale objects that obey the laws of quantum mechanics, such as atoms and photons of light, to simulate real-world processes too complex to model on our everyday classical machines.
Researchers have been working for decades to build such systems, which could deliver insights for creating new materials, developing pharmaceuticals, and improving chemical processes such as fertilizer production. But dealing with quantum stuff like atoms is excruciatingly difficult. The biggest, shiniest applications require huge, robust machines capable of withstanding the environmental “noise” that can very easily disrupt delicate quantum systems. We don’t have those yet—and it’s unclear when we will.
Wellcome Leap wanted to find out if the smaller-scale machines we have today can be made to do something—anything—useful for health care while we wait for the era of powerful, large-scale quantum computers. The group started the competition in 2024, offering $1.5 million in funding to each group of 12 selected teams.
The six Q4Bio finalists have taken a range of approaches. Crucially, they’ve all come up with ingenious ways to overcome quantum computing’s drawbacks. Faced with noisy, limited machines, they have learned how to outsource much of the computational load to classical processors running newly developed algorithms that are, in many cases, better than the previous state of the art. The quantum processors are then required only for the parts of the problem where classical methods don’t scale well enough as the calculation gets bigger.
For example, a team led by Sergii Strelchuk of Oxford University is using a quantum computer to map genetic diversity among humans and pathogens on complex graph-based structures. These will—the researchers hope—expose hidden connections and potential treatment pathways. “You can think about it as a platform for solving difficult problems in computational genomics,” Strelchuk says.
The corresponding classical tools struggle with even modest scale-up to large databases. Strelchuk’s team has built an automated pipeline that provides a way of determining whether classical solvers will struggle with a particular problem, and how a quantum algorithm might be able to formulate the data so that it becomes solvable on a classical computer or handleable on a noisy quantum one. “You can do all this before you start spending money on computing,” Strelchuk says.
In collaboration with Cleveland Clinic, Helsinki-based Algorithmiq has used a superconducting quantum computer built by IBM to simulate a cancer drug that is triggered by specific types of light. “The idea is you take the drug, and it’s everywhere in your body, but it’s doing nothing, just sitting there, until there’s light on it of a certain wavelength,” says Guillermo García-Pérez, Algorithmiq’s chief scientific officer. Then it acts as a molecular bullet, attacking the tumor only at the location in the body where that light is directed.
The drug with which Algorithmiq began its work is already in phase II clinical trials for treating bladder cancers. The quantum-computed simulation, which adapts and improves on classical algorithms, will allow it to be redesigned for treating other conditions. “It has remained a niche treatment precisely because it can’t be simulated classically,” says Sabrina Maniscalco, Algorithmiq’s CEO and cofounder.
Maniscalco, who is also confident of walking away from the competition with prize money, believes the methods used to create the algorithm will have wide applications: “What we’ve done in the period of the Q4Bio program is something unique that can change how to simulate chemistry for health care and life sciences.”
Infleqtion’s entry, running on its cesium-powered machine, is an effort to improve the identification of cancer signatures in medical data. Together with collaborators at the University of Chicago and MIT, the company’s scientists have developed a quantum algorithm that mines huge data sets such as the Cancer Genome Atlas.
The aim is to find patterns that allow clinicians to determine factors such as the likely origin of a patient’s metastasized cancer. “It’s very important to know where it came from because that can inform the best treatment,” says Teague Tomesh, a quantum software engineer who is Infleqtion’s Q4Bio project lead.
Unfortunately, those patterns are hidden inside data sets so large that they overwhelm classical solvers. Infleqtion uses the quantum computer to find correlations in the data that can reduce the size of the computation. “Then we hand the reduced problem back to the classical solver,” Teague says. “I’m basically trying to use the best of my quantum and my classical resources.”
The Nottingham-based team, meanwhile, is using quantum computing to nail down a drug candidate that can cure myotonic dystrophy, the most common adult-onset form of muscular dystrophy. One member of the team, David Brook, played a role in identifying the gene behind this condition in 1992. Over 30 years later, Brook, Hirst, and the others in their group—which includes QuEra, a Boston company developing a quantum computer based on neutral atoms—has now quantum-computed a way in which drugs can form chemical bonds with the protein that brings on the disease, blocking the mechanism that causes the problem.
The entrants’ confidence might be high, but Shihan Sajeed’s is much lower. Sajeed, a quantum computing entrepreneur based in Waterloo, Ontario, is program director for Q4Bio. He believes the error-prone quantum machines the researchers must work with are unlikely to deliver on all the grand prize criteria. “It is very difficult to achieve something with a noisy quantum computer that a classical machine can’t do,” he says.
That said, he has been surprised by the progress. “When we started the program, people didn’t know about any use cases where quantum can definitely impact biology,” he says. But the teams have found promising applications, he adds: “We now know the fields where quantum can matter.”
And the developments in “hybrid quantum-classical” processing that the entrants are using are “transformational,” Sajeed reckons.
Will it be enough to make him part with Wellcome Leap’s money? That’s down to a judging panel, whose members’ identities are a closely guarded secret to ensure that no one tailors their presentation to a particular kind of approach. But we won’t know the outcome for a while; the winner, or winners, will be announced in mid-April.
If it does turn out that there are no winners, Sajeed has some words of comfort for the competitors. The goal has always been about running a useful algorithm on a machine that exists today, he points out; missing the mark doesn’t mean your algorithm won’t be useful on a future quantum computer. “It just means the machine you need doesn’t exist yet.”
2026-03-19 18:00:00
The prospect of making trash useful is always fascinating to me. Whether it’s used batteries, solar panels, or spent nuclear fuel, getting use out of something destined for disposal sounds like a win all around.
In nuclear energy, figuring out what to do with waste has always been a challenge, since the material needs to be dealt with carefully. In a new story, I dug into the question of what advanced nuclear reactors will mean for spent fuel waste. New coolants, fuels, and logistics popping up in companies’ designs could require some adjustments.
My reporting also helped answer another question that was lingering in my brain: Why doesn’t the world recycle more nuclear waste?
There’s still a lot of usable uranium in spent nuclear fuel when it’s pulled out of reactors. Getting more use out of the spent fuel could cut down on both waste and the need to mine new material, but the process is costly, complicated, and not 100% effective.
France has the largest and most established reprocessing program in the world today. The La Hague plant in northern France has the capacity to reprocess about 1,700 tons of spent fuel each year.
The plant uses a process called PUREX—spent fuel is dissolved in acid and goes through chemical processing to pull out the uranium and plutonium, which are then separated. The plutonium is used to make mixed oxide (or MOX) fuel, which can be used in a mixture to fuel conventional nuclear reactors or alone as fuel in some specialized designs. And the uranium can go on to be re-enriched and used in standard low-enriched uranium fuel.
Reprocessing can cut down on the total volume of high-level nuclear waste that needs special handling, says Allison Macfarlane, director of the school of public policy and global affairs at the University of British Columbia and a former chair of the NRC.
But there’s a bit of a catch. Today, the gold standard for permanent nuclear waste storage is a geological repository, a deep underground storage facility. Heat, not volume, is often the key limiting factor for how much material can be socked away in those facilities, depending on the specific repository. And spent MOX fuel gives off much more heat than conventional spent fuel, Macfarlane says. So even if there’s a smaller volume, the material might take up as much, or even more, space in a repository.
It’s also tricky to make this a true loop: The uranium that’s produced from reprocessing is contaminated with isotopes that can be difficult to separate, Macfarlane says. Today, France essentially saves the uranium for possible future enrichment as a sort of strategic stockpile. (Historically, it’s also exported some to Russia for enrichment.) And while MOX fuel can be used in some reactors, once it is spent, it is technically challenging to reprocess. So today, the best case is that fuel could be used twice, not infinitely.
“Every responsible analyst understands that no matter what, no matter how good your recycling process is, you’re still going to need a geological repository in the end,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists.
Reprocessing also has its downsides, Lyman adds. One risk comes from the plutonium made in the process, which can be used in nuclear weapons. France handles that risk with high security, and by quickly turning that plutonium into the MOX fuel product.
Reprocessing is also quite expensive, and uranium supply isn’t meaningfully limited. “There’s no economic benefit to reprocessing at this time,” says Paul Dickman, a former Department of Energy and NRC official.
France bears the higher cost that comes with reprocessing largely for political reasons, he says. The country doesn’t have uranium resources, importing its supply today. Reprocessing helps ensure its energy independence: “They’re willing to pay a national security premium.”
Japan is currently constructing a spent-fuel reprocessing facility, though delays have plagued the project, which started construction in 1993 and was originally supposed to start up by 1997. Now the facility is expected to open by 2027.
It’s possible that new technologies could make reprocessing more appealing, and agencies like the Department of Energy should do longer-term research on advanced separation technologies, Dickman says. Some companies working on advanced reactors say they plan to use alternative reprocessing methods in their fuel cycle.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.