2026-03-26 01:26:26
Whether it’s the race to find life on Mars, the campaign to outsmart killer asteroids, or the quest to make the moon a permanent home to astronauts, scientists’ efforts in space can tell us more about where humanity is headed. This subscriber-only discussion examines the progress and possibilities ahead.
Speakers: Amanda Silverman, features & investigations editor, and Robin George Andrews, award-winning science journalist and author
Recorded on March 25, 2026
2026-03-25 23:02:50
Qichao Hu doesn’t mince words about how he sees the state of the battery industry. “Almost every Western battery company has either died or is going to die. It’s kind of the reality,” he says.
Hu is the CEO of SES AI, a Massachusetts-based battery company. It once had aims of making huge amounts of advanced lithium metal batteries for major industries like electric vehicles—but now the company is placing its bets on AI materials discovery.
Hu sees the pivot as an essential one. “It’s just not possible for a Western company to build a sustainable business,” he says. The company is still making some batteries, but only for smaller markets like drones rather than those that would require higher volumes, like EVs. The new focus is the company’s battery materials discovery platform—which it can either license to other battery companies or use to develop materials to sell.
Some leading US EV battery companies have folded in recent months, and others, like SES AI, are making dramatic changes in strategy. This shift in who’s building batteries and where they’re doing it could shape the future geopolitics of energy.
The work that would eventually evolve into SES AI began at MIT, where Hu completed his graduate research. His battery work was aimed at applications in oil and gas exploration. The industry uses sensors that go deep underground, where temperatures can top 120 °C (about 250 °F). The team hoped to develop a battery that could withstand those high temperatures and last longer on a single charge.
The chosen technology was a solid polymer lithium metal battery. These cells use lithium metal for their anode and a polymer for their electrolyte (the material that ions move through in a battery cell). Together, these components can increase the energy density of a cell significantly, relative to the lithium-ion batteries that are common in personal devices and EVs today. (Lithium-ion batteries generally use a graphite material for their anode and a liquid for the electrolyte.)
That solid-state battery technology became the foundation of Solid Energy, a startup Hu founded that spun out from MIT in 2012 and raised its first private investment in 2013.
The team eventually realized that underground oil exploration was a small market, so after several years of operation they began to focus on electric vehicles, which were starting to come into the mainstream. After the team tweaked the chemistry to work better at lower temperatures, the company built its first pilot facility in Massachusetts and eventually another facility in Shanghai.
By 2021, the battery industry was booming, Hu recalls, and EVs were the hottest industry to be in. There was a ton of interest in next-generation battery technology from major automakers at the time, and Solid Energy started developing technology with GM, Hyundai, and Honda.
Larger vehicles, like SUVs and trucks, seemed like a good fit for next-generation batteries, Hu says. Massive vehicles like the ones Americans like to drive would need lighter batteries so they could have a reasonable range without being prohibitively heavy.
The company also shifted its chemistry focus, and in 2022 it announced a battery with a silicon anode rather than a lithium metal one. That shift could help make the battery easier to manufacture.
Since then, growth in the EV market has slowed, at least in the US, partly because of major pullbacks in funding from the Trump administration. EV tax credits for drivers, a key piece of support pushing Americans toward electric options, ended in late 2025. With the market for large electric cars in trouble, Hu says, “now we have to look at every market.”
The AI materials discovery platform on which SES AI is pinning many of its hopes is called Molecular Universe. The company seeks not only to provide its software to other battery companies but also to identify new battery materials and either license or sell them to those companies.

The platform has already identified six new electrolyte materials, according to the company. Hu says one is an additive that could help improve the lifetime of batteries with silicon anodes.
One of the challenges with silicon anodes is that they tend to swell a lot during use, which can cause physical damage and prevent efficient charging and discharging. To address the problem, the industry typically uses a material called fluoroethylene carbonate (FEC), which can help form an elastic film on the anode so the battery can still charge effectively. That additive can degrade at high temperatures, though, producing gases that can harm a battery’s lifetime. The SES platform identified a compound that works like FEC but doesn’t release those gases.
The company’s long history and deep battery knowledge could help make its platform a useful tool, Hu says. He sees the actual model as less crucial than SES’s domain expertise and data from years of making and testing batteries.
“By not actually making the physical battery, we’re actually able to scale and then generate revenue faster,” he says.
But some experts are skeptical about the near-term prospects for AI materials discovery to revive the industry. “New materials development, as much as we thought that was what people wanted (and, frankly, it should be what the cell makers want)—I don’t know that that seems to be the real linchpin of the battery industry’s progress,” says Kara Rodby, a technical principal at Volta Energy Technologies, a venture capital firm that focuses on the energy storage industry.
Investors are pulling back, and a slowdown in public support is making things difficult for some parts of the battery industry, she adds: “I don’t know that the ability to discover any new material is going to unlock anything new for the battery industry at this point in time.”
2026-03-25 21:59:17
Axiom Math, a startup based in Palo Alto, California, has released a free new AI tool for mathematicians, designed to discover mathematical patterns that could unlock solutions to long-standing problems.
The tool, called Axplorer, is a redesign of an existing one called PatternBoost that François Charton, now a research scientist at Axiom, co-developed in 2024 when he was at Meta. PatternBoost ran on a supercomputer; Axplorer runs on a Mac Pro.
The aim is to put the power of PatternBoost, which was used to crack a hard math puzzle known as the Turán four-cycles problem, in the hands of anyone who can install Axplorer on their own computer.
Last year, the US Defense Advanced Research Projects Agency set up a new initiative called expMath—short for Exponentiating Mathematics—to encourage mathematicians to develop and use AI tools. Axiom sees itself as part of that drive.
Breakthroughs in math have enormous knock-on effects across technology, says Charton. In particular, new math is crucial for advances in computer science, from building next-generation AI to improving internet security.
Most of the successes with AI tools have involved finding solutions to existing problems. But finding solutions is not all that mathematicians do, says Axiom Math founder and CEO Carina Hong. Math is exploratory and experimental, she says.
MIT Technology Review met with Charton and Hong last week for an exclusive video chat about their new tool and how AI in general could change mathematics.
In the last few months, a number of mathematicians have used LLMs, such as OpenAI’s GPT-5, to find solutions to unsolved problems, especially ones set by the 20th-century mathematician Paul Erdős, who left behind hundreds of puzzles when he died.
But Charton is dismissive of those successes. “There are tons of problems that are open because nobody looked at them, and it’s easy to find a few gems you can solve,” he says. He’s set his sights on tougher challenges—“the big problems that have been very, very well studied and famous people have worked on them.” Last year, Axiom Math used another of its tools, called AxiomProver, to find solutions to four such problems in mathematics.
The Turán four-cycles problem that PatternBoost cracked is another big problem, says Charton. (The problem is an important one in graph theory, a branch of math that’s used to analyze complex networks such as social media connections, supply chains, and search engine rankings. Imagine a page covered in dots. The puzzle involves figuring out how to draw lines between as many of the dots as possible without creating loops that connect four dots in a row.)
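In graph theory terms, the puzzle asks for the maximum number of edges an n-vertex graph can have while avoiding any four-vertex cycle. For very small cases the answer can be found by brute force; the sketch below is purely illustrative of the problem statement (the function names are mine, and this is nothing like PatternBoost's actual approach, which tackles sizes far beyond exhaustive search):

```python
from itertools import combinations

def has_c4(edges, n):
    """A graph contains a 4-cycle exactly when some pair of
    vertices shares two or more common neighbors."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return any(len(adj[u] & adj[v]) >= 2
               for u, v in combinations(range(n), 2))

def max_c4_free_edges(n):
    """Exhaustively find the most edges an n-vertex graph can
    have with no 4-cycle. Feasible only for tiny n; the real
    research question concerns how this count grows with n."""
    all_edges = list(combinations(range(n), 2))
    for k in range(len(all_edges), -1, -1):  # try larger edge sets first
        for subset in combinations(all_edges, k):
            if not has_c4(subset, n):
                return k
```

For five dots, for example, the search confirms that six lines is the most you can draw: two triangles sharing a corner work, but any seventh line forces a four-dot loop.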
“LLMs are extremely good if what you want to do is derivative of something that has already been done,” says Charton. “This is not surprising—LLMs are pretrained on all the data that there is. But you could say that LLMs are conservative. They try to reuse things that exist.”
However, there are lots of problems in math that require new ideas, insights that nobody has ever had. Sometimes those insights come from spotting patterns that hadn’t been spotted before. Such discoveries can open up whole new branches of mathematics.
PatternBoost was designed to help mathematicians find new patterns. Give the tool an example and it generates others like it. You select the ones that seem interesting and feed them back in. The tool then generates more like those, and so on.
It’s a similar idea to Google DeepMind’s AlphaEvolve, a system that uses an LLM to come up with novel solutions to a problem. AlphaEvolve keeps the best suggestions and asks the LLM to improve on them.
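The shared loop, generate candidates, keep the interesting ones, and repeat, can be caricatured in a few lines. In this toy sketch (my own illustration, not code from either system), a simple mutation function stands in for the generative model and a scoring function stands in for the mathematician's judgment:

```python
import random

def evolve(seed_examples, mutate, score, rounds=10, keep=5, children=20):
    """Toy generate-and-select loop: produce variants of the current
    pool, keep the highest-scoring candidates, and iterate."""
    pool = list(seed_examples)
    for _ in range(rounds):
        candidates = pool + [mutate(random.choice(pool))
                             for _ in range(children)]
        # Keeping the current pool in `candidates` means the best
        # score never decreases between rounds.
        pool = sorted(candidates, key=score, reverse=True)[:keep]
    return pool

# Usage: maximize the number of 1-bits in a 16-bit string.
def flip_one_bit(bits):
    i = random.randrange(len(bits))
    return bits[:i] + ("1" if bits[i] == "0" else "0") + bits[i + 1:]

best = evolve(["0" * 16], flip_one_bit, score=lambda b: b.count("1"))
```

In the real tools the mutation step is a trained model and the selection step is a human (or an automated check), but the feedback structure is the same.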
Researchers have already used both AlphaEvolve and PatternBoost to discover new solutions to long-standing math problems. The trouble is that those tools run on large clusters of GPUs and are not available to most mathematicians.
Mathematicians are excited about AlphaEvolve, says Charton. “But it’s closed—you need to have access to it. You have to go and ask the DeepMind guy to type in your problem for you.”
And when Charton solved the Turán problem with PatternBoost, he was still at Meta. “I had literally thousands, sometimes tens of thousands, of machines I could run it on,” he says. “It ran for three weeks. It was embarrassing brute force.”
Axplorer is far faster and far more efficient, according to the team at Axiom Math. Charton says it took Axplorer just 2.5 hours to match PatternBoost’s Turán result. And it runs on a single machine.
Geordie Williamson, a mathematician at the University of Sydney, who worked on PatternBoost with Charton, has not yet tried Axplorer. But he is curious to see what mathematicians do with it. (Williamson still occasionally collaborates with Charton on academic projects but says he is not otherwise connected to Axiom Math.)
Williamson says Axiom Math has made several improvements to PatternBoost that (in theory) make Axplorer applicable to a wider range of mathematical problems. “It remains to be seen how significant these improvements are,” he says.
“We are in a strange time at the moment, where lots of companies have tools that they’d like us to use,” Williamson adds. “I would say mathematicians are somewhat overwhelmed by the possibilities. It is unclear to me what impact having another such tool will be.”
Hong admits that there are a lot of AI tools being pitched at mathematicians right now. Some also require mathematicians to train their own neural networks. That’s a turnoff, says Hong, who is a mathematician herself. Instead, Axplorer will walk you through what you want to do step by step, she says.
The code for Axplorer is open source and available via GitHub. Hong hopes that students and researchers will use the tool to generate sample solutions and counterexamples to problems they’re working on, speeding up mathematical discovery.
Williamson welcomes new tools and says he uses LLMs a lot. But he doesn’t think mathematicians should throw out the whiteboards just yet. “In my biased opinion, PatternBoost is a lovely idea, but it is certainly not a panacea,” he says. “I’d love us not to forget more down-to-earth approaches.”
2026-03-25 20:47:44
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
L. Stephen Coles’s brain sits in a vat at a storage facility in Arizona. It has been held there at around −146 °C for over a decade, largely undisturbed. Before he died in 2014, Coles had the brain frozen with an ambitious goal in mind: reanimation.
His friend, cryobiologist Greg Fahy, believes it could be revived one day. But other experts are less optimistic.
Still, Fahy’s research could lead to new ways to study the brain. And using cryopreservation for organ transplantation is becoming a viable reality.
Read the full story to find out what the future holds for the technology.
—Jessica Hamzelou
Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition.
Pokémon Go was the world’s first augmented-reality megahit. Released in 2016 by Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. “500 million people installed that app in 60 days,” says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out last year.
Now Niantic Spatial is using that vast trove of crowdsourced data to build a kind of world model—a buzzy new technology that grounds the smarts of LLMs in real environments. The firm wants to use it to help robots navigate more precisely.
—Will Douglas Heaven
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
Our footprint in the solar system is rapidly expanding. Programs to build permanent Moon bases and find life on Mars have transitioned from science fiction to active space agency missions. The scientists behind them will not only shed new light on the cosmos, but also reveal where humanity is headed.
To examine what the future holds in store, MIT Technology Review features editor Amanda Silverman will sit down today with award-winning science journalist and author Robin George Andrews for an exclusive subscriber-only Roundtable conversation about “The Next Era of Space Exploration.” Register here to join the session at 16:00 GMT / 12:00 PM ET / 9:00 AM PT.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI is shutting down AI video generator Sora
The app attracted at least as much controversy as acclaim. (CNBC)
+ Closing it means saying goodbye to $1 billion from Disney. (BBC)
+ OpenAI is cutting back on side projects ahead of an expected IPO. (WSJ $)
+ But it’s focusing its efforts on building a fully automated researcher. (MIT Technology Review)
2 A judge suspects the Pentagon is illegally punishing Anthropic
She labeled the DoD’s ban “troubling.” (Bloomberg)
+ Anthropic and the Pentagon are facing off in court. (Guardian)
+ The DoD wants AI companies to train on classified data. (MIT Technology Review)
3 Meta has been ordered to pay $375 million for endangering children online
Prosecutors said the company knew it put children at risk. (Engadget)
+ Meta is offering its top talent stock options as incentives for its AI push. (CNBC)
4 Arm will sell its own computer chips for the first time
It’s aimed at data centers that run AI tasks. (NYT $)
+ Arm stock jumped 13% on the news. (CNBC)
5 Manus’s founders have been barred from leaving China following Meta’s takeover
Beijing is reviewing the $2 billion acquisition of the AI startup. (FT $)
6 Baltimore has sued xAI over Grok’s fake nude images
The chatbot allegedly violated consumer protections. (Guardian)
+ There’s a big market for pornographic deepfakes of real women. (MIT Technology Review)
7 NASA plans to send a nuclear-powered spacecraft to Mars in 2028
It’ll take a payload of Ingenuity-class helicopters to the Red Planet. (NYT $)
+ NASA also wants to put a $20 billion base on the Moon. (The Verge)
8 A company is secretly turning Zoom meetings into AI-generated podcasts
WebinarTV turns the calls into content without telling anyone. (404 Media)
9 Iranian volunteers have built their own missile warning map
It fills the gap left by Iran’s lack of a public emergency alert tool. (Wired $)
+ Here’s where OpenAI’s tech could show up in Iran. (MIT Technology Review)
10 A nonprofit is sending basic income payments to AI-impacted workers
It’s starting by giving 25 to 50 people $1,000 per month. (Gizmodo)
Quote of the day
—DeepMind CEO Demis Hassabis shares his approach to AI strategy with the FT.
One More Thing

As asteroid 2024 YR4 hurtled toward Earth, astronomers determined that this massive rock posed a higher risk of impact than any object of its size in recorded history. Then, just as quickly as history was made, experts declared that the danger had passed.
This is the inside story of the network of global scientists who found, followed, planned for, and finally dismissed the most dangerous asteroid ever found—all under the tightest of timelines and with the highest of stakes. Find out how they did it.
—Robin George Andrews
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Soothe subscription fatigue with this simple cancellation tool.
+ Takashi Murakami’s reimagined Monets are pop-art magic.
+ Jump into a rabbit hole with this app that visualizes links between Wikipedia pages.
+ This playful lynx that snatched the top prize in a photo competition is a delight.
2026-03-25 19:48:13
Imagine telling a digital agent, “Use my points and book a family trip to Italy. Keep it within budget, pick hotels we’ve liked before, and handle the details.” Instead of returning a list of links, the agent assembles an itinerary and executes the purchase.

That shift, from assistance to execution, is what makes agentic AI different. It also changes the operating speed of commerce. Payment transactions already clear in milliseconds. The new acceleration is everything before the payment: discovery, comparison, decisioning, authorization, and follow-through across many systems. As humans step out of routine decisions, “good enough” data stops being good enough. In an agent-driven economy, the constraint isn’t speed; it’s trust at machine speed and scale.
Automated markets already work because identity, authority, and accountability are built in. As agents transact across businesses, that same clarity is required. Master data management (MDM)—the discipline of creating a single master record—becomes the exchange layer: tracking who an agent represents, what it can do, and where responsibility sits when value moves. Markets don’t fail from automation; they fail from ambiguous ownership. MDM turns autonomous action into legitimate, scalable trust.
To make agentic commerce safe and scalable, organizations will need more than better models. They will need a modern data architecture and an authoritative system of context that can instantly recognize, resolve, and distinguish entities. It is the difference between automation that scales and automation that needs constant human correction.
Digital commerce has long been built on two primary sides: buyers and suppliers/merchants. Agentic commerce adds a third participant that must be treated as a first-class entity: the agent acting on the buyer’s behalf.
That sounds simple until you ask the questions every enterprise will face:
The practical risk is confusion. Humans, for example, can infer that “Delta” means the airline when they are booking a flight, not the faucet company. An agent needs deterministic signals. If the system guesses wrong, it either breaks trust or forces a human confirmation step that defeats the promise of speed.
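The contrast with fuzzy matching can be made concrete. In the toy sketch below (the records, IDs, and function are invented for illustration), an ambiguous name resolves only when a context key pins it to exactly one record; anything else returns no match rather than a guess:

```python
# Toy deterministic entity resolution. A name alone is ambiguous;
# a (name, category) key either hits one authoritative record or
# nothing at all -- the system never guesses.
MERCHANTS = {
    ("delta", "airline"):  "mer_001",  # Delta Air Lines
    ("delta", "plumbing"): "mer_002",  # Delta Faucet
}

def resolve_merchant(name, category):
    """Return a merchant ID only on an exact, unambiguous match;
    None signals that more context (or a human) is needed."""
    return MERCHANTS.get((name.lower(), category))
```

A probabilistic model might return its best guess for “Delta” with 80% confidence; a deterministic lookup either answers or explicitly declines, which is what lets downstream automation proceed without a human in the loop.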
Most organizations have learned to live with imperfect data. Duplicate customer records are tolerable. Incomplete product attributes are annoying. Merchant identities can be reconciled later.
Agentic workflows change that tolerance. When an agent takes action without a human checking the output, it needs data that is close to perfect, because it cannot reliably notice when data is ambiguous or wrong the way a person can.
The failure modes are predictable, and they show up in places that matter most:
This is why unified enterprise data and entity resolution move from nice-to-have to operationally required. The more autonomy you want, the more you must invest in modern data foundations that ensure it is safe.
When leaders talk about agentic AI, they often focus on model capability: planning, tool use, and reasoning. Those are necessary, but they are not sufficient.
Agentic commerce also requires a layer that provides authoritative context at runtime. Think of it as a real-time system of context that can answer instantly and consistently:
• Is this the right person?
• Is this the right agent, acting within the right permissions?
• Is this the right merchant or payee?
• What constraints apply right now (budget, policy, risk, loyalty rules, preferred suppliers)?
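Those four questions map naturally onto a deterministic pre-authorization gate. The sketch below is a minimal illustration of that idea; every type, field, and check is an assumption of mine, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentContext:
    principal_id: str       # the person the agent acts for
    agent_id: str
    permissions: frozenset  # actions this agent may take
    budget_remaining: float

@dataclass(frozen=True)
class PaymentRequest:
    principal_id: str
    agent_id: str
    action: str
    payee_id: str
    amount: float

def authorize(ctx, req, known_payees):
    """Deterministic checks mirroring the four questions above:
    right person, right agent and permissions, right payee,
    and current constraints."""
    if req.principal_id != ctx.principal_id:
        return False, "wrong principal"
    if req.agent_id != ctx.agent_id or req.action not in ctx.permissions:
        return False, "agent lacks permission"
    if req.payee_id not in known_payees:  # resolved entity, not a guess
        return False, "unresolved payee"
    if req.amount > ctx.budget_remaining:
        return False, "over budget"
    return True, "ok"
```

Each check is a simple equality or membership test against pre-resolved records, which is what makes the gate fast enough to sit inline on every transaction.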
Two design principles matter.
First, entity truth must be deterministic enough for automation. Large language models are probabilistic by nature. That is helpful for creating options for writing and drawing. It is risky for deciding where money goes, especially in B2B and finance workflows, where “probably correct” is not acceptable.
Second, context must travel at the speed of interaction and remain portable across the entire connected network value chain. Mastercard’s experience optimizing payment flows is instructive: the more services you layer onto a transaction, the more you risk slowing it down. The pattern that scales is to pre-resolve, curate, and package the signal so that execution is lightweight.
This is also where tokenization is heading. Initiatives like Mastercard’s Agent Pay and Verifiable Intent signal a future in which consumer credentials, agent identities, permissions, and provable user intent are encoded as cryptographically secure artifacts—enabling merchants, issuers, and platforms to deterministically verify authorization and execution at machine speed.
Adoption will not be uniform. Early traction will often depend less on industry and more on the sophistication of an organization’s systems and data discipline.
That makes the next two years a window for practical preparation. Five moves stand out.
Agentic AI will not be confined to shopping carts. It will touch procurement, travel, claims, customer service, and finance operations. It will compress decision cycles and remove manual steps, but only for organizations that can supply agents with clean identity, precise entity truth, and reliable context.
The winners will treat entity truth and context as core infrastructure for automation, not as a back-office cleanup project. In commerce at machine speed, trust is not a brand attribute; it is an architectural decision encoded in identity, context, and control.
This content was produced by Reltio. It was not written by MIT Technology Review’s editorial staff.
2026-03-25 17:00:00
AI is at war. Anthropic and the Pentagon feuded over how to weaponize Anthropic’s AI model Claude; then OpenAI swept the Pentagon off its feet with an “opportunistic and sloppy” deal. Users quit ChatGPT in droves. People marched through London in the biggest protest against AI to date. If you’re keeping score, Anthropic—the company founded to be ethical—is now turbocharging US strikes on Iran.
On the lighter side, AI agents are now going viral online. OpenAI hired the creator of OpenClaw, a popular AI agent. Meta snapped up Moltbook, where AI agents seem to ponder their own existence and invent new religions like Crustafarianism. And on RentAHuman, bots are hiring people to deliver CBD gummies. The future isn’t AI taking your job. It’s AI becoming your boss and finding God.