2026-03-18 20:38:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned.
AI models like Anthropic’s Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks.
It would embed sensitive intelligence—like surveillance reports or battlefield assessments—into the models themselves. It would also bring AI firms closer to classified data than ever before. Read the full story.
—James O’Donnell
The way the world currently deals with nuclear waste is as creative as it is varied: drown it in water pools, encase it in steel, bury it hundreds of meters underground. But an approaching wave of new reactors could introduce fresh challenges to nuclear waste management.
The new designs and materials could require some engineering solutions. And there’s a huge range of them coming, meaning there’s an equally wide range of potential waste types to handle. Read the full story.
—Casey Crownhart
This story is part of our MIT Technology Review Explains series, which untangles the complex, messy world of technology to show you what’s coming next. Check out the full series here.
For decades, handmade narco subs have been among the cocaine trade’s most elusive and productive workhorses, ferrying tons of drugs from Colombia to the rest of the world.
Now off-the-shelf technology—Starlink terminals, plug-and-play nautical autopilots, high-resolution video cameras—may be advancing that cat-and-mouse game into a new phase.
Uncrewed subs could move more cocaine over longer distances, and they wouldn’t put human smugglers at risk of capture. Law enforcement agencies are only just beginning to grapple with the consequences.
—Eduardo Echeverri López
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
1 Nvidia has joined the OpenClaw craze with the launch of NemoClaw
It’s adding privacy and security to the AI agent platform. (Business Insider)
+ Chinese AI stocks surged on the news. (Bloomberg $)
+ Nvidia has also gained Beijing’s approval to sell H200 chips. (Reuters)
+ Tech-savvy “Tinkerers” are cashing in on China’s OpenClaw frenzy. (MIT Technology Review)
2 Microsoft is mulling legal action over the Amazon-OpenAI cloud deal
Citing a potential violation of its exclusive partnership. (FT $)
3 The Pentagon wants to mass-produce the drones it used to strike Iran
The kamikaze drone, called Lucas, is a copy of Iran’s Shahed UAV. (WSJ $)
+ The Shaheds have proven highly effective in the conflict. (NBC News)
+ AI is turning the war into theater. (MIT Technology Review)
4 US officials say Anthropic can’t be trusted with warfighting systems
They want to oust the AI company from all government agencies. (Wired $)
+ OpenAI has taken advantage of the spat. (MIT Technology Review)
+ Here’s how GenAI may be used in strikes. (MIT Technology Review)
5 China is penalizing people linked to Meta’s $2 billion acquisition of Manus
It’s seen as an attempt to stop Chinese AI leaders from relocating. (NYT)
6 DeepSeek appears to be quietly testing a next-generation AI model
An official launch of the new system may be imminent. (Reuters)
+ DeepSeek ripped up the AI playbook. (MIT Technology Review)
7 Meta is ending VR access to Horizon Worlds in June
It was Meta’s flagship metaverse project. (Engadget)
+ And became notorious for sexual harassment. (MIT Technology Review)
8 “Sensorveillance” is turning consumer tech into tracking tools for police
It’s turning our most personal devices into digital informants. (IEEE Spectrum)
+ In the surveillance capitalism era, we need to rethink privacy. (MIT Technology Review)
9 Two landmark lawsuits could transform social media for the better
They target the dangers that the platforms pose to children. (New Scientist)
10 A DNA discovery suggests life on Earth may have been seeded from space
An asteroid may have transported the ingredients for life to Earth. (404 Media)
Quote of the day
—Nvidia CEO Jensen Huang tells CNBC why OpenClaw is a big step forward for AI.
One More Thing
It’s been just over a year since Kathleen Hicks stepped down as US deputy secretary of defense.
As the highest-ranking woman in Pentagon history, Hicks shaped US military posture through an era defined by renewed competition between powerful countries and a scramble to modernize defense technology.
In this conversation with MIT Technology Review, Hicks reflects on how the Pentagon is adapting—or failing to adapt—to a new era of geopolitical competition. She discusses China’s technological rise, the future of AI in warfare, and her signature initiative: Replicator. Read the full story.
—Caiwei Chen
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Give typing a tuneful tempo by turning your keyboard into a piano with this new tool.
+ Barry’s Border Points is a fascinating photographic journey through the lines that divide us.
+ Feast your eyes on these five architectural contenders for “a new wonder of the world.”
+ This Ancient Rome cosplay game lets you live your best gladiator life.
2026-03-18 17:00:00
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.
The way the world currently deals with nuclear waste is as creative as it is varied: Drown it in water pools, encase it in steel, bury it hundreds of meters underground.
These methods are how the nuclear industry safely manages the 10,000 metric tons of spent fuel waste that reactors produce as they churn out 10% of the world’s electricity every year. But as new nuclear designs emerge, they could introduce new wrinkles for nuclear waste management.
Most operating reactors at nuclear power plants today follow a similar basic blueprint: They’re fueled with low-enriched uranium and cooled with water, and most are gigantic facilities sited at centralized power plants. But a large menu of new reactor designs that could come online in the next few years will likely require tweaks to ensure that existing systems can handle their waste.
“There’s no one answer about whether this panoply of new reactors and fuel types are going to make waste management any easier,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists.
Nuclear waste can be roughly split into two categories: low-level waste, like contaminated protective equipment from hospitals and research centers, and high-level waste, which requires more careful handling.
The vast majority by volume is low-level waste. This material can be stored onsite and often, once its radioactivity has decayed enough, largely handled like regular trash (with some additional precautions). High-level waste, on the other hand, is much more radioactive and often quite hot. This second category consists largely of spent fuel, a combination of materials including uranium-235, which is the fissile portion of nuclear fuel—the part that can sustain the chain reaction required for nuclear power plants to work. The material also contains fission products—the sometimes radioactive by-products left behind when atoms split and release energy.
Many experts agree that the best long-term solution for spent fuel and other high-level nuclear waste is a geologic repository—essentially, a very deep, very carefully managed hole in the ground. Finland is the furthest along with plans to build one, and its site on the southwest coast of the country should be operational this year.
The US designated a site for a geological repository in the 1980s, but political conflict has stalled progress. So today, used fuel in the US is stored onsite at operational and shuttered nuclear power plants. Once it’s removed from a reactor, it’s typically placed into wet storage, essentially submerged in pools of water to cool down. The material can then be put in protective cement and steel containers called dry casks, a stage known as dry storage.
Experts say the industry won’t need to entirely rewrite this playbook for the new reactor designs.
“The way we’re going to manage spent fuel is going to be largely the same,” says Erik Cothron, manager of research and strategy at the Nuclear Innovation Alliance, a nonprofit think tank focused on the nuclear industry. “I don’t stay up late at night worried about how we’re going to manage spent fuel.”
But new designs and materials could require some engineering solutions. And there’s a huge range of reactor designs, meaning there’s an equally wide range of potential waste types to handle.
Some new nuclear reactors will look quite similar to operating models, so their spent fuel will be managed in much the same way that it is today. But others use novel materials as coolants and fuels.
“Unusual materials will create unusual waste,” says Syed Bahauddin Alam, an assistant professor of nuclear, plasma, and radiological engineering at the University of Illinois Urbana-Champaign.
Some advanced designs could increase the volume of material that needs to be handled as high-level waste. Take reactors that use TRISO (tri-structural isotropic) fuel, for example. TRISO contains a uranium kernel surrounded by several layers of protective material and then embedded in graphite shells. The graphite that encases TRISO will likely be lumped together with the rest of the spent fuel, making the waste much bulkier than current fuel.
Today, separating those layers would be difficult and expensive, according to a 2024 report from the Nuclear Innovation Alliance. That means the entire package would be lumped together as high-level waste.
The company X-energy is designing high-temperature gas-cooled reactors that use TRISO fuel. It has already submitted plans for dealing with spent fuel to the Nuclear Regulatory Commission, which oversees reactors in the US. The fuel’s form could actually help with waste management: The protective shells used in TRISO eliminate X-energy’s need for wet storage, allowing for dry storage from day one, according to the company.
Liquid-fueled molten-salt reactors, another new type, could increase waste volume too. In these designs, fuel and coolant are not kept separate as in most reactors; instead, the fuel is dissolved directly into a molten salt that’s used as the coolant. That means the entire vat of molten salt would need to be handled as high-level waste.
On the other hand, some other reactor designs could produce a smaller volume of spent fuel, but that isn’t necessarily a smaller problem. Fast reactors, for example, achieve a higher burn-up, consuming more of the fissile material and extracting more energy from their fuel. That means spent fuel from these reactors typically has a higher concentration of fission products and emits more heat. And that heat could be the killer factor for designing waste solutions.
Spent fuel needs to be kept relatively cool, so it doesn’t melt and release hazardous by-products. Too much heat in a repository could also damage the surrounding rock. “Heat is what really drives how much you can put inside a repository,” says Paul Dickman, a former Department of Energy and NRC official.
Some spent fuel could require chemical processing prior to disposal, says Allison MacFarlane, director of the School of Public Policy and Global Affairs at the University of British Columbia and a former chair of the NRC. That could add complication and cost.
In fast reactors cooled by sodium metal, for example, the coolant can get into the fuel and fuse to its casing. Separation could be tricky, and sodium is highly reactive with water, so the spent fuel will require specialized treatment.
TerraPower’s Natrium reactor, a sodium fast reactor that received a construction permit from the NRC in early March, is designed to safely manage this challenge, says Jeffrey Miller, senior vice president for business development at TerraPower. The company has a plan to blow nitrogen over the material before it’s put into wet storage pools, removing the sodium.
Regardless of what materials are used, even just changing the size of reactors and where they’re sited could introduce complications for waste management.
Some new reactors are essentially smaller versions of the large reactors used today. These small modular reactors and microreactors may produce waste that can be handled in the same way as waste from today’s conventional reactors. But for places like the US, where waste is stored onsite, it would be impractical to have a large number of small sites that each host their own waste.
Some companies are looking at sending their microreactors, and the waste material they produce, back to a single location, potentially the same one where reactors are manufactured.
Companies should be required to think carefully about waste and build management protocols into their designs, and they should be held responsible for the waste they produce, UBC’s MacFarlane says.
She also notes that so far, planning for waste has relied on research and modeling, and the reality will become clear only once the reactors are actually operational. As she puts it: “These reactors don’t exist yet, so we don’t really know a whole lot, in great gory detail, about the waste they’re going to produce.”
2026-03-18 06:30:46
The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned.
AI models like Anthropic’s Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded into the models themselves, and it would bring AI firms into closer contact with classified data than before.
Training versions of AI models on classified data is expected to make them more accurate and effective in certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: The Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.)
Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies might in rare cases access the data if they have appropriate security clearance, the official said.
Before allowing this new training, though, the official said, the Pentagon intends to evaluate how accurate and effective models are when trained on nonclassified data, like commercially available satellite imagery.
The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.
Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks.
The biggest of these, he says, is that classified information these models train on could be resurfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI.
“You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military.
However, Mehta says, it’s not as hard to keep information contained from the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts for building a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge.
The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. It has been used in combat, where generative AI has ranked lists of targets and recommended which to strike first, and in more administrative roles, like drafting contracts and reports.
There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform and would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video, in many languages, that intelligence services collect.
It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”
If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).
2026-03-17 20:26:48
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its tech show up, and which applications will its customers and employees tolerate?
There’s pressure to integrate it quickly with existing military tools. One defense official revealed it could even assist in selecting strike targets. OpenAI’s partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come.
AI has long handled military analysis. But applying generative AI’s advice to actions in the field is being tested in earnest for the first time in Iran. Read the full story.
—James O’Donnell
This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 xAI has been sued over AI-generated child sexual abuse material
Victims say Grok was built to create porn from photos of real people. (WP $)
+ There’s a booming market for custom deepfake porn. (MIT Technology Review)
2 In a world-first, China has approved a brain chip for commercial use
The BCI has been approved for treating paralysis. (Nature)
+ Brain implants are slowly becoming products. (MIT Technology Review)
+ Some are getting help from generative AI. (MIT Technology Review)
3 Anthropic is recruiting a weapons expert to prevent “catastrophic misuse” of its AI
They want experience with “chemical weapons and/or explosives defense.” (BBC)
+ Anthropic’s relationship with the White House is in tatters. (MIT Technology Review)
4 Nvidia predicts “at least” $1 trillion in AI chip revenue by the end of next year
But the bullish forecast failed to impress Wall Street. (FT $)
+ Nvidia has teamed up with Bolt to build European robotaxis. (Engadget)
5 OpenAI plans to shift its focus to coding and business users
Areas where its rival Anthropic already dominates. (WSJ $)
6 President Trump has driven a wedge between Republicans over AI
And that divide led to a sweeping AI bill flopping in Florida. (NYT $)
+ Trump was duped by a fake AI video again. (Reuters)
7 The US wants the WTO to permanently ban ecommerce tariffs
Brazil, India, and South Africa oppose the plan. (Bloomberg)
8 OpenAI’s wellbeing experts opposed the launch of ChatGPT’s “adult mode”
One said it risked creating a “sexy suicide coach” for vulnerable users. (Ars Technica)
+ AI is already transforming relationships. (MIT Technology Review)
9 A witness caught using smartglasses in court blamed ChatGPT
He was getting real-time legal coaching through the specs. (404 Media)
+ AI is creating legal errors in courtrooms. (MIT Technology Review)
10 Some people think Benjamin Netanyahu is an AI clone
Despite his insistence to the contrary. (The Verge)
+ Generative AI is amplifying disinformation and propaganda. (MIT Technology Review)
Quote of the day
—Nvidia CEO Jensen Huang claims we’ve reached a tipping point where AI usage is accelerating faster than its development, AP reports.
One More Thing
Serhii “Flash” Beskrestnov is, at least unofficially, a spy. Once a month, he drives to the frontline in a VW van equipped with radio hardware, roof antennas, and devices that monitor drones. Over several days, he searches the skies for transmissions that can help Ukrainian troops.
Drones define this brutal conflict, and most rely on the radio communications Flash has obsessed over since childhood. Though now a civilian, the former officer has taken it upon himself to inform his country’s defense on all matters related to radio.
Unlike traditional spies, Flash shares his discoveries with over 127,000 followers—including soldiers and officials—on social media. His work has won fans in the military, but also sparked controversy among the top brass. Read the full story.
—Charlie Metcalfe
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ A newly mapped spiral galaxy 65 million light-years away is an absolute knockout.
+ Miss the days of TV guides? A new app recreates them for YouTube.
+ Shameless plug: MIT’s Heirloom House shows homes can last for a millennium.
+ This supergroup of musical dogs is creating truly fur-midable harmonies (sorry).
2026-03-17 01:06:21
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.
It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.
The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?
Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to agree that its AI could be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)
If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video.
A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?
For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first.
It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.
At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people.
Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.
The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses.
Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack.
In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world.
Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.
Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It serves the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.
2026-03-16 21:00:00
Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator of additional tests needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, takes a completely different lens and approach.
Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance principles were operationally prepared.
Until now, governance has been focused on model output risks, with humans in the loop before consequential decisions were made—such as with loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between machine and human.
Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is no difference in enterprise or business risk between a machine operating a workflow and a human operating one. CX Today summarizes the situation succinctly: “AI does the work, humans own the risk,” and California’s AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, in which an adult is held responsible for a child’s actions that negatively impact the larger community.
The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance had been static and aligned to the pace of interaction typical of a chatbot. However, autonomous AI by design removes humans from many decisions, a shift that static governance cannot accommodate.
Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system operating without real-time guardrails that can change critical enterprise data carries significant risks. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond privileges that a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.
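To make that concrete, here is a minimal, hypothetical sketch of governance as operational code: every agent action passes through a policy gate that compares the action’s risk tier against an autonomy ceiling before anything executes. The risk tiers, action names, and threshold below are illustrative assumptions, not a reference to any specific product or regulation.

```python
# Hypothetical sketch: governance enforced in code, not by committee.
# Risk tiers, actions, and the autonomy ceiling are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1     # e.g., read-only lookups
    MEDIUM = 2  # e.g., drafting documents for review
    HIGH = 3    # e.g., writing to core systems of record

# Policy: the highest risk tier the agent may execute without a human.
AUTONOMY_CEILING = Risk.MEDIUM

@dataclass
class AgentAction:
    name: str
    risk: Risk

def execute(action: AgentAction) -> str:
    """Run the action only if policy allows; otherwise escalate to a human."""
    if action.risk.value > AUTONOMY_CEILING.value:
        return f"ESCALATED: {action.name!r} requires human approval"
    return f"EXECUTED: {action.name!r}"

print(execute(AgentAction("summarize_ticket", Risk.LOW)))         # runs autonomously
print(execute(AgentAction("update_customer_record", Risk.HIGH)))  # routed to a human
```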
A humorous meme around the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. For example, OpenClaw delivered a user experience closer to working with a human assistant, but the excitement shifted as security experts realized inexperienced users could easily be compromised by using it.
For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler giving back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it’s imperative to allocate appropriate IT budget and labor up front to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents.
Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”—a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents that risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employees will explode. Since an AI agent is a program that would fall under the definition of company-owned IP, those agents may be orphaned when an employee changes departments or companies. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and permissions, along the lines of the sketch below.
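What follows is a minimal sketch of such an offboarding hook, assuming a hypothetical agent registry keyed by employee ID; the registry structure and credential revocation are illustrative placeholders, not a real API.

```python
# Hypothetical sketch: retire every agent tied to a departing employee.
# The registry and token handling are illustrative, not a real system.
agent_registry = {
    "emp-1001": [{"agent_id": "a-17", "token": "tok-abc", "active": True}],
    "emp-1002": [{"agent_id": "a-42", "token": "tok-def", "active": True}],
}

def decommission_agents(employee_id: str) -> list[str]:
    """Disable each agent registered to the employee and revoke its credential."""
    retired = []
    for agent in agent_registry.get(employee_id, []):
        agent["active"] = False
        agent["token"] = None  # stand-in for revoking a real credential
        retired.append(agent["agent_id"])
    return retired

# Called from the HR offboarding workflow when emp-1001 leaves the company.
print(decommission_agents("emp-1001"))  # ['a-17']
```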
While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected.
The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike software financial models with per-seat costs plus support and maintenance fees, use of AI is consumption-based, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device.
Cloud FinOps was deterministic, but generative AI and the agentic AI systems built on it are probabilistic. Some AI-first founders are realizing that a single agent’s token costs can be as high as $100,000 per session. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods of time can easily blow past the budget for hiring a junior developer.
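One way such a guardrail could be built in from the start is a hard spend cap enforced per agent session, as in this hypothetical sketch; the per-token rate and budget are made-up numbers, and a real system would meter actual provider billing rather than a flat rate.

```python
# Hypothetical sketch: a per-session spend cap for an autonomous agent.
# Rate and budget are illustrative; real metering would use provider billing.
class BudgetExceeded(Exception):
    pass

class MeteredSession:
    def __init__(self, budget_usd: float, usd_per_1k_tokens: float = 0.01):
        self.budget_usd = budget_usd
        self.rate = usd_per_1k_tokens
        self.spent_usd = 0.0

    def record(self, tokens: int) -> None:
        """Tally one model call's token usage; halt the session if over budget."""
        self.spent_usd += tokens / 1000 * self.rate
        if self.spent_usd > self.budget_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} of ${self.budget_usd:.2f} budget"
            )

session = MeteredSession(budget_usd=5.00)
try:
    for step in range(1000):           # an unsupervised agent loop
        session.record(tokens=20_000)  # each step is one hypothetical model call
except BudgetExceeded as err:
    print("Session halted:", err)
```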
The promise of autonomous agentic AI is acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.
This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.