
The Pentagon is planning for AI companies to train on classified data, defense official says

2026-03-18 06:30:46

The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. 

AI models like Anthropic’s Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean that sensitive intelligence like surveillance reports or battlefield assessments becomes embedded into the models themselves, and it would bring AI firms into closer contact with classified data than before. 

Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review. The news comes as demand for more powerful models is high: the Pentagon has reached agreements with OpenAI and Elon Musk’s xAI to operate their models in classified settings, and is implementing a new agenda to become “an ‘AI-first’ warfighting force” as the conflict with Iran escalates. (The Pentagon did not comment on its AI training plans as of publication time.)

Training would be done in a secure data center that’s accredited to host classified government projects, and where a copy of an AI model is paired with classified data, according to two people familiar with how such operations work. Though the Department of Defense would remain the owner of the data, personnel from AI companies with appropriate security clearances might in rare cases access the data, the official said. 

Before allowing this new training, though, the official said the Pentagon intends to first evaluate how accurate and effective models are when trained on non-classified data, like commercially available satellite imagery. 

The military has long used computer vision models, an older form of AI, to identify objects in images and footage it collects from drones and airplanes, and federal agencies have awarded contracts to companies to train AI models on such content. And AI companies building large language models (LLMs) and chatbots have created versions of their models fine-tuned for government work, like Anthropic’s Claude Gov, which are designed to operate across more languages and in secure environments. But the official’s comments are the first indication that AI companies building LLMs, like OpenAI and xAI, could train government-specific versions of their models directly on classified data.

Aalok Mehta, who directs the Wadhwani AI Center at the Center for Strategic and International Studies and previously led AI policy efforts at Google and OpenAI, says training on classified data, as opposed to just answering questions about it, would present new risks. 

The biggest of these, he says, is that classified information the models train on could be surfaced to anyone using the model. That would be a problem if lots of different military departments, all with different classification levels and needs for information, were to share the same AI. 

“You can imagine, for example, a model that has access to some sort of sensitive human intelligence—like the name of an operative—leaking that information to a part of the Defense Department that isn’t supposed to have access to that information,” Mehta says. That could create a security risk for the operative, one that’s difficult to perfectly mitigate if a particular model is used by more than one group within the military.

However, Mehta says, it’s not as hard to keep information from leaking to the broader world: “If you set this up right, you will have very little risk of that data being surfaced on the general internet or back to OpenAI.” The government has some of the infrastructure for this already; the security giant Palantir has won sizable contracts to build a secure environment through which officials can ask AI models about classified topics without sending the information back to AI companies. But using these systems for training is still a new challenge. 

The Pentagon, spurred by a memo from Defense Secretary Pete Hegseth in January, has been racing to incorporate more AI. That has included combat applications, like using generative AI to rank lists of targets and recommend which to strike first, as well as more administrative roles, like drafting contracts and reports.

There are lots of tasks currently handled by human analysts that the military might want to train leading AI models to perform, and doing so would require access to classified data, Mehta says. That could include learning to identify subtle clues in an image the way an analyst does, or connecting new information with historical context. The classified data could be pulled from the unfathomable amounts of text, audio, images, and video in many languages collected by intelligence services. 

It’s really hard to say which specific military tasks would require AI models to train on such data, Mehta cautions, “because obviously the Defense Department has lots of incentives to keep that information confidential, and they don’t want other countries to know what kind of capabilities we have exactly in that space.”

The Download: OpenAI’s US military deal, and Grok’s CSAM lawsuit

2026-03-17 20:26:48

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Where OpenAI’s technology could show up in Iran 

OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its tech show up, and which applications will its customers and employees tolerate? 

There’s pressure to integrate it quickly with existing military tools. One defense official revealed it could even assist in selecting strike targets. OpenAI’s partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come. 

AI has long handled military analysis. But applying generative AI’s advice to actions in the field is being tested in earnest for the first time in Iran. Read the full story

—James O’Donnell  

This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.  

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 xAI has been sued over AI-generated child sexual abuse material 
Victims say Grok was built to create porn from photos of real people. (WP $) 
+ There’s a booming market for custom deepfake porn. (MIT Technology Review)

2 In a world-first, China has approved a brain chip for commercial use 
The BCI has been approved for treating paralysis. (Nature)
+ Brain implants are slowly becoming products. (MIT Technology Review)
+ Some are getting help from generative AI. (MIT Technology Review)

3 Anthropic is recruiting a weapons expert to prevent “catastrophic misuse” of its AI 
They want experience with “chemical weapons and/or explosives defense.” (BBC)
+ Anthropic’s relationship with the White House is in tatters. (MIT Technology Review)

4 Nvidia predicts “at least” $1 trillion in AI chip revenue by the end of next year 
But the bullish forecast failed to impress Wall Street. (FT $) 
+ Nvidia has teamed up with Bolt to build European robotaxis. (Engadget)

5 OpenAI plans to shift its focus to coding and business users 
Areas where its rival Anthropic already dominates. (WSJ $) 

6 President Trump has driven a wedge between Republicans over AI 
And that divide led to a sweeping AI bill flopping in Florida. (NYT $) 
+ Trump was duped by a fake AI video again. (Reuters)

7 The US wants the WTO to permanently ban ecommerce tariffs 
Brazil, India, and South Africa oppose the plan. (Bloomberg)

8 OpenAI’s wellbeing experts opposed the launch of ChatGPT’s “adult mode” 
One said it risked creating a “sexy suicide coach” for vulnerable users. (Ars Technica)
+ AI is already transforming relationships. (MIT Technology Review)

9 A witness caught using smartglasses in court blamed ChatGPT 
He was getting real-time legal coaching through the specs. (404 Media)
+ AI is creating legal errors in courtrooms. (MIT Technology Review)

10 Some people think Benjamin Netanyahu is an AI clone 
Despite his insistence to the contrary. (The Verge)
+ Generative AI is amplifying disinformation and propaganda. (MIT Technology Review)

Quote of the day 

“The inference inflection has arrived.” 

—Nvidia CEO Jensen Huang claims we’ve reached a tipping point where AI usage is accelerating faster than its development, AP reports

One More Thing 

Meet the radio-obsessed civilian shaping Ukraine’s drone defense 


Serhii “Flash” Beskrestnov is, at least unofficially, a spy. Once a month, he drives to the frontline in a VW van equipped with radio hardware, roof antennas, and devices that monitor drones. Over several days, he searches the skies for transmissions that can help Ukrainian troops. 

Drones define this brutal conflict, and most rely on the radio communications Flash has obsessed over since childhood. Though now a civilian, the former officer has taken it upon himself to inform his country’s defense on all matters related to radio. 

Unlike traditional spies, Flash shares his discoveries with over 127,000 followers—including soldiers and officials—on social media. His work has won fans in the military, but also sparked controversy among the top brass. Read the full story

—Charlie Metcalfe  

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ A newly mapped spiral galaxy 65 million light-years away is an absolute knockout. 
+ Miss the days of TV guides? A new app recreates them for YouTube. 
+ Shameless plug: MIT’s Heirloom House shows homes can last for a millennium. 
+ This supergroup of musical dogs is creating truly fur-midable harmonies (sorry). 

Where OpenAI’s technology could show up in Iran

2026-03-17 01:06:21


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.

It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.

The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?

Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to allow its AI to be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video. 

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first. 

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people. 

Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.

The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. 

Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack. 

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world. 

Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.

Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It reinforces the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.

Nurturing agentic AI beyond the toddler stage

2026-03-16 21:00:00

Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or as an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, demands a completely different lens and approach.

Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance programs were operationally ready for it.

The accountability challenge: It’s not them, it’s you

Until now, governance has focused on model output risks, with humans in the loop before consequential decisions—such as loan approvals or job applications—were made. The emphasis was on model behavior: drift, alignment, data exfiltration, and poisoning. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between machine and human.

Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. From a liability standpoint, though, the enterprise bears the same risk whether a machine or a human operates a workflow. CX Today summarizes the situation succinctly: “AI does the work, humans own the risk,” and California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that negatively impact the larger community.

The challenge is that without code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and aligned to the pace of interaction typical for a chatbot. Autonomous AI, however, removes humans from many decisions by design, which upends that model of governance.
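
What might governance as code look like in practice? Below is a minimal sketch in Python, with hypothetical tier names, thresholds, and step names: each workflow step declares a risk tier, and the tier determines whether the agent may proceed autonomously or must pause for human sign-off.

```python
from enum import IntEnum
from typing import Callable, Optional

class RiskTier(IntEnum):
    LOW = 1     # e.g., drafting an internal summary
    MEDIUM = 2  # e.g., updating a customer record
    HIGH = 3    # e.g., approving a loan or signing a contract

# Hypothetical policy: steps at or above this tier require a human decision.
HUMAN_APPROVAL_THRESHOLD = RiskTier.HIGH

def run_step(name: str, tier: RiskTier, action: Callable[[], object],
             approve: Optional[Callable[[str], bool]] = None) -> object:
    """Execute one workflow step, enforcing the risk policy in code."""
    if tier >= HUMAN_APPROVAL_THRESHOLD:
        if approve is None or not approve(name):
            raise PermissionError(f"'{name}' requires human approval that was not granted")
    return action()

# Low-risk steps run unattended; high-risk steps halt unless a human signs off.
run_step("draft_summary", RiskTier.LOW, lambda: "draft written")
run_step("approve_loan", RiskTier.HIGH, lambda: "loan approved",
         approve=lambda step: input(f"Approve {step}? [y/N] ").lower() == "y")
```

The design choice that matters is that the policy lives in the execution path itself, not in a document a committee approved.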

Considering permissions

Much like handing a three-year-old a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system that can change critical enterprise data to operate without real-time guardrails carries significant risks. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges a single human user would be granted. To move forward successfully, governance must shift from policy set by committees to operational code built into workflows from the start.
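
As an illustration, here is a minimal sketch, assuming a tool-calling agent and hypothetical scope and user names: every tool invocation is checked against the permissions of the single human user the agent acts on behalf of, so chained actions cannot drift beyond what that user could do directly.

```python
# Hypothetical scopes per user; a real system would query an IAM directory.
USER_PERMISSIONS = {
    "analyst_01": {"read:reports", "write:drafts"},
}

# Each tool declares the scope it requires before it may run.
TOOL_SCOPES = {
    "search_reports": "read:reports",
    "save_draft": "write:drafts",
    "delete_records": "admin:database",  # no user above holds this scope
}

def call_tool(user_id: str, tool: str) -> str:
    """Gate every agent tool call behind the owning user's scopes."""
    required = TOOL_SCOPES[tool]
    granted = USER_PERMISSIONS.get(user_id, set())
    if required not in granted:
        raise PermissionError(f"{tool} needs scope '{required}', which {user_id} lacks")
    return f"{tool} executed for {user_id}"  # dispatch to the real tool here

print(call_tool("analyst_01", "search_reports"))  # allowed
call_tool("analyst_01", "delete_records")         # raises PermissionError
```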

A humorous meme about toddler behavior with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. OpenClaw followed a similar arc: it delivered a user experience closer to working with a human assistant, but the excitement shifted as security experts realized inexperienced users could easily be compromised by using it.

For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it’s imperative to allocate appropriate IT budget and labor upfront to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents.

Having a retirement plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”—a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents that risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that falls under the definition of company-owned IP, those agents may be orphaned when an employee changes departments or companies. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and its permissions.
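
A minimal sketch of such a retirement policy follows, with a hypothetical in-memory registry and employee list standing in for real IAM and HR systems: an offboarding sweep disables any agent whose owning employee ID is no longer active.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    owner_employee_id: str
    active: bool = True

# Hypothetical inventory; a real registry would live in a central database.
registry = [Agent("expense-bot", "E100"), Agent("triage-agent", "E200")]
active_employees = {"E100"}  # E200 has left the company

def retire_orphaned_agents(agents: list, employees: set) -> list:
    """Disable agents whose owning employee is gone."""
    retired = []
    for agent in agents:
        if agent.active and agent.owner_employee_id not in employees:
            agent.active = False  # in practice: also revoke tokens and keys
            retired.append(agent.agent_id)
    return retired

print(retire_orphaned_agents(registry, active_employees))  # ['triage-agent']
```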

Financial optimization is governance out of the gate

While for some executives, autonomous AI sounds like a way to improve their operating margins by limiting human capital, many are finding that the ROI for human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs were higher or much higher than expected.

The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike software financial models of per-seat costs with support and maintenance fees, AI usage is consumption-based, and costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device.

Cloud FinOps was deterministic, but generative AI and the agentic systems built on it are probabilistic. Some AI-first founders are realizing that a single agent’s token costs can run as high as $100,000 per session. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods can easily blow past the budget for hiring a junior developer.
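
Here is a minimal sketch of such a guardrail, assuming a made-up blended token price and a generic agent loop: the session tracks cumulative spend and halts before the budget is blown.

```python
PRICE_PER_1K_TOKENS = 0.01   # assumed blended rate, USD; real rates vary by model
SESSION_BUDGET_USD = 50.00   # per-session cap set by FinOps policy

def run_agent_session(steps) -> float:
    """Run agent steps until done or until the session budget is exhausted."""
    spent = 0.0
    for step in steps:
        tokens_used = step()  # each step returns the tokens it consumed
        spent += tokens_used / 1000 * PRICE_PER_1K_TOKENS
        if spent > SESSION_BUDGET_USD:
            raise RuntimeError(f"Halted: ${spent:.2f} exceeds ${SESSION_BUDGET_USD:.2f} budget")
    return spent

# Example: a runaway loop of hypothetical 200,000-token steps trips the cap.
try:
    run_agent_session([lambda: 200_000] * 100)
except RuntimeError as e:
    print(e)
```

The same pattern extends to per-agent, per-department, and per-workflow budgets.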

Keeping humans in the loop remains critical

The promise of autonomous agentic AI is the acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

The Download: glass chips and “AI-free” logos

2026-03-16 20:35:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass 

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers.  

This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area.  

If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story

—Jeremy Hsu

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The race is on to establish a globally recognized “AI-free” logo 
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data 
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $) 

3 Models are applying to be the faces of AI romance scams 
The “AI face models” are duping victims out of their money. (Wired $) 
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff 
The job cuts could offset its costly bet on AI. (Reuters $) 
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review)

5 ByteDance delayed launching a video AI model after copyright disputes 
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $) 

6 Cybersecurity investigators have exposed a huge North Korean con 
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation 
That’s more than quadruple its valuation just three months ago. (Bloomberg $) 
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)  

8 Peter Thiel has started a lecture series about the antichrist in Rome 
His plans have drawn attention from the Catholic Church. (Reuters $) 

9 Norway is fighting back against internet enshittification 
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo 
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (The Guardian)

Quote of the day 

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.” 

—Anduril founder Palmer Luckey shares his love of nukes with Axios

One More Thing 

We need a moonshot for computing 


The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots. 

There is no shortage of candidates, including quantum computing, neuromorphic computing and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems. 

The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story
 
—Brady Helwig & PJ Maykish 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ A UPS delivery driver heroically escaped from two murderous turkeys. 
+ Art’s love affair with cats is charmingly depicted in a new book. 
+ The humble pea and six other forgotten superfoods promise accessible nutritional power. 
+ MF DOOM: Long Island to Leeds is the transatlantic tale of your favorite rapper’s favorite rapper.