
Where OpenAI’s technology could show up in Iran

2026-03-17 01:06:21


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious.

It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China.

The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?

Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to agree that its AI could be used for “any lawful use,” President Trump ordered the military to stop using it, and the Pentagon designated Anthropic a supply chain risk. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video. 

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions?

For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first. 

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But relying on generative AI for advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people. 

Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it has focused less on is conversational AI systems that let soldiers query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit.

The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. 

Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack. 

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world. 

Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.

Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It reinforces the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.

Nurturing agentic AI beyond the toddler stage

2026-03-16 21:00:00

Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, requires a completely different lens and approach.

Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance programs were operationally prepared.

The accountability challenge: It’s not them, it’s you

Until now, governance has been focused on model output risks, with humans in the loop before consequential decisions were made—such as with loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between machine and human.

Today, with autonomous agents operating in complex workflows, the vision and benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is no increase in enterprise or business risk between a machine operating a workflow and a human operating one. CX Today summarizes the situation succinctly: “AI does the work, humans own the risk.” And California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting: an adult is held responsible when a child’s actions negatively impact the larger community.

The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and aligned to the pace of interaction typical for a chatbot. However, autonomous AI by design removes humans from many decisions, which breaks governance models built around human checkpoints.

Considering permissions

Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system operating without real-time guardrails that can change critical enterprise data carries significant risks.  For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond privileges that a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.  
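What that operational code could look like: below is a minimal sketch, assuming a simple three-tier risk policy that gates each agent action before it executes. The names (AgentAction, RISK_POLICY) and the tiers themselves are illustrative assumptions, not any vendor’s API.

```python
# A minimal sketch of operational governance enforced in code rather than in policy docs.
# All names and risk tiers here are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str           # e.g. "update_crm_record" or "drop_staging_table"
    target_system: str  # the corporate system the action touches

# Each verb maps to the control it requires, even at machine pace.
RISK_POLICY = {
    "read": "allow",            # low risk: proceed autonomously
    "write": "log_and_allow",   # medium risk: proceed, but emit an audit event
    "delete": "require_human",  # high risk: block until a person signs off
}

def execute_with_guardrail(action: AgentAction, verb: str) -> str:
    decision = RISK_POLICY.get(verb, "require_human")  # unknown verbs get the safest tier
    if decision == "require_human":
        return f"BLOCKED: {action.name} on {action.target_system} queued for human approval"
    if decision == "log_and_allow":
        print(f"AUDIT: {verb} {action.name} on {action.target_system}")
    return f"EXECUTED: {action.name}"

print(execute_with_guardrail(AgentAction("drop_staging_table", "warehouse"), "delete"))
```

Because the check runs inline with every action, the agent keeps machine pace for low-risk work while the tank-and-drone class of actions cannot proceed without a person.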

A humorous meme about the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. OpenClaw, for example, delivered a user experience closer to working with a human assistant, but the excitement shifted as security experts realized that inexperienced users could easily be compromised by running it.

For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler giving back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it’s imperative to allocate upfront appropriate IT budget and labor to sustain central discovery, oversight, and remediation for the thousands of employee or department-created agents.
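One hedged sketch of how that central discovery might work, assuming the organization mandates a registry where every agent carries a named owner, least-privilege scopes, and an expiring credential. All field names and the 90-day lifetime are assumptions for illustration:

```python
# Sketch of a central agent registry: every employee-built agent is discoverable,
# tied to an owner, and issued an expiring credential instead of a long-lived token.
# Field names and the 90-day lifetime are assumptions for illustration.
from datetime import datetime, timedelta, timezone

REGISTRY: dict[str, dict] = {}

def register_agent(agent_id: str, owner_employee_id: str, scopes: list[str]) -> None:
    REGISTRY[agent_id] = {
        "owner": owner_employee_id,
        "scopes": scopes,  # least-privilege permissions, reviewed centrally
        "credential_expires": datetime.now(timezone.utc) + timedelta(days=90),
        "status": "active",
    }

def find_expired(now: datetime) -> list[str]:
    """Agents whose credentials have lapsed: candidates for renewal or cleanup."""
    return [a for a, meta in REGISTRY.items() if meta["credential_expires"] < now]

register_agent("expense-triage-bot", "E12345", ["read:receipts"])
print(find_expired(datetime.now(timezone.utc)))  # empty until credentials age out
```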

Having a retirement plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”—a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents that risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that falls under the definition of company-owned IP, as an employee changes departments or companies, those agents may be orphaned. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and its permissions.
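Building on the registry sketch above, retirement could be wired into the existing offboarding process. This is a hypothetical illustration, not an existing product feature:

```python
# Hypothetical offboarding hook: when an employee leaves or changes roles, every
# agent tied to their employee ID is retired instead of left running as a zombie.
def offboard_employee(employee_id: str) -> list[str]:
    orphaned = [a for a, meta in REGISTRY.items() if meta["owner"] == employee_id]
    for agent_id in orphaned:
        REGISTRY[agent_id]["scopes"] = []          # revoke permissions immediately
        REGISTRY[agent_id]["status"] = "retired"   # keep the record for audit trails
    return orphaned  # hand this list to IT for review, reassignment, or deletion

print(offboard_employee("E12345"))  # ['expense-triage-bot']
```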

Financial optimization is governance out of the gate

While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI from replacing human labor is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected.

The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike software financial models of per-seat costs with support and maintenance fees, AI is priced on consumption, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device.

Cloud FinOps was deterministic, but generative AI and the agentic AI systems built on it are probabilistic. Some AI-first founders are realizing that a single agent’s token costs can run as high as $100,000 per session. Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods of time can easily blow past the budget for hiring a junior developer.
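Such a guardrail can be a small piece of code sitting between the agent and the model API. Below is a minimal sketch that meters spend per session and halts the chain at a hard cap; the per-token price and budget are invented numbers:

```python
# Minimal sketch of a FinOps guardrail for agents: meter token spend per session
# and halt the chain before it blows past a preset cap. Prices and caps are
# illustrative assumptions, not real rates.
PRICE_PER_1K_TOKENS = 0.01  # assumed blended rate, in dollars
SESSION_BUDGET = 50.00      # hard cap per autonomous session

class BudgetExceeded(Exception):
    pass

class MeteredSession:
    def __init__(self, budget: float = SESSION_BUDGET):
        self.spent = 0.0
        self.budget = budget

    def charge(self, tokens_used: int) -> None:
        self.spent += tokens_used / 1000 * PRICE_PER_1K_TOKENS
        if self.spent > self.budget:
            raise BudgetExceeded(f"halting agent: ${self.spent:.2f} exceeds ${self.budget:.2f} cap")

session = MeteredSession()
try:
    for step in range(100_000):             # a long, unsupervised agent chain
        session.charge(tokens_used=75_000)  # each step consumes model tokens
except BudgetExceeded as err:
    print(err)
```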

Keeping humans in the loop remains critical

The promise of autonomous agentic AI is the acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

The Download: glass chips and “AI-free” logos

2026-03-16 20:35:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass 

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers.  

This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area.  

If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story

—Jeremy Hsu

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The race is on to establish a globally recognized “AI-free” logo 
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data 
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $) 

3 Models are applying to be the faces of AI romance scams 
The “AI face models” are duping victims out of their money. (Wired $) 
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff 
The job cuts could offset its costly bet on AI. (Reuters $) 
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review

5 ByteDance delayed launching a video AI model after copyright disputes 
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $) 

6 Cybersecurity investigators have exposed a huge North Korean con 
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation 
That’s more than quadruple its valuation just three months ago. (Bloomberg $) 
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)  

8 Peter Thiel has started a lecture series about the antichrist in Rome 
His plans have drawn attention from the Catholic Church. (Reuters $) 

9 Norway is fighting back against internet enshittification 
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo 
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (The Guardian)

Quote of the day 

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.” 

—Anduril founder Palmer Luckey shares his love of nukes with Axios

One More Thing 

We need a moonshot for computing 

A grid of chips.
TIM HERMAN/INTEL

The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots. 

There is no shortage of candidates, including quantum computing, neuromorphic computing, and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems. 

The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story
 
—Brady Helwig & PJ Maykish 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 
 
+ A UPS delivery driver heroically escaped from two murderous turkeys. 
+ Art’s love affair with cats is charmingly depicted in a new book. 
+ The humble pea and six other forgotten superfoods promise accessible nutritional power. 
+ MF DOOM: Long Island to Leeds is the transatlantic tale of your favorite rapper’s favorite rapper. 

Why physical AI is becoming manufacturing’s next advantage

2026-03-13 23:16:55

For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough.

Today’s manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world.

This is where physical AI—intelligence that can sense, reason, and act in the real world—marks a decisive shift. And it is why Microsoft and NVIDIA are working together to help manufacturers move from experimentation to production at industrial scale.

The industrial frontier: Intelligence and trust, not just automation

Most early AI adoption focused on narrow optimization: automating tasks, improving utilization, and cutting costs. While valuable, that phase often created new friction, including skills gaps, governance concerns, and uncertainty about long‑term impact. Furthermore, the use cases were plentiful but rarely strategic.

The industrial frontier represents a different approach. Rather than asking how much work machines can replace, frontier manufacturers ask how AI can expand human capability, accelerate innovation, and unlock new forms of value while remaining trustworthy and controllable.

Across industries, companies that successfully move into this frontier phase share two non‑negotiables:

  • Intelligence: AI systems must understand how the business actually works: its data, workflows, and institutional knowledge.
  • Trust: As AI begins to act in high‑stakes environments, organizations must retain security, governance, and observability at every layer.

Without intelligence, AI becomes generic. Without trust, adoption stalls.

Why manufacturing is the proving ground for physical AI

Manufacturing is uniquely positioned at the center of this shift.

AI is no longer confined to planning or analytics. It is moving into physical execution: coordinating machines, adapting to real‑world variability, and working alongside people on the factory floor. Robotics, autonomous systems, and AI agents must now perceive, reason, and act in dynamic environments.

This transition exposes a critical gap. Traditional automation excels at repetition but struggles with adaptability. Human workers bring judgment and context but are constrained by scale. Physical AI closes that gap by enabling human‑led, AI‑operated systems, where people set intent and intelligent systems execute, learn, and improve over time. Humans are essential for scaled success.

Microsoft and NVIDIA: Accelerating physical AI at scale

Physical AI cannot be delivered through point solutions. It requires agentic, enterprise-grade development, deployment, and operations toolchains and workflows that connect simulation, data, AI models, robotics, and governance into a coherent system.

NVIDIA is building the AI infrastructure that makes physical AI possible, including accelerated computing, open models, simulation libraries, and robotics frameworks and blueprints that enable the ecosystem to build autonomous robotics systems that can perceive, reason, plan, and take action in the physical world. Microsoft complements this with a cloud and data platform designed to operate physical AI securely, at scale, and across the enterprise.

Together, Microsoft and NVIDIA are enabling manufacturers to move beyond pilots toward production‑ready physical AI systems that can be developed, tested, deployed, and continuously improved across heterogeneous environments spanning the product lifecycle, factory operations, and supply chain.

From intelligence to action: Human-agent teams in the factory

At the industrial frontier, AI is not a standalone system, but a digital teammate.

When AI agents are grounded in the proper operational data, embedded in human workflows, and governed end to end, they can assist with tasks such as:

  • Optimizing production lines in real time
  • Coordinating maintenance and quality decisions
  • Adapting operations to supply or demand disruptions
  • Accelerating engineering and product lifecycle decisions

For example, manufacturers are beginning to use simulation‑grounded AI agents to evaluate production changes virtually before deploying them on the factory floor, reducing risk while accelerating decision‑making.

Crucially, frontier manufacturers design these systems so humans remain in control. AI executes, monitors, and recommends, while people provide intent, oversight, and judgment. This balance allows organizations to move faster without losing confidence or control.
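As a rough illustration of that division of labor, the sketch below pairs the simulation-first pattern described above with a human approval gate; simulate_change, its return value, and the approver role are all invented for the example:

```python
# Sketch of "humans in control" for physical AI: the agent evaluates a production
# change in simulation and can only recommend it; deployment requires a named
# human approver. simulate_change and its threshold are invented for illustration.
def simulate_change(change: dict) -> float:
    """Stand-in for a simulation run; returns predicted throughput gain in percent."""
    return 3.2

def propose_change(change: dict, approver: str) -> str:
    predicted_gain = simulate_change(change)  # test virtually before touching the line
    if predicted_gain <= 0:
        return "rejected in simulation: no predicted benefit"
    # The agent executes analysis and recommends; the human supplies intent and judgment.
    return (f"pending approval by {approver}: predicted gain {predicted_gain:.1f}% "
            f"-- nothing deploys to the floor until signed off")

print(propose_change({"line": 7, "param": "conveyor_speed", "delta": "+5%"}, approver="shift_lead"))
```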

The role of trust in scaling physical AI

As physical AI systems scale, trust becomes the limiting factor.

Manufacturers must ensure that AI systems are secure, observable, and operating within policy, especially when they influence safety‑critical or mission‑critical processes. Governance cannot be an afterthought; it must be engineered into the platform itself.

This is why frontier manufacturers treat trust as a first‑class requirement, pairing innovation with visibility, compliance, and accountability. Only then can physical AI move from promising demonstrations to enterprise‑wide deployment.

Why this moment matters—and what’s next

The convergence of AI agents, robotics, simulation, and real‑time data marks an inflection point for manufacturing. What was once experimental is becoming operational. What was once siloed is becoming connected.

At NVIDIA GTC 2026, Microsoft and NVIDIA will demonstrate how this collaboration supports physical AI systems that manufacturers can deploy today and scale responsibly tomorrow. From simulation‑driven development to real‑world execution, the focus is on helping manufacturers cross the industrial frontier with confidence.

For manufacturing leaders, the question is no longer whether physical AI will reshape operations, but how quickly they can adopt it responsibly, at scale, and with trust built in from the start.

Discover more with Microsoft at NVIDIA GTC 2026.

This content was produced by Microsoft. It was not written by MIT Technology Review’s editorial staff.

The Download: how AI is used for military targeting, and the Pentagon’s war on Claude

2026-03-13 20:16:56

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Defense official reveals how AI chatbots could be used for targeting decisions 

The US military might use generative AI systems to rank targets and recommend which to strike first, according to a Defense Department official. 

A list of possible targets could first be fed into a generative AI system that the Pentagon is fielding for classified settings. Humans might then ask the system to analyze the information and prioritize the targets. They would then be responsible for checking and evaluating the results and recommendations. 

OpenAI’s ChatGPT and xAI’s Grok could soon be at the center of exactly these sorts of high-stakes military decisions. Read the full story

—James O’Donnell 

The must-reads 

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 

1 The Pentagon’s CTO claims Claude would “pollute” the defense supply chain 
He blamed a “policy preference” that’s baked into the model. (CNBC)
+ Anthropic is reeling from OpenAI’s “compromise” with the DoD. (MIT Technology Review)

2 An ex-DOGE staffer has been accused of stealing social security data 
Then taking the information to his new job in the IT division of a government contractor. (Wired)
+ He allegedly used a thumb drive to steal the data. (Washington Post)

3 Ukraine is offering its battlefield data for AI training 
Allies can access the data to train drones and other UAVs. (Reuters)  
+ Europe has a drone-filled vision for the future of war. (MIT Technology Review)  

4 Meta has postponed its latest AI launch over performance issues 
It fell short of rival models from Google, OpenAI, and Anthropic. (NYT $) 
+ The company’s former AI chief is betting against LLMs. (MIT Technology Review)

5 X could be breaching sanctions on Iran 
An account for Iran’s new supreme leader may break US rules. (Engadget)
+ Hacker group Handala has become the face of Iranian cyberwarfare. (Wired)
+ AI is turning the conflict into theater. (MIT Technology Review)  

6 A landmark social media addiction trial is wrapping up 
It’ll decide whether the platforms are liable for harms caused to children. (The Guardian)  
+ AI companions are the next stage of digital addiction. (MIT Technology Review)

7 Western AI models have “failed spectacularly” on agriculture in the Global South 
The biggest problem? They’re not trained on local data. (Rest of World)

8 Internet outages in Moscow are sparking surging sales of pagers 
The disruptions have been blamed on new tests of web controls. (Bloomberg $) 

9 Why is China obsessed with OpenClaw? 
Lobster-mania is spreading to the general public. (SCMP)
+ Tech-savvy “tinkerers” are cashing in on the craze. (MIT Technology Review)

10 Hollywood has soured on Silicon Valley 
Movies and TV shows have swapped eccentric founders for megalomaniac moguls. (NYT $) 

Quote of the day 

“We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” 

—OpenAI CEO Sam Altman makes a new pitch to investors at a BlackRock event, Gizmodo reports. 

One More Thing 

How the Ukraine-Russia war is reshaping the tech sector in Eastern Europe 

Latvia’s annual national defense exercises took place in September and October, as the Ukraine-Russia war nears its third anniversary.
GATIS INDRĒVICS/LATVIAN MINISTRY OF DEFENSE

When Latvian startup Global Wolf Motors first pitched the idea of a military scooter, it was met with skepticism—and a wall of bureaucracy. Then Russia launched its full-scale invasion of Ukraine in February 2022, and everything changed.  

Suddenly, Ukrainian combat units wanted any equipment they could get their hands on, and they were willing to try out ideas that might not have made the cut in peacetime. 

Within weeks, the scooters were on the front line—and even behind it, being used on daring reconnaissance missions. It signaled that a new product category for companies along Ukraine’s borders had opened: civilian technologies repurposed for military needs. Read the full story

—Peter Guest 

We can still have nice things 

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.) 

+ A new mini magnet could slash the costs of MRIs and nuclear fusion.  
+ This interactive map of Earth offers new routes to facts about our planet. 
+ Escape the news cycle with this deep dive into the power of fantasy and nature. (Big thanks to reader and MIT alum Vicki for the find!) 
+ Reports of reading’s death are greatly exaggerated.