2026-04-24 17:00:00
I don’t need to tell you that AI is everywhere.
Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays.
A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients?
We don’t yet have a good answer.
That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto, argue in a paper published in the journal Nature Medicine this week.
Wiens tells me she has spent years investigating how AI might benefit health care. For the first decade of her career she tried to pitch the technology to clinicians. Over the last few years, she says, it’s as though “a switch flipped.” Health-care providers not only appear much more interested in the promise of these technologies, they have also begun rapidly deploying them.
The problem is that many providers aren’t rigorously assessing how well they actually work.
Take “ambient AI” tools, for example. Also known as AI scribes, they “listen” to conversations between doctors and patients, then transcribe and summarize them. Multiple tools are available, and they are already being widely adopted by health-care providers.
A few months ago, a staffer at a major New York medical center who develops AI tools for doctors told me that, anecdotally, medics are “overjoyed” by the technology—it allows them to focus all their attention on their patients during appointments, and it saves them from a lot of time-consuming paperwork. Early studies support these anecdotes and suggest that the tools can reduce clinician burnout.
That’s all well and good. But what about patient health outcomes? “[Researchers] have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” says Wiens. “We just don’t know.”
The same holds true for other AI-based technologies used in health-care settings. Some are used to predict patients’ health trajectories, others to recommend treatments. They are designed to make health care more effective and efficient.
But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest X-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately: What will this mean for those patients?
The answers to those questions might vary between hospitals or departments and could depend on clinical workflows, says Wiens. They might also differ between doctors at various stages of their careers.
Take the AI scribes, as another example. Some research on AI use in education suggests that such tools can impact the way people cognitively process information. Could they affect the way a doctor processes a patient’s information? Will the tools affect the way medical students think about patient data in a way that impacts care? These questions need to be explored, says Wiens. “We like things that save us time, but we have to think about the unintended consequences of this,” she says.
In a study published in January 2025, Paige Nong at the University of Minnesota and her colleagues found that around 65% of US hospitals used AI-assisted predictive tools. Only two-thirds of those hospitals evaluated their accuracy. Even fewer assessed them for bias.
The number of hospitals using these tools has probably increased since then, says Wiens. Those hospitals, or entities other than the companies developing the tools, need to evaluate how much they help in specific settings. There’s a possibility that they could leave patients worse off, although it’s more likely that AI tools just aren’t as beneficial as health-care providers might assume they are, says Wiens.
“I do believe in the potential of AI to really improve clinical care,” says Wiens, who stresses that she doesn’t want to stop the adoption of AI tools in health care. She just wants more information about how they are affecting people. “I have to believe that in the future it’s not all AI or no AI,” she says. “It’s somewhere in between.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
2026-04-23 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
When we talk about “nature,” we usually mean something untouched by humans. But little of that world exists today.
From microplastics in rainforest wildlife to artificial light in the Arctic Ocean, human influence now reaches every corner of Earth. In this context, what even is nature? And should we employ technology to try to make the world more “natural”?
In our new Nature issue, MIT Technology Review grapples with these questions. We investigate birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice, within ourselves, and in the far future on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer.
Together, these stories examine how technology has altered our planet—and how it might be used to repair it. Subscribe now to read the full print issue.
After ChatGPT launched in late 2022, the OpenAI chatbot became an everyday everything app for hundreds of millions of people. Its success led to LLMs being heralded as the future, and the ensuing frenzy consumed the entire tech industry, with companies racing to spin up rival products.
But what’s the next big thing after LLMs? More LLMs—but better. Let’s call them LLMs+. Find out how they’re set to become cheaper, more efficient, and more powerful.
—Will Douglas Heaven
LLMs+ is on our list of the 10 Things That Matter in AI Right Now, MIT Technology Review’s guide to what’s really worth your attention in the busy, buzzy world of AI. We’ll be unpacking one item from the list each day here in The Download, so stay tuned.
Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study published in Nature Energy suggests that even if that future arrives, it might not come cheap.
The research team aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate—the percentage by which its cost declines every time capacity doubles. Their findings offer new clues on the technology’s path to deployment. Read the full story.
—Casey Crownhart
This story is from The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump signaled he’s open to reversing the Anthropic ban
What that really means in practice remains to be seen. (Reuters $)
+ Anthropic says there’s no “kill switch” for its AI. (Axios)
+ “Humans in the loop” in AI warfare is an illusion. (MIT Technology Review)
2 SpaceX plans to manufacture its own GPUs
To support the company’s growing AI ambitions. (Reuters $)
+ Musk is shifting SpaceX’s focus from Mars to AI ahead of its IPO. (NYT $)
+ SpaceX and Tesla may be on a collision course. (FT $)
3 Chinese tech giant Tencent has unveiled its first flagship AI model
A former OpenAI researcher is at the helm. (SCMP)
+ Chinese open models are spreading fast. (MIT Technology Review)
4 High earners are racing ahead on AI, deepening workplace divides
The division in adoption risks widening inequality. (FT $)
+ Startups are bragging they spend more on AI than staff. (404 Media)
5 Thousands of Samsung workers are demanding a new share of AI profits
Chip-division employees want 15% of the operating profit. (Bloomberg $)
+ Here’s why opinion on AI is so divided. (MIT Technology Review)
6 AI is helping mediocre Korean hackers steal millions
They’re vibe coding their malware. (Wired $)
+ AI is making online crimes easier. (MIT Technology Review)
7 Kalshi suspended three political candidates for betting on their own races
Including a Democrat and a Republican running for Congress. (CNN)
+ And an independent candidate who said he did it to make a point. (Gizmodo)
+ Lawmakers argue that prediction markets are a loophole for gambling. (NPR)
8 A ping-pong robot is beating elite human players for the first time
The Sony AI system was trained with reinforcement learning. (New Scientist)
+ Just days earlier, a humanoid smashed the human half-marathon record. (AP)
9 Crypto scammers are luring ships into the Strait of Hormuz
By falsely promising safe passage. (Ars Technica)
10 ‘Age tech’ could help us grow old comfortably at home
Apps, wearables, and remote monitoring could fill caregiving gaps. (NYT $)
Quote of the day
—Ross Gerber, the chief executive of Gerber Kawasaki, an investment firm that owns SpaceX shares, tells the New York Times that he’s unimpressed by Musk’s changing goals for the aerospace company.
One More Thing

After hundreds went missing in Maui’s deadly fires, victims were identified with rapid DNA analysis—an increasingly vital tool for putting names to the dead in mass-casualty events.
The technology helped identify victims within just a few hours and bring families some closure more quickly than ever before. But it also previews a dark future marked by the rising frequency of catastrophic events.
Find out how this forensic breakthrough is preparing us for a more volatile world.
—Erika Hayasaki
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ This fascinating dive into botanical history reveals the origins of the first true plants.
+ Here’s how to use Google’s reference desk to find what ordinary search engines miss.
+ Watch duct tape get deconstructed to reveal the physics behind its legendary stickiness.
+ When Radiohead covers Joy Division, the result is a beautiful intersection of two legendary musical eras.
2026-04-23 18:00:00
Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap.
Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.
It’s tricky to make any predictions about the cost of a technology that doesn’t exist yet. But when there are billions of dollars of public and private funding on the line, it’s worth considering what assumptions we’re making about our future energy mix and its cost.
One crucial measure is a metric called experience rate—the percentage by which an energy technology’s cost declines every time capacity doubles. A higher figure means a quicker price drop and better economic gains with scaling.
Historically, the experience rate is 12% for onshore wind power, 20% for lithium-ion batteries, and 23% for solar modules. Other energy technologies haven’t gotten cheap quite as quickly—fission is at just 2%.
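The arithmetic behind the experience rate is straightforward: a rate r means cost is multiplied by (1 − r) with every doubling of cumulative capacity. A minimal sketch, using the historical rates quoted above (the function name and the choice of 10 doublings are illustrative, not from the study):

```python
def cost_after_doublings(initial_cost: float, rate: float, doublings: float) -> float:
    """Cost after `doublings` doublings of cumulative capacity,
    where cost falls by `rate` with each doubling."""
    return initial_cost * (1 - rate) ** doublings

# Historical experience rates cited above
rates = {"fission": 0.02, "onshore wind": 0.12, "lithium-ion": 0.20, "solar": 0.23}

for name, r in rates.items():
    remaining = cost_after_doublings(1.0, r, 10)  # 10 doublings = ~1,000x capacity
    print(f"{name}: {remaining:.0%} of original cost after 10 doublings")
```

At a solar-like 23% rate, ten doublings leave costs below a tenth of where they started; at fission’s 2%, they barely move.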
In the new study, published in Nature Energy, researchers aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate. The team looked at three key characteristics that can correlate with experience rate: unit size, design complexity, and the need for customization. The larger and more complex a technology is, and/or the more it needs to be customized for different use cases, the lower the experience rate.
The researchers interviewed fusion experts, including public-sector researchers and those working at companies in the private sector. They had the experts evaluate fusion power plants on those characteristics and used that info to predict the experience rate. (One note here: The study focused only on magnetic confinement and laser inertial confinement, two of the leading fusion approaches, which together receive the vast majority of funding today. Other approaches could come with different cost benefits.)
Fusion plants will likely be relatively large, similar to other types of facilities (like coal and fission power plants) that rely on generating heat. They will probably need less customization than fission plants—largely because regulations and safety considerations should be simpler—but more than technologies like solar panels. And as for complexity, “there was almost unanimous agreement that fusion is incredibly complex,” says Lingxi Tang, a PhD candidate in the energy and technology policy group at ETH Zurich in Switzerland and one of the authors of the study. (Some experts said it was literally off the scale the researchers gave them.)
The final figure the researchers suggest for fusion’s experience rate is between 2% and 8%, meaning fusion’s price would likely fall somewhat faster than fission’s has, but far less dramatically than the prices of many common energy technologies being deployed today.
That means that it would take a lot of deployment—and likely quite a long time—for the price of building a fusion reactor to drop significantly, so electricity produced by fusion plants could be expensive for a while. And it’s a much slower rate than the 8% to 20% that many modeling studies assume today.
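To see why a 2% to 8% rate implies slow progress, invert the formula: the number of capacity doublings needed for a given price cut is log(target fraction) / log(1 − rate). A quick sketch (the rates are those discussed in the article; the function name is mine):

```python
import math

def doublings_to_reach(cost_fraction: float, rate: float) -> float:
    """Doublings of cumulative capacity needed for cost to fall to
    `cost_fraction` of its starting value at experience rate `rate`."""
    return math.log(cost_fraction) / math.log(1 - rate)

# Doublings needed to halve cost at various experience rates
for r in (0.02, 0.08, 0.20):
    n = doublings_to_reach(0.5, r)
    print(f"rate {r:.0%}: {n:.1f} doublings (~{2**n:,.0f}x capacity)")
```

At the study’s 2% lower bound, halving fusion’s cost would take roughly 34 doublings of installed capacity; at a solar- or battery-like 20%, it takes about three.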
“On the whole, I think questions should be raised about current investment levels in fusion,” Tang says. (The US allocated over $1 billion to fusion in the 2024 fiscal year, and private-sector funding totaled $2.2 billion between July 2024 and July 2025.) “If you’re talking about decarbonization of the energy system, is this really the best use of public money?”
But some experts say that looking to the past to understand the future of energy prices might be misleading. “It’s a good exercise, but we have to be humble about how much we don’t know,” says Egemen Kolemen, a professor at the Princeton Plasma Physics Laboratory.
In 2000, many analysts predicted that solar power would remain expensive—but then production exploded and prices came crashing down, largely because China went all in, he says. “People weren’t exactly wrong then,” he adds. “They were just extrapolating what they saw into the future.”
How fast prices drop depends on regulations, geopolitical dynamics, and labor cost, he says: “We haven’t built the thing yet, so we don’t know.”
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
2026-04-22 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now.
The list builds on our annual 10 Breakthrough Technologies but takes a wider view of the ideas, topics, and research shaping AI, spotlighting the trends and breakthroughs that matter most.
We’ll be unpacking one item from the list each day here in The Download, explaining what it means and why it matters. Read the full rundown now—and stay tuned for the days ahead.
As the conflict in Iran has escalated, a crucial resource is under fire: the desalinization technology that supplies water in the region.
President Donald Trump recently threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking in the Middle East could be severe. Find out why.
—Casey Crownhart
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 An unauthorized group has reportedly accessed Anthropic’s Mythos
Users in a private online forum may have gained access. (Bloomberg $)
+ Anthropic said the model was too dangerous for a full release. (Axios)
+ Mozilla used it to find 271 security vulnerabilities in Firefox. (Wired $)
2 Meta will track workers’ clicks and keystrokes for AI training
Tracking software is being installed on workers’ computers. (Reuters $)
+ Employees are up in arms about the program. (Business Insider)
+ LLMs could supercharge mass surveillance in the US. (MIT Technology Review)
3 ChatGPT allegedly advised the Florida State shooter
About when and where to strike, and which ammunition to use. (Washington Post $)
+ Florida’s attorney general is probing ChatGPT’s role in the shooting. (Ars Technica)
+ Does AI cause delusions or just amplify them? (MIT Technology Review)
4 SpaceX has secured the option to buy AI startup Cursor for $60 billion
Or pay $10 billion for the work they’re doing together. (The Verge)
+ SpaceX made the deal as it prepares to go public. (NYT $)
+ Musk’s endgame for the company may be a land grab in space. (The Atlantic $)
5 The Pentagon wants $54 billion for drones
That would rank among the top 10 military budgets for entire nations. (Ars Technica)
+ Shoplifters could soon be chased down by drones. (MIT Technology Review)
6 Apple’s new chief hardware officer signals a sprint to build in-house chips
Apple silicon lead Johny Srouji has been promoted to the role. (CNBC)
7 China’s government is tightening its grip on AI firms that try to leave
It’s doing all it can to stop firms like Manus sending talent and research overseas. (Washington Post $)
8 The FBI is probing the deaths of scientists tied to sensitive research
Including a nuclear physicist and MIT professor shot outside his home. (CNN)
9 The US is accelerating research into psychedelic medical treatment
Including the mysterious ibogaine. (Nature)
+ But psychedelics are (still) falling short in clinical trials. (MIT Technology Review)
10 The first retail boutique run by an AI agent has opened—and it’s chaos
The San Francisco shop is reassuringly mismanaged. (NYT $)
Quote of the day
—Donald Trump pays a classy tribute to Tim Cook on Truth Social.
One More Thing

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death. His idea? Replace your body parts. All of them. Even your brain.
Jean Hébert, a program manager at the US Advanced Research Projects Agency for Health (ARPA-H), believes we can beat aging by adding youthful tissue to people’s brains. Read the full story on his futuristic plan to extend human life.
—Antonio Regalado
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ A Lego set was sent to the edge of space—and survived.
+ Go behind the scenes with Werner Herzog as he guides a new generation of filmmakers.
+ This video about enshittification perfectly captures the frustration of the degrading internet.
+ NASA’s latest deep-space capture offers a rare view of planetary systems in their absolute infancy.
2026-04-22 18:05:06
Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey.

But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and context of the data on which those systems rely. AI introduces a new requirement: systems must not only access data but also understand the business context behind it.
Without that context, AI can generate answers quickly but still make the wrong decision, says Irfan Khan, president and chief product officer of SAP Data & Analytics.
“AI is incredibly good at producing results,” he says. “It moves fast, but without context it can’t exercise good judgment, and good judgment is what creates a return on investment for the business. Speed without judgment doesn’t help. It can actually hurt us.”
In the emerging era of autonomous systems and intelligent applications, that context layer is becoming essential. To provide context, companies need a well-designed data fabric that does more than just integrate data, Khan says. The right data fabric allows organizations to scale AI safely, coordinate decisions across systems and agents, and ensure that automation reflects real business priorities rather than making decisions in isolation.
Recognizing this, many organizations are rethinking their data architecture. Instead of simply moving data into a single repository, they are looking for ways to connect information across applications, clouds, and operational systems while preserving the semantics that describe how the business works. That shift is driving growing interest in data fabric as a foundation for AI infrastructure.
Traditional data strategies have largely focused on aggregation. Over the past two decades, organizations have invested heavily in extracting information from operational systems and loading it into centralized warehouses, lakes, and dashboards. This approach makes it easier to run reports, monitor performance, and generate insights across the business, but in the process, much of the meaning attached to that data — how it relates to policies, processes, and real-world decisions — is lost.
Take two companies using AI to manage supply-chain disruptions. If one uses raw signals such as inventory levels, lead times, and supply scores, while the other adds context across business processes, policies, and metadata, both systems will rapidly analyze the data but likely come up with different conclusions.
Information such as which customers are strategic accounts, what tradeoffs are acceptable during shortages, and the status of extended supply chains will allow one AI system to make strategic decisions, while the other will not have the proper context, Khan says.
“Both systems move very quickly, but only one moves in the right direction,” he says. “This is the context premium and the advantage you gain when your data foundation preserves context across processes, policies and data by design.”
In the past, a lack of context mattered less because human experts supplied the missing information. With AI, that shortfall creates serious limitations, because AI systems do not just display information; they act on it. If a system does not understand why data matters, an AI model may optimize for the wrong outcome. Inventory numbers, payment histories, or demand signals might be accurate, but they do not reveal which customers must be prioritized, which contractual obligations apply, or which products are strategically important. As a result, the system can produce answers that are technically correct but operationally flawed.
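As a toy illustration of the difference context makes (all names and data here are hypothetical, not any vendor’s API), consider an agent allocating scarce inventory. Without context it ranks customers by order size alone; with a small knowledge-graph-style lookup, it can honor the business rule that strategic accounts come first:

```python
# Hypothetical example: raw operational data vs. the same data enriched
# with business context from a minimal knowledge-graph-style layer.
orders = [
    {"customer": "Acme", "units": 500},
    {"customer": "Globex", "units": 900},
    {"customer": "Initech", "units": 300},
]

# Semantic layer: facts about the business, not just the transaction.
knowledge = {
    ("Acme", "is_strategic_account"): True,
    ("Globex", "is_strategic_account"): False,
    ("Initech", "is_strategic_account"): True,
}

def allocate(orders, context=None):
    """Rank orders for fulfillment. With context, strategic accounts come
    first; without it, only order size drives the ranking."""
    if context is None:
        return sorted(orders, key=lambda o: -o["units"])
    return sorted(
        orders,
        key=lambda o: (
            not context.get((o["customer"], "is_strategic_account"), False),
            -o["units"],
        ),
    )

print([o["customer"] for o in allocate(orders)])             # size only
print([o["customer"] for o in allocate(orders, knowledge)])  # context-aware
```

The raw-signal ranking puts Globex first; the context-aware ranking serves the two strategic accounts, Acme and Initech, before it. Both computations are fast, but only one reflects business priorities.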
This realization is changing how companies think about AI readiness. Most acknowledge that they do not have the mature data processes and infrastructure in place to trust their data and their AI systems. Only one in five organizations consider their approach to data to be highly mature, and only 9% feel fully prepared to integrate and interoperate with their data systems.
The emerging solution is a data fabric: an abstraction layer that spans infrastructure, architecture, and logical organization. For agentic AI, the fabric becomes the primary interface, allowing agents to interact with business knowledge rather than raw storage systems. Knowledge graphs play a central role, enabling agents to query enterprise data using natural language and business logic.
The value of a data fabric rests on three components: intelligent compute to provide speed, a knowledge pool to provide business understanding and context, and agents to provide autonomous action grounded in that understanding. What makes this powerful is how these capabilities work together, says Khan.
The technology provides the architecture, a foundation that makes agent-to-agent communication and coordination possible. Process defines how business and IT share ownership and establish governance. And culture determines whether people trust the system enough to adopt it. All three must work together for a business data fabric to succeed.
“It empowers confident, consistent decisions, and when these elements all come together, AI just doesn’t analyze and interpret the data — it drives smarter, faster decisions that really create business impact,” he says. “This is the promise of a thoughtfully designed business data fabric, where every part reinforces the other, and every insight is grounded in trust and clarity.”
Technically, building a data-fabric layer requires several capabilities. Data must be accessible across multiple environments through federation rather than forced consolidation. A semantic or knowledge layer is needed to harmonize meaning across systems, often supported by knowledge graphs and catalog-driven metadata. Governance and policy enforcement must also operate across the fabric so that AI systems can access data securely and consistently.
Together, these elements create a foundation where AI interacts with business knowledge instead of raw storage systems — an essential step for moving from experimentation to real enterprise automation.
In the emerging era of agentic AI, the responsibility for monitoring, analyzing, and making decisions based on data increasingly shifts to software. AI agents can monitor events, trigger workflows, and make decisions in real time, often without direct human intervention. That speed creates new opportunities, but it also raises the stakes. When multiple agents operate across finance, supply chain, procurement, or customer operations, they must be guided by the same understanding of business priorities.
Without a common knowledge layer connecting disparate data together, coordination between systems quickly breaks down. One system might optimize for margin, another for liquidity, and another for compliance, each working from a different slice of data.
Importantly, most enterprises already possess much of the knowledge needed to make this work, says Khan. Years of operational data, master data, workflows, and policy logic already exist across business applications; companies just need to make them accessible. Companies that deploy data fabrics gain greater trust in their data, with more than two-thirds of enterprises reporting improved data accessibility, better data visibility, and greater control over their data.
“The opportunity isn’t just inventing context from scratch, it’s activating and connecting the context across your business that already exists,” he continues, adding that a data fabric is the “architecture that ensures data semantics, business processes and policies are connected as a unified system across all the clouds.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
2026-04-22 18:00:00
Los Angeles deserves its reputation as the quintessential car city—the rhythms of its 2,200 square miles are dictated by wide boulevards and concrete arcs of freeways. But it once had a world-class rail transit system, and for the last three decades, the city has been rebuilding a network of trolleys and subways. In May, a new four-mile segment with three new subway stations will open along Wilshire Boulevard, a key east-west corridor that connects downtown LA to the Pacific Ocean. What today can be an hours-long drive through a busy, museum-packed stretch of the city will be, if all goes well, a 25-minute train ride.
The existence of subway stops in this part of town—known as Miracle Mile—is a technological triumph over geography and geology. The ground underneath it is literally a disaster waiting to happen—it’s tarry and full of methane. One of those methane deposits actually exploded in 1985, destroying a department store in the neighborhood. In response, the city pushed its new train routes to other parts of town.
These days, dirt full of flammable goo is no longer a problem. “The technology finally caught up with the concerns,” says LA Metro’s James Cohen, a longtime manager of the engineering for this stretch of subway. The key was an earth-pressure-balance tunnel-boring machine, an automated digger that is designed to chew through ground packed with explosive gas. It sends removed dirt topside via conveyor belts and slides precast concrete liner segments into the tunnel, which are joined together with gaskets to create a gas- and waterproof tube. All that let the machine dig about 50 feet every day.
Meanwhile, engineers excavated the stations from the street level down. They worked mostly on weekends, digging out a space and then decking it with concrete so that work could go on underneath while LA drivers continued to exercise their God-given right to get around by car above.
Did the project finish on time? No. Did it come in under budget? Also no; this segment alone cost nearly $4 billion. Is the city now racing to build housing and walkable areas to take full advantage of the extension? Oh, please. Yet the new stations still manage to feel, in the end, transformative—as if Los Angeles’s train has finally come in.