MIT Technology Review
A world-renowned, independent media company whose insight, analysis, reviews, interviews and live events explain the newest technologies and their commercial, social and political impact.

The Download: how the military is using AI, and AI’s climate promises

2025-04-11 20:10:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Generative AI is learning to spy for the US military

For much of last year, US Marines conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia were also running an experiment. The service members in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually.

Though the US military has been developing computer vision models and similar AI tools since 2017, the use of generative AI—tools that can engage in human-like conversation—represents a newer frontier. Read the full story.

—James O’Donnell

Why the climate promises of AI sound a lot like carbon offsets 

The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data center development pushes them up.

The finding echoes a point that prominent figures in the AI sector have made as well to justify, at least implicitly, the gigawatts’ worth of electricity demand that new data centers are placing on regional grid systems across the world.

There’s something familiar about the suggestion that it’s okay to build data centers that run on fossil fuels today because AI tools will help the world drive down emissions eventually—it recalls the purported promise of carbon credits. Unfortunately, we’ve seen again and again that such programs often overstate any climate benefits, doing little to alter the balance of what’s going into or coming out of the atmosphere. Read the full story.

—James Temple

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 MAGA influencers are downplaying Trump’s market turmoil
They’re finding creative ways to frame the financial tumult as character building. (WP $)
+ Some Democrats are echoing his trade myths, too. (Vox)

2 Amazon products are going to cost more
CEO Andy Jassy says he anticipates third-party sellers passing the costs introduced by tariffs on to their customers. (CNBC)
+ He says the company has been renegotiating terms with sellers. (CNN)

3 OpenAI has slashed its model safety testing time
Which experts worry will mean it rushes out models without sufficient safeguards. (FT $)
+ Why we need an AI safety hotline. (MIT Technology Review)

4 A woman gave birth to a stranger’s baby in an IVF mixup
Monash IVF transferred another woman’s embryo to her by accident. (The Guardian)
+ Inside the strange limbo facing millions of IVF embryos. (MIT Technology Review)

5 Amazon equipped some of its delivery vans in Europe with defibrillators 
In an experiment to see if drivers could speed up help to heart attack patients. (Bloomberg $)

6 The future of biotech is looking shaky
RFK Jr’s appointment and soaring interest rates are rocking an already volatile industry. (WSJ $)
+ Meanwhile, RFK Jr has visited the families of two girls who died from measles. (The Atlantic $)

7 Alexandre de Moraes isn’t backing down
The Brazilian judge, who has butted heads with Elon Musk, is worried about extremist digital populism. (New Yorker $)

8 An experimental pill mimics the effects of gastric bypass surgery
And could be touted as an alternative to weight-loss drugs. (Wired $)
+ Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)

9 What happens when video games start bleeding into the real world
Game Transfer Phenomenon is a real thing, and nowhere near as fun as it sounds. (BBC)
+ How generative AI could reinvent what it means to play. (MIT Technology Review)

10 Londoners smashed up a Tesla in a public art project 
The car was provided by an anonymous donor. (The Guardian)
+ Proceeds from the installation will go to food banks in the UK. (The Standard)

Quote of the day

“It feels so good to be surrounded by a bunch of people who disconnected.”

—Steven Vernon III, who works in finance, describing the beauty of a digital detox at the Masters in Augusta, Georgia, as the markets descend into chaos, the Wall Street Journal reports.

The big story

This scientist is trying to create an accessible, unhackable voting machine

For the past 19 years, computer science professor Juan Gilbert has immersed himself in perhaps the most contentious debate over election administration in the United States—what role, if any, touch-screen ballot-marking devices should play in the voting process.

While advocates claim that electronic voting systems can be relatively secure, improve accessibility, and simplify voting and vote tallying, critics have argued that they are insecure and should be used as infrequently as possible.

As for Gilbert? He claims he’s finally invented “the most secure voting technology ever created.” And he’s invited several of the most respected and vocal critics of voting technology to prove his point. Read the full story.

—Spenser Mestel

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Bad news for hoodie lovers: your favorite comfy item of clothing is no longer cutting the mustard.
+ What happens inside black holes? A lot more than you might think.
+ Unfortunately, pushups are as beneficial for you as they are horrible to execute.
+ Very cool—archaeologists are making new discoveries in Pompeii.

Love or immortality: A short story

2025-04-11 18:00:00

1.

Sophie and Martin are at the 2012 Gordon Research Conference on the Biology of Aging in Ventura, California. It is a foggy February weekend. Both are disappointed about how little sun there is on the California beach.

They are two graduate students—Sophie in her sixth and final year, Martin in his fourth—who have traveled from different East Coast cities to present posters on their work. Martin’s shows health data collected from supercentenarians compared with the general Medicare population, capturing which diseases are more and less common in each population. Sophie is presenting on her recently accepted first-author paper in Aging Cell on two specific genes that, when activated, extend lifespan in C. elegans roundworms, the model organism of her research. 

2.

Sophie walks by Martin’s poster after she is done presenting her own. She is not immediately impressed by his work. It is not published, for one thing. But she sees how it is attention-grabbing and relevant, even necessary. He has a little crowd listening to him. He notices her—a frowning girl—standing in the back and begins to talk louder, hoping she hears.

“Supercentenarians are much less likely to have seven diseases,” he says, pointing to his poster. “Alzheimer’s, heart failure, diabetes, depression, prostate cancer, hip fracture, and chronic kidney disease. Though they have higher instances of four diseases, which are arthritis, cataracts, osteoporosis, and glaucoma. These aren’t linked to mortality, but they do affect quality of life.”

What stands out to Sophie is the confidence in Martin’s voice, despite the unsurprising nature of the findings. She admires that sound, its sturdiness. She makes note of his name and plans to seek him out. 

3.

They find one another in the hotel bar among other graduate students. The students are talking about the logistics of their futures: Who is going for a postdoc, who will opt for industry, do any have job offers already, where will their research have the most impact, is it worth spending years working toward something so uncertain? They stay up too late, dissecting journal articles they’ve read as if they were debating politics. They enjoy the freedom away from their labs and PIs. 

Martin says, again with that confidence, that he will become a professor. Sophie says she likely won’t go down that path. She has received an offer to start as a scientist at an aging research startup called Abyssinian Bio, after she defends. Martin says, “Wouldn’t your work make more sense in an academic setting, where you have more freedom and power over what you do?” She says, “But that could be years from now and I want to start my real life, so …” 

4-18.

Martin is enamored with Sophie. She is not only brilliant; she is helpful. She strengthens his papers with precise edits and grounds his arguments with stronger evidence. Sophie is enamored with Martin. He is not only ambitious; he is supportive and adventurous. He encourages her to try new activities and tools, both in and out of work, like learning to ride a motorcycle or using CRISPR.

Martin visits Sophie in San Francisco whenever he can, which amounts to a weekend or two every other month. After two years, their long-distance relationship is taking its toll. They want more weekends, more months, more everything together. They make plans for him to get a postdoc near her, but after multiple rejections from the labs where he most wants to work, his resentment toward academia grows. 

“They don’t see the value of my work,” he says.

19.

“Join Abyssinian,” Sophie offers.

The company is growing. They want more researchers with data science backgrounds. He takes the job, drawn more by their future together than by the science.

20-35.

For a long time, they are happy. They marry. They do their research. They travel. Sophie visits Martin’s extended family in France. Martin goes with Sophie to her cousin’s wedding in Taipei. They get a dog. The dog dies. They are both devastated but increasingly motivated to better understand the mechanisms of aging. Maybe their next dog will have the opportunity to live longer. They do not get a next dog.

Sophie moves up at Abyssinian. Despite being in industry, her work is published in well-respected journals. She collaborates well with her colleagues. Eventually, she is promoted to executive director of research. 

Martin stalls at the rank of principal scientist, and though Sophie is technically his boss—or his boss’s boss—he genuinely doesn’t mind when others call him “Dr. Sophie Xie’s husband.”

40.

At dinner on his 35th birthday, a friend jokes that Martin is now middle-aged. Sophie laughs and agrees, though she is older than Martin. Martin joins in the laughter, but this small comment unlocks a sense of urgency inside him. What once felt hypothetical—his own death, the death of his wife—now appears very close. He can feel his wrinkles forming.  

First come the subtle shifts in how he talks about his research and Abyssinian’s work. He wants to “defeat” and “obliterate” aging, which he comes to describe as humankind’s “greatest adversary.” 

43.

He begins taking supplements touted by tech influencers. He goes on a calorie-restricted diet. He gets weekly vitamin IV sessions. He looks into blood transfusions from young donors, but Sophie tells him to stop with all the fake science. She says he’s being ridiculous, that what he’s doing could be dangerous.  

Martin, for the first time, sees Sophie differently. Not without love, but love burdened by an opposing weight, what others might recognize as resentment. Sophie is dedicated to the demands of her growing department. Martin thinks she is not taking the task of living longer seriously enough. He does not want her to die. He does not want to die. 

Nobody at Abyssinian is taking the task of living longer seriously enough. Of all the aging bio startups he could have ended up at, how has he ended up at one with such modest—no, lazy—goals? He begins publicly dismissing basic research as “too slow” and “too limited,” which offends many of his and Sophie’s colleagues. 

Sophie defends him, says he is still doing good work, despite the evidence. She is busy, traveling often for conferences, and mistakenly classifies the changes in Martin’s attitude as temporary outliers.

44.

One day, during a meeting, Martin says to Jerry, a well-respected scientist at Abyssinian and in the electron microscopy imaging community at large, that EM is an outdated, old, crusty technology. Martin says it is stupid to use it when there are more advanced, cutting-edge methods, like cryo-EM and super-resolution microscopy. Martin has always been outspoken, but this instance veers into rudeness. 

At home, Martin and Sophie argue. Initially, they argue about whether tools of the past can be useful to their work. Then the argument morphs. What is the true purpose of their research? Martin says it’s called anti-aging research for a reason: It’s to defy aging! Sophie says she’s never called her work anti-aging research; she calls it aging research or research into the biology of aging. And Abyssinian’s overarching mission is more simply to find druggable targets for chronic and age-related diseases. Occasionally, the company’s marketing arm will push out messaging about extending the human lifespan by 20 years, but that has nothing to do with scientists like them in R&D. Martin seethes. Only 20 years! What about hundreds? Thousands? 

45-49.

They continue to argue and the arguments are roundabout, typically ending with Sophie crying, absconding to her sister’s house, and the two of them not speaking for short periods of time.

50.

What hurts Sophie most is Martin’s persistent dismissal of death as merely an engineering problem to be solved. Sophie thinks of the ways the C. elegans she observes regulate their lifespans in response to environmental stress. The complex dance of genes and proteins that orchestrates their aging process. In the previous month’s experiment, a seemingly simple mutation produced unexpected effects across three generations of worms. Nature’s complexity still humbles her daily. There is still so much unknown. 

Martin is at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity. And all you want to do is sit in the lab to watch worms die.”

50.

Martin blames the past. He realizes he should have tried harder to become a professor. Let Sophie make the industry money—he could have had academic clout. Professor Warwick. It would have had a nice sound to it. To his dismay, everyone in his lab calls him Martin. Abyssinian has a first-name policy. Something about flat hierarchies making for better collaboration. Good ideas could come from anyone, even a lowly, unintelligent senior associate scientist in Martin’s lab who barely understands how to process a data set. A great idea could come from anyone at all—except him, apparently. Sophie has made that clear.

51-59.

They live in a tenuous peace for some time, perfecting the art of careful scheduling: separate coffee times, meetings avoided, short conversations that stick to the day-to-day facts of their lives.

60.

Then Martin stands up to interrupt a presentation by the VP of research to announce that studying natural aging is pointless since they will soon eliminate it entirely. While Jerry may have shrugged off Martin’s aggressiveness, the VP does not. This leads to a blowout fight between Martin and many of his colleagues, in which Martin refuses to apologize and calls them all shortsighted idiots. 

Sophie watches with a mixture of fear and awe. Martin thinks: Can’t she, my wife, just side with me this once? 

61.

Back at home:

Martin at the kitchen counter, methodically crushing his evening supplements into powder. “I’m trying to save humanity.” He taps the powder into his protein shake with the precision of a scientist measuring reagents. “And all you want to do is sit in the lab to watch worms die.”

Sophie observes his familiar movements, now foreign in their desperation. The kitchen light catches the silver spreading at his temples and on his chin—the very evidence of aging he is trying so hard to erase.

“That’s not true,” she says.

Martin gulps down his shake.

“What about us? What about children?”

Martin coughs, then laughs, a sound that makes Sophie flinch. “Why would we have children now? You certainly don’t have the time. But if we solve aging, which I believe we can, we’d have all the time in the world.”

“We used to talk about starting a family.”

“Any children we have should be born into a world where we already know they never have to die.”

“We could both make the time. I want to grow old together—”

All Martin hears are promises that lead to nothing, nowhere.  

“You want us to deteriorate? To watch each other decay?”

“I want a real life.”

“So you’re choosing death. You’re choosing limitation. Mediocrity.”

64.

Martin doesn’t hear from his wife for four days, despite texting her 16 times—12 too many, by his count. He finally breaks down enough to call her in the evening, after a couple of glasses of aged whisky (a gift from a former colleague, which Martin has rarely touched and kept hidden in the far back of a desk drawer). 

Voicemail. And after this morning’s text, still no glimmering ellipsis bubble to indicate Sophie’s typing. 

66.

Forget her, he thinks, leaning back in his Steelcase chair, adjusted specifically for his long runner’s legs and shorter-than-average torso. At 39, Martin’s spreadsheets of vitals now show an upward trajectory: proof of his ability to reverse his biological age. Sophie does not appreciate this. He stares out his office window, down at the employees crawling around Abyssinian Bio’s main quad. How small, he thinks. How significantly unaware of the future’s true possibilities. Sophie is like them. 

67.

Forget her, he thinks again as he turns down a bay toward Robert, one of his struggling postdocs, who is sitting at his bench staring at his laptop. As Martin approaches, Robert minimizes several windows, leaving only his home screen behind.

“Where are you at with the NAD+ data?” Martin asks.

Robert shifts in his chair to face Martin. The skin of his neck grows red and splotchy. Martin stares at it in disgust.

“Well?” he asks again. 

“Oh, I was told not to work on that anymore?” The boy has a tendency to speak in the lilt of questions. 

“By who?” Martin demands.

“Uh, Sophie?” 

“I see. Well, I expect new data by end of day.” 

“Oh, but—”

Martin narrows his eyes. The red splotches on Robert’s neck grow larger. 

“Um, okay,” the boy says, returning his focus to the computer. 

Martin decides a response is called for …

70.

Immortality Promise

I am immortal. This doesn’t make me special. In fact, most people on Earth are immortal. I am 6,000 years old. Now, 6,000 years of existence give one a certain perspective. I remember back when genetic engineering and knowledge about the processes behind aging were still in their infancy. Oh, how people argued and protested.

“It’s unethical!”

“We’ll kill the Earth if there’s no death!”

“Immortal people won’t be motivated to do anything! We’ll become a useless civilization living under our AI overlords!” 

I believed back then, and now I know. Their concerns had no ground to stand on.

Eternal life isn’t even remarkable anymore, but being among its architects and early believers still garners respect from the world. The elegance of my team’s solution continues to fill me with pride. We didn’t just halt aging; we mastered it. My cellular machinery hums with an efficiency that would make evolution herself jealous.

Those early protesters—bless their mortal, no-longer-beating hearts—never grasped the biological imperative of what we were doing. Nature had already created functionally immortal organisms—the hydra, certain jellyfish species, even some plants. We simply perfected what evolution had sketched out. The supposed ethical concerns melted away once people understood that we weren’t defying nature. We were fulfilling its potential.

Today, those who did not want to be immortal aren’t around. Simple as that. Those who are here do care about the planet more than ever! There are almost no diseases, and we’re all very productive people. Young adults—or should I say young-looking adults—are naturally restless and energetic. And with all this life, you have the added benefit of not wasting your time on a career you might hate! You get to try different things and find out what you’re really good at and where you’re appreciated! Life is not short! Resources are plentiful!

Of course, biological immortality doesn’t equal invincibility. People still die. Just not very often. My colleagues in materials science developed our modern protective exoskeletons. They’re elegant solutions, though I prefer to rely on my enhanced reflexes and reinforced skeletal structure most days. 

The population concerns proved mathematically unfounded. Stable reproduction rates emerged naturally once people realized they had unlimited time to start families. I’ve had four sets of children across 6,000 years, each born when I felt truly ready to pass on another iteration of my accumulated knowledge. With more life, people have much more patience. 

Now we are on to bigger and more ambitious projects. We conquered survival of individuals. The next step: survival of our species in this universe. The sun’s eventual death poses an interesting challenge, but nothing we can’t handle. We have colonized five planets and two moons in our solar system, and we will colonize more. Humanity will adapt to whatever environment we encounter. That’s what we do.

My ancient motorcycle remains my favorite indulgence. I love taking it for long cruises on the old Earth roads that remain intact. The neural interface is state-of-the-art, of course. But mostly I keep it because it reminds me of earlier times, when we thought death was inevitable and life was limited to a single planet. The future stretches out before us like an infinity I helped create—yet another masterpiece in the eternal gallery of human evolution.

71.

Martin feels better after writing it out. He rereads it a couple times, feels even better. Then he has the idea to send his writing to the department administrator. He asks her to create a new tab on his lab page, titled “Immortality Promise,” and to post his piece there. That will get his message across to Sophie and everyone at Abyssinian. 

72.

Sophie’s boss, Ray, is the first to email her. The subject line: “martn” [sic]. No further words in the body. Ray is known to be short and blunt in all his communications, but his meaning is always clear. They’ve had enough conversations about Martin by then. She is already in the process of slowly shutting down his projects, has been ignoring his texts and calls because of this. Now she has to move even faster. 

73.

Sophie leaves her office and goes into the lab. As an executive, she is not expected to do experiments, but watching a thousand tiny worms crawl across their agar plates soothes her. Each of the ones she now looks at carries a fluorescent marker she designed to track mitochondrial dynamics during aging. The green glow pulses with their movements, like stars blinking in a microscopic galaxy. She spent years developing this strain of C. elegans, carefully selecting for longevity without sacrificing health. The worms that lived longest weren’t always the healthiest—a truth about aging that seemed to elude Martin. Those worms taught her more about the genuine complexity of aging. Just last week, she observed something unexpected: The mitochondrial networks in her long-lived strains showed subtle patterns of reorganization never documented before. The discovery felt intimate, like being trusted with a secret.

“How are things looking?” Jerry appears beside her. “That new strain expressing the dual markers?”

Sophie nods, adjusting the focus. “Look at this network pattern. It’s different from anything in the literature.” She shifts aside so Jerry can see. This is what she loves about science: the genuine puzzles, the patient observation, the slow accumulation of knowledge that, while far removed from a specific application, could someday help people age with dignity.

“Beautiful,” Jerry murmurs. He straightens. “I heard about Martin’s … post.”

Sophie closes her eyes for a moment, the image of the mitochondrial networks still floating in her vision. She’s read Martin’s “Immortality Promise” piece three times, each more painful than the last. Not because of its grandiose claims—those were comically disconnected from reality—but because of what it’s revealed about her husband. The writing pulsed with a frightening certainty, a complete absence of doubt or wonder. Gone was the scientist who once spent many lively evenings debating with her about the evolutionary purpose of aging, who delighted in being proved wrong because it meant learning something new. 

74.

She sees in his words a man who has abandoned the fundamental principles of science. His piece reads like a religious text or science fiction story, casting himself as the hero. He isn’t pursuing research anymore. He hasn’t been for a long time. 

She wonders how and when he arrived there. The change in Martin didn’t take place overnight. It was gradual, almost imperceptible—not unlike watching someone age. It wasn’t easy to notice if you saw the person every day; Sophie feels guilty for not noticing. Then again, a few months ago she read a new study from Stanford researchers finding that people do not age linearly but in spurts—specifically, around ages 44 and 60. Shifts in the body lead to sudden accelerations of change. If she’s honest with herself, she knew this was happening to Martin, to their relationship. But she chose to ignore it, give other problems precedence. Now it is too late. Maybe if she’d addressed the conditions right before the spike—but how? Wasn’t it inevitable?—he would not have gone from scientist to fanatic.

75.

“You’re giving the keynote at next month’s Gordon conference,” Jerry reminds her, pulling her back to reality. “Don’t let this overshadow that.”

She manages a small smile. Her work has always been methodical, built on careful observation and respect for the fundamental mysteries of biology. The keynote speech represents more than five years of research: countless hours of guiding her teams, of exciting discussions among her peers, of watching worms age and die, of documenting every detail of their cellular changes. It is one of the biggest honors of her career. There is poetry in it, she thinks—in the collisions between discoveries and failures. 

76.

The knock on her office door comes at 2:45. Linda from HR, right on schedule. Sophie walks with her to conference room B2, two floors below, where Martin’s group resides. Through the glass walls of each lab, they see scientists working at their benches. One adjusts a microscope’s focus. Another pipettes clear liquid into rows of tubes. Three researchers point at data on a screen. Each person is investigating some aspect of aging, one careful experiment at a time. The work will continue, with or without Martin.

In the conference room, Sophie opens her laptop and pulls up the folder of evidence. She has been collecting it for months. Martin’s emails to colleagues, complaints from collaborators and direct reports, and finally, his “Immortality Promise” piece. The documentation is thorough, organized chronologically. She has labeled each file with dates and brief descriptions, as she would for any other data.

77.

Martin walks in at 3:00. Linda from HR shifts in her chair. Sophie is the one to hand the papers over to Martin; this much she owes him. They contain words like “termination” and “effective immediately.” Martin’s face complicates itself when he looks them over. Sophie hands over a pen and he signs quickly.  

He stands, adjusts his shirt cuffs, and walks to the door. He turns back.

“I’ll prove you wrong,” he says, looking at Sophie. But what stands out to her is the crack in his voice on the last word. 

Sophie watches him leave. She picks up the signed papers and hands them to Linda, and then walks out herself. 

Alexandra Chang is the author of Days of Distraction and Tomb Sweeping and is a National Book Foundation 5 under 35 honoree. She lives in Camarillo, California.

Generative AI is learning to spy for the US military

2025-04-11 17:00:00

For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders. 

“We still need to validate the sources,” says Lowdon. But the unit’s commanders encouraged the use of large language models, he says, “because they provide a lot more efficiency during a dynamic situation.”

The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence—not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. 

Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI—tools that can engage in human-like conversation like those built by Vannevar Labs—represents a newer frontier.

The company applies existing large language models, including some from OpenAI and Microsoft, and some bespoke ones of its own to troves of open-source intelligence the company has been collecting since 2021. The scale at which this data is collected is hard to comprehend (and a large part of what sets Vannevar’s products apart): terabytes of data in 80 different languages are hoovered up every day in 180 countries. The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information; it also uses nonclassified data that is difficult to get online (gathered by human operatives on the ground), as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities. 

Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment, with the results delivered through a chatbot interface that’s not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China’s efforts to secure rare earth minerals in the Philippines. 

“Our real focus as a company,” says Scott Philips, Vannevar Labs’ chief technology officer, is to “collect data, make sense of that data, and help the US make good decisions.” 

That approach is particularly appealing to the US intelligence apparatus because for years the world has been awash in more data than human analysts can possibly interpret—a problem that contributed to the 2003 founding of Palantir, a company with a market value of over $200 billion that is known for its powerful and controversial tools, including a database that helps Immigration and Customs Enforcement search for and track information on undocumented immigrants.

In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data but to actually talk through an analysis with someone interactively.

Vannevar’s tools proved useful for the deployment in the Pacific, and Enzenauer and Lowdon say that while they were instructed to always double-check the AI’s work, they didn’t find inaccuracies to be a significant issue. Enzenauer regularly used the tool to track any foreign news reports in which the unit’s exercises were mentioned and to perform sentiment analysis, detecting the emotions and opinions expressed in text. Judging whether a foreign news article reflects a threatening or friendly opinion toward the unit is a task that on previous deployments she had to do manually.

“It was mostly by hand—researching, translating, coding, and analyzing the data,” she says. “It was definitely way more time-consuming than it was when using the AI.” 
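Sentiment analysis with a language model is typically framed as a constrained classification task. The sketch below is a hypothetical illustration, not Vannevar’s actual system: the prompt wording, labels, and function names are all invented. It shows the basic pattern of wrapping an article in a fixed prompt and validating the model’s free-text reply against a short list of allowed labels:

```python
def build_sentiment_prompt(article_text: str, unit_name: str) -> str:
    # Frame the task as a constrained classification so the model's
    # answer is easy to parse and for a human analyst to audit.
    return (
        "Classify the sentiment of the following news excerpt toward "
        f"{unit_name}.\n"
        "Answer with exactly one word: HOSTILE, NEUTRAL, or FRIENDLY.\n\n"
        f"Excerpt: {article_text}"
    )


def parse_sentiment(model_reply: str) -> str:
    # Validate the reply against the allowed labels, defaulting to
    # NEUTRAL so a malformed answer is never read as a threat signal.
    reply = model_reply.strip().upper()
    for label in ("HOSTILE", "FRIENDLY", "NEUTRAL"):
        if label in reply:
            return label
    return "NEUTRAL"
```

Constraining the output to a few auditable labels is what makes human double-checking tractable, though it does not make the underlying judgment any less subjective.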

Still, Enzenauer and Lowdon say there were hiccups, some of which would affect most digital tools: The ships had spotty internet connections much of the time, limiting how quickly the AI model could synthesize foreign intelligence, especially if it involved photos or video. 

With this first test completed, the unit’s commanding officer, Colonel Sean Dynan, said on a call with reporters in February that heavier use of generative AI was coming; this experiment was “the tip of the iceberg.” 

This is indeed the direction that the entire US military is barreling toward at full speed. In December, the Pentagon said it would spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. (The US is of course not alone in this approach; notably, Israel has been using AI to sort through information and even generate lists of targets in its war in Gaza, a practice that has been widely criticized.)

Perhaps unsurprisingly, plenty of people outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and has expertise in leading safety audits for AI-powered systems. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” 

Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”

One particular use case that concerns her is sentiment analysis, which she argues is “a highly subjective metric that even humans would struggle to appropriately assess based on media alone.” 

If AI perceives hostility toward US forces where a human analyst would not—or if the system misses hostility that is really there—the military could make a misinformed decision or escalate a situation unnecessarily.

Sentiment analysis is indeed a task that AI has not perfected. Philips, the Vannevar CTO, says the company has built models specifically to judge whether an article is pro-US or not, but MIT Technology Review was not able to evaluate them. 

Chris Mouton, a senior engineer for RAND, recently tested how well-suited generative AI is for the task. He evaluated leading models, including OpenAI’s GPT-4 and an older version of GPT fine-tuned to do such intelligence work, on how accurately they flagged foreign content as propaganda compared with human experts. “It’s hard,” he says, noting that AI struggled to identify more subtle types of propaganda. But he adds that the models could still be useful in lots of other analysis tasks. 
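An evaluation like Mouton’s reduces, at its simplest, to comparing a model’s labels with expert labels over the same set of documents. Here is a minimal sketch of that comparison; the label values are invented, and RAND’s actual study reports richer metrics than raw agreement:

```python
def agreement_rate(model_labels: list[str], expert_labels: list[str]) -> float:
    # Fraction of documents on which the model's flag matches the
    # human expert's judgment, compared position by position.
    if len(model_labels) != len(expert_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(m == e for m, e in zip(model_labels, expert_labels))
    return matches / len(model_labels)
```

On a toy run, `agreement_rate(["propaganda", "benign", "propaganda"], ["propaganda", "benign", "benign"])` returns 2/3: the model agreed with the expert on two of three documents.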

Another limitation of Vannevar’s approach, Khlaaf says, is that the usefulness of open-source intelligence is debatable. Mouton says that open-source data can be “pretty extraordinary,” but Khlaaf points out that unlike classified intel gathered through reconnaissance or wiretaps, it is exposed to the open internet—making it far more susceptible to misinformation campaigns, bot networks, and deliberate manipulation, as the US Army has warned.

For Mouton, the biggest open question now is whether these generative AI technologies will be simply one investigatory tool among many that analysts use—or whether they’ll produce the subjective analysis that’s relied upon and trusted in decision-making. “This is the central debate,” he says. 

What everyone agrees on is that AI models are accessible—you can just ask them a question about complex pieces of intelligence, and they’ll respond in plain language. But it’s still in dispute what imperfections will be acceptable in the name of efficiency. 

Update: This story was updated to include additional context from Heidy Khlaaf.

How AI is interacting with our creative human processes

2025-04-11 17:00:00

In 2021, 20 years after the death of her older sister, Vauhini Vara was still unable to tell the story of her loss. “I wondered,” she writes in Searches, her new collection of essays on AI technology, “if Sam Altman’s machine could do it for me.” So she tried ChatGPT. But as it expanded on Vara’s prompts in sentences ranging from the stilted to the unsettling to the sublime, the thing she’d enlisted as a tool stopped seeming so mechanical. 

“Once upon a time, she taught me to exist,” the AI model wrote of the young woman Vara had idolized. Vara, a journalist and novelist, called the resulting essay “Ghosts,” and in her opinion, the best lines didn’t come from her: “I found myself irresistibly attracted to GPT-3—to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them … as I tried to write more honestly, the AI seemed to be doing the same.”

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also offers a particularly human problem in narrative: How can we make sense of these machines, not just use them? And how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Both Vara’s book and The Uncanny Muse, a collection of essays on the history of art and automation by the music critic David Hajdu, explore how humans have historically and personally wrestled with the ways in which machines relate to our own bodies, brains, and creativity. At the same time, The Mind Electric, a new book by a neurologist, Pria Anand, reminds us that our own inner workings may not be so easy to replicate.

Searches is a strange artifact. Part memoir, part critical analysis, and part AI-assisted creative experimentation, Vara’s essays trace her time as a tech reporter and then novelist in the San Francisco Bay Area alongside the history of the industry she watched grow up. Tech was always close enough to touch: One college friend was an early Google employee, and when Vara started reporting on Facebook (now Meta), she and Mark Zuckerberg became “friends” on his platform. In 2007, she published a scoop that the company was planning to introduce ad targeting based on users’ personal information—the first shot fired in the long, gnarly data war to come. In her essay “Stealing Great Ideas,” she talks about turning down a job reporting on Apple to go to graduate school for fiction. There, she wrote a novel about a tech founder, which was later published as The Immortal King Rao. Vara points out that in some ways at the time, her art was “inextricable from the resources [she] used to create it”—products like Google Docs, a MacBook, an iPhone. But these pre-AI resources were tools, plain and simple. What came next was different.

Interspersed with Vara’s essays are chapters of back-and-forths between the author and ChatGPT about the book itself, where the bot serves as editor at Vara’s prompting. ChatGPT obligingly summarizes and critiques her writing in a corporate-shaded tone that’s now familiar to any knowledge worker. “If there’s a place for disagreement,” it offers about the first few chapters on tech companies, “it might be in the balance of these narratives. Some might argue that the benefits—such as job creation, innovation in various sectors like AI and logistics, and contributions to the global economy—can outweigh the negatives.” 

book cover
Searches: Selfhood in the Digital Age
Vauhini Vara
PANTHEON, 2025

Vara notices that ChatGPT writes “we” and “our” in these responses, pulling it into the human story, not the tech one: “Earlier you mentioned ‘our access to information’ and ‘our collective experiences and understandings.’” When she asks what the rhetorical purpose of that choice is, ChatGPT responds with a numbered list of benefits including “inclusivity and solidarity” and “neutrality and objectivity.” It adds that “using the first-person plural helps to frame the discussion in terms of shared human experiences and collective challenges.” Does the bot believe it’s human? Or at least, do the humans who made it want other humans to believe it does? “Can corporations use these [rhetorical] tools in their products too, to subtly make people identify with, and not in opposition to, them?” Vara asks. ChatGPT replies, “Absolutely.”

Vara has concerns about the words she’s used as well. In “Thank You for Your Important Work,” she worries about the impact of “Ghosts,” which went viral after it was first published. Had her writing helped corporations hide the reality of AI behind a velvet curtain? She’d meant to offer a nuanced “provocation,” exploring how uncanny generative AI can be. But instead, she’d produced something beautiful enough to resonate as an ad for its creative potential. Even Vara herself felt fooled. She particularly loved one passage the bot wrote, about Vara and her sister as kids holding hands on a long drive. But she couldn’t imagine either of them being so sentimental. What Vara had elicited from the machine, she realized, was “wish fulfillment,” not a haunting. 

The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. How can we make sense of these machines, not just use them? 

The machine wasn’t the only thing crouching behind that too-good-to-be-true curtain. The GPT models and others are trained through human labor, in sometimes exploitative conditions. And much of the training data was the creative work of human writers before her. “I’d conjured artificial language about grief through the extraction of real human beings’ language about grief,” she writes. The creative ghosts in the model were made of code, yes, but also, ultimately, made of people. Maybe Vara’s essay helped cover up that truth too.

In the book’s final essay, Vara offers a mirror image of those AI call-and-response exchanges as an antidote. After sending out an anonymous survey to women of various ages, she presents the replies to each question, one after the other. “Describe something that doesn’t exist,” she prompts, and the women respond: “God.” “God.” “God.” “Perfection.” “My job. (Lost it.)” Real people contradict each other, joke, yell, mourn, and reminisce. Instead of a single authoritative voice—an editor, or a company’s limited style guide—Vara gives us the full gasping crowd of human creativity. “What’s it like to be alive?” Vara asks the group. “It depends,” one woman answers.    

David Hajdu, now music editor at The Nation and previously a music critic for The New Republic, goes back much further than the early years of Facebook to tell the history of how humans have made and used machines to express ourselves. Player pianos, microphones, synthesizers, and electrical instruments were all assistive technologies that faced skepticism before acceptance and, sometimes, elevation in music and popular culture. They even influenced the kind of art people were able to and wanted to make. Electrical amplification, for instance, allowed singers to use a wider vocal range and still reach an audience. The synthesizer introduced a new lexicon of sound to rock music. “What’s so bad about being mechanical, anyway?” Hajdu asks in The Uncanny Muse. And “what’s so great about being human?” 

book cover of the Uncanny Muse
The Uncanny Muse: Music, Art, and Machines from Automata to AI
David Hajdu
W.W. NORTON & COMPANY, 2025

But Hajdu is also interested in how intertwined the history of man and machine can be, and how often we’ve used one as a metaphor for the other. Descartes saw the body as empty machinery for consciousness, he reminds us. Hobbes wrote that “life is but a motion of limbs.” Freud described the mind as a steam engine. Andy Warhol told an interviewer that “everybody should be a machine.” And when computers entered the scene, humans used them as metaphors for themselves too. “Where the machine model had once helped us understand the human body … a new category of machines led us to imagine the brain (how we think, what we know, even how we feel or how we think about what we feel) in terms of the computer,” Hajdu writes. 

But what is lost with these one-to-one mappings? What happens when we imagine that the complexity of the brain—an organ we do not even come close to fully understanding—can be replicated in 1s and 0s? Maybe what happens is we get a world full of chatbots and agents, computer-­generated artworks and AI DJs, that companies claim are singular creative voices rather than remixes of a million human inputs. And perhaps we also get projects like the painfully named Painting Fool—an AI that paints, developed by Simon Colton, a scholar at Queen Mary University of London. He told Hajdu that he wanted to “demonstrate the potential of a computer program to be taken seriously as a creative artist in its own right.” What Colton means is not just a machine that makes art but one that expresses its own worldview: “Art that communicates what it’s like to be a machine.”  

What happens when we imagine that the complexity of the brain—an organ we do not even come close to fully understanding—can be replicated in 1s and 0s?

Hajdu seems to be curious and optimistic about this line of inquiry. “Machines of many kinds have been communicating things for ages, playing invaluable roles in our communication through art,” he says. “Growing in intelligence, machines may still have more to communicate, if we let them.” But the question that The Uncanny Muse raises at the end is: Why should we art-making humans be so quick to hand over the paint to the paintbrush? Why do we care how the paintbrush sees the world? Are we truly finished telling our own stories ourselves?

Pria Anand might say no. In The Mind Electric, she writes: “Narrative is universally, spectacularly human; it is as unconscious as breathing, as essential as sleep, as comforting as familiarity. It has the capacity to bind us, but also to other, to lay bare, but also obscure.” The electricity in The Mind Electric belongs entirely to the human brain—no metaphor necessary. Instead, the book explores a number of neurological afflictions and the stories patients and doctors tell to better understand them. “The truth of our bodies and minds is as strange as fiction,” Anand writes—and the language she uses throughout the book is as evocative as that in any novel. 

cover of the Mind Electric
The Mind Electric: A Neurologist on the Strangeness and Wonder of Our Brains
Pria Anand
WASHINGTON SQUARE PRESS, 2025

In personal and deeply researched vignettes in the tradition of Oliver Sacks, Anand shows that any comparison between brains and machines will inevitably fall flat. She tells of patients who see clear images when they’re functionally blind, invent entire backstories when they’ve lost a memory, break along seams that few can find, and—yes—see and hear ghosts. In fact, Anand cites one study of 375 college students in which researchers found that nearly three-quarters “had heard a voice that no one else could hear.” These were not diagnosed schizophrenics or sufferers of brain tumors—just people listening to their own uncanny muses. Many heard their name, others heard God, and some could make out the voice of a loved one who’d passed on. Anand suggests that writers throughout history have harnessed organic exchanges with these internal apparitions to make art. “I see myself taking the breath of these voices in my sails,” Virginia Woolf wrote of her own experiences with ghostly sounds. “I am a porous vessel afloat on sensation.” The mind in The Mind Electric is vast, mysterious, and populated. The narratives people construct to traverse it are just as full of wonder. 

Humans are not going to stop using technology to help us create anytime soon—and there’s no reason we should. Machines make for wonderful tools, as they always have. But when we turn the tools themselves into artists and storytellers, brains and bodies, magicians and ghosts, we bypass truth for wish fulfillment. Maybe what’s worse, we rob ourselves of the opportunity to contribute our own voices to the lively and loud chorus of human experience. And we keep others from the human pleasure of hearing them too. 

Rebecca Ackermann is a writer, designer, and artist based in San Francisco.

Why the climate promises of AI sound a lot like carbon offsets 

2025-04-11 07:52:58

The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data centers pushes them up.

The finding echoes a point that prominent figures in the AI sector have made as well to justify, at least implicitly, the gigawatts’ worth of electricity demand that new data centers are placing on regional grid systems across the world. Notably, in an essay last year, OpenAI CEO Sam Altman wrote that AI will deliver “astounding triumphs,” such as “fixing the climate,” while offering the world “nearly-limitless intelligence and abundant energy.”

There are reasonable arguments to suggest that AI tools may eventually help reduce emissions, as the IEA report underscores. But what we know for sure is that they’re driving up energy demand and emissions today—especially in the regional pockets where data centers are clustering. 

So far, these facilities, which generally run around the clock, are substantially powered by natural-gas turbines, which produce significant levels of planet-warming emissions. Electricity demands are rising so fast that developers are proposing to build new gas plants and convert retired coal plants to supply the buzzy industry.

The other thing we know is that there are better, cleaner ways of powering these facilities already, including geothermal plants, nuclear reactors, hydroelectric power, and wind or solar projects coupled with significant amounts of battery storage. The trade-off is that these facilities may cost more to build or operate, or take longer to get up and running.

There’s something familiar in the suggestion that it’s okay to build data centers that run on fossil fuels today because AI tools will help the world reduce emissions eventually. It recalls the purported promise of carbon credits: that it’s fine for a company to carry on polluting at its headquarters or plants, so long as it’s also funding, say, the planting of trees that will suck up a commensurate level of carbon dioxide.

Unfortunately, we’ve seen again and again that such programs often exaggerate the climate benefits, doing little to alter the balance of what’s going into or coming out of the atmosphere.  

But in the case of what we might call “AI offsets,” the potential to overstate the gains may be greater, because the promised benefits wouldn’t meaningfully accrue for years or decades. Plus, there’s no market or regulatory mechanism to hold the industry accountable if it ends up building huge data centers that drive up emissions but never delivers on these climate claims. 

The IEA report outlines instances where industries are already using AI in ways that could help limit emissions, including detecting methane leaks in oil and gas infrastructure, making power plants and manufacturing facilities more efficient, and reducing energy consumption in buildings.

AI has also shown early promise in materials discovery, helping to speed up the development of novel battery electrolytes. Some hope the technology could deliver advances in solar materials, nuclear power, or other clean energy technologies and improve climate science, extreme weather forecasting, and disaster response, as other studies have noted. 

Even without any “breakthrough discoveries,” the IEA estimates, widespread adoption of AI applications could cut emissions by 1.4 billion tons in 2035. Those reductions, “if realized,” would be as much as triple the emissions from data centers by that time, under the IEA’s most optimistic development scenario.
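The scale of that claim is easier to see with the arithmetic made explicit. This is a back-of-the-envelope reading of the figures quoted above, not a number the IEA states directly:

```python
# IEA estimate: widespread AI adoption could cut emissions by 1.4
# billion tons of CO2 in 2035, "as much as triple" the emissions from
# data centers under its most optimistic development scenario.
potential_cuts_gt = 1.4   # billion tons (gigatons) of CO2
ratio = 3                 # "as much as triple"

# Implied data-center emissions in that scenario: roughly 0.47 Gt.
implied_data_center_gt = potential_cuts_gt / ratio
```

In other words, the optimistic scenario pairs nearly half a billion tons of data-center emissions against reductions three times that size—if the reductions materialize.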

But that’s a very big “if.” It requires placing a lot of faith in technical advances, wide-scale deployments, and payoffs from changes in practices over the next 10 years. And there’s a big gap between how AI could be used and how it will be used, a difference that will depend a lot on economic and regulatory incentives.

Under the Trump administration, there’s little reason to believe that US companies, at least, will face much government pressure to use these tools specifically to drive down emissions. Absent the necessary policy carrots or sticks, it’s arguably more likely that the oil and gas industry will deploy AI to discover new fossil-fuel deposits than to pinpoint methane leaks.

To be clear, the IEA’s figures are a scenario, not a prediction. The authors readily acknowledged that there’s huge uncertainty on this issue, stating: “It is vital to note that there is currently no momentum that could ensure the widespread adoption of these AI applications. Therefore, their aggregate impact, even in 2035, could be marginal if the necessary enabling conditions are not created.”

In other words, we certainly can’t count on AI to drive down emissions more than it drives them up, especially within the time frame now demanded by the dangers of climate change. 

As a reminder, it’s already 2025. Rising emissions have now pushed the planet perilously close to fully tipping past 1.5 ˚C of warming, the risks from heatwaves, droughts, sea-level rise, and wildfires are climbing—and global climate pollution is still going up. 

We are barreling toward midcentury, just 25 years shy of when climate models show that every industry in every nation needs to get pretty close to net-zero emissions to prevent warming from surging past 2 ˚C over preindustrial levels. And yet any new natural-gas plants built today, for data centers or any other purpose, could easily still be running 40 years from now.

Carbon dioxide stays in the atmosphere for hundreds of years. So even if the AI industry does eventually provide ways of cutting more emissions than it produces in a given year, those future reductions won’t cancel out the emissions the sector will pump out along the way—or the warming they produce.

It’s a trade-off we don’t need to make if AI companies, utilities, and regional regulators make wiser choices about how to power the data centers they’re building and running today.

Some tech and power companies are taking steps in this direction, by spurring the development of solar farms near their facilities, helping to get nuclear plants back online, or signing contracts to get new geothermal plants built. 

But such efforts should become more the rule than the exception. We no longer have the time or carbon budget to keep cranking up emissions on the promise that we’ll take care of them later.

The Download: AI co-creativity, and what Trump’s tariffs mean for batteries

2025-04-10 20:10:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI can help supercharge creativity

Existing generative tools can automate a striking range of creative tasks and offer near-instant gratification—but at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop.

And so they are looking for ways to inject human creativity back into the process: working on what’s known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves.

The aim is to develop AI tools that augment our creativity rather than strip it from us—pushing us to be better at composing music, developing games, designing toys, and much more—and lay the groundwork for a future in which humans and machines create things together.

Ultimately, generative models could offer artists and designers a whole new medium, pushing them to make things that couldn’t have been made before, and give everyone creative superpowers. Read the full story.

—Will Douglas Heaven

This story is from the next edition of our print magazine, which is all about creativity. Subscribe now to read it and get a copy of the magazine when it lands!

Tariffs are bad news for batteries

Since Donald Trump announced his plans for sweeping tariffs last week, the vibes have been, in a word, chaotic. Markets have seen one of the quickest drops in the last century, and it’s widely anticipated that the global economic order may be forever changed.  

These tariffs could be particularly rough on the battery industry. China dominates the entire supply chain and is subject to monster tariff rates, and even US battery makers won’t escape the effects. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has announced a 90-day tariff pause for some countries 
He’s decided that all the countries that didn’t retaliate against the severe tariffs would receive a reprieve. (The Guardian)
+ China, however, is now subject to a whopping 125% tariff. (CNBC)
+ Chinese sellers on Amazon are preparing to hike their prices in response. (Reuters)
+ Trump’s advisors have claimed the pivot was always part of the plan. (Vox)

2 DOGE has fired driverless car safety assessors
Many of whom were in charge of regulating Tesla, among other companies. (FT $)
+ The department is being audited by the Government Accountability Office. (Wired $)
+ Can AI help DOGE slash government budgets? It’s complex. (MIT Technology Review)

3 The cost of a US-made iPhone could rise by 90%
Bank of America has crunched the numbers. (Bloomberg $)
+ Even so, an American-made iPhone could be inferior quality. (WSJ $)
+ Apple has chartered 600 tons of iPhones to India. (Reuters)

4 The EU wants to build its own AI gigafactories
In a bid to catch up with the US and China. (WSJ $)

5 Amazon was forced to cancel its satellite internet launch
A rocket carrying a few thousand satellites was unable to take off due to bad weather. (NYT $)

6 America’s air quality is likely to get worse
The Trump administration is rolling back the environmental rules that helped lower air pollution. (The Atlantic $)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)

7 Spammers exploited OpenAI’s tech to blast customized spam
The unwanted messages were distributed over four months. (Ars Technica)

8 Chinese social media is filled with memes mocking Trump’s tariffs
Featuring finance bros and JD Vance unhappily laboring in factories. (Insider $)

9 Do you have a Fortnite accent?
Players of the popular game tend to speak in a highly specific way. (Wired $)

10 An em dash is not a giveaway something has been written by AI
Humans use it too—and love it. (WP $)
+ Not all AI-generated writing is bad. (New Yorker $)
+ AI-text detection tools are really easy to fool. (MIT Technology Review)

Quote of the day

“Entering a group chat is like leaving your front door unlocked and letting strangers wander in.”


—Author LM Chilton reflects on the innate dangers of trusting that what you say in a group chat stays in the group chat to Wired.

The big story

Digital twins of human organs are here. They’re set to transform medical treatment.

Steven Niederer, a biomedical engineer at the Alan Turing Institute and Imperial College London, has a cardboard box filled with 3D-printed hearts. Each of them is modeled on the real heart of a person with heart failure, but Niederer is more interested in creating detailed replicas of people’s hearts using computers.

These “digital twins” are the same size and shape as the real thing. They work in the same way. But they exist only virtually. Scientists can do virtual surgery on these virtual hearts, figuring out the best course of action for a patient’s condition.

After decades of research, models like these are now entering clinical trials and starting to be used for patient care. The eventual goal is to create digital versions of our bodies—computer copies that could help researchers and doctors figure out our risk of developing various diseases and determine which treatments might work best.

But the budding technology will need to be developed very carefully. Read the full story to learn why.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Good news pop fans: Madonna and Elton John have ended their decades-long feud.
+ It’s time to take a trip to all 15 of these top restaurants across the world.
+ These tales of cross-generational friendships are truly heartwarming.
+ I’d love to know the secret behind America’s mystery mounds.