2026-04-15 02:00:00
Software engineering has experienced two seismic shifts this century. First was the rise of the open source movement, which gradually made code accessible to developers and engineers everywhere. Second, the adoption of development and operations (DevOps) practices and agile methodologies took software from siloed to collaborative development and from batch to continuous delivery. Now, a third such shift looks to be taking shape with the adoption of agentic AI in software engineering.
Thus far, engineering teams have mainly used AI to assist with coding, testing, and other individual tasks, within tightly designed parameters. But with agentic capabilities, AI agents become reasoning, self-directing entities that can manage not just discrete tasks but entire software projects—and do so largely autonomously. If fully embraced by engineering teams, agentic AI will usher in end-to-end software process automation and, ultimately, agent-managed development and product lifecycle automation.

This report, which is based on a survey of 300 engineering and technology executives, finds that software engineering teams are seeing the potential in agentic AI and are beginning to put it to use, but so far in a mainly limited fashion. Their ambitions for it are high, but most realize it will take time and effort to reduce the barriers to its full diffusion in software operations. As with DevOps and agile, reaping the full benefits of agentic AI in engineering will require sometimes difficult organizational and process change to accompany technology adoption. But the gains to be won in speed, efficiency, and quality promise to make any such pain well worthwhile.

Key findings include the following:
Adoption momentum is building. While half of organizations deem agentic AI a top investment priority for software engineering today, it will be a leading investment for over four-fifths in two years. That spending is driving accelerated adoption. Agentic AI is in (mostly limited) use by 51% of software teams today, and 45% have plans to adopt it within the next 12 months.
Early gains will be incremental. It will take time for software teams’ investments in agentic AI to start bearing fruit. Over the next two years, most expect the improvements from agent use to be slight (14%) or at best moderate (52%). But around one-third (32%) have higher expectations, including 9% who think the improvements will be game changing.
Agents will accelerate time-to-market. The chief gains from agentic AI use over that two-year time frame will come from greater speed. Nearly all respondents (98%) expect their teams’ delivery of software projects from pilot to production to accelerate, with the anticipated increase in speed averaging 37% across the group.
The goal for most is full agentic lifecycle management. Teams’ ambitions for scaling agentic AI are high. Most aim for AI agents to be managing the product development and software development lifecycles (PDLC and SDLC) end to end relatively quickly. At 41% of organizations, teams aim to achieve this for most or all products in 18 months. That figure will rise to 72% two years from now, if expectations are met.
Compute costs and integration pose key early challenges. For all survey respondents—but especially in early-adopter verticals such as media and entertainment and technology hardware—integrating agents with existing applications and the cost of computing resources are the main challenges they face with agentic AI in software engineering. The experts we interviewed, meanwhile, emphasize the bigger change management difficulties teams will face in changing workflows.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
2026-04-14 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. Stanford’s 2026 AI Index—the field’s annual report card—cuts through the noise.
The data reveals a technology evolving faster than we can manage. From the China-US rivalry and model breakthroughs to public sentiment and the impact on jobs, here are the index’s key findings on the state of AI today.
—Michelle Kim
Stanford’s 2026 AI Index is full of striking stats. It also reveals a field riddled with inconsistencies, most notably in the gap between experts and non-experts.
On jobs, 73% of US experts view AI’s impact positively, compared to just 23% of the public. Similar divides emerged on the economy and healthcare. What’s driving this disconnect?
Part of the answer may lie in their diverging experiences. Those using AI for coding and technical work see it at its best, while everyone else gets a more mixed bag. The result is two very different realities. Read the full story on what they are—and why they matter.
This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.
—Will Douglas Heaven
Grizzly bears have made such a comeback across eastern Montana that in 2017, the state hired its first-ever prairie-based grizzly manager: wildlife biologist Wesley Sarmento.
For seven years, Sarmento worked to keep both bears and humans out of trouble. He acted like a first responder, trying to defuse potentially dangerous situations. He even got caught in some himself, which led him to a new wildlife safety tool: drones. Find out the results of his experiments in digital ecology.
—Emily Senkosky
This article is from the next issue of our print magazine, which is all about nature. Subscribe now to read it when it lands on Wednesday, April 22.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Human scientists still trounce the top AI agents at complex tasks
The best agents perform only half as well as experts with PhDs. (Nature)
+ Can AI really help us discover new materials? (MIT Technology Review)
2 OpenAI is escalating its fight with Anthropic while pulling away from Microsoft
A leaked memo exposes plans to attack Anthropic. (Axios)
+ And says Microsoft “limited our ability” to reach clients. (The Information $)
+ While touting a budding alliance with Amazon. (CNBC)
3 Carbon removal technology is stalling—and that may be good news
Better solutions could now emerge. (New Scientist)
+ Here are three that are set to break through. (MIT Technology Review)
4 AI is finding bugs faster than we can fix them—and hackers will benefit
Welcome to the bug armageddon. (WSJ $)
+ AI may soon be capable of fully automated attacks. (MIT Technology Review)
5 A Texas man has been charged with the attempted murder of Sam Altman
He allegedly threw a Molotov cocktail at the OpenAI CEO’s home last Friday. (NPR)
+ The suspect reportedly had a list of other AI leaders. (NYT $)
6 AI is beginning to transform mathematics
It’s proving new results at a rapid pace. (Quanta)
+ One AI startup plans to unearth new mathematical patterns. (MIT Technology Review)
7 Students are turning away from computer science
It’s had a massive drop in enrollments. (WP $)
+ AI coding tools have diminished the degree’s value. (NYT $)
8 India’s bid to become a data center hub is sparking a fierce backlash
Farmers are protesting Delhi’s courtship of hyperscalers. (Rest of World)
9 Meta is set to overtake Google in advertising revenue this year
And become the world’s largest digital ad platform for the first time. (WSJ)
10 AI influencers are taking over Coachella
Synthetic content creators are “everywhere” at the festival. (The Verge)
Quote of the day
—The alleged firebomber of Sam Altman’s home shares his distrust of AI leaders in a blog post.
One More Thing

A few years ago, Brad Lowell, a Harvard University neuroscientist, figured out how to crank the food drive to the maximum. He did it by stimulating neurons in mice. Now, he’s following known parts of the neural hunger circuits into uncharted parts of the brain.
The work could have important implications for public health. More than 1.9 billion adults worldwide are overweight, and more than 650 million are obese. Understanding the circuits involved could shed new light on why these numbers are skyrocketing.
—Adam Piore
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Someone built a mechanical version of Tony Hawk’s Pro Skater from Lego.
+ Enjoy this wholesome clip of toddlers discovering the existence of hugs.
+ This interactive body map shows exactly which exercises you need.
+ Jon McCormack’s photos of nature’s patterns are breathtaking.
Top image credit: Stephanie Arnett/MIT Technology Review | Getty Images
2026-04-14 20:04:47
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.
Just before Artemis II began its historic slingshot around the moon, Jared Isaacman, the recently confirmed NASA administrator, made a flurry of announcements from the agency’s headquarters in Washington, DC. He said the US would soon undertake far more regular moon missions and establish the foundations for a base at the lunar south pole before the end of the decade. He also affirmed the space agency’s commitment to putting a nuclear reactor on the lunar surface.
These goals were largely expected—but there was still one surprise. Isaacman also said NASA would build the first-ever nuclear reactor-powered interplanetary spacecraft and fly it to Mars by the end of 2028. It’s called the Space Reactor-1 Freedom, or SR-1 for short. “After decades of study, and billions spent on concepts that have never left Earth, America will finally get underway on nuclear power in space,” he said at the event. “We will launch the first-of-its-kind interplanetary mission.”
A successful mission would herald a new era in spaceflight, one in which traveling between Earth, the moon, and Mars would—according to a range of experts—be faster and easier than ever. And it might just give the US the edge in the race against China—allowing the country to beat its greatest geopolitical rival to landing astronauts on another planet.
While experts agree the timeline is extremely tight, they’re excited to see if America’s space agency and its industry partners can deliver an engineering miracle. “You wake up to that announcement, and it puts a big smile on your face,” says Simon Middleburgh, co-director of the Nuclear Futures Institute at Bangor University in Wales.
Little detail on SR-1 is publicly available, and NASA’s own spaceflight researchers did not respond to requests for comment. But MIT Technology Review spoke to several nuclear power and propulsion experts to find out how the new nuclear-powered spacecraft might work.
Traditionally, spaceflight has been powered by chemical propulsion. Liquid hydrogen and liquid oxygen are mixed and ignited inside a rocket engine; the searingly hot exhaust from this combustion is ejected through a nozzle, which propels the rocket forward.
Chemical propulsion offers a significant amount of thrust and will, for the foreseeable future, still be used to launch spacecraft from Earth. But nuclear propulsion would enable spacecraft to fly through the solar system for far longer, and faster, than is currently possible.
“You get more bang per kilogram,” says Middleburgh. A nuclear fuel source is far more energy-dense than its conventional cousin, which means it’s orders of magnitude more efficient. “It’s really, really, really high efficiency,” says Lindsey Holmes, an expert in space nuclear technology and the vice president of advanced projects at Analytical Mechanics Associates, an aerospace company in Virginia.
The approach also removes another element of the traditional power equation: solar energy. Spacecraft, including the Artemis II mission’s Orion space capsule, often rely on the sun for power. But this can be a problem, since sunlight isn’t always available, particularly when a planet or moon gets in the way, and as you head toward the outer solar system, beyond Mars, there’s simply less of it.
To circumvent this issue, nuclear energy sources have been used in spacecraft plenty of times before—including on both Voyager missions and the Saturn-interrogating Cassini probe. Known as radioisotope thermoelectric generators, or RTGs, these use plutonium, which radioactively decays and generates heat in the process. That heat is then converted into electricity for the spacecraft to use. RTGs, however, aren’t the same as nuclear reactors; they are more akin to radioactive batteries—more rudimentary and considerably less powerful.
So how will a nuclear-reactor-powered spacecraft work?
Despite operational differences, the fundamentals of running a nuclear reactor in space are much the same as they are on Earth. First, get some uranium fuel; then bombard it with neutrons. This ruptures the uranium’s unstable atomic nuclei, which expel a torrent of extra neutrons—and that rapidly escalates into a self-sustaining, roasting-hot nuclear fission reaction. Its prodigious heat output can then be used to produce electricity.
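The "rapidly escalates" part comes down to a multiplication factor: on average, each fission event must trigger more than one further fission for the reaction to grow. A minimal sketch of that geometric growth, using an invented multiplication factor purely for illustration:

```python
# Sketch of chain-reaction growth. k_eff is the effective multiplication
# factor: the average number of neutrons from one fission that go on to
# trigger another fission. k_eff > 1 means a growing, self-sustaining
# reaction; the values below are invented for illustration.

def neutron_generations(k_eff: float, generations: int, start: float = 1.0) -> list[float]:
    """Relative neutron population over successive fission generations."""
    population = [start]
    for _ in range(generations):
        population.append(population[-1] * k_eff)
    return population

# Supercritical: the population grows geometrically.
print(neutron_generations(1.5, 5))  # [1.0, 1.5, 2.25, 3.375, 5.0625, 7.59375]

# A power reactor holds k_eff at exactly 1.0 for steady heat output.
print(neutron_generations(1.0, 3))  # [1.0, 1.0, 1.0, 1.0]
```

In practice, control systems keep the factor pinned near 1.0 so the reactor produces steady heat rather than a runaway reaction.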
Doing this in space may sound like an act of lunacy, but it’s not: The idea, and even a lot of the basic technology, has been around for decades. The Soviet Union sent dozens of nuclear reactors into orbit (often to power spy satellites), while the US deployed just one, known as SNAP-10A, back in 1965—a technological demonstration to see if it would operate normally in space. The aim was for the reactor to generate electricity for at least a year, but it ran for just over a month before a high-voltage failure in the spacecraft caused it to malfunction and shut down.
Now, more than half a century later, the US wants its second-ever space-based nuclear reactor to do something totally different: power an interplanetary spacecraft.
To be clear, the US has started, and terminated, myriad programs looking into nuclear propulsion. The latest casualty was DRACO, a collaboration between NASA and the Department of Defense, which ended in 2025. Like several previous efforts, DRACO was canceled because of a mix of high experimentation costs, lower prices for conventional rocket propulsion, and the difficulty of ensuring that ground tests could be performed safely and effectively (they are creating an incredibly powerful nuclear reaction, after all).
But now external considerations may be changing the calculus. The Artemis program has jump-started America’s return to the moon, and the new space race has palpable momentum behind it. The first nation to deploy nuclear propulsion would have a serious advantage navigating through deep space.
“I think it’s a very doable technology,” says Philip Metzger, a spaceflight engineering researcher at the Florida Space Institute. “I’m happy to see them finally doing this.”
One version of this technology is known as nuclear thermal propulsion, or NTP. You start with a nuclear reactor, one that’s cooking at around 5,000°F. Then “you’ve got a cold gas, and you squirt cold gas over the hot reactor,” says Middleburgh. “The gas expands, you shoot it out the back of a nozzle, and you have an impulse. And that impulse drives you forward.”
Because the thrust depends on the speed of the gas being ejected, the propellant gas needs to be light, making hydrogen a popular choice. But hydrogen is a corrosive and explosive substance, so using it in NTP engines can make them precarious to operate. On top of this, NTP doesn’t necessarily have a very long operating life.
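Why a light gas? For an idealized nozzle expansion, exhaust velocity scales with the square root of temperature divided by molar mass, so a lighter propellant leaves the nozzle much faster at the same reactor temperature. A back-of-the-envelope sketch (textbook ideal-gas formula; the operating temperature and the nitrogen comparison are invented for illustration, not NASA figures):

```python
import math

R = 8.314           # J/(mol*K), universal gas constant
GAMMA = 1.4         # heat-capacity ratio, a rough value for a diatomic gas
T_CHAMBER = 2800.0  # K, invented reactor-outlet temperature (roughly 5,000 F)

def ideal_exhaust_velocity(molar_mass_kg: float) -> float:
    """Ideal exhaust velocity (m/s) for complete expansion to vacuum:
    v_e = sqrt(2 * gamma / (gamma - 1) * (R / M) * T)."""
    return math.sqrt(2 * GAMMA / (GAMMA - 1) * (R / molar_mass_kg) * T_CHAMBER)

v_hydrogen = ideal_exhaust_velocity(0.002016)  # H2
v_nitrogen = ideal_exhaust_velocity(0.028014)  # N2, a heavier stand-in

print(f"H2: {v_hydrogen:.0f} m/s, N2: {v_nitrogen:.0f} m/s")
```

Under these assumptions hydrogen comes out more than three times faster than nitrogen, which is exactly the ratio of the square roots of their molar masses: the whole case for putting up with hydrogen's handling headaches.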
Alternatively, there’s nuclear electric propulsion, or NEP, which “is very low thrust, but very efficient, so you can use it for a long period of time,” says Sebastian Corbisiero, the US Department of Energy’s national technical director of space reactor programs. This method uses heat from a fission reactor to generate power. That power is used to electrify a gas and then blast it out of the spacecraft, generating thrust.
Both NTP and NEP have been investigated by US researchers, because both have the added benefit of making it easier and safer for human beings to explore the solar system. Astronauts in space are exposed to harmful cosmic radiation, but because nuclear propulsion makes spacecraft speedier and more agile, they’d spend less time in transit and thus less time being irradiated. “It solves the radiation problem,” says Metzger. “That’s one of the main motivations for inventing better propulsion to and from Mars.”
For SR-1, NASA has opted for nuclear electric propulsion. NEP is “a much simpler affair” than its thermal counterpart, says Middleburgh. Essentially, you just need to plug a nuclear reactor into a power-and-propulsion system. Luckily for NASA, it’s already got one.
For many years, NASA—along with its space agency partners in Canada, Europe, Japan, and the Middle East—was preparing for Gateway, meant to be humanity’s first space station to orbit around the moon. Isaacman canceled the project in March, but that doesn’t mean its technology will go to waste; the power-and-propulsion element of the nixed space station will be used in SR-1 instead. This contraption was going to be powered by solar energy. It’ll now be attached to an in-development nuclear reactor custom built to survive in space.
What might the SR-1 look like? MIT Technology Review saw a presentation by Steve Sinacore, program executive of NASA’s Space Reactor Office, that offers some clues. So far, the concept art makes it look like a colossal fletched arrow. At the back will be the power-and-propulsion system, while its tip will hold a 20-kilowatt-or-greater uranium-filled nuclear reactor. (For context, a typical nuclear plant on Earth is 50,000 times more powerful, producing a gigawatt of power.)

The “fletches” on SR-1 are large fins that allow the reactor to cool down. “You have to have really large radiators,” says Holmes, since the nuclear fission process produces so much heat that much of it has to be vented into space—otherwise, the reactor and spacecraft will melt.
According to that presentation, the spacecraft’s hardware development is due to start this June. By January 2028, SR-1’s systems should be ready for assembly and testing. And by that October, the spacecraft will arrive at the launch site, ready for liftoff before the year’s end. Will the nuclear reactor manage to hold itself together? “Going through the launch safely is going to be a challenge,” says Middleburgh. “You are being shaken, rattled, and rolled.”
Then, he says, “once you’re up in space, once you’ve got through that few minutes of hell in getting there, it’s zero-gravity considerations you have to worry about.” The question then becomes: Will the mechanics of the reactor, built on terra firma, still work?
For safety reasons, the nuclear reactor will be switched on around two days post-launch, when it’s comfortably in space. Uranium isn’t tremendously dangerous by itself, but that can’t be said of the nuclear waste products that emerge when the reactor is activated, so you don’t want any of that to fall back to Earth.
If this schedule is adhered to, and SR-1 works as planned, it’s expected to reach Mars about a year after launch. “It’s an aggressive timeline,” says Holmes, something she suspects is being driven partly by China’s and Russia’s own deep-space nuclear ambitions. The two countries aim to place their own nuclear reactor on the moon’s surface to power the planned International Lunar Research Station—a jointly operated lunar base—by 2035.
Whether it flies or fails in space, SR-1’s operations should help NASA put a nuclear reactor on the moon soon after. “All of the things we’d be learning about how that system operates in space [are] very helpful for a surface application, because basically it’s the same,” says Corbisiero. “There’s still no air on the moon.”
And if SR-1 does triumph, it will be a game-changing victory for NASA. It will also be “a massive win for the human race, frankly,” says Middleburgh. “It will be a marvel of engineering, and it will move the dial in humans potentially taking a step on Mars.” Like many of his colleagues, including Holmes, he remains thrilled by the prospect of the first-ever nuclear-powered interplanetary spacecraft—even with the incredibly ambitious timeline.
“These are the things that get us up in the morning,” he says. “These are the sorts of things we will remember when we’re old.”
2026-04-14 19:00:00
Each year we compile our 10 Breakthrough Technologies list, featuring our educated predictions for which technologies will have the biggest impact on how we live and work.
This year, however, we had a dilemma. While our final picks encompass all our core coverage areas (energy, AI, and biotech, plus a few more), our 2026 list was harder to wrangle than normal. Why? We had so many worthy AI candidates we couldn’t fit them all in! (The ones that made it were AI companions, mechanistic interpretability, generative coding, and hyperscale data centers.) Many great ideas fell by the wayside to keep the list as wide-ranging as possible.
Well, that got us thinking: What if we made an entirely new list that was all about AI? We got excited about that idea—and before we knew it we had the beginnings of what we’re calling 10 Things That Matter in AI Right Now. It’s an entirely new annual list that we’re proud to be publishing for the first time on April 21, 2026. We’ll unveil it on stage for attendees at our signature AI conference, EmTech AI, held on MIT’s campus (it’s not too late to get tickets), and then publish the list online later that day.
The process for coming up with the list was similar to the way we pick our 10 Breakthrough Technologies. We petitioned our AI team of reporters and editors to propose ideas, put them all in a document, and engaged in some robust discussion. Eventually, we voted for our favorites and whittled the long list down to a final 10.
But there’s a slight difference between this list and our 10 Breakthrough Technologies. AI is already such a big part of our lives that we didn’t want to restrict ourselves to nominating only technologies. Instead, we wanted to put together a definitive annual list that highlights what we believe are the biggest ideas, topics, and research directions in AI right now. So yes, it will include cutting-edge AI technologies, but it will also feature other trends and developments in AI that we want to bring to our subscribers’ attention.
Think of it as a sneak peek inside the collective brain of our crack AI reporting team: These are the things that our reporters will be watching this year. We intend to follow the items on this list really closely, and you will see that reflected in the news and feature stories we publish in 2026.
For us, 10 Things That Matter in AI Right Now is a guide to how we view the current AI landscape. It will be a source of discussion, debate, and maybe some arguments! We are so excited to share it with you on April 21. If you want to be among the first to see it—join us at EmTech AI or become a subscriber to livestream the announcement.
2026-04-14 18:00:00
You’ve probably heard some version of this idea before: that many of us have an “inner Neanderthal.” That is to say, around 45,000 years ago, when Homo sapiens first arrived in Europe, they met members of a cousin species—the broad-browed, heavier-set Neanderthals—and, well, one thing led to another, which is why some people now carry a small amount of Neanderthal DNA.
This DNA is arguably the 21st century’s most celebrated discovery in human evolution. It has been connected to all kinds of traits and health conditions, and it helped win the Swedish geneticist Svante Pääbo a Nobel Prize.
But in 2024, a pair of French population geneticists called into question the foundation of the popular and pervasive theory.
Lounès Chikhi and Rémi Tournebize, then colleagues at the Université de Toulouse, proposed an alternative explanation for the very same genomic patterns. The problem, they said, was that the original evidence for the inner Neanderthal was based on a statistical assumption: that humans, Neanderthals, and their ancestors all mated randomly in huge, continent-size populations. That meant a person in South Africa was just as likely to reproduce with a person in West Africa or East Africa as with someone from their own community.
Archaeological, genetic, and fossil evidence all shows, though, that Homo sapiens evolved in Africa in smaller groups, cut off from one another by deserts, mountains, and cultural divides. People sometimes crossed those barriers, but more often they partnered up within them.
In the terminology of the field, this dynamic is called population structure. Because of structure, genes do not spread evenly through a population but can concentrate in some places and be totally absent from others. The human gene pool is not so much an Olympic-size swimming pool as a complex network of tidal pools whose connectivity ebbs and flows over time.
This dynamic greatly complicates the math at the heart of evolutionary biology, which long relied on assumptions like randomly mating populations to extract general principles from limited data. If you take structure into account, Chikhi told me recently, then there are other ways to explain the DNA that some living people share with Neanderthals—ways that don’t require any interspecies sex at all.
“I believe most species are spatially organized and structured in different, complex ways,” says Chikhi, who has researched population structure for more than two decades and has also studied lemurs, orangutans, and island birds. “It’s a general failure of our field that we do not compare our results in a clear way with alternative scenarios.” (Pääbo did not respond to multiple requests for comment.)
Chikhi and Tournebize’s argument is about population structure, yes, but at heart, it is actually one about methods—how modern evolutionary science deploys computer models and statistical techniques to make sense of mountains upon mountains of genetic data.
They’re not the only scientists who are worried. “People think we really understand how genomes evolve and can write sophisticated algorithms for saying what happened,” says William Amos, a University of Cambridge population geneticist who has been critical of the “inner Neanderthal” theory. But, he adds, those models are “based on simple assumptions that are often wrong.”
And if they’re wrong, what’s at stake is far more than a single evolutionary mystery.
Back in 2010, Pääbo’s lab pulled off something of a miracle. The researchers were able to extract DNA from nuclei in the cells of 40,000-year-old Neanderthal bones. DNA breaks down quickly after death, but the group got enough of it from three different individuals to produce a draft sequence of the entire Neanderthal genome, with 4 billion base pairs.
As part of their study, they performed a statistical test comparing their Neanderthal genome with the genomes of five present-day people from different parts of the world. That’s how they discovered that modern humans of non-African ancestry share a small amount of DNA with Neanderthals, a species that diverged from the Homo sapiens line more than 400,000 years ago. Modern humans of African ancestry, and our closest living relative, the chimpanzee, do not share that DNA.

Pääbo’s team interpreted this as evidence of sexual reproduction between ancient Homo sapiens and the Neanderthals they encountered after they expanded out of Africa. “Neanderthals are not totally extinct,” Pääbo said to the BBC in 2010. “In some of us, they live on a little bit.”
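The test in question is widely known as the D-statistic, or "ABBA-BABA" test. In rough terms, it counts genome sites where the non-African individual shares a derived allele with the Neanderthal (ABBA) against sites where the African individual does (BABA); under a no-interbreeding, randomly mating null model, the two counts should be about equal. A toy sketch, with entirely invented allele patterns:

```python
# Toy D-statistic ("ABBA-BABA" test). Each site lists the allele observed
# in (African, non-African, Neanderthal, chimpanzee); the chimp outgroup
# defines the ancestral state. All site data below are invented.

def d_statistic(sites):
    """D = (nABBA - nBABA) / (nABBA + nBABA) over aligned sites."""
    abba = baba = 0
    for african, non_african, neanderthal, chimp in sites:
        if african == chimp and non_african == neanderthal and african != non_african:
            abba += 1  # ABBA: non-African shares the derived allele with the Neanderthal
        elif non_african == chimp and african == neanderthal and african != non_african:
            baba += 1  # BABA: African shares the derived allele with the Neanderthal
    return (abba - baba) / (abba + baba)

sites = [
    ("A", "G", "G", "A"),  # ABBA
    ("C", "T", "T", "C"),  # ABBA
    ("A", "G", "G", "A"),  # ABBA
    ("G", "A", "G", "A"),  # BABA
    ("A", "A", "A", "A"),  # uninformative
]

print(d_statistic(sites))  # 0.5: an excess of ABBA sites
```

A D value significantly above zero, as in this made-up example, is what Pääbo's team read as interbreeding. Chikhi and Tournebize's point, described below, is that this reading leans on the assumption of random mating baked into the null model.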
The discovery was monumental on its own—but even more so because it reversed a previous consensus. More than a decade earlier, in 1997, Pääbo had sequenced a much smaller amount of Neanderthal DNA, in that case from a cell structure called a mitochondrion. It was different enough from Homo sapiens mitochondrial DNA for his team to cautiously conclude there had been “little or no interbreeding” between the two species.
After 2010, though, the idea of hybridization, also called admixture, effectively became canon. Top journals like Science and Nature published study after study on the inner Neanderthal. Some scientists have argued that Homo sapiens would never have adapted to colder habitats in Europe and Asia without an infusion of Neanderthal DNA. Other research teams used Pääbo’s techniques to find genetic traces of interbreeding with an extinct group of hominins in Asia, called the Denisovans, and a mysterious “ghost lineage” in Africa. Biologists used similar tests to find evidence of interbreeding between chimpanzees and bonobos, polar and brown bears, and all kinds of other animals.
The inner-Neanderthal hypothesis also took a turn for the personal. Various studies linked Neanderthal DNA to a head-spinning range of conditions: alcoholism, asthma, autism, ADHD, depression, diabetes, heart disease, skin cancer, and severe covid-19. Some researchers suggested that Neanderthal DNA had an impact on hair and skin color, while others assigned individuals a “NeanderScore” that was correlated with skull shape and prevalence of schizophrenia markers. Commercial genetic testing companies like 23andMe started offering customers Neanderthal ancestry reports.
The inner Neanderthal became a story we could tell ourselves about our flaws and genetic destiny: Don’t blame me; blame the prognathic caveman hiding in my cells. Or as Latif Nasser, a host of the popular-science program Radiolab, put it when he was hospitalized with Crohn’s disease, another Neanderthal-associated condition: “I just keep imagining these tiny Neanderthals … just, like, stabbing me and drawing these little droplets of blood out of me.”
“These things become meaningful to people,” Chikhi says. “What we say will be important to how people view themselves.”
When population geneticists built the theoretical framework for evolutionary biology in the early 20th century, genes were only abstract units of heredity inferred from experiments with peas and fruit flies. Population genetics developed theory far more quickly than it accumulated data. As a result, many data-driven scientists dismissed the study of evolution as a form of storytelling based on unexamined assumptions and preconceived ideas.
By the ’90s, though, genes were no longer abstractions but sequenced segments of DNA. Genomic sequencing grounded evolutionary studies in the kind of hard data that a chemist or physicist could respect.
Yet biologists could not simply read evolutionary history from genomes as though they were books. They were trying to determine which of a nearly infinite number of plausible histories was the most likely to have created the patterns they observed in a small sample of genomes. For that, they needed simplified, algorithmic models of evolution. The study of evolution shifted from storytelling to statistics, and from biology to computer science.
That suited Chikhi, who as a child was drawn to the predictable laws and numerical precision of math and science. He entered the field in the mid-’90s just as the first big studies of human DNA were settling old debates about human origins. DNA showed that Africa harbored far more genetic diversity than the entire rest of the planet. The new evidence supported the idea that modern humans evolved for hundreds of thousands of years in Africa and expanded to the other continents only in the last 100,000 years. For Chikhi, whose parents were Algerian immigrants, this discovery was a powerful challenge to the way some archaeologists and biologists talked about race. DNA could be used to deconstruct rather than encourage the pernicious idea that human races had deep-seated evolutionary differences based on their places of origin.
At the same time, though, he was wary of the tendency to treat DNA as the final verdict on open questions in evolution. Chikhi had been surprised when, back in 1997, Pääbo and his team used that small amount of mitochondrial DNA to rule out hybridization between Homo sapiens and Neanderthals. He didn’t think that the absence of Neanderthal DNA there necessarily meant it wouldn’t be found elsewhere in the Homo sapiens genome.
Chikhi’s own research in the aughts opened his eyes to the gaps between historical reality and models of evolution. For one, despite the assumption of random mating, none of the animals Chikhi studied actually mated randomly. Orangutans lived in highly fragmented habitats, which restricted their pool of potential mates, and female birds were often extremely picky about their male partners.
These factors could confound an evolutionary biologist’s traditional statistical tool kit. Scientists were starting to apply a mathematical technique to estimate historical population sizes for a species from the genome of just a single individual. This method showed sharp population declines in the histories of many different species. Chikhi realized, though, that the apparent declines could be an artifact of treating a structured population as one that evolved with random mating; in that case, the technique could indicate a bottleneck even if all the subgroups were actually growing in size. “This is completely counterintuitive,” he says.
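Chikhi's point about structured populations can be illustrated with a toy simulation — a sketch of the structured-coalescent intuition, not the actual method used in any of the studies discussed, and with purely illustrative parameters. In an "island model" with several demes linked by migration, two gene copies sampled from the same deme take much longer, on average, to trace back to a common ancestor than they would in a single randomly mating deme of the same size. A method that assumes random mating reads that extra depth as a larger ancestral population, which looks like a decline toward the present.

```python
import random

def coalescence_time(n_demes, deme_size, migration_rate, rng):
    """Generations until two lineages sampled from the same deme find a
    common ancestor, in a discrete-time island-model approximation."""
    a, b = 0, 0  # both lineages start in deme 0
    t = 0
    while True:
        t += 1
        # Coalescence is only possible while the lineages share a deme.
        if a == b and rng.random() < 1.0 / deme_size:
            return t
        # Each lineage independently migrates to a uniformly chosen deme
        # (possibly the one it is already in).
        if rng.random() < migration_rate:
            a = rng.randrange(n_demes)
        if rng.random() < migration_rate:
            b = rng.randrange(n_demes)

rng = random.Random(42)
reps = 2000
structured = sum(coalescence_time(5, 100, 0.05, rng) for _ in range(reps)) / reps
panmictic = sum(coalescence_time(1, 100, 0.0, rng) for _ in range(reps)) / reps
print(structured, panmictic)  # structured mean is several times larger
```

The discrete-generation loop is a crude stand-in for the continuous-time structured coalescent, but it captures the artifact: identical deme sizes, yet the structured sample looks like it came from a much bigger (and therefore "shrinking") population.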
That’s at least partly why, when Pääbo’s 2010 Neanderthal genome came out, Chikhi was impressed with the sheer technical accomplishment but also leery of the findings about hybridization. “It was the type of thing we conclude too quickly based on genetic data,” he says. Pääbo’s work mentioned population structure as a possible alternative explanation—but didn’t follow up.
Just a couple of years later, a pair of independent scientists named Anders Eriksson and Andrea Manica picked up the idea, building a model with simple population structure that explicitly excluded admixture. They simulated human evolution starting from 500,000 years ago and found that their model produced the same genomic patterns Pääbo’s group had interpreted as evidence of hybridization.
“Working with structured models is really out of the comfort zone of a lot of population geneticists,” says Eriksson, now a professor at the University of Tartu in Estonia.
Their research impressed Chikhi. “At the time, I thought people would focus on population structure in the evolution of humans,” he says. Instead, he watched as the inner-Neanderthal hypothesis took on a life of its own. Scientists produced new methods to quantify hybridization but rarely examined whether population structure would yield the same results. To Chikhi, this wasn’t science; it was storytelling, like some of the old narratives about the evolution of racial differences.
Chikhi and Tournebize decided to take a crack at the problem themselves. “I’ve always been very skeptical about science, and population genetics in particular,” says Tournebize, now a researcher at the French National Research Institute for Sustainable Development. “We make a lot of assumptions, and the models we use are very simplistic.” As detailed in a 2024 paper published in Nature Ecology & Evolution, they built a model of human evolution that replaced randomly mating continent-wide populations with many smaller populations linked by occasional migration. Then they let it run—a million times.
From those million runs, they kept the 20 scenarios that produced genomes most similar to the ones in a sample of actual Homo sapiens and Neanderthals. Many of these scenarios produced long segments of DNA like the ones their peers argued could only have been inherited from Neanderthals. They showed that several statistics, which other scientists had proposed as measurements of Neanderthal DNA, couldn’t actually distinguish between hybridization and population structure. What’s more, they showed that many of the models that supported hybridization failed to accurately predict other known features of human evolution.
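The keep-the-closest-runs step is, in spirit, rejection-based approximate Bayesian computation (ABC): draw scenarios from a prior, simulate data under each, and retain the handful whose summary statistics best match the observed genomes. Here is a minimal sketch with a made-up one-parameter model and a single summary statistic; none of the names or numbers come from the actual paper.

```python
import random

def simulate(param, rng):
    """Toy stand-in for a genome simulator: returns one noisy
    summary statistic as a function of the scenario parameter."""
    return param + rng.gauss(0.0, 1.0)

def abc_rejection(observed, n_runs, n_keep, rng):
    """Draw parameters from a uniform prior, simulate under each,
    and keep the n_keep runs whose statistics best match 'observed'."""
    runs = []
    for _ in range(n_runs):
        param = rng.uniform(0.0, 10.0)  # prior over scenarios
        stat = simulate(param, rng)
        runs.append((abs(stat - observed), param))
    runs.sort()  # smallest mismatch first
    return [param for _, param in runs[:n_keep]]

rng = random.Random(0)
accepted = abc_rejection(observed=4.0, n_runs=100_000, n_keep=20, rng=rng)
estimate = sum(accepted) / len(accepted)
print(round(estimate, 2))  # typically close to the true value of 4.0
```

The accepted scenarios approximate a posterior distribution over histories, which is why Chikhi and Tournebize end up with a set of compatible scenarios rather than a single answer.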
“A model will say there was admixture but then predict diversity that is totally incompatible with what we actually know of human diversity,” Chikhi says. “Nobody seems to care.”
So how did Neanderthal DNA wind up in living people if not via interspecies passion? Chikhi and Tournebize think it’s more likely that it was inherited by both Neanderthals and some sapiens groups in Africa from a common ancestor living at least half a million years ago. If the sapiens groups carrying those genetic variants included the people who migrated out of Africa, then the two human species would have already had the DNA in common when they came into contact in Europe and Asia—no sex required.
“The interpretation of genetic data is not straightforward,” Chikhi says. “We always have to make assumptions. Nobody takes data and magically comes up with a solution.”
Most of the half-dozen population geneticists I spoke with praised Chikhi and Tournebize’s ingenuity and appreciated the spirit of their critique. “Their paper forces us to think more critically about the model we use for inference and consider alternatives,” says Aaron Ragsdale, a population geneticist at the University of Wisconsin–Madison. His own work likewise suggests that the earliest Homo sapiens populations in Africa were probably structured—and that this is the likely reason for genomic patterns that other research groups had attributed to hybridization with a mysterious “ghost lineage” of hominins in Africa.
Yet most researchers still believe that modern humans and Neanderthals did probably have children with each other tens of thousands of years ago. Several pointed to the fact that fossil DNA of Homo sapiens who died thousands of years ago had longer chunks of apparent Neanderthal DNA than living people, which is exactly what you would expect if they had a more recent Neanderthal ancestor. (To address this possibility, Chikhi and Tournebize included DNA from 10 ancient humans in their study and found that most of them fit the structured model.) And while the Harvard population geneticist David Reich, who helped design the statistical test from Pääbo’s 2010 study, declined an interview, he did say he thought Chikhi and Tournebize’s model was “weak” and “very contrived,” adding that “there are multiple lines of evidence for Neanderthal admixture into modern humans that make the evidence for this overwhelming.” (Two other authors of that study, Richard Green and Nick Patterson, did not respond to requests for comment.)
Nevertheless, most scientists these days welcome the development of structured, or “spatially explicit,” models that account for the fact that any given member of a population is usually more closely related to individuals living nearby than to those living far away.
Other scientists also say that random mating isn’t the only assumption in population genetics that merits scrutiny. Models rarely factor in natural selection, which can also create genetic patterns that look like hybridization. Another common assumption is that everyone’s DNA mutates at the same, constant rate. “All the theory says the mutation rate is fixed,” says Amos, the Cambridge population geneticist. But he thinks that rate would have slowed drastically in the group of Homo sapiens that expanded to Europe around 45,000 years ago. This, too, could have created genomic patterns that other scientists interpret as evidence of interbreeding with Neanderthals.

The point here isn’t that a complex model of evolution with many moving pieces is necessarily better than a simple one. Scientists need to reduce complexity in order to see the underlying processes more clearly. But simple models require assumptions, and scientists need to reevaluate those assumptions in light of what they learn. “As you get more data, you can justify more complex models of the world,” says Mark Thomas, a population geneticist at University College London, who wrote a history of random mating in population genetics that highlighted how the field was starting to see it as “a limiting assumption as opposed to a simplifying one.”
It can feel discouraging to couch conversations about the past in confusing terms like “population structure” and “mutation rates.” It seems almost antithetical to the spirit of science to talk more about uncertainty at the same time we are developing powerful technologies and enormous data sets for analyzing evolution. These tools often yield novel answers, but they can also limit the questions we ask. The French archaeologist Ludovic Slimak, for example, has complained that the idea of the inner Neanderthal has domesticated our image of Neanderthals and made it difficult to imagine their humanity as distinct from our own. Investigating Neanderthal DNA is sexier to many young researchers than searching for archaeological and fossil evidence of how Neanderthals actually lived.
Loosening our attachment to certain narratives of evolution can create space for wonder at the sheer complexity of life’s history. Ultimately, that’s what Chikhi and Tournebize hope to do. After all, they don’t believe the question of population structure versus hybridization is either-or. It’s possible, and even likely, that both played a role in human evolution. “Our structured model does not necessarily mean that no admixture ever took place,” Chikhi and Tournebize wrote in their study. “What our results suggest is that, if admixture ever occurred, it is currently hard to identify using existing methods.”
Future methods might disentangle the different factors, but it’s just as important, Chikhi says, for scientists to be up-front about their assumptions and test alternatives. “There’s still so much uncertainty on so many aspects of the demographic history of Neanderthals and Homo sapiens,” he notes.
Keep that in mind the next time you read about your inner Neanderthal. The association between this DNA and some diseases may be real, of course—but would journals publish these studies without the additional claim that the DNA is from Neanderthals? Any good storyteller knows that sex sells, even in science.
Ben Crair is a science and travel writer based in Berlin.
2026-04-13 23:48:28
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
In an industry that doesn’t stand still, Stanford’s AI Index, an annual roundup of key results and trends, is a chance to take a breath. (It’s a marathon, not a sprint, after all.)
This year’s report, which dropped today, is full of striking stats. A lot of the value comes from having numbers to back up gut feelings you might already have, such as the sense that the US is gunning harder for AI than everyone else: It hosts 5,427 data centers (and counting). That’s more than 10 times as many as any other country.
There’s also a reminder that the hardware supply chain the AI industry relies on has some major choke points. Here’s perhaps the most remarkable fact: “A single company, TSMC, fabricates almost every leading AI chip, making the global AI hardware supply chain dependent on one foundry in Taiwan.” One foundry! That’s just wild.
But the main takeaway I have from the 2026 AI Index is that the state of AI right now is shot through with inconsistencies. As my colleague Michelle Kim put it today in her piece about the report: “If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock.” (The Stanford report notes that Google DeepMind’s top reasoning model, Gemini Deep Think, scored a gold medal in the International Math Olympiad but is unable to read analog clocks half the time.)
Michelle does a great job covering the report’s highlights. But I wanted to dwell on a question that I can’t shake. Why is it so hard to know exactly what’s going on in AI right now?
The widest gap seems to be between experts and non-experts. “AI experts and the general public view the technology’s trajectory very differently,” the authors of the AI Index write. “Assessing AI’s impact on jobs, 73% of U.S. experts are positive, compared with only 23% of the public, a 50 percentage point gap. Similar divides emerge with respect to the economy and medical care.”
That’s a huge gap. What’s going on? What do experts know that the public doesn’t? (“Experts” here means US-based researchers who took part in AI conferences in 2023 and 2024.)
I suspect part of what’s going on is that experts and non-experts base their views on very different experiences. “The degree to which you are awed by AI is perfectly correlated with how much you use AI to code,” a software developer posted on X the other day. Maybe that’s tongue-in-cheek, but there’s definitely something to it.
The latest models from the top labs are now better than ever at producing code. Because technical tasks like coding have right or wrong results, it is easier to train models to do them, compared with tasks that are more open-ended. What’s more, models that can code are proving to be profitable, so model makers are throwing resources at improving them.
This means that people who use those tools for coding or other technical work are experiencing this technology at its best. Outside of those use cases, you get more of a mixed bag. LLMs still make dumb mistakes. This phenomenon has become known as the “jagged frontier”: Models are very good at doing some things and less good at others.
The influential AI researcher Andrej Karpathy also had some thoughts. “Judging by my [timeline] there is a growing gap in understanding of AI capability,” he wrote in reply to that X post. He noted that power users (read: people who use LLMs for coding, math, or research) not only keep up to date with the latest models but will often pay $200 a month for the best versions. “The recent improvements in these domains as of this year have been nothing short of staggering,” he continued.
Because LLMs are still improving fast, someone who pays to use Claude Code will in effect be using a different technology from someone who tried using the free version of Claude to plan a wedding six months ago. Those two groups are speaking past each other.
Where does that leave us? I think there are two realities. Yes, AI is far better than a lot of people realize. And yes, it is still pretty bad at a lot of stuff that a lot of people care about (and it may stay that way). Anyone making bets about the future on either side should bear that in mind.