2026-04-25 05:40:58
On Friday, Chinese AI firm DeepSeek released a preview of V4, its long-awaited new flagship model. Notably, the model can process much longer prompts than its last generation, thanks to a new design that helps it handle large amounts of text more efficiently. Like DeepSeek’s previous models, V4 is open source, meaning it is available for anyone to download, use, and modify.
V4 marks DeepSeek’s most significant release since R1, the reasoning model it launched in January 2025. R1, which was trained on limited computing resources, stunned the global AI industry with its strong performance and efficiency, turning DeepSeek from a little-known research team into China’s best-known AI company almost overnight. It also helped set off a wave of open-weight model releases from other Chinese AI firms.
DeepSeek has kept a relatively low profile since then—but earlier this month, it effectively teased V4’s release when it added “expert” and “flash” modes to the online version of its model, prompting speculation that the updates were tied to a bigger upcoming release.
While the company has become a powerful symbol of China’s AI ambitions, its big return to cutting-edge frontier models comes after months of turbulence—including major personnel departures, delays to previous model launches, and growing scrutiny from both the US and Chinese governments.
So, will V4 shake the AI field the way R1 did? Almost certainly not, but here are three big reasons why this release matters.
As with R1 before it, DeepSeek claims that V4’s performance rivals the best models available at a fraction of the price. This is great news for developers and for companies using the tech, because it means they can access frontier AI capabilities on their own terms, and without worrying about skyrocketing costs.
The new model comes in two versions, both of which are available on DeepSeek’s website and in its app, with API access also open to developers. V4-Pro is a larger model built for coding and complex agent tasks, and V4-Flash is a smaller version designed to be faster and cheaper to run. Both versions offer reasoning modes, in which the model can carefully parse a user’s prompt and show each step as it works through the problem.
For V4-Pro, DeepSeek charges $1.74 per million input tokens and $3.48 per million output tokens, a fraction of the cost of comparable models from OpenAI and Anthropic. V4-Flash is even cheaper, at about $0.14 per million input tokens and about $0.28 per million output tokens, making it one of the cheapest top-tier models available. This would make it a very appealing model to build applications on.
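As a back-of-the-envelope illustration of those rates, here is a small Python sketch (the helper function and its names are ours for illustration, not DeepSeek’s API) that converts the per-million-token prices above into the cost of a single call:

```python
# Cost sketch using the per-million-token prices quoted above (USD).
# The model names and function are illustrative, not an official client.

PRICES = {
    "v4-pro":   {"input": 1.74, "output": 3.48},
    "v4-flash": {"input": 0.14, "output": 0.28},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at the published per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A large prompt (100k tokens in, 2k tokens out) on each tier:
print(request_cost("v4-pro", 100_000, 2_000))    # ≈ $0.18
print(request_cost("v4-flash", 100_000, 2_000))  # ≈ $0.015
```

At those rates, even a prompt the size of a small codebase costs pennies on the Flash tier, which is why the pricing matters as much as the benchmarks for anyone building applications.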
In terms of performance, V4 is, perhaps unsurprisingly, a huge jump from R1—and it seems to be a strong alternative to just about all the latest big AI models. On the major benchmarks, according to results shared by the company, DeepSeek V4-Pro competes with leading closed-source models, matching the performance of Anthropic’s Claude-Opus-4.6, OpenAI’s GPT-5.4, and Google’s Gemini-3.1. And compared to other open-source models, such as Alibaba’s Qwen-3.5 or Z.ai’s GLM-5.1, DeepSeek V4 exceeds them all on coding, math, and STEM problems, making it one of the strongest open-source models ever released.
DeepSeek also says that V4-Pro now ranks among the strongest open-source models on benchmarks for agentic coding tasks and performs well on other tests that measure a model’s ability to work through multistep problems. Its writing ability and world knowledge also lead the field, according to benchmarking results shared by the company.
In a technical report released alongside the model, DeepSeek shared results from an internal survey of 85 experienced developers: More than 90% included V4-Pro among their top model choices for coding tasks.
DeepSeek says it has specifically optimized V4 for popular agent frameworks such as Claude Code, OpenClaw, and CodeBuddy.
One of the key innovations of V4 is its long context window—the amount of text the model can process at once. Both versions can handle 1 million tokens, which is large enough to fit all three volumes of The Lord of the Rings and The Hobbit combined. The company says this context window size is now the default across all DeepSeek services, and it matches what is offered by cutting-edge versions of models like Gemini and Claude.
But it’s important to know not just that DeepSeek has made this leap, but how it did so. V4 makes significant architectural changes relative to the company’s previous models—especially in the attention mechanism, the feature of AI models that helps them understand each part of a prompt in relation to the rest. As the prompt gets longer, these comparisons become much more costly, making attention one of the main bottlenecks for long-context models.
DeepSeek’s innovation was to make the model more selective about what it pays attention to. Instead of treating all earlier text as equally important, V4 compresses older information and focuses on the parts most likely to matter in the present moment, while still keeping nearby text in full so it does not miss important details.
DeepSeek says this sharply reduces the cost of using long context. In a 1-million-token context, V4-Pro uses only 27% of the computing power required by its previous model, V3.2, while cutting memory use to 10%. The reduction in V4-Flash is even larger, using just 10% of the computing power and 7% of the memory. In practice, this could make it cheaper to build tools that need to work across huge amounts of material, such as an AI coding assistant that can read an entire codebase or a research agent that can analyze a long archive of documents without constantly forgetting what came before.
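DeepSeek has not published every implementation detail, but the general idea described above—attend to recent text in full while pooling distant text into coarse summaries—can be sketched in a few lines of NumPy. This toy, single-query version (the function name, window size, and block size are all illustrative, not DeepSeek’s actual mechanism) shows why the approach cuts the number of comparisons:

```python
import numpy as np

def selective_attention(q, keys, values, window=64, block=16):
    """Toy sketch of long-context attention with compressed history.

    Recent tokens (the last `window`) are attended to in full; older
    keys/values are mean-pooled into blocks of `block` tokens, so the
    number of attention comparisons shrinks from len(keys) to roughly
    window + len(keys) // block. Not DeepSeek's published design.
    """
    recent_k, recent_v = keys[-window:], values[-window:]
    old_k, old_v = keys[:-window], values[:-window]

    if len(old_k):
        # Compress distant context: average each block into a single slot.
        usable = (len(old_k) // block) * block
        comp_k = old_k[:usable].reshape(-1, block, old_k.shape[-1]).mean(axis=1)
        comp_v = old_v[:usable].reshape(-1, block, old_v.shape[-1]).mean(axis=1)
        k = np.concatenate([comp_k, recent_k])
        v = np.concatenate([comp_v, recent_v])
    else:
        k, v = recent_k, recent_v

    scores = k @ q / np.sqrt(q.shape[-1])  # one score per (possibly pooled) slot
    w = np.exp(scores - scores.max())
    w /= w.sum()                           # softmax over the reduced set
    return w @ v                           # attended output

rng = np.random.default_rng(0)
d, n = 32, 4096
out = selective_attention(rng.normal(size=d), rng.normal(size=(n, d)),
                          rng.normal(size=(n, d)))
print(out.shape)  # (32,)
```

With these toy settings, a 4,096-token history collapses to about 316 attention slots instead of 4,096, which is the kind of reduction that makes million-token contexts affordable.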
DeepSeek’s interest in long context windows didn’t start with V4. Over the past year and a half, the company has quietly published a series of papers on how AI models “remember” information, experimenting with compression and mathematical techniques to extend what AI models could realistically handle.
V4 is DeepSeek’s first model optimized for domestic Chinese chips, such as Huawei’s Ascend—a move that has turned the launch into something of a test of whether China’s homegrown AI industry can begin to loosen its dependence on US chip giant Nvidia.
This was largely expected: The Information reported earlier this month that DeepSeek did not give American chipmakers like Nvidia and AMD early access to V4, even though prerelease access is commonly granted so chipmakers can optimize support for a new model ahead of launch. Instead, the company reportedly gave early access only to Chinese chipmakers.
On Friday, Huawei said its Ascend supernode products, based on the Ascend 950 series, would support DeepSeek V4. This means that companies and individuals who want to run their own modified version of DeepSeek V4 will be able to use Huawei chips easily.
Reuters previously reported that Chinese government officials recommended that DeepSeek integrate Huawei chips in its training process. And this pressure fits a broader pattern in China’s industrial policy: Strategic sectors are often pushed, and sometimes effectively required, to align with national self-reliance goals. But there’s a particular urgency when it comes to AI. Since 2022, US export controls have cut Chinese firms off from Nvidia’s most powerful chips, and they later also restricted access to downgraded China-market versions. Beijing’s response has been to accelerate the push for a domestic AI stack, from chips to software frameworks to data centers.
Chinese authorities have reportedly been pushing data centers and public computing projects to use more domestic chips, including through reported bans on foreign-made chips, sourcing quotas, and requirements to pair Nvidia chips with Chinese alternatives from companies such as Huawei and Cambricon.
Still, replacing Nvidia is not as simple as swapping one chip for another. Nvidia’s advantage lies not only in its chips, but in the software ecosystem developers have spent years building around them. Moving to Huawei’s Ascend chips means adapting model code, rebuilding tools, and proving that systems built around those chips are stable enough for serious use.
To be clear, DeepSeek does not appear to have fully moved beyond Nvidia. The company’s technical report reveals that it is using Chinese chips to run the model for inference, or when someone asks the model to complete a task. But Liu Zhiyuan, a computer science professor at Tsinghua University, told MIT Technology Review that DeepSeek appears to have adapted only part of V4’s training process for Chinese chips. The report does not say whether some key long-context features were adapted to domestic chips, so Liu says V4 may still have been trained mainly on Nvidia chips. Multiple sources who spoke on the condition of anonymity, due to political sensitivity around these issues, told MIT Technology Review that Chinese chips still don’t perform as well as Nvidia chips but are better suited for inference than training.
DeepSeek is also tying the future costs of V4 to this hardware shift. The company says V4-Pro prices could fall significantly after Huawei’s Ascend 950 supernodes begin shipping at scale in the second half of this year.
If that works, V4 could be an early sign that China is successfully building a parallel AI infrastructure.
2026-04-24 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
When ChatGPT was released in late 2022, it showed how easily generative AI could create human-like text. This quickly caught the eye of cybercriminals, who began using LLMs to compose malicious emails. Since then, they’ve adopted AI for everything from turbocharged phishing and hyperrealistic deepfakes to automated vulnerability scans.
Many organizations are now struggling to cope with the sheer volume of cyberattacks. AI is making them faster, cheaper, and easier to carry out, a problem set to worsen as more cybercriminals adopt these tools—and their capabilities improve. Read the full story on how AI is reshaping cybercrime.
—Rhiannon Williams
“Supercharged scams” is one of the 10 Things That Matter in AI Right Now, our essential guide to what’s really worth your attention in the field.
Subscribers can watch an exclusive roundtable unveiling the technologies and trends on the list, with analysis from MIT Technology Review’s AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth.
Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays.
A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients? We don’t yet have a good answer—here’s why.
—Jessica Hamzelou
The story is from The Checkup, our weekly newsletter that gives you the latest from the worlds of health and biotech. Sign up to receive it in your inbox every Thursday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 DeepSeek has unveiled its long-awaited new AI model
The Chinese company has just launched preview versions of DeepSeek-V4. (CNN)
+ It says V4 is the most powerful open-source platform. (Bloomberg $)
+ And rivals top closed-source models from OpenAI and DeepMind. (SCMP)
+ The model is adapted for Huawei chip technology. (Reuters $)
2 More countries are curbing children’s social media access
Norway is set to enforce the latest ban. (Reuters $)
+ The Philippines could follow soon. (Bloomberg $)
+ Americans are pushing to get AI out of schools. (The New Yorker)
3 The US has accused China of mass AI theft as tensions rise
A White House memo claims Chinese firms are exploiting American models. (BBC)
+ Beijing calls the accusations “slander.” (Ars Technica)
4 OpenAI set itself apart from Anthropic by widely releasing its new model
It’s releasing GPT-5.5 to all ChatGPT users, despite cybersecurity concerns. (NYT $)
+ OpenAI says the new model is better at coding and more efficient. (The Verge)
5 Meta is cutting 10% of jobs to offset AI spending
Roughly 8,000 layoffs are set to be announced on May 20. (QZ)
+ Anti-AI protests are growing. (MIT Technology Review)
6 Palantir is facing a backlash from employees
Thanks to its work with ICE and the Trump administration. (Wired $)
+ Surveillance tech is reshaping the fight for privacy. (MIT Technology Review)
7 The era of free access to advanced AI is coming to an end
AI labs are under mounting pressure to start turning profits. (The Verge)
8 Elon Musk’s feud with Sam Altman is heading to court
The case has already revealed several unflattering secrets. (WP $)
9 A new movement is encouraging people to ditch their smartphones for a month
“Month Offline” is like a Dry January for smartphones. (The Atlantic)
10 Spotify has revealed its most-streamed music of the last 20 years
Featuring Taylor Swift, Bad Bunny, and The Weeknd. (Gizmodo)
Quote of the day
—Norwegian Prime Minister Jonas Gahr Store announces age restrictions for social media.
One More Thing

As astronomers have discovered more about Europa over the past few decades, Jupiter’s fourth-largest moon has excited planetary scientists interested in the geophysics of alien worlds.
All that water and energy—and hints of elements essential for building organic molecules—point to an extraordinary possibility. In the depths of its ocean, or perhaps crowded in subsurface lakes or below icy surface vents, Jupiter’s big, bright moon could host life.
To find further evidence, NASA is now searching for signs of alien life on Europa. Read the full story on the mission.
—Stephen Ornes
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Here’s a fun look at the secret collaborations of pop history.
+ Meet the mannequins showing how the “ideal” body has evolved.
+ A photographer has cataloged all 12,795 objects in her home into an archive of a life.
+ Slime molds are unexpectedly beautiful when viewed through these high-detail macro shots.
2026-04-24 17:00:00
I don’t need to tell you that AI is everywhere.
Or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with notetaking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and X-rays.
A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients?
We don’t yet have a good answer.
That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto, argue in a paper published in the journal Nature Medicine this week.
Wiens tells me she has spent years investigating how AI might benefit health care. For the first decade of her career she tried to pitch the technology to clinicians. Over the last few years, she says, it’s as though “a switch flipped.” Health-care providers not only appear much more interested in the promise of these technologies, they have also begun rapidly deploying them.
The problem is that many providers aren’t rigorously assessing how well they actually work.
Take “ambient AI” tools, for example. Also known as AI scribes, they “listen” to conversations between doctors and patients, then transcribe and summarize them. Multiple tools are available, and they are already being widely adopted by health-care providers.
A few months ago, a staffer at a major New York medical center who develops AI tools for doctors told me that, anecdotally, medics are “overjoyed” by the technology—it allows them to focus all their attention on their patients during appointments, and it saves them from a lot of time-consuming paperwork. Early studies support these anecdotes and suggest that the tools can reduce clinician burnout.
That’s all well and good. But what about patient health outcomes? “[Researchers] have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” says Wiens. “We just don’t know.”
The same holds true for other AI-based technologies used in health-care settings. Some are used to predict patients’ health trajectories, others to recommend treatments. They are designed to make health care more effective and efficient.
But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest X-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately: What will this mean for those patients?
The answers to those questions might vary between hospitals or departments and could depend on clinical workflows, says Wiens. They might also differ between doctors at various stages of their careers.
Take the AI scribes, as another example. Some research on AI use in education suggests that such tools can impact the way people cognitively process information. Could they affect the way a doctor processes a patient’s information? Will the tools affect the way medical students think about patient data in a way that impacts care? These questions need to be explored, says Wiens. “We like things that save us time, but we have to think about the unintended consequences of this,” she says.
In a study published in January 2025, Paige Nong at the University of Minnesota and her colleagues found that around 65% of US hospitals used AI-assisted predictive tools. Only two-thirds of those hospitals evaluated the tools’ accuracy. Even fewer assessed them for bias.
The number of hospitals using these tools has probably increased since then, says Wiens. Those hospitals, or entities other than the companies developing the tools, need to evaluate how much they help in specific settings. There’s a possibility that they could leave patients worse off, although it’s more likely that AI tools just aren’t as beneficial as health-care providers might assume they are, says Wiens.
“I do believe in the potential of AI to really improve clinical care,” says Wiens, who stresses that she doesn’t want to stop the adoption of AI tools in health care. She just wants more information about how they are affecting people. “I have to believe that in the future it’s not all AI or no AI,” she says. “It’s somewhere in between.”
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
2026-04-23 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
When we talk about “nature,” we usually mean something untouched by humans. But little of that world exists today.
From microplastics in rainforest wildlife to artificial light in the Arctic Ocean, human influence now reaches every corner of Earth. In this context, what even is nature? And should we employ technology to try to make the world more “natural”?
In our new Nature issue, MIT Technology Review grapples with these questions. We investigate birds that can’t sing, wolves that aren’t wolves, and grass that isn’t grass. We look for the meaning of life under Arctic ice, within ourselves, and in the far future on a distant world, courtesy of new fiction by the renowned author Jeff VanderMeer.
Together, these stories examine how technology has altered our planet—and how it might be used to repair it. Subscribe now to read the full print issue.
After ChatGPT launched in late 2022, the OpenAI chatbot became an everyday everything app for hundreds of millions of people. It led to LLMs being heralded as the new future. The entire tech industry was consumed by the inferno, with companies racing to spin up rival products.
But what’s the next big thing after LLMs? More LLMs—but better. Let’s call them LLMs+. Find out how they’re set to become cheaper, more efficient, and more powerful.
—Will Douglas Heaven
LLMs+ is on our list of the 10 Things That Matter in AI Right Now, MIT Technology Review’s guide to what’s really worth your attention in the busy, buzzy world of AI. We’ll be unpacking one item from the list each day here in The Download, so stay tuned.
Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study published in Nature Energy suggests that even if that future arrives, it might not come cheap.
The research team aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate—the percentage by which its cost declines every time capacity doubles. Their findings offer new clues on the technology’s path to deployment. Read the full story.
—Casey Crownhart
This story is from The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump signaled he’s open to reversing the Anthropic ban
What that really means in practice remains to be seen. (Reuters $)
+ Anthropic says there’s no “kill switch” for its AI. (Axios)
+ “Humans in the loop” in AI warfare is an illusion. (MIT Technology Review)
2 SpaceX plans to manufacture its own GPUs
To support the company’s growing AI ambitions. (Reuters $)
+ Musk is shifting SpaceX’s focus from Mars to AI ahead of its IPO. (NYT $)
+ SpaceX and Tesla may be on a collision course. (FT $)
3 Chinese tech giant Tencent has unveiled its first flagship AI model
A former OpenAI researcher is at the helm. (SCMP)
+ Chinese open models are spreading fast. (MIT Technology Review)
4 High earners are racing ahead on AI, deepening workplace divides
The division in adoption risks widening inequality. (FT $)
+ Startups are bragging they spend more on AI than staff. (404 Media)
5 Thousands of Samsung workers are demanding a new share of AI profits
Chip-division employees want 15% of the operating profit. (Bloomberg $)
+ Here’s why opinion on AI is so divided. (MIT Technology Review)
6 AI is helping mediocre Korean hackers steal millions
They’re vibe coding their malware. (Wired $)
+ AI is making online crimes easier. (MIT Technology Review)
7 Kalshi suspended three political candidates for betting on their own races
Including a Democrat and a Republican running for Congress. (CNN)
+ And an independent candidate who said he did it to make a point. (Gizmodo)
+ Lawmakers argue that prediction markets are a loophole for gambling. (NPR)
8 A ping-pong robot is beating elite human players for the first time
The Sony AI system was trained with reinforcement learning. (New Scientist)
+ Just days earlier, a humanoid smashed the human half-marathon record. (AP)
9 Crypto scammers are luring ships into the Strait of Hormuz
By falsely promising safe passage. (Ars Technica)
10 ‘Age tech’ could help us grow old comfortably at home
Apps, wearables, and remote monitoring could fill caregiving gaps. (NYT $)
Quote of the day
—Ross Gerber, the chief executive of Gerber Kawasaki, an investment firm that owns SpaceX shares, tells the New York Times that he’s unimpressed by Musk’s changing goals for the aerospace company.
One More Thing

After hundreds went missing in Maui’s deadly fires, victims were identified with rapid DNA analysis—an increasingly vital tool for putting names to the dead in mass-casualty events.
The technology helped identify victims within just a few hours and bring families some closure more quickly than ever before. But it also previews a dark future marked by the rising frequency of catastrophic events.
Find out how this forensic breakthrough is preparing us for a more volatile world.
—Erika Hayasaki
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ This fascinating dive into botanical history reveals the origins of the first true plants.
+ Here’s how to use Google’s reference desk to find what ordinary search engines miss.
+ Watch duct tape get deconstructed to reveal the physics behind its legendary stickiness.
+ When Radiohead covers Joy Division, the result is a beautiful intersection of two legendary musical eras.
2026-04-23 18:00:00
Fusion power could provide a steady, zero-emissions source of electricity in the future—if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap.
Technologies tend to get less expensive over time. Lithium-ion batteries are now about 90% cheaper than they were in 2013. But historically, different technologies tend to go through this curve at different rates. And the cost of fusion might not sink as quickly as the prices of batteries or solar.
It’s tricky to make any predictions about the cost of a technology that doesn’t exist yet. But when there’s billions of dollars of public and private funding on the line, it’s worth considering what assumptions we’re making about our future energy mix and its cost.
One crucial measure is a metric called experience rate—the percentage by which an energy technology’s cost declines every time capacity doubles. A higher figure means a quicker price drop and better economic gains with scaling.
Historically, the experience rate is 12% for onshore wind power, 20% for lithium-ion batteries, and 23% for solar modules. Other energy technologies haven’t gotten cheap quite as quickly—fission is at just 2%.
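Under the standard definition of an experience rate—each doubling of cumulative capacity cuts unit cost by a constant fraction—the metric translates into a simple cost projection. This sketch uses hypothetical starting numbers to show how slowly a low rate compounds compared with solar’s 23%:

```python
import math

def projected_cost(cost0: float, experience_rate: float,
                   capacity0: float, capacity: float) -> float:
    """Unit cost after scaling from capacity0 to capacity, where each
    doubling of cumulative capacity cuts cost by (1 - experience_rate)."""
    doublings = math.log2(capacity / capacity0)
    return cost0 * (1 - experience_rate) ** doublings

# Ten doublings (1,024x the capacity), starting from an arbitrary cost of 100:
print(projected_cost(100, 0.02, 1, 1024))  # ≈ 81.7 at a 2% rate: barely cheaper
print(projected_cost(100, 0.08, 1, 1024))  # ≈ 43.4 at an 8% rate
print(projected_cost(100, 0.23, 1, 1024))  # ≈ 7.3 at solar's 23% rate
```

The gap compounds: after the same thousand-fold scale-up, a 2% technology has shed less than a fifth of its cost while a solar-like one has shed more than 90%.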
In the new study, published in Nature Energy, researchers aimed to improve predictions of fusion’s future price by estimating the technology’s experience rate. The team looked at three key characteristics that can correlate with experience rate: unit size, design complexity, and the need for customization. The larger and more complex a technology is, and/or the more it needs to be customized for different use cases, the lower the experience rate.
The researchers interviewed fusion experts, including public-sector researchers and those working at companies in the private sector. They had the experts evaluate fusion power plants on those characteristics and used that info to predict the experience rate. (One note here: The study focused only on magnetic confinement and laser inertial confinement, two of the leading fusion approaches, which together receive the vast majority of funding today. Other approaches could come with different cost benefits.)
Fusion plants will likely be relatively large, similar to other types of facilities (like coal and fission power plants) that rely on generating heat. They will probably need less customization than fission plants—largely because regulations and safety considerations should be simpler—but more than technologies like solar panels. And as for complexity, “there was almost unanimous agreement that fusion is incredibly complex,” says Lingxi Tang, a PhD candidate in the energy and technology policy group at ETH Zurich in Switzerland and one of the authors of the study. (Some experts said it was literally off the scale the researchers gave them.)
The final figure the researchers suggest for fusion’s experience rate is between 2% and 8%, meaning it will see a faster price reduction than nuclear power but not as dramatic an improvement as many common energy technologies being deployed today.
That means that it would take a lot of deployment—and likely quite a long time—for the price of building a fusion reactor to drop significantly, so electricity produced by fusion plants could be expensive for a while. And it’s a much slower rate than the 8% to 20% that many modeling studies assume today.
“On the whole, I think questions should be raised about current investment levels in fusion,” Tang says. (The US allocated over $1 billion to fusion in the 2024 fiscal year, and private-sector funding totaled $2.2 billion between July 2024 and July 2025.) “If you’re talking about decarbonization of the energy system, is this really the best use of public money?”
But some experts say that looking to the past to understand the future of energy prices might be misleading. “It’s a good exercise, but we have to be humble about how much we don’t know,” says Egemen Kolemen, a professor at the Princeton Plasma Physics Laboratory.
In 2000, many analysts predicted that solar power would remain expensive—but then production exploded and prices came crashing down, largely because China went all in, he says. “People weren’t exactly wrong then,” he adds. “They were just extrapolating what they saw into the future.”
How fast prices drop depends on regulations, geopolitical dynamics, and labor cost, he says: “We haven’t built the thing yet, so we don’t know.”
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
2026-04-22 20:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What actually matters in AI right now? It’s getting harder to tell amid the constant launches, hype, and warnings. To cut through the noise, MIT Technology Review’s reporters and editors have distilled years of analysis into a new essential guide: the 10 Things That Matter in AI Right Now.
The list builds on our annual 10 Breakthrough Technologies but takes a wider view of the ideas, topics, and research shaping AI, spotlighting the trends and breakthroughs that matter most.
We’ll be unpacking one item from the list each day here in The Download, explaining what it means and why it matters. Read the full rundown now—and stay tuned for the days ahead.
As the conflict in Iran has escalated, a crucial resource is under fire: the desalinization technology that supplies water in the region.
President Donald Trump recently threatened to destroy “possibly all desalinization plants” in Iran if the Strait of Hormuz is not reopened. The impact on farming, industry, and—crucially—drinking in the Middle East could be severe. Find out why.
—Casey Crownhart
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we publish each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 An unauthorized group has reportedly accessed Anthropic’s Mythos
Users in a private online forum may have gained access. (Bloomberg $)
+ Anthropic said the model was too dangerous for a full release. (Axios)
+ Mozilla used it to find 271 security vulnerabilities in Firefox. (Wired $)
2 Meta will track workers’ clicks and keystrokes for AI training
Tracking software is being installed on workers’ computers. (Reuters $)
+ Employees are up in arms about the program. (Business Insider)
+ LLMs could supercharge mass surveillance in the US. (MIT Technology Review)
3 ChatGPT allegedly advised the Florida State shooter
About when and where to strike, and which ammunition to use. (Washington Post $)
+ Florida’s attorney general is probing ChatGPT’s role in the shooting. (Ars Technica)
+ Does AI cause delusions or just amplify them? (MIT Technology Review)
4 SpaceX has secured the option to buy AI startup Cursor for $60 billion
Or pay $10 billion for the work they’re doing together. (The Verge)
+ SpaceX made the deal as it prepares to go public. (NYT $)
+ Musk’s endgame for the company may be a land grab in space. (The Atlantic $)
5 The Pentagon wants $54 billion for drones
That would rank among the top 10 military budgets for entire nations. (Ars Technica)
+ Shoplifters could soon be chased down by drones. (MIT Technology Review)
6 Apple’s new chief hardware officer signals a sprint to build in-house chips
Apple silicon lead Johny Srouji has been promoted to the role. (CNBC)
7 China’s government is tightening its grip on AI firms that try to leave
It’s doing all it can to stop firms like Manus sending talent and research overseas. (Washington Post $)
8 The FBI is probing the deaths of scientists tied to sensitive research
Including a nuclear physicist and MIT professor shot outside his home. (CNN)
9 The US is accelerating research into psychedelic medical treatment
Including the mysterious ibogaine. (Nature)
+ But psychedelics are (still) falling short in clinical trials. (MIT Technology Review)
10 The first retail boutique run by an AI agent has opened—and it’s chaos
The San Francisco shop is reassuringly mismanaged. (NYT $)
Quote of the day
—Donald Trump pays a classy tribute to Tim Cook on Truth Social.
One More Thing

A US agency pursuing moonshot health breakthroughs has hired a researcher advocating an extremely radical plan for defeating death. His idea? Replace your body parts. All of them. Even your brain.
Jean Hébert, a program manager at the US Advanced Research Projects Agency for Health (ARPA-H), believes we can beat aging by adding youthful tissue to people’s brains. Read the full story on his futuristic plan to extend human life.
—Antonio Regalado
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ A Lego set was sent to the edge of space—and survived.
+ Go behind the scenes with Werner Herzog as he guides a new generation of filmmakers.
+ This video about enshittification perfectly captures the frustration of the degrading internet.
+ NASA’s latest deep-space capture offers a rare view of planetary systems in their absolute infancy.