2026-02-12 04:08:35
AI agents are a risky business. Even when stuck inside the chatbox window, LLMs will make mistakes and behave badly. Once they have tools that they can use to interact with the outside world, such as web browsers and email addresses, the consequences of those mistakes become far more serious.
That might explain why the first breakthrough LLM personal assistant came not from one of the major AI labs, which have to worry about reputation and liability, but from an independent software engineer, Peter Steinberger. In November of 2025, Steinberger uploaded his tool, now called OpenClaw, to GitHub, and in late January the project went viral.
OpenClaw harnesses existing LLMs to let users create their own bespoke assistants. For some users, this means handing over reams of personal data, from years of emails to the contents of their hard drive. That has security experts thoroughly freaked out. The risks posed by OpenClaw are so extensive that it would probably take someone the better part of a week to read all of the security blog posts on it that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw’s security vulnerabilities.
In response to these concerns, Steinberger posted on X that nontechnical people should not use the software. (He did not respond to a request for comment for this article.) But there’s a clear appetite for what OpenClaw is offering, and it’s not limited to people who can run their own software security audits. Any AI companies that hope to get in on the personal assistant business will need to figure out how to build a system that will keep users’ data safe and secure. To do so, they’ll need to borrow approaches from the cutting edge of agent security research.
OpenClaw is, in essence, a mecha suit for LLMs. Users can choose any LLM they like to act as the pilot; that LLM then gains access to improved memory capabilities and the ability to set itself tasks that it repeats on a regular cadence. Unlike the agentic offerings from the major AI companies, OpenClaw agents are meant to be on 24-7, and users can communicate with them using WhatsApp or other messaging apps. That means they can act like a superpowered personal assistant who wakes you each morning with a personalized to-do list, plans vacations while you work, and spins up new apps in its spare time.
But all that power has consequences. If you want your AI personal assistant to manage your inbox, then you need to give it access to your email—and all the sensitive information contained there. If you want it to make purchases on your behalf, you need to give it your credit card info. And if you want it to do tasks on your computer, such as writing code, it needs some access to your local files.
There are a few ways this can go wrong. The first is that the AI assistant might make a mistake, as when a user’s Google Antigravity coding agent reportedly wiped his entire hard drive. The second is that someone might gain access to the agent using conventional hacking tools and use it to either extract sensitive data or run malicious code. In the weeks since OpenClaw went viral, security researchers have demonstrated numerous such vulnerabilities that put security-naïve users at risk.
Both of these dangers can be managed: Some users are choosing to run their OpenClaw agents on separate computers or in the cloud, which protects data on their hard drives from being erased, and other vulnerabilities could be fixed using tried-and-true security approaches.
But the experts I spoke to for this article were focused on a much more insidious security risk known as prompt injection. Prompt injection is effectively LLM hijacking: Simply by posting malicious text or images on a website that an LLM might peruse, or sending them to an inbox that an LLM reads, attackers can bend it to their will.
And if that LLM has access to any of its user’s private information, the consequences could be dire. “Using something like OpenClaw is like giving your wallet to a stranger in the street,” says Nicolas Papernot, a professor of electrical and computer engineering at the University of Toronto. Whether or not the major AI companies can feel comfortable offering personal assistants may come down to the quality of the defenses that they can muster against such attacks.
It’s important to note here that prompt injection has not yet caused any catastrophes, or at least none that have been publicly reported. But now that there are likely hundreds of thousands of OpenClaw agents buzzing around the internet, prompt injection might start to look like a much more appealing strategy for cybercriminals. “Tools like this are incentivizing malicious actors to attack a much broader population,” Papernot says.
The term “prompt injection” was coined by the popular LLM blogger Simon Willison in 2022, a couple of months before ChatGPT was released. Even back then, it was possible to discern that LLMs would introduce a completely new type of security vulnerability once they came into widespread use. LLMs can’t tell apart the instructions that they receive from users and the data that they use to carry out those instructions, such as emails and web search results—to an LLM, they’re all just text. So if an attacker embeds a few sentences in an email and the LLM mistakes them for an instruction from its user, the attacker can get the LLM to do anything it wants.
Prompt injection is a tough problem, and it doesn’t seem to be going away anytime soon. “We don’t really have a silver-bullet defense right now,” says Dawn Song, a professor of computer science at UC Berkeley. But there’s a robust academic community working on the problem, and they’ve come up with strategies that could eventually make AI personal assistants safe.
Technically speaking, it is possible to use OpenClaw today without risking prompt injection: Just don’t connect it to the internet. But restricting OpenClaw from reading your emails, managing your calendar, and doing online research defeats much of the purpose of using an AI assistant. The trick of protecting against prompt injection is to prevent the LLM from responding to hijacking attempts while still giving it room to do its job.
One strategy is to train the LLM to ignore prompt injections. A major part of the LLM development process, called post-training, involves taking a model that knows how to produce realistic text and turning it into a useful assistant by “rewarding” it for answering questions appropriately and “punishing” it when it fails to do so. These rewards and punishments are metaphorical, but the LLM learns from them as an animal would. Using this process, it’s possible to train an LLM not to respond to specific examples of prompt injection.
But there’s a balance: Train an LLM to reject injected commands too enthusiastically, and it might also start to reject legitimate requests from the user. And because there’s a fundamental element of randomness in LLM behavior, even an LLM that has been very effectively trained to resist prompt injection will likely still slip up every once in a while.
Another approach involves halting the prompt injection attack before it ever reaches the LLM. Typically, this involves using a specialized detector LLM to determine whether or not the data being sent to the original LLM contains any prompt injections. In a recent study, however, even the best-performing detector completely failed to pick up on certain categories of prompt injection attack.
The third strategy is more complicated. Rather than controlling the inputs to an LLM by detecting whether or not they contain a prompt injection, the goal is to formulate a policy that guides the LLM’s outputs—i.e., its behaviors—and prevents it from doing anything harmful. Some defenses in this vein are quite simple: If an LLM is allowed to email only a few pre-approved addresses, for example, then it definitely won’t send its user’s credit card information to an attacker. But such a policy would prevent the LLM from completing many useful tasks, such as researching and reaching out to potential professional contacts on behalf of its user.
“The challenge is how to accurately define those policies,” says Neil Gong, a professor of electrical and computer engineering at Duke University. “It’s a trade-off between utility and security.”
On a larger scale, the entire agentic world is wrestling with that trade-off: At what point will agents be secure enough to be useful? Experts disagree. Song, whose startup, Virtue AI, makes an agent security platform, says she thinks it’s possible to safely deploy an AI personal assistant now. But Gong says, “We’re not there yet.”
Even if AI agents can’t yet be entirely protected against prompt injection, there are certainly ways to mitigate the risks. And it’s possible that some of those techniques could be implemented in OpenClaw. Last week, at the inaugural ClawCon event in San Francisco, Steinberger announced that he’d brought a security person on board to work on the tool.
As of now, OpenClaw remains vulnerable, though that hasn’t dissuaded its multitude of enthusiastic users. George Pickett, a volunteer maintainer of the OpenGlaw GitHub repository and a fan of the tool, says he’s taken some security measures to keep himself safe while using it: He runs it in the cloud, so that he doesn’t have to worry about accidentally deleting his hard drive, and he’s put mechanisms in place to ensure that no one else can connect to his assistant.
But he hasn’t taken any specific actions to prevent prompt injection. He’s aware of the risk but says he hasn’t yet seen any reports of it happening with OpenClaw. “Maybe my perspective is a stupid way to look at it, but it’s unlikely that I’ll be the first one to be hacked,” he says.
2026-02-11 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions
In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT.
QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. And while it’s unclear how many users have joined the boycott, there’s no denying QuitGPT is getting attention. Read the full story.
—Michelle Kim
EVs could be cheaper to own than gas cars in Africa by 2040
Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.
There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Read the full story.
—Casey Crownhart
MIT Technology Review Narrated: How next-generation nuclear reactors break out of the 20th-century blueprint
The popularity of commercial nuclear reactors has surged in recent years as worries about climate change and energy independence drowned out concerns about meltdowns and radioactive waste.
The problem is, building nuclear power plants is expensive and slow.
A new generation of nuclear power technology could reinvent what a reactor looks like—and how it works. Advocates hope that new tech can refresh the industry and help replace fossil fuels without emitting greenhouse gases.
This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Social media giants have agreed to be rated on teen safety
Meta, TikTok and Snap will undergo independent assessments over how effectively they protect the mental health of teen users. (WP $)
+ Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to be graded. (LA Times $)
2 The FDA has refused to review Moderna’s mRNA flu vaccine
It’s the latest in a long line of anti-vaccination moves the agency is making. (Ars Technica)
+ Experts worry it’ll have a knock-on effect on investment in future vaccines. (The Guardian)
+ Moderna says it was blindsided by the decision. (CNN)
3 EV battery factories are pivoting to manufacturing energy cells
Energy storage systems are in, electric vehicles are out. (FT $)
4 Why OpenAI killed off ChatGPT’s 4o model
The qualities that make it attractive for some users make it incredibly risky for others. (WSJ $)
+ Bereft users have set up their own Reddit community to mourn. (Futurism)
+ Why GPT-4o’s sudden shutdown left people grieving. (MIT Technology Review)
5 Drug cartels have started laundering money through crypto
And law enforcement is struggling to stop them. (Bloomberg $)
6 Morocco wants to build an AI for Africa
The country’s Minister of Digital Transition has a plan. (Rest of World)
+ What Africa needs to do to become a major AI player. (MIT Technology Review)
7 Christian influencers are bowing out of the news cycle
They’re choosing to ignore world events to protect their own inner peace. (The Atlantic $)
8 An RFK Jr-approved diet is pretty joyless
Don’t expect any dessert, for one. (Insider $)
+ The US government’s health site uses Grok to dispense nutrition advice. (Wired $)
9 Don’t toss out your used vape
Hackers can give it a second life as a musical synthesizer. (Wired $)
10 An ice skating duo danced to AI music at the Winter Olympics 
Centuries of bangers to choose from, and this is what they opted for. (TechCrunch)
+ AI is coming for music, too. (MIT Technology Review)
Quote of the day
“These companies are terrified that no one’s going to notice them.”
—Tom Goodwin, co-founder of business consulting firm All We Have Is Now, tells the Guardian why AI startups are going to increasingly desperate measures to grab would-be customers’ attention.
One more thing

How AI is changing gymnastics judging
The 2023 World Championships last October marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.
At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges’ role in crafting a narrative.
For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.
—Jessica Taylor Price
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Today marks the birthday of the late, great Leslie Nielsen—one of the best to ever do it.
+ Congratulations are in order for Hannah Cox, who has just completed 100 marathons in 100 days across India in her dad’s memory.
+ Feeling down? A trip to Finland could be just what you need.
+ We love Padre Guilherme, the Catholic priest dropping incredible Gregorian chant beats.
2026-02-11 18:00:00
Electric vehicles could be economically competitive in Africa sooner than expected. Just 1% of new cars sold across the continent in 2025 were electric, but a new analysis finds that with solar off-grid charging, EVs could be cheaper to own than gas vehicles by 2040.
There are major barriers to higher EV uptake in many countries in Africa, including a sometimes unreliable grid, limited charging infrastructure, and a lack of access to affordable financing. As a result some previous analyses have suggested that fossil-fuel vehicles would dominate in Africa through at least 2050.
But as batteries and the vehicles they power continue to get cheaper, the economic case for EVs is building. Electric two-wheelers, cars, larger automobiles, and even minibuses could compete in most African countries in just 15 years, according to the new study, published in Nature Energy.
“EVs have serious economic potential in most African countries in the not-so-distant future,” says Bessie Noll, a senior researcher at ETH Zürich and one of the authors of the study.
The study considered the total cost of ownership over the lifetime of a vehicle. That includes the sticker price, financing costs, and the cost of fueling (or charging). The researchers didn’t consider policy-related costs like taxes, import fees, and government subsidies, choosing to focus instead on only the underlying economics.
EVs are getting cheaper every year as battery and vehicle manufacturing improve and production scales, and the researchers found that in most cases and in most places across Africa, EVs are expected to be cheaper than equivalent gas-powered vehicles by 2040. EVs should also be less expensive than vehicles that use synthetic fuels.
For two-wheelers like electric scooters, EVs could be the cheaper option even sooner: with smaller, cheaper batteries, these vehicles will be economically competitive by the end of the decade. On the other hand, one of the most difficult segments for EVs to compete in is small cars, says Christian Moretti, a researcher at ETH Zürich and the Paul Scherrer Institute in Switzerland.
Because some countries still have limited or unreliable grid access, charging is a major barrier to EV uptake, Noll says. So for EVs, the authors analyzed the cost of buying not only the vehicle but also a solar off-grid charging system. This includes solar panels, batteries, and the inverter required to transform the electricity into a version that can charge an EV. (The additional batteries help the system store energy for charging at times when the sun isn’t shining.)
Mini grids and other standalone systems that include solar panels and energy storage are increasingly common across Africa. It’s possible that this might be a primary way that EV owners in Africa will charge their vehicles in the future, Noll says.
One of the bigger barriers to EVs in Africa is financing costs, she adds. In some cases, the cost of financing can be more than the up-front cost of the vehicle, significantly driving up the cost of ownership.
Today, EVs are more expensive than equivalent gas-powered vehicles in much of the world. But in places where it’s relatively cheap to borrow money, that difference can be spread out across the course of a vehicle’s whole lifetime for little cost. Then, since it’s often cheaper to charge an EV than fuel a gas-powered car, the EV is less expensive over time.
In some African countries, however, political instability and uncertain economic conditions make borrowing money more expensive. To some extent, the high financing costs affect the purchase of any vehicle, regardless of how it’s powered. But EVs are more expensive up front than equivalent gas-powered cars, and that higher up-front cost adds up to more interest paid over time. In some cases, financing an EV can also be more expensive than financing a gas vehicle—the technology is newer, and banks may see the purchase as more of a risk and charge a higher interest rate, says Kelly Carlin, a manager in the program on carbon-free transportation at the Rocky Mountain Institute, an energy think tank.
The picture varies widely depending on the country, too. In South Africa, Mauritius, and Botswana, financing conditions are already close to levels required to allow EVs to reach cost parity, according to the study. In higher-risk countries (the study gives examples including Sudan, which is currently in a civil war, and Ghana, which is recovering from a major economic crisis), financing costs would need to be cut drastically for that to be the case.
Making EVs an affordable option will be a key first step to putting more on the roads in Africa and around the world. “People will start to pick up these technologies when they’re competitive,” says Nelson Nsitem, lead Africa energy transition analyst at BloombergNEF, an energy consultancy.
Solar-based charging systems, like the ones mentioned in the study, could help make electricity less of a constraint, bringing more EVs to the roads, Nsitem says. But there’s still a need for more charging infrastructure, a major challenge in many countries where the grid needs major upgrades for capacity and reliability, he adds.
Globally, more EVs are hitting the roads every year. “The global trend is unmistakable,” Carlin says. There are questions about how quickly it’s happening in different places, he says, “but the momentum is there.”
2026-02-11 01:00:24
In September, Alfred Stephen, a freelance software developer in Singapore, purchased a ChatGPT Plus subscription, which costs $20 a month and offers more access to advanced models, to speed up his work. But he grew frustrated with the chatbot’s coding abilities and its gushing, meandering replies. Then he came across a post on Reddit about a campaign called QuitGPT.
The campaign urged ChatGPT users to cancel their subscriptions, flagging a substantial contribution by OpenAI president Greg Brockman to President Donald Trump’s super PAC MAGA Inc. It also pointed out that the US Immigration and Customs Enforcement, or ICE, uses a résumé screening tool powered by ChatGPT-4. The federal agency has become a political flashpoint since its agents fatally shot two people in Minneapolis in January.
For Stephen, who had already been tinkering with other chatbots, learning about Brockman’s donation was the final straw. “That’s really the straw that broke the camel’s back,” he says. When he canceled his ChatGPT subscription, a survey popped up asking what OpenAI could have done to keep his subscription. “Don’t support the fascist regime,” he wrote.
QuitGPT is one of the latest salvos in a growing movement by activists and disaffected users to cancel their subscriptions. In just the past few weeks, users have flooded Reddit with stories about quitting the chatbot. Many lamented the performance of GPT-5.2, the latest model. Others shared memes parodying the chatbot’s sycophancy. Some planned a “Mass Cancellation Party” in San Francisco, a sardonic nod to the GPT-4o funeral that an OpenAI employee had floated, poking fun at users who are mourning the model’s impending retirement. Still, others are protesting against what they see as a deepening entanglement between OpenAI and the Trump administration.
OpenAI did not respond to a request for comment.
As of December 2025, ChatGPT had nearly 900 million weekly active users, according to The Information. While it’s unclear how many users have joined the boycott, QuitGPT is getting attention. A recent Instagram post from the campaign has more than 36 million views and 1.3 million likes. And the organizers say that more than 17,000 people have signed up on the campaign’s website, which asks people whether they canceled their subscriptions, will commit to stop using ChatGPT, or will share the campaign on social media.
“There are lots of examples of failed campaigns like this, but we have seen a lot of effectiveness,” says Dana Fisher, a sociologist at American University. A wave of canceled subscriptions rarely sways a company’s behavior, unless it reaches a critical mass, she says. “The place where there’s a pressure point that might work is where the consumer behavior is if enough people actually use their … money to express their political opinions.”
MIT Technology Review reached out to three employees at OpenAI, none of whom said they were familiar with the campaign.
Dozens of left-leaning teens and twentysomethings scattered across the US came together to organize QuitGPT in late January. They range from pro-democracy activists and climate organizers to techies and self-proclaimed cyber libertarians, many of them seasoned grassroots campaigners. They were inspired by a viral video posted by Scott Galloway, a marketing professor at New York University and host of The Prof G Pod. He argued that the best way to stop ICE was to persuade people to cancel their ChatGPT subscriptions. Denting OpenAI’s subscriber base could ripple through the stock market and threaten an economic downturn that would nudge Trump, he said.
“We make a big enough stink for OpenAI that all of the companies in the whole AI industry have to think about whether they’re going to get away enabling Trump and ICE and authoritarianism,” says an organizer of QuitGPT who requested anonymity because he feared retaliation by OpenAI, citing the company’s recent subpoenas against advocates at nonprofits. OpenAI made for an obvious first target of the movement, he says, but “this is about so much more than just OpenAI.”
Simon Rosenblum-Larson, a labor organizer in Madison, Wisconsin, who organizes movements to regulate the development of data centers, joined the campaign after hearing about it through Signal chats among community activists. “The goal here is to pull away the support pillars of the Trump administration. They’re reliant on many of these tech billionaires for support and for resources,” he says.
QuitGPT’s website points to new campaign finance reports showing that Greg Brockman and his wife each donated $12.5 million to MAGA Inc., making up nearly a quarter of the roughly $102 million it raised over the second half of 2025. The information that ICE uses a résumé screening tool powered by ChatGPT-4 came from an AI inventory published by the Department of Homeland Security in January.
QuitGPT is in the mold of Galloway’s own recently launched campaign, Resist and Unsubscribe. The movement urges consumers to cancel their subscriptions to Big Tech platforms, including ChatGPT, for the month of February, as a protest to companies “driving the markets and enabling our president.”
“A lot of people are feeling real anxiety,” Galloway told MIT Technology Review. “You take enabling a president, proximity to the president, and an unease around AI,” he says, “and now people are starting to take action with their wallets.” Galloway says his campaign’s website can draw more than 200,000 unique visits in a day and that he receives dozens of DMs every hour showing screenshots of canceled subscriptions.
The consumer boycotts follow a growing wave of pressure from inside the companies themselves. In recent weeks, tech workers have been urging their employers to use their political clout to demand that ICE leave US cities, cancel company contracts with the agency, and speak out against the agency’s actions. CEOs have started responding. OpenAI’s Sam Altman wrote in an internal Slack message to employees that ICE is “going too far.” Apple CEO Tim Cook called for a “deescalation” in an internal memo posted on the company’s website for employees. It was a departure from how Big Tech CEOs have courted President Trump with dinners and donations since his inauguration.
Although spurred by a fatal immigration crackdown, these developments signal that a sprawling anti-AI movement is gaining momentum. The campaigns are tapping into simmering anxieties about AI, says Rosenblum-Larson, including the energy costs of data centers, the plague of deepfake porn, the teen mental-health crisis, the job apocalypse, and slop. “It’s a really strange set of coalitions built around the AI movement,” he says.
“Those are the right conditions for a movement to spring up,” says David Karpf, a professor of media and public affairs at George Washington University. Brockman’s donation to Trump’s super PAC caught many users off guard, he says. “In the longer arc, we are going to see users respond and react to Big Tech, deciding that they’re not okay with this.”
2026-02-10 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
A first look at Making AI Work, MIT Technology Review’s new AI newsletter
Are you interested in learning more about the ways in which AI is actually being used? We’ve launched a new weekly newsletter series exploring just that: digging into how generative AI is being used and deployed across sectors and what professionals need to know to apply it in their everyday work.
Each edition of Making AI Work begins with a case study, examining a specific use case of AI in a given industry. Then we’ll take a deeper look at the AI tool being used, with more context about how other companies or sectors are employing that same tool or system. Finally, we’ll end with action-oriented tips to help you apply the tool.
The first edition takes a look at how AI is changing health care, digging into the future of medical note-taking by learning about the Microsoft Copilot tool used by doctors at Vanderbilt University Medical Center. Sign up here to receive the seven editions straight to your inbox, and if you’d like to read more about AI’s impact on health care in the meantime, check out some of our past reporting:
+ This medical startup uses LLMs to run appointments and make diagnoses.
+ How AI is changing how we quantify pain by helping health-care providers better assess their patients’ discomfort. Read the full story.
+ End-of-life decisions are difficult and distressing. Could AI help?
+ Artificial intelligence is infiltrating health care. But we shouldn’t let it make all the decisions unchecked. Read the full story.
Why the Moltbook frenzy was like Pokémon
Lots of influential people in tech recently described Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them—sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?
The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon. Read the full story to find out why.
—James O’Donnell
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI has begun testing ads in ChatGPT
But the ads won’t influence the responses it provides, apparently. (The Verge)
+ Users who pay at least $20 a month for the chatbot will be exempt. (Gizmodo)
+ So will users believed to be under 18. (Axios)
2 The White House has a plan to stop data centers from raising electricity prices
It’s going to ask AI companies to voluntarily commit to keeping costs down. (Politico)
+ The US federal government is adopting AI left, right and center. (WP $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)
3 Elon Musk wants to colonize the moon
For now at least, his grand ambitions to live on Mars are taking a backseat. (CNN)
+ His full rationale for this U-turn isn’t exactly clear. (Ars Technica)
+ Musk also wants to become the first to launch a working data center in space. (FT $)
+ The case against humans in space. (MIT Technology Review)
4 Cheap AI tools are helping criminals to ramp up their scams
They’re using LLMs to massively scale up their attacks. (Bloomberg $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)
5 Iceland could be heading towards becoming one giant glacier
If human-driven warming disrupts a vital ocean current, that is. (WP $)
+ Inside a new quest to save the “doomsday glacier.” (MIT Technology Review)
6 Amazon is planning to launch an AI content marketplace
It’s reported to have spoken to media publishers to gauge their interest. (The Information $)
7 Doctors can’t agree on how to diagnose Alzheimer’s
They worry that some patients are being misdiagnosed. (WSJ $)
8 The first wave of AI enthusiasts are burning out
A new study has found that AI tools are linked to employees working more, not less. (TechCrunch)
9 We’re finally moving towards better ways to measure body fat
BMI is a flawed metric. Physicians are finally using better measures. (New Scientist $)
+ These are the best ways to measure your body fat. (MIT Technology Review)
10 It’s getting harder to become a social media megastar
Maybe that’s a good thing? (Insider $)
+ The likes of Mr Beast are still raking in serious cash, though. (The Information $)
Quote of the day
“This case is as easy as ABC—addicting, brains, children.”
—Lawyer Mark Lanier lays out his case during the opening statements of a new tech addiction trial in which a woman has accused Meta of deliberately designing their platforms to be addictive, the New York Times reports.
One more thing

China wants to restore the sea with high-tech marine ranches
A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.
Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.
The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.
—Matthew Ponsford
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Wow, Joel and Ethan Coen’s dark comedic classic Fargo is 30 years old.
+ A new exhibition in New York is rightfully paying tribute to one of the greatest technological inventions: the Walkman ($)
+ This gigantic sleeping dachshund sculpture in South Korea is completely bonkers.
+ A beautiful heart-shaped pendant linked to King Henry VIII has been secured by the British Museum.
2026-02-10 01:02:56
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Lots of influential people in tech last week were describing Moltbook, an online hangout populated by AI agents interacting with one another, as a glimpse into the future. It appeared to show AI systems doing useful things for the humans that created them (one person used the platform to help him negotiate a deal on a new car). Sure, it was flooded with crypto scams, and many of the posts were actually written by people, but something about it pointed to a future of helpful AI, right?
The whole experiment reminded our senior editor for AI, Will Douglas Heaven, of something far less interesting: Pokémon.
Back in 2014, someone set up a game of Pokémon in which the main character could be controlled by anyone on the internet via the streaming platform Twitch. Playing was as clunky as it sounds, but it was incredibly popular: at one point, a million people were playing the game at the same time.
“It was yet another weird online social experiment that got picked up by the mainstream media: What did this mean for the future?” Will says. “Not a lot, it turned out.”
The frenzy about Moltbook struck a similar tone to Will, and it turned out that one of the sources he spoke to had been thinking about Pokémon too. Jason Schloetzer, at the Georgetown Psaros Center for Financial Markets and Policy, saw the whole thing as a sort of Pokémon battle for AI enthusiasts, in which they created AI agents and deployed them to interact with other agents. In this light, the news that many AI agents were actually being instructed by people to say certain things that made them sound sentient or intelligent makes a whole lot more sense.
“It’s basically a spectator sport,” he told Will, “but for language models.”
Will wrote an excellent piece about why Moltbook was not the glimpse into the future that it was said to be. Even if you are excited about a future of agentic AI, he points out, there are some key pieces that Moltbook made clear are still missing. It was a forum of chaos, but a genuinely helpful hive mind would require more coordination, shared objectives, and shared memory.
“More than anything else, I think Moltbook was the internet having fun,” Will says. “The biggest question that now leaves me with is: How far will people push AI just for the laughs?”