2025-11-25 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What’s next for AlphaFold: A conversation with a Google DeepMind Nobel laureate
In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from game-playing AI to a secret project to predict the structures of proteins. He applied for a job.
Just three years later, Jumper and CEO Demis Hassabis had led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching lab-level accuracy, and doing it many times faster—returning results in hours instead of months.
Last year, Jumper and Hassabis shared a Nobel Prize in chemistry. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out. Read the full story.
—Will Douglas Heaven
The State of AI: Chatbot companions and the future of our privacy
—Eileen Guo & Melissa Heikkilä
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.
Some state governments are taking notice and starting to regulate companion AI. But tellingly, one area the laws fail to address is user privacy. Read the full story.
This is the fourth edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review. Sign up here to receive future editions every Monday.
While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to MIT Technology Review can read the whole thing on our site.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has signed an executive order to boost AI innovation
The “Genesis Mission” will try to speed up the rate of scientific breakthroughs. (Politico)
+ The order directs government science agencies to aggressively embrace AI. (Axios)
+ It’s also being touted as a way to lower energy prices. (CNN)
2 Anthropic’s new AI model is designed to be better at coding
We’ll discover just how much better once Claude Opus 4.5 has been properly put through its paces. (Bloomberg $)
+ It reportedly outscored human candidates in an internal engineering test. (VentureBeat)
+ What is vibe coding, exactly? (MIT Technology Review)
3 The AI boom is keeping India hooked on coal
Leaving little chance of cleaning up Mumbai’s famously deadly pollution. (The Guardian)
+ It’s lethal smog season in New Delhi right now. (CNN)
+ The data center boom in the desert. (MIT Technology Review)
4 Teenagers are losing access to their AI companions
Character.AI is limiting the amount of time underage users can spend interacting with its chatbots. (WSJ $)
+ The majority of the company’s users are young and female. (CNBC)
+ One of OpenAI’s key safety leaders is leaving the company. (Wired $)
+ The looming crackdown on AI companionship. (MIT Technology Review)
5 Weight-loss drugs may be riskier during pregnancy
Recipients are more likely to deliver babies prematurely. (WP $)
+ The pill version of Ozempic failed to halt Alzheimer’s progression in a trial. (The Guardian)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)
6 OpenAI is launching a new “shopping research” tool
All the better to track your consumer spending with. (CNBC)
+ It’s designed for price comparisons and compiling buyer’s guides. (The Information $)
+ The company is clearly aiming for a share of Amazon’s e-commerce pie. (Semafor)
7 LA residents displaced by wildfires are moving into prefab housing 
Their new homes are cheap to build and simple to install. (Fast Company $)
+ How AI can help spot wildfires. (MIT Technology Review)
8 Why former Uber drivers are undertaking the world’s toughest driving test
They’re taking the Knowledge—London’s gruelling street test that bypasses GPS. (NYT $)
9 How to spot a fake battery
Great, one more thing to worry about. (IEEE Spectrum)
10 Where is the Trump Mobile?
Almost six months after it was announced, there’s no sign of it. (CNBC)
Quote of the day
“AI is a tsunami that is gonna wipe out everyone. So I’m handing out surfboards.”
—Filmmaker PJ Accetturo tells Ars Technica why he’s writing a newsletter advising fellow creatives how to pivot to AI tools.
One more thing

The second wave of AI coding is here
Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding.
Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. This next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it.
But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story.
—Will Douglas Heaven
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’re planning a visit to Istanbul, here’s hoping you like cats: the city can’t get enough of them.
+ Rest in power reggae icon Jimmy Cliff.
+ Did you know the ancient Egyptians had a pretty accurate way of testing for pregnancy?
+ As our readers in the US start prepping for Thanksgiving, spare a thought for Astoria the lovelorn turkey.
2025-11-25 18:28:29
For decades, business continuity planning meant preparing for anomalous events like hurricanes, floods, tornadoes, or regional power outages. In anticipation of these rare disasters, IT teams built playbooks, ran annual tests, crossed their fingers, and hoped they’d never have to use them.
In recent years, an even more persistent threat has emerged. Cyber incidents, particularly ransomware, are now more common—and often, more damaging—than physical disasters. In a recent survey of more than 500 CISOs, almost three-quarters (72%) said their organization had dealt with ransomware in the previous year. Earlier in 2025, ransomware attack rates on enterprises reached record highs.

Mark Vaughn, senior director of the virtualization practice at Presidio, has witnessed the trend firsthand. “When I speak at conferences, I’ll ask the room, ‘How many people have been impacted?’ For disaster recovery, you usually get a few hands,” he says. “But a little over a year ago, I asked how many people in the room had been hit by ransomware, and easily two-thirds of the hands went up.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
2025-11-25 00:30:00
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power.
In this week’s conversation MIT Technology Review’s senior reporter for features and investigations, Eileen Guo, and FT tech correspondent Melissa Heikkilä discuss the privacy implications of our new reliance on chatbots.

Eileen Guo writes:
Even if you don’t have an AI friend yourself, you probably know someone who does. A recent study found that one of the top uses of generative AI is companionship: On platforms like Character.AI, Replika, or Meta AI, people can create personalized chatbots to pose as the ideal friend, romantic partner, parent, therapist, or any other persona they can dream up.
It’s wild how easily people say these relationships can develop. And multiple studies have found that the more conversational and human-like an AI chatbot is, the more likely it is that we’ll trust it and be influenced by it. This can be dangerous, and the chatbots have been accused of pushing some people toward harmful behaviors—including, in a few extreme examples, suicide.
Some state governments are taking notice and starting to regulate companion AI. New York requires AI companion companies to create safeguards and report expressions of suicidal ideation, and last month California passed a more detailed bill requiring AI companion companies to protect children and other vulnerable groups.
But tellingly, one area the laws fail to address is user privacy.
This is despite the fact that AI companions, even more so than other types of generative AI, depend on people to share deeply personal information: their day-to-day routines, their innermost thoughts, and questions they might not feel comfortable asking real people.
After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in an op-ed we published last year, warning that the developers of AI companions make “deliberate design choices … to maximize user engagement.”
Ultimately, this provides AI companies with something incredibly powerful, not to mention lucrative: a treasure trove of conversational data that can be used to further improve their LLMs. Consider how the venture capital firm Andreessen Horowitz explained it in 2023:
“Apps such as Character.AI, which both control their models and own the end customer relationship, have a tremendous opportunity to generate market value in the emerging AI value stack. In a world where data is limited, companies that can create a magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product will be among the biggest winners that emerge from this ecosystem.”
This personal information is also incredibly valuable to marketers and data brokers. Meta recently announced that it will deliver ads through its AI chatbots. And research conducted this year by the security company Surfshark found that four out of the five AI companion apps it looked at in the Apple App Store were collecting data such as user or device IDs, which can be combined with third-party data to create profiles for targeted ads. (The only one that said it did not collect data for tracking services was Nomi, which told me earlier this year that it would not “censor” chatbots from giving explicit suicide instructions.)
All of this means that the privacy risks posed by these AI companions are, in a sense, required: They are a feature, not a bug. And we haven’t even talked about the additional security risks presented by the way AI chatbots collect and store so much personal information in one place.
So, is it possible to have prosocial and privacy-protecting AI companions? That’s an open question.
What do you think, Melissa, and what is top of mind for you when it comes to privacy risks from AI companions? And do things look any different in Europe?
Melissa Heikkilä replies:
Thanks, Eileen. I agree with you. If social media was a privacy nightmare, then AI chatbots put the problem on steroids.
In many ways, an AI chatbot creates what feels like a much more intimate interaction than a Facebook page. The conversations we have are only with our computers, so there is little risk of your uncle or your crush ever seeing what you write. The AI companies building the models, on the other hand, see everything.
Companies are optimizing their AI models for engagement by designing them to be as human-like as possible. But AI developers have several other ways to keep us hooked. The first is sycophancy, or the tendency for chatbots to be overly agreeable.
This tendency stems from the way the language models behind chatbots are trained with reinforcement learning from human feedback: human data labelers rate the answers a model generates as acceptable or not, and those ratings teach the model how to behave.
Because people generally like answers that are agreeable, such responses are weighted more heavily in training.
AI companies say they use this technique because it helps models become more helpful. But it creates a perverse incentive.
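To make that incentive concrete, here is a toy sketch in Python (purely illustrative, and not any company’s actual training pipeline): if human labelers tend to prefer the friendlier answer in each pair, the learned reward tilts toward flattery, and a chatbot optimized against that reward will drift the same way.

```python
# Toy illustration only: a one-feature "reward model" trained on pairwise human
# preferences. If labelers usually prefer the more agreeable answer, the learned
# weight on agreeableness turns positive, so a policy tuned to maximize this
# reward drifts toward sycophancy.
import math

def agreeableness(answer: str) -> float:
    # Crude stand-in for a real feature extractor.
    flattering = ("great idea", "you're right", "absolutely")
    return float(sum(phrase in answer.lower() for phrase in flattering))

# Hypothetical preference data: (preferred answer, rejected answer).
pairs = [
    ("That's a great idea, you're absolutely right!", "I'd push back on that plan."),
    ("You're right, go for it.", "The evidence doesn't support that."),
    ("Absolutely, great idea!", "Here are some risks to consider."),
]

w = 0.0  # weight on the agreeableness feature
for _ in range(200):
    for preferred, rejected in pairs:
        diff = agreeableness(preferred) - agreeableness(rejected)
        p = 1 / (1 + math.exp(-w * diff))  # Bradley-Terry probability the preferred answer wins
        w += 0.1 * (1 - p) * diff          # gradient step: reward the preferred side more

print(f"learned weight on agreeableness: {w:.2f} (positive = flattery is rewarded)")
```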
After encouraging us to pour our hearts out to chatbots, companies from Meta to OpenAI are now looking to monetize these conversations. OpenAI recently told us it was looking at a number of ways to meet $1 trillion spending pledges, which included advertising and shopping features.
AI models are already incredibly persuasive. Researchers at the UK’s AI Security Institute have shown that they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.
This feature, paired with their sycophancy and a wealth of personal data, could be a powerful tool for advertisers—one that is more manipulative than anything we have seen before.
By default, chatbot users are opted in to data collection. Opt-out policies place the onus on users to understand the implications of sharing their information. It’s also unlikely that data already used in training will be removed.
We are all part of this phenomenon whether we want to be or not. Social media platforms from Instagram to LinkedIn now use our personal data to train generative AI models.
Companies are sitting on treasure troves that consist of our most intimate thoughts and preferences, and language models are very good at picking up on subtle hints in language that could help advertisers profile us better by inferring our age, location, gender, and income level.
We are being sold the idea of an omniscient AI digital assistant, a superintelligent confidante. In return, however, there is a very real risk that our information is about to be sent to the highest bidder once again.
Eileen responds:
I think the comparison between AI companions and social media is both apt and concerning.
As Melissa highlighted, the privacy risks presented by AI chatbots aren’t new—they just “put the [privacy] problem on steroids.” AI companions are more intimate and even better optimized for engagement than social media, making it more likely that people will offer up more personal information.
Here in the US, we are far from solving the privacy issues already presented by social networks and the internet’s ad economy, even without the added risks of AI.
And without regulation, the companies themselves are not following privacy best practices either. One recent study found that the major AI companies train their LLMs on user chat data by default unless users opt out, while several don’t offer opt-out mechanisms at all.
In an ideal world, the greater risks of companion AI would give more impetus to the privacy fight—but I don’t see any evidence this is happening.
Further reading
FT reporters peer under the hood of OpenAI’s five-year business plan as it tries to meet its vast $1 trillion spending pledges.
Is it really such a problem if AI chatbots tell people what they want to hear? This FT feature asks what’s wrong with sycophancy.
In a recent print issue of MIT Technology Review, Rhiannon Williams spoke to a number of people about the types of relationships they are having with AI chatbots.
Eileen broke the story for MIT Technology Review about a chatbot that was encouraging some users to kill themselves.
2025-11-25 00:21:12
In 2017, fresh off a PhD in theoretical chemistry, John Jumper heard rumors that Google DeepMind had moved on from building AI that played games with superhuman skill and was starting up a secret project to predict the structures of proteins. He applied for a job.
Just three years later, Jumper celebrated a stunning win that few had seen coming. With CEO Demis Hassabis, he had co-led the development of an AI system called AlphaFold 2 that was able to predict the structures of proteins to within the width of an atom, matching the accuracy of painstaking techniques used in the lab, and doing it many times faster—returning results in hours instead of months.
AlphaFold 2 had cracked a 50-year-old grand challenge in biology. “This is the reason I started DeepMind,” Hassabis told me a few years ago. “In fact, it’s why I’ve worked my whole career in AI.” In 2024, Jumper and Hassabis shared a Nobel Prize in chemistry.
It was five years ago this week that AlphaFold 2’s debut took scientists by surprise. Now that the hype has died down, what impact has AlphaFold really had? How are scientists using it? And what’s next? I talked to Jumper (as well as a few other scientists) to find out.
“It’s been an extraordinary five years,” Jumper says, laughing: “It’s hard to remember a time before I knew tremendous numbers of journalists.”
AlphaFold 2 was followed by AlphaFold Multimer, which could predict structures that contained more than one protein, and then AlphaFold 3, the fastest version yet. Google DeepMind also let AlphaFold loose on UniProt, a vast protein database used and updated by millions of researchers around the world. It has now predicted the structures of some 200 million proteins, almost all that are known to science.
Despite his success, Jumper remains modest about AlphaFold’s achievements. “That doesn’t mean that we’re certain of everything in there,” he says. “It’s a database of predictions, and it comes with all the caveats of predictions.”
Proteins are the biological machines that make living things work. They form muscles, horns, and feathers; they carry oxygen around the body and ferry messages between cells; they fire neurons, digest food, power the immune system; and so much more. But understanding exactly what a protein does (and what role it might play in various diseases or treatments) involves figuring out its structure—and that’s hard.
Proteins are made from strings of amino acids that chemical forces twist up into complex knots. An untwisted string gives few clues about the structure it will form. In theory, most proteins could take on an astronomical number of possible shapes. The task is to predict the correct one.
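To get a feel for just how astronomical, here is a rough back-of-the-envelope calculation; the assumption of roughly three stable conformations per residue is the classic Levinthal-style illustration, not a figure from AlphaFold’s developers.

```python
# Rough illustration of why brute-force search is hopeless: if each residue in a
# protein chain can adopt about 3 stable local conformations (a classic
# Levinthal-style assumption), even a modest 100-residue protein has ~3^100
# possible shapes, far more than could ever be checked one by one.
residues = 100
conformations_per_residue = 3
total_shapes = conformations_per_residue ** residues
print(f"{total_shapes:.2e} possible shapes for a {residues}-residue chain")
# -> roughly 5.15e+47
```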
Jumper and his team built AlphaFold 2 using a type of neural network called a transformer, the same technology that underpins large language models. Transformers are very good at paying attention to specific parts of a larger puzzle.
But Jumper puts a lot of the success down to making a prototype model that they could test quickly. “We got a system that would give wrong answers at incredible speed,” he says. “That made it easy to start becoming very adventurous with the ideas you try.”
They stuffed the neural network with as much information about protein structures as they could, such as how proteins across certain species have evolved similar shapes. And it worked even better than they expected. “We were sure we had made a breakthrough,” says Jumper. “We were sure that this was an incredible advance in ideas.”
What he hadn’t foreseen was that researchers would download his software and start using it straight away for so many different things. Normally, it’s the thing a few iterations down the line that has the real impact, once the kinks have been ironed out, he says: “I’ve been shocked at how responsibly scientists have used it, in terms of interpreting it, and using it in practice about as much as it should be trusted in my view, neither too much nor too little.”
Do any projects stand out in particular?
Jumper brings up a research group that uses AlphaFold to study disease resistance in honeybees. “They wanted to understand this particular protein as they look at things like colony collapse,” he says. “I never would have said, ‘You know, of course AlphaFold will be used for honeybee science.’”
He also highlights a few examples of what he calls off-label uses of AlphaFold—“in the sense that it wasn’t guaranteed to work”—where the ability to predict protein structures has opened up new research techniques. “The first is very obviously the advances in protein design,” he says. “David Baker and others have absolutely run with this technology.”
Baker, a computational biologist at the University of Washington, was a co-winner of last year’s chemistry Nobel, alongside Jumper and Hassabis, for his work on creating synthetic proteins to perform specific tasks—such as treating disease or breaking down plastics—better than natural proteins can.
Baker and his colleagues have developed their own tool based on AlphaFold, called RoseTTAFold. But they have also experimented with AlphaFold Multimer to predict which of their designs for potential synthetic proteins will work.
“Basically, if AlphaFold confidently agrees with the structure you were trying to design then you make it and if AlphaFold says ‘I don’t know,’ you don’t make it. That alone was an enormous improvement.” It can make the design process 10 times faster, says Jumper.
Another off-label use that Jumper highlights: Turning AlphaFold into a kind of search engine. He mentions two separate research groups that were trying to understand exactly how human sperm cells hooked up with eggs during fertilization. They knew one of the proteins involved but not the other, he says: “And so they took a known egg protein and ran all 2,000 human sperm surface proteins, and they found one that AlphaFold was very sure stuck against the egg.” They were then able to confirm this in the lab.
“This notion that you can use AlphaFold to do something you couldn’t do before—you would never do 2,000 structures looking for one answer,” he says. “This kind of thing I think is really extraordinary.”
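As a rough sketch of what that kind of screen looks like in code, the idea is simply to run a structure predictor over every candidate pairing and rank the results by the model’s own confidence; the function names and scoring here are hypothetical placeholders, not Google DeepMind’s actual tooling.

```python
# Hypothetical sketch of an AlphaFold-style "search engine" screen: predict a
# complex between one known egg-surface protein and each candidate sperm-surface
# protein, then rank candidates by the model's confidence that the pair interacts.
# `predict_complex` is a stand-in, not a real AlphaFold API; in practice it would
# call a structure-prediction pipeline that reports an interface-confidence score.

def predict_complex(sequence_a: str, sequence_b: str) -> float:
    """Placeholder: return an interface-confidence score between 0 and 1."""
    return (hash((sequence_a, sequence_b)) % 1000) / 1000  # dummy value for the sketch

def screen_binders(egg_protein: str, sperm_proteins: dict[str, str], top_k: int = 10):
    scores = {name: predict_complex(egg_protein, seq) for name, seq in sperm_proteins.items()}
    # The highest-confidence candidates go to the lab for experimental confirmation.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```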
When AlphaFold 2 came out, I asked a handful of early adopters what they made of it. Reviews were good, but the technology was too new to know for sure what long-term impact it might have. I caught up with one of those people to hear his thoughts five years on.
Kliment Verba is a molecular biologist who runs a lab at the University of California, San Francisco. “It’s an incredibly useful technology, there’s no question about it,” he tells me. “We use it every day, all the time.”
But it’s far from perfect. A lot of scientists use AlphaFold to study pathogens or to develop drugs. This involves looking at interactions between multiple proteins or between proteins and even smaller molecules in the body. But AlphaFold is known to be less accurate at making predictions about multiple proteins or their interaction over time.
Verba says he and his colleagues have been using AlphaFold long enough to get used to its limitations. “There are many cases where you get a prediction and you have to kind of scratch your head,” he says. “Is this real or is this not? It’s not entirely clear—it’s sort of borderline.”
“It’s sort of the same thing as ChatGPT,” he adds. “You know—it will bullshit you with the same confidence as it would give a true answer.”
Still, Verba’s team uses AlphaFold (both 2 and 3, because they have different strengths, he says) to run virtual versions of their experiments before running them in the lab. Using AlphaFold’s results, they can narrow down the focus of an experiment—or decide that it’s not worth doing.
It can really save time, he says: “It hasn’t really replaced any experiments, but it’s augmented them quite a bit.”
AlphaFold was designed to be used for a range of purposes. Now multiple startups and university labs are building on its success to develop a new wave of tools more tailored to drug discovery. This year, a collaboration between MIT researchers and the AI drug company Recursion produced a model called Boltz-2, which predicts not only the structure of proteins but also how well potential drug molecules will bind to their target.
Last month, the startup Genesis Molecular AI released another structure prediction model called Pearl, which the firm claims is more accurate than AlphaFold 3 for certain queries that are important for drug development. Pearl is interactive, so that drug developers can feed any additional data they may have to the model to guide its predictions.
AlphaFold was a major leap, but there’s more to do, says Evan Feinberg, Genesis Molecular AI’s CEO: “We’re still fundamentally innovating, just with a better starting point than before.”
Genesis Molecular AI is pushing margins of error down from less than two angstroms, the de facto industry standard set by AlphaFold, to less than one angstrom—one 10-millionth of a millimeter, or the width of a single hydrogen atom.
“Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” says Michael LeVine, vice president of modeling and simulation at the firm. That’s because chemical forces that interact at one angstrom can stop doing so at two. “It can go from ‘They will never interact’ to ‘They will,’” he says.
With so much activity in this space, how soon should we expect new types of drugs to hit the market? Jumper is pragmatic. Protein structure prediction is just one step of many, he says: “This was not the only problem in biology. It’s not like we were one protein structure away from curing any diseases.”
Think of it this way, he says. Finding a protein’s structure might previously have cost $100,000 in the lab: “If we were only a hundred thousand dollars away from doing a thing, it would already be done.”
At the same time, researchers are looking for ways to do as much as they can with this technology, says Jumper: “We’re trying to figure out how to make structure prediction an even bigger part of the problem, because we have a nice big hammer to hit it with.”
In other words, make everything into nails? “Yeah, let’s make things into nails,” he says. “How do we make this thing that we made a million times faster a bigger part of our process?”
Jumper’s next act? He wants to fuse the deep but narrow power of AlphaFold with the broad sweep of LLMs.
“We have machines that can read science. They can do some scientific reasoning,” he says. “And we can build amazing, superhuman systems for protein structure prediction. How do you get these two technologies to work together?”
That makes me think of a system called AlphaEvolve, which is being built by another team at Google DeepMind. AlphaEvolve uses an LLM to generate possible solutions to a problem and a second model to check them, filtering out the trash. Researchers have already used AlphaEvolve to make a handful of practical discoveries in math and computer science.
Is that what Jumper has in mind? “I won’t say too much on methods, but I’ll be shocked if we don’t see more and more LLM impact on science,” he says. “I think that’s the exciting open question that I’ll say almost nothing about. This is all speculation, of course.”
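For what it’s worth, the generate-and-verify loop described above is easy to sketch in outline; this is only a schematic of the pattern, with placeholder generator and checker functions, and not AlphaEvolve itself.

```python
# Schematic of the generate-and-verify pattern: one model proposes candidate
# solutions, a second check filters out the trash, and only survivors are kept.
# `propose_solutions` and `passes_check` are placeholders, not the actual
# AlphaEvolve components.

def generate_and_verify(problem, propose_solutions, passes_check, rounds=5):
    survivors = []
    for _ in range(rounds):
        for candidate in propose_solutions(problem):  # e.g. an LLM sampling ideas
            if passes_check(problem, candidate):      # e.g. an evaluator or test harness
                survivors.append(candidate)
    return survivors
```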
Jumper was 39 when he won his Nobel Prize. What’s next for him?
“It worries me,” he says. “I believe I’m the youngest chemistry laureate in 75 years.”
He adds: “I’m at the midpoint of my career, roughly. I guess my approach to this is to try to do smaller things, little ideas that you keep pulling on. The next thing I announce doesn’t have to be, you know, my second shot at a Nobel. I think that’s the trap.”
2025-11-24 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Meet the man building a starter kit for civilization
You live in a house you designed and built yourself. You rely on the sun for power, heat your home with a woodstove, and farm your own fish and vegetables. The year is 2025.
This is the life of Marcin Jakubowski, the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS).
It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit. It’s all part of his ethos that life-changing technology should be available to all, not controlled by a select few. Read the full story.
—Tiffany Ng
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.
What it’s like to find yourself in the middle of a conspiracy theory
Last week, we held a subscribers-only Roundtables discussion exploring how to cope in this new age of conspiracy theories. Our features editor Amanda Silverman and executive editor Niall Firth were joined by conspiracy expert Mike Rothschild, who explained exactly what it’s like to find yourself at the center of a conspiracy you can’t control. Watch the conversation back here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 DOGE has been disbanded
Even though it’s got eight months left before its official scheduled end. (Reuters)
+ It leaves a legacy of chaos and few measurable savings. (Politico)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)
2 How OpenAI’s tweaks to ChatGPT sent some users into delusional spirals
It essentially turned a dial that increased both usage of the chatbot and the risks it poses to a subset of people. (NYT $)
+ AI workers are warning loved ones to stay away from the technology. (The Guardian)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)
3 A three-year-old has received the world’s first gene therapy for Hunter syndrome
Oliver Chu appears to be developing normally one year after starting therapy. (BBC)
4 Why we may—or may not—be in an AI bubble 
It’s time to follow the data. (WP $)
+ Even tech leaders don’t appear to be entirely sure. (Insider $)
+ How far can the ‘fake it til you make it’ strategy take us? (WSJ $)
+ Nvidia is still riding the wave with abandon. (NY Mag $)
5 Many MAGA influencers are based in Russia, India, and Nigeria
X’s new account provenance feature is revealing some interesting truths. (The Daily Beast)
6 The FBI wants to equip drones with facial recognition tech
Civil libertarians claim the plans equate to airborne surveillance. (The Intercept)
+ This giant microwave may change the future of war. (MIT Technology Review)
7 Snapchat is alerting users ahead of Australia’s under-16s social media ban
The platform will analyze an account’s “behavioral signals” to estimate a user’s age. (The Guardian)
+ An AI nudification site has been fined for skipping age checks. (The Register)
+ Millennial parents are fetishizing the notion of an offline childhood. (The Observer)
8 Activists are roleplaying ICE raids in Fortnite and Grand Theft Auto
It’s in a bid to prepare players to exercise their rights in the real world. (Wired $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)
9 The JWST may have uncovered colossal stars 
In fact, they’re so big that their masses could be 10,000 times that of the sun. (New Scientist $)
+ Inside the hunt for the most dangerous asteroid ever. (MIT Technology Review)
10 Social media users are lying about brands ghosting them
Completely normal behavior. (WSJ $)
+ This would never have happened on Vine, I’ll tell you now. (The Verge)
Quote of the day
“I can’t believe we have to say this, but this account has only ever been run and operated from the United States.”
—The US Department of Homeland Security’s X account attempts to end speculation surrounding its social media origins, the New York Times reports.
One more thing

This company is planning a lithium empire from the shores of the Great Salt Lake
On a bright afternoon in August, the shore of Utah’s Great Salt Lake looks like something out of a science fiction film set in a scorching alien world.
This otherworldly scene is the test site for a company called Lilac Solutions, which is developing a technology it says will shake up the United States’ efforts to pry control over the global supply of lithium, the so-called “white gold” needed for electric vehicles and batteries, away from China.
The startup is in a race to commercialize a new, less environmentally damaging way to extract lithium from rocks. If everything pans out, it could significantly increase domestic supply at a crucial moment for the nation’s lithium extraction industry. Read the full story.
—Alexander C. Kaufman
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ I love the thought of clever crows putting their smarts to use picking up cigarette butts (thanks, Alice!).
+ Talking of brains, sea urchins have a whole lot more than we originally suspected.
+ Wow—a Ukrainian refugee has won an elite-level sumo competition in Japan.
+ How to make any day feel a little bit brighter.
2025-11-21 21:10:00
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
We’re learning more about what vitamin D does to our bodies
At a checkup a few years ago, a doctor told me I was deficient in vitamin D. But he wouldn’t write me a prescription for supplements, simply because, as he put it, everyone in the UK is deficient. Putting the entire population on vitamin D supplements would be too expensive for the country’s national health service, he told me.
But supplementation—whether covered by a health-care provider or not—can be important. As those of us living in the Northern Hemisphere spend fewer of our waking hours in sunlight, let’s consider the importance of vitamin D. Read the full story.
—Jessica Hamzelou
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
If you’re interested in other stories from our biotech writers, check out some of their most recent work:
+ Advances in organs on chips, digital twins, and AI are ushering in a new era of research and drug development that could help put a stop to animal testing. Read the full story.
+ Here’s the latest company planning for gene-edited babies.
+ Preventing the common cold is extremely tricky—but not impossible. Here’s why we don’t have a cold vaccine. Yet.
+ Scientists are creating the beginnings of bodies without sperm or eggs. How far should they be allowed to go? Read the full story.
+ This retina implant lets people with vision loss do a crossword puzzle. Read the full story.
Partying at one of Africa’s largest AI gatherings
It’s late August in Rwanda’s capital, Kigali, and people are filling a large hall at one of Africa’s biggest gatherings of minds in AI and machine learning. Deep Learning Indaba is an annual AI conference where Africans present their research and technologies they’ve built, mingling with friends as a giant screen blinks with videos created with generative AI.
The main “prize” for many attendees is to be hired by a tech company or accepted into a PhD program. But the organizers hope to see more homegrown ventures create opportunities within Africa. Read the full story.
—Abdullahi Tsanni
This story is from the latest print issue of MIT Technology Review magazine, which is full of fascinating stories. If you haven’t already, subscribe now to receive future issues once they land.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Google’s new Nano Banana Pro generates convincing propaganda
The company’s latest image-generating AI model seems to have few guardrails. (The Verge)
+ Google wants its creations to be slicker than ever. (Wired $)
+ Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent. (MIT Technology Review)
2 Taiwan says the US won’t punish it with high chip tariffs
In fact, official Wu Cheng-wen says Taiwan will help support the US chip industry in exchange for tariff relief. (FT $)
3 Mental health support is one of the most dangerous uses for chatbots
They fail to recognize psychiatric conditions and can miss critical warning signs. (WP $)
+ AI companies have stopped warning you that their chatbots aren’t doctors. (MIT Technology Review)
4 It costs an average of $17,121 to deport one person from the US
But in some cases it can cost much, much more. (Bloomberg $)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)
5 Grok is telling users that Elon Musk is the world’s greatest lover
What’s it basing that on, exactly? (Rolling Stone $)
+ It also claims he’s fitter than basketball legend LeBron James. Sure. (The Guardian)
6 Who’s really in charge of US health policy?
RFK Jr. and FDA commissioner Marty Makary are reportedly at odds behind the scenes. (Vox)
+ Republicans are lightly pushing back on the CDC’s new stance on vaccines. (Politico)
+ Why anti-vaxxers are seeking to discredit Danish studies. (Bloomberg $)
+ Meet Jim O’Neill, the longevity enthusiast who is now RFK Jr.’s right-hand man. (MIT Technology Review)
7 Inequality is worsening in San Francisco
As billionaires thrive, hundreds of thousands of others are struggling to get by. (WP $)
+ A massive airship has been spotted floating over the city. (SF Gate)
8 Donald Trump is thrusting obscure meme-makers into the mainstream
He’s been reposting flattering AI-generated memes by the dozen. (NYT $)
+ MAGA YouTube stars are pushing a boom in politically charged ads. (Bloomberg $)
9 Moss spores survived nine months in space
And they could remain reproductively viable for another 15 years. (New Scientist $)
+ It suggests that some life on Earth has evolved to endure space conditions. (NBC News)
+ The quest to figure out farming on Mars. (MIT Technology Review)
10 Does AI really need a physical shape?
It doesn’t really matter—companies are rushing to give it one anyway. (The Atlantic $)
Quote of the day
“At some point you’ve got to wonder whether the bug is a feature.”
—Alexios Mantzarlis, director of the Security, Trust and Safety Initiative at Cornell Tech, ponders xAI and Grok’s proclivity for surfacing Elon Musk-friendly and/or far-right sources, the Washington Post reports.
One more thing

The AI lab waging a guerrilla war over exploitative AI
Back in 2022, the tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.
But artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.
Ben Zhao, a computer security researcher at the University of Chicago, was listening. He and his colleagues have built arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping: two tools called Glaze and Nightshade that add barely perceptible perturbations to an image’s pixels so that machine-learning models cannot read them properly.
But Zhao sees the tools as part of a battle to slowly tilt the balance of power from large corporations back to individual creators. Read the full story.
—Melissa Heikkilä
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’re ever tempted to try and recreate a Jackson Pollock painting, maybe you’d be best leaving it to the kids.
+ Scientists have discovered that lions have not one, but two distinct types of roars.
+ The relentless rise of the quarter-zip must be stopped!
+ Pucker up: here’s a brief history of kissing.