MIT Technology Review
A world-renowned, independent media company whose insight, analysis, reviews, interviews, and live events explain the newest technologies and their commercial, social, and political impact.

Inside the marketplace powering bespoke AI deepfakes of real women

2026-01-31 00:32:31

Civitai—an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz—is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found.

The study, from researchers at Stanford and Indiana University, looked at people’s requests for content on the site, called “bounties.” The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content—but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women. (Their findings have not yet been peer reviewed.)

The debate around deepfakes, as illustrated by the recent backlash to explicit images on the X-owned chatbot Grok, has revolved around what platforms should do to block such content. Civitai’s situation is a little more complicated. Its marketplace includes actual images, videos, and models, but it also lets individuals buy and sell instruction files called LoRAs that can coach mainstream AI models like Stable Diffusion into generating content they were not trained to produce. Users can then combine these files with other tools to make deepfakes that are graphic or sexual. The researchers found that 86% of deepfake requests on Civitai were for LoRAs.

In these bounties, users requested “high quality” models to generate images of public figures like the influencer Charli D’Amelio or the singer Gracie Abrams, often linking to their social media profiles so their images could be grabbed from the web. Some requests specified a desire for models that generated the individual’s entire body, accurately captured their tattoos, or allowed hair color to be changed. Some requests targeted several women in specific niches, like artists who record ASMR videos. One request was for a deepfake of a woman said to be the user’s wife. Anyone on the site could offer up AI models they worked on for the task, and the best submissions received payment—anywhere from $0.50 to $5. And nearly 92% of the deepfake bounties were awarded.

Neither Civitai nor Andreessen Horowitz responded to requests for comment.

It’s possible that people buy these LoRAs to make deepfakes that aren’t sexually explicit (though they’d still violate Civitai’s terms of use, and they’d still be ethically fraught). But Civitai also offers educational resources on how to use external tools to further customize the outputs of image generators—for example, by changing someone’s pose. The site also hosts user-written articles with details on how to instruct models to generate pornography. The researchers found that the amount of porn on the platform has gone up, and that the majority of requests each week are now for NSFW content.

“Not only does Civitai provide the infrastructure that facilitates these issues; they also explicitly teach their users how to utilize them,” says Matthew DeVerna, a postdoctoral researcher at Stanford’s Cyber Policy Center and one of the study’s leaders. 

The company used to ban only sexually explicit deepfakes of real people, but in May 2025 it announced it would ban all deepfake content. Nonetheless, countless requests for deepfakes submitted before this ban now remain live on the site, and many of the winning submissions fulfilling those requests remain available for purchase, MIT Technology Review confirmed.

“I believe the approach that they’re trying to take is to sort of do as little as possible, such that they can foster as much—I guess they would call it—creativity on the platform,” DeVerna says.

Users buy LoRAs with the site’s online currency, called Buzz, which is purchased with real money. In May 2025, Civitai’s credit card processor cut off the company because of its ongoing problem with nonconsensual content. To pay for explicit content, users must now use gift cards or cryptocurrency to buy Buzz; the company offers a different scrip for non-explicit content.

Civitai automatically tags bounties requesting deepfakes and lists a way for the person featured in the content to manually request its takedown. This system means Civitai has a reliable way of identifying which bounties are for deepfakes, but it is still leaving moderation to the general public rather than carrying it out proactively.

A company’s legal liability for what its users do isn’t totally clear. Generally, tech companies have broad legal protections against such liability for their content under Section 230 of the Communications Decency Act, but those protections aren’t limitless. For example, “you cannot knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington’s law school. (Calo wasn’t involved in this new study.)

Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to guard against the creation and spread of AI-generated child sexual abuse material. This move followed a 2023 report from the Stanford Internet Observatory, which found that the vast majority of AI models named in child sexual abuse communities were Stable Diffusion–based models “predominantly obtained via Civitai.”

But adult deepfakes have not gotten the same level of attention from content platforms or the venture capital firms that fund them. “They are not afraid enough of it. They are overly tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against it. It is night and day.”

Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the main place where people find and share AI models for their own individual purposes. “We’ve aimed to make this space that’s been very, I guess, niche and engineering-heavy more and more approachable to more and more people,” he said. 

Civitai is not the only company with a deepfake problem in a16z’s investment portfolio; in February, MIT Technology Review first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some instances described age-of-consent laws as “arbitrary” and “meant to be broken.”

The Download: US immigration agencies’ AI videos, and inside the Vitalism movement

2026-01-30 21:10:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

DHS is using Google and Adobe AI to make videos

The news: The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity.

Why it matters: It comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI—and as workers in tech have put pressure on their employers to denounce the agencies’ activities. Read the full story.

—James O’Donnell

How the sometimes-weird world of lifespan extension is gaining influence

—Jessica Hamzelou

For the last couple of years, I’ve been following the progress of a group of individuals who believe death is humanity’s “core problem.” Put simply, they say death is wrong—for everyone. They’ve even said it’s morally wrong.

They established what they consider a new philosophy, and they called it Vitalism.

Vitalism is more than a philosophy, though—it’s a movement for hardcore longevity enthusiasts who want to make real progress in finding treatments that slow or reverse aging. Not just through scientific advances, but by persuading influential people to support their movement, and by changing laws and policies to open up access to experimental drugs. And they’re starting to make progress.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The AI Hype Index: Grok makes porn, and Claude Code nails your job

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Capgemini is no longer tracking immigrants for ICE
After the French company was queried by the country’s government over the contract. (WP $)
+ Here’s how the agency typically keeps tabs on its targets. (NYT $)
+ US senators are pushing for answers about its recent surveillance shopping spree. (404 Media)
+ ICE’s tactics would get real soldiers killed, apparently. (Wired $)

2 The Pentagon is at loggerheads with Anthropic
The AI firm is reportedly worried its tools could be used to spy on Americans. (Reuters)
+ Generative AI is learning to spy for the US military. (MIT Technology Review)

3 It’s relatively rare for AI chatbots to lead users down harmful paths
But when they do, the consequences can be incredibly dangerous. (Ars Technica)
+ The AI doomers feel undeterred. (MIT Technology Review)

4 GPT-4o’s days are numbered
OpenAI says just 0.1% of users are using the model every day. (CNBC)
+ It’s the second time OpenAI has tried to turn the sycophantic model off in under a year. (Insider $)
+ Why GPT-4o’s sudden shutdown left people grieving. (MIT Technology Review)

5 An AI toy company left its chats with kids exposed
Anyone with a Gmail account was able to simply access the conversations—no hacking required. (Wired $)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

6 SpaceX could merge with xAI later this year
Ahead of a planned blockbuster IPO of Elon Musk’s companies. (Reuters)
+ The move would be welcome news for Musk fans. (The Information $)
+ A SpaceX-Tesla merger could also be on the cards. (Bloomberg $)

7 We’re still waiting for a reliable male contraceptive
Take a look at the most promising methods so far. (Bloomberg $)

8 AI is bringing traditional Chinese medicine to the masses
And it’s got the full backing of the country’s government. (Rest of World)

9 The race back to the Moon is heating up 
Competition between the US and China is more intense than ever. (Economist $)

10 What did the past really smell like?
AI could help scientists to recreate history’s aromas—including those of mummies and battlefields. (Knowable Magazine)

Quote of the day

“I think the tidal wave is coming and we’re all standing on the beach.”

—Bill Zysblat, a music business manager, tells the Financial Times about the existential threat AI poses to the industry. 

One more thing

Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen.

For the rest of the session, Declan was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen: the therapist was taking what Declan was saying, putting it into ChatGPT, and then parroting its answers.

But Declan is not alone. In fact, a growing number of people are reporting receiving AI-generated communiqués from their therapists. Clients’ trust and privacy are being abandoned in the process. Read the full story.

—Laurie Clarke

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Sinkholes are seriously mysterious. Is there a way to stay one step ahead of them?
+ This beautiful pixel art is super impressive.
+ Amid the upheaval in their city, residents of Minneapolis recently demonstrated both their resistance and community spirit in the annual Art Sled Rally. (thanks Paul!)
+ How on Earth is Tomb Raider 30 years old?!

How the sometimes-weird world of lifespan extension is gaining influence

2026-01-30 18:00:00

For the last couple of years, I’ve been following the progress of a group of individuals who believe death is humanity’s “core problem.” Put simply, they say death is wrong—for everyone. They’ve even said it’s morally wrong.

They established what they consider a new philosophy, and they called it Vitalism.

Vitalism is more than a philosophy, though—it’s a movement for hardcore longevity enthusiasts who want to make real progress in finding treatments that slow or reverse aging. Not just through scientific advances, but by persuading influential people to support their movement, and by changing laws and policies to open up access to experimental drugs.

And they’re starting to make progress.

Vitalism was founded by Adam Gries and Nathan Cheng—two men who united over their shared desire to find ways to extend human lifespan. I first saw Cheng speak back in 2023, at Zuzalu, a pop-up city in Montenegro for people who were interested in life extension and some other technologies. (It was an interesting experience—you can read more about it here.)

Zuzalu was where Gries and Cheng officially launched Vitalism. But I’ve been closely following the longevity scene since 2022. That journey took me to Switzerland, Honduras, and a compound in Berkeley, California, where like-minded longevity enthusiasts shared their dreams of life extension.

It also took me to Washington, DC, where, last year, supporters of lifespan extension presented politicians including Mehmet Oz, who currently leads the Centers for Medicare & Medicaid Services, with their case for changes to laws and policies.

The journey has been fascinating, and at times weird and even surreal. I’ve heard biohacking stories that ended with smoking legs. I’ve been told about a multi-partner relationship that might be made possible through the cryopreservation—and subsequent reanimation—of a man and the multiple wives he’s had throughout his life. I’ve had people tell me to my face that they consider themselves eugenicists, and that they believe that parents should select IVF embryos for their propensity for a long life.

I’ve seen people draw blood during dinner in an upscale hotel restaurant to test their biological age. I’ve heard wild plans to preserve human consciousness and resurrect it in machines. Others have told me their plans to inject men’s penises with multiple doses of an experimental gene therapy in order to treat erectile dysfunction and ultimately achieve “radical longevity.”

I’ve been shouted at and threatened with legal action. I’ve received barefoot hugs. One interviewee told me I needed Botox. It’s been a ride.

My reporting has also made me realize that the current interest in longevity reaches beyond social media influencers and wellness centers. Longevity clinics are growing in number, and there’s been a glut of documentaries about living longer or even forever.

At the same time, powerful people who influence state laws, giant federal funding budgets, and even national health policy are prioritizing the search for treatments that slow or reverse aging. The longevity community was thrilled when longtime supporter Jim O’Neill was made deputy secretary of health and human services last year. Other members of Trump’s administration, including Oz, have spoken about longevity too. “It seems that now there is the most pro-longevity administration in American history,” Gries told me.

I recently spoke to Alicia Jackson, the new director of ARPA-H. The agency, established in 2022 under Joe Biden’s presidency, funds “breakthrough” biomedical research. And it appears to have a new focus on longevity. Jackson previously founded and led Evernow, a company focused on “health and longevity for every woman.”

“There’s a lot of interesting technologies, but they all kind of come back to the same thing: Could we extend life years?” she told me over a Zoom call a few weeks ago. She added that her agency had “incredible support” from “the very top of HHS.” I asked if she was referring to Jim O’Neill. “Yeah,” she said. She wouldn’t go into the specifics.

Gries is right: There is a lot of support for advances in longevity treatments, and some of it is coming from influential people in positions of power. Perhaps the field really is poised for a breakthrough.

And that’s what makes this field so fascinating to cover. Despite the occasional weirdness.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The AI Hype Index: Grok makes porn, and Claude Code nails your job

2026-01-30 04:56:23

Everyone is panicking because AI is very bad; everyone is panicking because AI is very good. It’s just that you never know which one you’re going to get. Grok is a pornography machine. Claude Code can do anything from building websites to reading your MRI. So of course Gen Z is spooked by what this means for jobs. Unnerving new research says AI is going to have a seismic impact on the labor market this year.

If you want to get a handle on all that, don’t expect any help from the AI companies—they’re turning on each other like it’s the last act in a zombie movie. Meta’s former chief AI scientist, Yann LeCun, is spilling tea, while Big Tech’s messiest exes, Elon Musk and OpenAI, are about to go to trial. Grab your popcorn.

DHS is using Google and Adobe AI to make videos

2026-01-30 02:57:11

The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda—some of which appears to be made with AI—and as workers in tech have put pressure on their employers to denounce the agencies’ activities. 

The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. 

In a section about “editing images, videos or other public affairs materials using AI,” it reveals for the first time that DHS is using Google’s Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools. It also discloses that DHS uses Microsoft Copilot Chat for generating first drafts of documents and summarizing long reports, and Poolside software for coding tasks, in addition to tools from other companies.

Google, Adobe, and DHS did not immediately respond to requests for comment.

The news provides details about how agencies like Immigration and Customs Enforcement, which is part of DHS, might be creating the large amounts of content they’ve shared on X and other channels as immigration operations have expanded across US cities. They’ve posted content celebrating “Christmas after mass deportations,” referenced Bible verses and Christ’s birth, shown the faces of those the agency has arrested, and shared ads aimed at recruiting agents. The agencies have also repeatedly used music in their videos without permission from the artists.

Some of the content, particularly videos, has the appearance of being AI-generated, but it hasn’t been clear until now what AI models the agencies might be using. This marks the first concrete evidence such generators are being used by DHS to create content shared with the public.

It remains impossible to verify which company helped create a specific piece of content, or indeed whether it was AI-generated at all. Adobe offers options to “watermark” a video made with its tools to disclose that it is AI-generated, for example, but this disclosure does not always stay intact when the content is uploaded and shared across different sites.

The document reveals that DHS has specifically been using Flow, a tool from Google that combines its Veo 3 video generator with a suite of filmmaking tools. Users can generate clips and assemble entire videos with AI, including videos that contain sound, dialogue, and background noise, making them hyperrealistic. Adobe launched its Firefly generator in 2023, promising that it does not use copyrighted content in its training or output. Like Google’s tools, Adobe’s can generate videos, images, soundtracks, and speech. The document does not reveal further details about how the agency is using these video generation tools.

Workers at large tech companies, including more than 140 current and former employees from Google and more than 30 from Adobe, have been putting pressure on their employers in recent weeks to take a stance against ICE and the shooting of Alex Pretti on January 24. Google’s leadership has not made statements in response. In October, Google and Apple removed apps on their app stores that were intended to track sightings of ICE, citing safety risks. 

An additional document released on Wednesday revealed new details about how the agency is using more niche AI products, including a facial recognition app used by ICE, as first reported by 404 Media in June.

The Download: inside the Vitalism movement, and why AI’s “memory” is a privacy problem

2026-01-29 21:10:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the Vitalists: the hardcore longevity enthusiasts who believe death is “wrong”

Last April, an excited crowd gathered at a compound in Berkeley, California, for a three-day event called the Vitalist Bay Summit. It was part of a longer, two-month residency that hosted various events to explore tools—from drug regulation to cryonics—that might be deployed in the fight against death.

One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Nathan Cheng and his colleague Adam Gries a few years ago. Consider it longevity for the most hardcore adherents—a sweeping mission to which nothing short of total devotion will do.

Although interest in longevity has certainly taken off in recent years, not everyone in the broader longevity space shares Vitalists’ commitment to actually making death obsolete. And the Vitalists feel that momentum is building, not just for the science of aging and the development of lifespan-extending therapies, but for the acceptance of their philosophy that defeating death should be humanity’s top concern. Read the full story.

—Jessica Hamzelou

This is the latest in our Big Story series, the home for MIT Technology Review’s most important, ambitious reporting. You can read the rest of the series here.

What AI “remembers” about you is privacy’s next frontier

—Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, & Ruchika Joshi, fellow at the Center for Democracy & Technology specializing in AI safety and governance

The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.

Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes.

But their ability to store and retrieve increasingly intimate details about their users over time introduces alarming, and all-too-familiar, privacy vulnerabilities—many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities. So what can developers do to fix this problem? Read the full story.

How the grid can ride out winter storms

The eastern half of the US saw a monster snowstorm over the weekend. The good news is the grid has largely been able to keep up with the freezing temperatures and increased demand. But there were some signs of strain, particularly for fossil-fuel plants.

One analysis found that PJM, the nation’s largest grid operator, saw significant unplanned outages in plants that run on natural gas and coal. Historically, these facilities can struggle in extreme winter weather.

Much of the country continues to face record-low temperatures, and the possibility is looming for even more snow this weekend. What lessons can we take from this storm, and how might we shore up the grid to cope with extreme weather? Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Telegram has been flooded with deepfake nudes 
Millions of users are creating and sharing falsified images in dedicated channels. (The Guardian)

2 China has executed 11 people linked to Myanmar scam centers
The members of the “Ming family criminal gang” caused the deaths of at least 14 Chinese citizens. (Bloomberg $)
+ Inside a romance scam compound—and how people get tricked into being there. (MIT Technology Review)

3 This viral personal AI assistant is a major privacy concern
Security researchers are sounding the alarm on Moltbot, formerly known as Clawdbot. (The Register)
+ It requires a great deal more technical know-how than most agentic bots. (TechCrunch)

4 OpenAI has a plan to keep bots off its future social network
It’s putting its faith in biometric “proof of personhood” promised by the likes of World’s eyeball-scanning orb. (Forbes)
+ We reported on how World recruited its first half a million test users back in 2022. (MIT Technology Review)

5 Here are just some of the technologies ICE is deploying
From facial recognition to digital forensics. (WP $)
+ Agents are also using Palantir’s AI to sift through tip-offs. (Wired $)

6 Tesla is axing its Model S and Model X cars 🚗
Its Fremont factory will switch to making Optimus robots instead. (TechCrunch)
+ It’s the latest stage of the company’s pivot to AI… (FT $)
+ …as profit falls by 46%. (Ars Technica)
+ Tesla is still struggling to recover from the damage of Elon Musk’s political involvement. (WP $)

7 X is rife with weather influencers spreading misinformation
They’re whipping up hype ahead of massive storms hitting. (New Yorker $)

8 Retailers are going all-in on AI
But giants like Amazon and Walmart are taking very different approaches. (FT $)
+ Mark Zuckerberg has hinted that Meta is working on agentic commerce tools. (TechCrunch)
+ We called it—what’s next for AI in 2026. (MIT Technology Review)

9 Inside the rise of the offline hangout
No phones, no problem. (Wired $)

10 Social media is obsessed with 2016
…why, exactly? (WSJ $)

Quote of the day

“The amount of crap I get for putting out a hobby project for free is quite something.”

—Peter Steinberger, the creator of the viral AI agent Moltbot, complains in a post on X about the backlash his project has received from security researchers pointing out its flaws.

One more thing

The flawed logic of rushing out extreme climate solutions

Early in 2022, entrepreneur Luke Iseman says, he released a pair of sulfur dioxide–filled weather balloons from Mexico’s Baja California peninsula, in the hope that they’d burst miles above Earth.

It was a trivial gesture in itself, effectively a tiny, DIY act of solar geoengineering, the controversial proposal that the world could counteract climate change by releasing particles that reflect more sunlight back into space.

Entrepreneurs like Iseman invoke the stark dangers of climate change to explain why they do what they do—even if they don’t know how effective their interventions are. But experts say that urgency doesn’t create a social license to ignore the underlying dangers or leapfrog the scientific process. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The hottest thing in art right now? Vertical paintings.
+ There’s something in the water around Monterey Bay—a tail-walking dolphin!
+ Fed up with hairstylists who don’t listen to you? Remember these handy tips the next time you go for a cut.
+ Get me a one-way ticket to Japan’s tastiest island.