
Musk v. Altman week 1: Elon Musk says he was duped, warns AI could kill us all, and admits that xAI distills OpenAI’s models

2026-05-02 06:08:19

In the first week of the landmark trial between Elon Musk and OpenAI, Musk took the stand in a crisp black suit and tie and argued that OpenAI CEO Sam Altman and president Greg Brockman had deceived him into bankrolling the company. Along the way, he warned that AI could destroy us all and sat through revelations that he had poached OpenAI employees for his own companies. He even confessed, to some audible gasps in the courtroom, that his own AI company, xAI, which makes the chatbot Grok, uses OpenAI’s models to train its own. 

The federal courthouse in Oakland, California, was packed with armies of lawyers carrying boxes of exhibits, journalists typing away at their laptops, and a handful of concerned OpenAI employees. Outside, protesters lined the streets, carrying signs urging people to quit ChatGPT, boycott Tesla, or both. Musk looked calm and comfortable, slipping in the occasional quip in his distinct South African accent. But he also was full of remorse. 

“I was a fool who provided them free funding to create a startup,” Musk told the jury. He said when he cofounded OpenAI in 2015 with Altman and Brockman, he was donating to a nonprofit developing AI for the benefit of humanity, not to make the executives rich. “I gave them $38 million of essentially free funding, which they then used to create what would become an $800 billion company,” he said.

Musk is asking the court to remove Altman and Brockman from their roles and to unwind the restructuring that allowed OpenAI to operate a for-profit subsidiary. The outcome of the trial could upend OpenAI’s race toward an IPO at a valuation approaching $1 trillion. Meanwhile, xAI is expected to go public as a part of Musk’s rocket company SpaceX as early as June, at a target valuation of $1.75 trillion.

This week’s testimony revolved around a central question of the trial: why Musk is suing OpenAI. Musk argued he was trying to save OpenAI’s mission to develop AI safely by restoring the company to its original nonprofit structure. OpenAI’s lawyer, William Savitt, who once represented Musk and his electric-car company Tesla, countered that Musk was “never committed to OpenAI being a nonprofit” and instead was suing to undermine his competitor. 

Who is the steward of AI safety?

During his direct examination early in the week, Musk painted himself as a longtime advocate of AI safety. He said he cofounded OpenAI to create a “counterbalance to Google,” which was leading the AI race at the time. He said that when he asked Google cofounder Larry Page what happens if AI tries to wipe out humanity, Page told him, “That will be fine as long as artificial intelligence survives.” 

“The worst-case scenario is a Terminator situation where AI kills us all,” Musk later told the jury.

Savitt stood at the lectern and argued that Musk was not a “paladin of safety and regulation.” As he cross-examined Musk in his sharp, surgical cadence, Savitt pointed out that xAI sued the state of Colorado in April over an AI law designed to prevent algorithmic discrimination. 

Musk’s lawyer, Steven Molo, sprang to his feet to object. He asked the judge if he, too, could weigh in on ChatGPT’s safety record. 

The lawyers then entered a heated debate about who was the true guardian of AI safety. 

The sparring continued the next morning. “We all could die as a result of artificial intelligence!” said Molo, suggesting that OpenAI could not be trusted to build AI safely.

“Despite these risks, your client is creating a company that’s in the exact space,” Judge Yvonne Gonzalez Rogers said sternly, referring to xAI. “I suspect there’s plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands.”

When the lawyers began talking over each other, the judge snapped. “This is not a trial on whether or not artificial intelligence has damaged humanity,” she said. 

When did Musk think he was being duped?

As Savitt continued to cross-examine Musk, he pressed on the idea that Musk had never been committed to keeping OpenAI a nonprofit. He also claimed that Musk waited too long to sue OpenAI, filing after the statute of limitations ran out. 

Musk explained why he sued in 2024 rather than earlier, describing “three phases” in his views of OpenAI. In phase one, he was “enthusiastically supportive” of the company. In phase two, “I started to lose confidence that they were telling me the truth,” he said. In phase three, “I’m sure they’re looting the nonprofit.”

In 2017, Musk and other OpenAI cofounders discussed creating a for-profit subsidiary to raise enough capital to build artificial general intelligence—powerful AI that can compete with humans on most cognitive tasks. Musk wanted a majority interest in the subsidiary and the right to choose a majority of the board members. He also pitched having Tesla acquire OpenAI. (He left OpenAI in 2018.)

“I was not opposed to there being a small for-profit that provides funding to the nonprofit,” he told the jury, “as long as the tail didn’t wag the dog.” 

But it was only in late 2022, Musk testified, that he “lost trust in Altman” and his commitment to keeping the company a nonprofit. The key moment came, he said, when he learned that Microsoft would invest $10 billion in OpenAI. 

“I texted Sam Altman, ‘What the hell is going on? This is a bait and switch,’” he told the jury. Microsoft would give $10 billion only if it expected “a very big financial return,” he said.

Is Musk just trying to kill competition?

But Savitt argued that Musk was really suing to undermine OpenAI as a competitor to his empire of tech companies. While he was on the board of OpenAI, Musk was also running Tesla and his brain-implant company, Neuralink. He founded xAI in 2023.

Savitt pulled up an email that Musk had sent to a Tesla vice president in 2017 after hiring Andrej Karpathy, a founding member of OpenAI, to work at Tesla. “The OpenAI guys are gonna want to kill me. But it had to be done,” he wrote.

When asked about it, Musk was flustered. He claimed Karpathy had already decided to leave OpenAI when he recruited him to work at Tesla. “I believe it’s a free world,” he said.

Savitt pulled up another email that Musk sent to a cofounder at Neuralink in 2017. He wrote that they could “hire independently or directly from OpenAI.” When pressed about it, he sounded frazzled. “It’s a free country,” he said. “I can’t restrict their ability to hire people from other companies.” 

Savitt also pointed out that Tesla, SpaceX, Neuralink, and X were socially beneficial for-profit companies, like OpenAI. He stressed that xAI was also a closed-source, for-profit company.

But Musk claimed that xAI was not a real competitor to OpenAI. “We’re not currently tracking to reach AGI first,” he told the jury. 

In fact, Musk admitted that xAI uses OpenAI’s technology. In response to Savitt’s relentless questioning, he said xAI “partly” distills OpenAI’s models. Some people in the courtroom gasped. 

Distillation is a technique where a smaller AI model is trained to mimic the behavior of larger, more capable models, so it can run faster and more cheaply while performing nearly as well. But OpenAI and other AI companies have pushed back against the practice. In February, OpenAI accused the Chinese AI company DeepSeek of distilling its AI models. In August 2025, Wired reported that Anthropic had blocked OpenAI’s access to Claude for violating the company’s terms of service, which prohibit, among other things, reverse-engineering its services and building competing products. 
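For readers unfamiliar with the mechanics, here is a minimal, hypothetical sketch of classic knowledge distillation in PyTorch: a student model is trained to match a teacher’s softened output distribution. It illustrates the general technique only, not how xAI or OpenAI actually train their models; when the teacher is reachable only through an API, the closer analogue is fine-tuning on the teacher’s generated text, since raw logits are not exposed.

```python
# Minimal sketch of classic knowledge distillation, for illustration only.
# It is NOT a description of xAI's or OpenAI's pipelines.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student next-token distributions."""
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature**2

# Toy usage: a batch of 4 token positions over a 50,000-token vocabulary.
student_logits = torch.randn(4, 50_000, requires_grad=True)
teacher_logits = torch.randn(4, 50_000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```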

“It is standard practice to use other AIs to validate your AI,” argued Musk.

Next week, Stuart Russell, a computer scientist at UC Berkeley, will testify about AI safety. Brockman, who has been taking notes during Musk’s testimony, will also testify.

This story is part of MIT Technology Review’s ongoing coverage of the Musk v. Altman trial. Follow @techreview or @michelletomkim on X for up-to-the-minute reporting.

Cyber-Insecurity in the AI Era

2026-05-01 23:54:01

Cybersecurity was already under strain before AI entered the stack. Now, as AI expands the attack surface and adds new complexity, the limits of legacy approaches are becoming harder to ignore. This session from MIT Technology Review’s EmTech AI conference explores why security must be rethought with AI at its core, not layered on after the fact.


About the speaker


Tarique Mustafa, Cofounder, CEO, and CTO, GC Cybersecurity

Tarique Mustafa is Cofounder and CEO/CTO of two AI-powered cybersecurity companies: GCCybersecurity, Inc. and its data compliance spinout, Chorology, Inc. A prolific inventor and internationally recognized authority in knowledge representation, inference calculus, and AI planning, Tarique has spent his career applying autonomously collaborative AI to solve complex, ultra-high-scale challenges across cybersecurity, data security, and compliance — with deep expertise spanning Data Classification, DLP, and DSPM industries. His groundbreaking innovations and multiple USPTO patents have earned him global recognition, including frequent invitations to deliver keynote addresses at prestigious international security conferences and forums.

At GCCybersecurity, Tarique architected the core AI algorithms powering the company’s 4th and 5th generation fully autonomous data leak protection and exfiltration platform — among the most advanced of its kind. Prior to founding GCCybersecurity and Chorology, he served as founding CEO/CTO of NexTier Networks, a Silicon Valley provider of award-winning Data Leak Prevention solutions. With over 20 years of technical leadership experience, Tarique has held senior roles at Symantec, DHL Airways IT, MCI WorldCom, EDS, Andes Networks, and Nevis Networks, where he served as Principal Architect and built industry-leading security products leveraging next-generation security monitoring, event correlation, IDS/IPS, and SSL/IPSec technologies.

Tarique holds multiple approved and pending patents with the USPTO and has authored numerous research publications spanning Information & Data Security, Computer & Network Security, Software Architecture, Database Technologies, and Artificial Intelligence. A recipient of the prestigious Rotary International Scholarship for doctoral studies in Computer Science at the University of Southern California (USC), Tarique also holds master’s degrees in engineering and computer science from USC, and a bachelor’s degree in mechanical engineering from NED University of Engineering & Technology.

Operationalizing AI for Scale and Sovereignty

2026-05-01 23:31:09

Companies are taking control of their own data to tailor AI for their needs. The challenge lies in balancing ownership with the safe, trusted flow of high‑quality data needed to power reliable insights. This conversation from MIT Technology Review’s EmTech AI conference examines how AI factories unlock new levels of scale, sustainability, and governance—positioning data control as a strategic imperative for governments and enterprises.


About the speakers


Chris Davidson, Vice President, HPC & AI Customer Solutions, HPE

Chris Davidson is Vice President of HPC & AI Customer Solutions at Hewlett Packard Enterprise. He leads HPE’s global strategy for AI Factory solutions and Sovereign AI, working with governments, enterprises, and research institutions to build secure, scalable national- and enterprise-grade AI capabilities.

He also directs Product Management and Performance Engineering across HPE’s HPC and AI portfolio, including large-model training platforms and Cray exascale systems. His teams define product strategy, performance architecture, and deployment models that position HPE at the forefront of high-performance and AI computing.

During his nine years at HPE, Chris has led key initiatives across Performance Engineering, AI Cloud, and Professional Services, shaping how HPE delivers optimized, cloud-native, and globally deployed high-performance systems. He previously held technical and leadership roles in the biotech and medical diagnostics sectors.

Chris holds an M.B.A. in Entrepreneurship and Finance and a B.S. in Biology from Loyola University Chicago.


Arjun Shankar, Division Director, National Center for Computational Science, Oak Ridge National Laboratory

Mallikarjun (Arjun) Shankar is the Division Director for the National Center for Computational Science at the Oak Ridge National Laboratory. His research focuses on the interdisciplinary bridge between computer science and large-scale scientific discovery campaigns that rely on scalable computing and data science. He is a joint faculty appointee at the University of Tennessee’s Bredesen Center, a senior member of the IEEE and a senior member of the ACM.

The Download: a new Christian phone network, and debugging LLMs

2026-05-01 20:10:00

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A new US phone network for Christians aims to block porn and gender-related content

A new US-wide cell phone network marketed to Christians is set to launch next week. It blocks porn using network-level controls that can’t be turned off—even by adult account owners.

It’s also rolling out a filter on sexual content aimed at blocking material related to gender and trans issues, optional but turned on by default across all plans.

The trouble is, many websites don’t fit neatly into one category. That leaves its maverick founder with broad, subjective control over what is allowed or banned. Read the full story.

—James O’Donnell

This startup’s new mechanistic interpretability tool lets you debug LLMs

The San Francisco–based startup Goodfire has released a new tool, Silico, that lets researchers peer inside an AI model and adjust its parameters during training. It could give users more control over how this technology is built than was once thought possible.

The goal is to make building AI models less like alchemy and more like a science. Using a technique called mechanistic interpretability, Silico maps the neurons and pathways inside a model and lets developers tweak them to reduce unwanted behaviors or steer outputs.
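To make the idea concrete, here is a rough, hypothetical sketch of activation steering, one kind of intervention that mechanistic interpretability enables: a forward hook adds a feature direction to one layer’s hidden states in a small open model to nudge its output. This is not Goodfire’s Silico interface; the model (“gpt2”), the layer index, and the random steering vector are assumptions chosen only to keep the example self-contained.

```python
# Hypothetical sketch of activation steering; NOT Goodfire's Silico API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # small open model, for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")

layer_idx = 6                                          # assumption: a mid-network block
steering_vector = torch.randn(model.config.n_embd) * 0.1  # stand-in for a learned feature direction

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    return (output[0] + steering_vector,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
ids = tok("The model's behavior can be nudged", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()  # remove the hook to restore the original model
```

In a real interpretability workflow the steering direction would correspond to a specific learned feature rather than random noise; the point here is only that a model’s internal activations are directly readable and writable.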

By exposing the “knobs and dials,” Goodfire hopes to bring AI training closer to traditional software engineering. Read the full story.

—Will Douglas Heaven

With mass firing, Trump deals a fresh blow to American science

This past week delivered another gut punch for science in the US. This time, the target was the National Science Foundation—a federal agency that funds major research projects to the tune of around $9 billion. On Friday, the 22 scientists overseeing those efforts were all fired.

Since 2025, the NSF has faced budget cuts, grant terminations, and mass firings, with staff numbers down sharply and many ambitious projects grinding to a halt. The result is a major shift in how American science is funded and governed. Discover what it means, and what’s next.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

China’s open-source bet: 10 Things That Matter in AI Right Now

Silicon Valley AI companies follow a familiar playbook: keep the models behind an API and charge for access. China’s leading AI labs are playing a different game, releasing “open-weight” models that developers can download, adapt, and run on their own hardware.

That approach went mainstream after DeepSeek open-sourced its R1 model, which matched top US systems at a fraction of the cost. It also won something subtler: goodwill with developers. A growing cohort of Chinese labs is now following the same blueprint.

As AI shifts from hype to deployment, open-source models are making the future of AI more multipolar than Silicon Valley expected. Read the full story.

—Caiwei Chen

China’s open-source bet is one of the 10 Things That Matter in AI Right Now, our list of the biggest ideas, trends, and advances in AI today. We’re unpacking one item from the list each day here in The Download, so stay tuned.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk has admitted that xAI trained Grok on OpenAI models
“Distillation” is standard practice in AI, despite being legally dubious. (Wired $)
+ The White House has accused Chinese firms using distillation of theft. (BBC)
+ American labs are widely assumed to use similar techniques. (TechCrunch)

2 A “de-extinction” startup wants to resurrect a long-lost antelope
Colossal Biosciences wants to bring back the bluebuck. (Axios)
+ The company is using genomic editing to revive the animal. (Gizmodo)
+ It previously claimed to have cloned red wolves. (MIT Technology Review)

3 An OpenAI model outperformed ER doctors at diagnosing patients
By analyzing health records data and information provided to physicians. (NPR)
+ But it still must be proven in real-world clinical trials. (Vox)

4 Scientists are trying to power AI data centers with tiny nuclear reactors
They could provide a new way to meet AI’s energy demands. (Gizmodo)
+ We did the math on AI’s energy footprint. (MIT Technology Review)

5 Spotify has started verifying human artists
A new badge will distinguish them from AI. (The Guardian)
+ Spotify has faced criticism for its handling of AI. (BBC)

6 The US is backing a Congolese railway to break China’s grip on critical minerals
The old railroad is key to the race for critical metals in Africa. (Rest of World)
+ The US is also searching for alternative sources. (MIT Technology Review)

7 Huawei is set to overtake Nvidia in China’s AI chip market
It’s expected to capture the largest market share this year. (FT $)

8 Japan is building cardboard drones for the battlefield
The flatpack designs are cheap, disposable, and built at scale. (404 Media)

9 The more young people use AI, the more they hate it
Research shows that Gen Z doesn’t trust GenAI. (The Verge)

10 A new organoid can menstruate—and show how tissue repairs itself
It’s revealing how the uterus can shed without scarring. (Nature)

Quote of the day

“I suspect that there are a number of people who do not want to put the future of humanity in Mr Musk’s hands. But we’re not going to get into that.”

—Judge Gonzalez Rogers rebukes attempts by Elon Musk’s lawyer to focus on AI’s existential risks as part of his lawsuit against OpenAI, the New York Times reports. 

One More Thing

An aerial view of the Mountain Pass rare earth mine and processing facility. (TMY350 via Wikimedia Commons)


This rare earth metal shows us the future of our planet’s resources

The materials we need to power our world are shifting from fossil fuels to energy sources that don’t produce greenhouse gas emissions.

Take neodymium, a rare earth metal used in powerful magnets that power everything from smartphones to wind turbines. Its story reveals many of the challenges we’ll likely face across the supply chain in the coming century and beyond.

The question isn’t whether we’ll run out, but how we extract, process, use, and recycle these materials as technology keeps changing. Find out what it reveals about the future of our planet’s resources.


—Casey Crownhart

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Here’s a fascinating visual history of exploring the dark side of the Moon.
+ This interactive map lets you compare the actual dimensions of our world.
+ These five tiny homes are proof you don’t need a massive footprint to live with style.
+ Explore the history of “Control Room Green” and why it was the default choice for the Cold War’s highest-stakes environments.

Top image credit: Stephanie Arnett/MIT Technology Review | Adobe Stock

Inexpensive seafloor-hopping submersibles could stoke deep-sea science—and mining

2026-05-01 18:00:00

Smack dab between Australia and South America, the US National Oceanic and Atmospheric Administration (NOAA) research vessel Rainier is currently on a mission to map more than 8,000 square nautical miles of the Pacific seafloor in search of critical mineral deposits. But it isn’t doing it alone; for a month starting this week, it will deploy two oblong neon submersibles as the project’s special agents, sending them nearly 6,000 meters down to hop along the seafloor. 

The submersibles, built by the young company Orpheus Ocean, are designed to explore just this environment: a squelchy substrate that teems with life of all kinds, from tiny microbes to worms and snails, along with egg-size “nodules” of metals—such as copper, cobalt, nickel, and manganese—that are crucial for technologies worldwide.  

Scientists and companies have long sought to probe the deep sea and bring such treasures to the surface. Orpheus, which spun off from the Woods Hole Oceanographic Institution (WHOI) in 2024, could be well positioned to make those possibilities a lot more economical. The company has designed its vehicles on a simple philosophy: “deep for cheap,” says Jake Russell, Orpheus’s cofounder and CEO, who is a chemist by training. The vehicles cost a couple of hundred thousand dollars each to build, whereas existing options can range from $5 million to $10 million. And unlike most autonomous ocean vehicles, they can push into the seafloor and capture cores of sediment—and the creatures within. 

Orpheus’s engineers have been tinkering with their deep-sea designs for years, much of the work taking place at WHOI and in collaboration with NOAA and the National Aeronautics and Space Administration. Its prototype vehicles were rated capable of diving to 11,000 meters—the deepest part of the Mariana Trench. They’ve completed two commercial deployments, but this new expedition marks the submersibles’ biggest test yet: operating over large ranges for multiple weeks and with multiple instruments at play. Using Rainier as their home base on the ocean’s surface, the vehicles will swim out for 10 kilometers at a time, taking one high-resolution image every second and up to eight physical samples from the seafloor apiece.

If all goes well, the test could help establish the vehicles as a tool for government agencies, scientists, and companies that hope to probe the vastly understudied deep sea and the resources it holds. And while they’re not the only option on the market, Orpheus hopes their size and low building cost will soon make them one of the most accessible. 

At present, to reach these depths scientists must wait for time on a limited and expensive set of submersibles owned by government agencies and research institutes. That formula lends itself better to capturing snapshots of the deep sea than it does to probing its interconnected ecological and biogeochemical systems. “A lot of this region that we’re surveying … has really never been explored in any kind of detail,” says Russell. “Anything we see is going to be new to NOAA and new to science.”

A sediment specialist

The Orpheus subs are classified as autonomous underwater vehicles (AUVs), which operate on a mix of preprogrammed commands and live decision-making and without being tethered to a ship. But unlike traditional AUVs engineered for long-distance, high-speed gliding, these submersibles are short and stout with little legs—better for making soft landings on the seafloor and then pushing into the mud to suck out sediment cores for scientists. When they do land, the submersibles can lift off the surface, thrust a few feet, and settle once more in a “hopping” fashion.

Their bodies are made mostly of a buoyant material known as syntactic foam, with the important electronics encased in a thick sphere of glass. The same kind of foam, which is interspersed with hollow microspheres of glass to prevent it from collapsing under high pressures, went to the deep in the vehicle that carried the filmmaker James Cameron to the Mariana Trench in 2012; he even donated leftover material for use in earlier Orpheus prototypes. 

At less than two meters in length and under 600 pounds (270 kilograms), the Orpheus robots are, Russell says, the smallest—and correspondingly the least expensive—ocean vehicles on the market capable of descending to 6,000 meters. They’re designed to populate future fleets of robotic explorers.

The approach stems from a fundamental challenge, says Victoria Orphan, a geobiologist at the California Institute of Technology, who has previously worked with an Orpheus vehicle on a science campaign: “Anytime you do things in the deep ocean, you always run this risk, when you put something over the side [of a ship], that it might not come back.” With existing fleets of large, expensive vessels operated by groups like NOAA, WHOI, and the Monterey Bay Aquarium Research Institute (MBARI), losing a vehicle can be disastrous, not least because scientists must already compete for their limited time.

In the spring of 2024, Orphan and her colleagues put an Orpheus sub through its paces during an expedition to study deep-sea methane seeps off the coast of Alaska’s Aleutian Islands. They hoped to use the vehicle to create maps of the area before the team sent down a human-crewed submersible called Alvin to study specific areas—and the microorganisms and animals that live there—in more detail. 

But as with any new technology, “there’s always growing pains,” recalls Orphan. Frigid temperatures and steep topography added unforeseen challenges, and it took the full three weeks for the sub to get high-resolution photographs of the seeps.

The setback didn’t dull Orphan’s excitement about the potential of these machines. “There’s a lot of real, unknown science right at that interface between the sediment and the ocean surface,” she says. “The Orpheus-type class of instrument, with the right kinds of sensors and samplers, could be a very enabling tool.”

Russell envisions pairing the vehicles with specially designed payloads that can sense the heat of chemical seeps and detect plumes of sediment, DNA shed from ocean life-forms, or the magnetic tug of buried cables. 

The vehicles are the “best of both worlds,” says Andrew Sweetman, a deep-sea ecologist at the Scottish Association for Marine Science, who has not worked with Orpheus. While they can roam large areas like an AUV, they can also carry out precise sampling maneuvers like a remotely operated vehicle (ROV), a robot connected to a ship via cables that fulfills real-time human commands.

In addition to the low price tag, says Sweetman, the small size of the vessels means they don’t require a large research vessel to ferry them out to sea. That might make exploration more accessible for smaller or poorer countries without such ships, he says: “It will, in a way, help democratize deep-sea science.” He imagines using the sediment cores the submersibles gather to probe how seafloor-dwelling animals cycle nutrients—a crucial element of the ocean’s role as a carbon sink. 

The mining push 

As much as smaller, cheaper ocean vehicles have caught scientists’ eye, they have also piqued the interest of companies. Russell says inquiries come in weekly from businesses involved in deep-sea mining, defense, offshore wind, telecommunication, and oil and gas. He notes that Orpheus is merely a “service provider,” helping collect data where needed but not making decisions about how to use the seafloor. And he says that better data—such as information on the shape of the seafloor, the sediment quality, and the presence of life—also “raises the bars” that governments and regulators are only beginning to set.

But many scientists are far from enthusiastic about the growing push for seabed mining, which an executive order from President Donald Trump stoked further last week by mandating that the US government rapidly develop mineral exploration and processing. And earlier last month, the administration announced the creation of a new government office: the Marine Minerals Administration.

A view of an Orpheus vehicle from below, with flare from its two lower lights. (Orpheus Ocean)

Given the current dearth of information on the deep sea, says Sweetman, “I think the push for deep-sea mining is happening way too fast.” And deep-sea communities are “probably the most stable environment on our planet,” adds Orphan. “The organisms that live there are really not adapted to a lot of disturbance, and it takes a really, really long time for them to recover, if at all.”

One mining method that governments and companies propose involves a machine that essentially operates like a giant bulldozer, trawling the seafloor, sucking up a trail of material, and leaving scar marks and sediment plumes in its wake. Brett Hobson, an ocean engineer at MBARI, says that Orpheus-like technology might enable companies to “take samples in a more surgical way, instead of just grossly scooping everything up off the seafloor and filtering through it.”

Hobson, who has run MBARI’s work on ocean vehicles for decades, also notes that Orpheus submersibles won’t be the only option available. Companies and government agencies—including those in Norway, France, Japan, China, and the UK—are developing similar deep-sea vehicles, he says: “What we really need [as] a society is just more of these systems out there.” 

As Orpheus’s neon vehicles plunge into the Pacific over the next few weeks, their readiness for future scientific and resource surveys should become clearer. Each time they dive, they will get a little bit more data—“just the smallest of postage stamps of our planet,” says Orphan. “There’s still so much to learn.”

A new US phone network for Christians aims to block porn and gender-related content

2026-05-01 17:00:00

A new US-wide cell phone network marketed to Christians is set to launch next week. It blocks porn using network-level controls that can’t be turned off, even by adult account owners, which experts in network security say is a first for a US cell plan. It’s also rolling out a filter on sexual content aimed at blocking material related to gender and trans issues, which will be optional but turned on by default across all plans.

The network, which is currently being tested ahead of its May 5 launch date, will be run by Radiant Mobile, a newly launched mobile virtual network operator (MVNO). These operators don’t own cell towers but buy bandwidth from the big providers (in this case, T-Mobile) and sell to specific demographics (President Trump announced his own MVNO, Trump Mobile, last year; CREDO Mobile sends donations to progressive causes).

“We are going to create—and we think we have every right to do so—an environment that is Jesus-centric, that is void of pornography, void of LGBT, void of trans,” Radiant Mobile’s founder, Paul Fisher, told MIT Technology Review. A representative for T-Mobile did not comment on whether these content blocks violate any of its policies. In a statement, the representative added that T-Mobile does not have a direct relationship with Radiant Mobile but instead works through the MVNO manager CompaxDigital. 

Fisher says he’s recruited a mix of Christian influencers to advertise the plan and has also done outreach to thousands of churches around the country, offering a way to have Radiant donate a portion of congregants’ $30-per-month subscription fee to their church. Fisher has ambitions to market it beyond the US in other countries with significant Christian populations, like South Korea and Mexico.

At least one piece of Radiant’s pitch will sound familiar: the idea that the internet is awash in toxic sludge. It’s powered by content and algorithms that are making us more sad, hateful, and detached. A number of efforts aim to fix that, including contentious age verification laws and a coming wave of lawsuits alleging that social media companies knowingly got young users hooked on their platforms. 

Fisher is pursuing the nuclear option. He says Radiant is working with the Israeli cybersecurity company Allot to block categories of content, such as material about violence or self-harm. Some categories are blocked by default and cannot be unblocked, even by adult users.

This includes pornography. Chris Klimis, a minister in Orlando who was recruited to be the company’s chief operating officer, says part of the reason he got involved was to offer Christians a real way to “do something” about what he sees as a pornography crisis in the faith. He was appalled by a recent survey showing that 67% of pastors have a “personal history” with porn use. And he worries his six children will come across porn on their devices, even if only inadvertently.

“We’ve got to figure out some way to close the door to the digital space,” he says. “That’s what we’re trying to do.”

The technology to do this blocking is a blunt instrument: Allot groups website domains into more than a hundred categories, which include pornography but also violence, malware, gaming, and in Radiant Mobile’s case “sects,” which includes websites about Satanism. If one of its users tries to visit a website that belongs to a blocked category, the page won’t load. That’s harsher than app-based content blockers like Covenant Eyes, a Christian porn-quitting app that sends notifications to your friends or family if you slip up; those can be worked around or deleted.
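The mechanics can be pictured with a small, hypothetical sketch; none of the domains, categories, or policies below reflect Allot’s or Radiant Mobile’s actual lists. Each requested hostname is matched against a category database, most specific label first, and the request is dropped if its category falls in the account’s blocked set.

```python
# Hypothetical sketch of network-level category blocking; not Allot's or
# Radiant Mobile's implementation. Domains, categories, and policies are made up.
CATEGORY_DB = {
    "adult.example.com": "pornography",
    "news.example.org": "press",
    "forum.example.org": "sexuality",   # a subdomain can carry its own label
    "example.org": "education",
}

ALWAYS_BLOCKED = {"pornography"}          # assumption: cannot be unblocked on any plan
DEFAULT_BLOCKED = {"sexuality", "sects"}  # assumption: on by default, adults may opt out

def lookup_category(hostname: str) -> str:
    """Walk from the full hostname up toward the registered domain, most specific first."""
    parts = hostname.lower().split(".")
    for i in range(len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in CATEGORY_DB:
            return CATEGORY_DB[candidate]
    return "uncategorized"

def is_blocked(hostname: str, adult_opt_outs: set[str]) -> bool:
    blocked = ALWAYS_BLOCKED | (DEFAULT_BLOCKED - adult_opt_outs)
    return lookup_category(hostname) in blocked

print(is_blocked("www.example.org", adult_opt_outs=set()))            # False: "education"
print(is_blocked("forum.example.org", adult_opt_outs=set()))          # True: default-on category
print(is_blocked("forum.example.org", adult_opt_outs={"sexuality"}))  # False: adult opted out
```

A real deployment would resolve categories inside the carrier’s network (for example at the DNS or traffic-inspection layer) rather than on the handset, which is what makes the blocks impossible to work around or delete from the phone itself.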

“Blocking in the network is certainly not new,” says David Choffnes, a computer science professor and executive director of Northeastern University’s Cybersecurity and Privacy Institute. Such blocking is the backbone of censorship efforts by authoritarian governments, for example. But there are more benign ways it’s used too. US telecoms block particular domains known to be spreading malware and offer optional network-level controls to block adult content on kids’ phones. What is new is a US cell plan instituting network-level blocks that can’t be removed, even by adults.

The trouble is that most websites don’t fit neatly into one category, leaving Fisher with enormous and subjective control over which are allowed or banned. This is most apparent in his effort to block content related to gender identity.

Anthony Re, a sales director at Allot, says the company does not have a category specific to gender but that “LGBT content” tends to fall into its sexuality category, which is described on Radiant Mobile’s website as “sites that provide information on sex, sex and teenagers, and sexual education, without pornographic content.” This category is blocked by default for all phones, a setting that can be changed by adult account owners. 

But if a news site starts hosting enough gender-related content, Fisher might not just label it as “press,” which is allowed, but also “sexuality,” thus blocking the whole domain to any phone with that category blocked. 

Fisher illustrates the subjectivity of such decisions with a recent example involving Yale University. Its general website, www.yale.edu, is categorized by Allot as education. “But they have a subsection of one of their websites that’s totally focused on, you know, trans equality,” Fisher says, referring to lgbtq.yale.edu. Because it’s a distinct domain, Radiant Mobile is able to place it in the sexuality category and block it. 

Yale’s main website remains unblocked, for now. “If we see [the LGBTQ content] on the front pages consistently of Yale University, we’ll block them too,” Fisher says.

Managing website block lists is a professional pivot for Fisher, who spent his career not in telecoms but in fashion; he was an agent for supermodels like Naomi Campbell and members of the Hilton and Getty families, and he later hosted a reality show in which he found people in rehab facilities and homeless shelters and tried to turn them into models. He ultimately left the industry and now says he regrets the role he played in it: “Am I proud that I spent 35 years creating star models or star influencers? Not at all.”

Last year, his friend and fellow fashion mogul Bernt Ullmann suggested he look at what Ryan Reynolds had built with his cell network Mint Mobile: It made buying a cell plan feel less like dealing with a utility and more like choosing a brand, and it had been acquired by T-Mobile in 2023 for $1.3 billion. Fisher liked the business model but didn’t have an audience in mind. Then came a late-night revelation. “God is talking to me,” Fisher recalls. “Do something in the faith-based industry.” He set out to build the first cell network that would let in only content deemed compatible with Christianity.

Fisher says the company has received $17.5 million in investment from Compax Ventures, part of the company serving as the technical middleman between Radiant and T-Mobile. Roger Bringmann, a vice president at Nvidia, is Radiant Mobile’s lead investor and silent partner (Bringmann recently funded a new complex at Austin Christian University in Texas, which bills itself as “the university for Christian entrepreneurs”).

To fill the gap left by all the sites being blocked, the company intends to offer access to a library of religious content, including AI-generated Bible videos. It plans to use characters like Cinderella, Tinker Bell, and others (it has obtained rights from the entertainment and media company Elf Labs, which has been amassing rights to hundreds of children’s characters). “Those characters were originally constructed with a conservative perspective,” Klimis says. They’ll be used in AI-generated content alongside testimonials and devotionals. 

Choffnes has technical doubts that the plan’s firewall will be as effective as promised, not least because “it’s really hard to come up with a list of every website you think is problematic.” But beyond that, he sees the internet, frustrating as it can be, as better open than closed. “I do believe in an open internet,” he says. “I also believe that a lot of the internet is toxic, but I don’t believe that this sledgehammer approach of blocking content is the right answer.”