404 Media

A journalist-founded digital media company exploring the ways technology is shaping–and is shaped by–our world.

Behind the Blog: Hearing AI Voices and 'Undervolting'

2025-12-06 01:53:23


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss PC woes, voice deepfakes, and mutual aid.

JOSEPH: Today I’m speaking at the Digital Vulnerabilities in the Age of AI Summit (DIVAS) (good name) on a panel about the financial risks of AI. The way I see it, that applies to the scams that are being powered by AI.

As soon as a new technology is launched, I typically think of ways it might be abused. Sometimes I cover this, sometimes not, but the thought always crosses my mind. One example that did lead to coverage was back at Motherboard in 2023 with an article called How I Broke Into a Bank Account With an AI-Generated Voice.

At the time, ElevenLabs had just launched. The company focuses on AI-generated audio and voice cloning. Basically, you upload audio (originally that could be of anyone, before ElevenLabs introduced some guardrails) and the tool then lets you ‘say’ anything in that voice. I spoke to voice actors at the time who were obviously very concerned.

DHS’s Immigrant-Hunting App Removed from Google Play Store

2025-12-06 01:05:47


A Customs and Border Protection (CBP) app that lets local cops use facial recognition to hunt immigrants on behalf of the federal government has been removed from the Google Play Store, 404 Media has learned.

It is unclear whether the removal is temporary, or what the exact reason for it is. Google told 404 Media it did not remove the app, and directed inquiries to its developer. CBP did not immediately respond to a request for comment.

Its removal comes after 404 Media documented multiple instances of CBP and ICE officials using their own facial recognition app to identify people and verify their immigration status, including people who said they were U.S. citizens.

💡
Do you know anything else about this removal or this app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

The removal also comes after “hundreds” of Google employees took issue with the app, according to a source with knowledge of the situation.

Kohler's Smart Toilet Camera Not Actually End-to-End Encrypted

2025-12-05 04:47:37


Home goods company Kohler would like to take a bold look in your toilet and snap some photos. It’s OK, though, the company has promised that all the data it collects on your “waste” will be “end-to-end encrypted.” However, a deeper look into the company’s claim by technologist Simon Fondrie-Teitler revealed that Kohler seems to have no idea what E2EE actually means. According to Fondrie-Teitler’s write-up, which was first reported by TechCrunch, the company will have access to the photos the camera takes and may even use them to train AI.

The whole fiasco gives an entirely too on-the-nose meaning to the “Internet of Shit.”

Kohler launched its $600 camera to hang on your toilet earlier this year. It’s called Dekoda, and along with the large price tag, the toilet cam also requires a monthly service fee that starts at $6.99. If you want to track the piss and shit of a family of six, you’ll have to pay $12.99 a month.

What do you get for putting a camera on your toilet? According to Kohler’s pitch, “health & wellness insights” about your gut health and “possible signs of blood in the bowl” as “Dekoda uses advanced sensors to passively analyze your waste in the background.”

If you’re squeamish about sending pictures of your family’s “waste” to Kohler, the company promised that all of the data is “end-to-end encrypted.” The privacy page for Kohler Health said “user data is encrypted end to end, at rest and in transit,” and the claim is repeated in several places in the marketing.

It’s not, though. Fondrie-Teitler told 404 Media he started looking into Dekoda after he noticed friends making fun of it in a Slack he’s part of. “I saw the ‘end-to-end encryption’ claim on the homepage, which seemed at odds with what they said they were collecting in the privacy policy,” he said. “Pretty much every other company I've seen implement end-to-end encryption has published a whitepaper alongside it. Which makes sense, the details really matter so telling people what you've done is important to build trust. Plus it's generally a bunch of work so companies want to brag about it. I couldn't find any more details though.”

E2EE has a specific meaning. It’s an encryption scheme that keeps the contents of a message private in transit, such that only the person sending and the person receiving a message can view it. Famously, E2EE means that the company running the service itself cannot decode or see the messages (Signal, for example, is E2EE). The point is to protect individual users’ privacy: the company cannot pry into the data, even if a third party, like the government, comes asking for it.
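
To make that distinction concrete, here’s a minimal sketch of the end-to-end property in Python, using the PyNaCl library; the keys and message are purely illustrative, not anything from Signal’s or Kohler’s actual systems. The thing to notice is that a server sitting between the two endpoints never holds a key that can open the message.

```python
# A minimal sketch of the end-to-end property using the PyNaCl library
# (pip install pynacl). All names and data here are illustrative; this is
# not Kohler's or Signal's actual implementation.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts directly to the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"photo bytes")

# A server relaying this ciphertext holds neither private key, so it cannot
# read the contents. That inability is what makes a scheme "end-to-end."
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"photo bytes"
```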

Kohler, it’s clear, has access to a user’s data. This means it’s not E2EE. Fondrie-Teitler told 404 Media that he downloaded the Kohler Health app and analyzed the network traffic it sent. “I didn't see anything that would indicate an end-to-end encrypted connection being created,” he said.
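
As for how a check like that can be done: one common approach, sketched generically here rather than as Fondrie-Teitler’s actual tooling, is to route the phone’s traffic through an intercepting proxy such as mitmproxy and watch what the app sends. A minimal addon script, assuming the test device has been set up to trust the proxy’s CA certificate, might look like this:

```python
# Minimal mitmproxy addon for inspecting an app's traffic.
# Run with: mitmdump -s log_flows.py
# Generic illustration only; this says nothing specific about the
# Kohler Health app's endpoints or payloads.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Log each request the app makes and the size of the response.
    print(flow.request.method, flow.request.pretty_url)
    print("  response bytes:", len(flow.response.content or b""))
    # If the app were truly end-to-end encrypted, request bodies would
    # still look like opaque ciphertext here; with plain TLS, a trusted
    # proxy like this one sees readable plaintext.
```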

Then he reached out to Kohler and had a conversation with its privacy team via email. “The Kohler Health app itself does not share data between users. Data is only shared between the user and Kohler Health,” a member of the privacy team at Kohler told Fondrie-Teitler in an email reviewed by 404 Media. “User data is encrypted at rest, when it’s stored on the user's mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user's devices and our systems, where it is decrypted and processed to provide our service.”

If Kohler can view the user’s data, as it admits to doing in this email exchange with Fondrie-Teitler, then it’s not—by definition—using E2EE.

"The term end-to-end encryption is often used in the context of products that enable a user (sender) to communicate with another user (recipient), such as a messaging application. Kohler Health is not a messaging application. In this case, we used the term with respect to the encryption of data between our users (sender) and Kohler Health (recipient)," Kohler Health told 404 Media in a statement.

"Privacy and security are foundational to Kohler Health because we know health data is deeply personal. We’re evaluating all feedback to clarify anything that may be causing confusion," it added.

“I'd like the term ‘end-to-end encryption’ to not get watered down to just meaning ‘uses https’ so I wanted to see if I could confirm what it was actually doing and let people know,” Fondrie-Teitler told 404 Media. He pointed out that Zoom once made a similar claim and had to settle with the FTC because of it.

“I think everyone has a right to privacy, and in order for that to be realized people need to have an understanding of what's happening with their data,” Fondrie-Teitler said. “It's already so hard for non-technical individuals (and even tech experts) to evaluate the privacy and security of the software and devices they're using. E2EE doesn't guarantee privacy or security, but it's a non-trivial positive signal and losing that will only make it harder for people to maintain control over their data.”

UPDATE: 12/4/2025: This story has been updated to add a statement from Kohler Health.

Scientists Are Increasingly Worried AI Will Sway Elections

2025-12-05 03:00:28


Scientists are raising alarms about the potential influence of artificial intelligence on elections, according to a spate of new studies warning that AI can rig polls and manipulate public opinion.

In a study published in Nature on Thursday, scientists report that AI chatbots can meaningfully sway people toward a particular candidate, proving more effective than video or television ads. Moreover, chatbots optimized for political persuasion “may increasingly deploy misleading or false information,” according to a separate study published on Thursday in Science.

“The general public has lots of concern around AI and election interference, but among political scientists there’s a sense that it’s really hard to change people’s opinions,” said David Rand, a professor of information science, marketing, and psychology at Cornell University and an author of both studies. “We wanted to see how much of a risk it really is.”

In the Nature study, Rand and his colleagues enlisted 2,306 U.S. citizens to converse with an AI chatbot in late August and early September 2024. The AI model was tasked both with increasing support for an assigned candidate (Harris or Trump) and with increasing the odds that a participant who initially favored the model’s candidate would vote, or decreasing those odds if the participant initially favored the opposing candidate; in other words, voter suppression.

In the U.S. experiment, the pro-Harris AI model moved likely Trump voters 3.9 points toward Harris, a shift four times larger than the impact of traditional video ads used in the 2016 and 2020 elections. Meanwhile, the pro-Trump AI model nudged likely Harris voters 1.51 points toward Trump.

The researchers ran similar experiments involving 1,530 Canadians and 2,118 Poles during the lead-up to their national elections in 2025. In the Canadian experiment, AIs advocated either for Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. Meanwhile, the Polish AI bots advocated for either Rafał Trzaskowski, the centrist-liberal Civic Coalition’s candidate, or Karol Nawrocki, the right-wing Law and Justice party’s candidate.

The Canadian and Polish bots were even more persuasive than those in the U.S. experiment: they shifted candidate preferences by as much as 10 percentage points in many cases, roughly three times the shift seen among American participants. It’s hard to pinpoint exactly why the models were so much more persuasive to Canadians and Poles, but one significant factor could be the intense media coverage and extended campaign duration in the United States relative to the other nations.

“In the U.S., the candidates are very well-known,” Rand said. “They've both been around for a long time. The U.S. media environment also really saturates people with information about the candidates in the campaign, whereas things are quite different in Canada, where the campaign doesn't even start until shortly before the election.”

“One of the key findings across both papers is that it seems like the primary way the models are changing people's minds is by making factual claims and arguments,” he added. “The more arguments and evidence that you've heard beforehand, the less responsive you're going to be to the new evidence.”

While the models were most persuasive when they provided fact-based arguments, they didn’t always present factual information. Across all three nations, the bot advocating for the right-leaning candidates made more inaccurate claims than those boosting the left-leaning candidates. Right-leaning laypeople and party elites tend to share more inaccurate information online than their peers on the left, so this asymmetry likely reflects the internet-sourced training data. 

“Given that the models are trained essentially on the internet, if there are many more inaccurate, right-leaning claims than left-leaning claims on the internet, then it makes sense that from the training data, the models would sop up that same kind of bias,” Rand said.

With the Science study, Rand and his colleagues aimed to drill down into the exact mechanisms that make AI bots persuasive. To that end, the team tasked 19 large language models (LLMs) with swaying nearly 77,000 U.K. participants on 707 political issues.

The results showed that the most effective persuasion tactic was to provide arguments packed with as many facts as possible, corroborating the findings of the Nature study. However, there was a serious tradeoff to this approach, as models tended to start hallucinating and making up facts the more they were pressed for information.

“It is not the case that misleading information is more persuasive,” Rand said. “I think that what's happening is that as you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out of accurate facts. But you're still pushing it to make more factual claims, so then it starts grasping at straws and making up stuff that's not accurate.”

In addition to these two new studies, research published in Proceedings of the National Academy of Sciences last month found that AI bots can now corrupt public opinion data by responding to surveys at scale. Sean Westwood, associate professor of government at Dartmouth College and director of the Polarization Research Lab, created an AI agent that passed as human in 99.8 percent of 6,000 attempts to detect automated survey responses.

“Critically, the agent can be instructed to maliciously alter polling outcomes, demonstrating an overt vector for information warfare,” Westwood warned in the study. “These findings reveal a critical vulnerability in our data infrastructure, rendering most current detection methods obsolete and posing a potential existential threat to unsupervised online research.”

Taken together, these findings suggest that AI could influence future elections in a number of ways, from manipulating survey data to persuading voters to switch their candidate preference—possibly with misleading or false information. 

To counter the impact of AI on elections, Rand suggested that campaign finance laws should require more transparency about the use of AI, including canvasser bots, and he also emphasized the importance of raising public awareness.

“One of the key take-homes is that when you are engaging with a model, you need to be cognizant of the motives of the person that prompted the model, that created the model, and how that bleeds into what the model is doing,” he said.


‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

2025-12-04 23:11:34


During a presentation at the International Atomic Energy Agency’s (IAEA) International Symposium on Artificial Intelligence on December 3, a US Department of Energy scientist laid out a grand vision of the future where nuclear energy powers artificial intelligence and artificial intelligence shapes nuclear energy in “a virtuous cycle of peaceful nuclear deployment.”

“The goal is simple: to double the productivity and impact of American science and engineering within a decade,” Rian Bahran, DOE Deputy Assistant Secretary for Nuclear Reactors, said.

His presentation and others during the symposium, held in Vienna, Austria, described a world where nuclear-powered AI designs, builds, and even runs the nuclear power plants it needs to sustain itself. But experts find these claims, made by one of the top nuclear scientists working for the Trump administration, concerning and potentially dangerous.

Tech companies are using artificial intelligence to speed up the construction of new nuclear power plants in the United States. But few know the lengths to which the Trump administration is going to pave the way, or the part it's playing in deregulating a highly regulated industry to ensure that AI data centers have the energy they need to shape the future of America and the world.

At the IAEA, scientists, nuclear energy experts, and lobbyists discussed what that future might look like. To say the nuclear people are bullish on AI is an understatement. “I call this not just a partnership but a structural alliance. Atoms for algorithms. Artificial intelligence is not just powered by nuclear energy. It’s also improving it because this is a two way street,” IAEA Director General Rafael Mariano Grossi said in his opening remarks.

In his talk, Bahran explained that the DOE has partnered with private industry to invest $1 trillion to “build what will be an integrated platform that connects the world’s best supercomputers, AI systems, quantum systems, advanced scientific instruments, the singular scientific data sets at the National Laboratories—including the expertise of 40,000 scientists and engineers—in one platform.”

Image via the IAEA.

Big tech has had an unprecedented run of cultural, economic, and technological dominance, expanding into a bubble that seems close to bursting. For more than 20 years, new billion-dollar companies appeared seemingly overnight and offered people new and exciting ways of communicating. Now Google search is broken, AI is melting human knowledge, and people have stopped buying a new smartphone every year. To keep the number going up and ensure its cultural dominance, tech (and the US government) are betting big on AI.

The problem is that AI requires massive datacenters to run, and those datacenters need an incredible amount of energy. To solve the problem, the US is rushing to build out new nuclear reactors. Building a new power plant safely is a multi-year process that requires an incredible level of human oversight. It’s also expensive. Not every new nuclear reactor project gets finished, and they often run over budget and drag on for years.

But AI needs power now, not tomorrow and certainly not a decade from now.

According to Bahran, the problem of AI advancement outpacing the availability of datacenters is an opportunity to deploy new and exciting tech. “We see a future of and near future, by the way, an AI driven laboratory pipeline for materials modeling, discovery, characterization, evaluation, qualification and rapid iteration,” he said in his talk, explaining how AI would help design new nuclear reactors. “These efforts will substantially reduce the time and cost required to qualify advanced materials for next generation reactor systems. This is an autonomous research paradigm that integrates five decades of global irradiation data with generative AI robotics and high throughput experimentation methodologies.”

“For design, we’re developing advanced software systems capable of accelerating nuclear reactor deployments by enabling AI to explore the comprehensive design spaces, generate 3D models, [and] conduct rigorous failure mode analyses with minimal human intervention,” he added. “But of course, with humans in the loop. These AI powered design tools are projected to reduce design timelines by multiple factors, and the goal is to connect AI agents to tools to expedite autonomous design.”

Bahran also said that AI would speed up the nuclear licensing process, a complex regulatory system meant to ensure nuclear power plants are built safely. “Ultimately, the objective is, how do we accelerate that licensing pathway?” he said. “Think of a future where there is a gold standard, AI trained capacity building safety agent.”

He even said that he thinks AI would help run these new nuclear plants. “We're developing software systems employing AI driven digital twins to interpret complex operational data in real time, detect subtle operational deviations at early stages and recommend preemptive actions to enhance safety margins,” he said.

One of the slides Bahran showed during the presentation attempted to quantify the amount of human involvement these new AI-controlled power plants would have. He estimated less than five percent “human intervention during normal operations.”

Image via IAEA.

“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

“The implications of AI-generated safety analysis and licensing in combination with aspirations of <5% of human intervention during normal operations, demonstrates a concerted effort to move away from humans in the loop,” she said. “This is unheard of when considering frameworks and implementation of AI within other safety-critical systems, that typically emphasize meaningful human control.”

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at [email protected].

Sofia Guerra, a career nuclear safety expert who has worked with the IAEA and US Nuclear Regulatory Commission, attended the presentation live in Vienna. “I’m worried about potential serious accidents, which could be caused by small mistakes made by AI systems that cascade,” she said. “Or humans losing the know-how and safety culture to act as required.”

ChatGPT Told a Violent Stalker to Embrace the 'Haters,' Indictment Says

2025-12-04 01:25:33


This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

A Pittsburgh man who allegedly made 11 women’s lives hell across more than five states used ChatGPT as his “therapist” and “best friend,” which encouraged him to continue running his misogynistic and threat-filled podcast despite the “haters” and to visit more gyms to find women, the Department of Justice alleged in a newly filed indictment.