404 Media is a journalist-founded digital media company exploring the ways technology is shaping, and is shaped by, our world.

I Watched 6 Hours of DOGE Bro Testimony. Here's What They Had to Say For Themselves

2026-03-13 01:25:13

Over the course of a roughly six-hour deposition, Justin Fox, a former investment banker turned DOGE bro, refused to define what he believes counts as DEI; admitted he used ChatGPT to scan government contracts for terms such as “Black” and “homosexual” but not “white” or “caucasian;” and said that one of the grants he helped slash was “not for the benefit of humankind” before walking that claim back.

I watched all of Fox’s deposition from start to finish. The terse exchanges, the circular arguments, the pregnant pauses, all of it. The videos, available publicly on YouTube, were released as part of a lawsuit by the Modern Language Association, American Council of Learned Societies, and American Historical Association. They provide fascinating, or perhaps horrifying, insight into the thinking of someone inside DOGE. Even with Fox’s inability to answer seemingly easy questions, his responses are still illustrative of the recklessness and ham-fisted approach of a group of young, inexperienced people who caused massive damage across the U.S. government and beyond it. DOGE as an organization has been linked to 300,000 deaths due to its cuts, as well as multiple significant data breaches. All the while, DOGE did not actually reduce the government’s deficit.

'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back

2026-03-12 23:08:06

Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.

“It required a lot of creativity and fast thinking. Because if I’m talking to a man, I’m supposed to act like a woman. If I’m talking to a woman, I need to act like a man. If I’m talking to a gay person, I need to act like a gay person,” he told me at a coworking space in Nairobi where I met him. After doing this for months, he, like other data labelers, developed insomnia and PTSD, and had trouble having sex.

“It got to a point where my body couldn’t function. Where I saw someone naked, I don’t even feel it. And I have a wife, who expects a lot from you, a young family, she expects a lot from you intimately. But you can’t, like, do it,” Asia said. “It fractured a lot of things for me. My body is like, not functioning at all.”

Asia eventually hit a breaking point and stopped working for AI companies. He is now the secretary general of a Kenyan organization called the Data Labelers Association (DLA) and the author of “The Emotional Labor Behind AI Intimacy,” a testimony of his time working as the real human labor behind AI sex bots. As part of the DLA, Asia has been working to organize workers to fight for better pay, better mental health services, an end to draconian non-disclosure agreements, and better benefits for a workforce that often earns just a few dollars a day. Data labelers train, refine, and moderate the outputs of AI tools made by the largest companies in the world, yet they are wildly underpaid and haven’t benefitted from the runaway valuations of AI companies. 

Last month, the DLA held one of its largest events at the Nairobi Arboretum to sign up new members and to help them tell their stories.

These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world.

💡
Do you know anything else about data labeling or the human labor behind AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

“You can’t understand where you’re positioned if you don’t understand your history,” Angela, one of the day’s speakers, told the workers who had assembled there (many of the speakers at the event did not give their full names). “When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”

“These multinationals are coming to rule and dominate here,” she added. “It’s a very unfortunate supply chain, and my call today as data labelers is to build up on this—as we are fighting for labor rights, we are also fighting for the environment […] we are fighting big companies. We are fighting the British imperialist companies of today. It’s Apple, it’s Meta, it’s Gemini. Those are the ones we’re still fighting. It’s a call for solidarity and expanding our thinking beyond what we are doing, beyond our labor.”

In my few days in Kenya earlier this year, where I was traveling to speak at a conference about AI and journalism, it was immediately clear that data labelers make up a significant portion of the country’s tech workforce. Nearly everyone I spoke to there had either been a data labeler (or a content moderator) themselves or knew someone who had. Leaving the airport in Nairobi, you immediately drive by Sameer Business Park, an office complex that houses Sama, a San Francisco-headquartered “data annotation and labeling company” that has contracted with Meta, OpenAI, and many other tech giants. Sama has been sued repeatedly over its low pay and the fact that many of its workers suffer PTSD from repeatedly looking at graphic content. For years, a giant sign outside its office read: “Samasource THE SOUL OF AI.” My Uber driver asked why I was going to a random office building in Nairobi’s Central Business District; I told her I was going to interview a data labeler. “Oh, I do data labeling too,” she said.

Michael Geoffrey Asia. Image: Jason Koebler

Here’s the Memo Approving Gemini, ChatGPT, and Copilot for Use in the Senate

2026-03-11 23:41:16

A top Senate administrator approved OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot for official use in the Senate, the New York Times reported on Tuesday. 404 Media has obtained the full text of the memo and is publishing it below.

“The Sergeant at Arms (SAA) office of the Chief Information Officer (CIO) has approved the use of three Generative Artificial Intelligence (AI) platforms with Senate data,” the memo starts. It also says the SAA will provide each Senate employee with one free license to either Gemini Chat or ChatGPT Enterprise, with Copilot also available at no cost.

💡
Do you know anything else about the government's use of AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

Podcast: How to Talk to Your Friend Experiencing 'AI Psychosis'

2026-03-11 22:37:19

This week we start with Sam’s story discussing something that has come up a lot but no one has really answered: how do you speak to your friend or family member falling into AI psychosis? After the break, Joseph breaks down what happened when the FBI wanted data from ProtonMail. In the subscribers-only section, Emanuel tells us about the viral developers behind an app called Quittr, and how they exposed very sensitive data of hundreds of thousands of users.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

From Flock to ICE, Here’s a Breakdown of How You’re Being Watched

2026-03-11 22:27:56

It’s nearly impossible not to be watched these days. It can start right at home with your neighbors and their Ring cameras—a company that sold fear to the American public and is now integrating AI to turn entire neighborhoods into networked, automated surveillance systems. 

Head out a bit further and you’ll likely be confronted by Flock’s network of cameras that not only track license plates, but also track people’s movements with detailed precision. And as the Trump administration raids cities across the U.S. for undocumented immigrants, tech giants like Palantir are powering tools for ICE, including one called ELITE that helps the agency pick which neighborhoods to raid.

To better understand what exactly we’re looking at in this dystopian hellscape, 404 Media’s Jason Koebler and Joseph Cox joined r/technology for an AMA.

Understandably, people are worried about violations of their privacy by companies and the government. And many wonder: is there any way to go back once we’ve released all this AI-powered surveillance tech?

Questions and answers have been edited for clarity.

Q: How do you think we can as a society deescalate tools designed to spy on citizens? I feel like once the police state bottle is open it’s near impossible to put it back in?

Cybertruck Tried to Drive 'Straight Off an Overpass,' Attorney Claims

2026-03-11 05:40:32

A Cybertruck owner in Texas is suing Tesla for $1,000,000 in damages for “grossly negligent conduct” following an accident on a Houston highway that involved the vehicle’s self-driving feature. According to the lawsuit, Tesla is to blame for the crash because CEO Elon Musk has oversold the truck’s ability to drive itself.

As originally reported by the Austin American-Statesman, Justine Saint Amour bought a Cybertruck from a used car dealership in Florida and drove it until it crashed on a Houston overpass on August 18, 2025. That summer day, Saint Amour was driving down Houston’s 69 Eastex Freeway with the vehicle’s full self-driving (FSD) mode engaged.