RSS preview of Blog of 404 Media

Trump Wants to Double Production of New Nuclear Weapon Cores

2026-04-22 23:45:59


Trump’s proposed 2027 budget would nearly double funding for plutonium pits, the chemical-filled metal spheres inside nuclear warheads that kick off the explosion. The same budget would slash almost $400 million from nuclear environmental cleanup. The request follows a leaked National Nuclear Security Administration (NNSA) memo calling on America’s nuclear scientists to prototype new kinds of nukes and to double plutonium pit production from 30 to 60 triggers a year.

About the size of a bowling ball, a plutonium pit is an essential part of a nuclear warhead. The implosion of these plutonium-filled balls in a nuclear weapon triggers the massive explosion and unleashes the weapon’s destructive potential. Until 1992, America manufactured 1,000 plutonium pits a year. Now it makes fewer than 30. Trump wants to change that, and he’s willing to throw money at the problem to make it happen.

The 2027 White House budget request sets aside $53.9 billion for the Department of Energy (DOE). This includes an 87 percent increase in funding for pit production at the Savannah River Site—$2.25 billion, up from $1.2 billion—and an 83 percent increase in pit funding at Los Alamos National Lab (LANL)—$2.4 billion, up from $1.3 billion.

These are shocking increases, especially given that there are around 15,000 existing and unused plutonium pits sitting in a warehouse in Texas. “We have thousands of pits that should be eligible to be reused. The NNSA has publicly acknowledged that they will be reusing pits for some number of warheads,” Dylan Spaulding, a senior scientist at the Union of Concerned Scientists, told 404 Media.

Many of those plutonium pits are old, and some in the American government have concerns that they no longer function. But 2006 and 2019 studies from an independent group of scientists found that the nuclear triggers should have a lifespan of 85 to 100 years. Some, however, interpreted the 2019 study as cause for alarm.


“They essentially said we haven’t learned anything alarming about detrimental degradation to pits, but nonetheless the NNSA should resume pit production ‘as expeditiously as possible.’ So those words ‘as expeditiously as possible,’ that raised a lot of alarm because it suggested there was something to worry about,” Spaulding said. “I don’t think it’s clear to me that there’s any physical evidence that pits have a shorter lifetime…we should have decades left to solve the pit production problems and I think using aging as an excuse to go back right now is sort of a red herring.”

For Spaulding, the budget increase isn’t about replacing old pits. It’s about making new ones for new and different kinds of nuclear weapons. “The new budget really corresponds to a new push to accelerate everything in the nuclear complex that this administration has increasingly emphasized,” he said.

A leaked NNSA memo dated February 11, 2026 from Deputy Administrator for Defense Programs David Beck outlined a plan for new weapons aimed at “enhancing American nuclear dominance.” The memo was first published by the Los Alamos Study Group, an independent community think tank. 

The Beck memo outlined an ambitious project for plutonium pit production. “Complete near-term modifications at Los Alamos National Laboratory’s Plutonium Facility (PF-4) to enable production of 100 pits and achieve a sustained production rate of at least 60 pits per year and begin production,” it said. “Position the Savannah River Site (SRS) to facilitate expanded pit production at PF-4 until Savannah River Plutonium Processing Facility (SRPPF) achieves full operations.”

Spaulding said that getting LANL to produce 60 pits a year at a sustained rate was going to be difficult. “They were already going to be struggling to get to 30 in the next few years. It's not clear that 60 is feasible,” he said. “I don't think that LANL is incapable of doing that if they choose to do it, but it's putting a lot of additional strain on a system that was already struggling to meet half the requirement.”

Spaulding also pointed out an interesting line in the Beck memo that seemed to call for new weapon designs. “They’re adding new requirements to LANL. One of those is to demonstrate what they call two new ‘novel Rapid Capability’ weapon systems, and for LANL to produce what they call ‘design-for-manufacture’ pits.”

Spaulding said he interpreted these new tasks as the federal government asking America’s nuclear scientists to figure out how to get new weapons from the drawing board to prototype fast. “I think one of the things they’re thinking about is to be able to have increased flexibility in the 2030s to be able to produce different kinds of warheads,” he said. “We’re seeing calls for next generation hard and deeply buried target capabilities…it really seems like NNSA is shifting their philosophy from life extension and refurbishment…to all new production. This boost is really to try to get this industrial base moving faster than it is.”

Xiaodon Liang, a senior policy analyst for the Arms Control Association, also interpreted the increased plutonium pit budget as a sign of a new nuclear arms race. “There are new warhead designs that are currently in the early stages of production, if not late stages of development. One of those is the W87-1, which is a new warhead for the Sentinel,” he told 404 Media.

The Sentinel is a new intercontinental ballistic missile set to replace the Minuteman missiles that sit in underground silos dotting the United States. The Sentinel program is billions of dollars over budget, will require the digging of new ICBM silos, and has no end in sight.

Liang pointed to the W93 warhead, another new design that’s set to be used in submarine-launched ballistic missiles. “I think the case has been even weaker as to why the existing warheads don't satisfy requirements,” he said. “And I would add that part of the argument for the W93 is that the British were very strongly in favor of it because the British are reliant on our sea based systems for their own deterrence. So they lobbied very hard for the W93 and the case for why the United States needs it was never made clear.”

Both the United States and Russia have about 5,000 nuclear weapons each. None of the other nuclear countries have anywhere close to that number. Experts estimate that China has the next biggest stockpile, with only around 400 warheads. It raises the question: Why do we need more? Why make more plutonium pits at all?

“People are pointing at China as an emerging threat. There’s a widespread assumption in the defense world—which UCS disagrees with—that China is necessarily seeking parity with the United States in terms of numbers of weapons,” Spaulding said.

The number of nuclear weapons began to plummet at the end of the Cold War. A series of treaties between Russia and the United States limited the number of deployed weapons, and both countries began to decommission them. But all those treaties are gone now, and global instability—largely driven by America and Russia—has many countries reconsidering their anti-nuclear stance.

The US military is worried it won’t have enough nukes to deter everyone who might get one in the future. It’s also worried about hypersonic weapons, AI-driven innovations, and nukes from space. “That doesn’t mean it’s still a game of numbers,” Spaulding said. “That sort of simplistic thinking that applied to the Cold War with the arms race against Russia was, well, if they have X number, we have to have X number. Once there’s sort of horizontal proliferation across nine nuclear armed states, it’s not clear that this sort of tit-for-tat numbers game works the same way. More and more weapons are not the solution to nuclear proliferation elsewhere; that doesn’t lead us to a safer state in the world.”


That hasn’t stopped the US from throwing billions at making new nuclear weapons triggers and asking its scientists to step up production. But it’s unclear if that’s even possible in the short term. In 1992, when the US was making 1,000 pits a year, it did so at the Rocky Flats Plant in Colorado. The plant closed after the FBI raided it. It was an environmental disaster that killed its workers and irradiated the surrounding community. But it met quotas.

Since the closure, America’s nuclear scientists have worked on preserving the pits they had instead of making new ones. “I think the feeling is that science based stockpile stewardship was not enough because it did not leave us with the capability to respond to geopolitical change,” Spaulding said. “I think it’s being looked at quite a bit as an indicator of how well the United States is meeting this new aspiration even if the goals and quantities we’re setting are completely unbounded by reality, which is one of the problems right now.”

The budget and NNSA call for South Carolina’s SRS to manufacture the bulk of the plutonium pits in the future. But it’s unclear if that will ever happen. The ACA’s Liang is skeptical. “The key unanswered question is whether the Savannah River Site will ever come online,” he said. “The current estimate is 2035 for when it’ll reach construction’s end.” Current projections predict the pit factory will cost $30 billion, making it one of the most expensive buildings ever constructed in the US.

All that money and time spent making new plutonium pits means less for other projects. “There’s ongoing remediation work that the state of New Mexico says should be done, that the NNSA has not performed because it claims ‘we are expanding pit production, we can’t do this until later,’” Liang said.

“Los Alamos will start producing pits at some number soon. The question to me is, at what cost. Not just financial cost,” he said. “If you look at the DOE budget, what is getting cut? The Trump administration has tried to cut $400 million from the Environmental Management budget twice in the last two years.”

Ramping up pit production will lead to more radioactive waste that the DOE will be responsible for cleaning up. “We know from historical experience when pits were produced before…that this is a dangerous and hazardous process. Plutonium is radioactive. It’s a carcinogenic material. It results in large amounts of waste…which present human and environmental risks, not only to the workers who will be charged with carrying this out but to communities around these facilities,” Spaulding said at a press conference on Wednesday.

The United States spends billions of dollars every year cleaning up its radioactive messes, including around Rocky Flats where it once produced most of its plutonium pits. If this budget is approved, and it looks like it will be, then America will spend less money on helping people poisoned by nuclear weapons and more money making new ones.

Update 4/22/26: An earlier version of this story stated an incorrect statistic regarding cuts to environmental management. We've updated the piece with the correct information.

Startups Brag They Spend More Money on AI Than Human Employees

2026-04-22 21:11:08


Startup CEOs who are “tokenmaxxing” are bragging that they are spending more money on AI compute than it would cost to hire human workers. Astronomical AI bills are now, in a certain corner of the tech world, a supposed marker of growth and success. 

“Our AI bill just hit $113k in a single month (we’re a 4 person team). I’ve never been more proud of an invoice in my life,” Amos Bar-Joseph, the CEO of Swan AI, a coding agent startup, wrote in a viral LinkedIn post recently. Bar-Joseph goes on to explain that his startup is spending money on Claude usage bills rather than on salaries for human beings, and that the company is “scaling with intelligence, not headcount.”

“Our goal is $10M ARR [annual recurring revenue] with a sub-10 person org. We don’t have SDRs [sales development representatives], and our paid marketing budget is zero,” he wrote. “But we do spend a sh*t ton on tokens. That $113K bill? A part of it IS our go-to-market team. our engineering, support, legal.. you get the point.”

Much has been written in the last few weeks about “tokenmaxxing,” a vanity metric at tech startups and tech giants in which the amount of money being spent on AI tools like Claude and ChatGPT is seen as a measure of productivity. The Information reported earlier this month on an internal Meta dashboard called “Claudenomics,” a leaderboard that tracks the number of AI tokens individual employees use. The general narrative has been that the more AI tokens an employee uses, the more productive they are and the more innovative they must be in using AI. 

Stories abound of individual employees spending hundreds of thousands of dollars in AI compute by themselves, and this being something that other workers should aspire to. There has been at least a partial backlash to this, with Salesforce saying they have invented a metric called “Agentic Work Units” that attempts to quantify whether all this spend on AI tokens is translating into actual work. 

Shifting so much money and attention to using AI tools is, of course, being done with the goal of replacing human workers. We have seen CEOs justify mass layoffs with the idea that improving AI efficiency will reduce the need for human workers, and on Monday Verizon CEO Dan Schulman said he expects AI to lead to mass unemployment.

But while big companies are using AI to justify reducing worker headcount, startups are using AI to justify never hiring human workers in the first place. 

“This is the part people miss about AI-native companies - the $113k is not a cost, it is your headcount budget allocated differently,” Chen Avnery, a cofounder of Fundable AI, commented on Bar-Joseph’s LinkedIn post. “We run a similar model processing loan documents that would normally require a team of 15. The math works when your AI spend generates 10x the output of equivalent human cost. The real unlock is compound scaling—token spend grows linearly while output grows exponentially.”

Medvi, a GLP-1 telehealth startup that has two employees and seven contractors and was built largely using AI, is apparently on track to bring in $1.8 billion in revenue this year, according to the New York Times (Medvi is facing regulatory scrutiny for its practices). The industry has become obsessed with the idea of a “one-person, billion-dollar company,” and various AI startups and venture capital firms are now pushing founders to try to create “autonomous” companies that have few or no employees.

Andrew Pignanelli, the founder of the dubiously-named General Intelligence Company, gave a presentation last month in which he explained that many of the “jobs” at his company are just a series of AI agents, and that he now usually spends more money on AI compute than he does on human salaries.

“We’ve started spending more on tokens than on salaries depending on the day,” he said. “Today we spent $4 grand on [Claude] Opus tokens. Some days it’ll be less. But this shows that we’re starting to shift our human capital to intelligence.”


What’s left unsaid by these tokenmaxxing entrepreneurs, however, is whether the spend on AI compute is actually worth it, whether the money would be better spent on human employees, what types of disasters could occur, and whether any of this is actually financially sustainable. 

Companies like OpenAI and Anthropic are losing tons of cash on their products; even though artificial intelligence compute is expensive, it is underpriced for what it actually costs, and it’s not clear how long investors in frontier AI companies are going to be willing to subsidize those losses. Meanwhile, we have reported endlessly on “workslop” and the human cleanup that is often needed when AI-written code, AI-generated work, and customer-facing AI products go awry. There are also numerous horror stories of AI getting caught in a loop and burning thousands of dollars worth of tokens on what end up being completely useless tasks. Regardless, there’s an entirely new class of entrepreneur who seems hell-bent on “hiring” AI employees, not human ones.

Podcast: How Algorithms Make Us Feel Bad and Weird

2026-04-22 20:57:16


This week Sam unpacks how social media algorithms manipulate our emotions around everything from engagement rings to wedding dresses to babies, and what it feels like getting lost in the #Weddingtok sauce. Then, Emanuel breaks down a satirical but functional AI tool that rips off open source software. There’s a long, really interesting history of “clean room” software. In the section for subscribers at the Supporter level, Jason walks us through “tokenmaxxing” and startups obsessed with spending as much money as possible on AI—and as little as possible on humans.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism.

If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

I Almost Lost My Mind in the Bridal Algorithm

This AI Tool Rips Off Open Source Software Without Violating Copyright

EMERGENCY BREAKING NEWS PODCAST: Tim, Cooked

2026-04-22 05:06:13


Today, after recording our normal weekly podcast, Sam, Emanuel, and Jason spontaneously began discussing the legacy of Tim Cook as Apple CEO, the #BreakingTechNewsofTheWeek. We got really riled up, so we decided to press record to discuss Tim Cook's accountant energy, his legacy of creating different sizes and shapes of rectangle and square phone-like devices, and the Business School Simulator create-a-player-ass look of his replacement.

This is, of course, a very loose, rough rant, but we thought we'd share because we are seeking to be thought leaders in these troubling times.

This AI Tool Rips Off Open Source Software Without Violating Copyright

2026-04-21 21:00:44


For a small price, Malus.sh will use AI to ingest any piece of software you give it and spit out a new version that “liberates” it from any existing copyright licenses. The result is a new piece of software that serves the same function but doesn’t have to honor, for example, the kind of copyright licenses that ensure open source software remains free to use and modify—a process that could upend the already fragile open source ecosystem.

The site is an elaborate bit of satire designed to bring attention to a very real problem in open source, but it also does exactly what it advertises and is a real LLC that is making money by using AI to produce “clean room” clones of existing software. 

“It works,” Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told me. “The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation.” 

Malus’s legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM’s computer would have infringed on the company’s copyright, so Columbia Data Products came up with what we now know as a “clean room” design.

It tasked one team with examining IBM’s BIOS and creating specifications for what a clone of that system would require. A different “clean” team, one that was never exposed to IBM’s code, then created BIOS that met those specifications from scratch. The result was a system that was compatible with IBM’s ecosystem but didn’t violate its copyright because it did not copy IBM’s technical process and counted as original work. 

This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects, and that, some would argue, are built from scratch and are therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative, because like any LLM output, it is trained on the collective output of humans scraped from the internet, including specific open source projects. 

Malus (pronounced malice) uses AI to do the same thing.

“Finally, liberation from open source license obligations,” Malus’s site says. “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.” Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify. 

Malus’s pitch is naked contempt for the open source community, which believes in developing software collaboratively and providing it for free to everyone. Normally, copyright licenses for open source projects only ask that anyone who uses the work give credit to maintainers and that any derivative works will continue to use the same permissive license, which hopefully grows the community of people who contribute back into the project and keep it going. 

“Some licenses require you to contribute improvements back. Your shareholders didn't invest in your company so you could help strangers,” Malus’s site says. “Is your legal team frustrated with the attribution clause? Tired of putting ‘Portions of this software…’ in your documentation? Those maintainers worked for free—why should they get credit?”

The site gained some incredulous attention when it was posted to Hacker News recently, but it didn’t take people long to realize that it was an elaborate bit of satire, even if the tool can still replicate open source projects as advertised.

Malus was born out of a talk that open source developers Dylan Ayrey and Michael Nolan gave at the open source conference FOSDEM 2026. The AI-slop-heavy presentation is a whirlwind history of copyright and software, how the two have always had an uneasy but necessary relationship, and how that relationship has fundamentally changed now that AI tools can produce clean room designs at the click of a button.

“Even if the courts ruled that maybe this is legal, and maybe there aren't legal restrictions to doing this, is it ethical?” Ayrey asked. 

“The question we should be asking is, can we get rich off of this?” Nolan said. 

And so Malus was born. 

Malus is satire, but it will actually take your money and do what it advertises. It is modeled after the IBM case and uses one AI agent to write the specifications and a different agent to produce the code, creating that “clean room” effect. Malus will also do performance testing and scan for common vulnerabilities to make sure the output is functional. 
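The two-agent split described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Malus’s actual code: the agent functions here are stubs standing in for LLM calls, and every name in it is invented for the example.

```python
def spec_agent(original_source: str) -> str:
    """The 'dirty' agent: reads the original code and emits only a
    behavioral specification, never the code itself."""
    # Stub: a real system would prompt an LLM with the source here.
    return "SPEC: a function add(a, b) that returns the sum of two numbers"

def impl_agent(spec: str) -> str:
    """The 'clean' agent: sees only the spec, never the original source."""
    # Stub: a real system would prompt a second, isolated LLM here.
    return "def add(a, b):\n    return a + b"

def clean_room(original_source: str) -> str:
    spec = spec_agent(original_source)   # team 1: write the specification
    assert original_source not in spec   # firewall: no code crosses over
    return impl_agent(spec)              # team 2: reimplement from spec alone

original = "def add(x, y): return x + y  # (c) some copyleft project"
clone = clean_room(original)

namespace = {}
exec(clone, namespace)                   # load the clean reimplementation
print(namespace["add"](2, 3))            # -> 5
```

The `assert` is the whole legal theory in miniature: as long as only the spec crosses the boundary, the second agent’s output is argued to be an original work, which is exactly the claim the article goes on to question.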

Nolan didn’t tell me exactly how much money the company is making but said it is a real LLC with a bank account and is profitable, with “probably hundreds” of dollars at this point. The service charges $0.01 for each KB of data across the project's various dependencies.

The pricing for using Malus.

What Malus is satirizing is also really happening. For example, in March Ars Technica and The Register covered an incident around a widely used Python library called chardet. Originally it was released under the LGPL license; then a version was rereleased under the more permissive MIT license. Dan Blanchard, who used Claude to produce the MIT-licensed version of chardet, argued that it was a complete rewrite of chardet, and not derivative, because only a small percentage of the code looked and functioned similarly. Mark Pilgrim, who originally released chardet, disagreed and complained about Blanchard using this method to shed the more restrictive LGPL license.

“This concern is legitimate. AI has made clean-room style reimplementation dramatically cheaper,” Blanchard wrote in response to Pilgrim. “What used to require months of work by expensive engineering teams can now, as Armin Ronacher put it, be done trivially.”

Blanchard also conceded that Claude, like all LLMs, was trained on vast amounts of data scraped indiscriminately from the internet and was exposed to the original chardet in its training, but he maintains his version is not derivative.

“I have seen Malus.sh, and like many people, I wasn’t sure it was satire at first, because I’m sure someone will probably make that for real eventually,” Blanchard told me in an email. “I think the reality of the situation is that traditional software licenses (open source and commercial) weren’t the real barrier against these sorts of rewrites in the past (see WINE, Linux, and IBM PC BIOSes long ago), and the main obstacles were time and money. A rewrite that would’ve taken a team of people months or years can be done in days with AI. As a professional software engineer, I don’t love that much of the business model around selling software is in danger, but I don’t think there’s any putting the genie back in the bottle at this point.”

After the backlash, Blanchard changed the license on his version of chardet from MIT to the 0BSD license, which he told me “was a change that satisfied many in the community's concerns about AI-generated code not even being copyrightable in the first place.” The 0BSD license is very permissive and allows anyone to “use, copy, modify, and/or distribute this software for any purpose with or without fee.”

“Much of our law was designed with human scale inefficiencies in mind,” Meredith Rose, a senior policy counsel with Public Knowledge who focuses on copyright, DMCA, and intellectual property reform, told me. “Clean rooms worked because courts kind of looked at the whole clean room methodology and were like, ‘there's a lot of labor that goes into this.’ That’s part of the calculus. You had a couple human beings recreating this very big source package essentially from nothing but high level specs. The idea of collapsing that into something where you can press a button and get an entire package recreated is kind of wild, even though it is technically correct under the law as far as I can tell.”

Others in the open source community say that regardless of the legal implications of AI-generated clean room versions of existing software, the reality and impact of the practice is here, and not good for the open source community. 

“Whether or not Malus is satire, the concept it describes is already happening in practice. The legal theory that an AI can ‘clean room’ reimplement things was arguably made inevitable by the approach companies like OpenAI and Anthropic have taken to copyright: treat the entire internet as training data, then claim the output is a new, unencumbered work,” Mike McQuaid, developer of the popular open source package manager Homebrew, told me. “Even if you accept the legal argument, the ethics fucking suck. Open source isn't just source code you download once. It's an ongoing relationship: security patches, bug fixes, adaptation to new platforms, accumulated expertise from years of triage and review. A ‘clean room’ reimplementation fucks all of that. You get a snapshot with none of the maintenance. It’s basically just a fork where nobody knows how the code works, nobody is watching for CVEs, and nobody knows what to do when it breaks. That's not liberation, it's just technical debt.”

Nolan told me that he made Malus to make developers feel this danger.

“I've been publishing research on these [open source] communities for over a decade now, and consistently, what I hear over and over again is that open source has won because 80 or 90 percent of all software applications rely upon us, but what they're relying upon is the wholesale exploitation of massive communities of workers who convince themselves that they're winning because Google uses them, and what they end up doing instead is pretending that because their software is licensed under a certain license, that that means they’re ethical,” Nolan said. “It doesn’t matter if they’re in the supply chain of weapons that are committing war crimes. It doesn’t matter that their friends suddenly get the rug pulled out from under them when a CTO decides to change strategy and no longer wants to support that library anymore [...] They just keep on saying everything’s okay as the tech sector essentially will collapse down upon them, and they keep saying they're winning, even when they're not. And so my hope, with Malus, was to make people think critically about their position.”

Scientists Gave a Bunch of Salmon Cocaine. This Is What Happened Next

2026-04-21 03:40:33


Salmon exposed to cocaine swim farther and behave differently than unexposed fish, according to the first study to observe the effects of cocaine on fish in the wild rather than a laboratory setting.

Many waterways around the world are contaminated with a host of legal and illegal substances that are consumed by humans and then excreted into sewage systems. As global demand for cocaine skyrockets, traces of the drug—including its main metabolite, benzoylecgonine—are flowing into lakes and rivers where they can be absorbed by wildlife, such as Atlantic salmon.

Previous research in laboratory conditions has already linked cocaine exposure to behavioral changes in aquatic species, but this connection has never been explored in fish in the wild. Now, scientists have demonstrated that cocaine and benzoylecgonine “can accumulate in the brains of exposed Atlantic salmon—an ecologically and economically important species of high conservation concern—and disrupt the movement and space use of these fish in the wild,” according to a study published on Monday in Current Biology.

“We were motivated by a major gap in the scientific literature: almost everything that was known about the impacts of cocaine pollution on animal behaviour relies on data that has been collected in laboratory settings,” said Michael Bertram, an author of the study and an associate professor in the department of wildlife, fish, and environmental studies at the Swedish University of Agricultural Sciences, in an email to 404 Media. 

“We wanted to know whether environmentally realistic exposure to cocaine and its major metabolite, benzoylecgonine, actually changes how fish move in the wild under real ecological and environmental conditions,” he continued.

To fill this knowledge gap, Bertram and his colleagues obtained more than a hundred Atlantic salmon “smolts”—the term for young fish—that were raised in a hatchery until they were two years old. The team divided them into three groups of 35 fish each and equipped every fish with an implant and tracking tags. The “cocaine group” received a slow-release chemical implant of cocaine, the “metabolite group” received a slow-release benzoylecgonine implant, and a third “control group” carried a dummy implant with no chemicals.  

Graphical abstract outlining the team’s approach. Image: Brand, Jack et al. 

The three groups were released simultaneously on April 12, 2022 at the same site on the south-western side of Lake Vättern in Sweden, alongside 200 other smolts that were not involved in this experiment. Over the course of roughly two months, the exposed groups moved much more than the control group, especially the metabolite group, which traveled 1.9 times farther per week than the unexposed smolts.

“We expected an effect of contaminant exposure on the movement of salmon, but the scale of the changes seen still surprised us,” Bertram said. “The strongest response was close to a two-fold increase in movement, and the most unexpected result was that benzoylecgonine, the main metabolite of cocaine, produced the clearest effect rather than cocaine itself.”

Indeed, the study found that the metabolite group swam almost nine miles farther per week than the control group in the final two weeks of the eight-week experiment, whereas the control group had largely settled down by that point.

“To the best of our knowledge, this is the first demonstration that environmental levels of a cocaine metabolite that is commonly found in aquatic ecosystems can alter the space use and swimming activity of fish in the wild,” the team said in the study. 

It’s not clear why the metabolite group was so restless, given that benzoylecgonine is considered psychoactively inactive in humans. The compound is a long-lived byproduct of cocaine made by the liver and excreted in urine, which makes it the easiest biomarker to look for in a typical drug test. The possibility that this metabolite may have a greater impact on some species in the wild is disturbing, in part because it is frequently found in higher concentrations in natural environments than its parent compound (cocaine).

“The results suggest that benzoylecgonine may be more biologically important than it is often assumed to be,” Bertram said. “Our findings raise new questions about whether metabolites can sometimes be as disruptive as, or even more disruptive than, the parent compound in aquatic wildlife.”

The team emphasized that much more research is required to understand the pressures that cocaine and other substances might be introducing both to individual species and to whole ecosystems. 

“The next steps are to work out the mechanisms by which cocaine and its metabolite disrupt behaviour and movement in fish in the wild, test how general this effect is across other species and systems, and use higher-resolution tracking to see whether these movement changes affect predation risk, migration, reproduction, or survival,” Bertram said. “That is really the key question now: not just whether behaviour changes, but what those changes mean ecologically.”

For example, this particular study focused on hatchery-raised smolts that were released into the wild, but future studies could test out the effects of these contaminants on fully wild populations as well, which have their own unique behavioral characteristics. Unraveling the effects of these human-sourced substances is even more urgent given that the global use of illicit drugs increased by roughly 20 percent over the last decade, suggesting that “the environmental impact of these substances is likely to grow,” according to the study.

“The behaviour and movement of wildlife underpin habitat use, feeding, predator exposure, and population connectivity, so altering these processes could have wider consequences for food webs and population dynamics,” Bertram concluded. “For species already under pressure, an added stressor like this could be highly detrimental, although the long-term effects on fisheries and ecosystems still need to be tested directly.”