2026-04-22 21:11:08

Startup CEOs who are “tokenmaxxing” are bragging that they are spending more money on AI compute than it would cost to hire human workers. Astronomical AI bills are now, in a certain corner of the tech world, a supposed marker of growth and success.
“Our AI bill just hit $113k in a single month (we’re a 4 person team). I’ve never been more proud of an invoice in my life,” Amos Bar-Joseph, the CEO of Swan AI, a coding agent startup, wrote in a viral LinkedIn post recently. Bar-Joseph goes on to explain that his startup is spending money on Claude usage bills rather than on salaries for human beings, and that the company is “scaling with intelligence, not headcount.”
“Our goal is $10M ARR [annual recurring revenue] with a sub-10 person org. We don’t have SDRs [sales development representatives], and our paid marketing budget is zero,” he wrote. “But we do spend a sh*t ton on tokens. That $113K bill? A part of it IS our go-to-market team. our engineering, support, legal.. you get the point.”
Much has been written in the last few weeks about “tokenmaxxing,” a vanity metric at tech startups and tech giants in which the amount of money being spent on AI tools like Claude and ChatGPT is seen as a measure of productivity. The Information reported earlier this month on an internal Meta dashboard called “Claudenomics,” a leaderboard that tracks the number of AI tokens individual employees use. The general narrative has been that the more AI tokens an employee uses, the more productive they are and the more innovative they must be in using AI.
Stories abound of individual employees single-handedly spending hundreds of thousands of dollars on AI compute, and of this being held up as something other workers should aspire to. There has been at least a partial backlash to this, with Salesforce saying it has invented a metric called “Agentic Work Units” that attempts to quantify whether all this spending on AI tokens is translating into actual work.
Shifting so much money and attention to using AI tools is, of course, being done with the goal of replacing human workers. We have seen CEOs justify mass layoffs with the idea that improving AI efficiency will reduce the need for human workers, and on Monday Verizon CEO Dan Schulman said he expects AI to lead to mass unemployment.
But while big companies are using AI to justify reducing worker headcount, startups are using AI to justify never hiring human workers in the first place.
“This is the part people miss about AI-native companies - the $113k is not a cost, it is your headcount budget allocated differently,” Chen Avnery, a cofounder of Fundable AI, commented on Bar-Joseph’s LinkedIn post. “We run a similar model processing loan documents that would normally require a team of 15. The math works when your AI spend generates 10x the output of equivalent human cost. The real unlock is compound scaling—token spend grows linearly while output grows exponentially.”
Medvi, a GLP-1 telehealth startup with two employees and seven contractors that was built largely using AI, is apparently on track to bring in $1.8 billion in revenue this year, according to the New York Times (Medvi is facing regulatory scrutiny for its practices). The industry has become obsessed with the idea of a “one-person, billion-dollar company,” and various AI startups and venture capital firms are now trying to push founders to try to create “autonomous” companies that have few or no employees.
Andrew Pignanelli, the founder of the dubiously-named General Intelligence Company, gave a presentation last month in which he explained that many of the “jobs” at his company are just a series of AI agents, and that he now usually spends more money on AI compute than he does on human salaries.
“We’ve started spending more on tokens than on salaries depending on the day,” he said. “Today we spent $4 grand on [Claude] Opus tokens. Some days it’ll be less. But this shows that we’re starting to shift our human capital to intelligence.”

What’s left unsaid by these tokenmaxxing entrepreneurs, however, is whether the spend on AI compute is actually worth it, whether the money would be better spent on human employees, what types of disasters could occur, and whether any of this is actually financially sustainable.
Companies like OpenAI and Anthropic are losing tons of cash on their products; even though artificial intelligence compute is expensive, it is underpriced for what it actually costs, and it’s not clear how long investors in frontier AI companies are going to be willing to subsidize those losses. Meanwhile, we have reported endlessly on “workslop” and the human cleanup that is often needed when AI-written code, AI-generated work, and customer-facing AI products go awry. There are also numerous horror stories of AI getting caught in a loop and burning thousands of dollars worth of tokens on what end up being completely useless tasks. Regardless, there’s an entirely new class of entrepreneur who seems hell-bent on “hiring” AI employees, not human ones.
2026-04-22 20:57:16

This week Sam unpacks how social media algorithms manipulate our emotions around everything from engagement rings to wedding dresses to babies, and what it feels like getting lost in the #Weddingtok sauce. Then, Emanuel breaks down a satirical but functional AI tool that rips off open source software. There’s a long, really interesting history of “clean room” software. In the section for subscribers at the Supporter level, Jason walks us through “tokenmaxxing” and startups obsessed with spending as much money as possible on AI—and as little as possible on humans.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism.
If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
I Almost Lost My Mind in the Bridal Algorithm
This AI Tool Rips Off Open Source Software Without Violating Copyright
2026-04-22 05:06:13

Today after recording our normal weekly podcast, Sam, Emanuel, and Jason spontaneously began discussing the legacy of Tim Cook as Apple CEO, the #BreakingTechNewsofTheWeek. We got really riled up so decided to press record to discuss Tim Cook's accountant energy, his legacy of creating different sizes and shapes of rectangle and square phone-like devices, and the Business School Simulator create-a-player-ass look of his replacement.
This is, of course, a very loose, rough rant, but we thought we'd share it because we are seeking to be thought leaders in these troubling times.
2026-04-21 21:00:44
For a small price, Malus.sh will use AI to ingest any piece of software you give it and spit out a new version that “liberates” it from any existing copyright licenses. The result is a new piece of software that serves the same function but doesn’t have to honor, for example, the kind of copyright licenses that ensure open source software remains free to use and modify—a process which could upend the already fragile open source ecosystem.
The site is an elaborate bit of satire designed to bring attention to a very real problem in open source, but it also does exactly what it advertises and is a real LLC that is making money by using AI to produce “clean room” clones of existing software.
“It works,” Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told me. “The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation.”
Malus’s legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM’s computer would have infringed on the company’s copyright, so Columbia Data Products came up with what we now know as a “clean room” design.
It tasked one team with examining IBM’s BIOS and creating specifications for what a clone of that system would require. A different “clean” team, one that was never exposed to IBM’s code, then created a BIOS that met those specifications from scratch. The result was a system that was compatible with IBM’s ecosystem but didn’t violate its copyright, because it did not copy IBM’s technical process and counted as original work.
This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects and that, some would argue, is built from scratch and is therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative, because like any LLM output, it is trained on the collective output of humans scraped from the internet, including specific open source projects.
Malus (pronounced malice) uses AI to do the same thing.
“Finally, liberation from open source license obligations,” Malus’s site says. “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.” Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify.
Malus’s pitch is naked contempt for the open source community, which believes in developing software collaboratively and providing it for free to everyone. Normally, copyright licenses for open source projects only ask that anyone who uses the work give credit to maintainers and that any derivative works continue to use the same license, which hopefully grows the community of people who contribute back to the project and keep it going.
“Some licenses require you to contribute improvements back. Your shareholders didn't invest in your company so you could help strangers,” Malus’s site says. “Is your legal team frustrated with the attribution clause? Tired of putting ‘Portions of this software…’ in your documentation? Those maintainers worked for free—why should they get credit?”
The site gained some incredulous attention when it was posted to Hacker News recently, but it didn’t take people long to realize that it was an elaborate bit of satire, even if the tool can still replicate open source projects as advertised.
Malus was born out of a talk that open source developers Dylan Ayrey and Michael Nolan gave at the open source conference FOSDEM 2026. The AI-slop-heavy presentation is a whirlwind history of copyright and software, how the two have always had an uneasy but necessary relationship, and how that relationship has fundamentally changed now that AI tools can produce clean room designs at the click of a button.
“Even if the courts ruled that maybe this is legal, and maybe there aren't legal restrictions to doing this, is it ethical?” Ayrey asked.
“The question we should be asking is, can we get rich off of this?” Nolan said.
And so Malus was born.
Malus is satire, but it will actually take your money and do what it advertises. It is modeled after the IBM case and uses one AI agent to write the specifications and a different agent to produce the code, creating that “clean room” effect. Malus will also do performance testing and scan for common vulnerabilities to make sure the output is functional.
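To make that architecture concrete, here is a minimal, hypothetical sketch of the two-agent pipeline the article describes. This is not Malus’s actual code: the agent functions below are hard-coded stand-ins for what would really be LLM calls, and the function names are invented for illustration.

```python
# Hypothetical sketch of a two-agent "clean room" pipeline.
# In a real system, spec_agent and clean_agent would each be LLM calls;
# here they are trivial stand-ins so the structure is visible.

def spec_agent(source_code: str) -> str:
    """First agent: reads the original source and emits only a
    behavioral specification, never the code itself."""
    if "def add" in source_code:
        return "Expose a function add(a, b) that returns the sum of a and b."
    return "No behavior identified."

def clean_agent(spec: str) -> str:
    """Second agent: sees only the spec, never the original code,
    and writes a fresh implementation from scratch."""
    if "add(a, b)" in spec:
        return "def add(a, b):\n    return a + b\n"
    return ""

def clean_room_clone(source_code: str) -> str:
    # The two agents communicate only through the spec, mirroring the
    # 1982 Columbia Data Products process the article describes.
    return clean_agent(spec_agent(source_code))

original = "def add(a, b):\n    total = a + b\n    return total\n"
clone = clean_room_clone(original)

namespace = {}
exec(clone, namespace)
print(namespace["add"](2, 3))  # functionally equivalent: prints 5
```

The point of the separation is that the clone is textually different from the original (here it never contains the original’s `total` variable) while behaving identically, which is what the clean room argument hangs on.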
Nolan didn’t tell me exactly how much money the company is making but said it is a real LLC with a bank account and is profitable, with “probably hundreds” of dollars at this point. The service charges $0.01 for each KB of data across the project's various dependencies.
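Taking that stated pricing at face value (an assumption; the actual billing may differ), the cost scales linearly with dependency size:

```python
# Back-of-the-envelope cost under the pricing the article reports:
# $0.01 per KB of data across a project's dependencies.
PRICE_PER_KB = 0.01

def malus_cost(total_kb: float) -> float:
    """Hypothetical calculator; it only mirrors the reported rate."""
    return total_kb * PRICE_PER_KB

# A project whose dependencies total 5 MB (5 * 1024 KB) would run about:
print(f"${malus_cost(5 * 1024):.2f}")  # $51.20
```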

What Malus is satirizing is also really happening. For example, in March Ars Technica and The Register covered an incident around a widely used Python library called chardet. Originally it was released under the LGPL license; then a version was rereleased under the less permissive MIT license. Dan Blanchard, who used Claude to produce the MIT-licensed version of chardet, argued that it was a complete rewrite of chardet, and not derivative, because only a small percent of the code looked and functioned similarly. Mark Pilgrim, who originally released chardet, disagreed and complained about Blanchard using this method to shed the more restrictive LGPL license.
“This concern is legitimate. AI has made clean-room style reimplementation dramatically cheaper,” Blanchard wrote in response to Pilgrim. “What used to require months of work by expensive engineering teams can now, as Armin Ronacher put it, be done trivially.”
Blanchard also conceded that Claude, which like all LLMs was trained on vast amounts of data scraped indiscriminately from the internet, was exposed to the original chardet in its training, but he maintains his version is not derivative.
“I have seen Malus.sh, and like many people, I wasn’t sure it was satire at first, because I’m sure someone will probably make that for real eventually,” Blanchard told me in an email. “I think the reality of the situation is that traditional software licenses (open source and commercial) weren’t the real barrier against these sorts of rewrites in the past (see WINE, Linux, and IBM PC BIOSes long ago), and the main obstacles were time and money. A rewrite that would’ve taken a team of people months or years can be done in days with AI. As a professional software engineer, I don’t love that much of the business model around selling software is in danger, but I don’t think there’s any putting the genie back in the bottle at this point.”
After the backlash, Blanchard changed the license on his version of chardet from MIT to the 0BSD license, which he told me “was a change that satisfied many in the community's concerns about AI-generated code not even being copyrightable in the first place.” The 0BSD license is very permissive and allows anyone to “use, copy, modify, and/or distribute this software for any purpose with or without fee.”
“Much of our law was designed with human scale inefficiencies in mind,” Meredith Rose, a senior policy counsel with Public Knowledge who focuses on copyright, DMCA, and intellectual property reform, told me. “Clean rooms worked because courts kind of looked at the whole clean room methodology and were like, ‘there's a lot of labor that goes into this.’ That’s part of the calculus. You had a couple human beings recreating this very big source package essentially from nothing but high level specs. The idea of collapsing that into something where you can press a button and get an entire package recreated is kind of wild, even though it is technically correct under the law as far as I can tell.”
Others in the open source community say that regardless of the legal implications of AI-generated clean room versions of existing software, the reality and impact of the practice is here, and not good for the open source community.
“Whether or not Malus is satire, the concept it describes is already happening in practice. The legal theory that an AI can ‘clean room’ reimplement things was arguably made inevitable by the approach companies like OpenAI and Anthropic have taken to copyright: treat the entire internet as training data, then claim the output is a new, unencumbered work,” Mike McQuaid, developer of the popular open source package manager Homebrew, told me. “Even if you accept the legal argument, the ethics fucking suck. Open source isn't just source code you download once. It's an ongoing relationship: security patches, bug fixes, adaptation to new platforms, accumulated expertise from years of triage and review. A ‘clean room’ reimplementation fucks all of that. You get a snapshot with none of the maintenance. It’s basically just a fork where nobody knows how the code works, nobody is watching for CVEs, and nobody knows what to do when it breaks. That's not liberation, it's just technical debt.”
Nolan told me that he made Malus to make developers feel this danger.
“I've been publishing research on these [open source] communities for over a decade now, and consistently, what I hear over and over again is that open source has won because 80 or 90 percent of all software applications rely upon us, but what they're relying upon is the wholesale exploitation of massive communities of workers who convince themselves that they're winning because Google uses them, and what they end up doing instead is pretending that because their software is licensed under a certain license, that that means they’re ethical,” Nolan said. “It doesn’t matter if they’re in the supply chain of weapons that are committing war crimes. It doesn’t matter that their friends suddenly get the rug pulled out from under them when a CTO decides to change strategy and no longer wants to support that library anymore [...] They just keep on saying everything’s okay as the tech sector essentially will collapse down upon them, and they keep saying they're winning, even when they're not. And so my hope, with Malus, was to make people think critically about their position.”
2026-04-21 03:40:33

Salmon exposed to cocaine swim farther and behave differently than unexposed fish, according to the first study to observe the effects of cocaine on fish in the wild rather than a laboratory setting.
Many waterways around the world are contaminated with a host of legal and illegal substances that are consumed by humans and then excreted into sewage systems. As global demand for cocaine skyrockets, traces of the drug—including its main metabolite, benzoylecgonine—are flowing into lakes and rivers where they can be absorbed by wildlife, such as Atlantic salmon.
Previous research in laboratory conditions has already linked cocaine exposure to behavioral changes in aquatic species, but this connection has never been explored in fish in the wild. Now, scientists have demonstrated that cocaine and benzoylecgonine “can accumulate in the brains of exposed Atlantic salmon—an ecologically and economically important species of high conservation concern—and disrupt the movement and space use of these fish in the wild,” according to a study published on Monday in Current Biology.
“We were motivated by a major gap in the scientific literature: almost everything that was known about the impacts of cocaine pollution on animal behaviour relies on data that has been collected in laboratory settings,” said Michael Bertram, an author of the study and an associate professor in the department of wildlife, fish, and environmental studies at the Swedish University of Agricultural Sciences, in an email to 404 Media.
“We wanted to know whether environmentally realistic exposure to cocaine and its major metabolite, benzoylecgonine, actually changes how fish move in the wild under real ecological and environmental conditions,” he continued.
To fill this knowledge gap, Bertram and his colleagues obtained more than a hundred Atlantic salmon “smolts”—the term for young fish—that were raised in a hatchery until they were two years old. The team divided them into three groups of 35 fish each and equipped every fish with an implant and tracking tags. The “cocaine group” received a slow-release chemical implant of cocaine, the “metabolite group” received a slow-release benzoylecgonine implant, and a third “control group” carried a dummy implant with no chemicals.

The three groups were released simultaneously on April 12, 2022 at the same site on the south-western side of Lake Vättern in Sweden, alongside 200 other smolts that were not involved in this experiment. Over the course of roughly two months, the exposed groups moved much more than the control group, especially the metabolite group; they traveled 1.9 times farther per week than the unexposed smolts.
“We expected an effect of contaminant exposure on the movement of salmon, but the scale of the changes seen still surprised us,” Bertram said. “The strongest response was close to a two-fold increase in movement, and the most unexpected result was that benzoylecgonine, the main metabolite of cocaine, produced the clearest effect rather than cocaine itself.”
Indeed, the study found that the metabolite group swam almost nine miles farther per week than the control group during the final two weeks of the eight-week experiment; by that point, the unexposed fish had largely settled down.
“To the best of our knowledge, this is the first demonstration that environmental levels of a cocaine metabolite that is commonly found in aquatic ecosystems can alter the space use and swimming activity of fish in the wild,” the team said in the study.
It’s not clear why the metabolite group was so restless, given that benzoylecgonine is considered psychoactively inactive in humans. The compound is a long-lived byproduct of cocaine made by the liver and excreted in urine, which makes it the easiest biomarker to look for in a typical drug test. The possibility that this metabolite may have a greater impact on some species in the wild is disturbing, in part because it is frequently found in higher concentrations in natural environments than its parent compound (cocaine).
“The results suggest that benzoylecgonine may be more biologically important than it is often assumed to be,” Bertram said. “Our findings raise new questions about whether metabolites can sometimes be as disruptive as, or even more disruptive than, the parent compound in aquatic wildlife.”
The team emphasized that much more research is required to understand the pressures that cocaine and other substances might be introducing both to individual species and to whole ecosystems.
“The next steps are to work out the mechanisms by which cocaine and its metabolite disrupt behaviour and movement in fish in the wild, test how general this effect is across other species and systems, and use higher-resolution tracking to see whether these movement changes affect predation risk, migration, reproduction, or survival,” Bertram said. “That is really the key question now: not just whether behaviour changes, but what those changes mean ecologically.”
For example, this particular study focused on hatchery-raised smolts that were released into the wild, but future studies could test out the effects of these contaminants on fully wild populations as well, which have their own unique behavioral characteristics. Unraveling the effects of these human-sourced substances is even more urgent given that the global use of illicit drugs increased by roughly 20 percent over the last decade, suggesting that “the environmental impact of these substances is likely to grow,” according to the study.
“The behaviour and movement of wildlife underpin habitat use, feeding, predator exposure, and population connectivity, so altering these processes could have wider consequences for food webs and population dynamics,” Bertram concluded. “For species already under pressure, an added stressor like this could be highly detrimental, although the long-term effects on fisheries and ecosystems still need to be tested directly.”
2026-04-21 00:18:26
In another sign that the depravity economy has no bottom, Forbes published a story about a Louisiana man who killed eight children over the weekend, complete with a box that asked readers to predict whether Congress would do anything about gun control. Citation Needed author Molly White first spotted the box and shared it on Bluesky.

On Sunday morning, 31-year-old Shamar Elkins killed eight children ages one to fourteen, including seven of his own kids, in a rampage across three locations in Shreveport, Louisiana. Police shot Elkins to death. The Forbes story summarized these events, aggregated the Associated Press and New York Times stories about the killings, and then asked readers to predict whether or not Congress will pass stricter gun laws.
“The New York Times reported his family members said he had mental health problems and had expressed suicidal thoughts,” Forbes said. And then, below that, a “ForbesPredict” box:
“Congress WILL/ WON’T pass new gun safety legislation before 31st December 2026?” the box said, then asked readers to “make your prediction.” A green checkmark and red X pulsed in place. Sliding your cursor over each changes the construction of the sentence.
Forbes launched ForbesPredict in January as part of an effort to reverse declining traffic from search engines and keep users on its website longer. It’s a prediction market like Kalshi or Polymarket, but unlike those sites there’s no money to be won. “AI is fundamentally changing how people access information, and that shift is already starkly visible in publisher's traffic,” Nina Gould, Forbes’ Chief Innovation Officer said in a press release announcing ForbesPredict. “Our response isn’t to chase scale, but to deepen engagement. ForbesPredict gives our audience a reason to return, participate and invest their thinking—not just consume headlines.”
ForbesPredict is an attempt to gamify news consumption and keep users scanning the website. Rather than cash, players earn tokens. “Tokens that have no cash value but matter within the ForbesPredict ecosystem as a signal of judgment over time. The tokens unlock greater status, gameplay advantages, and non-monetary rewards along the way,” Gould told Publishing Insider in an interview about the launch of ForbesPredict.
As a new user who had not signed into Forbes.com, I had 800 tokens. A story about the horrifying murder of children in Louisiana invited me to predict the legislative future of gun control. It cost 100 tokens for me to predict that Congress will pass new gun laws by the end of 2026, an outcome ForbesPredict gave an 18 percent chance of happening.

For 10 tokens I could get a “hint” about potential outcomes before spending 100 to make a prediction. The next question asked if Trump would pardon Ghislaine Maxwell before the end of his term. Paying 10 tokens for the hint revealed that ForbesPredict users say there’s a 61 percent chance Trump WILL pardon Maxwell. According to the hints this is because Trump said he’s allowed to do it. There’s a daily login bonus of 800 tokens for anyone willing to make an account.
Websites like Polymarket and Kalshi allow people to bet on the outcomes of world events, including war and death. ForbesPredict is an ersatz version of Polymarket where no money changes hands and users spend tokens for clout internally on Forbes. It’s hard for me to picture the person who is interested in prediction markets without real money visiting Forbes daily to read watered-down reporting from the Associated Press and New York Times and then clicking a little box like they’re playing Candy Crush with the news cycle.
Forbes built ForbesPredict in partnership with a company called Axiom. It’s an attempt to solve the very real problem of AI devouring traffic and referrals. “AI platforms are answering the questions your journalism used to answer, permanently restructuring how information flows,” Axiom’s website said. “The quiet hope that this was a fluke. The data says otherwise. The trajectory is clear.”
The trajectory is, indeed, clear. AI does seem to be restructuring how information flows on the internet. Forbes is making a bet that it can keep its digital business afloat by serving as a low-stakes prediction market for news junkies. It’s offering gambling without the stakes and the payout and it’s offering news without first hand reporting or new information. It remains to be seen if this will help it retain readers and keep people on the site.
Forbes did not return 404 Media’s request for a comment.