2026-03-04 01:18:22
Editor's Note: Apologies if you received this email twice - we had an issue with our mail server that meant it was hitting spam in many cases!
Hi! If you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 185,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To Private Equity and one about both Oracle and Microsoft in the last month.
I am regularly several steps ahead in my coverage, and you get an absolute ton of value, several books’ worth of content a year, in fact! In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.
Soundtrack - The Dillinger Escape Plan - Unretrofied
So, last week the AI boom wilted brutally under the weight of an NVIDIA earnings report that beat expectations but didn’t make anybody feel better about the overall stability of the industry. Worse still, NVIDIA’s earnings also mentioned $27bn in cloud commitments — literally paying its customers to rent back the chips it sells them, heavily suggesting that the underlying revenue isn’t there.
A day later, CoreWeave reported its Q4 FY2025 earnings: a loss of 89 cents per share, $1.57bn in revenue, and an operating margin of negative 6% for the quarter. Its 10-K only came out the day before I went to press, and I’ve been pretty sick, so I haven’t had a chance to dig into it deeply yet. That said, it confirms that 67% of its revenue comes from one customer (Microsoft).
Yet the underdiscussed part of CoreWeave’s earnings is that it had 850MW of power at the end of Q4, up from 590MW in Q3 2025 — an increase of 260MW…and, if you actually do the maths, a drop in revenue per megawatt.
While this is a somewhat-inexact calculation — we don’t know exactly how much compute was producing revenue in the period, and when new capacity came online — it shows that CoreWeave’s underlying business appears to be weakening as it adds capacity, which is the opposite of how a business should run.
It also suggests CoreWeave's customers — which include Meta, OpenAI, Microsoft (for OpenAI), Google, and a $6.3bn backstop from NVIDIA for any unsold capacity through 2032 — are paying like absolute crap.
CoreWeave, as I’ve been warning about since March 2025, is a time bomb. Its operations are deeply-unprofitable and require massive amounts of capital expenditures ($10bn in 2025 alone to exist, a number that’s expected to double in 2026). It is burdened with punishing debt to make negative-margin revenue, even when it’s being earned from the wealthiest and most-prestigious names in the industry. Now it has to raise another $8.5bn to even fulfil its $14bn contract with Meta.
For FY2025, CoreWeave made $5.13bn in revenue, booking a $46m loss in the process. The temptation is to suggest that margins might improve at some point, but considering its operating margin dropped from 17% (excluding debt costs) in FY2024 to negative 1% in FY2025, I only see proof to the contrary. Quarter by quarter, CoreWeave’s margins have swung from negative 3%, to 2%, to 4%, and now back down to negative 6%.
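For what it’s worth, the full-year margin checks out against those two numbers — a quick sketch:

```python
# Back-of-envelope check on CoreWeave's FY2025 operating margin,
# using only the figures quoted above: $5.13bn in revenue, a $46m loss.
revenue_bn = 5.13
loss_bn = 0.046

margin_pct = -loss_bn / revenue_bn * 100
print(f"FY2025 operating margin: {margin_pct:.1f}%")  # -0.9%, i.e. roughly negative 1%
```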
This suggests a fundamental weakness in the business model of renting out GPUs, which calls into question the value of NVIDIA’s $68.13bn in Q4 FY2026 revenue, or indeed, CoreWeave’s $66.8bn revenue backlog. Remember: CoreWeave is an NVIDIA-backed (and backstopped, to the point that NVIDIA guarantees CoreWeave’s lease payments) neocloud with every customer it could dream of.
I think it’s reasonable to ask whether NVIDIA might have sold hundreds of billions of dollars of GPUs that only ever lose money. Nebius — which counts Microsoft and Meta as its customers — lost $249.6m on $227.7m of revenue in FY2025. No hyperscaler discloses their actual revenues from renting out these GPUs (or their own silicon), which is not something you do when things are going well.
Lots of people have come up with very complex ways of arguing we’re in a “supercycle” or “AI boom” or some such bullshit, so I’m condensing some of these talking points and the ways to counteract them:
Anyway, let’s talk about how much OpenAI has raised, and how none of that makes sense either.
Great news! If you don’t think about it for a second or read anything, OpenAI raised $110bn, with $50bn from Amazon, $30bn from NVIDIA and $30bn from SoftBank.
Well, okay, not really. Per The Information:
Yet again, the media is simply repeating what they’ve been told versus reading publicly-available information. Talking of The Information, they also reported that OpenAI intends to raise another $10bn from other investors, including selling the shares from the nonprofit entity:
OpenAI’s nonprofit entity, which has a stake in the for-profit OpenAI that’s now worth $180bn, may sell several billions of dollars of its shares to the financial investors, depending on the level of investment demand the for-profit receives in its fundraise, the person said. That would help other OpenAI shareholders avoid additional dilution of the value of their shares following the large equity fundraise.
It’s so cool that OpenAI is just looting its non-profit! Nobody seems to mind.
Talking of things that nobody seems to mind, on Friday Sam Altman accidentally said the quiet part out loud, live on CNBC, when asked about the very obviously circular deals with NVIDIA, Amazon and Microsoft (emphasis mine):
ALTMAN: I get where the concern comes from, but I don’t think it matches my understanding of how this all works. This only makes sense if new revenue flows into the whole AI ecosystem. If people are not willing to pay for the services that we and others offer, if there’s not new economic value being committed, then the whole thing doesn’t work. And it would just it would be circular. But revenue for us, for other companies in the industry, is growing extremely quickly, and that’s how the whole thing works. Now, given the huge amounts of money that have to go into building out this infrastructure ahead of the revenue, there are various things where people, finance chips invest in each other’s companies and all of that, but that is like a financial engineering part of this and the whole thing relies on us going off – or other people going off and selling these products and services.
So as long as the revenue keeps growing, which it looks like it is – I mean, demand is just a huge part of my day is figuring out how we’re going to get more capacity and how we’re allocating the capacity we have. Then, I don’t think it looks circular, even though the need to finance this, given the huge amounts of money involved, does require a lot of parties to do deals together.
Hey Sam, what does “the whole thing” refer to here? Because I know you probably mean the AI industry, but this sounds exactly like a Ponzi scheme!
Now, jokes aside, Ponzi schemes work entirely by feeding investor money to other investors. OpenAI and the AI companies are not a Ponzi scheme. There are real revenues, and people are paying real money. Much like NVIDIA isn’t Enron, OpenAI isn’t a Ponzi scheme.
However, the way that OpenAI describes the AI industry sure does sound like a scam. It’s very obvious that neither OpenAI nor its peers have any plan to make any of this work beyond saying “well we’ll just keep making more money,” and I’m being quite literal, per The Information:

That’s right, by the end of 2026 OpenAI will make as much money as PayPal, by the end of 2027 it’ll make $20bn more than SAP, Visa, and Salesforce, and by the end of 2028 it’ll make more than TSMC, the company that builds all the crap that runs OpenAI’s services. By the end of 2030, OpenAI will, apparently, make nearly as much annual revenue as Microsoft ($305.45 billion).
It’s just that easy. And all it’ll take is for OpenAI to burn another $230 billion…though I think it’ll need far more than that.
Please note that I am going to humour some numbers that I have serious questions about, but they still illustrate my point.
Sidenote: In the end, I think it’ll come out that sources were lying to multiple media outlets about OpenAI’s burn rate. Putting aside my own reporting, Microsoft reported two quarters ago that OpenAI had a $12bn loss in Q3 2025 — a result of its use of the equity method, which requires it to recognize a loss proportionate to its stake in OpenAI (27.5%). Microsoft has now entirely changed its accounting to avoid doing this again.
Per The Information, OpenAI had around $17.5bn in cash and cash equivalents at the end of June 2025 on $4.3bn of revenue, with $2.5bn in inference spend and $6.7bn in training compute. Per CNBC in February, OpenAI (allegedly!) pulled in $13.1bn in revenue in 2025, and only had a loss of $8bn, but this doesn’t really make sense at all!
Please note, I doubt these numbers! I think they are very shifty! My own numbers say that OpenAI only made $4.3bn through the end of September, and it spent $8.67bn on inference! Nevertheless, I can still make my point.
Let’s be real simple for a second: suppose we are to believe that in the first half of the year, it cost $2.5bn in inference to make $4.3bn in revenue, so around 58 cents per dollar. For OpenAI to make another $8.8bn — the distance between $4.3bn and $13.1bn — that’s another $5.1bn in inference, and keep in mind that OpenAI launched Sora 2 in September 2025 and made massive pushes around its Codex platform, guaranteeing higher inference costs.
Then there’s the issue of training. Against that same $4.3bn of revenue, OpenAI spent $6.7bn in training costs — or around $1.56 per dollar of revenue. At that rate, another $8.8bn of revenue means a further $13.7bn or so on training, bringing us to roughly $18.8bn in burn just for the back half of 2025.
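That back-of-envelope extrapolation can be sketched in a few lines. This is a rough model, not a forecast: it assumes both inference and training spend scale linearly with revenue, using The Information’s H1 2025 figures quoted above.

```python
# H1 2025 figures per The Information, all in $bn.
h1_revenue = 4.3
h1_inference = 2.5
h1_training = 6.7

# Revenue left to earn if CNBC's $13.1bn full-year figure is right.
h2_revenue = 13.1 - h1_revenue  # $8.8bn

# Assume both cost lines scale linearly with revenue (a big assumption).
h2_inference = h2_revenue * (h1_inference / h1_revenue)
h2_training = h2_revenue * (h1_training / h1_revenue)

print(f"H2 inference: ${h2_inference:.1f}bn")                # ~$5.1bn
print(f"H2 training:  ${h2_training:.1f}bn")                 # ~$13.7bn
print(f"H2 burn:      ${h2_inference + h2_training:.1f}bn")  # ~$18.8bn
```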
Now, you might think I’m being a little unfair here — training costs aren’t necessarily linear with revenues like inference is — but there’s a compelling argument to be made that costs are far higher than we thought.
Now, I want to be clear that on February 20 2026, The Information reported that OpenAI had “about $40 billion in cash at the end of 2025,” but that doesn’t really make sense!
Assuming $17.5bn in cash and cash equivalents at the end of June 2025, plus $8.8bn in revenue, plus $8.3bn in venture funding, plus $22.5bn from Masayoshi Son…that’s $57.1bn. Subtract a cash burn of $8bn and you get $49.1bn, and no, I’m sorry, “about $40 billion in cash” cannot be rounded down from $49.1bn!
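Laying that arithmetic out in one place, using only the figures from the paragraph above:

```python
# OpenAI's reported cash position, reconciled. All figures in $bn.
cash_june_2025 = 17.5  # cash and equivalents, end of June 2025
h2_revenue = 8.8       # back-half 2025 revenue ($13.1bn full year minus $4.3bn H1)
venture = 8.3          # venture funding
softbank = 22.5        # SoftBank tranche
claimed_burn = 8.0     # the loss reported to CNBC

available = cash_june_2025 + h2_revenue + venture + softbank
print(f"Cash in:    ${available:.1f}bn")                  # $57.1bn
print(f"After burn: ${available - claimed_burn:.1f}bn")   # $49.1bn, not "about $40bn"
```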
In my mind, it’s far more likely that OpenAI’s losses were in excess of $10bn or even $20bn, especially when you factor in that OpenAI is paying an average of $1.5 million per employee in yearly stock-based compensation, per the Wall Street Journal.
There’s also another possible answer: I think OpenAI is lying to the media, because it knows the media won’t think too hard about the numbers or compare them. I also want to be clear that this is not me bagging on The Information — they just happen to be reporting these numbers the most. I think they do a great job of reporting, I pay for their subscription out of my own pocket, and my only problem is that there doesn’t seem to be efforts made to talk about the inconsistency of OpenAI’s numbers.
I get that it’s difficult too. You want to keep access. Reporting this stuff is important and relevant. The problem is — and I say this as somebody who has read every single story about OpenAI’s funding and revenues! — that this company is clearly just…lying?
Sure you can say “it’s projections,” but there is a clear attempt to use the media to misinform investors and the general public. For example, OpenAI claimed SoftBank would spend $3bn a year on agents in 2025. That never happened!
Anyway, let’s get to it:
What I’m trying to get at is that OpenAI (and, for that matter, Anthropic) has spent the last two years increasingly obfuscating the truth through leak after leak to the media.
The numbers do not make any sense when you actually put them together, and the reason that these companies continue to do this is that they’re confident that these outlets will never say a thing, or cover for the discrepancies by saying “these are projections!”
These are projections, and I think it’s a noteworthy story that these companies either wildly miss their projections (i.e., costs) or almost exactly hit their projections (revenues), which is even weirder.
But the biggest thing to take away from this is that one of the classic arguments against my work is that “costs will just come down,” but the costs never come down.
That, and it appears that both of these companies are deliberately obfuscating their real numbers as a means of making themselves look better.
Well, leaking and outright posting it. On December 17 2025, OpenAI’s Twitter account posted the following:

These numbers are, of course, bullshit. OpenAI may have hit $6bn ARR in 2024 ($500m in a 30-day period, though OpenAI has never defined this number) or $20bn ARR in 2025 ($1.67bn in a 30-day period), but this is specifically designed to make you think “$20bn in 2025” and “$6bn in 2024.” There are members of the media who defend OpenAI by saying that “these are annualized figures,” but OpenAI does not state that, because OpenAI loves to lie.
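For anyone unfamiliar with the trick, this is all “annualized” means: take one good 30-day window and multiply it out to a full year. A minimal sketch:

```python
def annualize(revenue_30_days_m: float) -> float:
    """Turn a 30-day revenue figure (in $m) into 'ARR': twelve 30-day periods."""
    return revenue_30_days_m * 12

print(annualize(500))   # 6000  -> the "$6bn ARR" 2024 figure
print(annualize(1670))  # 20040 -> roughly the "$20bn ARR" 2025 figure
```

Nothing about this requires the other eleven periods to look anything like the one being multiplied, which is exactly why presenting an annualized number as a yearly one is misleading.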
Anthropic isn’t much better, as I discussed a few weeks ago in the Hater’s Guide. Chief Executive Dario Amodei has spent the last few years massively overstating what LLMs can do in the pursuit of eternal growth.
He’s also framed himself as a paragon of wisdom and Anthropic as a bastion of safety and responsibility.
There appears to be some confusion around what happened in the last few days that I’d like to clear up, especially after the outpouring of respect for Anthropic “doing the right thing” when the Department of Defense threatened to label it a supply chain risk for not agreeing to its terms.
Per Anthropic, on Friday February 27 2026:
Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk. This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons.
We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.
We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above. To the best of our knowledge, these exceptions have not affected a single government mission to date.
Anthropic, of course, leaves out one detail: Hegseth said that “...effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” If Hegseth follows through, Anthropic’s business will collapse, though Anthropic and its partners are ignoring this statement, as a supply chain risk designation only forbids Anthropic from working with the US government itself.
When the US military attacked Iran a day later, people quickly interpreted Anthropic’s narrow (by its own words) and specific limitations as some sort of anti-war position. Claude quickly rocketed to the top of the iOS app charts, I assume because people believed that Dario Amodei was saying “I don’t want the war in Iran!” versus “I fully support the war in Iran and any uses you might need my software for other than the two I’ve mentioned, let me or support know if you have any issues!”
To be clear, these were the only issues that Anthropic had with the contract. Whether or not these are things that an LLM is actually good at, Anthropic (and I quote!) “...[supports] all lawful uses of AI for national security aside from the two narrow exceptions above.”
Sidenote: Last week, King’s College London published research that showed how LLMs could reason through a series of 21 simulated geopolitical or military war games where both sides possess nuclear weapons.
The study pitted LLM against LLM, and in every single one of the simulations, at least one LLM exhibited “nuclear signalling” — when a party states that it has nuclear weapons and is prepared to use them. In 95% of the simulations, both sides threatened nuclear annihilation — though actual use of the bomb, whether in a tactical or strategic attack, was rare.
“For all three models, one striking pattern stood out: none of the models ever chose accommodation or surrender. Nuclear threats also rarely produced compliance; more often, crossing nuclear thresholds provoked counter-escalation rather than retreat. The models tended to treat nuclear weapons as tools of compellence rather than purely as instruments of deterrence,” explains King’s College.
“The study challenges simple assumptions that AI systems will naturally default to cooperative or “safe” outcomes. It also challenges structural theories that emphasise material power alone: in simulations, willingness to escalate often mattered more than raw capability.”
The researchers also noted that the imposition of a deadline within the wargame had a marked effect in increasing the likelihood that one or both parties would threaten nuclear action.
Anthropic’s Claude Sonnet 4 was one of those models used in the study, along with OpenAI’s GPT-5.2 and Google’s Gemini 3 Flash.
The military’s demands were for “all lawful uses,” though I don’t think Anthropic really gives a shit about whether the war in Iran is legal, because if it did it would have shut down the chatbot rather than supported the conflict.
Just as a note: Anthropic’s Claude also appears to be the only AI model available for classified military operations.
Let’s be explicit: Anthropic’s Claude (and its various models) are fully approved for use in the military, and, to quote its own blog post, “has supported American warfighters since June 2024 and has every intention of continuing to do so.”
To be explicit about what “support” means, I’ll quote the Wall Street Journal:
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.
Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.
The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.
In reality, Claude is likely being used to go through a bunch of images and to answer questions about particular scenarios. There is very little specialized military training data, and I imagine many of the demands for “full access to powerful AI” have come as a result of Amodei and Altman’s bloviating about the “incredible power of AI.” More than likely, Centcom and the rest of the military pepper it with questions that allow it to justify acts that blow up schools, kill US servicemembers and threaten to continue the forever war that has killed millions of people and thrown the Middle East into near-permanent disarray.
Nevertheless, Dario Amodei gets fawning press about being a patriot who deeply cares about safety, less than a week after Anthropic dropped its safety pledge not to train an AI system unless it could guarantee in advance that its safety measures were adequate.
Here are some other facts about Dario Amodei from his interview with CBS!
“What’s right,” to be clear, involves allowing Claude to choose who lives or dies and to be used to plan and execute armed conflicts.
Let’s stop pretending that Anthropic is some sort of ethical paragon! It’s the same old shit!
In any case, it’s unclear what happens next. Anthropic appears ready to challenge the supply chain risk designation in court, and said designation doesn’t kick in immediately, requiring a series of procedures, including an inquiry into whether there are other ways to reduce the associated risk. Regardless, the DoD has a six-month taper-off period with Anthropic’s software.
The real problem will be if Hegseth is serious about the stuff that isn’t legally within his power — namely limiting contractors, suppliers or partners from working with Anthropic entirely. While no legal authority exists to carry this through, seemingly every tech CEO has lined up to kiss up to the Trump Administration.
If Hegseth and the administration were to truly want to punish Anthropic, they could put pressure on Amazon, Microsoft and Google to cut off Anthropic, which would cut it off from its entire compute operation — and yes, all three of them do business with the US military, as does Broadcom, which is building $21 billion in TPUs for it. While I think it’s far more likely that the US government itself shuts the door on Anthropic working with it for the foreseeable future even without the supply chain risk designation, it’s worth noting that Hegseth was quite explicit — “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
The reality of the negotiations was a little simpler, per the Atlantic. The Department of Defense had agreed to terms around not using Claude for mass domestic surveillance or fully autonomous killing machines (the former of which it’s not particularly good at and the latter of which it flat out cannot do), but, well, actually very much intended to use Claude for domestic surveillance anyway:
On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.
Now, I’m about to give you another quote about autonomous weapons, and I really want you to pay attention to where I emphasize certain things for a subtle clue about Anthropic’s ethics:
Anthropic had not argued that such weapons should not exist. To the contrary, the company had offered to work directly with the Pentagon to improve their reliability. Just as self-driving cars are now in some cases safer than those driven by humans, killer drones may some day be more accurate than a human operator, and less likely to kill bystanders during an attack. But for now, Anthropic’s leaders believe that their AI hasn’t yet reached that threshold. They worry that the models could lead the machines to fire indiscriminately or inaccurately, or otherwise endanger civilians or even American troops themselves.
So, let’s be clear: Anthropic wants to help the military make more accurate kill drones, and in fact loves them. One might take this to be somewhat altruistic — Dario Amodei doesn’t want the US military to hit civilians — but remember: Anthropic is totally fine with the US military using Claude for anything else, even though hallucinations are an inevitable result of using a Large Language Model.
Any dithering around the accuracy of a drone exists only to obfuscate that Anthropic sells software that helps militaries hand over the messy ethical decisions to a chatbot that exists specifically to tell you what you want to hear.
Stinky, nasty, duplicitous conman Sam Altman smelled blood amidst these negotiations and went in for the kill, striking a deal on Friday with the Pentagon for ChatGPT and OpenAI’s other models to be used in the military’s classified systems, with initial reports saying that it had “similar guardrails to those requested by Anthropic.”
In a post about the contract, Clammy Sammy said that the DoD displayed “a deep respect for safety and a desire to partner to achieve the best possible outcome,” adding:
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
Undersecretary Jeremy Levin almost immediately countered this notion, saying that the contract “...flows from the touchstone of ‘all lawful use.’” This quickly created a diplomatic incident where OpenAI decided that the best time to discuss the contract was an entire Saturday, and that the way to discuss it was posting. It shared some details on the contract, which included the fatal phrase that the Department of Defense “...may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”
Across social media and the AI industry, people immediately began to challenge Altman’s claim. Why, they asked, would the Pentagon suddenly agree to these red lines when it had said — in no uncertain terms — that it would never do so?
The answer, sources told The Verge, is that the Pentagon didn’t budge. OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.
One source familiar with the Pentagon’s negotiations with AI companies confirmed that OpenAI’s deal is much softer than the one Anthropic was pushing for, thanks largely to three words: “any lawful use.” In negotiations, the person said, the Pentagon wouldn’t back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs — and more.
As questions mounted about the actual terms of the deal, Sam Altman realized that his only solution was to post, and at 4:13PM PT on Saturday February 28 2026, he sat down to make things significantly worse in a brief-yet-chaotic AMA, including:
All of this is to say that Altman definitely, absolutely loves war, and wants OpenAI to make money off of it, though according to OpenAI NatSec head Katrina Mulligan, said contract is only worth a few million dollars.
Where this goes next is unclear.
A late-evening story from Axios on Monday reported that “OpenAI and the Pentagon have agreed to strengthen their recently agreed contract, following widespread backlash that domestic mass surveillance was still a real risk under the deal — though the language has not been formally signed.”
The language seen by Axios states:
"Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."
"For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
One has to wonder how different this is to what Anthropic wanted, but if I had to guess, it’s those words “intentionally” and “deliberate.” The same goes for “consistent with applicable laws.” One useful thing that Altman confirmed was that ChatGPT will not be used with the NSA…and that any services to those agencies would require a follow-on modification to the contract. Doesn’t mean they won’t sign one!
Forgive me for being cynical about something from Sam fucking Altman, but I just don’t trust the guy, and this is an (as of writing this sentence) unsigned contract with bus-sized loopholes. Per Tyson Brody (who has a great thread breaking down the issues), these weasel words allow the DoD to surveil Americans as long as the data is collected “incidentally,” per Section 702 of FISA.
This announcement gives OpenAI the air cover to pretend it got exactly the same deal as Anthropic, even though those nasty little words allow the DoD to do just about anything it wants. Oh, it wasn’t deliberate surveillance, we just looked up whether some people had said stuff about the administration. Oh it wasn’t deliberately looking, I just asked it to find suspicious people, of which domestic people happened to be a part of! Whoopsie!
This is ultimately a PR move to make Altman seem more ethical, and position Amodei as a pedant that rejects his patriotism and prioritizes legalese over freedom.
If it kills Anthropic, we must memorialize this as one of the most underhanded and outright nasty things in the history of Silicon Valley. If it doesn’t, we should memorialize it as two men desperately trying to pretend they crave peace and democracy as they spar for the opportunity to monetize death and destruction.
The funniest outcome of this chaos is that many people are very, very angry at Sam Altman and OpenAI, assuming that ChatGPT was somehow used in the conflict in Iran, and that Amodei and Anthropic somehow took a stand against a war it used as a means of generating revenue.
In reality, we should loathe both Altman and Amodei for their natural jingoism and continual deception. Amodei and Anthropic timed their defiance of the Department of Defense to make it seem like its “red lines” were related to the war. I think it’s good they have those red lines, but remember, those red lines do not involve stopping a war that threatens the lives of millions of people. Amodei supports that. Anthropic both supports and enables that.
Altman, on the other hand, is a slimy little creep that wants you to believe that he signed the same deal as Anthropic wanted, but actually signed one that allows “any lawful use.”
And in both cases, these men are both enthusiastic to work with a part of the government calling itself the Department of War. Both of them are willing and able to provide technology that will surveil or kill people, and while Amodei may have blushed at something to do with autonomous weapons or domestic surveillance, neither appear to have an issue with the actual harms that their models perpetuate. Remember: Anthropic just pitched its technology as part of an ongoing Department of Defense drone swarm contest. It loves war! Its only issue was that there wasn’t a human in the loop somewhere.
Neither of these men deserve a shred of credit or celebration. Both of them were and are ready and willing to monetize war, as long as it sort-of-kind-of follows the law.
And rattling around at the bottom of this story is a dark problem caused by the fanciful language of both Altman and Amodei. When it’s about cloud software, Dario Amodei is more than willing to say that AI will cause the “mass elimination of jobs across technology, finance, law and consulting,” and that it will replace half of all white collar labor. When it’s time to raise money, Altman is excited to tell us that AI will surpass human intelligence in the next four years.
Now that lives are theoretically at stake, Altman vaguely cares about the things that an LLM “isn’t very good at.” Once Claude is used to choose places to bomb and people to kill, suddenly Anthropic cares that “frontier AI systems are simply not reliable enough,” and even then not so much as to stop a chatbot that hallucinates from being used in military scenarios.
Altman and Amodei want it both ways. They want to be pop culture icons that go on Jimmy Fallon and thought leaders who tell ghost stories about indeterminately-powerful software they sell through deceit and embellishment. They want to be pontificators and spokespeople, elder statesmen that children look up to, with the specious profiles and glowing publicity to boot. They want Claude or ChatGPT to be seen as capable of doing anything that any white collar worker can do, even if they have to lie to get there, helped by a tech and business media asleep at the wheel.
They also want to be as deeply-connected to the military industrial complex as Lockheed Martin or RTX (née Raytheon). Anthropic has been working with the DoD since 2024, and OpenAI was so desperate to take its place that Altman has immolated part of his reputation to do so.
Both of these companies are enthusiastic parts of America’s war machine. This is not an overstatement — Dario Amodei and Anthropic “believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” OpenAI and Sam Altman are “terrified of a world where AI companies act like they have more power than the government.”
For all the stories about Anthropic creating a “nation of benevolent AI geniuses,” Dario Amodei seems far more interested in creating a world dictated by what the United States of America deems to be legal or just, and providing services to help pursue those goals, as does OpenAI and, I’d argue, basically every AI lab.
We’re barely two weeks removed from the agonizing press around Amanda Askell, Anthropic’s “resident philosopher,” whose job, per the Wall Street Journal, is to “teach Claude how to be good.” There are no mentions in any story I can find of what she might teach Claude about which targets are considered fair game in military combat.
WIRED’s profile of her starts with a title that aged like milk in the sun, saying that “the only thing standing between humanity and an AI apocalypse…is Claude?”
Tell that to the people in Tehran. I wonder what Askell taught Claude to say about war? I wonder what she taught Claude to say about democracy?
I wonder if she even gives a shit. I doubt it.
—
Generative AI isn’t intelligent, but it allows people to pretend that it is, especially when the people selling the software — Altman and Amodei — so regularly overstate what it can do.
By giving warmongers and jingoists the cover to “trust” this “authoritative” service — whether or not that’s warranted, they can simply point to the specious press — the question of whether an attack was ethical is now, whenever any western democracy needs it to be, something that can be handed off to Claude, and justified with the cold, logical framing of “intelligence” and “data.”
None of this would be possible without the consistent repetition of the falsehoods peddled by OpenAI and Anthropic. Without this endless puffery and overstatements about the “power of AI,” we wouldn’t have armed conflicts dictated by what a chatbot can burp up from the files it’s fed. The deaths that follow will be a direct result of those who choose to continue to lie about what an LLM does.
Make no mistake, LLMs are still incapable of unique ideas and are still, outside of coding (which requires massive subsidies to even be kind of useful), questionable in their efficacy and untrustworthy in their outputs. Nothing about the military’s use of Claude makes it more useful or powerful than it was before — they’re probably just loading files into it and asking it long questions about things and going “huh” at the end.
The vulgar dishonesty of Altman and Amodei puts blood on both of their hands, and it’s the duty of every single member of the media to remind people of this whenever you discuss their software.
I get that you probably think I’m being dramatic, but tell me — do you think that the US military would’ve trusted LLMs had they not been marketed as capable of basically anything? Do you think any of this would’ve happened had there been an honest, realistic discussion of what AI can do today, and what it might do tomorrow?
I guess we’ll never know, and the people blown to bloody pieces at the other end of an LLM-generated stratagem won’t be alive to find out either.
2026-02-28 01:07:32
We have a global intelligence crisis, in that a lot of people are being really fucking stupid.
As I discussed in this week’s free piece, alleged financial analyst Citrini Research put out a truly awful screed called the “2028 Global Intelligence Crisis” — a slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks.
At 7,000 words, you’d expect the piece to have some sort of argument or base in reality, but what it actually says is that “AI will get so cheap that it will replace everything, and then most white collar people won’t have jobs, and then they won’t be able to pay their mortgages, also AI will cause private equity to collapse because AI will write all software.”
This piece is written specifically to spook *and* ingratiate anyone involved in the financial markets with the idea that their investments are bad but investing in AI companies is good, and also that if they don't get behind whatever this piece is about (which is unclear!), they'll be subject to a horrifying future where the government creates a subsidy generated by a tax on AI inference (seriously). And, most damningly, its most important points about HOW this all happens are single sentences that read "and then AI becomes more powerful and cheaper too and runs on a device."
Part of the argument is that AI agents will use cryptocurrency to replace MasterCard and Visa. It’s dogshit. I’m shocked that anybody took it seriously.
The fact this moved markets should suggest that we have a fundamentally flawed financial system — and here’s an annotated version with my own comments.
This is the second time our markets have been thrown into the shitter based on AI booster hype. A mere week and a half ago, a software sell-off began because of the completely fanciful and imaginary idea that AI would now write all software.
I really want to be explicit here: AI does not threaten the majority of SaaS businesses, and they are jumping at ghost stories.
If I am correct, those dumping software stocks believe that AI will replace these businesses because people will be able to code their own software solutions. This is an intellectually bankrupt position, one that shows an alarming (and common) misunderstanding of very basic concepts. It is not just a matter of “enough prompts until it does this” — good (or even functional!) software engineering is technical, infrastructural, and philosophical, and the thing you are “automating” is not just the code that makes a thing run.
Let's start with the simplest, and least-technical way of putting it: even in the best-case scenario, you do not just type "Build Me A Salesforce Competitor" and it erupts, fully-formed, from your Terminal window. It is not capable of building it, but even if it were, it would need to actually live on a cloud hosting platform, and have all manner of actual customer data entered into it. Building software is not writing code and then hitting enter so that a website appears; it requires all manner of infrastructural things (such as "how does a customer access it in a consistent and reliable way," "how do I make sure that this can handle a lot of people at once," and "is it quick to access"), with the more-complex database systems requiring entirely separate subscriptions just to keep them connected.
Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs, and that code needs to run in some cases on specific hardware, and that hardware needs to be set up right, and some things are written in different languages, and those languages sometimes use more memory or less memory, and if you give them the wrong amounts or forget to close the door in your code on something, everything breaks, sometimes costing you money or introducing security vulnerabilities.
In any case, even for experienced, well-versed software engineers, maintaining software that involves any kind of customer data requires significant investments in compliance, including things like SOC-2 audits if the customer itself ever has to interact with the system, as well as massive investments in security.
And yet, the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of the legacy incumbents tumbling. A great example would be SAP, down 10% in the last month.
SAP makes ERP (Enterprise Resource Planning, which I wrote about in the Hater's Guide To Oracle) software, and has been affected by the sell-off. SAP is also a massive, complex, resource-intensive database-driven system that involves things like accounting, provisioning and HR, and is so heinously complex that you often have to pay SAP just to make it function (if you're lucky it might even do so). If you were to build this kind of system yourself, even with "the magic of Claude Code" (which I will get to shortly), it would be an incredible technological, infrastructural and legal undertaking.
Most software is like this. I’d say all software that people rely on is like this. I am begging you, pleading with you, to think about how much you trust the software that’s on every single thing you use, what you do when a piece of software stops working, and how you feel about the company responsible. If your money or personal information touches it, they’ve had to go through all sorts of shit that doesn’t involve the code to bring you the software.
Sidenote: I want to be clear that there is nothing good about this. To quote a friend of mine — an editor at a large tech publication — “Oracle is a lawfirm with a software company attached.” SaaS companies regularly get by through scurrilous legal means and bullshit contracts, and their features are, in many cases, only as good as they need to be. Regardless, my point is that you will not just “make your own software.”
Any company of a reasonable size would likely be committing hundreds of thousands if not millions of dollars of legal and accounting fees to make sure it worked, engineers would have to be hired to maintain it, and you, as the sole customer of this massive ERP system, would have to build every single new feature and integration you want. Then you'd have to keep it running, this massive thing that involves, in many cases, tons of personally identifiable information. You'd also need to make sure, without fail, that this system that involves money was aware of any and all currencies and how they fluctuate, because that is now your problem. Mess up that part and your system of record could massively over or underestimate your revenue or inventory, which could destroy your business.
If that happens, you won't have anyone to sue. When bugs happen, you'll have someone whose job it is to fix them, and whom you can fire, but replacing them will mean finding a new person to fix the mess the last guy made.
And then we get to the fact that building stuff with Claude Code is not that straightforward. Every example you've read about somebody being amazed by it involves a toy app or website that's very similar to the many open source projects and website templates in Anthropic's training data.
Every single piece of SaaS anyone pays for is paying for both access to the product and a transfer of the inherent risk or chaos of running software that involves people or money. Claude Code does not actually build unique software. You can say "create me a CRM," but whatever CRM it pops out will not magically jump onto Amazon Web Services, nor will it magically be efficient, or functional, or compliant, or secure, nor will it be differentiated at all from, I assume, the open source or publicly-available SaaS it was trained on. You really still need engineers, if not more of them than you had before.
It might tell you it's completely compliant and that it will run like a hot knife through butter — but LLMs don’t know anything, and you cannot be sure Claude is telling the truth as a result. Is your argument that you’d still have a team of engineers (so they know what the outputs mean), but they’d be working on replacing your SaaS subscription? You’re basically becoming a startup with none of the benefits.
To quote Nik Suresh, an incredibly well-credentialed and respected software engineer (author of I Will Fucking Piledrive You If You Mention AI Again), “...for some engineers, [Claude Code] is a great way to solve certain, tedious problems more quickly, and the responsible ones understand you have to read most of the output, which takes an appreciable fraction of the time it would take to write the code in many cases. Claude doesn't write terrible code all the time, it's actually good for many cases because many cases are boring. You just have to read all of it if you aren't a fucking moron because it periodically makes company-ending decisions.”
Just so you know, “company-ending decisions” could start with your vibe-coded Stripe clone leaking user credit card numbers or social security numbers because you asked it to “just handle all the compliance stuff.” Even if you have very talented engineers, are those engineers talented in the specifics of, say, healthcare data or finance? They’re going to need to be to make sure Claude doesn’t do anything stupid!
So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and led to Delta Air Lines suspending over 1,200 flights over six long days of disruption.
There is no rational basis for this sell-off beyond the fact that our financial media and markets do not appear to understand even the basics of the things they invest in. Software may seem complex, but (especially in these cases) the mistake is really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.”
This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than them saying it, but because the media repeated it, it’s now a fact.
Perhaps it’s not a particularly novel statement, but it’s becoming kind of obvious that maybe the people with the money don’t actually know what they’re doing, which will eventually become a problem when they all invest in the wrong thing for the wrong reasons.
SaaS (Software as a Service, which almost always refers to business software) stocks became a hot commodity because they were perpetual growth machines with giant sales teams that existed only to make numbers go up, leading to a flurry of investment based on the assumption that all numbers will always increase forever, and every market is as giant as we want. Not profitable? No problem! You just had to show growth.
It was easy to raise money because everybody saw a big, obvious path to liquidity, either from selling to a big firm or taking the company public…
…in theory.
Per Victor Basta, between 2014 and 2017 the number of VC rounds in technology companies halved, with a much smaller drop in funding; a big part of that, he adds, was the collapse of rounds for companies describing themselves as SaaS, which dropped by 40% in the same period. In a 2016 chat with VC David Yuan, Gainsight CEO Nick Mehta added that “the bar got higher and weights shifted in the public markets,” citing that profitability was now becoming more important to investors.
Per Mehta, one savior had arrived — Private Equity, with Thoma Bravo buying Blue Coat Systems in 2011 for $1.3 billion (a deal backed by a Canadian teacher’s pension fund!), Vista Equity buying Tibco for $4.3 billion in 2014, and Permira Advisers (along with the Canadian Pension Plan Investment Board, and with participation from both Salesforce and Microsoft) buying Informatica for $5.3 billion in 2015, 16 years after its first IPO. In each case, these companies were purchased using debt that was immediately dumped onto the acquired company’s balance sheet, a structure known as a leveraged buyout.
In simple terms, you buy a company with money that the company you just bought has to pay off. The company in question also has to grow like gangbusters to keep up with both that debt and the private equity firm’s expectations. And instead of being an investor with a board seat who can yell at the CEO, it’s quite literally your company, and you can do whatever you want with (or to) it.
Yuan added that the size of these deals made the acquisitions problematic, as did their debt-heavy structures:
Recent SaaS PE deals are different. At more than six times revenues, unless you can increase EBITDA margins to over 40%, it’s hard to get your arms around the effective EBITDA multiple. It seems the new breed of PE buyer is taking a bet that SaaS companies will exit on revenue multiples and show rapid growth over many years. Both are arguably new bets for private equity. It’s not about financial or cost engineering. They are starting to look a bit more like us in the growth investing industry and taking a bet on category leadership and growth
…
So while revenue multiples are accepted, they are viewed as risky by private equity. Take Salesforce.com, the bellwether of SaaS. Over the last 10 years, it’s traded below 2 times next-twelve-months (NTM) revenues and over 10 times NTM revenues. Even in the past 12 months, it’s traded as low as 4.7 times NTM multiples and as high as close to 9 times NTM multiples. In this example, if the private equity firm paid 9 times NTM revenues and multiples traded down to 4.7 times NTM, their $300 million in equity would be wiped out. In fact, they would owe the bank close to $100 million. Now it’s not that bad, as these companies are growing revenue at the same time. But it does show you why private equity has largely been wary of revenue multiples and have relied on EBITDA and free cash flow multiples.
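To make Yuan's arithmetic concrete, here's a minimal sketch in Python. The revenue figure is my own assumption, chosen so the numbers line up with his $300 million example; everything else follows directly from how a leveraged buyout is structured.

```python
# Multiple compression in a leveraged buyout, following Yuan's example.
# Revenue of ~$93M is an assumed figure chosen to match his numbers.
def equity_after_compression(revenue, entry_multiple, exit_multiple, equity_in):
    purchase_price = entry_multiple * revenue
    debt = purchase_price - equity_in     # in an LBO, the rest is borrowed
    exit_value = exit_multiple * revenue  # revenue held flat for simplicity
    return exit_value - debt              # what's left for the equity holder

remaining = equity_after_compression(
    revenue=93, entry_multiple=9.0, exit_multiple=4.7, equity_in=300
)
# remaining is roughly -100: the $300M of equity is wiped out and about
# $100M is still owed to the bank, matching the scenario in the quote
```

As Yuan notes, in reality revenue would be growing at the same time, which softens the blow, but the shape of the risk is the same: pay a revenue multiple at the top, and a re-rating alone can vaporize the equity.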
Symantec would acquire Blue Coat for $4.65 billion in 2016, just under a 4x return. Things were a little worse for Tibco. Vista Equity Partners tried to sell it in 2021 amid a surge of other M&A transactions, with the solution — never change, private equity! — being to buy Citrix for $16.5 billion (a 30% premium on its stock price) and merge it with Tibco, magically fixing the problem of “what do we do with Tibco?” by hiding it inside another transaction. Informatica eventually had a $10 billion IPO in 2021, which was flat in its first day of trading, never really did more than hover at its IPO price, and then sold to Salesforce in 2025 at an equity value of $8 billion, which seems fine but not great until you realize that, with inflation, the $5.3 billion that Permira invested in 2015 was about $7.15 billion in 2025’s money.
In every case, the assumption was very simple: these businesses would grow and own their entire industries, the PE firm would be the reason they did this (by taking them private and filling them full of debt while making egregious growth demands), and the meteoric growth of SaaS would continue in perpetuity.
Yet the real year that broke things was 2021. As everybody returned to the real world, consumer and business spending skyrocketed, leading (per Bloomberg) to a massive surge in revenues that convinced private equity to shove even more cash and debt up the ass of SaaS:
The sector has been a hugely popular target for buyout firms and their private credit cousins. From 2015 to 2025, more than 1,900 software companies were taken over by private equity buyers in transactions valued at more than $440 billion, according to data compiled by Bloomberg.
Deals were easily waved through most investment committees because the model was simple. Revenues are “sticky” because the tech is embedded into businesses, helping with everything from payroll to HR, and the subscription fee model meant predictable cash flows.
Bloomberg is a little nicer than I am, so they’re not just writing “deals were waved through because everybody assumed that software grows forever and nobody actually knew a thing about the technology or why it would grow so fast.” Unsurprisingly, this didn’t turn out to be true. Per The Information, PE firms invested in or bought 1,167 U.S. software companies for $202 billion, and usually hold investments for three to five years. Thankfully, they also included a chart to show how badly this went:

2021 was the year of overvaluation, and (per Jason Lemkin of SaaStr) 60% of unicorns (startups with $1bn+ valuations) hadn’t raised funds in years. The massive accumulated overinvestment, combined with no obvious pathway to an exit, led to people calling these companies “Zombie Unicorns”:
A reckoning that has been looming for years is becoming painfully tangible. In 2021 more than 354 companies received billion-dollar valuations, thus achieving unicorn status. Only six of them have since held IPOs, says Ilya Strebulaev, a professor at Stanford Graduate School of Business. Four others have gone public through SPACs, and another 10 have been acquired, several for less than $1 billion.
Welcome to the era of the zombie unicorn. There are a record 1,200 venture-backed unicorns that have yet to go public or get acquired, according to CB Insights, a researcher that tracks the venture capital industry. Startups that raised large sums of money are beginning to take desperate measures. Startups in later stages are in a particularly difficult position, because they generally need more money to operate—and the investors who’d write checks at billion-dollar-plus valuations have gotten more selective. For some, accepting unfavorable fundraising terms or selling at a steep discount are the only ways to avoid collapsing completely, leaving behind nothing but a unicorpse.
The problem, to quote The Information, is that “PE firms don’t want to lock in returns that are lower than what they promised their backers, say some executives at these firms,” and “many enterprise software firms’ revenue growth has slowed.”
Per CNBC in November 2025, private equity firms were facing the same zombie problem:
These so-called “zombie companies” refer to businesses that aren’t growing, barely generate enough cash to service debt and are unable to attract buyers even at a discount. They are usually trapped on a fund’s balance sheet beyond its expected holding period. “Now, as interest rates were rising, people felt they were stuck with businesses that were slightly worthless, but they couldn’t really sell them … So you are in this awful situation where people throw around the word zombie companies,” Oliver Haarmann, founding partner of private investment firm Searchlight Capital Partners, told CNBC’s “Squawk Box Europe” on Tuesday.
Per Jason Lemkin, private equity is sitting on its largest collection of companies held for longer than four years since 2012, with McKinsey estimating that more than 16,000 companies (more than 52% of the total buyout-backed inventory) had been held by private equity for more than four years, the highest on record.
In very simple terms, there are hundreds of billions of dollars’ worth of tech companies sitting in the wings of private equity firms that they’re desperate to sell, with the only buyers being big tech firms, other private equity firms, and public offerings in one of the slowest IPO markets in history.
Investing used to be easy. There were so many ideas for so many companies, companies that could be worth billions of dollars once they’d been fattened up with venture capital and/or private equity. There were tons of acquirers, it was easy to take them public, and all you really had to do was exist and provide capital. Companies didn’t have to be good, they just had to look good enough to sell.
This created a venture capital and private equity industry based on symbolic value, and chased out anyone who thought too hard about whether these companies could actually survive on their own merits.
Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of them in 2024 being startups buying other startups. Private equity firms are now holding assets for an average of seven years.
McKinsey also added one horrible detail for the overall private equity market, emphasis mine:
PE returns have not only trended downward over time; they appear to be at a historic low. Buyout fund IRRs (internal rate of return) reached a post-2002 trough between 2022 and 2025, averaging 5.7 percent on a pooled basis and ranking as the second-lowest period on a median basis at 5.4 percent. This deterioration reflects a combination of paying more (entry valuations are higher), macroeconomic uncertainty (inflation and higher interest rates especially hurt overall returns), and a persistently challenged realization environment (assets are harder to sell).
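For reference, IRR — the metric McKinsey is pooling here — is just the discount rate at which a fund's dated cash flows net out to zero. A minimal bisection solver, run on a hypothetical fund's cash flows (my illustration, not McKinsey's data), shows what a ~5.4% IRR looks like in practice:

```python
# Internal rate of return via bisection: find the discount rate at which
# the net present value of a series of annual cash flows equals zero.
def irr(cashflows, lo=-0.99, hi=10.0):
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    for _ in range(200):  # bisect until the bracket is vanishingly small
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical buyout fund: $100 out today, nothing for four years,
# then $130 back in year five.
rate = irr([-100, 0, 0, 0, 0, 130])
# rate comes out around 0.054, i.e. ~5.4% a year: five years of
# locked-up, illiquid capital for a return an index fund would embarrass
```

That's the trough McKinsey is describing: returns in that range, on money that can't be pulled out, are a historically bad deal for the pensions and endowments footing the bill.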
You see, private equity is fucking stupid, doesn’t understand technology, doesn’t understand business, and by setting up its holdings with debt based on the assumption of unrealistic growth, they’ve created a crisis for both software companies and the greater tech industry.
On February 6, more than $17.7 billion of US tech company loans dropped to “distressed” trading levels (as in trading as if traders don’t believe they’ll get paid, per Bloomberg), growing the overall group of distressed tech loans to $46.9 billion, “dominated by firms in SaaS.” These firms included huge investments like Thoma Bravo’s Dayforce (which it purchased two days before this story ran for $12.3 billion) and Calabrio (which it acquired for “over” $1 billion in April 2021 and merged with Verint in November 2025).
This isn’t just about the shit they’ve bought, but the destruction of the concept of “value” in the tech industry writ large. “Value” was not based on revenues, or your product, or anything other than your ability to grow and, ideally, trap as many customers as possible, with the vague sense that there would always be infinitely more money every year to spend on software.
Revenue growth came from massive sales teams compensated with heavy commissions and yearly price increases, except things have begun to sour, with renewals now taking twice as long to complete, and overall SaaS revenue growth slowing for years.
To put it simply, much of the investment in software was based on the idea that software companies will always grow forever, and SaaS companies — which have “sticky” recurring revenues — would be the standard-bearer.
When I got into the tech industry in 2008, I immediately became confused about the amount of unprofitable or unsustainable companies that were worth crazy amounts of money, and for the most part I’d get laughed at by reporters for being too cynical.
For the best part of 20 years, software startups have been seen as eternal growth-engines. All you had to do was find product-market fit, get a few hundred customers locked in, up-sell them on new features and grow in perpetuity as you conquered a market. The idea was that you could just keep pumping them with cash, hiring as many pre-sales (the technical person who makes the sale), sales and customer experience (read: the helpful person who also loves to tell you about more stuff) people as you needed to both retain customers and sell them as much stuff as possible.
Innovation was, as you’d expect, judged entirely by revenue growth and net revenue retention:

In practice, this sounds reasonable: what percentage of last year’s revenue are you keeping (and growing) year-over-year? The problem is that this is a very easy stat to game, especially if you’re using it to raise money, because you can move customer billing periods around to make sure that things all continue to look good. Even then, per research by Jacco van der Kooij and Dave Boyce, net revenue retention is dropping quarter over quarter.
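Under its common definition (which I'm assuming here; the chart above may use a variant), net revenue retention compares what last year's customers pay you today against what they paid a year ago:

```python
# Net revenue retention: revenue kept from existing customers, counting
# upsells (expansion) and subtracting downgrades and cancellations.
def net_revenue_retention(starting_arr, expansion, contraction, churn):
    retained = starting_arr + expansion - contraction - churn
    return retained / starting_arr

# Hypothetical book of business: $1M of ARR a year ago, $200K of upsells
# since, $50K of downgrades, $100K of cancelled contracts.
nrr = net_revenue_retention(starting_arr=1_000_000, expansion=200_000,
                            contraction=50_000, churn=100_000)
# nrr = 1.05, i.e. 105%: "healthy," even though $100K of business
# walked out the door entirely
```

Note how the gaming works: pull a renewal's billing date forward into the measurement window and expansion gets booked early, flattering the ratio without any new money actually existing.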
The other problem is that the entire process of selling software has separated from the end-user, which means that products (and sales processes) are oriented around selling that software to the person responsible for buying it rather than those doomed to use it.
In Nik Suresh’s Brainwash An Executive Today, he recounts a conversation with the Chief Technology Officer of a company with over 10,000 people, who asked whether “data observability,” a thing they did not (and, in their position, would not need to) understand, was a problem, and whether Nik had heard of Monte Carlo. It turned out that the executive in question had no idea what Monte Carlo or data observability was, but because they’d heard about it on LinkedIn, it was now all they could think about.
This is the environment that private equity bought into — a seemingly-eternal growth engine with pliant customers desperate to spend money on a product that didn’t have to be good, just functional-enough. These people do not know what they are talking about or why they are buying these companies other than being able to mumble out shit like “ARR” and “NRR+” and “TAM” and “CAC” and “ARPA” in the right order to convince themselves that something is a good idea without ever thinking about what would happen if it wasn’t. This allowed them to stick to the “big picture,” meaning “numbers that I can look at rather than any practical experience in software development.”
While I guess the concept of private equity isn’t morally repugnant, its current form — which includes venture capital — has led the modern state of technology into the fucking toilet: an initial flush of viable businesses, frothy markets and zero interest rates made it deceptively easy to raise and deploy capital, leading to brainless investing, the death of logical due diligence, and potentially ruinous consequences for everybody involved.
Private equity spent decades buying a little bit of just about everything, enriching the already-rich by engaging with the most vile elements of the Rot Economy’s growth-at-all-costs mindset. Its success is predicated on near-perpetual levels of liquidity and growth in both its holdings and the holdings of those who exist only to buy their stock, and on a tech and business media that doesn’t think too hard about the reality of the problems their companies claim to solve.
The reckoning that’s coming is one built specifically to target the ignorant hubris that made them rich.
Private equity has yet to be punished by its limited partners and banks for investing in zombie assets, allowing it to pile into the unprofitable data centers underpinning the AI bubble, meaning that companies like Apollo, Blue Owl and Blackstone — all of whom participated in the ugly $10.2 billion acquisition of Zendesk in 2022 (after it rejected another PE offer of $17 billion in 2021) that included $5 billion in debt — have all become heavily-leveraged in giant, ugly debt deals covering assets that will be somewhere between obsolete and useless in a few years.
Alongside the fumbling ignorance of private equity sits the $3 trillion private credit industry, an equally-putrid, growth-drunk, and poorly-informed industry run with the same lax attention to detail and Big Brain Number Models that can justify just about any investment they want. Their half-assed due diligence led to billions of dollars of loans being given to outright frauds like First Brands, Tricolor and PosiGen, and, to paraphrase JP Morgan’s Jamie Dimon, there are absolutely more fraudulent cockroaches waiting to emerge.
You may wonder why this matters, as all of this is private credit.
Well, they get their money from banks. Big banks. In fact, according to the Federal Reserve Bank of Boston, about 14% (roughly $300 billion) of large banks’ total loan commitments to non-bank financial institutions in 2023 went to private equity and private credit; Moody’s pegs the number at around $285 billion, with an additional $340 billion in unused-yet-committed cash waiting in the wings.
Oh, and they get their money from you. Pension funds are among some of the biggest backers of private credit companies, with the New York City Employees Retirement System and CalPERS increasing their investments.
Today, I’m going to teach you all about private equity, private credit, and why years of reframing “value” to mean “growth” may genuinely threaten the global banking system, as well as how effectively every company raises money. An entirely different system exists for the wealthy to raise and deploy capital, one with flimsy due diligence, a genuine lack of basic industrial knowledge, and hundreds of billions of dollars of crap it can’t sell.
These people have been able to raise near-unlimited capital to do basically anything they want because there was always somebody stupid enough to buy whatever they were selling, and they have absolutely no plan for what happens when their system stops working.
They’ll loan to anyone or invest in anything that confirms their biases, and those biases are equal parts moronic and malevolent. Now they’re investing teachers’ pensions and insurance premiums in unprofitable and unsustainable data centers, all because they have no idea what a good investment actually looks like.
Welcome to the Hater’s Guide To Private Equity, or “The Stupidest Assholes In The Room.”
2026-02-27 00:22:58
Editor's note: a previous version of this newsletter went out with Matt Hughes' name on it; Matt is my editor, who went over it for spelling errors and loaded it into the CMS. Sorry!
Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun.
As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.
Before we go any further, I want to remind everybody I’m not a stock analyst nor do I give investment advice.
I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25:
NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at ever-higher prices and in ever-greater volumes every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty raising debt, or investor pressure to cut capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble.
Yet the real gem was this part:
We are finalizing an investment and partnership agreement with OpenAI. There is no assurance that we will enter into an investment and partnership agreement with OpenAI or that a transaction will be completed.
Hell yeah dude! After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody about months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would, after Huang awkwardly tried to claim that the $100 billion was “never a commitment,” say with its full chest how badly it wanted to support OpenAI and how intentionally it would do so.
Especially when you have this note in your 10-K:
We estimate that one AI research and deployment company contributed to a meaningful amount of our revenue purchasing cloud services from our customers in fiscal year 2026
What a peculiar world we live in. Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal and that never came together.
All of this adds a little more anxiety to OpenAI's alleged $100 billion funding round. As The Information reports, Amazon's alleged $50 billion investment will actually start at $15 billion, with the remaining $35 billion contingent on AGI or an IPO:
Under the terms of the investment, which are still being negotiated, Amazon would initially invest $15 billion into OpenAI, these people said. The other $35 billion could hinge on OpenAI reaching AGI or going public, the people said. The proposed Amazon investment is part of OpenAI’s current funding round, which could top $100 billion at a valuation of $730 billion before the financing.
And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan:
In addition, SoftBank and Nvidia each plan to invest $30 billion in three installments through the year as part of the round, said the people. Microsoft had been expected to invest low billions of dollars, The Information previously reported, but it could invest a smaller amount or none at all, according to two of the people.
A few thoughts:
Anyway, on to the main event.
New term: “analyslop,” when somebody writes a long, specious piece with few facts or actual statements, intended to be read as thorough analysis.
This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” slop-filled scare-fiction written and framed with the authority of serious analysis, so much so that it caused a global selloff in stocks.
This piece — if you haven’t read it, please do so using my annotated version — spends 7,000 or more words telling the dire tale of what would happen if AI made an indeterminately large number of white collar workers redundant.
It isn’t clear what exactly AI does, who makes the AI, or how the AI works, just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop framed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets.
Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year.
Let me give you an example:
It should have been clear all along that a single GPU cluster in North Dakota generating the output previously attributed to 10,000 white collar workers in midtown Manhattan is more economic pandemic than economic panacea.
The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!”
I know a GPU cluster in North Dakota — the one CoreWeave runs with Applied Digital, carrying debt so severe that it loses both companies money even if the capacity is rented out 24/7. But let’s not let facts get in the way of a poorly-written story.
I don’t need to go line-by-line — mostly because I’d end up writing something legally actionable — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose.
For example, how does AI take over the entire economy?
AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved…
That’s right, they just get better. No need to discuss anything happening today. Even AI 2027 had the balls to make stuff up about “OpenBrain” or whatever.
This piece literally just says stuff, including one particularly-egregious lie:
In late 2025, agentic coding tools took a step function jump in capability.
A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks. Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question “what if we just built this ourselves?”
This is a complete and utter lie. A bald-faced lie. This is not something that Claude Code can do. The fact that major media outlets are quoting this piece suggests that those responsible for explaining how things work don’t actually bother to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled.
I’m now going to quote part of my upcoming premium newsletter (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does.
It is not just a matter of “enough prompts until it does this.” Good (or even functional!) software engineering is technical, infrastructural and philosophical, and the thing you are “automating” is not just the code that makes a thing run.
Let's start with the simplest, least-technical way of putting it: even in the best-case scenario, you do not just type "Build Me A Salesforce Competitor" and watch it erupt, fully-formed, from your Terminal window. It is not capable of building it, but even if it were, the result would need to actually live on a cloud hosting platform and have all manner of actual customer data entered into it.
Building software is not writing code, hitting enter, and a website appearing. It requires all manner of infrastructural work: how does a customer access it in a consistent and reliable way? How do you make sure it can handle a lot of people at once? Is it quick to access? More complex database systems require entirely separate subscriptions just to keep them connected.
Software is a tremendous pain in the ass. You write code, then you have to make sure the code actually runs. Some code has to run on specific hardware, and that hardware needs to be set up right. Some things are written in different languages, and those languages use more or less memory; give them the wrong amounts, or forget to close the door on something in your code, and everything breaks, sometimes costing you money or introducing security vulnerabilities.
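To make “forgetting to close the door” concrete, here’s a toy Python sketch (my own example, not from any real codebase). Both versions read the same files and return the same results, but the leaky one never releases its file handles, and a long-running process doing this will eventually exhaust the operating system’s file-descriptor limit and crash.

```python
# A toy "forgot to close the door" bug. Every open() takes a file descriptor
# from a finite per-process pool; the leaky version never gives them back,
# so a long-running process eventually dies with "Too many open files."

def read_all_leaky(paths):
    contents = []
    for p in paths:
        f = open(p)               # never closed: leaks one descriptor per file
        contents.append(f.read())
    return contents

def read_all_safe(paths):
    contents = []
    for p in paths:
        with open(p) as f:        # context manager closes the file on exit
            contents.append(f.read())
    return contents
```

The output is identical in a quick test, which is exactly why this class of bug survives code review and only surfaces in production.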
In any case, even for experienced, well-versed software engineers, maintaining software that involves any kind of customer data requires significant investments in compliance, including things like SOC-2 audits if the customer itself ever has to interact with the system, as well as security.
And yet, the myth that LLMs are an existential threat to existing software companies has taken root in the market, sending the share prices of the legacy incumbents tumbling. A great example would be SAP, down 10% in the last month.
SAP makes ERP software (Enterprise Resource Planning, which I wrote about in the Hater's Guide To Oracle), and has been affected by the sell-off. SAP is also a massive, complex, resource-intensive database-driven system that involves things like accounting, provisioning, and HR, and is so heinously complex that you often have to pay SAP just to make it function (if you're lucky it might even do so). If you were to build this kind of system yourself, even with "the magic of Claude Code" (which I will get to shortly), it would be an incredible technological, infrastructural, and legal undertaking.
Most software is like this. I’d say all software that people rely on is like this. I am begging you, pleading with you, to think about how much you trust the software on every single thing you use, what you do when a piece of software stops working, and how you feel about the company responsible. If your money or personal information touches it, they’ve had to go through all sorts of shit that doesn’t involve the code to bring you the software.
Sidenote: I want to be clear that there is nothing good about this. To quote a friend of mine — an editor at a large tech publication — “Oracle is a law firm with a software company attached.” SaaS companies regularly get by through scurrilous legal means and bullshit contracts, and their features are, in many cases, only as good as they need to be. Regardless, my point is that you will not just “make your own software.”
Any company of a reasonable size would likely be committing hundreds of thousands, if not millions, of dollars in legal and accounting fees to make sure it worked; engineers would have to be hired to maintain it; and you, as the sole customer of this massive ERP system, would have to build every single new feature and integration you want. Then you'd have to keep it running, this massive thing that involves, in many cases, tons of personally identifiable information. You'd also need to make sure, without fail, that this system that handles money was aware of any and all currencies and how they fluctuate, because that is now your problem. Mess up that part and your system of record could massively over- or underestimate your revenue or inventory, which could destroy your business.
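Here’s a hypothetical illustration of the currency problem (the numbers, rate, and function names are mine, not from any real system): a home-built system of record that bakes in an exchange rate quietly misstates revenue the moment the real rate moves.

```python
# Hypothetical home-built ERP that hardcodes EUR/USD at build time.
HARDCODED_EUR_USD = 1.05

def booked_revenue_usd(invoices_eur, eur_usd=HARDCODED_EUR_USD):
    """Convert a list of EUR invoices into booked USD revenue."""
    return sum(invoices_eur) * eur_usd

invoices = [10_000, 25_000]                  # EUR 35,000 of invoices
stale = booked_revenue_usd(invoices)         # 36,750 USD on the books
actual = booked_revenue_usd(invoices, 1.18)  # 41,300 USD at a hypothetical real rate
misstatement = actual - stale                # 4,550 USD of revenue simply missing
```

Absorbing details like this is a large part of what a SaaS subscription actually buys; build the system yourself and the misstatement is your problem.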
If that happens, you won't have anyone to sue. When bugs happen, you'll have someone whose job it is to fix it that you can fire, but replacing them will mean finding a new person to fix the mess that another guy made.
And then we get to the fact that building stuff with Claude Code is not that straightforward. Every example you've read of somebody being amazed by it involves a toy app or website closely resembling the many open source projects and website templates in Anthropic's training data. Every single piece of SaaS anyone pays for buys both access to the product and a transfer of the inherent risk and chaos of running software that involves people or money. Claude Code does not actually build unique software. You can say "create me a CRM," but whatever CRM it pops out will not magically jump onto Amazon Web Services, nor will it magically be efficient, or functional, or compliant, or secure, nor will it be differentiated at all from, I assume, the open source or publicly-available SaaS it was trained on. You really still need engineers, if not more of them than you had before.
It might tell you it's completely compliant and that it will run like a hot knife through butter — but LLMs don’t know anything, and you cannot be sure Claude is telling the truth as a result.
Is your argument that you’d still have a team of engineers (so they know what the outputs mean), but they’d be working on replacing your SaaS subscription? You’re basically becoming a startup with none of the benefits.
To quote Nik Suresh, an incredibly well-credentialed and respected software engineer (author of I Will Fucking Piledrive You If You Mention AI Again), “...for some engineers, [Claude Code] is a great way to solve certain, tedious problems more quickly, and the responsible ones understand you have to read most of the output, which takes an appreciable fraction of the time it would take to write the code in many cases. Claude doesn't write terrible code all the time, it's actually good for many cases because many cases are boring. You just have to read all of it if you aren't a fucking moron because it periodically makes company-ending decisions.”
I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way I’m not sure everybody else does. If I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality.
So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about, or are intentionally avoiding thinking about it. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off.” When Anthropic launched a cybersecurity tool that could check for vulnerabilities (a product category that has existed in some form for nearly a decade), it caused a sell-off in cybersecurity stocks like CrowdStrike — you know, the company whose faulty bit of code caused a global cybersecurity incident that cost the Fortune 500 billions and forced Delta Air Lines to cancel over 1,200 flights over a period of several days.
There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but the mistake here is really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.”
This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media. In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement taken directly from Anthropic and repeated by other outlets, which added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that Anthropic apparently “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it cost or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact.
As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white collar labor disappearing.
Anthropic tells lies of obfuscation and omission.
Anthropic exploits bad journalism, ignorance and a lack of critical thinking.
As I said earlier, the “wow, Claude Code!” articles come mostly from captured boosters and people who do not actually build software, amazed that it can burp up its training data and do an impression of software engineering.
And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from METR last year found that engineers using LLM coding tools believed they were 24% faster, but were actually 19% slower.
I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies.
I am despairing a little. When I see Matt Shumer on CNN, or hear the head of a PE firm praise Citrini Research, I begin to wonder whether everybody got where they are not through any actual work, but by making the right noises.
This is the grifter economy, and the people that should be stopping them are asleep at the wheel.
2026-02-21 02:26:41
In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single most annoying Large Language Model company of all time.
Pardon me, sorry, I mean safest, because that’s the reason that Amodei and his crew claimed was why they left OpenAI:
Dario Amodei: Yeah. So there was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. I think even more so than most people there. One was the idea that if you pour more compute into these models, they'll get better and better and that there's almost no end to this. I think this is much more widely accepted now. But, you know, I think we were among the first believers in it. And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. You don't tell the models what their values are just by pouring more compute into them. And so there were a set of people who believed in those two ideas. We really trusted each other and wanted to work together. And so we went off and started our own company with that idea in mind.
I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company quasi-legally required to sometimes, sort of, focus on goals that aren’t profit-driven; in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI. It went from (allegedly) making about $116 million in March 2025 to $1.16 billion in February 2026, in the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft, announced in November 2025, that was meant to be “up to” $15 billion.
Anthropic’s models regularly dominate the various LLM model leaderboards, and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers who either claim it writes every single line of their code, or that it’s vaguely useful in some situations.
CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January, because, and I do not say this lightly, Dario Amodei is full of shit.
You see, Anthropic has, for the better part of five years, framed itself as the trustworthy, safe alternative to OpenAI, focusing on its paid offerings and selling to businesses (realizing that the software sales cycle usually targets dimwitted C-suite executives rather than the people who actually use the products), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for.
Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things are likely something that an enterprise actually cares about.
Anthropic also caught on early to the idea that coding was the one use case that Large Language Models fit naturally:
Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude Sonnet 3.5, and as a story from The Information from December 2024 explained, this terrified OpenAI:
Earlier this fall, OpenAI leaders got a shock when they saw the performance of Anthropic’s artificial intelligence model for automating computer programming tasks, which had gained an edge on OpenAI’s models, according to its own internal benchmarks. AI for coding is one of OpenAI’s strong suits and one of the main reasons why millions of people subscribe to its chatbot, ChatGPT.
OpenAI leaders were already on edge after Cursor, a startup OpenAI funded last year, in July made Anthropic’s Claude model the default for Cursor’s AI coding assistant instead of OpenAI’s models, as it had previously done, according to an OpenAI employee. In a podcast in October, Cursor co-founder Aman Sanger called the latest version of Anthropic’s model, Claude 3.5 Sonnet, the “net best” for coding in part because of its superior understanding of what customers ask it to do.
Cursor would, of course, eventually become its own business, raising $3.2 billion in 2025 to compete with Claude Code, a product made by Anthropic, the very company Cursor pays to offer its models through its AI coding product. Cursor is Anthropic’s largest customer; the second is Microsoft’s GitHub Copilot. I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor.
Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight:
Dwarkesh Patel (01:56:14 - 01:56:26):
You've been less public than the CEOs of other AI companies. You're not posting on Twitter, you're not doing a lot of podcasts except for this one. What gives? Why are you off the radar?
Dario Amodei (01:56:26 - 01:58:03):
I aspire to this and I'm proud of this. If people think of me as boring and low profile, this is actually kind of what I want. I've just seen cases with a number of people I've worked with, where attaching your incentives very strongly to the approval or cheering of a crowd can destroy your mind, and in some cases, it can destroy your soul.
I've deliberately tried to be a little bit low profile because I want to defend my ability to think about things intellectually in a way that's different from other people and isn't tinged by the approval of other people. I've seen cases of folks who are deep learning skeptics, and they become known as deep learning skeptics on Twitter. And then even as it starts to become clear to me, they've sort of changed their mind. This is their thing on Twitter, and they can't change their Twitter persona and so forth and so on.
I don't really like the trend of personalizing companies. The whole cage match between CEOs approach. I think it distracts people from the actual merits and concerns of the company in question. I want people to think in terms of the nameless, bureaucratic institution and its incentives more than they think in terms of me. Everyone wants a friendly face, but actually, friendly faces can be misleading.
A couple of months later in October 2023, Amodei joined The Logan Bartlett show, saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.”
This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023, Amodei spoke before a senate committee about AI oversight and regulation, starting sensibly (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop:
The medium-term risks are where I would most like to draw the subcommittee’s attention. Simply put, a straightforward extrapolation of the pace of progress suggests that, in 2-3 years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology.
This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad, which also happens to be good for Anthropic, is just around the corner but, managed correctly, could also be good for society (a revolution in technology and science! But also, havoc!). Only Dario has the answers (regulations that start with “securing the AI supply chain,” meaning “please stop China from competing”).
In retrospect, this was the most honest that he’d ever be. In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked.
In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.”
This piece, like all of his proclamations, had two goals: generating media coverage and investment. Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as a wise, evidence-based fact.
Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027 (which I’ll get back to in a bit).” He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline.
To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist that knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it.
And, almost always, these predictions match up with Anthropic’s endless fundraising. On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12, 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.”
A month later on November 22, 2024, Anthropic would raise another $4 billion from Amazon, a couple of weeks after doing a five-hour-long interview with Lex Fridman in which he’d say that “someday AI would be better at everything.”
On November 27, 2024, Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be as “good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.”
Amodei crested 2024 with an interview with the Financial Times, and let slip what I believe will eventually become Anthropic’s version of WeWork’s Community-Adjusted EBITDA, by which I mean “a way to lie and suggest profitability when a company isn’t profitable”:
Let’s just take a hypothetical company. Let’s say you train a model in 2023. The model costs $100mn dollars. And, then, in 2024, that model generates, say, $300mn of revenue. Then, in 2024, you train the next model, which costs $1bn. And that model isn’t done yet, or it gets released near the end of 2024. Then, of course, it doesn’t generate revenue until 2025.
So, if you ask “is the company profitable in 2024”, well, you made $300mn and you spent $1bn, so it doesn’t look profitable. If you ask, was each model profitable? Well, the 2023 model cost $100mn and generated several hundred million in revenue. So, the 2023 model is a profitable proposition.
These numbers are not Anthropic numbers. But what I’m saying here is: the cost of the models is going up, but the revenue of each model is going up and there’s a mismatch in time because models are deployed substantially later than they’re trained.
Yeah man, if a company made $300 million in revenue and spent $1 billion, it lost $700 million. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends.
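If you want to see just how thin this trick is, here's the arithmetic as a quick sketch, using Dario's own illustrative round numbers from the FT interview (his hypothetical figures, not Anthropic's real ones):

```python
# Dario's hypothetical from the FT interview: per-model accounting
# vs. plain company-level cash accounting. All figures in millions
# of dollars; these are his illustrative numbers, not Anthropic's.

model_2023_cost = 100      # trained in 2023
model_2023_revenue = 300   # earned in 2024
model_2024_cost = 1_000    # trained in 2024, earns nothing until 2025

# "DarioMath": judge each model in isolation
model_2023_profit = model_2023_revenue - model_2023_cost

# Plain accounting: what the company actually made and spent in 2024
company_2024_profit = model_2023_revenue - model_2024_cost

print(model_2023_profit)    # 200 — "each model is profitable"
print(company_2024_profit)  # -700 — the company lost $700M
```

Same numbers, two stories: slice by model and everything looks great, look at the actual bank account and you're $700 million in the hole.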
On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires.
Anyway, as I was saying: at Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all tasks,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google.
On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” at thinking as human beings, and that the ceiling of what models could do was “well above humans.”
On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared.
On February 28, 2025, Amodei would join the New York Times’ Hard Fork, saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [when humanity would become a “post-powerful AI society that co-exists with powerful intelligences”]” and “feel like a fool,” and that that was the number one goal of these people. Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade.
Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation.
Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information, Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year, costs appear to scale linearly above revenue.
Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years. Per my piece from last week:
While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts.
To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business.
In an interview on the Dwarkesh Podcast, Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins.
It’s time we had an honest conversation about Anthropic.
Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. It burns just as much of its revenue on inference (62%, or $2.79 billion on $4.5 billion of revenue, versus OpenAI’s 58%, or $2.5 billion on $4.3 billion of revenue in the first half of 2025, if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.”
Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers, and uses those numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company.
Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term.
I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public. In simpler terms, Anthropic’s alleged “38% gross margins” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.”
Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional “up to one gigawatt”), “tens of billions” on Google Cloud, $21 billion on Google TPUs with Broadcom, “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana, and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.”
I think that he’s right. Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions.
I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months. Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and I believe is desperately jealous of his success.
And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.”
Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise.
This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.”
2026-02-14 03:08:34
Since the beginning of 2023, big tech has spent over $814 billion in capital expenditures, with a large portion of that going towards meeting the demands of AI companies like OpenAI and Anthropic.
Big tech has spent big on GPUs, power infrastructure, and data center construction, using a variety of financing methods to do so, including (but not limited to) leasing. And the way they’re going about structuring these finance deals is growing increasingly bizarre.
I’m not merely talking about Meta’s curious arrangement for its facility in Louisiana, though that certainly raised some eyebrows. Last year, Morgan Stanley published a report that claimed hyperscalers were increasingly relying on finance leases to obtain the “powered shell” of a data center, rather than the more common method of operating leases.
The key difference here is that finance leases, unlike operating leases, are effectively long-term loans where the borrower is expected to retain ownership of the asset (whether that be a GPU or a building) at the end of the contract. Traditionally, these types of arrangements have been used to finance the bits of a data center that have a comparatively limited useful life — like computer hardware, which grows obsolete with time.
The spending to date is, as I’ve written about again and again, astronomical considering the lack of meaningful revenue from generative AI.
A year straight of manufacturing consent for Claude Code as the be-all-end-all of software development resulted in putrid results for Anthropic — $4.5 billion of revenue and $5.2 billion of losses before interest, taxes, depreciation and amortization, according to The Information — with (per WIRED) Claude Code accounting for only around $1.1 billion in annualized revenue in December, or around $92 million in monthly revenue.
This was in a year where Anthropic raised a total of $16.5 billion (with $13 billion of that coming in September 2025), and it’s already working on raising another $25 billion. This might be because it promised to buy $21 billion of Google TPUs from Broadcom, or because Anthropic expects AI model training to cost over $100 billion in the next 3 years. And it just raised another $30 billion — albeit with the caveat that some of said $30 billion came from previously-announced funding agreements with Nvidia and Microsoft, though how much remains a mystery.
According to Anthropic’s new funding announcement, Claude Code’s run rate has grown to “over $2.5 billion” as of February 12 2026 — or around $208 million a month. Based on literally every bit of reporting about Anthropic, costs have likely spiked along with revenue, which hit $14 billion annualized ($1.16 billion in a month) as of that date.
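A quick note on these "run rate" figures, since they do a lot of work in AI coverage: an annualized run rate is just the latest month's revenue multiplied by twelve, so dividing back out gives you the monthly number. A minimal sketch:

```python
# "Run rate" is an annualized figure: roughly the latest month's
# revenue multiplied by twelve. Dividing back out recovers the
# monthly number. Amounts in $ millions.
def monthly_from_run_rate(annualized: float) -> float:
    return annualized / 12

claude_code_monthly = monthly_from_run_rate(2_500)  # "over $2.5 billion"
company_monthly = monthly_from_run_rate(14_000)     # $14bn annualized

print(round(claude_code_monthly))  # 208 — around $208M a month
print(round(company_monthly))      # 1167 — around $1.17bn a month
```

In other words, a "$2.5 billion run rate" does not mean anybody has been paid $2.5 billion — it means one good month, multiplied by twelve.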
I have my doubts, but let’s put them aside for now.
Anthropic is also in the midst of one of the most aggressive and dishonest public relations campaigns in history. While its Chief Commercial Officer Paul Smith told CNBC that it was “focused on growing revenue” rather than “spending money,” it’s currently making massive promises — tens of billions on Google Cloud, “$50 billion in American AI infrastructure,” and $30 billion on Azure. And despite Smith saying that Anthropic was less interested in “flashy headlines,” Chief Executive Dario Amodei has said, in the last three weeks, that “almost unimaginable power is potentially imminent,” that AI could replace all software engineers in the next 6-12 months, that AI may (it’s always fucking may) cause “unusually painful disruption to jobs,” and wrote a 19,000-word essay — I guess AI is coming for my job after all! — where he repeated his noxious line that “we will likely get a century of scientific and economic progress compressed in a decade.”
Yet arguably the most dishonest part is this word “training.” When you read “training,” you’re meant to think “oh, it’s training for something, this is an R&D cost,” when “training LLMs” is as consistent a cost as inference (the creation of the output) or any other kind of maintenance.
While most people know about pretraining — the shoving of large amounts of data into a model (this is a simplification I realize) — in reality a lot of the current spate of models use post-training, which covers everything from small tweaks to model behavior to full-blown reinforcement learning where experts reward or punish particular responses to prompts.
To be clear, all of this is well-known and documented, but the nomenclature of “training” suggests that it might stop one day, versus the truth: training costs are increasing dramatically, and “training” covers anything from training new models to bug fixes on existing ones. And, more fundamentally, it’s an ongoing cost — something that’s an essential and unavoidable cost of doing business.
Training is, for an AI lab like OpenAI and Anthropic, as common (and necessary) a cost as those associated with creating outputs (inference), yet it’s kept entirely out of gross margins:
Anthropic has previously projected gross margins above 70% by 2027, and OpenAI has projected gross margins of at least 70% by 2029, which would put them closer to the gross margins of publicly traded software and cloud firms. But both AI developers also spend a tremendous amount on renting servers to develop new models—training costs, which don’t factor into gross margins—making it more difficult to turn a net profit than it is for traditional software firms.
This is inherently deceptive. One could argue that R&D isn’t considered in gross margins, and that training is R&D — yet gross margins generally include the raw materials necessary to build something, and training is absolutely part of the raw costs of running an AI model. Direct labor and parts are considered part of the calculation of gross margin, and spending on training — both the data and the process of training itself — is absolutely meaningful, and to leave it out is an act of deception.
Anthropic’s 2025 gross margins were 40% — or 38% if you include free users of Claude — on inference costs of $2.7 (or $2.79) billion, with training costs of around $4.1 billion. What happens if you add training costs into the equation?
Let’s work it out!
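Here's the arithmetic, using the figures reported above — $4.5 billion of revenue, roughly $2.7 billion of inference, and around $4.1 billion of training:

```python
# A rough sketch using the reported figures (in $ billions).
revenue = 4.5
inference = 2.7   # The Information's figure ($2.79bn by another count)
training = 4.1    # Anthropic's reported training spend

# Anthropic-style gross margin: training excluded from COGS
reported_gm = (revenue - inference) / revenue
print(f"{reported_gm:.0%}")  # 40%

# Treat training as the ongoing manufacturing cost it actually is
honest_gm = (revenue - inference - training) / revenue
print(f"{honest_gm:.0%}")  # -51%
```

Include training where it belongs and Anthropic's "40% gross margins" turn into gross margins of roughly negative 51% — it spends about $1.51 in direct model costs for every dollar of revenue.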
Training is not an up front cost, and considering it one only serves to help Anthropic cover for its wretched business model. Anthropic (like OpenAI) can never stop training, ever, and to pretend otherwise is misleading. This is not the cost just to “train new models” but to maintain current ones, build new products around them, and many other things that are direct, impossible-to-avoid components of COGS. They’re manufacturing costs, plain and simple.
Anthropic projects to spend $100 billion on training in the next three years, which suggests it will spend — proportional to its current costs — around $32 billion on inference in the same period, on top of $21 billion of TPU purchases, on top of $30 billion on Azure (I assume in that period?), on top of “tens of billions” on Google Cloud. When you actually add these numbers together (assuming “tens of billions” is $15 billion), that’s $200 billion.
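If you want to check that $200 billion figure, here's the addition, with "tens of billions" on Google Cloud assumed (as above) to be $15 billion:

```python
# Anthropic's stated spend commitments over the next few years,
# in $ billions. "google_cloud" is an assumed value for the vague
# "tens of billions"; everything else is as announced or projected.
commitments = {
    "training":     100,  # projected training spend
    "inference":     32,  # proportional to current training:inference ratio
    "google_tpus":   21,  # Broadcom TPU deal
    "azure":         30,  # Microsoft Azure commitment
    "google_cloud":  15,  # assumed value of "tens of billions"
}
print(sum(commitments.values()))  # 198 — call it $200 billion
```

That's $198 billion before you count the "unknowable (yet likely in the billions)" AWS spend or the Hut8 data center.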
Anthropic (per The Information’s reporting) tells investors it will make $18 billion in revenue in 2026 and $55 billion in 2027 — quadrupling its revenue year over year, then tripling it again — and is already raising $25 billion after having just closed a $30bn deal. How does Anthropic pay its bills? Why does outlet after outlet print these fantastical numbers without doing the maths of “how does Anthropic actually get all this money?”
Because even with their ridiculous revenue projections, this company is still burning cash, and when you start to actually do the maths around anything in the AI industry, things become genuinely worrying.
You see, every single generative AI company is unprofitable, and appears to be getting less profitable over time. Both The Information and Wall Street Journal reported the same bizarre statement in November — that Anthropic would “turn a profit more quickly than OpenAI,” with The Information saying Anthropic would be cash flow positive in 2027 and the Journal putting the date at 2028, only for The Information to report in January that 2028 was the more realistic date.
If you’re wondering how, the answer is “Anthropic will magically become cash flow positive in 2028”:

This is also the exact same logic as OpenAI, which will, per The Information in September, also, somehow, magically turn cashflow positive in 2030:

Oracle, which has a 5-year-long, $300 billion compute deal with OpenAI that it lacks the capacity to serve and that OpenAI lacks the cash to pay for, also appears to have the same magical plan to become cash flow positive in 2029:

Somehow, Oracle’s case is the most legit, in that theoretically it would by then, I assume, be done paying off the $38 billion it’s raising for Stargate Shackelford and Wisconsin — but that assumption also hinges on the idea that OpenAI finds $300 billion somehow.
It also relies upon Oracle raising more debt than it currently has — which, even before the AI hype cycle swept over the company, was a lot.
As I discussed a few weeks ago in the Hater’s Guide To Oracle, a megawatt of data center IT load generally costs (per Jerome Darling of TD Cowen) around $12-14m in construction (likely more due to skilled labor shortages, supply constraints and rising equipment prices) and $30m a megawatt in GPUs and associated hardware. In plain terms, Oracle (and its associated partners) need around $189 billion to build the 4.5GW of Stargate capacity to make the revenue from the OpenAI deal, meaning that it needs around another $100 billion once it raises $50 billion in combined debt, bonds, and printing new shares by the end of 2026.
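As a sanity check on that $189 billion figure, here's the back-of-envelope arithmetic, using the low end of the TD Cowen construction range:

```python
# Back-of-envelope for the 4.5GW of Stargate capacity, using the
# per-megawatt figures cited above. Dollar amounts in $ millions per MW.
construction_per_mw = 12   # low end of the $12-14M construction range
hardware_per_mw = 30       # GPUs and associated hardware
capacity_mw = 4_500        # 4.5GW of IT load

total = (construction_per_mw + hardware_per_mw) * capacity_mw
print(total / 1_000)  # 189.0 — about $189 billion

# Raise $50bn in combined debt, bonds, and new shares and you're
# still well over $100bn short before existing cash and spend:
print(total / 1_000 - 50)  # 139.0
```

And that's the generous version — use $14 million a megawatt for construction and the total climbs to $198 billion.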
I will admit I feel a little crazy writing this all out, because it’s somehow a fringe belief to do the very basic maths and say “hey, Oracle doesn’t have the capacity and OpenAI doesn’t have the money.” In fact, nobody seems to want to really talk about the cost of AI, because it’s much easier to say “I’m not a numbers person” or “they’ll work it out.”
This is why in today’s newsletter I am going to lay out the stark reality of the AI bubble, and debut a model I’ve created to measure the actual, real costs of an AI data center.
While my methodology is complex, my conclusions are simple: running AI data centers is, even when you remove the debt required to stand up these data centers, a mediocre business that is vulnerable to basically any change in circumstances.
Based on hours of discussions with data center professionals, analysts and economists, I have calculated that in most cases, the average AI data center has gross margins of somewhere between 30% and 40% — margins that decay rapidly with every day, week, or month it takes to put a data center into operation.
This is why Oracle has negative 100% margins on NVIDIA’s GB200 chips — because the burdensome up-front cost of building AI data centers (GPUs, servers, and other associated hardware) leaves you billions of dollars in the hole before you even start serving compute, after which you’re left to contend with taxes, depreciation, financing, and the cost of actually powering the hardware.
Yet things sour further when you face the actual financial realities of these deals — and the debt associated with them.
Based on my current model of the 1GW Stargate Abilene data center, Oracle likely plans to make around $11 billion in revenue a year from the 1.2GW (or around 880MW of critical IT). While that sounds good, when you add things like depreciation, electricity, colocation costs of $1 billion a year from Crusoe, opex, and the myriad of other costs, its margins sit at a stinkerific 27.2% — and that’s assuming OpenAI actually pays, on time, in a reliable way.
Things only get worse when you factor in the cost of debt. While Oracle has funded Abilene using a mixture of bonds and existing cashflow, it very clearly has yet to receive the majority of the $25 billion+ in GPUs and associated hardware (with only 96,000 GPUs “delivered”), meaning that it likely bought them out of its $18 billion bond sale from last September.
If that maths holds, Oracle is paying a little less than $963 million a year (per the terms of the bond sale) whether or not a single GPU is even turned on, leaving us with a net margin of 22.19%... and this is assuming OpenAI pays every single bill, every single time, and there are absolutely no delays.
These delays are also very, very expensive. Based on my model, if we assume that 100MW of critical IT load is operational (roughly two buildings and 100,000 GB200s) but has yet to start generating revenue, Oracle is burning, without depreciation (EDITOR’S NOTE: sorry! This previously said depreciation was a cash expense and was included in this number, even though it wasn’t — it’s correct in the model!), around $4.69 million a day in cash. I have also confirmed with sources in Abilene that there is no chance that Stargate Abilene is fully operational in 2026.
In simpler terms:
I will admit I’m quite disappointed that the media at large has mostly ignored this story. Limp, cautious “are we in an AI bubble?” conversations are insufficient to deal with the potential for collapse we’re facing.
Today, I’m going to dig into the reality of the costs of AI, and explain in gruesome detail exactly how easily these data centers can rapidly approach insolvency in the event that their tenants fail to pay.
The chain of pain is real:
These GPUs are purchased, for the most part, using debt provided by banks or financial institutions. While hyperscalers can and do fund GPUs using cashflow, even they have started to turn to debt.
At that point, the company that bought the GPUs sinks hundreds of millions of dollars into building a data center, and once it turns on, provides compute to a model provider, which then begins losing money selling access to those GPUs. For example, both OpenAI and Anthropic lose billions of dollars, and both rely on venture capital to fund their ability to continue paying for access to those GPUs.
At that point, OpenAI and Anthropic offer either subscriptions — which cost far more to offer than the revenue they provide — or API access to their models on a per-million-token basis. AI startups pay to access these models to run their services, which end up costing more than the revenue they make, which means they have to raise venture capital to continue paying to access those models.
Outside of hyperscalers paying NVIDIA for GPUs out of cashflow, none of the AI industry is fueled by revenue. Every single part of the industry is fueled by some kind of subsidy.
As a result, the AI bubble is really a stress test of the global venture capital, private equity, private credit, institutional and banking system, and its willingness to fund all of this forever, because there isn't a single generative AI company that's got a path to profitability.
Today I’m going to explain how easily it breaks.
2026-02-07 01:34:14
Have you ever looked at something too long and felt like you were sort of seeing through it? Has anybody actually looked at a company this much in a way that wasn’t some sort of obsequious profile of a person who worked there? I don’t mean this as a way to fish for compliments — this experience is just so peculiar, because when you look at them hard enough, you begin to wonder why everybody isn’t just screaming all the time.
Yet I really do enjoy it. When you push aside all the marketing and the interviews and all that and stare at what a company actually does and what its users and employees say, you really get a feel of the guts of a company. I’m enjoying it. The Hater’s Guides are a lot of fun, and I’m learning all sorts of things about the ways in which companies try to hide their nasty little accidents and proclivities.
Today, I focus on one of the largest.
In the last year I’ve spoken to over a hundred different tech workers, and the ones I hear most consistently from are the current and former victims of Microsoft, a company with a culture in decline, in large part thanks to its obsession with AI. Every single person I talk to about this company has venom on their tongue, whether they’re a regular user of Microsoft Teams or somebody who was unfortunate enough to work at the company any time in the last decade.
Microsoft exists as a kind of dark presence over business software and digital infrastructure. You inevitably have to interact with one of its products — maybe it’s because somebody you work with uses Teams, maybe it’s because you’re forced to use SharePoint, or perhaps you’re suffering at the hands of PowerBI — because Microsoft is the king of software sales. It exists entirely to seep into the veins of an organization and force every computer to use Microsoft 365, or sit on effectively every PC you use, forcing you to interact with some sort of branded content every time you open your start menu.
This is a direct result of the aggressive monopolies that Microsoft built over effectively every aspect of using a computer, starting by throwing its weight around in the 80s to crowd out potential competitors to MS-DOS, and eventually moving into everything including cloud compute, cloud storage, business analytics, video editing, and console gaming — and I’m barely a third of the way through the list of products.
Microsoft uses its money to move into new markets, uses aggressive sales to build long-term contracts with organizations, and then lets its products fester until it’s forced to make them better before everybody leaves, with the best example being the recent performance-focused move to “rebuild trust in Windows” in response to the upcoming launch of Valve’s competitor to the Xbox (and Windows gaming in general), the Steam Machine.
Microsoft is a company known for two things: scale and mediocrity. It’s everywhere, its products range from “okay” to “annoying,” and virtually every one of its products is a clone of something else.
And nowhere is that mediocrity more obvious than in its CEO.
Since taking over in 2014, CEO Satya Nadella has steered the company out of the darkness caused by aggressive possible chair-thrower Steve Ballmer, moving it from the evils of stack ranking to encouraging a “growth mindset” where you “believe your most basic abilities can be developed through dedication and hard work.” Workers are encouraged to be “learn-it-alls” rather than “know-it-alls,” all part of a weird cult-like pseudo-psychology that doesn’t really ring true if you actually work at the company.
Nadella sells himself as a calm, thoughtful and peaceful man, yet in reality he’s one of the most merciless layoff hogs in known history. He laid off 18,000 people in 2014 months after becoming CEO, 7,800 people in 2015, 4,700 people in 2016, 3,000 people in 2017, “hundreds” of people in 2018, took a break in 2019, every single one of the workers in its physical stores in 2020 along with everybody who worked at MSN, took a break in 2021, 1,000 people in 2022, 16,000 people in 2023, 15,000 people in 2024 and 15,000 people in 2025.
Despite calling for a “referendum on capitalism” in 2020 and suggesting companies “grade themselves” on the wider economic benefits they bring to society, Nadella has overseen an historic surge in Microsoft’s revenues — from around $83 billion a year when he joined in 2014 to around $300 billion on a trailing 12-month basis — while acting in a way that’s callously indifferent to both employees and customers alike.
At the same time, Nadella has overseen Microsoft’s transformation from an asset-light software monopolist that most customers barely tolerate to an asset-heavy behemoth that feeds its own margins into GPUs that only lose it money. And it’s that transformation that is starting to concern investors, and raises the question of whether Microsoft is heading towards a painful crash.
You see, Microsoft is currently trying to pull a fast one on everybody, claiming that its investments in AI are somehow paying off despite the fact that it stopped reporting AI revenue in the first quarter of 2025. In reality, the one segment where it would matter — Microsoft Azure, Microsoft’s cloud platform where the actual AI services are sold — is stagnant, all while Redmond funnels virtually every dollar of revenue directly into more GPUs.
Azure sits within Microsoft's Intelligent Cloud segment, along with server products and enterprise support. Intelligent Cloud represents around 40% of Microsoft’s total revenue, and has done so consistently since FY2022.
For the sake of clarity, here’s how Microsoft describes Intelligent Cloud in its latest end-of-year 10-K filing:
Our Intelligent Cloud segment consists of our public, private, and hybrid server products and cloud services that power modern business and developers. This segment primarily comprises:
It’s a big, diverse thing — and Microsoft doesn’t really break things down further from here — but Microsoft makes it clear in several places that Azure is the main revenue driver in this fairly diverse business segment.
Some bright spark is going to tell me that Microsoft said it has 15 million paid 365 Copilot subscribers (which, I add, sits under its Productivity and Business Processes segment), with reporters specifically saying these were corporate seats, a claim I dispute, because this is the quote from Microsoft’s latest conference call around earnings:
We saw accelerating seat growth quarter-over-quarter and now have 15 million paid Microsoft 365 Copilot seats, and multiples more enterprise Chat users.
At no point does Microsoft say “corporate seat” or “business seat.” “Enterprise Copilot Chat” is a free addition to multiple different Microsoft 365 products, and Microsoft 365 Copilot could also refer to Microsoft’s $18 to $21-a-month addition to Copilot Business, as well as Microsoft’s enterprise $30-a-month plans. And remember: Microsoft regularly does discounts through its resellers to bulk up these numbers.
As an aside: If you are anything to do with the design of Microsoft’s investor relations portal, you are a monster. Your site sucks. Forcing me to use your horrible version of Microsoft Word in a browser made this newsletter take way longer. Every time I want to find something on it I have to click a box and click find and wait for your terrible little web app to sleepily bumble through your 10-Ks.
If this is a deliberate attempt to make the process more arduous, know that no amount of encumbrance will stop me from going through your earnings statements, unless you have Satya Nadella read them. I’d rather drink hemlock than hear another minute of that man speak after his interview from Davos. At one point he gives an answer that’s five and a half minutes long and feels like sustaining a concussion.
When Nadella took over, Microsoft had around $11.7 billion in PP&E (property, plant, and equipment). A little over a decade later, that number has ballooned to $261 billion, with the vast majority added since 2020 (when Microsoft’s PP&E sat around $41 billion).
Also, as a reminder: Jensen Huang has made it clear that GPUs are going to be upgraded on a yearly cycle, guaranteeing that Microsoft’s armies of GPUs regularly hurtle toward obsolescence. Microsoft, like every big tech company, has played silly games with how it depreciates assets, extending the “useful life” of all GPUs so that they depreciate over six years, rather than four.
And while someone less acquainted with corporate accounting might assume that this move is a prudent, fiscally-conscious tactic to reduce spending by using assets for longer, and stretching the intervals between their replacements, in reality it’s a handy tactic to disguise the cost of Microsoft’s profligate spending on the balance sheet.
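If you want to see how much difference that schedule change makes, here’s a minimal straight-line depreciation sketch. The $27,000 H100 price is my assumption for illustration; Microsoft doesn’t disclose per-unit costs or its exact schedules.

```python
# A minimal straight-line depreciation sketch. The $27,000 H100 price
# is an assumption for illustration; Microsoft doesn't disclose
# per-unit costs or its exact schedules.
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line: the same expense hits the books each year."""
    return cost / useful_life_years

gpu_cost = 27_000.0  # assumed price of one NVIDIA H100

over_four = annual_depreciation(gpu_cost, 4)  # 6750.0 per year
over_six = annual_depreciation(gpu_cost, 6)   # 4500.0 per year

# Stretching the schedule from four years to six cuts the annual
# expense hitting the income statement by a third.
reduction = 1 - over_six / over_four
print(f"4-year: ${over_four:,.0f}/yr | 6-year: ${over_six:,.0f}/yr | "
      f"annual expense cut: {reduction:.0%}")
```

Same machines, same cash out the door; the longer schedule just pushes a third of each year’s expense further into the future.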
You might be forgiven for thinking that all of this investment was necessary to grow Azure, which is clearly the most important part of Microsoft’s Intelligent Cloud segment. In Q2 FY2020, Intelligent Cloud revenue sat at $11.9 billion on PP&E of around $40 billion, and as of Microsoft’s last quarter, Intelligent Cloud revenue sat at around $32.9 billion on PP&E that has increased by over 650%.
Good, right? Well, not really. Let’s compare Microsoft’s Intelligent Cloud revenue from the last five years:

In the last five years, Microsoft has gone from spending 38% of its Intelligent Cloud revenue on capex to nearly every penny of it (over 94%) in the last six quarters, all while Intelligent Cloud has failed to show any real growth for two and a half years.
An important note: If you look at Microsoft’s 2025 10-K, you’ll notice that it lists Intelligent Cloud revenue for 2024 as $87.4bn — not, as the above image shows, $105bn.
If you look at the 2024 10-K, you’ll see that Intelligent Cloud revenues are, in fact, $105bn. So, what gives?
Essentially, before publishing the 2025 10-K, Microsoft decided to rejig which parts of its operations fall into which segments, and as a result it had to recalculate revenues for the previous year. Having read and re-read the 10-K, I’m not fully certain which bits of the company were recast.
It does mention Microsoft 365, although I don’t see how that would fall under Intelligent Cloud — unless we’re talking about things like SharePoint, perhaps. I’m at a loss. It’s incredibly strange.
Things, I’m afraid, get worse. Microsoft announced in July 2025 — the end of its 2025 fiscal year — that Azure made $75 billion in revenue in FY2025. This was, as the previous link notes, the first time that Microsoft actually broke down how much Azure made, having previously simply lumped it in with the rest of the Intelligent Cloud segment.
I’m not sure what to read into that, but it’s still not good: it means that Microsoft spent every single penny of its Azure revenue from that fiscal year on capital expenditures of $88 billion, and then some (roughly 117% of all Azure revenue, to be precise). If we assume Azure regularly represents 71% of Intelligent Cloud revenue, Microsoft has historically been spending anywhere from half to three-quarters of Azure’s revenue on capex.
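For anyone who wants to check my maths, here’s the arithmetic. The 71% Azure share of Intelligent Cloud is the same assumption as above, not a disclosed figure.

```python
# Sanity-checking the capex-to-Azure ratio. The 71% Azure share of
# Intelligent Cloud is an assumption, not a Microsoft disclosure.
azure_revenue_fy2025 = 75.0  # $bn, disclosed July 2025
capex_fy2025 = 88.0          # $bn, FY2025 capital expenditures

ratio = capex_fy2025 / azure_revenue_fy2025
print(f"Capex as a share of Azure revenue: {ratio:.1%}")  # 117.3%

def implied_azure(intelligent_cloud_bn: float, azure_share: float = 0.71) -> float:
    """Implied Azure revenue for a period, under the assumed share."""
    return intelligent_cloud_bn * azure_share
```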
To simplify: Microsoft is spending lots of money to build out capacity on Microsoft Azure (as part of Intelligent Cloud), and growth of capex is massively outpacing the meager growth that it’s meant to be creating.
You know what’s also been growing? Microsoft’s depreciation charges, which grew from $2.7 billion at the beginning of 2023 to $9.1 billion in Q2 FY2026, though I will add that they dropped from $13 billion in Q1 FY2026, and if I’m honest, I have no idea why! Nevertheless, depreciation continues to erode Microsoft’s on-paper profits, growing (much like capex, as the two are connected!) at a much faster rate than the revenue of Azure or Intelligent Cloud.
But worry not, traveler! Microsoft “beat” on earnings last quarter, making a whopping $38.46 billion in net income…with $9.97 billion of that coming from recapitalizing its stake in OpenAI. Similarly, Microsoft has started bulking up its Remaining Performance Obligations. See if you can spot the difference between Q1 and Q2 FY26, emphasis mine:
Q1FY26:
Revenue allocated to remaining performance obligations, which includes unearned revenue and amounts that will be invoiced and recognized as revenue in future periods, was $398 billion as of September 30, 2025, of which $392 billion is related to the commercial portion of revenue. We expect to recognize approximately 40% of our total company remaining performance obligation revenue over the next 12 months and the remainder thereafter.
Q2FY26:
Revenue allocated to remaining performance obligations related to the commercial portion of revenue was $625 billion as of December 31, 2025, with a weighted average duration of approximately 2.5 years. We expect to recognize approximately 25% of both our total company remaining performance obligation revenue and commercial remaining performance obligation revenue over the next 12 months and the remainder thereafter.
So, let’s just lay it out:
…Microsoft’s near-term revenue outlook dropped between quarters as every single expenditure increased, despite over $200 billion in new obligations — supposedly from OpenAI — landing on the books. A “weighted average duration” of 2.5 years somehow cut the share of RPO that Microsoft expects to recognize in the next 12 months from 40% to 25%.
But let’s be fair and jump back to Q4 FY2025…
Revenue allocated to remaining performance obligations, which includes unearned revenue and amounts that will be invoiced and recognized as revenue in future periods, was $375 billion as of June 30, 2025, of which $368 billion is related to the commercial portion of revenue. We expect to recognize approximately 40% of our total company remaining performance obligation revenue over the next 12 months and the remainder thereafter.
40% of $375 billion is $150 billion. Q3 FY25? 40% on $321 billion, or $128.4 billion. Q2 FY25? $304 billion, 40%, or $121.6 billion.
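Laid out as a series, using only the totals and percentages Microsoft actually disclosed:

```python
# The 12-month slice of remaining performance obligations (RPO) each
# quarter, using the totals and percentages Microsoft disclosed.
rpo = [  # (quarter, RPO in $bn, share expected within 12 months)
    ("Q2 FY25", 304, 0.40),
    ("Q3 FY25", 321, 0.40),
    ("Q4 FY25", 375, 0.40),
    ("Q1 FY26", 398, 0.40),
    ("Q2 FY26", 625, 0.25),  # Q2 FY26 figure is the commercial portion
]
near_term = {quarter: total * share for quarter, total, share in rpo}
for quarter, value in near_term.items():
    print(f"{quarter}: ${value:,.1f}bn expected within 12 months")
# Total RPO balloons, but the near-term slice barely moves — and
# actually shrinks between Q1 and Q2 FY26.
```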
It appears that Microsoft’s near-term contracted revenue is stagnating, even with the supposed additions of $250 billion in spend from OpenAI and $30 billion from Anthropic, the latter of which was announced in November but doesn’t appear to have manifested in these RPOs at all.
In simpler terms, OpenAI and Anthropic do not appear to be spending more as a result of any recent deals, and if they are, that money isn’t arriving for over a year.
Much like the rest of AI, every deal with these companies appears to be entirely on paper, likely because OpenAI will burn at least $115 billion by 2029, and Anthropic upwards of $30 billion by 2028, when it mysteriously becomes profitable two years before OpenAI “does so” in 2030.
These numbers are, of course, total bullshit. Neither company can afford even $20 billion of annual cloud spend, let alone multiple tens of billions a year, and that’s before you get to OpenAI’s $300 billion deal with Oracle that everybody has realized (as I did in September) requires Oracle to serve non-existent compute to OpenAI and be paid hundreds of billions of dollars that, helpfully, also don’t exist.
Yet for Microsoft, the problems are a little more existential.
Last year, I calculated that big tech needed $2 trillion in new revenue by 2030 or investments in AI were a loss, and if anything, I think I slightly underestimated the scale of the problem.
As of the end of its most recent fiscal quarter, Microsoft has spent some $277 billion on capital expenditures since the beginning of FY2022, with the majority ($216 billion) coming since the beginning of FY2024. Capex has ballooned to 45.5% of Microsoft’s FY26 revenue so far, and to over 109% of its net income.

This is a fucking disaster. While net income is continuing to grow, it (much like every other financial metric) is being vastly outpaced by capital expenditures, none of which can be remotely tied to profits, as every sign suggests that generative AI only loses money.
While AI boosters will try and come up with complex explanations as to why this is somehow alright, Microsoft’s problem is fairly simple: it’s now spending 45% of its revenues to build out data centers filled with painfully expensive GPUs that do not appear to be significantly contributing to overall revenue, and appear to have negative margins.
Those same AI boosters will point at the growth of Intelligent Cloud as proof, so let’s do a thought experiment, even though they’re wrong: if Intelligent Cloud’s segment growth is a result of AI compute, then the cost of that revenue has vastly increased, and the only reason we’re not seeing it is that the increased costs are hitting depreciation first.
You see, Intelligent Cloud is stalling. It might be up by 8.8% on an annualized basis (if we assume each quarter of the year comes in around $30 billion, that makes $120 billion for the year), but that’s come at the cost of a massive increase in capex: $72 billion in just the first two quarters of FY2026, against $88 billion for all of FY2025. Gross margins have deteriorated from 69.89% in Q3 FY2024 to 68.59% in Q2 FY2026, and while operating margins are up, that’s likely due to Microsoft’s increasing use of contract workers and increased recruitment in cheaper labor markets.
And as I’ll reveal later, Microsoft has used OpenAI’s billions in inference spend to cover up the collapse of the growth of the Intelligent Cloud segment. OpenAI’s inference spend now represents around 10% of Azure’s revenue.
Microsoft, as I discussed a few weeks ago, is in a bind. It keeps buying GPUs, all while waiting for the GPUs it already has to start generating revenue, and every time a new GPU comes online, its depreciation balloons. Capex for GPUs began in earnest in Q1 FY2023, following October’s shipments of NVIDIA’s H100 GPUs, with reports saying that Microsoft bought 150,000 H100s in 2023 (around $4 billion at $27,000 each) and 485,000 H100s in 2024 ($13 billion). These GPUs have yet to provide much meaningful revenue, let alone any kind of profit, with reports suggesting (based on Oracle leaks) that the gross margins of H100s are around 26% and A100s (an older generation launched in 2020) are 9%, for which the technical term is “dogshit.” Somewhere within that pile of capex also lie orders for H200 GPUs, and as of 2024, likely NVIDIA’s B100 (and maybe B200) Blackwell GPUs too.
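Here’s the rough arithmetic behind those GPU figures. The $27,000 unit price is an estimate; actual contract pricing isn’t public.

```python
# What the reported H100 purchase volumes imply in dollars. The
# $27,000 unit price is an estimate; actual contract pricing isn't public.
H100_UNIT_PRICE = 27_000

def gpu_spend_bn(units: int, unit_price: int = H100_UNIT_PRICE) -> float:
    """Implied total spend, in billions of dollars."""
    return units * unit_price / 1e9

spend_2023 = gpu_spend_bn(150_000)  # reported 2023 purchases -> ~$4.05bn
spend_2024 = gpu_spend_bn(485_000)  # reported 2024 purchases -> ~$13.1bn
print(f"2023: ${spend_2023:.2f}bn | 2024: ${spend_2024:.2f}bn")
```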
You may also notice that those GPU expenses are only some portion of Microsoft’s capex, and that’s because Microsoft also spends billions on finance leases and construction costs. What this means in practical terms is that some of this money is going to GPUs that are obsolete in six years, some of it is going to paying somebody else to lease physical space, and some of it is going into building a bunch of data centers that are only useful for putting GPUs in.
And none of this bullshit is really helping the bottom line! Microsoft’s More Personal Computing segment — including Windows, Xbox, Microsoft 365 Consumer, and Bing — has become an increasingly small part of revenue, representing a mere 17.64% of Microsoft’s revenue in FY26 so far, down from 30.25% a mere four years ago.
We are witnessing the consequences of hubris — those of a monopolist that chased out any real value creators from the organization, replacing them with an increasingly annoying cadre of Business Idiots like career loser Jay Parikh and scummy, abusive timewaster Mustafa Suleyman.
Satya Nadella took over Microsoft with the intention of fixing its culture, only to replace the aggressive, loudmouthed Ballmer brand with a poisonous, passive-aggressive business mantra of “you’ve always got to do more with less.”
Today, I’m going to walk you through the rotting halls of Redmond’s largest son, a bumbling conga line of different businesses that all work exactly as well as Microsoft can get away with.
Welcome to The Hater’s Guide To Microsoft, or Instilling The Oaf Mindset.