
Desperate Times, Desperate Measures

2025-05-28 01:02:28

Next year is going to be big. Well, I personally don't think it'll be big, but if you ask the AI industry, here are the things that will happen by the end of 2026:

How much of this actually sounds plausible to you?

Jony Ive and "The Device"

I thought I couldn't be more disappointed in parts of the tech media. Then OpenAI went and bought former Apple Chief Design Officer Jony Ive's "Io," a hardware startup that it initially invested in to create some sort of consumer tech device. As part of the ridiculous $6.5 billion all-stock deal to acquire Io, Jony Ive will take over all design at OpenAI, and also build a device of some sort.

At this point, no real information exists. Analyst Ming-Chi Kuo says it might have a "form factor as compact and elegant as an iPod shuffle," yet when you look at the tweet everybody is citing Kuo's quotes from, most of the "analysis" is guesswork outside of a statement about what the prototype might be like. 

Let's Talk About Ming-Chi Kuo!

It feels like everybody is quoting analyst Ming-Chi Kuo as a source on what this device might be, as a means of justifying endless fluff about Jony Ive and Sam Altman's at-this-point-theoretical device.

Kuo is respectable, but only when it comes to the minutiae of Apple — changes in strategy around the components and near-term launches. He has a solid reputation when it comes to finding out what’s selling, what isn’t, and what the company plans to launch. That’s because analysts work by speaking to people — often people working at companies in the less glamorous parts of the iPhone and Mac supply chain, like those that manufacture specific components — and asking what orders they’ve received, for what, and when. If a company massively cuts down on production for, say, iPhone screens, you can infer that Apple’s struggling to shift the latest version of the iPhone. Similarly, if a company is having to work around the clock to manufacture an integrated circuit that goes into the newest MacBook, you can assume that sales are pretty brisk.

Outside of that, Kuo is fucking guessing, and assuming otherwise allows reporters to make ridiculous and fantastical guesses based on nothing other than vibes. If you are writing that Kuo "revealed details" about the device, you have failed your readers, first by putting Kuo on a mythological pedestal (one he already occupies, to some extent), and secondly by failing to put into context what an analyst does, and what an analyst can’t do.

And yeah, Kuo is guessing. Jony Ive may have worked at Apple, but he is not Apple. Ive was not a hardware guy — at least when it came to the realm beyond industrial and interface design — nor did he handle operations at Apple. While Kuo's sources may indeed have some insight, it's highly doubtful he magically got his sources to talk after the announcement, meaning that he's guessing.

Kuo also predicted in 2021 that Apple would release 15-20 million foldable iPhones in 2023, and predicted Apple would launch some sort of AR headset almost every year, claiming it would arrive in 2020, 2022 (with glasses in 2025!), second quarter 2022, "late 2022" (when he also said that Apple would somehow launch a second-generation version in 2024 with a lighter design), or 2023. Then, in mid-2022, he decided the headset would be announced in January 2023 and become available "2-4 weeks after the event," and predicted that, in fact, Apple would ship 1.5 million units of said headset in 2023. Sadly, by the end of 2022, Kuo said that the headset would be delayed until the second half of 2023, before nearly getting it right, saying that the device would be announced at WWDC 2023 (correct!), but that it would ship "the second or third quarter of 2023."

Not content with being wrong this many times, Kuo doubled down (or quadrupled down, I’ve lost count) in February 2023, saying that Apple would launch "high-end and low-end versions of second-generation headset in 2025," at a point in time when Apple had yet to announce or ship the first generation. Then, finally, literally a day before the announcement of the Vision Pro, Kuo predicted it "could launch as late as 2024," the kind of thing you could've learned from a single source at Apple telling you what would be announced in 24 hours, or, I dunno, the press embargo.

On December 25, 2023, Kuo successfully predicted that the Vision Pro would launch "in late January or early February 2024." It launched in the US on February 2, 2024. Mark Gurman of Bloomberg reported that Apple planned to launch the device "by February 2024" five days earlier, on December 20, 2023.

Kuo then went on to predict, on January 11, 2024, that Apple would only produce "up to 80,000 Vision Pro headsets for launch," only to say that Apple had sold "up to 180,000" of them 11 days later. On February 28, 2024, after predicting no fewer than twice that Apple would make multiple models, he said that Apple had not started working on a second-generation or lower-priced Vision Pro.

This was a very long-winded way to say that anybody taking tweets by Ming-Chi Kuo as even clues as to what Jony Ive and Sam Altman are making is taking the piss. He has a 72.5% track record for getting things right, according to AppleTrack, which is decent, but far from perfect. Any journalist that regurgitates a Ming-Chi Kuo prediction without mentioning that is committing criminal levels of journalistic malpractice.

So, now that we've got that out of the way, here's what we actually know — and that’s a very load-bearing “know” — about this device, according to the Wall Street Journal:

OpenAI Chief Executive Sam Altman gave his staff a preview Wednesday of the devices he is developing to build with the former Apple designer Jony Ive, laying out plans to ship 100 million AI “companions” that he hopes will become a part of everyday life.

...

Altman and Ive offered a few hints at the secret project they have been working on. The product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk, and will be a third core device a person would put on a desk after a MacBook Pro and an iPhone.

The Journal earlier reported that the device won’t be a phone, and that Ive and Altman’s intent is to help wean users from screens. Altman said that the device isn’t a pair of glasses, and that Ive had been skeptical about building something to wear on the body.

Let's break down what this actually means:

  • It will be "...capable of being fully aware of a user’s surroundings and life": Multimodal generative AI that can accept both visual and audio inputs is already a feature in basically every major Large Language Model.
  • "It will be unobtrusive, able to rest in one’s pocket or on one’s desk" (and it won't have a screen?): I cannot express how bad it is that this device, which will allegedly ship in a year, is so vague about how big it is. How big are your pockets? Is it smartphone sized? Smaller? If it's able to be "aware" that suggests that it'll have a bunch of sensors and maybe a camera inside it? If that’s the case, wouldn’t putting it in your pocket defeat the point? 
  • "...will be a third core device a person would put on a desk after a MacBook Pro and an iPhone": This means absolutely nothing. It's a statement made to the journalist or from marketing material intentionally shared with the journalist. A "third core device" has never really taken shape — Apple has sold, at best, a hundred million Apple Watches, and sales have begun to tumble. Products like Google’s Glass similarly failed — partially because it was expensive, partially because they became fatally uncool overnight, and partially because the battery life was dismal. The only "third core device" that's stuck is...tablets. And that's a computer! 
    • Also, calling a tablet a “core device” is, at best, a push. According to Canalys — a fairly reliable analyst firm that does the kind of supply-chain investigations I mentioned earlier — fewer than 40 million tablets were shipped worldwide in Q4 last year. That’s talking about shipments, not sales, and it also takes into account demand from educational and business customers, who likely represent a large proportion of global tablet demand. 

The Journal's story also has one of the most ludicrous things I've read in the newspaper: that "...Altman suggested the $6.5 billion acquisition has the potential to add $1 trillion in value to OpenAI," which would mean that OpenAI acquiring a washed former Apple designer who has designed basically nothing since 2019 to create a consumer AI device — a category that has categorically failed to catch on — would somehow nearly quadruple its valuation. Printing that statement without a series of sentences about how silly it is amounts to journalistic malpractice.

But something about Jony Ive gives reporters, analysts and influencers a particular kind of madness. Reporters frame this acquisition as "the big bet that Jony Ive can make AI hardware work," that this is OpenAI "crashing Apple's party," that this is "a wake up call" for Apple, that this is OpenAI "breaking away from the pack" by making "a whole range of devices from the ground up for AI."

Based on this coverage, one might think that Jony Ive has been, I dunno, building something since he left Apple in 2019, which CNBC called "the end of the hardware era at Apple" about six months before Apple launched its M1 series processors and markedly improved its hardware as a result. Hell, much of Apple’s hardware improvement has been because it walked away from Ive’s dubious design choices. Ive’s obsession with thinness led to the creation of the Butterfly Keyboard — a keyboard design that was deeply unpleasant to type on, with very little travel (the distance a key moves when pressed), and a propensity to break at the first glimpse of a speck of dust.

Millions of angry customers — including famed film director Taika Waititi — and a class-action lawsuit later, Apple ditched it and returned to the original design. Similarly, since Ive’s exit, Apple has added HDMI ports, SD card readers, and MagSafe charging back to its laptops. Y’know, the things that people — especially creatives — wanted and liked, but had to be eliminated because they added negligible levels of heft to a laptop. 

What Has Jony Ive Been Up To?

On leaving Apple in 2019 — where he'd been part time since 2015 (though the Wall Street Journal says he returned as a day-to-day executive in 2017, just in time to promise and then never ship the AirPower charging pad) — Ive formed LoveFrom, a design studio whose first (and primary) client was Apple, with a contract valued at more than $100 million, according to the New York Times, which reported the collapse of the relationship in 2022:

In recent weeks, with the contract coming up for renewal, the parties agreed not to extend it. Some Apple executives had questioned how much the company was paying Mr. Ive and had grown frustrated after several of its designers left to join Mr. Ive’s firm. And Mr. Ive wanted the freedom to take on clients without needing Apple’s clearance, these people said.

In 2020, LoveFrom signed a non-specific multi-year deal to “design the future of Airbnb.” LoveFrom also worked on some sort of seal for King Charles to give away during the coronation to — and I quote — “recognize private sector companies that are leading the way in creating sustainable markets.” It also designed an entirely new font for the multi-million dollar event, which does not matter to me in the slightest but led to some reporters writing entire stories about it. The project involves King Charles encouraging space companies. I don’t know, man.

I cannot find a single thing that Jony Ive has done since leaving Apple other than "signing deals." He hasn't designed or released a tech product of any kind. He was a consultant at Apple until 2022, though it's not exactly obvious what it is he did there since the death of Steve Jobs. People lovingly ascribe Apple's every success to Ive, forgetting that (as mentioned) Ive oversaw the truly abominable butterfly keyboard, as well as numerous other wonky designs, including the trashcan-shaped Mac Pro, the PowerMac G4 Cube (a machine aimed at professionals, with a price to match, but limited upgradability thanks to its weird design), and the notorious “hockey puck” mouse.

In fact, since leaving Apple, all I can confirm is that Jony Ive redesigned Airbnb in a non-specific way, made a new font, made a new system for putting on clothing, made a medal for the King of England to give companies that recycle, and made some non-specific contribution to creating an electric car that has yet to be shown to the public.

Are You Kidding Me?

Anyway, this is the guy who's going to be building a product that will ship 100 million units "faster than any company has ever shipped 100 million of something new before."

It took 3.6 years for Apple to sell 100 million iPhones, and nearly six years for it to sell 100 million Apple Watches. It took four years for Amazon to sell 100 million Echo devices. Former NFT scam Rabbit claims to have sold over 130,000 units of its "barely reviewable" "AI-powered" R1 device, but told FastCompany last year that the product had barely 5,000 daily active users. The Humane Pin was so bad that returns outpaced sales, with 10,000 devices shipped but many returned due to, well, it sucking. I cannot find another comparison point, because absolutely nobody has succeeded in making the next smartphone or "third device."

To give you another data point, Gartner — another reliable analyst firm, at least when it comes to historical sales trends, although its future-looking predictions about AI and the metaverse can be more ‘miss’ than ‘hit’ — says that the number of worldwide PC shipments (which includes desktops and laptops) hit 64.4 million in Q4 2024. OpenAI thinks that it’ll ship over one and a half times as many devices in one year as PCs were shipped during the 2024 holiday quarter. That’s insane. And that’s without mentioning things like… uh, I don’t know, who’ll actually build them? Where will you get your parts, Sam? Where will you get your chips? Most semiconductor manufacturers book orders months — if not years — in advance. And I doubt Qualcomm has a spare 100 million chipsets lying around that it’ll let you have for cheap.

Yet people seem super ready to believe — much like they were with the Rabbit R1 — except they're asking even less of Jony Ive and Sam Altman, the Abbott and Costello of bullshit merchants. It's hard to tell exactly what it is that Ive did at Apple, but what we do know is that Ive designed the Apple Watch, a product that flopped until Apple refocused it on fitness over fashion. According to the Wall Street Journal, Ive wanted the watch to be a "high-end fashion accessory" rather than the "extension of the iPhone" that Apple executives wanted, heavily suggesting that Ive was the reason the Apple Watch flopped, rather than the great mind that made Apple a success.

Anyway, this is the guy who's going to build the first true successor to the smartphone, something Jony Ive already failed to do with the full backing of the entire executive team at Apple, a company he worked at for decades, and one that has literally tens of billions of dollars in cash sitting in its bank accounts.

Jony Ive hasn't overseen the design or launch of a consumer electronics product in — at my most charitable guess — three years, though I'd be very surprised if his two-or-three-year-long consultancy deal with Apple involved him leading design on any product, otherwise Apple would have extended it.

If I was feeling especially uncharitable — and I am — I’d guess that Ive’s relationship with Apple ended up looking like that between Alicia Keys and Research in Motion, which in 2013 appointed the singer its “Global Creative Director,” a nebulous job title that gives Prabhakar Raghavan’s “Chief Technologist” a run for its money. Ive acted as a thread of continuity between the Jobs and Cook eras of Apple, while also adding a degree of celebrity to the company that Apple’s other execs — like Phil Schiller and Craig Federighi — otherwise lacked. 

He's teamed up with Sam Altman, a guy who has categorically failed to build any new consumer-facing product outside of the launch of ChatGPT, a product that loses OpenAI billions of dollars a year, to do the only other thing that loses a bunch of money — building hardware.

No, really, hardware is hard. You don't just design something and then send it to a guy in China: you have to go through multiple prototypes, then find one that actually does something useful, then work out how to mass-produce it, then actually build the industrial rails to do so, then build the infrastructure to run it, then ship it. At that point, even if the device is really good (it won't be, if it ever launches), you have to sell one hundred million of them, somehow.

I repeat myself: hardware is hard, to the point where even Apple and Microsoft can cock up in disastrous (and expensive) ways. Pretty much every 2011 MacBook Pro — at least, those with their own discrete GPUs — is now e-waste, in part because the combination of shoddy cooling and lead-free solder turned these machines into expensive bricks. The same was true of the Xbox 360. Even if the design and manufacturing processes go swimmingly, there’s no guarantee that problems won’t crop up later down the line.

I beg, plead, scream and yell to the tech media to take one fucking second to consider how ludicrous this is. Io raised $225 million in total funding (and OpenAI already owned 23% of the company from those rounds), a far cry from the billion dollars that The Information was claiming it wanted to raise in April 2024, heavily suggesting that whatever big, secret, sexy product was sitting there wasn't compelling enough to attract anyone other than Sutter Hill Ventures (which famously burned hundreds of millions of dollars investing in Lacework, a company that sold for $200 million and once gave away $30,000 of Lululemon gift cards in one night to anyone that would meet with the company’s sales representatives), Thrive (which has participated in or led multiple OpenAI funding rounds), Emerson Collective (run by Laurene Powell Jobs, a close friend of Jony Ive and Altman according to The Information) and, of course, OpenAI itself, which bought the company in its own stock after already owning 23% of its shares.

This deal reeks of desperation, and is, at best, a way for venture capitalists that feel bad about investing in Jony Ive's lack of productivity to get stock in OpenAI, a company that also doesn't build much product.

While OpenAI has succeeded in making multiple different models, what actual products have come out of GPT, Gemini or other Large Language Models? We're three joyless years into this crap, and there isn't a single consumer product of note other than ChatGPT, a product that gained its momentum through a hype campaign driven by press and markets that barely understood what they were hyping.

Despite all that media and investor attention — despite effectively the entirety of the tech industry focusing on this one specific thing — we're still yet to get any real consumer product. Somehow Sam Altman and Jony Ive are going to succeed where Google, Amazon, Meta, Apple, Samsung, LG, Huawei, Xiaomi, and every single other consumer electronics company has failed, and they're going to do so in less than a year, and said device is going to sell 100 million units.

OpenAI didn't acquire Jony Ive's company to build anything — it did so to increase OpenAI's valuation in the hopes that it can raise larger rounds of funding. It’s the equivalent of adding an extension to a decrepit, rotting house.

OpenAI, as a company, is lost. It has no moat, its models are hitting the point of diminishing returns and have been for some time, and as popular as ChatGPT may be, it isn't a business and constantly loses money.

On top of that, it requires more money than has ever been invested in a startup. SoftBank had to take out a $15 billion bridge loan from 21 different banks just to fund the first $7.5 billion of the $30 billion it’s promised OpenAI in its last funding round.

At this point, it isn't obvious how SoftBank affords the next part of that funding, and OpenAI using stock rather than cash to buy Jony Ive's company suggests that it doesn’t have much to spare. OpenAI is allegedly also buying AI coding company Windsurf for $3 billion. The deal was reported by Bloomberg on May 6, 2025, but it's not clear if it closed, or whether the deal would be in cash or stock, or really anything, and I have to ask: how much money does OpenAI really have?

And how much can it afford to burn? OpenAI’s operating costs are insane, and the company has already committed to several grand projects, while also pushing deeper and deeper into the red. And if — when? — its funding rounds convert into loans, because it failed to convert into a for-profit, OpenAI will have even less money to splash on nebulous vanity projects. Then again, asking questions like that isn't really how the media is doing business with OpenAI — or, for that matter, has done with the likes of Mark Zuckerberg, Satya Nadella, or Sundar Pichai. Everything has to be blindly accepted, written down and published, for fear of...what, exactly? Not getting the embargo to a product launch everybody else got? Missing out on the chance to blindly cover the next big thing, even if it won't be big, and might not even be a thing?

Into The Bullshitverse

So, I kicked off this newsletter with a bunch of links tied to the year 2026, and I did so because I want — no, need — you to understand how silly all of this is.

Sam Altman's OpenAI is going to, in the next year, according to reports:

  • Design, prototype, manufacture and ship the next big consumer tech device, shipping 100 million units — or, as mentioned, over one and a half times the number of PCs shipped in Q4 2024 — faster than any other company in history.
  • Bring both the barely-started Stargate Texas and the still-theoretical Stargate UAE data centers online.
    • As a note, SoftBank, which will have full financial responsibility for the project, is having trouble raising the supposed $100 billion to build it.
    • This project is also dependent on Crusoe, a company that has never built an AI data center, and one that is, to quote CEO Chase Lochmiller, being forced to "...deliver on the fastest schedule that a 100-megawatt-or-greater data center has ever been built."
    • In fact, the entire Texas project is contingent on debt. Bloomberg reports that both OpenAI and SoftBank will put in $19 billion each "to start" (with what money?), and Abu Dhabi-based investment firm MGX and Oracle are putting in $7 billion each. Oracle has also signed a 15-year-long lease with Crusoe, and Stargate has one customer — OpenAI.
  • Launch a non-specific AI-specific chip with Broadcom.
    • I cannot express how unlikely it is that this happens. Silicon is even harder than hardware!

Even one of these projects would be considered a stretch. A few weeks ago, Bloomberg Businessweek put out a story called "Inside the First Stargate AI Data Center." Just to be clear, this will be "fully operational" (or "constructed," depending on who you ask!) by the middle of 2026. The real title should've been "Outside the First Stargate AI Data Center," in part because Bloomberg didn't seem to be allowed into anything, and in part because it doesn't seem like there's an inside to visit.

Again, if I’m being uncharitable — which I am — this whole thing reminds me of that model town that North Korea built alongside the demilitarized zone to convince South Koreans of the beauty of the Juche system and the wisdom of the Dear Leader — except the beautiful, ornate houses are, in fact, empty shells. A modern-day Potemkin village. Bloomberg got to visit a Potemkin data center.

Data centers do not just pop out of the ground like weeds. They require masses of permits, endless construction, physical server architecture, massive amounts of power, and even if you somehow get all of that together, you still have to make everything inside it work. While analysts believe that NVIDIA has overcome the overheating issues with its Blackwell chips, Crusoe is brand fucking spanking new at this, and The Information described Stargate as "new terrain for Oracle...relying on scrappy but unproven startups...[and] more broadly, [Oracle] has less experience than its larger rivals in dealing with utilities to secure power and working with powerful and demanding customers whose plans change frequently."

In simpler terms, you have a company (Oracle) building something at a scale it’s never built at before, using a partner (Crusoe) which has never done this, for a company (OpenAI) that regularly underestimates the demands it puts on its servers. The project being built is also the largest of its kind, and is being built during the reign of an administration that births and kills a new tariff seemingly every day.

Anyway, all of this needs to happen while OpenAI also funds its consumer electronics product, as well as its main operations, which will lose it $14 billion in 2026, according to The Information.

It also needs to become a for-profit by the end of 2025 or lose $10 billion of SoftBank's funding, a plan that SoftBank accepted but Microsoft is yet to approve, in part (according to The Information) because OpenAI wants to both give it a smaller cut of profits and stop Microsoft from accessing its technology past 2030.

This is an insane negotiation strategy — leaking to the press that you want to short-change your biggest investor both literally and figuratively — and however it resolves will be a big tell as to how stupid the C-suite at Microsoft really is. Microsoft shouldn't budge a fucking inch. OpenAI is a loser of a company run by a career liar that cannot ship product, only further iterations of an increasingly-commoditized series of Large Language Models.


At this point, things are so ridiculous that I feel like I'm huffing paint fumes every time I read Techmeme.

If you're a member of the media reading this, I implore you to look more critically at what's going on, to learn about the industries in question, and to begin asking yourselves why you continually and blandly write up whatever it is these companies say. If you think you're "not a financial journalist" or "not a data center journalist" and thus "can't understand this stuff," you're wrong. It isn't that complex, otherwise a part-time blogger and podcaster wouldn't be able to pry it apart.

That being said, there's no excuse for how everybody covered this Jony Ive fiasco. Even if you think this device ships, it took very little time and energy to establish how little Jony Ive has done since leaving Apple, and only a little more time to work out exactly how ridiculous everything about it is. I know you need stories about stuff — I know you have to cover an announcement like this — but god, would it fucking hurt to write something even a little critical? Is it too much to ask that you sit down and find out what Jony Ive actually does and then think about what that might mean for the future?

This story is ridiculous. The facts, the figures, the people involved, everything is stupid, and every time you write a story without acknowledging how unstable and untenable it is, you further misinform your readers. Even if I’m wrong — even if they somehow pull off all of this stuff — you still left out a valuable part of the story, refused to critique the powerful, and ultimately decided that marketing material and ephemera were more valuable than honest analysis. 

There is no reason to fill in the gaps or “give the benefit of the doubt” to billionaires, and every single time you do, you fail your audience. If that hurts to read, perhaps ask yourself why. 

Holding these people accountable isn’t just about asking tough questions, but questioning their narratives and actions and plans, and being willing to write that something is ridiculous, fantastical, or outlandish. Doing so — even if you end up being proven wrong — is how you actually write history, rather than simply existing as a vessel for Sam Altman or Jony Ive or Dario Amodei or any number of the world’s Sloppenheimers. 

Look, I am nobody special. I am not supernaturally intelligent, nor am I connected to vast swaths of data or suppliers that allow me to write this. I am a guy with a search engine who remembers when people said stuff, and the only thing you lack is my ability to write 5000 or more words in the space of five hours. If you need help, I am here to help you. If you need encouragement, I am here to provide it. If you need critiques, well, scroll up. Either way, I want to see a better tech media, because that’s what the world deserves.

You can do better.

The Era Of The Business Idiot

2025-05-22 00:34:18

Fair warning: this is the longest thing I've written on this newsletter. I do apologize.

Soundtrack: EL-P - $4 Vic

Listen to my podcast Better Offline. We have merch.


Last week, Bloomberg profiled Microsoft CEO Satya Nadella, revealing that he's either a liar or a specific kind of idiot.

The article revealed that — assuming we believe him, and this wasn’t merely a thinly-veiled advert for Microsoft’s AI tech — Copilot consumes Nadella’s life outside the office as well as at work.

He likes podcasts, but instead of listening to them, he loads transcripts into the Copilot app on his iPhone so he can chat with the voice assistant about the content of an episode in the car on his commute to Redmond. At the office, he relies on Copilot to deliver summaries of messages he receives in Outlook and Teams and toggles among at least 10 custom agents from Copilot Studio. He views them as his AI chiefs of staff, delegating meeting prep, research and other tasks to the bots. “I’m an email typist,” Nadella jokes of his job, noting that Copilot is thankfully very good at triaging his messages. 

None of these tasks are things that require you to use AI. You can read your messages on Outlook and Teams without having them summarized — and I’d argue that a well-written email is one that doesn’t require a summary. Podcasts are not there "to be chatted about" with an AI. Preparing for meetings isn't something that requires AI, nor is research, unless, of course, you don't really give a shit about the actual content of what you're reading or the message of what you're saying, just that you are "saying the right thing."

To be clear, I am deeply unconvinced that Nadella actually runs his life in this way, but if he does, Microsoft’s board should fire him immediately.

In any case, the article is rambling, cloying, and ignores Microsoft AI CEO Mustafa Suleyman's documented history of abusing his workers. Ten custom agents that do what? What do you mean by "other tasks"? Why are these questions never asked? Is it because the reporters know they won't get an answer? Is it because the reporters are too polite to ask more probing questions, knowing that these anecdotes are likely entirely made up as a means to promote a flagging AI ecosystem that cost billions to construct but doesn’t really seem to do anything, and because the reporter in question doesn’t want to force Satya to build a bigger house of cards than he needs to?

Or is it because we, as a society, do not want to look too closely at the powerful? Is it because we've handed our economy to men that get paid $79 million a year to do a job they can't seem to describe, and even that, they would sooner offload to a bunch of unreliable AI models than actually do?

We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what's useful" is dictated not by outputs or metrics that one can measure but rather the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don't participate in it and our tech companies are directed by people that don't experience the problems they claim to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

I, however, believe the problem runs a little deeper than the economy, which is a symptom of a bigger, virulent, and treatment-resistant plague that has infected the minds of those currently tugging at the levers of power — and really, the only levers that actually matter.

The incentives behind effectively everything we do have been broken by decades of neoliberal thinking, where the idea of a company — an entity created to do a thing in exchange for money — has been drained of all meaning beyond the continued domination and extraction of everything around it, focusing heavily on short-term gains and growth at all costs. In doing so, the definition of a “good business” has changed from one that makes good products at a fair price to a sustainable and loyal market, to one that can display the most stock price growth from quarter to quarter.

This is the Rot Economy, which is a useful description for how tech companies have voluntarily degraded their core products in order to placate shareholders, transforming useful — and sometimes beloved — services into hollow shells of their former selves as a means of expressing growth. But it’s worth noting that this transformation isn’t constrained to the tech industry, nor was it a phenomenon that occurred when the tech industry entered its current VC-fuelled, publicly-traded incarnation.

In The Shareholder Supremacy, I drew a line from an early 20th-century court ruling, to former General Electric CEO Jack Welch, to the current tech industry, but there’s one figure I didn’t pay as much attention to, and I regrettably now have to do so.

Famed Chicago School economist (and dweller of Hell) Milton Friedman once argued in his 1970 doctrine that those who didn’t focus on shareholder value were “unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades," and that any social responsibility — say, treating workers well, doing anything other than focus on shareholder value — is tantamount to an executive taxing his shareholders by "spending their money" on their own personal beliefs.

Friedman was a fundamentalist when it came to unrestricted, unfettered capitalism, and this zealotry surpassed any sense of basic human morality — if he had any — at times. For example, in his book, Capitalism and Freedom, he argued that companies should be allowed to discriminate on racial grounds because the owner might suffer should they be required to hire an equally or better-qualified Black person.

Bear in mind, this was written at the height of the civil rights movement, just six years before the assassination of Martin Luther King, and when America was rapidly waking up to the evils of racism and segregation (a process, I add, that is ongoing and sadly not complete). This is a direct quote:

“...consider a situation in which there are grocery stores serving a neighborhood inhabited by people who have a strong aversion to being waited on by Negro clerks. Suppose one of the grocery stores has a vacancy for a clerk and the first applicant qualified in other respects happens to be a Negro. Let us suppose that as a result of the law the store is required to hire him. The effect of this action will be to reduce the business done by this store and to impose losses on the owner. If the preference of the community is strong enough, it may even cause the store to close. When the owner of the store hires white clerks in preference to Negroes in the absence of the law, he may not be expressing any preference or prejudice, or taste of his own. He may simply be transmitting the tastes of the community. He is, as it were, producing the services for the consumers that the consumers are willing to pay for. Nonetheless, he is harmed, and indeed may be the only one harmed appreciably, by a law which prohibits him from engaging in this activity, that is, prohibits him from pandering to the tastes of the community for having a white rather than a Negro clerk. The consumers, whose preferences the law is intended to curb, will be affected substantially only to the extent that the number of stores is limited and hence they must pay higher prices because one store has gone out of business.”

Friedman was grotesque. I am not religious, but I hope Hell exists if only for him. 

The broader point I’m trying to make is that neoliberalism is inherently selfish, believing that the free market should reign supreme, bereft of government intervention, regulation or interference, thinking that somehow these terms will enable "freedom" rather than a kind of market-dominated quasi-dictatorship where our entire lives are dominated by the whims of the affluent, and that there is no institution that can possibly push back against them. 

Friedman himself makes the facile argument that economic freedom — which, he says, is synonymous with unfettered capitalism — is a necessary condition of unfettered political freedom. Obviously, that’s bollocks, although it’s an argument that’s proven persuasive with a certain class of people that are either intellectually or morally hollow (or both).

Neoliberalism also represents a kind of modern-day feudalism, dividing society based on whether someone is a shareholder or not, with the former taking precedence and the latter seen as irrelevant at best, or disposable at worst. It’s curious that Friedman saw economic freedom — a state that is non-interventionist in economic matters — as essential for political freedom, while also failing to see equality as the same. 

I realize this is all very big and clunky, but I want you to understand how these incentives have fundamentally changed everything, and why they are responsible for the rot we see in our society and our workplaces. When your only incentive is shareholder value, and you elevate shareholder value to a platonic ideal, everything else is secondary, including the customer you are selling something to. Friedman himself makes a moral case for discrimination, because shareholder value — in his example, the store owner — matters more than racial equality at its most basic level.

When you care only about shareholder value, the only job you have is to promote further exploitation and dominance — not to have happy customers, not to make your company "a good place to work," not to make a good product, not to make a difference or contribute to anything other than further growth.

While this is, to anyone with a vapor of an intellectual or moral dimension, absolutely fucking stupid, it’s an idea that’s proven depressingly endemic among the managerial elite, in part because it has entered the culture, and because it is hammered home in MBA classes and corporate training seminars.

In simpler terms, modern business theory trains executives not to be good at something, or to make a company based on their particular skills, but to "find a market opportunity" and exploit it. The Chief Executive — who makes over 300 times more than their average worker — is no longer a leadership position, but a kind of figurehead measured on their ability to continually grow the market capitalization of their company. It is a position inherently defined by its lack of labor, the amorphousness of its purpose and its lack of any clear responsibility. 

While CEOs do get fired when things go badly, it's often after a prolonged period of decline and stagnancy, and almost always comes with some sort of payoff — and when I say "badly," I mean that growth has slowed to the point that even firing masses of people doesn't make things better. 

Sidebar: I also note that “fired” means something different when it comes to top execs. Excluding those fired due to criminal levels of malfeasance — like Robert Moffat, the man once tipped to be the next CEO of IBM, had he not been convicted of securities fraud and jailed for six months, thus losing nearly $85m in benefits — most ousted corporate leaders enjoy generous severance packages, far beyond the usual “two weeks of pay and COBRA.” WeWork founder Adam Neumann’s $200m in cash and $225m in (now-worthless) stock is perhaps the most egregious example of this.

We have, as a society, reframed all business leadership — which is increasingly broad, consisting of all management from the C-suite down — to be the equivalent of a mall cop, a person that exists to make sure people are working, without having any real accountability for the work themselves or even understanding the work itself.

When the leader of a company doesn't participate in or respect the production of the goods that enriches them, it creates a culture that enables similarly vacuous leaders on all levels. Management as a concept no longer means doing "work," but establishing cultures of dominance and value extraction. A CEO isn't measured on happy customers or even how good their revenue is today, but how good revenue might be tomorrow and whether those customers are paying them more. A "manager," much like a CEO, is no longer a position with any real responsibility — they're there to make sure you're working, to know enough about your work that they can sort of tell you what to do, but somehow the job of "telling you what to do" doesn't come with any actual work, and the instructions don’t need to be useful or even meaningful.

Decades of direct erosion of the very concept of leadership mean that the people running companies have been selected not based on their actual efficacy — especially as the position became defined by its lack of actual production — but on whether they resemble what a manager or executive is meant to look like based on the work that somebody else did.

That’s how someone like David Zaslav, a lawyer by trade and arguably the worst CEO in the entertainment industry, managed to become the head of Warner Brothers (that, and kissing up to Jack Welch, who he called a “big brother” that “picked him up like a friend”). It’s how Carly Fiorina — an MBA by trade — went on to become the head of HP, only to drive the company into a ditch where it stopped innovating, and largely missed the biggest opportunities of the early Internet era. The three CEOs that followed (Mark Hurd, who was ousted after fudging expense reports to send money to a love interest and still got tens of millions of dollars in severance; Leo Apotheker, who the New York Times suggests may have been worse than Fiorina; and Meg Whitman, famous for being a terrible CEO at HP and co-founding doomed video startup Quibi) similarly came from non-tech backgrounds, and similarly did a shitty job, in part because they didn’t understand the company or the products or the customers.

Management has, over the course of the past few decades, eroded the very fabric of corporate America, and I'd argue it’s done the same in multiple other western economies, too.

I’d also argue that this kind of dumb management thinking has infected the highest echelons of politics across the world, and especially in the UK, my country of birth and where I lived until 2011, delivering the same kind of disastrous effects but at a macro level, as they impacted not a single corporate entity but the very institutions of the state. I’m not naive. I don’t think that the average politician is a salt-of-the-earth type, someone who did a normal job and then decided to enter politics. Especially not in the UK, where the trappings of class permeate everything, and we’re yet to shake off the noxious influence of the aristocracy and constitutionally-mandated hereditary privilege. Our political elite often comes from one of two universities (Oxford and Cambridge, the alma mater of 20% of current UK Members of Parliament) and a handful of fee-paying schools (like Eton, which is a hellmouth for the worst people to ever exist, and educated 20 of the UK’s 55 prime ministers).

The UK has never been an egalitarian society. And yet, things have changed markedly in the past few decades. The difference between now and then is that the silver-spooned elite was, whether because they believed it or because it was politically expedient, not totally contemptuous of those at the bottom of the economic ladder.

I was born in the midst of the Thatcher government, and my formative years were spent as British society tried to restructure itself after her reforms. Thatcher, famously, was an acolyte of the Friedman school of thought, and spent her nearly twelve years in office dismantling the state and pushing the culture towards an American-style individualism, once quipping that there was “no such thing as society.”

She didn’t understand how things worked, but was nonetheless completely convinced of the power of the market to handle what had been the functions of the state — from housing to energy and water. The end result of this political and cultural shift was, in the long run, disastrous.

The UK has the smallest houses in the OECD, the smallest housing stock of any developed country, and some of the worst affordability. The privatization of the UK’s water infrastructure meant that money that would previously go towards infrastructure upgrades was, instead, funnelled to shareholders in the form of dividends. As a result, Britain is literally unable to process human waste and is actively dumping millions of liters of human sewage into its waterways and coastline. When Britain privatized its energy companies, the new management sold or closed the vast majority of its gas storage infrastructure. As a result, when the Ukraine War broke out and natural gas prices surged, Britain had some of the smallest reserves of any country in Europe, and was forced to buy gas at market prices — which were several times higher than their pre-war levels.

I’m no fan of Thatcher, and like Friedman, I too hope Hell exists, if only for the both of them. I wrote the above to emphasize the consequences of this clueless managerial thinking on a macro level — where the impacts aren’t just declining tech products or white-collar layoffs, but rather the emergence of generational crises in housing, energy, and the environment. These crises were obvious consequences of decisions made by someone whose belief in the free market was almost absolute, and whose fundamentalist beliefs surpassed the actual informed understanding of those working in energy, or housing, or water.

As the legendary advertiser Stanley Pollitt once said, “bullshit baffles brains.” The sweeping changes we’ve seen, both in our economy and in our society, have led to an unprecedented, gilded age of bullshit where nothing matters, and things — things of actual substance — matter nothing.

We live in a symbolic economy where we apply for jobs, writing CVs and cover letters to resemble a certain kind of hire, with our resume read by someone who doesn't do or understand our job, yet is responsible for determining whether we’re worthy of going to the next step of the hiring process. All this so that we might get an interview with a manager or executive who will decide whether they think we can do it. We are managed by people whose job is implicitly not to do work, but oversee it. We are, as children (and young adults), encouraged to aspire to become a manager or executive, to "own our own business," to "have people that work for us," and the terms of our society are, by default, that management is not a role you work at, so much as a position you hold — a figurehead that passes the buck and makes far more of them than you do.

This problem, I believe, has poisoned the fabric of almost every part of modern business, elevating people that don't do work to oversee companies that make things they don't understand, creating strata of management that do not do anything but create further distance from actually doing a job.

While some of you might automatically think I'm talking about Graeber's concept of Bullshit Jobs, this is far, far bigger. The system as it stands selects people at all levels of management specifically because they resemble the kind of specious, work-averse dullard that runs seemingly every company — a person built to go from meeting to meeting with the vague consternation that suggests they're "busy."

As a result, the higher you get up in an organization, the further you get from the customer, the problem you're solving, and any of the actual work, and the higher up you get, the more power you have to change the conditions of the business. On some level, modern corporate power structures are a giant game of telephone where vibes beget further vibes, where managers only kind-of-sort-of understand what's going on, and the more vague one's understanding is, the more likely you are to lean toward what's good, or easy, or makes you feel warm and fuzzy inside.

The system selects for people comfortable in these roles, creating org charts full of people that become harder and harder to justify other than "they've been here a while." They do not do "work" on the "product," and their answer as to why would be "what, am I meant to go down on the line and use a machine?" or "am I meant to call a customer and make a sale?" and the answer is yes, you lazy fucking piece of shit, you should do that once in a while, or at the very least go and watch or listen to somebody else do so, and do so regularly.

But that's not what a manager does, right? Management isn't work, it's about thinking really hard and telling people what to do. It's about making the calls. It's about "managing people," and that can mean just about anything, but often means "who do I take credit from or pass blame to," because modern management has been stripped of all meaning other than continually reinforcing power structures for the next manager up.

This system creates products for these people, because these people are more often than not the ones in power — they are your boss, your boss' boss, and their boss too. Big companies build products sold by specious executives or managers to other specious executives, and thus the products themselves stop resembling things that solve problems so much as they resemble a solution. After all, the person buying it — at least at the scale of a public company — isn’t necessarily the recipient of the final product, so they too are trained (and selected) to make calls based on vibes.

I believe the scale of this problem is society-wide, and it is, at its core, a destruction of what it means to be a leader, and a valorization of selfishness and isolationist thinking, turning labor into a faceless resource, which naturally leads to seeing customers in an equally faceless way, their problems generalized, their pain points viewed as parts of a PowerPoint rather than anything that your company earnestly tries to solve or even really thinks about. And that assumes that said pain points are even considered to begin with, or not ignored in favor of a fictitious and purely hypothetical pain point.

People — be they the ones you're paying or paying you — become numbers. We have created and elevated an entirely new class of person, the nebulous "manager," and told decades-worth of children that's what they should aspire to, that the next step from doing a job is for us to tell other people to do a job, until we're able to one day tell those people how to do their job, with each rung on the corporate ladder further distancing ourselves from anything that interacts with reality.

The real breaking point is fairly simple: the higher up you go at a company, the further you are from problems or purpose. Everything is abstract — the people that work for you, the people you work for, and even the tasks you do. 

We train people — from a young age! — to generalize and distance themselves from actual tasks, to aspire to doing managerial work, because managers are well-paid and "know what's going on," even if they haven't actually known what was going on for years, if they ever did. This phenomenon has led to the stigmatization of blue-collar work (and the subsequent evisceration of practical trade and technical education across most of the developed world) in favor of universities. Society respects an MBA more than a plumber, even though the latter benefits society more — though I concede that both roles involve, on some level, shit, with the plumber unblocking it and the MBA spewing it.

Sidebar: Hey, have you noticed how most of the calls for people to return to the office come not from people who actually do the jobs, but occupy managerial roles? More on that later. 

I believe this process has created a symbolic society — one where people are elevated not by any actual ability to do something or knowledge they may have, but by their ability to make the right noises and look the right way to get ahead. The power structures of modern society are run by business idiots — people that have learned enough to impress the people above them, because the business idiots have had power for decades. They have bred out true meritocracy or achievement or value-creation in favor of symbolic growth and superficial intelligence, because real work is hard, and there are so many of them in power they've all found a way to work together.

I need you to understand how widespread this problem is, because it is why everything feels fucking wrong.


Think of the Business Idiot as a kind of con artist, except the con has become the standard way of doing business for an alarmingly large part of society. 

The Business Idiot is the manager that doesn't seem to do anything but keeps being promoted, and the chief executive officer of a public company that says boring, specious nonsense about AI. They're the tenured professor that you wish would die, the administrator whose only job appears to be opening and closing their laptop, the consultant that can come up with a million reasons to charge you more money yet not one metric to judge their success by, the marketing executive that's worked exactly three years at every major cloud player but does not appear to have done anything, and the investor that invests "based on founders," but really means "guys that look and sound exactly like them."

These people are present throughout the private and public sector, and our governments too, and they paradoxically do nothing of substance, but somehow damage everything they touch. This isn’t to say our public and private sectors are entirely useless — just that these people have poisoned so many parts of our power structures that avoiding them is impossible.

Our economy is oriented around them — made easier and more illogical for their benefit — because literally their only goal in life has been to take and use power. The Business Idiot is also an authoritarian, and will do whatever they need to — including harming the institution they work for, or those closest to them, like their co-workers or their community — as a means of avoiding true accountability or responsibility.

Decades of neoliberalism have incentivized their rise, because when you incentivize society to become management — to "manage or run a company" rather than do something for a reason or purpose — you are incentivizing a kind of corporate narcissism, one that bleeds into whatever field the person goes into, be it public or private. We go to college as a means of getting a job after college using the grades we got in college, rendering many students desperate to get the best grades they can versus "learn" anything, because our economy is riddled with power structures controlled by people that don't know stuff and find it offensive when you remind them.

Our society is in the thrall of dumb management, and functions as such. Every government, the top quarter of every org chart, features little Neros who, instead of battling the fire engulfing Rome, are sat in their palaces strumming an off-key version of “Wonderwall” on the lyre and grumbling about how the firefighters need to work harder, and maybe we could replace them with an LLM and a smart sprinkler system. 

Every institution keeps its core constituents and labor forces at arm's length, and effectively anything built at scale quickly becomes distanced from both the customer and laborer. This disconnection — or alienation — sits at the center of almost every problem I've ever talked about. Why would companies push generative AI in seemingly every part of their service, even though customers don't like it and it doesn't really work?

It's simple: they neither know nor care what the customer wants, barely know how their businesses function, barely know what their products do, and barely understand what their workers are doing, meaning that generative AI feels magical, because it does an impression of somebody doing a job, which is an accurate way of describing how most executives and middle managers operate.


Let me get a little more specific.

An IBM study based on conversations with 2,000 global CEOs recently found that only 25% of AI initiatives have delivered their expected ROI over the last few years and, worse still, that "64% of CEOs surveyed acknowledge that the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring to the organization." 50% of respondents also said that "the pace of recent investments has left their organization with disconnected, piecemeal technology," almost as if they don't know what they're doing and are just putting AI in stuff for no reason.

Johnson & Johnson recently decided to "shift from broad generative AI experimentation to a focused approach on high-value use cases" according to the Wall Street Journal, adding that "only 10 to 15% of use cases were driving about 80% of the value." Its last two CEOs (Alex Gorsky and Joaquin Duato) both have MBAs, with current CEO Duato having spent his previous ten years at Johnson & Johnson as "some sort of Chairman or Vice President," and the two CEOs before him (Gorsky and William Weldon) were both pharmaceutical sales and marketing people.

Fun fact about Alex Gorsky! During his first tenure at Johnson & Johnson he led marketing of products that deliberately underplayed some drugs' side effects and paid off the largest nursing home pharmacy in America to sell more drugs to old people.

The term "executive" loosely refers to a person who moves around numbers and hopes for the best. The modern executive does not "lead," but prod, their managers hall monitors for organizations run predominantly by people that, by design, are entirely removed from the business itself even in roles like marketing and sales, where CMOs and VPs bark orders without really participating in the process.

We talk eagerly about how young people in entry level jobs should "earn their stripes" by doing "grunt work," and that too is the neoliberal poison in the veins of our society, because, by definition, your very first experience of the workforce is working hard enough so that you don't have to work as hard.

And anyway, the managerial types who bitch about the entitlement and unrealistic expectations of young people are the same ones that eviscerated the bottom rung of the career ladder — typically by offshoring many of these roles, or consolidating them into the responsibilities of their increasingly burned-out senior workers — or who now treat AI as a way to eliminate what they regard as an optional cost center, rather than the future of their workforce.

Society berated people for "quiet quitting," a ghastly euphemism for “doing the job as specified in your employment contract,” in 2022 because journalism is enthralled by the management class, and because the management class has so thoroughly rewritten the concept of what "labor" means that people got called lazy for literally doing their jobs. The middle manager brain doesn't see a worker as somebody hired and paid for a job, but as an asset that must provide a return. As a result, if another asset comes along that could potentially provide a bigger return — like an offshore worker, or an AI agent — that middle manager won’t hesitate to drop them. 

Artificial intelligence is the ultimate panacea for the Business Idiot — a tool that gives an impression of productivity, with far more production than the Business Idiot themselves. The Information reported recently that ServiceNow CEO Bill McDermott — the chief executive of a company with a market capitalization of over $200 billion, despite the fact that, like Salesforce, nobody really knows what it does — chose to push AI across his whole organization (both in product and in practice) based on the mental consideration I'd usually associate with a raven finding a shiny object:

When ChatGPT debuted in November 2022, McDermott joined his executives around a boardroom table and they played with the chatbot together. From there, he made a quick decision. “Bill’s like, ‘Let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI,’” recalled Tzitzon, the ServiceNow vice chair.

To begin a customer meeting on AI, McDermott has asked his salespeople to do what amounts to their best impression of him: Present AI not as a matter of bots or databases but in grand-sounding terms, like "business transformation."

During the push to grow AI, McDermott has insisted his managers improve efficiency across their teams. He is laser-focused on a sales team’s participation rate. “Let’s assume you’re a manager, and you have 12 direct reports,” he said. “Now let’s assume out of those 12, two people did good, which was so good that the manager was 110% of plan. I don’t think that’s good. I tell the manager: ‘What did the other 10 do?’”

You'll notice that all of this is complete nonsense. What do you mean "efficiency"? What does that quote even mean? 110% of plan? What're you on about? Did you hit your head on something, Bill?

I'd wager Bill is concussion-free — and an example of a true Business Idiot — a person with incredible power and wealth that makes decisions not based on knowing stuff or caring about his customers, but on the latest shiny thing that makes him think "line go up." No, really, that's Bill McDermott's thing. Back in 2022, he told Yahoo Finance the metaverse was "real" and that ServiceNow could help someone "create an e-mall in the metaverse" and have a futuristic store of some sort. One might wonder how ServiceNow provided that, and the answer is that it didn't. I cannot find a single ServiceNow product that ever included anything of the sort.

Bill, like any of these CEOs, doesn't really know stuff, or even do stuff, he just is. The corporate equivalent of a stain on a carpet that nobody knows the origin of, but that hasn't been removed. The modern executive is symbolic, and the media has — due to the large number of Business Idiots running these outlets and the middle managers stuffed into the editorial class — been trained to never ask difficult questions, such as "what the fuck are you talking about, Bill?" or even the humble "what does that mean?" or "how would you do that?" or "I'm not sure I understand, would you mind explaining?"

Perhaps that last part is the symptom of the overall problem. So many layers of editorial and managerial power are filled with people that don't know anything, and there's never anyone crueler about ignorance than somebody that's ignorant themselves.

Worse still, in many fields — journalism included — we are rarely rewarded for knowing things or being "right," but for being right in the way that keeps the people with the keys from scraping them across our cars. We are, however, rewarded for saying the right thing at the right time, which more often than not means resembling our (white, male) superiors, speaking like our peers, and delivering results in the way that makes everybody feel happiest.


A great example of our vibes-based society came in October 2021, when a Washington Post article written by two Harvard professors railed against remote work by citing a Microsoft-funded anti-remote study and quoting 130-year-old economist Alfred Marshall about how "workers gather in dense clusters," ignoring the fact that Marshall was so racist they've had to write papers about it, how excited he was about eugenics, or the fact that he was writing about fucking factories.

Remote work terrifies the Business Idiot, because it removes the performative layer that allowed them to stomp around and feel important, reducing their work to, well...work. Office culture is inherently heteronormative and white, black women are less likely to be promoted by their managers, and continuing the existence of "The Office" is all about making sure the Business Idiot reigns supreme. Looking at you and trying to work out what you're doing without ever really helping is a big part of being a manager, and remote work takes that ability away from the managerial hall monitors — and if you're a manager reading this and saying you don't do this, I challenge you to talk to another person that doesn't confirm your biases.

The Business Idiot reigns supreme. Their existence holds up almost every public company, and remote work was the first time they willingly raised their heads. Google demanded employees return to the office in 2021 — but let one executive work remotely from New Zealand, because absolutely none of the decision-making was done with people that actually do work. While we can (well, you can, I'm not interested) debate whether exclusively working remotely is as productive, the Return To Office push played out almost entirely in two ways:

  1. Executives demanding people return to the office.
  2. Journalists asking executives if remote work was good or not, entirely ignoring the people actually doing the work.

The New York Times, The Washington Post, The Wall Street Journal, and many, many other outlets all fell for this crap because the Business Idiots have captured our media too, training even talented journalists to defer to power at every turn. When every power structure is stuffed full of do-nothing management types that have learned exactly as little as they need to as a means to get by, it's inevitable that journalism caters to them — specious, thoughtless reproductions of the powerful's ideas.

Look at the coverage of AI, or the metaverse, or cryptocurrency, or Clubhouse. Look at how willingly reporters will accept narratives not based on practical experience or what the technology can do, but what the powerful (and the popular) are suddenly interested in. Every single tech bubble followed the same path, and that path was paved with flawed, deferential and specious journalism, from small blogs to the biggest mastheads.

Look at how reporters talk to executives — not just the way they ask things (like Nilay Patel's 100+ word questions to Sundar Pichai in his abominable interview), but the claims they let pass unchallenged, and the willingness reporters have to just accept what they're told. Satya Nadella is the CEO of a company with a market capitalization of over $3 trillion. I have no idea how you, as a reporter, do not say "Satya, what the fuck? You're outsourcing most of your life to generative AI? That's insane!" or even "do you really do that?" and then ask further questions.

But that would get you in trouble. The editorial class is the managerial class now, and has spent decades mentoring young reporters to not ask questions, to not push back, to believe that a big, strong, powerful company CEO would never mislead them. Kara Swisher's half-assed interviews are considered "daring" and "critical" because journalism has, at large, lost its teeth, breeding reporters rewarded for knowing a little bit about a few things and punishing those who ask too many questions or refuse to fall in line.

The reason they don't want you to ask these questions is that the Business Idiot isn't big on answers. Editors that tell you not to push too hard are doing so because they know the executive likely won't have an answer. It isn't just about the PR person that trained them, but the fact that these men more often than not have only a glancing understanding of their underlying business.

Yet in the same way that Business Idiots penetrated every other part of society, they eventually found their way to journalism. While we can (and should) scream at the disconnected idiots that ran Vice into the ground, the problem is everywhere, because the Business Idiots aren't just at the top, but infecting the power structures underlying every newsroom.

While there are many really great editors, there are plenty more that barely understand the articles they edit, the stories they commission, or that make reporters pull punches for fear of advertiser blowback.

That, and mentorship is dead across effectively all parts of society, meaning that most reporters (as with many jobs) learn by watching each other, which means they all make sure to not ask the rough questions, and not push too hard against the party/market/company messaging until everybody else does it.

And under these conditions, Business Idiots thrive.


The Business Idiot's reign is one of speciousness and shortcuts, of acquisition, of dominance and of theft. Mentoring people is something you do to pass on knowledge — it may make them grateful to you, but it ultimately, in the mind of a Business Idiot, creates a competitor or rival. 

Investing in talent, or worker conditions, or even really work itself would require you to know what you're talking about, or actually do work, which doesn't make sense when you're talking to a worker. They're the ones who're meant to work! You're there to manage them! Yet they keep talking back — asking questions about the work you want them to do, asking you to step in and help on something — and all of that's so annoying. Just know the stuff already! Get it done! I have to go to lunch and then go back out to another lunch! 

I believe this is the predominant mindset across most of the powerful, to the point that everything in the world is constructed to reaffirm their beliefs rather than follow any logical path. Our stock market is inherently illogical, driven not by whether a company is good or bad, but by whether it can show growth, even if said growth is horrifically unprofitable. I'd argue it's because the market has no idea how to make intelligent decisions, just complex ones, ones that mean you don't really need to understand the business so much as the associated vibes of the industry.

Friedman's influence and Reagan's policies have allowed our markets to be dominated by Business Idiocy, where a bad company can be a good stock because everybody (i.e., other traders and the business press) likes how it looks, which allows the Business Idiots to continue making profit through illogical and partially-rigged market-making, with the business press helpfully pushing up their narratives.

This also keeps regular people from accumulating too much wealth — if regular people could set the tone for the markets, where a good company is one that makes something people like, gets paid for it, and makes more money than it spends, that might make things a little too even.

It doesn't matter that CoreWeave quite literally does not have enough money for its capital expenditures and lost over $300 million in the last quarter, because its year-over-year growth was 420%. It doesn't matter that it has October loan payments that will crush the life out of the company, either. These narratives are fed to the media knowing that the media will print them, because thinking too hard about a stock would mean the Business Idiot had to think too, and that is not why they are in this business.

The "AI trade" is the Business Idiot's nirvana — a fascination for a managerial class that long since gave up any kind of meaningful contribution to the bottom line, as moving away from the fundamental creation of value as a business naturally leads to the same kind of specious value that one finds from generative AI.

I’m not even saying that there’s no returns, or that LLMs don’t do anything, or even that there’s no possible commercial use for generative AI. They just don’t do enough, almost by design, and we’re watching companies desperately try and contort them into something, anything that works, pretending so fucking hard they’ll stake their entire futures on the idea. Just fucking work, will you? Agentforce doesn’t make any money, it sucks, but god damn is Marc Benioff going to make you bear witness.

Does it matter that Agentforce doesn't make Salesforce any money? No! Because Benioff and Salesforce have got rich selling to fellow Business Idiots who then shove Salesforce into their organization without thinking about who would use it or how they'd use it other than in the most general ways. Agentforce was — and is — a fundamentally boring and insane product, charging $2 a conversation for a chatbot that, to quote The Information, provides customers with "...incorrect answers — AI hallucinations — while testing how the software handles customer service queries."

But this shit is catnip to the Business Idiot, because the Business Idiot ideally never has to deal with work, workers or customers. Generative AI doesn’t do enough to actually help us be better at our jobs, but it gives a good enough impression of something useful so that it can convince someone really stupid that doesn’t understand what you do that they don’t need you, sometimes.

A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarize your work. OpenAI's "Deep Research" wows professional Business Idiot Ezra Klein because he doesn't seem to realize that part of research is the process itself, not just the output: you learn about stuff as you research a topic, and that's what lets you come to a conclusion. The concept of an "agent" is the erotic dream of the managerial sect — a worker that they can personally command to generate product that they can claim as their own, all without ever having to know or do anything other than the bare minimum of keeping up appearances, which is the entirety of the Business Idiot's resume.

And because the Business Idiot's career has been built on only knowing exactly enough to get by, they don't dig into Large Language Models any further than hammering away at ChatGPT and saying "we must put AI in everything now." Yet the real problem is that for every Business Idiot selling a product, there are many more that will buy it. This has worked in the past for Software as a Service (SaaS) companies that grew fat and happy hawking giant annual contracts and continual upsells, because CIOs and CTOs work for Business Idiot CEOs that demand they "put AI in everything now," a nonsensical and desperate remit that's part growth-lust and part ignorance, born of the fear one gets when they're out of their depth.

Look at every single institution installing some sort of ChatGPT integration, and then look for the Business Idiot. Perhaps it's Cal State University Chancellor Mildred Garcia, who claimed that giving everybody a ChatGPT subscription would "elevate...students' educational experience across all fields of study, empower [its] faculty's teaching and research, and help provide the highly educated workforce that will drive California's future AI-driven economy," a nonsensical series of words used to justify a $16.9 million-a-year single-vendor no-bid contract for a product that is best known as either a shitty search engine or a way to cheat at college.

In some ways, Sam Altman is the Business Idiot's antichrist, taking advantage of a society where the powerful rarely know much other than what they want to control or dominate. ChatGPT and other AI tools are, for the most part, sold based on what they might do in the future to people that will never really use them, and Altman has done well to manipulate, pester and terrify those in power with the idea that they might miss out on something. Does anyone know what it is? No, they don't, because the powerful are Business Idiots too, willing to accept anything that somebody brings along that makes them feel good, or bad in a way that they can make headlines with.

Hey, whatever happened to Gavin Newsom's Blockchain executive order? Did that do anything?

In any case, Altman's whole Sloppenheimer motif has worked wonders on the Business Idiots in the markets and global governments that fear what artificial intelligence could do, even if they can't really define artificial intelligence, or what it could do, or what they're scared of. The fear of China's "rise in AI" is one partially based on sinophobia, and partially based on the fact that China has its own Business Idiots willing to shove hundreds of millions of dollars into data centers.

Generative AI has created a reckoning between the Business Idiot and the rest of society, its forced adoption and proliferation providing a meager return on a massive investment of capital while provoking revulsion in many people — not just at the Business Idiot's excitement about replacing them, but at how wrong the Business Idiot is.

While there are many people that dick around with ChatGPT, years after it launched we still can't find a clean way to say what it does or why it matters, other than the fact that everybody agreed it did. The media, now piloted by Business Idiots, has found itself declawed, its reporters unprepared, unwilling and unsupported, the backbone torn out of most newsrooms for fear that being too critical is somehow "not being objective," despite the fact that what you choose to cover objectively is still subjective.

Reporters still, to this day — as these companies burn billions of dollars to make an industry the size of the free-to-play gaming industry — refuse to say things that bluntly, because "the cost of inference is coming down" and "these companies have some of the smartest people in the world." They ignore the truth as it sits in front of them: that the combined annual recurring revenue of The Information's comprehensive database of every generative AI company is less than $10 billion, or $4 billion if you remove Anthropic and OpenAI.

ChatGPT's popularity is the ultimate Business Idiot success story — the "fastest growing product in Silicon Valley history" that didn't grow because it was useful, or good, or able to do anything in particular, but because a media controlled by Business Idiots decided it was "the next big thing" and has talked about it nonstop since November 2022, guaranteeing that everybody would try it, even though, to this day, the company can't really explain what it is you're meant to use it for.

Much like the Business Idiot themselves, ChatGPT doesn't need to do anything specific. It just needs to make the right sounds at the right times to impress people that barely care what it does other than make them feel futuristic.

Real people — regular people, not Business Idiots, not middle managers, not executive coaches, not MBAs, not CEOs — have seen this for what it was early and often, but real people are seldom the ones with the keys, and the media — even the people writing good stuff — regularly fails to directly and clearly say what's going on. 

The media is scared of doing the wrong thing — of "getting in trouble" with someone for "misquoting them" or "misreading what they said" — and in a society where in-depth knowledge is subordinate to knowing enough catchphrases, the fight often doesn't feel worth it even with an editor's blessing.

I also want to be clear that this goes far beyond money. Editors aren't just scared of advertisers being upset. They know that if narratives have to shift toward more critical, thoughtful coverage, they too will have to be more thoughtful and knowledgeable, which is rough when you are a Business Idiot and got there by editing the right people in a way that didn't help them in the slightest.


Nothing about what I'm saying should suggest the Business Idiot is weak. In fact, Business Idiots are fully in control — we have too many managers, and our most powerful positions are valorized for not knowing stuff, for having a general view that can "take the big picture," not realizing that a big picture is usually made up of lots of little brush strokes.

Yet there are, eventually, consequences for everything being controlled by Business Idiots.

Our current society — an unfair, unjust one dominated by half-broken tech products that make their owners billions — is the real punishment wrought by growth: a brain drain in corporate society, one that leads it to do illogical things and somehow make money. It doesn't make any fucking sense that generative AI got this big. The returns aren't there, the outcomes aren't there, and any sensible society would've put a gun to a ChatGPT and aggressively pulled the trigger.

Instead it's the symbolic future of capitalism — one that celebrates mediocrity and costs billions of dollars, every piece of human work it can consume, and the destruction of our planet, all because everybody has kind of agreed that this is what they're all doing, with nobody able to give a convincing explanation of what that even is. Generative AI is revolting both in how overstated its abilities are and in how it continually tests how low a standard someone will accept from a product, both in its outputs and in the desperate companies trying to integrate it into everything, and its proliferation throughout society and organizations is already fundamentally harmful.

We’re not just drowning in a sea of slop — we’re in a constant state of corporate AI beta tests, new “features” sprouting out of our products like new limbs that sometimes function normally but often attempt to strangle us. It’s unclear if companies forcing these products on us have contempt for us or simply don’t know what good looks like. Or perhaps it's both, with the Business Idiot resenting us for not scarfing down whatever they serve us, as that's what's worked before. 

They don't really understand their customers — they understand what a customer pays for and how a purchase is made. You know, like how the leaders of banks and asset managers during the subprime mortgage crisis didn't really think about whether people could pay those mortgages, just that they needed a lot of them to put in a CDO.

The Business Idiot's economy is one built for other Business Idiots. They can only make things that sell to companies that must always be in flux — which is the preferred environment of the Business Idiot, because if they're not perpetually starting new initiatives and jumping on new "innovations," they'd actually have to interact with the underlying production of the company. 

Does the software work? Sometimes! Do successful companies exist that sell like this? Sure! But look at today's software and tell me with a straight face that things feel good to use.

And something like generative AI was inevitable: an industry claiming to change the world that never really does so, full of businesses that don't function as businesses, full of flimflam and half-truths used to impress people who will likely never interact with it, or do so in only a passing way. By chasing out the people that actually build things in favor of the people that sell them, our economy is built on production puppetry — just like generative AI, and especially like ChatGPT.

These people are antithetical to what’s good in the world, and their power deprives us of happiness, the ability to thrive, and honestly any true innovation. The Business Idiot thrives on alienation — on distancing themselves from the customer and the thing they consume, and in many ways from society itself. Mark Zuckerberg wants us to have fake friends, Sam Altman wants us to have fake colleagues, and an increasingly loud group of executives salivate at the idea of replacing us with a fake version of us that will make a shittier version of what we make for a customer that said executive doesn’t fucking care about. 

They’re building products for other people that don’t interact with the real world. We are no longer their customers, and so, we’re worth even less than before — which, as is the case in a world dominated by shareholder supremacy, not all that much.

They do not exist to make us better — the Business Idiot doesn’t really care about the real world, or what you do, or who you are, or anything other than your contribution to their power and wealth. This is why so many squealing little middle managers look up to the Musks and Altmans of the world, because they see in them the same kind of specious corporate authoritarian, someone above work, and thinking, and knowledge. 


One of the most remarkable things about the Business Idiot is their near-invulnerability.

Modern management is resource control: shifting blame away from the manager (who should hold responsibility; after all, if they don't, why do they have that job?) and onto the laborer, knowing that the organization and the media will back it up.

While you may think I'm making a generalization, the 2021-2023 anti-remote work push in the media was grotesque proof of where the media's true allegiance lies — the media happily manufactured consent for return-to-office mandates from large companies by framing remote work as some sort of destructive force, doing all it could to disguise how modern management has no fucking idea how the workplace actually works.

These articles were effectively fan fiction for managers and bosses demanding we return to the office — ridiculous statements about how remote work “failed young people” (it didn’t) or how employees needed remote work more than their employers because “the chitchat, lunches and happy hours” are so important. Had any of those reporters spoken to an actual worker, they’d say that they value more time with their families, rather than the grind of a daily commute softened with the promise of an occasional company pizza party — which usually happens outside of the typical working hours, anyway. 

These articles rarely (if ever) cared about whether remote work was more productive, or that the disconnect appeared to be between managers and workers. It was, from the very beginning, about crushing the life out of a movement that gave workers more flexibility and mobility while suppressing managers’ ability to hide how little work they did. I give credit to CNBC in 2023 for saying the quiet part out loud — that “...the biggest disadvantage of remote work that employers cite is how difficult it is to observe and monitor employees” — because when you can’t do that, you have to (eugh!) actually know what they’re doing and understand their work. 

Yet higher up the chain, the invulnerability continues. 

CEOs may get fired — and more are getting fired than ever, although sadly not the ones we want — but always receive some sort of golden parachute payoff at the end before walking into another role at another organization doing exactly the same level of nothing.  

Yet before that happens, a CEO is allowed to pull basically every lever before they take a single ounce of accountability — laying people off, freezing pay, moving from salaried to contracted workers, closing down sites, cutting certain products, or even spending more fucking money. If you or I misallocated billions of dollars on stupid ideas, we'd be fired. CEOs, somehow, get paid more.

Let me give you an example. Microsoft CEO Satya Nadella said that the “ultimate computer…is the mixed reality world” and that Microsoft would be “inventing new computers and new computing” in 2016, pushing his senior executives to tell reporters that HoloLens was Microsoft’s next wave of computing in 2017, selling hundreds of millions of dollars’ worth of headsets to the military in 2019, then debuting HoloLens 2 at BUILD 2019 only for the on-stage demo to break in real time, calling for a referendum on capitalism in 2020, then saying he couldn’t overstate the breakthrough of the metaverse in 2021. Let’s see what he said about it (props to Preston Gralla of ComputerWorld for finding this):

Nadella, in that 2021 keynote, made big promises: “When we talk about the metaverse, we’re describing both a new platform and a new application type, similar to how we talked about the web and websites in the early ’90s…. In a sense, the metaverse enables us to embed computing into the real world and to embed the real world into computing, bringing real presence to any digital space. For years, we’ve talked about creating this digital representation of the world, but now, we actually have the opportunity to go into that world and participate in it.”

As Gralla notes, Nadella said Microsoft would be, “…beefing up development in projects such as its Mixed Reality Tool Kit MRTK, the virtual reality workspace project AltspaceVR (which it had bought back in 2017), its HoloLens virtual reality headset, and its industrial metaverse unit, among others,” before firing 100% of its industrial metaverse core team along with those behind MRTK and shutting down AltspaceVR in 2023, then discontinuing HoloLens 2 entirely in 2024.

Nadella was transparently copying Meta and Mark Zuckerberg’s ridiculous “metaverse” play, and absolutely nothing happened to him as a result. The media — outlets like The Verge and independents like Ben Thompson — happily boosted the metaverse idea when it was announced and conveniently forgot it the second that Microsoft and Meta wanted to talk about AI (no, really, both The Verge and Ben Thompson were ready and waiting) without a second’s consideration about what was previously said. 

A true Business Idiot never admits wrongdoing, and the more powerful the Business Idiot is, the more likely there are power structures that exist to spare them from having to do so. The media, captured by other Business Idiots, has become poisoned by power, deferring to its whims and ideals and treating CEOs with more respect, dignity and presumed intelligence than anyone that works for them. When a big company decides it wants to “do AI,” the natural reaction is to ask “how?” and write down the answer, rather than think about whether it’s possible, or whether the company might profit (say, by boosting its share price) by having whatever it says printed verbatim.

These people aren’t challenged by the media, or their employees, because their employees are vulnerable all the time, and often encouraged to buy into whatever bullshit is du jour, like hostages held by a terrorist group who eventually fall victim to Stockholm syndrome. They’re only challenged by shareholders, who are agnostic about idiocy because it isn’t core to value in any meaningful sense: as we’ve seen with crypto, the metaverse and AI, shareholders will tolerate infinite levels of idiocy if it boosts the value of their holdings.

It goes further, too. 2021 saw the largest amount of venture capital invested in the last decade — a record-breaking $643 billion, with a remarkable $329.5 billion of that invested in the US alone. Some of the biggest deals: Amazon reseller aggregator Thrasio, which raised $1 billion in October 2021 and filed for bankruptcy in February 2025; cloud security company Lacework, which raised $525 million in January 2021, then $1.3 billion in October 2021, and was rumored to be up for sale to Wiz, only for the deal to collapse; and autonomous car company Cruise, which raised $2.75 billion in 2021 and was killed off in December 2024.

The people who lose their livelihoods — those who took stock in lieu of cash compensation, and those who end up getting laid off at the end — are always workers, while people like Lacework co-CEO Jay Parikh (who oversaw “reckless spending” and “management dysfunction,” according to The Information) can walk into highly-paid positions at companies like Microsoft, as he did in October 2024, a few months after a fire sale to cybersecurity firm Fortinet for around $200 million, according to analysts.

It doesn’t matter if you’re wrong, or if you run your company badly, because the Business Idiot is infallible, and judged too by fellow disconnected Business Idiots. In a just society, nobody would want to touch any of the C-suite that oversaw a company that handed out Nintendo Switches to literally anyone who booked a meeting (as with Lacework). Instead, the stank remains on the employees alone. 

One point about this: Meta’s most recent layoffs were explicitly said to target low-performers, needlessly harming the future job prospects of those handed a pink slip in an already fucked tech job market. It was cruel and pointless and — I’m certain — a big fat lie.

Meta is spending big on AI, has spent big on the metaverse (which went nowhere), and owns two dying platforms (Instagram and Facebook) and one that’s hard to monetize (WhatsApp). It needs to get costs down and improve margins, and layoffs are one way to do that. And things are getting bad enough that Meta is now, according to The Information, walking around Silicon Valley begging other big tech companies for money to train its open source “Llama” LLM.

The “low-performer” jibe is an unnecessary twist of the knife, demonstrating that Meta will happily throw its workers under the bus if it serves its interests — because the optics of firing low-performers are different to, say, firing a bunch of people because you keep spunking money on dead-end vanity projects and me-too products that nobody wants or uses.

Mark Zuckerberg, I should add, owns an island in Hawaii. The idea that he even thinks this much about Meta is disgraceful. Go outside, you fucking freak.

It’s so easy, and perhaps inevitable, to feel a sense of nihilism about it all. Nothing matters. It’s all symbolic. Our world is filled with companies run by people who don’t interact with the business, and that raise money from venture capitalists that neither run businesses nor really have any experience doing so. And despite the fact that these people exist several abstractions from reality, the things that they do and the decisions they make impact us all. And it’s hard to imagine how to fix it. 


We live in a system of iniquity, dominated by people that do not interact with the real world, who have created an entire system run by their fellow Business Idiots. The Rot Economy’s growth-at-all-costs mania is a symptom of the grander problem of shareholder supremacy, and the single-minded economic focus on shareholder value inevitably ends in an economy run by and for Business Idiots. There is a line, and it ends here — with layoffs, the destruction of our planet and our economy and our society, and a rising tide of human misery whose source nobody really knows, and so we don’t know who to blame, or for what.

If our economy actually worked as a true meritocracy — where we didn’t have companies run by people who don’t use their products or understand how they’re made, and who hire similarly-specious people — these people would collapse under the pressure of having to know their ass from their earhole.

Yet none of this would be possible without the enabling layers, and those layers are teeming with both Business Idiots and those unfortunate enough to have learned from them. The tech media has enabled every single bubble, without exception, accepting every single narrative fed to them by VCs and startups, with even critical reporters still accepting the lunacy of a company like OpenAI just because everybody else does too.

Let’s be honest, when you remove all the money, our current tech industry is a disgrace.

Our economy is held up by NVIDIA, a company that makes most of its money selling GPUs to other companies, primarily so that they can start losing money selling software that might eventually make them money, just not today. NVIDIA is defined by massive peaks and valleys, as it jumps on trends and bandwagons at the right time, despite knowing that these bandwagons always come to an abrupt halt.

Among the others is Tesla, a meme-stock car company with a deteriorating brand and a chief executive famous for his divorces from both reality and multiple women, along with a flagrant racism that may cost the company its life. A company that we are watching die in real time, with a stagnant line-up and actual fucking competition from companies that are spending big on innovation.

In Europe and elsewhere, BYD is eating Tesla’s lunch, offering better products for half the price — and far less stigma. And this is just the first big Chinese automotive brand to go global. Others — like Chery — are enjoying rapid growth outside of China, because these cars are actually quite good and affordable, even when you factor in the impact of things like tariffs. 

Hey, remember when Tesla fired all the people in its charging network — despite that being one of the most profitable and valuable parts of the business? And then hired them back because it turns out they were actually useful?

This is a good example of managerial alienation — decisions made by non-workers who don’t understand their customers, their businesses, or the work their employees do. And let’s not forget about the Cybertruck, a monstrosity both in how it looks and how it’s sold, one that’s illegal in the majority of developed countries because it is a death-trap for drivers and pedestrians alike. Oh, and that nobody actually wants, with Tesla sitting on a quarter’s worth of inventory that it can’t sell.

Elsewhere there’s Meta, a collapsing social network with 99% of its revenue based on advertising to an increasingly-aged population, and a monopoly so flagrantly abusive in its contempt for its customers that it’s at times difficult to call Instagram or Facebook social networks.

Mark Zuckerberg had to admit to the Senate Judiciary Committee that people don’t use Facebook as a social network anymore. The reason why is because the platform is so fucking rotten, run by a company alienated from its user base, its decrepit product actively hostile to anybody trying to use it. 

And, more fundamentally, what’s the point of posting on Facebook if your friends won’t see it, because Meta’s algorithm decided it wouldn’t drive engagement? 

Meta is a monument to disconnection, a company that runs counter to its own mission to connect people, run by Mark Zuckerberg, a man who hasn’t had a good idea since he stole it from the Winklevoss brothers. The solution to all that ails him? Adding generative AI to every part of Meta, which…uh…was meant to do something other than burn $72 billion in capital expenditures in 2025, right? It isn’t clear what was meant to happen, but the Wall Street Journal reports that Meta’s AI chatbots are, and I quote, “empowered to engage in ‘romantic role-play’ that can turn explicit” — even with children. In a civil society, Zuckerberg would be ousted immediately for creating a pedophilic chatbot — instead, four days after the story ran, everyone cheered Meta’s better-than-expected quarterly earnings.

In Redmond, Microsoft sits atop multiple monopolies, using tariffs as a means to juice flailing Xbox revenue as it invests billions of dollars in OpenAI so that OpenAI can spend billions of dollars on cloud compute, losing billions more in the process, requiring Microsoft to invest further money to keep it alive. All because Microsoft wanted generative AI in Bing. What a fucking waste!

All while raising the cost of its office suite — which it’s only able to hold a monopoly on because it acted so underhandedly in the 1990s.

Amazon lumbers listlessly through life, its giant labor-abuse machine shipping things overnight at whatever cost necessary to crush the life out of any other source of commerce, its cloud services and storage arm unsure of who to copy next. Is it Microsoft? Is it Google? Who knows! But one analyst believes it’s making $5 billion in revenue from AI in 2025 — and spending $105 billion in capital expenditures. There are slot machines with a better ROI than this shit.

Again, it’s a company that’s totally exploitative of its customers, no longer acting as a platform that helps people find the shit they need, but as one that directs them to the products that pay the most for prime advertising real estate, no matter whether they are good or safe.

Let’s be clear: Amazon’s recklessness will kill someone, if it hasn’t already.   

Then there’s the worst of them — Google. Most famous for its namesake, a search engine that it has juiced as hard as possible, and will continue to juice before the inevitable antitrust sentencing that would rob it of its power, along with the severance of its advertising monopoly. But don’t worry, Google also has a generative AI thing, for some reason, and no, you don’t have a choice about using it, because it’s now stapled onto Google Search and Google Assistant.

At no point do any of these companies seem to be focused on making our lives better, or selling us any kind of real future. They exist to maintain the status quo, where cloud computing allows them to retain their various fiefdoms.

They’re alienated from people.

They’re alienated from workers.

They’re alienated from their customers.

They’re alienated from the world.

They’re deeply antisocial and misanthropic — as demonstrated by Zuck’s moronic AI social network comments.

And AI is a symptom of a reckoning of this stupidity and hubris.

They cut, and cut, and stagnated. Their hope is a product that will be adopted by billions of imaginary customers and companies, and will allow them to cut further without becoming just a PO Box and a domain name.

We have to recognize that what we’re seeing now with generative AI isn’t a fluke or a bug, but a feature of a system that’s rapacious and short-term by its very nature, one that doesn’t define value as we do, because “value” gets defined by a faceless shareholder as “growth.” And this system can only exist with the contribution of the Business Idiot. They are the vanguard — the foot soldiers — of this system, and a key reason why everything is so terrible all the time, and why nothing seems to be getting better.

Breaking from that status-quo would require a level of bravery that they don’t have — and perhaps isn’t possible in the current economic system. 

These people are powerful, and have big platforms. They’re people like Derek Thompson, famed co-author of the “abundance” agenda, who celebrates the idea of a fictitious version of ChatGPT that can entirely plan and execute a 5-year-old’s birthday party, or his co-author Ezra Klein, who, while recording a podcast where his researchers likely listened, talked proudly about replacing their work with OpenAI’s broken Deep Research product, because anything that can be outsourced must be, and all research is “looking at stuff that is relevant.”

And really, that’s the most grotesque part about Business Idiots. They see every part of our lives as a series of inputs and outputs. They boast about how many books they’ve read rather than the content of said books, about how many hours they work (even though they never, ever work that many), about how high a level they’ve reached in a video game they clearly don’t play, about the money they’ve raised and the scale they’ve raised it at, and about how expensive and fancy their kitchen gadgets are. Everything is dominance, acquisition, growth and possession over any lived experience, because theirs is a world where the journey doesn’t matter, their own journeys riddled with privilege and the persecution of others in the pursuit of success.

These people don’t want to automate work, they want to automate existence. They fantasize about hitting a button and something happening, because experiencing — living! — is beneath them, or at least your lives and your wants and your joy are. They don’t want to plan their kids’ birthday parties. They don’t want to research things. They don’t value culture or art or beauty. They want to skip to the end, hit fast-forward on anything, because human struggle is for the poor or unworthy. 

When you are steeped in privilege and/or have earned everything through a mixture of stolen labor and office pantomime, the idea of “effort” is always negative. The process of creation — of affection, of love, of kindness, of using time not just for an action or output — is disgusting to the Business Idiot, because those are times they could be focused on themselves, or some nebulous self-serving “vision” that, when stripped back to its fundamental truth, is either moronic or malevolent. They don’t realize that you hire a worker for the worker, not just the work, which is why they don’t see why it’s so insulting to outsource their interactions with human beings.

You’ll notice these people never bring up examples of automating actual work — the mind-numbing grunt work that we all face in the workplace — because they neither know nor care what that is. Their “problems” are the things that frustrate them, like dealing with other people, or existing outside the gilded circles of socialite fucks or plutocrats, or just the inevitable facets of working life, like reading an email. Your son’s birthday party or a conflict with a friend can, indeed, be stressful, but these are not problems to be automated away. They are the struggles that make us human, the things that make us grow, the things that make us who we are, which isn’t a problem for anybody other than somebody who doesn’t believe they need to change in any way. It’s a mindset both powerful and powerless at the same time — a nihilistic way of seeing our lives as a collection of events we accept or dismiss like a system prompt, the desperate pursuit of such efficient living that you barely feel a thing until you die.

I’ve spent years writing about these people without giving them a name, because categorizing anything is difficult. I can’t tell you how long it took for me to synthesize the Rot Economy from the broader trends I saw in tech and elsewhere, how long it took for me to thread that particular needle, to identify the various threads that unified events that are otherwise separate and distinct.

I am but one person. Everything you’ve read in my newsletter to this point has been something I’ve had to learn. Building an argument and turning it into words — often at the same time — that other people will read doesn’t come naturally to anyone. It’s something you have to deliberately work at.  It’s imperfect. There are typos. These newsletters increase in length and breadth and have so many links, and I will never, ever change my process, because part of said process is learning, relearning, processing, getting pissed off, writing, rewriting, and so on and so forth. 

This process makes what I do possible, and the idea of having someone automate it disgusts me, not because I’m special or important, but because my work is not the result of me reading a bunch of links or writing a bunch of words. This piece is not just 13,000 words long — it’s the result of the 800,000 or more words I wrote before it, the hundreds of stories I’ve read in the past, the hours of conversations with friends and editors, years of accumulating knowledge and, yes, growing with the work itself. 

This is not something that you create through a summation of content vomited by an AI, but the chaotic histories of a human being mashed against the challenge of trying to process it. Anyone who believes otherwise is a fucking moron — or, better put, just another Business Idiot.

Reality Check

2025-04-29 00:45:12

Like this newsletter? Why not listen to the podcast version on Better Offline? Part 1 is out now (here're other links), and Part 2 comes out Friday May 2nd!


I'm sick and god-damn tired of this! I have written tens of thousands of words about this and still, to this day, people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside, yet the tech media continues to tell me to get my swimming trunks and take a nice long dip in the pool.

I apologize, this is going to be a little less reserved than usual.

I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing. Since I put out OpenAI Is A Systemic Risk To The Tech Industry, I've heard nothing in response, as was the case with How Does OpenAI Survive? and OpenAI Is A Bad Business.

There seems to be little concern — or belief — that there is any kind of risk at the heart of OpenAI, a company that spent $9 billion in 2024 to lose $5 billion. While I'd love to add a "because..." here (if only because it's important to be intellectually honest and represent views that directly contrast my own, even if I do so in a somewhat sardonic fashion), nobody seems to actually have a cogent response to how they right this ship, other than Hard Forker Casey Newton throwing a full-scale tantrum on a podcast and saying I'm wrong because "inference costs are coming down."

Newton is a nakedly-captured booster that ran an infographic from Anthropic a few weeks ago the likes of which I haven't seen since 2013, but he's far from the only one with a flimsy attachment to reality.

The Information ran a piece a couple of weeks ago that made me furious, which was a surprise because — for the most part — their coverage of tech, and especially AI, has been some of the best around, and they generally avoid the temptation to be shills for shaky and unsustainable tech companies. 

The story claimed that OpenAI was "forecasting revenue topping $125 billion in 2029" based on "selling agents" and "monetizing free users...as a driver to higher revenue." The piece, reported out based on things "...told [to] some potential and current investors," takes great pains to accept literally everything that OpenAI says as perfectly reasonable, if not gospel, even if said things make absolutely no sense.

According to The Information's reporting, OpenAI expects "agents" and "new products" to contribute tens of billions of dollars of revenue, both in the near-term (somehow contributing $3 billion in revenue this year, which I'll get to in a little bit) and in the long-term, with an egregious $25 billion in revenue in 2029 projected to come from "new products." 

If you're wondering what those new products might be, I am too, because The Information doesn't seem to know, and instead of saying "OpenAI has no idea what the fuck they're talking about and is just saying stuff," the outlet chooses instead to publish things with the kind of empty optimism that's indistinguishable from GPT-generated LinkedIn posts.

Check out this fucking chart.

The Information — OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents, New Products Gain

I want to be really, really clear: we are nearly in May 2025, and I see no evidence that OpenAI even has a marketable agent product, let alone one that will make it three billion god damn dollars in the next six or seven months.

For context, that’s triple the revenue OpenAI reportedly made from selling access to its models via its APIs — essentially allowing third-party companies to use GPT in their apps — in the entirety of 2024. And those APIs and models actually exist in a meaningful sense, as opposed to whatever the fuck OpenAI’s half-baked Agents stuff is. 

In fact, no, no, I'm not going to be mean, I'm going to explain exactly what The Information is reporting in an objective way, because writing it out really shows how silly it all sounds. I am going to write "they believe" a lot because I must be clear how stupid this is:

  • According to The Information's reporting, they believe that OpenAI will make $3 billion in 2025 from selling access to its agents. This appears to come from SoftBank, which has said it will buy $3 billion worth of OpenAI products annually.
  • Earlier this year, we got a bit of extra information about how SoftBank would use those products. It plans to create a system called Cristal Intelligence that will be a kind-of general purpose AI agent platform for big enterprises. The exact specifics of what it does are vague (shocker, I know), but SoftBank intends to use the technology internally, across its various portfolio companies, as well as market it to other large enterprise companies in Japan.
  • I also want to add that The Information can't keep its story straight on this issue. Back in February, they reported that OpenAI would make $3 billion in revenue only from agents, with a big, beautiful chart that said $3 billion would come from “it," only to add that “it” would be SoftBank "...[using] OpenAI's products across its companies." 
  • Based on these numbers, it seems like SoftBank will be the only customer for OpenAI’s agents. While this won’t be the case — and isn’t, because it excludes anyone willing to pay a few bucks to test it out — it nonetheless doesn’t signal good things for Agents as a mass market product.  
    • Agents do not exist as a product that can be sold at that scale. The Information's own reporting from last week highlighted how OpenAI’s "Operator" agent "struggle[d] with comparison shopping on financial products," and how Operator and other agents are "...tripped by pop-ups or logins, as well as prompts asking for email addresses and phone numbers for marketing purposes," which I think accurately describes most of the internet.
    • To summarize, The Information is saying that the above product will make OpenAI three billion dollars by the end of the year.
  • According to The Information's reporting, they believe that OpenAI will basically double revenue every single year for the next four years: $13 billion in revenue in 2025, more than doubling that to $29 billion in 2026, nearly doubling that to $54 billion in 2027, growing again to $86 billion in 2028, and eventually hitting $125 billion in 2029 (I sanity-check those multiples in the sketch just after this list).
    • Said revenue estimates, as of 2026, include billions of dollars of "new products" that include "free user monetization."
      • If you are wondering what that means, I have no idea. The Information does not explain. They do, however, say that "OpenAI won’t start generating much revenue from free users and other products until next year. In 2029, however, it projects revenue from free users and other products will reach $25 billion, or one-fifth of all revenue," and said that "shopping is another potential avenue."
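If you want to see just how aggressive those numbers are, here's a quick sanity check in Python. Nothing fancy, just the article's own figures and some division:

```python
# The Information's reported OpenAI revenue projections, in billions.
revenue = {2025: 13, 2026: 29, 2027: 54, 2028: 86, 2029: 125}

years = sorted(revenue)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: {revenue[curr] / revenue[prev]:.2f}x")

# Output:
# 2025 -> 2026: 2.23x
# 2026 -> 2027: 1.86x
# 2027 -> 2028: 1.59x
# 2028 -> 2029: 1.45x
```

The multiples decline, sure, but every single one of these years assumes tens of billions of dollars of new revenue arriving from products that largely don't exist yet.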

I cannot express my disgust about how willing publications are to blindly publish projections like these, especially when they're so utterly ridiculous. Check out this quote:

OpenAI has already begun experimenting with launching software features for shopping. Starting in January, some users can access web-browsing agent Operator as part of their pro ChatGPT subscription tier to order groceries from Instacart and make restaurant reservations on OpenTable.

So you're saying this experimental software, launched to an indeterminate number of people and barely working, is going to make OpenAI $13 billion in 2025, and $29 billion in 2026, and later down the line $125 billion in 2029? How? How?

What fucking universe are we all living in? There's no proof that OpenAI can do this other than the fact that it has a lot of users and venture capital! 

In fact, I think we have reason to worry about whether OpenAI even makes its current projections. In my last piece, I wrote that Bloomberg had estimated that OpenAI would triple revenue to $12.7 billion in 2025, and that, based on its current subscriber base, OpenAI would have to effectively double its current subscription revenue and massively increase its API revenue to hit that target.

These projections rely on one entity (SoftBank) spending $3 billion on OpenAI's services, meaning that it’d make enough API calls to generate more revenue than OpenAI made in subscriptions in the entirety of 2024, and something else that I can only describe as “an act of God.”

That, I admit, assumes that SoftBank’s spending commitment is based on usage, and not a flat fee (where SoftBank pays $3bn and gets a set — or infinite — level of access). Assuming it’s the former, I’d be stunned if SoftBank’s consumption hits $3bn this year, even with the massive cost of the reasoning models that Cristal Intelligence will be based on. SoftBank only announced its deal with OpenAI in February. 

Cristal Intelligence, if it works — and that is possibly the most load-bearing “if” of all time — will be a massive, complicated, ambitious product. Details are vague, but from what I understand, SoftBank wants to create an AI that handles the infinitely varied tasks that knowledge workers perform on a daily basis. 

To be clear, OpenAI’s agents cannot consistently do, well… anything.

What I believe is happening is that reporters are taking OpenAI's rapid growth in revenue from 2023 to 2024 (from tens of millions a month at the start of 2023 to $300 million in August 2024) to mean that the company will always effectively double or triple revenue every single year forever, with their evidence being "OpenAI has projected this will be the case."

It's bullshit! I'm sorry! As I wrote before, OpenAI effectively is the generative AI industry, and nothing about the rest of the generative AI industry suggests that the revenue exists to sustain these ridiculous, obscene and fantastical projections. Believing this — and yes, reporting it objectively is both endorsing and believing these numbers — is engaging in childlike logic, where you take one event (OpenAI's revenue grew 1700% from 2023 to 2024! Wow!) to mean another will take place (OpenAI will continue to double revenue literally every single year! Wow!), consciously ignoring difficult questions such as "how?" and "what's the total addressable market of Large Language Model subscriptions, exactly?" and "how does this company even survive when it 'expects the costs of inference to triple this year to $6 billion alone'?"

Wait, wait, sorry, I need to be really clear with that last one, this is a direct quote from The Information:

The company also expects growth in inference costs—the costs of running AI products such as ChatGPT and underlying models—to moderate over the next half-decade. Those costs will triple this year, to about $6 billion and rise to nearly $47 billion in 2030. Still, the annual growth rate will fall to about 30% then.

Are you fucking kidding me?

Six billion fucking dollars for inference alone? Hey Casey, I thought those costs were coming down! Casey, are you there? Casey? Casey?????

Anyway, that's not great at all! That's really bad! The Information reports that OpenAI will make "about $8 billion" from subscriptions to ChatGPT in 2025, meaning that 75% of OpenAI's largest revenue source is eaten up by the price to provide it. This is meant to be the cheaper part! This is the one fucking thing people say is meant to come down in price!
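If you want to check my math here, a minimal sketch using only The Information's own figures. Note that the "about 30%" in the quote is the year-over-year growth rate by 2030; the implied average over the whole period is much higher:

```python
# Back-of-the-envelope math on The Information's figures, in billions.
inference_2025 = 6.0        # inference costs, tripling this year
inference_2030 = 47.0       # projected inference costs in 2030
subscriptions_2025 = 8.0    # projected ChatGPT subscription revenue

# Inference eats 75% of OpenAI's largest revenue source.
print(f"{inference_2025 / subscriptions_2025:.0%}")

# Implied average annual growth in inference costs, 2025 -> 2030.
print(f"{(inference_2030 / inference_2025) ** (1 / 5) - 1:.0%}")  # ~51%
```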

Are we living in different dimensions? Are there large parts of the tech media that have gas leaks in their offices? What am I missing? Tell me what I'm missing!

Nerr, Ed, you haven't talked to the people building these things, you don't know what you're- shut the fuck up! Shut up! I am sick and tired of people (like Casey!) suggesting that what's missing from my analysis is to "interview people who work at these companies and understand how this technology works." What would these people say to me, exactly? What response would they have to these numbers?

Forgive Me I'm Going To Be A Little Rude

In fact, you know what, let me just sit down and go through the critiques one-by-one. Some of you are going to say I'm being rude to these people and it weakens my analysis, to which I respond "kiss my entire ass." I can beat you to death with the truth while making fun of you for believing stupid things.

  • The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry! 
    • But DeepSeek… No, my sweet idiot child. DeepSeek is not OpenAI, and OpenAI’s latest models only get more expensive as time drags on. GPT-4.5 costs $75 per million input tokens, and $150 per million output tokens. And at the risk of repeating myself, OpenAI is effectively the generative AI industry — at least, for the world outside China. 
  • This is the company at its growth stage, it can simply "hit the button" and it'll all be profitable: You have the mind of a child! If this was the case, why would both Anthropic and OpenAI be losing so much money? Why are none of the hyperscalers making profit on AI? Why does nobody want to talk about the underlying economics?
  • These are the early days of AI: Wrong! We have the entire tech industry and more money than has ever been invested into anything piled into generative AI and the result has been utterly mediocre. Nobody's making money but NVIDIA!
  • They're already showing signs that it'll be powerful: No they're not! If they were, there'd be people doing crazy, impressive things with this stuff!
  • But Ed, really, it's the early days, it was just like this in the early days of the internet: No it wasn't! Read Jim Covello of Goldman Sachs' note from last year, the early days of the internet were absolutely nothing like this-
    • Smartphones! YES! Got you, Ed! Smartphones! People doubted those too- I am going to drown you in an icy lake! Covello's note also included an entire thing about how smartphones were fully telegraphed to analysts in advance, with "hundreds of presentations" that accurately fit how smartphones rolled out. No such roadmap exists for AI!
  • Heh, heh, Ed, you're so boned. Check out this article from Newsweek in 1995 where a guy says that the internet won't be a big business. This somehow proves that AI is going to be big, due to the fact one guy was wrong once: Motherfucker, have you read that piece? He basically says that the internet, at that time, was pretty limited, and yes, he conflated that with the idea that it wouldn't be big in the future. Clifford Stoll's piece also — as Michael Hiltzik wrote for the LA Times — was alarmingly accurate about misinformation and sleazy companies selling computerized replacements for education.
    • In any case, one guy saying that the internet won't be big doesn't mean a fucking thing about generative AI and you are a simpleton if you think it does. One guy being wrong in some way is not a response to my work. I will crush you like a bug.
    • Stoll's analysis also isn't based on hundreds of hours of research and endless reporting. Mine is! I will grab you from the ceiling like the Wallmaster from Zelda and you will never be heard from again.
  • OpenAI and Anthropic are research entities not businesses, they aren't focused on profit: Okay so are they just going to burn money forever? No, really, is that the case? Or do you think they hit the "be profitable" button sometime?

[Record Scratch] Wait a second...

  • OpenAI has as many as 800 million weekly active users! That's proof of adoption! Hey, woah, I get that you're really horny about this number, but something don't make no sense here! On March 31 2025, OpenAI said that it had "...500 million people who use ChatGPT every week." Two weeks later, Sam Altman claimed that "something like 10% of the world uses our systems a lot," which the media took to mean that ChatGPT has 800 million weekly active users.
  • Here are the three ways to interpret this, and you tell me which one sounds real:
    • OpenAI's userbase increased by 300 million weekly active users in two weeks.
    • OpenAI understated its userbase by 300 million users in its funding announcement on OpenAI dot com.
    • Sam Altman fucking lied.

I get that some members of the media have a weird attachment to this nasty little man, but have any of you ever considered that he just fucking says things, knowing you will print them with the kindest possible interpretation?

Sam Altman is a liar! He lies! He's lied before and he'll lie again!

But wait, Ed! Google says it has 350 million monthly active users on Gemini! Eat shit, Zitron! No, you eat shit! Yes, Google Gemini has 350 million monthly active users.

And that’s because Google started replacing Google Assistant with Gemini in early March! You are being had! You are being swindled! If Google replaced Google Search with Google Gemini, it would have billions of monthly active users! 

Anyway, back to the critiques...

  • OpenAI having hundreds of millions of free users, each losing it money, is proof that the free version of ChatGPT is popular, largely because the entirety of the media has written about AI nonstop for two straight years and mentioned ChatGPT every single fucking time. Yes, there's some degree of marketing, of partnerships, of word of mouth, and of genuine utility here, but remove the non-stop free media campaign and ChatGPT would've petered out by now, along with this stupid fucking bubble.
    • But Ed it's proof of something right- yeah! It's proof that something is broken in society. Generative AI has never had the kind of business returns or utility that actually underpins a meaningful movement, but it has enough to make people give it a try.

You know what? Let's talk about why this bubble actually inflated!

So, let's start simple: the term "artificial intelligence" is bastardized to the point it effectively means nothing and everything at the same time. When people hear "AI" they think of an autonomous intelligence that can do things for them, and generative AI can "do things for you" like generate an image or text "from a simple prompt." As a result, it's easy to manipulate people who don't know much about tech into believing that this will naturally progress from "it can create a bunch of text for me that I have to write for my job just by me typing in a prompt" to "it can do my job for me just by typing in a prompt."

Basically everything you read about "the future of AI" extrapolates generative AI's ability to sort-of generate something a human would make into an ability to do whatever a human can do, all because tech has, in the past, been bad at the beginning and linearly improved as time drags on. 

This illogical thinking underpins the entire generative AI boom, because we've found out exactly how many people do not know what the fuck they're talking about and are willing to believe the last semi-intelligent person they talked to. Generative AI is a remarkable con — a just-good-enough simulacrum of human expression to get it past the gatekeepers in finance and the media, knowing that neither will apply a second gear of critical thinking beyond "huh guess we're doing AI now."

The expectation that generative AI will transform into something much, much more powerful requires you to first ignore the existing limitations, believing it to be more capable than it is, and also ignore the fact that these models have yet to show meaningful improvement over the past few years. They still hallucinate. They’re still ungodly expensive to run. They’re still unreliable. And they still don’t do much.  

Worse still, ChatGPT's growth has galvanized these people into believing that this is a legitimate, meaningful movement, rather than the most successful PR campaign of all time.  

Think of it like this: if almost every single media outlet talked about one thing (generative AI), and that one thing was available from one company (OpenAI), wouldn't it look exactly how things look today? You've got OpenAI with hundreds of millions of monthly active users, and then a bunch of other companies — including big tech firms with multi-trillion dollar market caps — with somewhere between 10 and 69 million monthly active users.

What we're seeing is one company taking most of the users and money available, and doing so because the media fucking helped it. People aren't amazed by ChatGPT — they're curious! They're curious about why the media won't shut up about it!

This Bubble Was Also Inflated By The Failure of Google Search

Everybody I talk to that uses ChatGPT regularly uses it as either a way to generate shitty limericks or as a replacement for Google search, a product that Google has deliberately made worse as a means of increasing profits.

ChatGPT is, if I'm honest, better at processing search strings than Google Search, which is not so much a sign that ChatGPT is good at something as it is that Google has stopped innovating in any meaningful way. Over time, Google Search should've become something that was able to interpret your searches into the perfect result, which would require the company to improve how it processes your requests. Instead, Google Search has become dramatically worse, mostly because the company's incentives changed from "help people find something on the web" to "funnel as much traffic and show as many ad impressions as possible on Google.com."

By this point, Google Search should have been more magical, more capable of taking a dimwitted question and turning it into a great answer, with said answer being a result on the internet. Note that nothing I'm writing here is actually about generating a result — it's about processing a user's query and presenting an answer, the very foundation of computing and the thing that Google, at one point, was the best in the world at doing. Thanks to Prabhakar Raghavan, the former head of ads who led a coup to become head of search, Google was pulled away from being a meaningful source of information.

And I'd argue that ChatGPT filled that void by doing the thing that people wanted Google Search to do: answer a question, even if the user isn't really sure how to ask it. Google Search has become clunky and obfuscatory, putting the burden of using the service on the user rather than helping fill the gap between query and answer in any meaningful way. Google's AI summaries don't even try to do what ChatGPT does — they generate summaries based on search results and say "okay man, uhh, is this what you want?" 

One note on Google’s AI summaries: They’re designed to answer a question, rather than provide a right answer. That’s a distinction that needs to be made, because it speaks to the underlying utility of this product. 

One good illustration of this came earlier this week, when someone noticed that you could ask Google to explain the meaning of a completely made-up phrase, and it would dutifully obey. “Two dry frogs in a situation,” Google said, referred to a group of people in an awkward or difficult social situation. 

"Not every insect has a mortgage," Google claimed, "is a humorous way of explaining that not everything is as it seems." My favorite, "big winky on the skillet bowl," is apparently a slang term that refers to a piece of bread with an egg in the middle. 

Funny? Sure. But is it useful? No. 

With all its data and all its talent, Google has put the laziest version of a Large Language Model on top of a questionably-functional search product as a means of impressing shareholders.

None of this is to say that ChatGPT is good, just that it is better at understanding a user's request than Google Search.

Yes, I fundamentally believe that 500 million people a week could be using ChatGPT as some sort of search replacement, and no, I do not believe that's a functional business model, in part because if it was, ChatGPT would've been a functional business. 

That, and it appears that Google's ability to turn search into such a big business rested on its monopoly over search, search advertising, and the entire online ads industry. In a truly competitive market, where it wasn't allowed to be vertically integrated with the entire digital advertising apparatus of the web, it would likely be making much less revenue per user. And that's bad if your Google replacement costs many, many times more than Google to run. 

As an aside: if you're wondering, no, OpenAI cannot "just create a Google Search competitor." SearchGPT, run at Google's scale, would be significantly more expensive than Google Search — both infrastructurally and in the cost of revenue, with OpenAI forced to build a massive advertising arm that currently doesn't exist at the company.

People love the ChatGPT interface — the box where they can type one thing and get another thing out — because it resembles how everybody has always wanted Google Search to work. Does it actually work? Who knows. But people feel like they're getting more out of it.

Let's Talk About AGI Really Quick

This newsletter has been a break from the extremely deep and onerous analysis I've been on for the last few months, in part because I needed to have a little fun writing.

It also comes from a place of frustration. None of this has ever felt substantive or real because the actual things that you can do with generative AI never seem to come close to the things that people like Sam Altman and Dario Amodei seem to be promising, nor do they come close to the bullshit that people like Casey Newton and Kevin Roose are peddling. None of this ever resembled "artificial general intelligence," and if I'm honest, very little of it seems to even suggest it's a functional industry.

When cynical plants like Roose bumble around asking theoretical questions such as "do you think that there is a 50% chance or greater that AGI, defined as an AI system that outperforms human experts at virtually all cognitive tasks, will be built before 2030," we should all be terrified, not of AGI, but that the lead tech columnist at the New York Times appears to have an undiagnosed concussion. Roose's logic (as with Newton's) is based on the idea that he's talked to a bunch of people that say "yeah dude AGI is right around the corner" rather than any kind of proof or tangible evidence, just "the curve is going up."

Roose’s most egregious example of this company-forward credulousness came last week, when he published a thinly-veiled puff piece about what to do if AI models become conscious in the near future. He interviewed two people — both employed by Anthropic, with one holding the genuinely hilarious job description of “AI welfare researcher” — who said batshit things like “there’s only a small chance (maybe 15 percent or so) that Claude or another current A.I. system is conscious” and “It seems to me that if you find yourself in the situation of bringing some new class of being into existence… then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences.”

What makes this so appalling is that Roose acknowledges that this shit is seen by most level-headed people as nothing less than utter fantasy. He describes the concept of AI consciousness as “a taboo subject” and that many critics will see this as “crazy talk,” but doesn’t bother to speak to any actual critics. He does, however, speculate on the motives of said critics, saying that “they might object to an A.I. company’s studying consciousness in the first place, because it might create incentives to train their systems to act more sentient than they actually are.”

Yeah Kevin, wouldn’t it be terrible if a company somehow convinced someone that their AI was more powerful than it was? Also, do you bark at the mirror every time you walk past it because you think you see another guy?

Nothing about anything that Anthropic or OpenAI is building or shipping suggests we are anywhere near any kind of autonomous computing. They've used the concept of "AI safety" — and now, AI welfare — as a marketing term to convince people that their expensive, wasteful software will somehow become conscious because they're having discussions about what to do if it does so, and anyone — literally any reporter — accepting this at face value is doing their readers a disservice and embarrassing themselves in the process.

If AI safety advocates cared about, say, safety or AI, they'd have cared about the environmental impact, or the fact these models train using stolen material, or the fact that if these models actually delivered on their promises, it would deliver a shock to the labor market that would meaningfully hurt millions — if not billions — of people, and we don’t have anywhere near the social safety net to support them. 

These companies don't care about your safety and they don't have any way to get to AGI. They are full of shit and it's time to start being honest that you don't have any proof they will do anything they say they will.

Oh, By The Way, The Bubble Might Be Bursting

Hey, remember in August of last year when I talked about the pale horses of the AIpocalypse? One of the major warning signs that the bubble was bursting was big tech firms reducing their capital expenditures, a call I've made before, with a little more clarity, on April 4 2024:

While I hope I'm wrong, the calamity I fear is one where the massive over-investment in data centers is met with a lack of meaningful growth or profit, leading to the markets turning on the major cloud players that staked their future on unproven generative AI. If businesses don't adopt AI at scale — not experimentally, but at the core of their operations — the revenue is simply not there to sustain the hype, and once the market turns, it will turn hard, demanding efficiency and cutbacks that will lead to tens of thousands of job cuts.

We're about to find out if I'm right.

Last week, Yahoo Finance reported that analyst Josh Beck said that Amazon's generative AI revenue for Amazon Web Services would be $5 billion, a remarkably small sum that is A) not profit and B) a drop in the bucket compared to Amazon's projected $105 billion in capital expenditures in 2025, its $78.2 billion in 2024, or its $48.4 billion in 2023.

Is That Really It? Are you kidding me? Amazon will only make $5 billion from AI in 2025? What?

5 billion dollars? Five billion god damn dollars? Are you fucking kidding me? You'd make more money auctioning dogs! This is a disgrace! And if you're wondering, yes! All of this is for AI:

CEO Andy Jassy said in February that the vast majority of this year’s $100 billion in capital investments from the tech giant will go toward building out artificial intelligence capacity for its cloud segment, Amazon Web Services (AWS).

Well shit, I bet investors are gonna love this! Better save some money, Andy!

What's that? You already did? How?

Oh, shit! A report from Wells Fargo analysts (called "Data Centers: AWS Goes on Pause") says that Amazon has "paused a portion of its leasing discussions on the colocation side...[and while] it's not clear the magnitude of the pause...the positioning is similar to what [analysts have] heard recently from Microsoft, [that] they are digesting aggressive recent lease-up deals...pulling back from a pipeline of LOIs or SOQs."

Some asshole is going to say "LOIs and SOQs aren't a big deal," but they are. I wrote about it here.

"Digesting" in this case refers to when hyperscalers sit with their current capacity for a minute, and Wells Fargo adds that these periods typically last 6-12 months, though can be much shorter. It's not obvious how much capacity Amazon is walking away from, but they are walking away from capacity. It's happening.

But what if it wasn't just Amazon? Friend of the newsletter (read: people I email occasionally asking for a PDF) TD Cowen put out another report last week that, while titled in a way that suggested there wasn't a pullback, actually said there was.

Let's take a look at one damning quote:

...relative to the hyperscale demand backdrop at PTC, hyperscale demand has moderated a bit (driven by the Microsoft pullback and to a lesser extent Amazon, discussed below), particularly in Europe, 2) there has been a broader moderation in the urgency and speed with which the hyperscalers are looking to take down capacity, and 3) the number of large deals (i.e. +400MW deals) in the market appears to have moderated.

In plain English, this means "demand has come down, there's less urgency in building this stuff, and the market is slowing down." Cowen also added that it "...observed a moderation in the exuberance around the outlook for hyperscale demand which characterized the market this time last year." 

Brother, isn't this meant to be the next big thing? We need more exuberance! Not less!

Worse still, Microsoft appears to have pulled back even further, with TD Cowen noting that there has been a "slowdown in demand," and that it saw "very little third-party leasing from Microsoft" this quarter, and, most damningly, and I'll bold this for effect, "these deals in totality suggest Microsoft's run-rate demand has decelerated materially," which for those of you wondering means it’s not getting the fucking demand for generative AI.

Well, at least Meta and Oracle aren't slowing down, right?

Well...

TD Cowen reported that it received "reverse inquiries from industry participants around a potential slowdown in demand from Oracle," leading the analyst to ask around and find that "there had been a NT (near-term) slowdown in decision-making amid organizational changes at Oracle," though it adds this might not mean that Oracle is changing its needs or the speed at which it secures capacity. If you're wondering what else this could mean, you are correct to do so, because "slowing down" traditionally refers to a change in speed.

TD Cowen also adds that Meta has continued demand "albeit with less volume of MW (Megawatt) signings quarter-over-quarter..." then adding that "Meta's data center activity has historically been characterized by short periods of strong activity followed by digestion." In essence, Meta is signing fewer megawatts of compute and has, in the past, followed periods of aggressive buildouts with, well, fewer buildouts.

If I'm Wrong, How Am I Wrong Exactly?

I dunno man, all of this sure seems like the hyperscalers are reducing their capital expenditures at a time when tariffs and economic uncertainty are making investors more critical of revenues. It sure seems like nobody outside of OpenAI is making any real revenue on generative AI, and they're certainly not making a profit.

It also, at this point, is pretty obvious that generative AI isn't going to do much more than it does today. If Amazon is only making $5 billion in revenue from the literal only shiny new thing it has, sold on the world's premier cloud platform, at a time when businesses are hungry and desperate to integrate AI, then there's little chance this suddenly turns into a remarkable revenue-driver.

Amazon made $187.79 billion in revenue in its last reported quarter, and if $5 billion is all it’s making from AI at the very height of the bubble, it heavily suggests that there may not actually be that much money to make, either because it's too expensive to run these services or because these services don't have the kind of total addressable market as the rest of Amazon's offerings.

Microsoft reported that it was making a paltry $13 billion a year — so the equivalent of $3.25 billion a quarter — selling generative AI services and model access. The Information reported that Salesforce's "Agentforce" bullshit isn't even going to boost sales growth in 2025, in part because it’s pitching it as "digital labor that can essentially replace humans for tasks" and it turns out that it doesn't do that very well at all, costs $2 a conversation, and requires paying Salesforce to use its "data cloud" product.

What, if anything, suggests that I'm wrong here? That things have worked out in the past with things like the Internet and smartphones, and so it surely must happen for generative AI and, by extension, OpenAI? That companies like Uber lost money and eventually worked out (see my response here)? That OpenAI is growing fast, and that somehow discounts the fact it burns billions of dollars and does not appear to have any path to making a profit? That agents will suddenly start working and everything will be fine?

It's a fucking joke and I'm tired of it!

Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot. Silicon Valley is dominated by management consultants that no longer know what innovation looks like, tricked by Sam Altman, a savvy con artist who took advantage of tech’s desperation for growth. 

Generative AI is the perfected nihilistic form of tech bubbles — a way for people to spend a lot of money and power on cloud compute because they don’t have anything better to do. Large Language Models are boring, unprofitable cloud software stretched to their limits — both ethically and technologically — as a means of propping up tech’s collapsing growth era, OpenAI’s non-profit mission fattened up to make foie gras for SaaS companies to upsell their clients and cloud compute companies to sell GPUs at an hourly rate. 

The Rot Economy has consumed the tech industry. Every American tech firm has become corrupted by the growth-at-all-costs mindset, and thus they no longer know how to make sustainable businesses that solve real problems, largely because the people that run them haven’t experienced them for decades. 

As a result, none of them were ready for when Sam Altman tricked them into believing he was their savior. 

Generative AI isn’t about helping you or me do things — it’s about making new SKUs, new monthly subscription costs for consumers and enterprises, new ways to convince people to pay more for the things they already use, remade to be slightly different in a way that often ends up being worse. 

Only an industry out of options would choose this bubble, and the punishment for doing so will be grim. I don’t know if you think I’m wrong or not. I don’t know if you think I’m crazy for the way I communicate about this industry. Even if you think I am, think long and hard about why it is you disagree with me, and the consequences of me being wrong. 

There is nothing else after generative AI. There are no other hypergrowth markets left in tech. SaaS companies are out of things to upsell. Google, Microsoft, Amazon and Meta do not have any other ways to continue showing growth, and when the market works that out, there will be hell to pay, hell that will reverberate through the valuations of, at the very least, every public software company, and many of the hardware ones too.

And I fear it'll go much further, too. The longer this bubble inflates, and the longer everybody pretends, the worse the consequences will be.

OpenAI Is A Systemic Risk To The Tech Industry

2025-04-15 00:06:48

Before we go any further: I hate to ask you to do this, but I need your help — I'm up for this year's Webbys for the best business podcast award. I know it's a pain in the ass, but can you sign up and vote for Better Offline? I have never won an award in my life, so help me win this one.


Soundtrack: Mastodon - High Road


I wanted to start this newsletter with a pithy anecdote about chaos, both that caused by Donald Trump's tariffs and the brittle state of the generative AI bubble.

Instead, I am going to write down some questions, and make an attempt to answer them.

How Much Cash Does OpenAI Have?

Last week, OpenAI closed "the largest private tech funding round in history," where it "raised" an astonishing "$40 billion," and the reason that I've put quotation marks around it is that OpenAI has only raised $10 billion of the $40 billion, with the rest arriving by "the end of the year." 

The remaining $30 billion — $20 billion of which will (allegedly) be provided by SoftBank — is partially contingent on OpenAI's conversion from a non-profit to a for-profit by the end of 2025, and if it fails, SoftBank will only give OpenAI a further $20 billion. The round also valued OpenAI at $300 billion.

To put that in context, OpenAI had revenues of $4bn in 2024. This deal values OpenAI at 75 times its revenue. That’s a bigger gulf than Tesla at its peak market cap — a company that was, in fact, worth more than all other legacy car manufacturers combined, despite making far less than them, and shipping a fraction of their vehicles. 

I also want to add that, as of writing this sentence, this money is yet to arrive. SoftBank's filings say that the money will arrive mid-April — and that SoftBank would be borrowing as much as $10 billion to finance the round, with the option to syndicate part of it to other investors. For the sake of argument, I'm going to assume this money actually arrives.

Filings also suggest that "in certain circumstances" the second ($30 billion) tranche could arrive "in early 2026." This isn't great. It also seems that SoftBank's $10 billion commitment is contingent on getting a loan, "...financed through borrowings from Mizuho Bank, Ltd., among other financial institutions."

OpenAI also revealed it now has 20 million paying subscribers and over 500 million weekly active users. If you're wondering why it doesn’t talk about monthly active users, it's because they'd likely be much higher than 500 million, which would reveal exactly how poorly OpenAI converts free ChatGPT users to paying ones, and how few people use ChatGPT in their day-to-day lives.

The Information reported back in January that OpenAI was generating $25 million in revenue a month from its $200-a-month "Pro" subscribers (it still loses money on every one of them), suggesting around 125,000 ChatGPT Pro subscribers. Assuming the other 19,875,000 users are paying $20 a month, that puts its revenue at about $423 million a month, or about $5 billion a year, from ChatGPT subscriptions. 

This is what reporters mean when they say "annualized revenue" by the way — it's literally the monthly revenue multiplied by 12.

Bloomberg reported recently that OpenAI expects its 2025 revenue to "triple" to $12.7 billion this year. Assuming a similar split of revenue to 2024, this would require OpenAI to nearly double its annualized subscription revenue from Q1 2025 (from $5 billion to around $9.27 billion) and nearly quadruple API revenue (from 2024's revenue of $1 billion, which includes Microsoft's 20% payment for access to OpenAI's models, to $3.43 billion).
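Here's that math laid out as a sketch. The only assumption is the one already made above — that every non-Pro paying subscriber is on the $20-a-month plan — which ignores Team and Enterprise pricing, because OpenAI doesn't break those out:

```python
# Reconstructing OpenAI's subscription revenue from reported figures.
pro_monthly_revenue = 25_000_000       # The Information: $25M/month from Pro
pro_subs = pro_monthly_revenue // 200  # $200/month -> 125,000 subscribers

plus_subs = 20_000_000 - pro_subs      # assume the rest pay $20/month
monthly = pro_monthly_revenue + plus_subs * 20
print(f"${monthly / 1e6:.1f}M/month")            # $422.5M/month
print(f"${monthly * 12 / 1e9:.2f}B annualized")  # $5.07B a year

# What Bloomberg's $12.7B 2025 target implies, assuming a 2024-like split:
print(f"{9.27 / 5.07:.2f}x subscription run-rate")  # ~1.83x
print(f"{3.43 / 1.0:.2f}x 2024 API revenue")        # ~3.43x
```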

While these are messy numbers, it's unclear how OpenAI intends to pull this off.

The Information reported in February that it planned to do so by making $3 billion a year selling "agents," with ChatGPT subscriptions ($7.9 billion) and API calls ($1.8 billion) making up the rest. This, of course, is utter bollocks. OpenAI's "agents" can't do even the simplest tasks, and three billion dollars of the $12.7 billion figure appears to be a commitment made by SoftBank to purchase OpenAI's tech for its various subsidiaries and business units. 

Let's lay out the numbers precisely:

  • Incoming monthly revenue: roughly $425 million, give or take.
  • Theoretical revenue from SoftBank: $250 million a month. However, I can find no proof that SoftBank has begun to make these payments or, indeed, that it intends to make them.
  • Liquidity:
    • $10 billion that it is yet to receive from SoftBank and a syndicate of investors including Microsoft, potentially.
    • An indeterminate amount of remaining capital on the $4 billion credit facility provided by multiple banks back in October 2024, raised alongside a funding round that valued the company at $157 billion.
      • As a note, this announcement stated that OpenAI had "access to over $10 billion in liquidity."
    • Based on reports, OpenAI will not have access to the rest of its $40bn funding until "the end of the year," and it's unclear what part of the end of the year.

We can assume, in this case, that OpenAI likely has, in the best case scenario, access to roughly $16 billion in liquidity at any given time. It's reasonable to believe that OpenAI will raise more debt this year, and I'd estimate it does so to the tune of around $5 billion or $6 billion. Without it, I am not sure what it’s going to do.

As a reminder: OpenAI loses money on every single user.

What Are OpenAI's Obligations?

When I wrote "How Does OpenAI Survive?" and "OpenAI Is A Bad Business," I used reported information to explain how this company was, at its core, unsustainable.

Let's refresh our memories.

Compute Costs: at least $13 billion in 2025 with Microsoft alone, and as much as $594 million to CoreWeave.

It seems, from even a cursory glance, that OpenAI's costs are increasing dramatically. The Information reported earlier in the year that OpenAI projects to spend $13 billion on compute with Microsoft alone in 2025, nearly tripling what it spent in total on compute in 2024 ($5 billion).

This suggests that OpenAI's costs are skyrocketing, and that was before the launch of its new image generator which led to multiple complaints from Altman about a lack of available GPUs, leading to OpenAI's CEO saying to expect "stuff to break" and delays in new products. Nevertheless, even if we assume OpenAI factored in the compute increases into its projections, it still expects to pay Microsoft $13 billion for compute this year.

This number, however, doesn't include the $11.9 billion five-year-long compute deal signed with CoreWeave, a deal that was a result of Microsoft declining to pick up the option to buy said compute itself. Payments for this deal, according to The Information, start in October 2025, and assuming that it's evenly paid (the terms of these contracts are generally secret, even in the case of public companies), this would still amount to roughly $2.38 billion a year.

However, for the sake of argument, let's consider the payments are around $198 million a month, though there are scenarios — such as, say, CoreWeave's buildout partner not being able to build the data centers or CoreWeave not having the money to pay to build them — where OpenAI might pay less.
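For the sake of showing my work, the even-payment assumption looks like this (the $11.9 billion figure is the one that matches the annual and monthly numbers above):

```python
# Spreading the CoreWeave deal evenly across its five-year term.
deal = 11.9e9  # total deal value, per The Information
print(f"${deal / 5 / 1e9:.2f}B per year")        # $2.38B
print(f"${deal / 5 / 12 / 1e6:.0f}M per month")  # $198M
# October-December 2025 alone: 3 x $198M, close to the $594M cited above.
print(f"${deal / 5 / 12 * 3 / 1e6:.0f}M for Q4 2025")
```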

To be clear, and I’ll explain in greater detail later, this wouldn’t be a good thing, either. While it would be off the hook for some of its payments, it would also be without the compute that’s essential for it to continue growing, serving existing customers, and building new AI models. Cash and compute are both essential to OpenAI’s survival.  

Stargate: $1 Billion+

OpenAI has dedicated somewhere in the region of $19 billion to the Stargate data center project, along with another $19 billion provided by SoftBank and an indeterminate amount by other providers.

Based on reporting from Bloomberg, OpenAI plans to have 64,000 Blackwell GPUs running "by the end of 2026," or roughly $3.84 billion worth of them. I should also note that Bloomberg said that 16,000 of these chips would be operational by Summer 2025, though it's unclear if that will actually happen.
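A note on where that dollar figure comes from: the reported numbers imply a unit price of roughly $60,000 per Blackwell GPU. That price is my inference from the figures in this piece, not something NVIDIA publishes:

```python
# Implied per-GPU price from two figures cited in this piece.
print(3.84e9 / 64_000)   # Stargate: $3.84B / 64,000 GPUs = $60,000
print(18e9 / 300_000)    # Microsoft's 300,000 GB200s (see below) = $60,000
```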

Though it's unclear who actually pays for what parts of Stargate, it's safe to assume that OpenAI will have to, at the very least, put a billion dollars into a project that is meant to be up and running by the end of 2026, if not more.

As of now, Stargate has exactly one data center under development in Abilene, Texas, and as above, it's unclear how that's going, though a recent piece from The Information reported that it was currently "empty and incomplete," and that if it stays that way, "OpenAI could walk away from the deal, which would cost Oracle billions of dollars." Though the article takes pains to assure the reader that won't be likely, even an inkling of such a possibility is a bad sign.

Business Insider's reporting on the site in Abilene calls it a "$3.4 billion data center development" (as did the press release from site developer Crusoe), though these numbers don't include GPUs, hardware, or the labor necessary to run them. Right now, Crusoe is (according to Business Insider) building "six new data centers, each with a minimum square footage...[which will] join the two it is already constructing for Oracle." Oracle has signed, according to The Information, a 15-year-long lease with Crusoe for its data centers, all of which will be rented to OpenAI.

In any case, OpenAI’s exposure could be much, much higher than the $1bn posited at the start of this section (and I’ll explain in greater depth how I reached that figure at the bottom of this section). If OpenAI has to contribute significantly to the costs associated with building Stargate, it could be on the hook for billions. 

Data Center Dynamics reports that the Abilene site is meant to have 200MW of compute capacity in the first half of 2025, and then as much as 1.2GW by "mid-2026." To give you a sense of total costs for this project, former Microsoft VP of Energy Brian Janous said in January that it costs about $25 million a megawatt (or $25 billion a gigawatt), meaning that the initial capital expenditures for Stargate to spin up its first 200MW data center will be around $5 billion, spiraling to $30 billion for the entire project. 
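The arithmetic, for those keeping score at home:

```python
# Capex at Brian Janous's estimate of ~$25 million per megawatt.
cost_per_mw = 25e6
print(f"${200 * cost_per_mw / 1e9:.0f}B")    # first 200MW: $5B
print(f"${1_200 * cost_per_mw / 1e9:.0f}B")  # full 1.2GW buildout: $30B
```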

Or perhaps even more. The Information has reported that the site, which could be "...potentially one of the world's biggest AI data centers," could cost "$50 billion to $100 billion in the coming years." 

Assuming we stick with the lower end of the cost estimates, it’s likely that OpenAI is on the hook for over $5 billion for the Abilene site, based on the $19 billion it has agreed to contribute to the entire Stargate project, the (often disagreeing) cost projections of the facility, and the contributions of other partners. 

This expenditure won’t come all at once, and will be spread across several years. Still, assuming even the rosiest numbers, it's hard to see how OpenAI doesn't have to pony up $1 billion in 2025, with similar annual payments going forward until its completion, which is likely to take a while, because the development of this site is going to be heavily delayed by tariffs, labor shortages, and Oracle's (as reported by The Information) trust in "scrappy but unproven startups to develop the project."

Other costs: at least $3.5 billion

Based on reporting from The Information last year, OpenAI will spend at least $2.5 billion across salaries, "data" (referring to buying data from other companies), hosting and other cost of sales, and sales and marketing, and then another billion on what infrastructure OpenAI owns.

I expect the latter cost to balloon with OpenAI's investment in physical infrastructure for Stargate.

How Does OpenAI Meet Its Obligations?

OpenAI Could Spend $28 Billion Or More In 2025, and Lose over $14 Billion while having an absolute maximum of $20 billion in liquidity

Based on previous estimates, OpenAI spends about $2.25 to make $1. At that rate, it's likely that OpenAI's costs in its rosiest revenue projections of $12.7 billion are at least $28 billion — meaning that it’s on course to burn at least $14 billion in 2025.
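Here's that sketch. Remember that the $2.25 figure is my estimate from earlier reporting, so treat the output as directional rather than gospel:

```python
# Burn math at ~$2.25 of cost for every $1 of revenue.
revenue = 12.7  # billions, OpenAI's rosiest 2025 projection
costs = revenue * 2.25
print(f"${costs:.1f}B in costs")          # $28.6B
print(f"${costs - revenue:.1f}B burned")  # $15.9B, so "$14B" is generous
```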

Assuming that OpenAI has all of its liquidity from last year (it doesn't, but for sake of argument, let’s pretend it still has the full $10 billion), as well as the $10 billion from SoftBank, it is still unclear how it meets its obligations.

While OpenAI likely has preferential payment structures with all vendors, such as its discounted rates with Microsoft for Azure cloud services, it will still have to pay them, especially in the case of costs related to Stargate, many of which will be up-front costs. In the event that its costs are as severe as reporting suggests, it’s likely the company will find itself needing to raise more capital — whether through equity (or the weird sort-of equity that it issues) or through debt. 

And yes, while OpenAI has some revenue, it comes at a terrible cost, and anything that isn’t committed to paying for salaries and construction fees will likely be immediately funnelled directly into funding the obscene costs behind inference and training models like GPT-4.5 — a "giant expensive model" to run that the company has nevertheless pushed to every user.

Worse still, OpenAI has, while delaying its next model (GPT-5), promised to launch its o3 reasoning model after saying it wouldn't do so, which is strange, because it turns out that o3 is actually way more expensive to run than people thought. 

Reasoning models are almost always more expensive to operate, as they involve the model “checking” its work, which, in turn, requires more calculations and more computation. Still, o3 is ludicrously expensive even for this category, with the Arc Prize Foundation (a non-profit that makes the ARC-AGI test for benchmarking models) estimating that it will cost $30,000 a task.

SoftBank Has To Borrow Money To Meet Its OpenAI and Stargate Obligations, leading to SoftBank's "...financial condition likely deteriorating."

As of right now, SoftBank has committed to the following:

  • $30 billion of OpenAI's current $40 billion funding round, with $10 billion of that arriving in the first tranche (part of which it hopes to syndicate to other investors).
  • Roughly $19 billion toward the Stargate data center project.
  • $3 billion a year in purchases of OpenAI's products across its companies.

SoftBank's exposure to OpenAI is materially harming the company. To quote the Wall Street Journal:

Ratings agency S&P Global said last week that SoftBank’s “financial condition will likely deteriorate” as a result of the OpenAI investment and that its plans to add debt could lead the agency to consider downgrading SoftBank’s ratings. 

While one might argue that SoftBank has a good amount of cash, the Journal also adds that it's somewhat hamstrung in its use as a result of CEO Masayoshi Son's reckless gambles:

SoftBank had a decent buffer of $31 billion of cash as of Dec. 31, but the company has also pledged to hold much of that in reserve to quell worried investors. SoftBank has committed not to borrow more than 25% of the value of all of its holdings, which means it will likely need to sell some of the other parts of its empire to pay for the rest of the OpenAI deal.

Worse still, it seems, as mentioned before, that SoftBank will be financing the entirety of the first $10 billion — or $7.5 billion, assuming it finds investors to syndicate part of the first tranche, and they follow through right until the moment Masayoshi Son hits ‘send’ on the wire transfer.

As a result, SoftBank will likely have to start selling off parts of its valuable holdings in companies like Alibaba and ARM, or, worse still, parts of its ailing investments from its Vision Fund, resulting in a material loss on its underwater deals.

This is an untenable strategy, and I'll explain why.

OpenAI Needs At Least $40 billion A Year To Survive, And Its Costs Are Increasing

While we do not have much transparency into OpenAI's actual day-to-day finances, we can make the educated guess that its costs are increasing based on the amount of capital it’s raising. If OpenAI’s costs were flat, or only mildly increasing, we’d expect to see raises roughly the same size as previous ones. Its $40bn raise is nearly six times the previous funding round. 

Admittedly, multiples like that aren’t particularly unusual. If a company raises $300,000 in a pre-seed round, and $3m in a Series A round, that’s a tenfold increase. But we’re not talking about hundreds of thousands of dollars, or even millions of dollars. We’re talking about billions of dollars. If OpenAI’s funding round with SoftBank goes as planned, it’ll raise the equivalent of the entire GDP of Estonia — a fairly wealthy country itself, and one that’s also a member of NATO and the European Union. That alone should give you a sense of the truly insane scale of this. 

Insane, sure, but undoubtedly necessary. Per The Information, OpenAI expects to spend as much as $28 billion in compute on Microsoft's Azure cloud in 2028. Over a third of OpenAI's revenue, per the same article, will come from SoftBank's (alleged) spend. It's reasonable to believe that OpenAI will, as a result, need to raise in excess of $40 billion in funding a year, and quite possibly $50 billion or more a year, until it reaches profitability. This is due to both its growing cost of business and its various infrastructure commitments, both in terms of Stargate and with third-party suppliers like CoreWeave and Microsoft. 

Counterpoint: OpenAI could reduce costs: While this is theoretically possible, there is no proof that this is taking place. The Information claims that "...OpenAI would turn profitable by the end of the decade after the buildout of Stargate," but there is no suggestion as to how it might do so, or how building more data centers would somehow reduce its costs. This is especially questionable when you realize that Microsoft is already providing discounted pricing on Azure compute. We don’t know if these discounts are below Microsoft’s break-even point, pricing that neither Microsoft nor any other company would offer unless it had something else to incentivize it, such as equity or a profit-sharing program. Microsoft, for what it’s worth, has both of those things. 

OpenAI CEO Sam Altman's statements around costs also suggest that they're increasing. In late February, Altman claimed that OpenAI was "out of GPUs." While this suggests that there’s demand for some products — like its image-generating tech, which enjoyed a viral day in the sun in March — it also means that to meet the demand it needs to spend more. And, at the risk of repeating myself, that demand doesn’t necessarily translate into profitability. 

SoftBank Cannot Fund OpenAI Long-Term, as OpenAI's costs are projected to be $320 billion in the next five years

As discussed above, SoftBank has to overcome significant challenges to fund both OpenAI and Stargate, and when I say "fund," I mean fund the current state of both projects, assuming no further obligations.

The Information reports that OpenAI forecasts that it will spend $28 billion on compute with Microsoft alone in 2028. The same article also reports that OpenAI "would turn profitable by the end of the decade after the buildout of Stargate," suggesting that OpenAI's operating expenses will grow exponentially year-over-year.

These costs, per The Information, are astronomical:

The reason for the expanding cash burn is simple: OpenAI is spending whatever revenue comes in on computing needs for operating its existing models and developing new models. The company expects those costs to surpass $320 billion overall between 2025 and 2030.

The company expects more than half of that spending through the end of the decade to fund research-intensive compute for model training and development. That spending will rise nearly sixfold from current rates to around $40 billion per year starting in 2028. OpenAI projects its spending on running AI models will surpass its training costs in 2030.
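To put those numbers in perspective, a rough sketch. The per-year averages here are my arithmetic, not The Information's:

```python
# The Information's $320B compute figure, spread across 2025-2030.
total = 320.0  # billions
print(f"${total / 6:.0f}B per year on average")  # ~$53B
print(f"${total / 2:.0f}B+ on model training")   # "more than half"

# "Nearly sixfold from current rates to around $40 billion per year"
# implies training compute runs at roughly $6.7B a year today.
print(f"${40 / 6:.1f}B implied current training spend")
```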

SoftBank has had to (and will continue having to) go to remarkable lengths to fund OpenAI's current ($40 billion) round, lengths so significant that it may lead to its credit rating being further downgraded.

Even if we assume the best case scenario — OpenAI successfully converts to a for-profit entity by the end of the year, and receives the full $30 billion — it seems unlikely (if not impossible) for it to continue raising the amount of capital it needs to continue operations. As I’ve argued in previous newsletters, there are only a few entities that can provide the kind of funding that OpenAI needs. These include big tech-focused investment firms like SoftBank, sovereign wealth funds (like those of Saudi Arabia and the United Arab Emirates), and perhaps the largest tech companies.

These entities can meet OpenAI’s needs, but not indefinitely. It’s not realistic to expect SoftBank, or Microsoft, or the Saudis, or Oracle, or whoever, to provide $40bn every year for the foreseeable future. 

This is especially true for SoftBank. Based on its current promise not to borrow more than 25% of the value of its holdings, it is near-impossible that SoftBank will be able to continue funding OpenAI at this rate ($40 billion a year), and $40 billion a year may not actually be enough.

Based on its last reported equity value of holdings, SoftBank's investments and other assets are worth around $229 billion, meaning that it can borrow just over $57bn while remaining compliant with these guidelines.
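Or, to do the one piece of arithmetic that matters here:

```python
# SoftBank's self-imposed cap: borrow no more than 25% of holdings.
holdings = 229.0  # billions, last reported equity value of holdings
print(f"${holdings * 0.25:.2f}B maximum borrowing")  # $57.25B
```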

In any case, it is unclear how SoftBank can fund OpenAI, but it's far clearer that nobody else is willing to.

OpenAI Is Running Into Capacity Issues, Suggesting Material Instability In Its Business or Infrastructure — And It's Unclear How It Expands Further

Before we go any further, it's important to note that OpenAI does not really have its own compute infrastructure. The majority of its compute is provided by Microsoft, though, as mentioned above, OpenAI now has a deal with CoreWeave to take over Microsoft's future options for more capacity.

Anyway, in the last 90 days, Sam Altman has complained about a lack of GPUs and pressure on OpenAI's servers multiple times. Forgive me for repeating stuff from above, but this is necessary.

  • On February 27, he lamented how GPT 4.5 was a "giant, expensive model," adding that it was "hard to perfectly predict growth surges that lead to GPU shortages." He also added that they would be adding tens of thousands of GPUs in the following week, then hundreds of thousands of GPUs "soon."
  • On March 26, he said that "images in chatgpt are wayyyy more popular than [OpenAI] expected," delaying the free tier launch as a result.
  • On March 27, he said that OpenAI's "GPUs [were] melting," adding that it was "going to introduce some temporary rate limits" while it worked out how to "make it more efficient."
  • On March 28, he retweeted Rohan Sahai, the product team lead on OpenAI's Sora video generation model, who said "The 4o image gen demand has been absolutely incredible. Been super fun to watch the Sora feed fill up with great content...GPUs are also melting in Sora land unfortunately so you may see longer wait times / capacity issues over coming days."
  • On March 30, he said "can yall please chill on generating images this is insane our team needs sleep."
  • On April 1, he said that "we are getting things under control, but you should expect new releases from openai [sic] to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges." He also added that OpenAI is "working as fast we can to really get stuff humming; if anyone has GPU capacity in 100k chunks we can get asap please call!"

These statements, in a bubble, seem either harmless or like OpenAI's growth is skyrocketing — the latter of which might indeed be true, but bodes ill for a company that burns money on every single user.

Any mention of rate limits or performance issues suggests that OpenAI is having significant capacity issues, and at this point it's unclear what further capacity it can actually expand into beyond what's currently available. Remember, Microsoft has now pulled out of as much as 2GW of data center projects, walked away from a $1 billion data center development in Ohio, and declined the option on $12bn of compute from CoreWeave that OpenAI had to pick up — meaning that OpenAI may be pushing up against the limits of what is physically available.

While the total available capacity of GPUs at many providers like Lambda and Crusoe is unknown, we know that CoreWeave has approximately 360MW available, compared to Microsoft's 6.5 to 7.5 Gigawatts, a large chunk of which already powers OpenAI.

If OpenAI is running into capacity issues, it could be one of the following:

  • OpenAI is running up against the limit of what Microsoft has available, or is willing to offer the company. The Information reported in October 2024 that OpenAI was frustrated with Microsoft, complaining that it wasn’t moving fast enough to supply it with servers.
  • While OpenAI's capacity is sufficient, it does not have the resources available to easily handle bursts in user growth in a stable manner.

Per The Information's reporting, Microsoft "promised OpenAI 300,000 NVIDIA GB200 (Blackwell) chips by the end of 2025," or roughly $18 billion of chips. It's unclear if this has changed since Microsoft allowed OpenAI to seek other compute in late January 2025.

I also don't believe that OpenAI has any other viable options for existing compute infrastructure outside of Microsoft. CoreWeave's current data centers mostly feature NVIDIA's aging "Hopper" GPUs, and while it could be — and likely is! — retrofitting its current infrastructure with Blackwell chips, doing so is not easy. Blackwell chips require far more powerful cooling and server infrastructure to run smoothly (a problem which led to a delay in their delivery to most customers), and even if CoreWeave were able to replace every last Hopper GPU with a Blackwell (it won't), it still wouldn't match what OpenAI needs to expand.

One might argue that it simply needs to wait for the construction of the Stargate data center, or for CoreWeave to finish the gigawatt or so of construction it’s working on.

As I've previously written, I have serious concerns over the viability of CoreWeave ever completing its (alleged) contracted 1.3 Gigawatts of capacity.

Per my article:

Per its S-1, CoreWeave has contracted for around 1.3 Gigawatts of capacity, which it expects to roll out over the coming years, and based on NextPlatform's math, CoreWeave will have to spend in excess of $39 billion to build its contracted compute. It is unclear how it will fund doing so, and it's fair to assume that CoreWeave does not currently have the capacity to cover its current commitments.

However, even if I were to humour the idea, it is impossible that any of this project is done by the end of the year, or even in 2026. I can find no commitments to any timescale, other than the fact that OpenAI will allegedly start paying CoreWeave in October (per The Information), which could very well be using current capacity.

I can also find no evidence that Crusoe, the company building the Stargate data center, has any compute available. Lambda, a GPU compute company that raised $320 million earlier this year, according to Data Center Dynamics "operates out of colocation data centers in San Francisco, California, and Allen, Texas, and is backed by more than $820 million in funds raised just this year," suggesting that it may not have its own data centers at all. Its ability to scale is entirely contingent on the availability of whatever data center providers it has relationships with. 

In any case, this means that OpenAI's only real choices for GPUs are CoreWeave and Microsoft. While it's hard to calculate precisely, OpenAI's best case scenario is that 16,000 GPUs come online in the summer of 2025 as part of the Stargate data center project.

That's a drop in the bucket compared to the 300,000 Blackwell GPUs that Microsoft had previously promised.

Any Capacity Or Expansion Issues Threaten To Kneecap OpenAI

OpenAI is, regardless of how you or I may feel about generative AI, one of the fastest-growing companies of all time. It currently has, according to its own statements, 500 million weekly active users. Putting aside that each user is unprofitable, such remarkable growth — especially as it's partially a result of its extremely resource-intensive image generator — is also a strain on its infrastructure.

The vast majority of OpenAI's users are free customers using ChatGPT, with only around 20 million paying subscribers, most of them on the cheapest $20 plan. OpenAI's services — even in the case of image generation — are relatively commoditized, meaning that users can, if they really care, go and use any number of other Large Language Model services. They can switch to Bing Image Creator, or Grok, or Stable Diffusion, or whatever.

Free users are also a burden on the company — especially with such a piss-poor conversion rate — losing it money with each prompt (which is also the case with paying customers), and the remarkable popularity of its image generation service only threatens to bring more burdensome one-off customers that will generate a few abominable Studio Ghibli pictures and then never return.

If OpenAI's growth continues at this rate, it will run into capacity issues, and it does not have much room to expand. While we do not know how much capacity it’s taking up with Microsoft, or indeed whether Microsoft is approaching capacity or otherwise limiting how much of it OpenAI can take, we do know that OpenAI has seen reason to beg for access to more GPUs.

In simpler terms, even if OpenAI wasn’t running out of money, even if OpenAI wasn’t horrifyingly unprofitable, it also may not have enough GPUs to continue providing its services in a reliable manner.

If that's the case, there really isn't much that can be done to fix it other than:

  • Significantly limiting free users' activity on the platform, which is OpenAI's primary mechanism for revenue growth and customer acquisition.
  • Limiting activity or changing the economics behind its paid product, to quote Sam Altman, "find[ing] some way to let people to pay for compute they want to use more dynamically."
    • On March 4, Altman solicited feedback on "...an idea for paid plans: your $20 plus subscription converts to credits you can use across features like deep research, o1, gpt-4.5, sora, etc...no fixed limits per feature and you choose what you want; if you run out of credits you can buy more."
    • On January 5th, Sam Altman revealed that OpenAI is currently losing money on every paid subscription, including its $200-a-month "pro" subscription.
    • Buried in an article from The Information from March 5 is a comment that suggests it’s considering measures like changing its pricing model, with "...Sam Altman reportedly [telling] developers in London [in February] that OpenAI is primed to charge 20% or 30% of Pro customers a higher price because of how many research queries they’re doing, but he suggested an “a la carte” or pay-as-you-go approach. When it comes to agents, though, “we have to charge much more than $200 a month.”

The problem is that these measures, even if they succeed in generating more money for the company, would also need to reduce the burden on OpenAI's available infrastructure.

Remember: data centers can take three to six years to build, and even with Stargate's accelerated (and I'd argue unrealistic) timelines, OpenAI isn't even unlocking a tenth of Microsoft's promised compute (16,000 GPUs online this year versus the 300,000 GPUs promised by Microsoft).
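For a sense of scale, a quick sketch using the two figures above (both from the reporting already cited; the division is mine):

```python
stargate_gpus_2025 = 16_000    # GPUs expected online at Stargate by summer 2025
microsoft_promise = 300_000    # Blackwell GPUs Microsoft reportedly promised

# Stargate's 2025 capacity as a fraction of Microsoft's promised compute
print(f"{stargate_gpus_2025 / microsoft_promise:.1%}")    # 5.3%
```

In other words, closer to a twentieth than a tenth.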

What Might Capacity Issues Look Like? And What Are The Consequences?

Though downtime might be the obvious guess, capacity issues at OpenAI will likely manifest in hard limits on what free users can do, some of which I've documented above. Nevertheless, I believe the real pale horses of capacity issues are arbitrary limits on any given user group, meaning both free and paid users. Sudden limits on what a user can do — a reduction in the number of generations of images or videos for paid users, any introduction of "peak hours," or any increases in prices — are a sign that OpenAI is running out of GPUs, which it has already publicly said is happening.

However, the really obvious one would be service degradation — delays in generations of any kind, 500 status code errors, or ChatGPT failing to fully produce an answer. OpenAI has, up until this point, had fairly impressive uptime. Still, if it is running up against a wall, this streak will end.

The consequences depend on how often these issues occur, and to whom they occur. If free users face service degradation, they will bounce off the product, as their use is likely far more fleeting than a paid user, which will begin to erode OpenAI's growth. Ironically, rapid (and especially unprecedented) growth in one of OpenAI’s competitors, like xAI or Anthropic, could also represent a pale horse for OpenAI. 

If paid users face service degradation, it's likely this will cause the most harm to the company, as while paid users still lose OpenAI money in the end, it at least receives some money in exchange.

OpenAI has effectively one choice here: getting more GPUs from Microsoft. Its future depends heavily both on Microsoft's generosity and on there being enough GPUs to go around, at a time when Microsoft has pulled back from two gigawatts of data centers specifically because it is moving away from providing compute for OpenAI.

Admittedly, OpenAI has previously spent more on training models than on inference (actually running them), and the company might be able to smooth over downtime issues by shifting capacity. This would, of course, have a knock-on effect on its ability to continue developing new models, and the company is already losing ground, particularly when it comes to Chinese rivals like DeepSeek.

OpenAI Must Convert To A For-Profit Entity By The End of 2025 Or It Loses $10 Billion In Funding, And Doing So May Be Impossible

As part of its deal with SoftBank, OpenAI must convert its bizarre non-profit structure into a for-profit entity by December 2025, or it’ll lose $10 billion from its promised funding. 

Furthermore, in the event that OpenAI fails to convert to a for-profit by October 2026, investors in its previous $6.6 billion round can claw back their investment, with it converting into a loan with an attached interest rate. Naturally, this represents a nightmare scenario for the company, as it’ll increase both its liabilities and its outgoings.

This is a complex situation that almost warrants its own newsletter, but the long and short of it is that OpenAI would have to effectively dissolve itself, start the process of forming an entirely new entity, and distribute its assets to other nonprofits (or sell/license them to the for-profit company at fair market rates). It would require valuing OpenAI's assets, which in and of itself would be a difficult task, as well as getting past the necessary state regulators, the IRS, and state revenue agencies. The upcoming trial with Elon Musk only adds further problems.

I’ve simplified things here, and that’s because (as I said) this stuff is complex. Suffice to say, this isn’t as simple as liquidating a company and starting afresh, or submitting a couple of legal filings. It’s a long, fraught process and one that will be — and has been — subject to legal challenges, both from OpenAI’s business rivals, as well as from civil society organizations in California.

Based on discussions with experts in the field and my own research, I simply do not know how OpenAI pulls this off by October 2026, let alone by the end of the year.

OpenAI Has Become A Systemic Risk To The Tech Industry

OpenAI has become a load-bearing company for the tech industry, both as a narrative — as previously discussed, OpenAI is the only Large Language Model company with any meaningful userbase — and as a financial entity. 

Its ability to meet its obligations and its future expansion plans are critical to the future health — or, in some cases, survival — of multiple large companies, and that's before you get to the knock-on effects any financial collapse would have on its customers. 

The parallels to the 2007-2008 financial crisis are startling. Lehman Brothers wasn’t the largest investment bank in the world (although it was certainly big), just like OpenAI isn’t the largest tech company (though, again, it’s certainly large in terms of valuation and expenditure). Lehman Brothers’ collapse sparked a contagion that would later spread throughout the global financial services industry, and consequently, the global economy. 

I can see OpenAI’s failure having a similar systemic effect. While there is a vast difference between OpenAI’s involvement in people’s lives compared to the millions of subprime loans issued to real people, the stock market’s dependence on the value of the Magnificent 7 stocks (Apple, Microsoft, Amazon, Alphabet, NVIDIA and Tesla), and in turn the Magnificent 7’s reliance on the stability of the AI boom narrative still threatens material harm to millions of people, and that’s before the ensuing layoffs. 

And as I’ve said before, this entire narrative is based off of OpenAI’s success, because OpenAI is the generative AI industry. 

I want to lay out the direct result of any kind of financial crisis at OpenAI, because I don't think anybody is taking this seriously.

Oracle Will Lose At Least $1 Billion If OpenAI Doesn't Fulfil Its Obligations

Per The Information, Oracle, which has taken responsibility for organizing the construction of the Stargate data centers with unproven data center builder Crusoe, "...may need to raise more capital to fund its data center ambitions."

Oracle has signed a 15-year lease with Crusoe, and, to quote The Information, "...is on the hook for $1 billion in payments to that firm."

To further quote The Information:

...while that’s a standard deal length, the unprecedented size of the facility Oracle is building for just one customer makes it riskier than a standard cloud data center used by lots of interchangeable customers with more predictable needs, according to half a dozen people familiar with these types of deals.

In simpler terms, Oracle is building a giant data center for one customer — OpenAI — and has taken on the financial burden associated with it. If OpenAI fails to expand, or lacks the capital to actually pay for its share of the Stargate project, Oracle is on the hook for at least a billion dollars, and, based on The Information's reporting, is also on the hook to buy the GPUs for the site.

Even before the Stargate announcement, Oracle and OpenAI had agreed to expand their Abilene deal from two to eight data center buildings, which can hold 400,000 Nvidia Blackwell GPUs, adding tens of billions of dollars to the total cost of the facility.

In reality, this development will likely cost tens of billions of dollars, $19 billion of which is due from OpenAI. OpenAI does not have that money until it receives its second tranche of funding in December 2025, and that tranche is partially contingent on its ability to convert into a for-profit entity, which, as mentioned, is a difficult and unlikely proposition.

It's unclear how many of the Blackwell GPUs Oracle has had to purchase in advance, but in the event of any kind of financial collapse at OpenAI, Oracle would likely take a loss of at least a billion dollars, if not several billion.

CoreWeave's Expansion Is Likely Driven Entirely By OpenAI, And It Cannot Survive Without OpenAI Fulfilling Its Obligations (And May Not Anyway)

I have written a lot about publicly-traded AI compute firm CoreWeave, and it would be my greatest pleasure to never mention it again.

Nevertheless, I have to.

The Financial Times revealed a few weeks ago that CoreWeave's debt payments could balloon to over $2.4 billion a year by the end of 2025, far outstripping its cash reserves, and The Information reported that its cash burn would increase to $15 billion in 2025.

As per its IPO filings, 62% of CoreWeave's 2024 revenue (which totaled a little under $2 billion, against losses of $863 million) was Microsoft compute, and based on conversations with sources, a good amount of this was Microsoft running compute for OpenAI.

In October 2025, OpenAI will start paying CoreWeave as part of its five-year, $12 billion contract, picking up the option that Microsoft declined. This is also when CoreWeave will have to start making payments on its massive, multi-billion dollar DDTL 2.0 loan, which likely makes OpenAI's payments critical to CoreWeave's future.

This deal also suggests that OpenAI will become CoreWeave's largest customer. Microsoft had previously committed to spending $10 billion on CoreWeave's services "by the end of the decade," but CEO Satya Nadella added a few months later on a podcast that its relationship with CoreWeave was a "one-time thing." Assuming Microsoft keeps spending at its previous rate — something that isn't guaranteed — its spend would still amount to only around half of the revenue OpenAI could bring in.

CoreWeave's expansion, at this point, is entirely driven by OpenAI. 77% of its 2024 revenue came from two customers — Microsoft being the largest, and using CoreWeave as an auxiliary supplier of compute for OpenAI. As a result, the future expansion efforts — the theoretical 1.3 gigawatts of contracted (translation: does not exist yet) compute — are largely (if not entirely) for the benefit of OpenAI.

In the event that OpenAI cannot fulfil its obligations, CoreWeave will collapse. It is that simple. 

NVIDIA Relies On CoreWeave For More Than 6% Of Its Revenue, And On CoreWeave's Future Creditworthiness To Keep Receiving It — Much Of Which Is Dependent On OpenAI

I’m basing this on a comment I received from Gil Luria, Managing Director and Head of Technology Research at analyst firm D.A. Davidson & Co:

Since CRWV bought 200,000 GPUs last year and those systems are around $40,000 we believe CRWV spent $8 billion on NVDA last year. That represents more than 6% of NVDA’s revenue last year. 
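Luria's numbers are easy to reproduce. A minimal sketch, with one assumption flagged: I'm reading "last year" as NVIDIA's fiscal 2025, which brought in roughly $130.5 billion in revenue.

```python
gpus_bought = 200_000        # CoreWeave's GPU purchases, per Luria
price_per_system = 40_000    # approximate cost per GPU system, per Luria (USD)
nvda_revenue = 130.5e9       # NVIDIA fiscal 2025 revenue (my assumption, see above)

coreweave_spend = gpus_bought * price_per_system
print(f"Estimated CoreWeave spend: ${coreweave_spend / 1e9:.0f}bn")      # $8bn
print(f"Share of NVIDIA revenue: {coreweave_spend / nvda_revenue:.1%}")  # 6.1%
```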

CoreWeave receives preferential access to NVIDIA's GPUs, and accounts for billions of dollars of NVIDIA's revenue. CoreWeave then takes those GPUs and raises debt using them as collateral, then proceeds to buy more GPUs from NVIDIA. NVIDIA was the anchor for CoreWeave's IPO, and CEO Michael Intrator said that the IPO "wouldn't have closed" without NVIDIA buying $250 million worth of shares. NVIDIA invested $100 million in the early days of CoreWeave, and, for reasons I cannot understand, also agreed to spend $1.3 billion over four years to, and I quote The Information, "rent its own chips from CoreWeave."

Buried in CoreWeave's S-1 — the document every company publishes before going public — was a warning about counterparty credit risk: the risk that one party provides services or goods to another with specific repayment terms, and the other party fails to meet its side of the deal. While this was written as a theoretical (as it could, in theory, apply to any company to which CoreWeave acts as a creditor), it only named one company: OpenAI. 

As discussed previously, CoreWeave is saying that, should a customer — any customer, but really, it means OpenAI — fail to pay its bills for infrastructure built on their behalf, or for services rendered, it could have a material risk to the business.

Aside: The Information reported that Google is in "advanced talks" to rent GPUs from CoreWeave. It also noted that, compared to Microsoft and OpenAI's deals with CoreWeave, Google's potential deal "is significantly smaller than those commitments, according to one of the people briefed on it, but could potentially expand in future years."

CoreWeave's continued ability to do business hinges heavily on its ability to raise further debt (which I have previously called into question), and its ability to raise further debt is, to quote the Financial Times, "secured against its more than 250,000 Nvidia chips and its contracts with customers, such as Microsoft." Any future debt that CoreWeave raises would be based upon its contract with OpenAI (you know, the counterparty credit risk threat that represents a disproportionate share of its revenue) and whatever GPUs it still has to collateralize.

As a result, a chunk of NVIDIA's future revenue is dependent on OpenAI's ability to fulfil its obligations to CoreWeave, both in whether it pays its bills and in how promptly it does so. If OpenAI fails, then CoreWeave fails, which then hurts NVIDIA. 

Contagion. 

OpenAI's Expansion Is Dependent On Two Unproven Startups, Who Are Also Dependent On OpenAI To Live

With Microsoft's data center pullback and OpenAI's intent to become independent from Redmond, future data center expansion is based on two partners supporting CoreWeave and Oracle: Crusoe and Core Scientific, neither of which appears to have ever built an AI data center.

I also must explain how difficult building a data center is, and how said difficulty increases when you're building an AI-focused data center. For example, NVIDIA had to delay the launch of its Blackwell GPUs because of how finicky the associated infrastructure (the accompanying servers and their cooling) is. And that was for customers that already had experience handling GPUs, and therefore likely knew how to manage the extreme temperatures they create.

As another reminder, OpenAI is on the hook for $19 billion of funding behind Stargate, money that neither it nor SoftBank has right now.

Imagine if you didn't have any experience, and effectively had to learn from scratch. How do you think that would go?

We're about to find out!

Crusoe - Stargate - Abilene, Texas

Crusoe is a former cryptocurrency mining company that has now raised hundreds of millions of dollars to build data centers for AI companies, starting with a $3.4 billion data center financing deal with asset manager Blue Owl Capital. This (yet-to-be-completed) data center has now been leased by Oracle, which will, allegedly, fill it full of GPUs for OpenAI.

Despite calling itself "the industry’s first vertically integrated AI infrastructure provider," with the company using flared gas (a waste byproduct of oil production) to power IT infrastructure, Crusoe does not appear to have built an AI data center, and is now being tasked with building a 1.2 Gigawatt data center campus for OpenAI.

Crusoe is the sole developer and operator of the Abilene site, meaning, according to The Information, that it "...is in charge of contracting with construction contractors and data center customers, as well as running the data center after it is built."

Oracle, it seems, will be responsible for filling said data center with GPUs and the associated hardware.

Nevertheless, the project appears to be behind schedule.

The Information reported in October 2024 that Abilene was meant to have "...50,000 of NVIDIA's [Blackwell] AI chips...in the first quarter of [2025]," and also suggested that the site was projected to have 100,000 Blackwell chips by the end of 2025.

Here in reality, a report from Bloomberg in March 2025 (that I cited previously) said that OpenAI and Oracle were expected to have 16,000 GPUs available by the summer of 2025, adding that "...OpenAI and oracle [sic] are expected to deploy 64,000 NVIDIA GB200s at the Stargate data center...by the end of 2026."

As discussed above, OpenAI needs this capacity. According to The Information, OpenAI expects Stargate to handle three-quarters of its compute by 2030, and these delays call into question at the very least whether this schedule is reasonable, if not whether Stargate, as a project, is actually possible.

Core Scientific - CoreWeave - Denton, Texas

I've written a great deal about CoreWeave in the past, and specifically about its buildout partner Core Scientific, a cryptocurrency mining company (yes, another one) that has exactly one customer for AI data centers — CoreWeave.

A few notes:

  • Core Scientific is, it seems, taking on $1.14 billion of capital expenditures to build out these data centers, with CoreWeave promising to reimburse $899.3 million of these costs.
  • It's unclear how Core Scientific intends to do this. While it’s taken on a good amount of debt in the past — $550 million in a convertible note toward the end of 2024 — this would be more than it’s ever borrowed.
  • As with Crusoe, it does not appear to have any experience building AI data centers, except unlike Crusoe, Core Scientific is a barely-functioning, recently-bankrupted bitcoin miner pretending to be a data center company.

How important is CoreWeave to OpenAI exactly? From Semafor:

“CoreWeave has been one of our earliest and largest compute partners,” OpenAI chief Sam Altman said in CoreWeave’s roadshow video, adding that CoreWeave’s computing power “led to the creation of some of the models that we’re best known for.”

“Coreweave figured out how to innovate on hardware, to innovate on data center construction, and to deliver results very, very quickly.”

But will it survive long term?

Going back to the point of contagion: If OpenAI fails, and CoreWeave fails, so too does Core Scientific. And I don’t fancy Crusoe’s chances, either. At least Crusoe isn’t public.

An Open Question: Does Microsoft Book OpenAI's Compute As Revenue?

Until fairly recently, Microsoft was the entire infrastructural backbone of OpenAI, but it has since (to free OpenAI up to work with Oracle) released the company from its exclusive cloud compute deal. Nevertheless, per The Information, OpenAI still intends to spend $13 billion on compute on Microsoft Azure this year.

What's confusing, however, is whether any of this is booked as revenue. Microsoft claimed earlier this year that it surpassed $13 billion in annual recurring revenue — by which it means its last month multiplied by 12 — from artificial intelligence. OpenAI's compute costs in 2024 were $5 billion, at a discounted Azure rate, which works out to around $416 million a month in revenue for Microsoft.
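Here's that arithmetic spelled out, using only the figures in this piece. Note how large a chunk of Microsoft's claimed AI revenue OpenAI's spend would be if it were all booked as revenue, which is precisely the open question:

```python
openai_compute_2024 = 5e9    # OpenAI's 2024 compute costs at the discounted Azure rate
microsoft_ai_arr = 13e9      # Microsoft's claimed AI annual recurring revenue

monthly_run_rate = openai_compute_2024 / 12
print(f"Implied monthly revenue: ${monthly_run_rate / 1e6:.1f}m")   # $416.7m

# If booked as revenue, OpenAI alone would be ~38% of Microsoft's AI ARR
print(f"Share of claimed AI ARR: {openai_compute_2024 / microsoft_ai_arr:.0%}")  # 38%
```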

It isn't, however, clear whether Microsoft counts OpenAI's compute spend as revenue.

Microsoft's earnings do not include an "artificial intelligence" section, but rather three separate segments:

  • Productivity and Business Processes, which includes things like Microsoft 365, LinkedIn, Dynamics 365 and other business processing software.
  • More Personal Computing, which includes Windows and gaming products.
  • Intelligent Cloud, which includes server products and cloud services like Azure, and is likely where OpenAI's compute sits.

As a result, it's hard to say specifically where OpenAI's revenue sits, but I analyzed Microsoft's Intelligent Cloud segment from FY23 Q1 (note, financial years don’t always correspond with the calendar year, so we just finished FY25 Q2 in January) through to its most recent earnings, and found that there was a spike in revenue from FY23 Q1 to FY24 Q1. 

In FY23 Q1 (which ended on September 30, 2022, a month before ChatGPT's launch), the segment made $20.3 billion. The following year, in FY24 Q1, it made $24.3 billion — a 19.7% year-over-year (or roughly $4 billion) increase.

This could represent the massive increase in training and inference costs associated with hosting ChatGPT, peaking at $28.5 billion in revenue in FY24 Q4 — before dropping dramatically to $24.1 billion in FY25 Q1 and rising a little to $25.5 billion in FY25 Q2.
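For clarity, here's that trajectory with the growth rates computed — a sketch using the segment figures quoted above (the revenue numbers are from Microsoft's earnings as cited; the percentages are mine):

```python
# Microsoft Intelligent Cloud quarterly revenue, in billions of USD
quarters = {
    "FY23 Q1": 20.3,   # quarter ending September 30, 2022
    "FY24 Q1": 24.3,
    "FY24 Q4": 28.5,   # the peak
    "FY25 Q1": 24.1,
    "FY25 Q2": 25.5,
}

spike = (quarters["FY24 Q1"] - quarters["FY23 Q1"]) / quarters["FY23 Q1"]
print(f"FY23 Q1 to FY24 Q1: {spike:+.1%}")    # +19.7%

drop = (quarters["FY25 Q1"] - quarters["FY24 Q4"]) / quarters["FY24 Q4"]
print(f"FY24 Q4 to FY25 Q1: {drop:+.1%}")     # -15.4%
```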

OpenAI spent 2023 training its GPT-4o model before transitioning to its massive, expensive "Orion" model which would eventually become GPT 4.5, as well as its video generation model "Sora." According to the Wall Street Journal, training GPT 4.5 involved at least one training run costing "around half a billion dollars in computing costs alone."

These are huge sums, but it’s worth noting a couple of things. First, Microsoft licenses OpenAI’s models to third parties, so some of this revenue could be from other companies using GPT on Azure. There are also other companies running their own models on Azure. We’ve seen a lot of companies launch AI products, and not all of them are based on LLMs.

Muddling things further, Microsoft provides OpenAI access to Azure cloud services at a discounted rate. And so, there’s a giant question mark over OpenAI’s contribution to the various spikes in revenue for Microsoft’s Intelligent Cloud segment, or whether other third-parties played a significant role. 

Furthermore, Microsoft’s investment in OpenAI isn’t entirely in cold, hard cash. Rather, it has provided the company with credits to be redeemed on Azure services. I’m not entirely sure how this would be represented in accounting terms, and if anyone can shed light on this, please get in touch. 

Would it be noted as revenue, or something else? OpenAI isn’t paying Microsoft, but rather doing the tech equivalent of redeeming some airmiles, or spending a gift card. 

Additionally, while equity is often treated as income for tax purposes — as is the case when an employee receives RSUs as part of their compensation package — under the existing OpenAI structure, Microsoft isn’t a shareholder but rather the owner of profit-sharing units. This is a distinction worth noting.  

These profit-sharing units are treated as analogous to equity, at least in terms of OpenAI’s ability to raise capital, but in practice they aren’t the same thing. They don’t represent ownership in the company as directly as, for example, a normal share unit would. They lack the liquidity of a share, and the upside they provide — namely, dividends — is purely theoretical. 

Another key difference: when a company goes bankrupt and enters liquidation, shareholders can potentially receive a share of the proceeds (after other creditors, employees, etc are paid). While that often doesn’t happen (as in, the liabilities far exceed the assets of the company), it’s at least theoretically possible. Given that profit-sharing units aren’t actually shares, where does that leave Microsoft?

This stuff is confusing, and I’m not ashamed to say that complicated accounting questions like these are far beyond my understanding. If anyone can shed some light, drop me an email, or a message on Twitter or BlueSky, or post on the Better Offline subreddit. 

The Future of Generative AI Rests On OpenAI, And OpenAI's Future Rests On Near-Impossible Financial Requirements

I have done my best to write this piece in as objective a tone as possible, regardless of my feelings about the generative AI bubble and its associated boosters.

OpenAI, as I've written before, is effectively the entire generative AI industry, with its nearest competitor being less than five percent of its 500 million weekly active users.

Its future is dependent — and this is not an opinion, but objective fact — on effectively infinite resources.

Financial Resources

If it required $40 billion to continue operations this year, it is reasonable to believe it will need at least another $40 billion next year, and based on its internal projections, will need at least that every single year until 2030, when it claims, somehow, it will be profitable "with the completion of the Stargate data center."

Compute Resources and Expansion

OpenAI requires more compute resources than anyone has ever needed, and will continue to do so in perpetuity. Building these resources is now dependent on two partners — Core Scientific and Crusoe — that have never built an AI data center, at a time when Microsoft has materially pulled back on data center development, having (on top of the aforementioned pullback on 2GW of data centers) "slowed or paused" some of its "early stage" data center projects. This shift is directly linked to Microsoft’s relationship with OpenAI, with TD Cowen's recent analyst report saying that data center pullbacks were, and I quote its March 26 2025 data center channel checks letter, "...driven by the decision to not support incremental OpenAI training workloads."

In simpler terms, OpenAI needs more compute at a time when its lead backer, which has the most GPUs in the world, has specifically walked away from building it.

Even in my most optimistic frame of mind, it isn't realistic to believe that Crusoe or Core Scientific can build the data centers necessary for OpenAI's expansion.

Even if SoftBank and OpenAI had the money to invest in Stargate today, dollars do not change the fabric of reality. Data centers take time to build, requiring concrete, wood, steel and other materials to be manufactured and placed, and that's after the permitting required to get these deals done. Even if that succeeds, getting the power necessary is a challenge unto itself, to the point that even Oracle, an established and storied cloud compute company, to quote The Information, "...has less experience than its larger rivals in dealing with utilities to secure power and working with powerful and demanding cloud customers whose plans change frequently."

A partner like Crusoe or Core Scientific simply doesn't have the muscle memory or domain expertise that Microsoft has when it comes to building and operating data centers. As a result, it's hard to imagine even in the best case scenario that they're able to match the hunger for compute that OpenAI has.

Now, I want to be clear — I believe OpenAI will still continue to use Microsoft's compute, and even expand further into whatever remaining compute Microsoft may have. However, there is now a hard limit on how much of it there's going to be, both literally (in what's physically available) and in what Microsoft itself will actually allow OpenAI to use, especially given how unprofitable GPU compute might be.

How Does This End?

Last week, a truly offensive piece of fan fiction — framed as a "report" — called AI 2027 went viral, garnering press coverage via the Dwarkesh Podcast and gormless, child-like wonder from the New York Times' Kevin Roose. Its predictions vaguely suggest that a theoretical company called OpenBrain will invent a self-teaching agent of some sort.

It's bullshit, but it captured the hearts and minds of AI boosters because it vaguely suggests that somehow Large Language Models and their associated technology will become something entirely different.

I don't like making predictions like these because the future — especially in our current political climate — is so chaotic, but I will say that I do not see, and I say this with complete objectivity, how any of this continues.

I want to be extremely blunt with the following points, as I feel like both members of the media and tech analysts have failed to express how ridiculous things have become. I will be repeating myself, but it's necessary, as I need you to understand how untenable things are.

  • SoftBank is putting itself in dire straits simply to fund OpenAI once. This deal threatens its credit rating, with SoftBank having to take on what will be multiple loans to fund OpenAI's $40 billion round. OpenAI will need at least another $40 billion in the next year.
    • This is before you consider the other $19 billion that SoftBank has agreed to contribute to the Stargate data center project, money that it does not currently have available.
  • OpenAI has promised $19 billion to the Stargate data center project, money it does not have and cannot get without SoftBank's funds.
    • Again, neither SoftBank nor OpenAI has the money for Stargate right now.
  • OpenAI needs Stargate to get built to grow much further.

I see no way in which OpenAI can continue to raise money at this rate, even if OpenAI somehow actually receives the $40 billion, which will require it to become a for-profit entity. While it could theoretically stretch that $40 billion to last multiple years, projections say it’ll burn $320 billion in the next five years.
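To put that burn in context, a last bit of napkin math, assuming (generously) that the projected $320 billion is spread evenly across five years:

```python
projected_burn = 320e9   # OpenAI's projected burn through 2030, per its own projections
years = 5
round_size = 40e9        # the current round

avg_annual_burn = projected_burn / years
print(f"Average annual burn: ${avg_annual_burn / 1e9:.0f}bn")   # $64bn

# At that average rate, $40 billion lasts about seven and a half months
print(f"Months of runway per round: {round_size / avg_annual_burn * 12:.1f}")  # 7.5
```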

Or, more likely, I can’t see a realistic way in which OpenAI gets the resources it needs to survive. It’ll need a streak of unlikely good fortune, the kind you only ever hear about in Greek epic poems: 

  • SoftBank somehow gets the resources (and loses the constraints) required to bankroll it indefinitely. 
  • The world’s wealthiest entities — those sovereign wealth funds mentioned earlier, the Saudis and so on — pick up the slack each year until OpenAI reaches profitability (assuming it ever does).
  • It has enough of those mega-wealthy benefactors to provide the $320bn it needs before it reaches profitability.
  • Crusoe and Core Scientific turn out to be really good at building AI infrastructure — something they’ve never done before. 
  • Microsoft walks back its walk-back on building new AI infrastructure and recommits to the tens of billions of dollars of capex spending it previously floated. 
  • Stargate construction happens faster than expected, and there are no supply chain issues (in terms of labor, building materials, GPUs, and so on).

If those things happen, I’ll obviously find myself eating crow. But I’m not worried. 

In the present conditions, OpenAI is on course to run out of money or compute capacity, and it's unclear which will happen first.

It's Time To Wake Up

Even in a hysterical bubble where everybody is agreeing that this is the future, OpenAI currently requires more money and more compute than is reasonable to acquire. Nobody has ever raised as much as OpenAI needs to, and based on the sheer amount of difficulty that SoftBank is having in raising the funds to meet the lower tranche ($10bn) of its commitment, it may simply not be possible for this company to continue.

Even with extremely preferential payment terms — months-long deferred payments, for example — at some point somebody is going to need to get paid.

I will give Sam Altman credit. He's found many partners to shoulder the burden of the rotten economics of OpenAI, with Microsoft, Oracle, Crusoe and CoreWeave handling the up-front costs of building the infrastructure, SoftBank finding the investors for its monstrous round, and the tech media mostly handling his marketing for him.

He is, however, over-leveraged. OpenAI has never been forced to stand on its own two feet or focus on efficiency, and I believe the constant enabling of its ugly, nonsensical burn rate has doomed this company. OpenAI has acted like it’ll always have more money and compute, and that people will always believe its bullshit, mostly because up until recently everybody has.

OpenAI cannot "make things cheaper" at this point, because the money has always been there to make things more expensive, as has the compute to make larger language models that burn billions of dollars a year. This company is not built to reduce its footprint in any way, nor is it built for a future in which it wouldn't have access to, as I've said before, infinite resources.

Worse still, investors and the media have run cover for the fact that these models don't really do much more than they did a year ago and for the overall diminishing returns of Large Language Models.

I have had many people attack my work about OpenAI, but none have provided any real counterpoint to the underlying economic argument I've made since July of last year that OpenAI is unsustainable. This is likely because there really isn't one, other than "OpenAI will continue to raise more money than anybody has ever raised in history, in perpetuity, and will somehow turn from the least-profitable company of all time to a profitable one."

This isn’t a rational argument. It’s a religious one. It’s a call for faith. 

And I see no greater pale horse of the apocalypse than Microsoft's material pullback on data centers. While the argument might be that Microsoft wants OpenAI to have an independent future, that's laughable when you consider Microsoft's deeply monopolistic tendencies — and, for that matter, it owns a massive proportion of OpenAI’s pseudo-equity. At one point, Microsoft’s portion amounted to 49 percent. And while additional fundraising has likely diluted Microsoft’s stake, it still “owns” a massive proportion of what is (at least) the most valuable private startup of all time.

And we’re supposed to believe that Microsoft’s pullback — which limits OpenAI’s access to the infrastructure it needs to train and run its models, and thus (as mentioned) represents an existential threat to the company — is because of some paternal desire to see OpenAI leave the childhood bedroom, spread its wings, and enter the real world? Behave. 

More likely, Microsoft got what it needed out of OpenAI, which has reached the limit of the models it can develop, and whose IP Microsoft already retains. There’s probably no reason to make any further significant investments, though Microsoft allegedly may be part of the initial $10 billion tranche of OpenAI’s next round.

It's also important to note that absolutely nobody other than NVIDIA is making any money from generative AI. CoreWeave loses billions of dollars, OpenAI loses billions of dollars, Anthropic loses billions of dollars, and I can't find a single company providing generative AI-powered software that's making a profit. The only companies even close to doing so are consultancies providing services to train and create data for models like Turing and Scale AI — and Scale isn't even profitable.

The knock-on effects of OpenAI's collapse will be wide-ranging. Neither CoreWeave nor Crusoe will have tenants for their massive, unsustainable operations, and Oracle will have nobody to sell the compute it’s leased from Crusoe for the next 15 years. CoreWeave will likely collapse under the weight of its abominable debt, which will lead to a 7%+ revenue drop for NVIDIA at a time when revenue growth has already begun to slow.

On a philosophical level, OpenAI's health is what keeps this industry alive. OpenAI has the only meaningful userbase in generative AI, and this entire hype-cycle has been driven by its success, meaning any deterioration (or collapse) of OpenAI will tell the market what I've been saying for over a year: that generative AI is not the next hyper-growth market, and its underlying economics do not make sense.

I am not writing this to be "right" or "be a hater."

If something changes, and I am wrong somehow, I will write exactly how, and why, and what mistakes I made to come to the conclusions I have in this piece.

I do not believe that my peers in the media will do the same when this collapses, but I promise you that they will be held accountable, because all of this abominable waste could have been avoided.

Large Language Models are not, on their own, the problem. They're tools, capable of some outcomes, doing some things, but the problem, ultimately, is the extrapolations made about their abilities, and the unnecessary drive to make them larger, even if said largeness never amounted to much.

Everything that I'm describing is the result of a tech industry — including media and analysts — that refuses to do business with reality, trafficking in ideas and ideology, celebrating victories that have yet to take place, applauding those who have yet to create the things they're talking about, cheering on men lying about what's possible so that they can continue to burn billions of dollars and increase their wealth and influence.

I understand why others might not have written this piece. What I am describing is a systemic failure, one at a scale hitherto unseen, one that has involved so many rich and powerful and influential people agreeing to ignore reality, and one that’ll have crushing impacts for the wider tech ecosystem when it happens.

Don't say I didn't warn you.

The Phony Comforts of AI Optimism

2025-03-25 01:01:14

A few months ago, Casey Newton of Platformer ran a piece called "The phony comforts of AI skepticism," framing those who would criticize generative AI as "having fun," damning them as "hyper-fixated on the things [AI] can't do."

I am not going to focus too hard on this blog, in part because Edward Ongweso Jr. already did so, and in part because I believe that there are much larger problems at work here. Newton is, along with his Hard Fork co-host Kevin Roose, actively engaged in a cynical marketing campaign, a repetition of the last two hype cycles where Casey Newton blindly hyped the metaverse and Roose pumped the bags of a penguin-themed NFT project.

The cycle continues, from Roose running an empty-headed screed about what he "believes" about the near-term trajectory of artificial intelligence — that AGI will be here in the next two years, that we are not ready, but also that he cannot define it or say what it does — to Newton claiming that OpenAI’s Deep Research is "the first good agent" despite the fact that his own examples show exactly how mediocre it is.

You see, optimism is easy. All you have to do is say "I trust these people to do the thing they'll do" and choose to take a "cautiously optimistic" (to use Roose's terminology) view on whatever it is that's put in front of you. Optimism allows you to think exactly as hard as you'd like to, using that big, fancy brain of yours to make up superficially intellectually-backed rationalizations about why something is the future, and because you're writing at a big media outlet, you can just say whatever and people will believe you because you're ostensibly someone who knows what they're talking about. As a result, Roose, in a piece in the New York Times seven months before the collapse of FTX, was able to print that he'd "...come to accept that [crypto] isn't all a cynical money-grab, and that there are things of actual substance being built," all without ever really proving anything.

Roose's "Latecomer's Guide To Cryptocurrency" never really makes any argument about anything, other than explaining, in a "cautiously optimistic" way, the "features" of blockchain technology, all without really having to make any judgment but "guess we'll wait and see!" 

While it might seem difficult to write 14,000 words about anything — skill issue, by the way — Roose's work is a paper-thin, stapled-together FAQ about a technology that still, to this day, lacks any real use cases. Three years later, we’re still waiting for those “things of actual substance,” or, for that matter, any demonstration that it isn’t a “cynical money-grab.” 

Roose's AGI piece is somehow worse. Roose spends thousands of words creating flimsy intellectual rationalizations, writing that "the people closest to the technology — the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving," and that "the people with the best information about A.I. progress — the people building powerful A.I., who have access to more-advanced systems than the general public sees — are telling us that big change is near."

In other words, the people most likely to benefit from the idea (and not necessarily the reality) that AI is continually improving and becoming more powerful are those who insist that AGI — an AI that surpasses human ability, and can tackle pretty much any task presented to it — is looming on the horizon.  

The following quote is most illuminating:

This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my A.I. portfolio or a guy who took too many magic mushrooms and watched “Terminator 2.”

I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful A.I. systems, the investors funding it and the researchers studying its effects. And I’ve come to believe that what’s happening in A.I. right now is bigger than most people understand.

Roose's argument, and I am being completely serious, is that he has talked to some people — some of them actively investing in the thing he's talking about who are incentivized to promote an agenda where he tells everybody they're building AGI — and these people have told him that a non-specific thing is happening at some point, and that it will be bigger than people understand. Insiders are "alarmed." Companies are "preparing" (writing blogs) for AGI. It's all very scary.

But, to quote Roose, "...even if you ignore everyone who works at A.I. companies, or has a vested stake in the outcome, there are still enough credible independent voices with short A.G.I. timelines that we should take them seriously."

Roose's entire argument can be boiled down to "AI models are much better," and when he says "much better" he means "they are able to get high scores on benchmarks," at which point he does not mention which ones, or question the fact that they (despite him saying these exact words) have had to "create new, harder tests to measure their capabilities," which can be read as "make up new tests to say why these models are good." He mentions, in passing, that hallucinations still happen, but "they're rarer on newer models," a statement he does not back up with evidence.

Sidenote: Although this isn’t a story about generative AI, per se, we do need to talk about the benchmarks used to test AI performance. I’m wary of putting too much stock into them, because they’re easily gamed, and quite often, they’re not an effective way of showing actual progress. One good example is SimpleQA, which OpenAI uses to test the hallucination rate of its models.

This is effectively a long quiz that touches on a variety of subjects, from science and politics, to TV and video games. An example question is: “Which Dutch player scored an open-play goal in the 2022 Netherlands vs Argentina game in the men’s FIFA World Cup?”

If you’re curious, OpenAI’s GPT 4.5 model — its most expensive general purpose LLM yet — flunked 37% of these questions. Which is to say that it confidently made up an answer more than one-third of the time.

There’s a really good article from the Australian Broadcasting Corporation that explains why this approach isn’t particularly useful, based on interviews with academics at Monash University and La Trobe University.

First, it’s gamable. If you know the answers ahead of time — and, given that you’re testing how close an answer resembles a pre-written “correct” answer, you absolutely have to — you can optimize the model to answer these questions correctly.

There’s no accusation that OpenAI — or any other vendor — has done this, but it remains a possibility. There’s an honor system, and honor systems often don’t work when there are billions of dollars on the line and no real consequences for actually cheating. Nor, indeed, is there any way for people to easily find out whether a vendor cheated on a test. Moreover, as the ABC piece points out, these benchmarks don’t actually reflect the way people use generative AI.

While some people use ChatGPT as a way to find singular, discrete pieces of information, people also use ChatGPT — and other similar LLMs — to write longer, more complex pieces that incorporate multiple topics. Put simply, OpenAI is testing for something that doesn’t actually represent the majority of ChatGPT usage. 

In his AGI piece, Roose mentions that OpenAI’s models continue to score higher and higher marks on the International Math Olympiad test. While that sounds impressive, it’s worth remembering that this is just another benchmark, and thus is susceptible to the same kind of exploitation as any other.

This is, of course, important context for anyone trying to understand the overall trajectory of AI, and whether these models are improving, or whether we’re any closer to reaching AGI. And it’s context that’s curiously absent from the piece. 

He mentions that "...in A.I., bigger models, trained using more data and processing power, tend to produce better results, and today’s leading models are significantly bigger than their predecessors." He does not explain what those results are, or what said results lead to as products, largely because they haven't led to much. He talks about how "...if you really want to grasp how much better A.I. has gotten recently, talk to a programmer," then fails to quote a single programmer.

I won't go on, because the article is boring, thinly-sourced and speciously-founded.

But it's also an example of how comfortable optimism is. Roose doesn't have to make actual arguments – he makes statements, finds one example of something that confirms his biases, and then moves on. By choosing the cautiously-optimistic template, Roose can present "intelligent people that are telling him things" as proof that confirms what he wants to believe, which is that Dario Amodei, the CEO of Anthropic, who he just interviewed on Hard Fork, is correct when he says that AGI is mere years away.

Roose is framing his optimism as a warning – all without ever having to engage with what AGI is and the actual ramifications of its imminence. If he did, he would have to discuss concepts like personhood. Is a conscious AI system alive? Does it have rights? And, for that matter, what even is consciousness? There’s no discussion of the massive, world-changing consequences of a (again, totally theoretical, no proof this exists) artificial intelligence that's as smart and capable (again, how is it capable?) as a human being.

Being an AI optimist is extremely comfortable, because Roose doesn't even have to do any real analysis — he has other people to do it for him, such as the people that stand to profit from generative AI's proliferation. Roose doesn't have to engage with the economics, or the actual capabilities of these models, or even really understand how they work. He just needs enough to be able to say "wow, that's so powerful!"

Cautious optimism allows Roose to learn as little as necessary to write his column, knowing that the market wants AI to work, even as facts scream that it doesn't. Cautious optimism is extremely comfortable, because — as Roose knows from boosting cryptocurrency — there are few repercussions for being too optimistic.

I, personally, believe that there should be.

Here's a thing I wrote three years ago — the last time Roose decided to boost an entire movement based on vibes and his own personal biases.

The tech media adds two industry-unique problems - the fear of being wrong, and the fear of not being right. While one might be reasonable for wanting to avoid the next Theranos, one also does not want to be the person who said that social media would become boring and that people would leave it en masse. This is the nature of career journalism - you want to be right all the time, which means taking risks and believing both your sources and your own domain expertise - but it is a nature that cryptocurrency has taken advantage of at scale.

I hate that I've spent nearly two thousand words kvetching over Roose's work, but it's necessary, because I want to be abundantly clear: cautious optimism is cowardice.

Criticism — skepticism — takes a certain degree of bravery, or at least it does so when you make fully-formed arguments. Both Roose and Newton, participating in their third straight hype-cycle boost, frame skepticism as lazy, ignorant and childish.

To quote Casey:

...this is the problem with telling people over and over again that it’s all a big bubble about to pop. They’re staring at the floor of AI’s current abilities, while each day the actual practitioners are successfully raising the ceiling.

Newton doesn't actually prove that anyone has raised a ceiling, and in fact said:

...I fear, though, will be that “AI is fake and sucks” people will see a $200 version of ChatGPT and see only desperation: a cynical effort to generate more revenue to keep the grift going a few more months until the bottom drops out. And they will continue to take a kind of phony comfort in the idea that all of this will disappear from view in the next few months, possibly forever.

In reality, I suspect that many people will be happy to pay OpenAI $200 or more to help them code faster, or solve complicated problems of math and science, or whatever else o1 turns out to excel at. And when the open-source world catches up, and anyone can download a model like that onto their laptop, I fear for the harms that could come.

This is not meaningful analysis, and it's deeply cowardly on a number of levels. Newton does not prove his point in any way — he makes up a person that combines several ideas about generative AI, says that open source will “catch up,” and that also there will be some sort of indeterminate harm. It doesn't engage with a single critic’s argument. It is, much like a lot of Newton’s work, the intellectual equivalent of saying “nuh uh!”

Newton delights in his phony comforts. He proves his points in the flimsiest ways, knowing that the only criticisms he'll get are from the people he's steadfastly othered, people that he will never actually meaningfully engage with. He knows that his audience trusts him, and thus he will never have to meaningfully engage with the material. In fact, Newton isn't really proving anything — he is stating his own assumptions, giving the thinnest possible rationale, and then singling out Gary Marcus because he perceives him as an easy target.

This is, to repeat myself, extremely comfortable. Newton, like Roose, simply has to follow whatever the money is doing at any given time, learn enough about it to ask some questions in a podcast, and then get on with his day. There is nothing more comfortable than sitting on the podcast of a broadsheet newspaper and writing a newsletter for 150,000 people with no expectation that you'd ever have to push back on anything.

And I cannot be clear enough how uncomfortable it is being a skeptic or a cynic during these cycles, even before people like Casey Newton started trying to publicly humiliate critics like Gary Marcus.


My core theses — The Rot Economy (that the tech industry has become dominated by growth), The Rot-Com Bubble (that the tech industry has run out of hyper-growth ideas), and that generative AI has created a kind of capitalist death cult where nobody wants to admit that they're not making any money — are far from comfortable.

The ramifications of a tech industry that has become captured by growth are that true innovation is being smothered by people who neither experience real problems nor know how (or want) to fix them, and that the products we use every day are being made worse for a profit. These incentives have destroyed value-creation in venture capital and Silicon Valley at large, lionizing those who are able to show great growth metrics rather than those who create meaningful products that help human beings.

The ramifications of the end of hyper-growth mean a massive reckoning for the valuations of tech companies, which will lead to tens of thousands of layoffs and a prolonged depression in Silicon Valley, the likes of which we've never seen.

The ramifications of the collapse of generative AI are much, much worse. On top of the fact that the largest tech companies have burned hundreds of billions of dollars to propagate software that doesn't really do anything that resembles what we think artificial intelligence looks like, we're now seeing that every major tech company (and an alarming number of non-tech companies!) is willing to follow whatever it is that the market agrees is popular, even if the idea itself is flawed.

Generative AI has laid bare exactly how little the markets think about ideas, and how willing the powerful are to try and shove something unprofitable, unsustainable and questionably-useful down people's throats as a means of promoting growth. It's also been an alarming demonstration of how captured some members of the media have become, and how willing people like Roose and Newton are to defend other people's ideas rather than coming up with their own.

In short, reality can fucking suck, but a true skeptic learns to live in it.

It's also hard work. Proving that something is wrong — really proving it — requires you to push against the grain and batter your own arguments repeatedly. Case in point: my last article about CoreWeave was the product of nearly two weeks of work, where my editor and I pored over the company’s financial statements, trying to separate reality from hype. Whenever we found something damning, we didn’t immediately conclude it validated our original thesis — that the company is utterly rotten. We tried to find other explanations that were equally or more plausible than our own hypothesis — “steelmanning” our opponent, because being skeptical demands a level of discomfort.

Hard work, sure, but when your hypotheses are vindicated by later reporting by the likes of Semafor and the Financial Times, it all becomes worthwhile. I’ll talk about CoreWeave in greater depth later in this post, because it’s illustrative of the reality-distorting effects of AI optimism, and how optimism can make people ignore truths that are, quite literally, written in black ink and published for all the world to see. 

An optimist doesn't have to prove that things will go well — a skeptic, knowing that they are in the minority, must be willing to do the hard work of pulling together distinct pieces of information into something called an "analysis." A skeptic cannot simply say "I talked to some people," because skeptics are "haters," and thus must be held to some higher standard for whatever reason.

The result of a lack of true skepticism and criticism is that the tech industry has become captured by people that are able to create their own phony and comfortable realities, such as OpenAI, a company that burned $5 billion in 2024 and is currently raising $40 billion, the majority of it from SoftBank, which will have to raise $16 billion or more to fund it.

Engaging with this kind of thinking is far from comfortable, because what I am describing is one of the largest abdications of responsibility by financial institutions and members of the media in history. OpenAI and Anthropic are abominations of capitalism, bleeding wounds that burn billions of dollars with no end in sight for measly returns on selling software that lacks any real mass market use case. Their existence is proof that Silicon Valley is capable of creating its own illogical realities and selling them to corporate investors that have lost any meaningful way to evaluate businesses, drunk off of vibes and success stories from 15 or more years ago.

What we are witnessing is a systemic failure, not the beginnings of a revolution. Large Language Models have never been a mass market product — other than ChatGPT, generative AI products are barely a blip on the radar — and outside of NVIDIA (and consultancy Turing), there doesn't appear to be one profitable enterprise in the industry, nor is there any sign any of these companies will ever stop burning money.

The leaders behind the funding, functionality, and media coverage of the tech industry have abdicated their authority so severely that the consensus is that it's fine that OpenAI burns $5 billion a year, and it's also fine that OpenAI, or Anthropic, or really any other generative AI company has no path to profitability. Furthermore, it's fine that these companies are destroying our power grid and our planet, and it's also fine that they stole from millions of creatives while simultaneously undercutting those creatives in an already-precarious job market.

The moment it came out that OpenAI was burning so much money should've begun an era of renewed criticism and cynicism about these companies. Instead, I received private messages that I was "making too big a deal" out of it.

These are objectively horrifying things — blinking red warning signs that our markets and our media have reached an illogical point where they believe that destruction isn't just acceptable, but necessary to make sure that "smart tech people" are able to build the future, even if they haven't built anything truly important in quite some time, or even if there’s no evidence they can build their proposed future.

I am not writing this with any comfort or satisfaction. I am fucking horrified. Our core products — Facebook, Google Search, Microsoft Office, Google Docs, and even basic laptops — are so much worse than they've ever been, and explaining these things unsettles and upsets me. Digging into the fabric of why these companies act in this way, seeing how brazen and even proud they are of their pursuit of growth, it fills me full of disgust, and I'm not sure how people like Roose and Newton don't feel the same way.

And now I want to show you how distinctly uncomfortable all of this is.


Last week, I covered the shaky state of AI data center provider CoreWeave — an unprofitable company riddled by onerous debt, with 77% of its $1.9 billion of 2024 revenue coming from Microsoft and NVIDIA. CoreWeave lost $863 million in 2024, and when I published this analysis, some people suggested that its "growth would fix things," and that OpenAI's deal to buy $11.9 billion of compute over five years was a sign that everything would be okay.

Since then, more information has come out. To summarize (and repeat one part from my previous article):

  • CoreWeave is set to burn $15 billion this year. Its projected revenue, according to The Information, is only $4.6 billion.
  • To service its future revenue, CoreWeave must also build more data centers, which will mean it needs more debt, and the terms of DDTL 2.0 (its largest loan) mean that any debt it raises must be used to repay it.
  • CoreWeave's revenue is highly concentrated, and its future is almost entirely dependent on OpenAI's ability to pay — both because CoreWeave needs the money and because its loans are contingent on the creditworthiness of its contracts, according to the Financial Times.
  • CoreWeave's entire data center buildout strategy — over a gigawatt of capacity — is in the hands of Core Scientific, a company that doesn't appear to have built an AI data center before and makes the majority of its money from mining and selling crypto.
  • To afford to build out the data centers necessary to serve OpenAI, CoreWeave needs to spend tens of billions of dollars it does not have, and may not have access to depending on the terms of its loans and the overall state of the market.

I'm afraid I'm not done explaining why I'm uncomfortable.

Let me make this much simpler.

  • The majority of CoreWeave's future revenue appears to come from OpenAI. In any case, assuming the deal goes as planned, CoreWeave will still burn $15 billion in 2025.
  • To service the revenue that OpenAI will bring, CoreWeave is required to aggressively expand. It currently lacks the liquidity to do so, and further loans will be contingent on its contracts, which are contingent on CoreWeave's ability to aggressively expand.
  • OpenAI's future expansion is contingent both on CoreWeave and Stargate's ability to deliver compute.
  • OpenAI's ability to pay CoreWeave is contingent on its ability to continue raising money, as it is set to lose over $14 billion in 2025. It does not anticipate being profitable until 2030, and does not have any explanation as to how it will become so other than “we will build Stargate.”
  • OpenAI's ability to continue raising money is contingent on SoftBank providing it, just as Stargate's future is contingent on SoftBank's ability to both give OpenAI money and contribute $19 billion to Stargate.
  • SoftBank's ability to give OpenAI money is contingent on its ability to raise debt.
  • SoftBank's ability to raise debt is going to be dictated by investor sentiment about the future of AI.
  • Even if all of these things somehow happen, both CoreWeave and OpenAI are businesses that lose billions of dollars a year with no tangible evidence that this will ever change.

Okay, simpler.

CoreWeave's continued existence is contingent on its ability to borrow money, pay its debts, and expand its business, which is contingent on OpenAI's ability to raise money and expand its business, which is contingent on SoftBank's ability to give it money, which is contingent on SoftBank's ability to borrow money.

OpenAI is CoreWeave. CoreWeave is OpenAI. SoftBank is now both CoreWeave and OpenAI, and if SoftBank buckles, both CoreWeave and OpenAI are dead. For this situation to work even for the next year, these companies will have to raise tens of billions of dollars just to maintain the status quo.

There is nothing comfortable about my skepticism, and in fact I'd argue it's a huge pain in the ass. Being one of the few people that is willing to write down the numbers in stark, objective detail is a frustrating exercise — and it's isolating too, especially when I catch strays from Casey Newton claiming he's taking "detailed notes" about my work as a punishment for the sin of "doing basic mathematics and asking why nobody else seems to want to."

It isn't comfortable to continually have to explain to people who are all saying "AI is the future" that the majority of what they are discussing is fictional, because it reveals how many people believe things based entirely on someone they trust saying it's real, or being presented a flimsy argument that confirms their biases or affirms their own status quo.

In Newton and Roose's case, this means that they continue being the guys that people trust will bring them the truth about the future. This position is extremely comfortable, as it doesn't require them to be correct, only convincing and earnest.

I don't fear that we're "not taking AGI seriously." I fear that we've built our economy on top of NVIDIA, which is dependent on the continued investment in GPUs from companies like Microsoft, Amazon and Google, one of which has materially pulled back from data center expansion. Outside of NVIDIA, nobody is making any profit off of generative AI, and once that narrative fully takes hold, I fear a cascade of events that gores a hole in the side of the stock market and leads to tens of thousands of people losing their jobs.

Framing skepticism as comfortable is morally bankrupt, nakedly irresponsible, and calls into question the ability of those saying it to comprehend reality, as well as their allegiances. It's far more comfortable to align with the consensus, to boost the powerful in the hopes that they will win and that their victories will elevate you even further, even if your position is currently at the very top of your industry.

While it's possible to take a kinder view of those who peddle this kind of optimism — that they may truly believe these things and can dismiss the problems as surmountable — I do not see at this time in history how one can logically or rationally choose to do so.

To choose to believe that this will "all just work out" at this point is willful ignorance and actively refusing to engage with reality. I cannot speak to the rationale or incentives behind the decision to do so, but to do so with a huge platform, to me, is morally reprehensible. Be optimistic if you'd like, but engage with the truth when you do so.

I leave you with a quote from the end of HBO's Chernobyl: "where I once would fear the cost of truth, now I only ask — what is the cost of lies?"

CoreWeave Is A Time Bomb

2025-03-18 02:07:55

Soundtrack: EL-P (ft. Aesop Rock) - Run The Numbers

In my years writing this newsletter I have come across few companies as rotten as CoreWeave — an "AI cloud provider" that sells GPU compute to AI companies looking to run or train their models. 

CoreWeave had intended to go public last week, with an initial valuation of $35bn. While it’s hardly a recognizable name — like, say, OpenAI, or Microsoft, or Nvidia — this company is worth observing, if only for the fact that it’s arguably the first major IPO that we’ve seen from the current generative AI hype bubble, and undoubtedly the biggest. Moreover, it’s a company that deals in the infrastructure side of AI, which one would naturally assume is where all the money really is — putting up the servers for hyperscalers to run their hallucination-prone, unprofitable models.

You’d assume that such a company would be a thriving, healthy business. And yet, a cursory glance at its financial disclosure documents reveals a business that’s precarious at best, and, in my most uncharitable opinion, utterly rancid. If this company were in any other industry, it would be seen as such. Except, it’s one of the standard bearers of the generative AI boom, and so, it exists within its own reality distortion field.

Regardless, CoreWeave’s IPO plans appear to have been delayed, and it’s unclear when it’ll eventually make its debut on the public markets. I assume the reasons for the delay are as follows. 

First (and we’ll talk about this later), on March 10, OpenAI announced the completion of a deal with CoreWeave valued at $11.9bn that would see it procure AI compute from the company, while also taking a $350m stake in the business. This arrangement has undoubtedly altered some of the calculus behind things like valuations, and so on.

Additionally, CoreWeave has now released an amended version of its S-1 — the document that all companies must file before going public, and that acts as a prospectus for would-be investors, revealing the strengths and weaknesses of the business. The new partnership with OpenAI does complicate some things, including when it comes to risk (as we’ll discuss later), and so it naturally makes sense that CoreWeave would have to release an updated version of its prospectus.

I’ve spent far too long reading CoreWeave’s S-1. For the uninitiated, S-1 documents are, as a rule, often brutal. The SEC — and federal law — demands total, frank honesty. It’s a kind of hazing for would-be public companies, where they reveal all their dirty secrets to anyone with a web browser, thereby ensuring those who invest in the company on the first day are able to make informed decisions.

The revelations contained in S-1 documents are, quite often, damning, as we saw in the case of WeWork. Its S-1 laid bare the company’s mounting losses, its debt burden, and its insane cash burn, and raised questions about the sustainability of a company that had signed hundreds of expensive long-term leases in the pursuit of growth, despite having never made a profit. Within a matter of weeks, WeWork cancelled the IPO, and its CEO and founder, Adam Neumann, left the company.

Sidenote: WeWork, incidentally, would later go public by merging with a SPAC (special purpose acquisition company, which is essentially a shell company that’s already listed on the open markets). SPACs exist for one reason, and that’s to allow shitty companies to go public and raise money from investors without having to go through the scrutiny of a full IPO. At least, that was the case prior to 2024, when the SEC began demanding increased disclosures from companies that sought to merge with SPACs and enter the public markets via the back door. 

Unsurprisingly, many of the companies that used SPACs (like failed EV makers Fisker and Lordstown Motors, and Virgin Orbit) ultimately ended up in liquidation, or wound up petitioning a court for Chapter 11 bankruptcy protection. WeWork, for what it’s worth, filed for Chapter 11 in 2023, exiting bankruptcy the following year, albeit as a much smaller company, and one that was no longer listed on the vaunted New York Stock Exchange.

CoreWeave’s S-1 tells the tale of a company that appears to be built for collapse, with over 60% of its revenue dependent on one customer, Microsoft. In early March, the Financial Times reported that Microsoft had dropped "some services" with CoreWeave, citing delivery issues and delays, although CoreWeave would later deny this.

The timing, however, is suspicious. It came a mere week after TD Cowen's explosive report that claimed Microsoft had walked away from over a gigawatt of data center operations, and likely much, much more. For context, a gigawatt is about the same as the cumulative data center capacity in London or Tokyo — each city being the largest data center market in its respective region.

CoreWeave is burdened by $8 billion of debt (with its most recent debt raise bringing in $7.6bn, although that line of credit has not been fully tapped) that it may not be able to service. This figure does not include other commitments which are listed on the balance sheet as liabilities, like its various lease agreements for hardware and data center facilities. 

Worse, despite making $1.9 billion in revenue during the 2024 financial year, the company lost $863 million in 2024, with its entire future riding on "explosive growth" that may not materialize, and even if it does, would require CoreWeave to spend unfathomable amounts of money on the necessary capex investments. 

CoreWeave is, on many levels, indicative of the larger success (or failure) of the so-called AI revolution. The company's core business involves selling the unavoidable fuel for generative AI — access to the latest (and most powerful) GPUs and the infrastructure to run them, a result of its cozy relationship with (and investment from) NVIDIA, which has given CoreWeave priority access to its chips. As CoreWeave’s own S-1 notes, it was “the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available,” and “among the first cloud providers to deploy high-performance infrastructure with NVIDIA H100, H200, and GH200.”

CoreWeave owns over 250,000 NVIDIA GPUs across 32 data centers, supported by more than 360MW of active power, making it competitive with many of the familiar hyperscalers I’ve mentioned in previous newsletters, despite being a company few have ever heard of. By comparison, Microsoft bought 485,000 GPUs in 2024 and aimed to have as many as 1.8 million GPUs by the end of that year, though it's unclear how many it has. Meta likely has somewhere in the region of 600,000 GPUs, and according to The Information's AI Data Center Database, Amazon has hundreds of thousands of its own.

In short, CoreWeave's position is one that at the very least competes with the hyperscalers, and it offers a fascinating and disturbing window into the actual money these companies do (or don't) make from generative AI — and the answer is "not very much at all."

Furthermore, CoreWeave's underlying financials are so dramatically unstable that it's unclear how this company will last the next six months. As I'll get into, CoreWeave's founders are finance guys that have already cashed out nearly $500 million before the IPO, but did so in a way that means that despite only retaining 30% of the company's ownership, they retain 82% of the voting power, allowing them to steer a leaky, poorly-built ship in whatever direction they see fit, even if doing so might send CoreWeave into the abyss.

If this sounds familiar, it’s pretty much the same arrangement that Mark Zuckerberg has with Facebook. Despite only holding a small percentage of the company’s equity (around 13%), he holds the majority of voting shares, as well as the role of chairman of Facebook’s board, ensuring his position as CEO can never be challenged, regardless of any pressure from shareholders.

CoreWeave is a company that is continually hungry for more capital, and its S-1 cites potential difficulties in obtaining new cash as a potential risk factor. It intends to raise around $4 billion at IPO, which presumably will go towards servicing its debt and fuelling future expansion, as well as funding the day-to-day operations of the business. However, as I'll walk through in this newsletter, that will not be enough for this company to survive.

Sidenote: Remember when I said that companies have to lay out all their dirty laundry in the S-1, including potential risk factors? One of the factors cited is the questionable future of AI, and the failure of its customers “to support AI use cases in their systems” when those AI use cases are deployed on CoreWeave’s iron.

Again, Microsoft is CoreWeave’s biggest customer. Essentially, it’s saying that Microsoft might not actually do a good job of getting people to use Copilot, or the OpenAI models it licenses through its own ecosystem, and that would, in turn, hurt CoreWeave.

The same document also mentions the usual stuff: the reputational harm that generative AI poses to its creators and those linked to them, regulatory scrutiny, and the uncertain trajectory of AI and its commercialization.  

And, while we’re on the subject of risk factors, a few other things caught my eye. CoreWeave cited “material weaknesses in [its] internal control over financial reporting” as a risk factor. As a public company, CoreWeave will be forced to prepare and publish regular (and accurate) financial reports. While building the S-1, CoreWeave said it “identified material weaknesses in our internal control over financial reporting,” which means that “there is a reasonable possibility that a material misstatement of our annual or interim financial statements will not be prevented or detected on a timely basis.”

The good news: It’ll be able to fix them. The bad news? Doing so likely won’t be completed until well into 2026, and it’ll be “time consuming and costly.”

CoreWeave says that “negative publicity” could harm the company’s prospects, “regardless of whether the negative publicity is true.” This is a fairly generic statement that could apply to any business, and you’ll see similar generic warnings in most S-1 prospectuses, as they’re supposed to be a comprehensive representation of the risks that business faces. One line, however, did stand out. “Harm to our reputation can also arise from many other sources, including employee misconduct, which we have experienced in the past.” Interesting!

Anyway, I have a great deal of problems with this company, but let’s start somewhere simple.

Paging Doctor Zitron…

Number One — CoreWeave Does Not Have A Stable Business, And Is A Bad Omen For Generative AI Writ Large

To properly understand CoreWeave, we have to look at its origin story. Founded in 2017, CoreWeave was previously known as Atlantic Crypto, a cryptocurrency mining operation started by three guys that worked at a natural gas fund. When the crypto markets crashed in 2019, they renamed the company and bought up tens of thousands of GPUs, which CoreWeave offered to the (at the time) much smaller group of companies that used them for things like 3D modelling and data analytics. This was a much smaller, far less capital-intensive business, with CoreWeave making $12m in revenue in 2022 against losses of $31m.

When ChatGPT's launch in late 2022 activated the management consultant sleeper cells that decide what the tech industry's next hypergrowth fixation is going to be, CoreWeave pivoted again, this time towards providing the computational muscle for generative AI. CoreWeave became what WIRED would call "the Multibillion-dollar Backbone of the AI boom," a comment that would suggest that CoreWeave was far more successful than it really is.

Nevertheless, CoreWeave has — through its relationship with NVIDIA, which holds a reported 5% stake in the company — an extremely large amount of GPUs, and it makes money by renting them out on a per-GPU-per-hour basis. Its competition includes companies like Lambda, as well as hyperscalers like Amazon, Google and — believe it or not — Microsoft, all of whom sell the same services.

What's important to recognize about CoreWeave's revenue is that, despite whatever WIRED might have said, the majority of its revenue does not come from "being the backbone of the AI boom," but from acting as an auxiliary cloud compute provider for hyperscalers. When a hyperscaler needs more capacity than it actually owns, it’ll turn to a company like CoreWeave to pick up the slack, because building a new datacenter is — as noted in the previous newsletter — something that can take between three and six years to complete.

CoreWeave's customers include AI startup Cohere, Meta, NVIDIA, IBM, and Microsoft, the latter of which is its largest customer, accounting for 62% of all revenue during the 2024 financial year. It’s worth noting the speed in which CoreWeave became highly reliant on a single customer to exist. By contrast, in 2022 its largest customer accounted for 16% of its revenue, suggesting a far more diversified — and healthy — revenue base.

Although CoreWeave says its reliance on Microsoft will decrease to 50% of revenue as (if?) OpenAI starts shifting workloads to its servers, the current reality remains unchanged. Broadly speaking, CoreWeave is dependent on a few big-spending “whales” to stay afloat. 

Per the S-1, 77% of CoreWeave's revenue comes from two of its customers, the latter of which remains unnamed, and is only referred to as “Customer C” in the document. However, based on reporting from The Information, it’s reasonable to assume it’s NVIDIA, which agreed in 2023 to spend $1.3 billion over four years “to rent its own chips from CoreWeave.” 

Once you remove these two big contracts, CoreWeave only made $440 million in 2024.
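For transparency, here's that subtraction spelled out as a quick bit of Python napkin math, using the S-1's own concentration figures:

```python
# Napkin math: CoreWeave's 2024 revenue once the two whale contracts are removed.
total_revenue = 1.9e9    # FY2024 revenue, per the S-1
whale_share = 0.77       # Microsoft (62%) plus "Customer C" (~15%)

everyone_else = total_revenue * (1 - whale_share)
print(f"Revenue from everyone else: ${everyone_else / 1e6:.0f}m")
# -> ~$437m, the roughly $440 million figure above
```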

These numbers deeply concern me, and I'll explain why.

  1. CoreWeave is, other than the hyperscalers, one of the largest — if not the largest — holders of GPUs in the cloud space.
  2. CoreWeave sells, other than the GPUs themselves, the most "valuable" service in generative AI — compute.
  3. As a result, CoreWeave — as an independent company with the scale of a hyperscaler — is indicative of demand for generative AI services.
  4. CoreWeave's revenue, outside of its large contracts, amounts to about $440m, or $1.9 billion with these contracts.
    1. As a reminder, CoreWeave lost $863 million servicing these contracts.
  5. These numbers suggest one (or all) of the following:
    1. CoreWeave's business model does not make enough revenue to cover its costs.
    2. Outside of auxiliary capacity for hyperscalers, CoreWeave does not have a fundamentally sound or scalable business model.
    3. There is a fundamental lack of demand for compute for generative AI.

In short, CoreWeave is existentially tied to the idea that generative AI will become both a massive, revenue-generating industry and one that's incredibly compute-intensive. CoreWeave's future is one that requires an industry that has yet to show any meaningful product-market fit to grow so significantly that compute companies turn into oil companies at a time when Microsoft — the largest provider of GPU compute and the hyperscaler with the highest amount of proposed capex spending — has pulled back from both over a gigawatt of compute capacity and (reportedly) some of its contracts with CoreWeave.

CoreWeave’s three largest customers have, according to its S-1, increased their committed spend by around $7.8 billion during the 2024 financial year, representing a fourfold increase in the initial contract value. For the sake of clarity, this reflects future spending commitments — not actualized revenue from providing services to these companies. 

While this might seem like good news, that's also nearly four times its current revenue from three customers, and as Microsoft has reportedly proven with its other compute contracts, big customers can simply cancel contracts on a whim.

Put simply, this dependence on a handful of hyperscalers represents a fundamental — and potentially fatal — vulnerability. 

Sidenote: On the subject of vulnerabilities, the updated S-1 prospectus talks about a theoretical “counterparty credit risk.” What does that mean? Essentially, it’s the risk that one party fails to pay for services that the other party has already spent money to provide. If you don’t pay your mortgage, your bank is bearing counterparty credit risk.

CoreWeave is saying that, should a customer fail to pay its bills for infrastructure built on their behalf, or for services rendered, it could have a material risk to the business. The S-1 gives the example of its arrangement with OpenAI, where CoreWeave has agreed to provide certain services (and build certain infrastructure) in exchange for $11.9bn over the course of the next five years. 

Although CoreWeave talks generally about the risk of counterparty credit risk, and only cites OpenAI in hypothetical terms, it’s also the only company named in this section. Which makes sense, because the chances of Microsoft or Meta becoming insolvent in the immediate future are slim, whereas OpenAI’s entire existence depends on its ability to raise more money than any startup in history, indefinitely, while also never making a profit. 

And, as readers of this newsletter will know, I don’t rate its chances. 

One last note on risks: Perhaps the biggest, in my view, is the fact that there’s really nothing inherently special about CoreWeave besides its existing infrastructure. Cloud GPUs are incredibly commoditized, and the core factors of differentiation between the various players are price, availability, and the exact hardware available. In fairness to CoreWeave, it has some strength in the latter point, with a close relationship with Nvidia that’s afforded it access to the latest and greatest hardware as it becomes available. 

The problem is that, for the most part, with enough money you could build a company just as capable as CoreWeave. And, indeed, CoreWeave does effectively the same thing as hyperscalers like Google Cloud and Azure and AWS, as well as upstarts like Lambda Labs.

So, tell me, why is this business worth $35bn? 

CoreWeave simply doesn't have meaningful demand or revenue resulting from its services. $440 million — with some of that revenue likely coming from other hyperscalers, albeit those who haven’t spent as much as Microsoft — is a pathetic sum that suggests either not enough people want to use CoreWeave’s services, or the services themselves are not actually that lucrative to provide, likely due to the ruinously-expensive costs of running hundreds of thousands of GPUs at full tilt.

Regardless of the reason, the company selling the literal fuel of the supposed AI revolution is losing hundreds of millions of dollars doing so.

Worse still, CoreWeave is entirely dependent on its largest customers, to the point that its entire business would collapse without them...and frankly, given the precarious nature of its financials, might even collapse with them.

That's because servicing this revenue is also incredibly costly. According to The Information, CoreWeave spent over $8.5 billion in capital expenditures in 2024, and funding said expenditures required CoreWeave to take on onerous amounts of debt, to the point that it's unclear how this business survives.

Number Two — CoreWeave Has Taken On A Fatal Amount of Debt

Forgive me, as the following is going to be a little dense.

In simple terms, CoreWeave's operations require it to be in a near-constant state of capital expenditure — from building the data centers it needs to serve customers, to purchasing massive amounts of power, to acquiring the specialized GPUs necessary to run AI workloads.

CoreWeave has raised a total of $14.5 billion in equity funding (selling stock in the company) and debt financing (loans). Many of these loans are collateralized not by money or real estate, but by “the Company’s property, equipment and other assets,” which includes the value of the GPUs used to power its operations. This is a new kind of asset-backed lending model created specifically to fund compute-sellers like CoreWeave, Crusoe, and Lambda, similar to how a mortgage is backed by the value of the property.

The problem is that, yes, GPUs are depreciating assets, a fact that will eventually become problematic for these companies. They eventually slide into obsolescence, as new chips and new manufacturing processes come out. With constant, intensive use, they wear down and fail, and thus require replacing. As noted later in this piece, as these assets lose value, CoreWeave is forced to increase its monthly payments as the collateral is (presumably) no longer sufficient to satisfy the outstanding debt. 
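To illustrate the mechanism (and I want to stress that these are made-up numbers, not CoreWeave's actual collateral schedule), here's a minimal sketch of how a depreciating GPU fleet can force a borrower to top up its principal payments just to keep a loan adequately collateralized:

```python
# Illustrative only: every figure below is hypothetical, not from CoreWeave's
# filings. Models straight-line depreciation of a GPU fleet used as collateral
# for a loan that requires the collateral to keep covering the balance.

fleet_value = 7.5e9          # hypothetical starting collateral value
useful_life_years = 4        # assumed useful life before obsolescence
balance = 5.0e9              # hypothetical outstanding loan balance
scheduled_principal = 0.3e9  # hypothetical scheduled principal per year

annual_depreciation = fleet_value / useful_life_years

for year in range(1, useful_life_years + 1):
    fleet_value -= annual_depreciation
    balance -= scheduled_principal
    # If the collateral no longer covers the balance, the lender demands
    # extra principal to close the gap, so payments rise as the chips age.
    top_up = max(0.0, balance - fleet_value)
    balance -= top_up
    print(f"Year {year}: collateral ${fleet_value / 1e9:.2f}bn, "
          f"balance ${balance / 1e9:.2f}bn, top-up ${top_up / 1e9:.2f}bn")
```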

In CoreWeave's case, the majority of its capital was raised as debt, most of it coming in two Delayed Draw Term Loan facilities (DDTLs). This usually means that while you have access to a certain amount of money, said money is only disbursed in predefined installments. These installments may come after a certain period of time, or when the company reaches a certain milestone. DDTLs work unlike personal loans, where you typically get the cash upfront and start repayments immediately. The contracts are custom-written for each loan, and reflect the needs (and risks) of the business.

These loans can have wildly different terms based on their collateral and the incoming revenue of the company in question. All numbers below are an estimate based on the terms of the loans in question, and I do not attempt to factor in the actual cost of interest. For the most part, I've worked out the terms of the loans and the repayment schedule, and given you what I believe will be the lowest amount CoreWeave will owe.

For the sake of both your and my sanity, I'm going to focus on just these loans, as I believe they are, on their own, enough to potentially destroy CoreWeave.

Problem Loan Number 1: DDTL 1.0 

  • Size: $2.3 billion (fully drawn as of S-1 filing)
  • 14.11% Annual interest in 2024
  • At least $892 million a year in payments

CoreWeave's first Delayed Draw Term Loan (DDTL 1.0, as it calls it) raised $2.3 billion from titanic alternative asset management firm Blackstone (not to be confused with BlackRock) and Magnetar Capital, an Illinois-based hedge fund most famous for its involvement in the creation of Collateralized Debt Obligations, the security products at the heart of the Global Financial Crisis of the late 2000s. This loan has now been fully drawn.

The effective annual interest rate on this loan is a little more than 14%, averaging 14.11% in 2024 and 14.12% in 2023. The variance comes from how the interest rate is actually calculated: it combines the Term SOFR for the period (SOFR being the Secured Overnight Financing Rate, a benchmark based on the cost of overnight loans collateralized by US Treasury securities) plus 9.62%, or an unspecified “alternative base rate” plus 8.62%. At the time of writing, the 180-day average of SOFR is around 4.6%, although this can fluctuate depending on market conditions.

The terms of the loan require quarterly payments based on the company's cash flow and, starting in January 2025, the depreciated value of the GPUs that are used as collateral to provide the loan, and CoreWeave has until March 2028 to fully pay it off. Interest accrues monthly, and there is a final (though unspecified) balloon payment. Per the amended S-1 document, CoreWeave has paid $288 million in principal and $255 million in interest since the inception of the loan.

It’s tricky to actually calculate the quarterly payments on this. The previous balance repayments aren’t a useful guide, as the loan wasn’t fully utilized in 2023, with CoreWeave carrying a balance of $1.3bn. There are 13 quarters between December 31, 2024 and March 31, 2028. If we divide the outstanding debt of $2bn by thirteen, we get around $150m. That only covers the principal, and not the interest — which, I remind you, stands at an arse-clenchingly steep 14.11%.

Nor, for that matter, do the previous repayments include the increase in principal payments starting from January 2025 to reflect the depreciating value of the collateral. 

As Reuters reported in 2023, CoreWeave used its H100s as collateral for this loan. Those chips are, at this point, nearly two years old. While it’s unclear how the resale value of these chips has changed over time, it’s worth noting that the cost to rent an H100 in the cloud has dropped from between $4.70 and $8 an hour in late 2023 to just $1.90 at the time of writing. That will, undoubtedly, affect the value of CoreWeave’s collateral.

As the S-1 notes, this loan has a final balloon payment. It’s unclear how big this payment will be, as the filing doesn’t provide any detail. Still, regardless of the balloon payment size, it’s not unreasonable to expect that, from this year, CoreWeave will be spending around $250m each quarter to service this loan, or $1bn annually. 
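If you want to check my work, here's the napkin math in one place: a Python sketch of the floor on CoreWeave's DDTL 1.0 payments, using the figures above and ignoring the depreciation-linked step-ups and the balloon payment, both of which push the real number higher:

```python
# A floor estimate of DDTL 1.0's debt service: equal principal installments
# plus interest on the declining balance. Note that 14.11% is roughly the
# 180-day average SOFR (~4.6%) plus the 9.62% spread.

balance = 2.0e9                  # outstanding at the end of 2024
quarters = 13                    # December 31, 2024 through March 31, 2028
annual_rate = 0.1411             # effective annual rate in 2024

principal = balance / quarters   # ~$154m per quarter
first_year = 0.0

for q in range(quarters):
    payment = principal + balance * annual_rate / 4
    if q < 4:
        first_year += payment
    balance -= principal

print(f"Quarterly principal: ${principal / 1e6:.0f}m")
print(f"Estimated first-year debt service: ${first_year / 1e6:.0f}m")
# -> roughly $865m in year one before the step-ups and balloon, in the
#    same ballpark as the ~$892m annual figure above
```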

DDTL 1.0 also imposes liquidity requirements, although CoreWeave is only required to have $56 million in cash on hand to keep this loan, and that amount goes down as the principal reduces through repayments.

Problem Loan Number 2: DDTL 2.0

  • At least 10.53% annual interest
  • At least $760 million a year in payments
  • $7.6 Billion ($3.8 billion drawn, $3.8 billion remaining) 

CoreWeave's biggest, sexiest loan was also co-led by Blackstone and Magnetar, and allows it to draw up to $7.6 billion by June 2025 (with the option to extend for a further three months), with several fees, both upfront at closing and annually. These are calculated in rather esoteric ways.

Take, for example, the yearly fee. This is equal to 0.5% of the difference between $7.6 billion and the average outstanding debt on the loan, or $6.1 billion (at least $75 million), whichever is greater. In essence, CoreWeave pays a fee if it uses the loan, and pays interest on whatever amount it chooses to borrow. The loan must be fully repaid within 60 months of whenever the money was drawn.

As with DDTL 1.0, a series of bizarre interest calculations based on standard/prime interest rates is involved, but DDTL 2.0 can also see further interest rate increases based on CoreWeave's credit rating. And there’s huge scope for variation, with the acceptable range starting at 5% and ending at 12%. That eye-watering 10.53% interest rate I mentioned earlier — already far higher than what a consumer with decent credit would pay on a mortgage or car loan — isn’t necessarily the highest it could reach. It could get much, much worse.

These mob-like terms suggest Blackstone and Magnetar don't necessarily trust CoreWeave to survive, and intend to rip out whatever guts are left if it doesn't. CoreWeave's S-1 says that the actual average interest rate being charged on the amounts borrowed was 10.53%, but again, this could go up.

The number I've put above could as much as double to $1.52 billion a year (again, with no consideration of accrued interest) in the event that CoreWeave chooses to draw on the remaining $3.8 billion, something that I'm fairly confident will happen based on its aggressive capital expenditures.

DDTL 2.0 also has a brutal covenant — that if CoreWeave raises any other debt, it must use that debt to pay off this debt. This raises the question as to how it’ll manage to pay the DDTL 1.0 balloon payment, should any future debt raised be used to satisfy the DDTL 2.0 loan. 

On the subject of future debt, the updated S-1 prospectus says that, in order to meet the requirements of OpenAI’s $11.9bn deal, CoreWeave will have to take on additional financing. How will it accomplish this while also remaining compliant with the terms of DDTL 2.0, particularly when it comes to how it’ll use the proceeds of any future borrowing? 

The S-1 sheds some light here. The company has created a “special purpose vehicle” — essentially, a separate company owned and controlled by CoreWeave, though technically distinct — that will “incur indebtedness to finance the obligations under the OpenAI Master Services Agreement.”

Number Three — CoreWeave Does Not Have Access To The Capital Necessary To Meet Its Obligations

All of this might seem a little dense, but it's actually pretty simple. CoreWeave made slightly under $2 billion in revenue in 2024, but somehow ended up losing $863 million. In effect, CoreWeave spends $1.43 to make $1 — roughly $2.86 billion of total spend (a figure I'll break down below) against $2 billion of revenue.

As of January 2025, CoreWeave's obligations under DDTL 1.0 will likely reach $1bn a year, if not more. Starting from October 2025, it’ll need to start repaying the DDTL 2.0 loan, and these repayments will depend on whether it draws more capital from the loan, and whether its interest rate increases or decreases based on its perceived risk. Regardless, it’s not hard to imagine a scenario where its debt repayments surpass its entire 2024 revenue. 

Furthermore, CoreWeave has made a lot of commitments. It’s planning to invest over a billion dollars to convert a New Jersey lab building into a data center, it’s part of a $5 billion effort with Blue Owl, Chirisa and PowerHouse, it’s committed to invest over a billion pounds sterling in UK-based data centers, it’s committed to invest an additional $2.2 billion in data centers in Europe, it’s committed to a $600 million data center project in Virginia, and it has allegedly exercised an option with Core Scientific — a deeply dodgy company I'll describe in a minute — to create "approximately 500 Megawatts of Critical IT Load at Six Core Scientific Sites," with the agreement "increasing potential cumulative revenue to $8.7 billion over 12 Year Contract Terms."

In short, CoreWeave has committed to billions of dollars of data center buildouts, and the only way it can pay for them is with burdensome loans that it, as of right now, does not appear to have the revenue to support.

CoreWeave spent approximately $2.86 billion to make just under $2 billion, with $1.5 billion of that coming from the cost of running its infrastructure and scaling its operations, and the rest coming from hundreds of millions of dollars of interest payments and associated fees.

These numbers do not appear to include capital expenditures, and by its own admission, the vast loans that CoreWeave has pulled are necessary to continue funding them. Worse still, NextPlatform estimates that CoreWeave spent about $15 billion to turn its $7.5 billion of GPUs into around 360 Megawatts of operational computing power.

Per its S-1, CoreWeave has contracted for around 1.3 Gigawatts of capacity, which it expects to roll out over the coming years, and based on NextPlatform's math, CoreWeave will have to spend in excess of $39 billion to build its contracted compute. It is unclear how it will fund doing so, and it's fair to assume that CoreWeave does not currently have the capacity to cover its current commitments.
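Here's the extrapolation behind that figure, using NextPlatform's estimate as the cost basis and assuming, simplistically, that buildout costs scale linearly with power:

```python
# Extrapolating CoreWeave's buildout cost from NextPlatform's estimate.
spent_so_far = 15e9      # NextPlatform's estimate for the existing fleet
active_mw = 360          # operational capacity today
contracted_mw = 1300     # ~1.3GW of total contracted power, per the S-1

cost_per_mw = spent_so_far / active_mw       # ~$42m per megawatt
remaining_mw = contracted_mw - active_mw     # ~940MW still to build
remaining_cost = remaining_mw * cost_per_mw

print(f"Cost to build the remaining capacity: ${remaining_cost / 1e9:.0f}bn")
# -> roughly $39bn, on top of everything CoreWeave has already spent
```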

How does CoreWeave — a company with roughly $1.3 billion in the bank and more than $4.6 billion of debt available to draw and an inability to raise further capital without paying said debt off — actually continue doing business?

Number Four — CoreWeave Is Using A Suspicious And Unproven Partner To Build Its Entire Infrastructure

Before We Go Any Further, A Note On "Contracted Power"

"Contracted power" does not necessarily mean that it exists. "Contracted power" is a contract that says "you will provide this much compute." This term is used to deliberately obfuscate the actual compute that a company has.

As of writing this sentence, CoreWeave has "more than 360" megawatts of active power and "approximately 1.3 GW of total contracted power." This means that CoreWeave has committed to building this much.

These figures will become relevant shortly.

In 2017, a company was founded with the goal of mining digital assets like Bitcoin to generate revenue, and to provide mining services for others. Several years later, it pivoted into providing compute for generative AI. 

Confusingly, this company is not CoreWeave, but Core Scientific, a different company entirely that went public in 2022 in a disastrous SPAC merger and filed for Chapter 11 bankruptcy later that same year. It exited bankruptcy court in January 2024, having shed $400m in debt and restructured its obligations to creditors, and once again returned to the public markets, where it trades on the NASDAQ.

In June 2024, CoreWeave made an unsolicited proposal to acquire Core Scientific, which Core Scientific rejected (three days after announcing a 12-year deal to provide CoreWeave with 200 megawatts of compute), before the two signed an extension of an already-existing 12-year deal in August 2024 to deliver "an additional 112 megawatts of computing infrastructure to support CoreWeave's operations," according to CNBC.

This capacity, according to CNBC, will be operational by "the second half of 2026," and would involve repurposing existing crypto mining hardware. Here's a quote from CNBC about how easy that'll be:

Needham analysts wrote in a report in May that almost all infrastructure that miners currently have would “need to be bulldozed and built from the ground up to accommodate HPC,” or high-performance computing. 

Great!

As of now, Core Scientific holds, according to CEO Adam Sullivan, "the largest operational footprint of Bitcoin mining infrastructure," and per the above analyst quote, it's very obvious that you can't just retrofit a crypto mining rig to start "doing AI," likely because the GPUs are different to the ASICs (Application Specific Integrated Circuits) used in crypto mining, meaning the server hardware is different, which means the entire bloody thing is different.

Nevertheless, CoreWeave's S-1 repeatedly mentions that it’s made an agreement with Core Scientific for "more than 500 MW of capacity."

Right now, however, it's unclear how much capacity Core Scientific actually has, despite both its and CoreWeave's suggestions. Core Scientific, as of February 2025, had approximately 166,000 bitcoin miners — which, I should add, are likely all application-specific chips that can only mine bitcoin! — meaning that none of them (and potentially none of its data center operations) has anything to do with GPUs or compute for AI.

In fact, I can find little proof that Core Scientific has any meaningful compute capacity at all.

Once you dig into its financial filings, things get weirder. Per its most recent annual report for the year ending December 31, 2024, Core Scientific made $24.3 million in HPC hosting revenue (referring to high performance computing, which includes generative AI workloads).

That isn’t a typo. $24.3 million. By contrast, it generated $408m in revenue from mining and selling cryptocurrencies for itself, and $77m for mining crypto for third parties.

Sidenote: Assuming it’s possible for Core Scientific to repurpose bitcoin miners for AI workloads, how does that help the business? As noted, mining crypto for resale, and for external partners, provides the overwhelming majority — nearly 95% — of its revenue. 

Core Scientific has run at a loss for the last three quarters, losing $265 million in Q4 2024, $455 million in Q3 2024, and $804 million in the quarter ending June 2024.

Core Scientific has one HPC client: CoreWeave, which is referred to as “Customer J” in the 10-K form — the annual financial report that every publicly-traded company must publish at the close of each financial year.

Core Scientific, according to its 10-K form:

"...was contractually committed for approximately $1.14 billion of capital expenditures, mainly related to infrastructure modifications, equipment procurement, and labor associated with the conversion of a significant portion of its data centers to deliver hosting services for HPC... [with] $899.3 million [being] reimbursable by our customer under our agreements." 

That customer — Core Scientific's only HPC customer — is CoreWeave, with the expenses expected to "occur over the next year."

How exactly will Core Scientific, a company that was bankrupt this time last year and lost over $265 million in its most recent quarter, afford the up-front capital expenditures required by CoreWeave's expansion? Core Scientific has around $836 million in cash and cash equivalents on hand and is still in the process of cleaning up its existing piles of debt, and even then...how does any of this work, exactly?

And given that the company recently exited Chapter 11 bankruptcy protection, it’s unlikely to receive capital on favorable terms. 

Hey, wait a second...in Core Scientific's latest 10-K, it proudly boasts that it has "approximately 1,317 MW of contracted power capacity to operate and manage one of the largest center infrastructure asset bases." CoreWeave's S-1, meanwhile, says that its "...total contracted power extends to approximately 1.3 GW as of December 31, 2024, which we expect to roll out over the coming years."

Core Scientific has roughly 1.3 gigawatts of contracted power, and mysteriously, that's exactly how much total contracted power CoreWeave — Core Scientific's only HPC customer — says it has. While Core Scientific has said a chunk of that capacity is reserved for expanding its cryptocurrency-mining operations, it is still an extremely suspicious coincidence.

Nevertheless, Core Scientific, as of right now, does not appear to have any meaningful HPC infrastructure. While it may have seven data centers, that doesn't mean it’s able to meet the demands of companies like Microsoft and OpenAI, both customers of CoreWeave, as evidenced by the fact that it made $8.5 million in HPC revenue last quarter, and $24.3m for the entire financial year. 

Somehow, Core Scientific intends to spend a billion dollars building HPC infrastructure, a thing it has yet to meaningfully do (it has, as of November, broken ground on a site in Oklahoma), and somehow deliver over a gigawatt of capacity to a company that will allegedly reimburse it, at some point, somehow, with money it does not have.

What the fuck is going on?

CoreWeave Is Both A Time Bomb and a Bad Omen For Generative AI

To summarize:

  • CoreWeave is burdened by interest payments that may balloon to more than $2 billion a year, and lost $863 million on $2 billion of revenue.
  • CoreWeave's expansion, which is critical to servicing future revenue and growth, requires it to invest tens of billions of dollars that it does not have.
  • CoreWeave's data center expansion is dependent on what is primarily a Bitcoin mining company — Core Scientific — building out over a gigawatt of capacity at a time when it does not appear to have built any HPC capacity at all.
    • Converting cryptocurrency mining data centers to HPC data centers is effectively starting from scratch. There is no easy or logical way to repurpose a bitcoin miner for AI. 

If CoreWeave makes it to IPO — and it may do so as soon as next week — it will raise about $4 billion, which might give it enough runway to continue operations for a year, but by October 2025 it’ll face upwards of $500 million of loan payments a quarter, all while trying to scale up an operation that doesn't appear to have a path to profit.

The reason I've spent thousands of words walking you through CoreWeave's problems is that this is the first meaningful tech IPO in some time, and the first one directly connected to the AI boom.

CoreWeave's financial health and revenue status suggest that there either isn't demand or profit in providing services for generative AI. This company — the so-called backbone of the generative AI boom, and one of the largest holders of NVIDIA GPUs, with a seemingly closer relationship with the company than Meta or Microsoft, based on its early access to the company’s latest hardware — does not appear to be able to get meaningful business for its operations outside of hyperscalers. While it may sell by-the-hour compute to regular companies, it's clear that that market just doesn't exist at a meaningful revenue point.

If NVIDIA is selling the pickaxes for the gold rush, CoreWeave is selling the shovels, and it mostly appears to be turning up dirt. If this were a meaningful growth industry, CoreWeave would be printing money, just like how the automobile created an entire generation of billionaire oil barons, like John D. Rockefeller and Henry Flagler. And yet, it appears that, outside of Microsoft, it can't even scrape a billion dollars of revenue out of being the single-most prominent independent provider of AI compute.

Furthermore, it's unclear how CoreWeave actually intends to expand. Core Scientific is a tiny, unproven party that has yet to build an HPC data center, one that has to front the money for CoreWeave's expansion in the hopes that it’ll be reimbursed. Building data centers isn't easy, and Core Scientific's previous work as a Bitcoin mining firm does not necessarily apply thanks to the massively-different server architecture involved with running superclusters of GPUs.

CoreWeave should have been a positive signal for generative AI, or at least a way for AI boosters to shut me up. If generative AI had this incredible demand — both from companies looking to integrate it and users looking to use it — CoreWeave would be making far, far more money, have a far more diverse customer base, and, if I'm honest, not have to take out more than five times its revenue in burdensome loans with loan shark-level interest rates.

In reality, this company is a dog, and will show the markets exactly how little money or growth is left in generative AI. NVIDIA's remarkable GPU sales have been conflated with the success of generative AI, rather than seen as a sign of desperation, and a signal of how willing big tech is to spend billions of dollars on something if they think their competition is doing so.

Really, the proof is in the use of those GPUs, and CoreWeave gives us a transparent — and terrifying — expression of the lack of excitement or real usage of generative AI. As I hypothesized a few weeks ago, I believe that outside of OpenAI, the generative AI industry is terribly small, a point that CoreWeave only underlines.

Based on its revenue, how much could Amazon Web Services, Google Cloud, or Microsoft Azure really be making? Based on reporting by The Information, OpenAI spends roughly $2 billion on the compute to run its models and a further $3 billion to train its models, paying Microsoft a discounted rate of around 25% of the normal cost. Even on the most optimistic figures, given how much bigger and more popular ChatGPT is than literally every other generative AI company, how can any hyperscaler be making more than $3 billion or $4 billion in revenue a year from selling AI compute?

Without conceding that generative AI has a future beyond the frothy present, one also has to question whether there's even much of a place for massive hyperscaler investment, given the rise of new, more efficient models. I'm not merely talking about DeepSeek. The largest version of Google's newest Gemma 3 model can run on a single H100 GPU and, according to Sundar Pichai, requires one-tenth the computing power of similar models. Separately, Baidu's ERNIE 4.5 model reportedly has one-hundredth the computational demands of GPT-4.5 while delivering similar performance, and its X1 reasoning model allegedly outperforms DeepSeek R1 at half the cost.

These numbers also suggest that OpenAI is likely charging way, way less than it should be for its services. If it costs CoreWeave $493 million (yes, this is napkin math) — that's its "cost of revenue," which only includes rentals, power, and personnel to run its services — to service 360 megawatts of capacity, and, say, 70% of Microsoft's 7.5 gigawatts of capacity serves OpenAI's compute, it may cost Microsoft over $7 billion. It's already been well-established that OpenAI's costs eat into Microsoft's profits.
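
Here's that calculation spelled out; every input is an estimate, and the 70% share is my assumption rather than a reported figure:

```python
# Scaling CoreWeave's cost of revenue per megawatt up to Microsoft's capacity.
cost_of_revenue = 493e6      # CoreWeave: rentals, power, and personnel ($)
coreweave_mw = 360           # megawatts of capacity CoreWeave services

cost_per_mw = cost_of_revenue / coreweave_mw   # ~$1.37M per megawatt

microsoft_mw = 7.5 * 1000    # Microsoft's 7.5 gigawatts, in megawatts
openai_share = 0.70          # assumed share of Microsoft capacity serving OpenAI

implied_cost = cost_per_mw * microsoft_mw * openai_share
print(f"Implied cost to Microsoft: ${implied_cost / 1e9:.1f}B")  # ~$7.2B
```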

Again, these are estimates, as we don't know Microsoft's exact costs, but it's reasonable to believe that its contracted compute with CoreWeave was largely meant to facilitate OpenAI's growth, a read corroborated by The Information's AI data center database, which reports that the upcoming data center buildout in Denton, Texas is, and I quote, "...[for] Microsoft [to] rent to use by OpenAI."

And you'll never guess who's building it. That's right: Core Scientific, which announced on February 26, 2025 that it was partnering with CoreWeave to expand their relationship across its Denton, Texas location.

It's unclear how this data center gets built, or whether OpenAI will actually use it, given its new plans for a "Stargate" data center in Abilene, Texas and the general chilling of its relationship with Microsoft. Furthermore, Microsoft's indeterminately sized cancellations with CoreWeave pair with its own retreat from data center buildouts, a retreat that coincides both with Microsoft releasing OpenAI from its exclusive cloud compute arrangement and with OpenAI's plans to build gigawatts of its own capacity.

How, exactly, does any of this make sense?

While I can only hypothesize, I believe this move is Microsoft's attempt to disconnect from OpenAI: dumping its exclusive relationship, and canceling its own capacity expansion along with its contracts with CoreWeave, citing, according to the Financial Times, "delivery issues and missed deadlines." That would make sense, as CoreWeave's infrastructure partner does not appear to have any expertise in building data center capacity, nor any actual cloud compute capacity that I can find.

Think about it. It was reported last year that OpenAI was frustrated with Microsoft for not providing servers fast enough, after which Microsoft allowed OpenAI to seek other compute partners, which in turn led to OpenAI shacking up with Oracle and SoftBank to build out the future of its compute infrastructure. Once this happened, Microsoft decided to massively reduce (or had already been quietly reducing) its future data center capacity, at a time when OpenAI's latest models necessitate bringing hundreds of thousands of GPUs online.

Even if you disagree with my thesis, how is Microsoft going to support OpenAI's growth any further? OpenAI's latest models, o3 and GPT-4.5, are more compute-hungry than ever. How, exactly, does canceling over a gigawatt of planned capacity make sense?

It doesn’t. And I think we’re about to see what happens when the world’s biggest startup becomes desperate.