2025-12-30 00:57:06
Soundtrack: Lynyrd Skynyrd — Free Bird
This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 15,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.
If you have any issues signing up for premium, please email me at [email protected].
OpenAI told me opex keep eating his revenues, so I asked how many rounds of private equity he has burned and he said he just goes to the market and gets new equity afterwards so I said it sounds like he's just feeding equity to opex and then Sam Altman started crying — @FraPippo428
One time, a good friend of mine told me that the more I learned about finance, the more pissed off I’d get.
He was right.
There is an echoing melancholy to this era, as we watch the end of Silicon Valley’s hypergrowth era, the horrifying result of 15+ years of steering the tech industry away from solving actual problems in pursuit of eternal growth. Everything is more expensive, and every tech product has gotten worse, all so that every company can “do AI,” whatever the fuck that means.
We are watching one of the greatest wastes of money in history, all as people are told that there “just isn’t the money” to build things like housing, or provide Americans with universal healthcare, or better schools, or create the means for the average person to accumulate wealth. The money does exist, it just exists for those who want to gamble — private equity firms, “business development companies” that exist to give money to other companies, venture capitalists, and banks that are getting desperate and need an overnight shot of capital from the Federal Reserve’s Overnight Repurchase Facility or Discount Window, two worrying indicators of bank stress I’ll get into later.
No, the money does not exist for you or me or a person. Money is for entities that could potentially funnel more money into the economy, even if the ways that these entities use the money are reckless and foolhardy, because the system’s intent on keeping entities alive incentivizes it. We are in an era where the average person is told to pull up their bootstraps, to work harder, to struggle more, because, as Martin Luther King Jr. once said, it’s socialism for the rich and rugged free market capitalism for the poor.
The “free market” is a fucking con. When you or I run out of money, our things are taken from us, we receive increasingly-panicked letters, we get phone calls and texts and emails and demands, we are told that all will be lost if we don’t “work it out,” because the financial system is not about an exchange of value but whether or not you can enter into the currently agreed-upon con.
By letting neoliberalism and the scourge of the free markets rule, modern society created the conditions for what I call The Enshittifinancial Crisis — the place at which my friend Cory Doctorow’s Enshittification Theory meets my own Rot Economy Thesis in a fourth stage of Enshittification.
Enshittification unfolds in three phases: first, a company is “good to users,” Doctorow writes, drawing people in droves, as funnel traps do Japanese beetles, with the promise of connection or convenience. Second, with that mass audience consolidated, the company is “good to business customers,” compromising some of its features so that the most lucrative clients, usually advertisers, can thrive on the platform. This second phase is the point at which, say, our Facebook feeds fill with ads and posts from brands. Third, the company turns the user experience into “a giant pile of shit,” making the platform worse for users and businesses alike in order to further enrich the company’s owners and executives.
I’ll walk you through it.
Facebook was a huge, free platform, much like Instagram, that offered fast and easy access to everybody you knew. It acquired Instagram in 2012 to kill off a likely competitor, and over time would start making both products worse — clickbait notifications, a mandatory algorithmic feed that deliberately emotionally manipulated people and stoked political division, eventually becoming full of AI slop and videos, all so that Meta could continue to sell billions of dollars of ads a quarter. Per Kyle Chayka of the New Yorker, “Facebook’s feed, now choked with A.I.-generated garbage and short-form videos, is well into the third act of enshittification.”
The third stage is critical, in that it’s when the company also turns on its business customers. A Marketing Brew story from September of last year told the tale of multiple advertisers who found their campaigns switching to different audiences, wasting their money and getting questionable results. A New York Times story from 2021 described companies losing upwards of 70% of their revenue during a Facebook ads outage, another from 2018 described how Meta (then Facebook) deliberately hid issues with its measurement of engagement on videos from advertisers for over a year, and more recently, Meta’s ads tools started switching out top-performing ads with AI-generated ones, in one case targeting men aged 30 to 45 with an AI-generated grandma, all without warning the advertiser.
Meta doesn’t give a shit, because investors and analysts don’t give a shit. I could say “sell-side analysts” here — the ones that are trying to get you to buy a stock — but based on every analyst report I’ve read from a major bank or hedge fund, I truly think everybody is complicit.
In November 2025, Reuters revealed that Meta projected in late 2024 that 10% of its annual revenue ($16 billion) would come from advertisements for scams or banned goods, mere weeks after Meta announced a ridiculous $27 billion data center debt package, one that used deep accountancy magic to keep it off of its balance sheet despite Meta guaranteeing the entirety of the loan.
One would think this would horrify investors for two reasons: first, that a tenth of Meta’s revenue was projected to come from scams and banned goods, and second, that Meta is guaranteeing $27 billion of debt it keeps off its balance sheet.
One would be wrong. Morgan Stanley said a few weeks ago that Meta is “one of the handful of companies that can leverage its leading data, distribution and investments in AI,” and raised its target to $750, with a $1000-a-share bull case. Wedbush raised its Meta price target to $920, and Bank of America staunchly held firm at…$810. I can find no analyst commentary on Meta making sixteen billion dollars on fraud, because it doesn’t matter to them, because this is the Rot Economy, and all that matters is number go up.
Reality — such as whether there’s any revenue in AI, or whether it’s a good idea that Meta is spending over $70 billion this year on capital expenditures on a product that has generated no revenue (and please, fucking spare me the bullshit around “Meta’s AI ads play,” that whole story is nonsense) — doesn’t matter to analysts, because stocks are thoroughly, inextricably enshittified, and analysts don’t even realize it’s happening.
The stages of enshittification usually involve some sort of devil’s deal.
We have now entered Enshittification Stage 4, where businesses turn on shareholders.
Analysts and investors have become trapped in the same kind of loathsome platform play as consumers and businesses, and face exactly the same kinds of punishment through the devaluation of the stock itself. Where platforms have prioritized profits over the health and happiness of users or business customers, they are now prioritizing stock value over literally anything, and have — through the remarkable growth of tech stocks in particular — created a placated and thoroughly whipped investor and analyst sect that never asks questions and always celebrates whatever the next big thing is meant to be.
The value of a “stock” is based not on whether the business is healthy, or its future certain, but on the potential for its price to grow, and analysts have, thanks to an incredible bull run of tech stocks going on over a decade, been able to say “I bet software will be big” for most of that time, going on CNBC or Bloomberg and blandly repeating whatever it is that a tech CEO just said, all without any worries about “responsibility” or “the truth.”
This is because big tech stocks — and many other big stocks, if I’m honest — have made their lives easy as long as they don’t ask questions. Number always seems to be going up for software companies, and all you need to do is provide a vociferous defense of the “next big thing,” and come up with a smart-sounding model that justifies eternal growth.
This is entirely disconnected from the products themselves, which don’t matter as long as Number Go Up. If net income is high and the company estimates it will continue to grow, then the company can do whatever the fuck it wants with the product it sells or the things that it buys. Software Has Eaten The World in the sense that Andreessen got his wish, with investors now caring more about the “intrinsic value” of software companies than about the businesses or products themselves.
And because that’s happening, investors aren’t bothering to think too hard about the tech itself, or the deteriorating products underlying tech companies, because “these guys have always worked it out” and “these companies have always managed to keep growing.” As a result, nobody really looks too deep. Minute changes to accounting in earnings filings are ignored, egregious amounts of debt are waved off, and hundreds of billions of dollars of capital expenditures are seen as “the new AI revolution” versus “a huge waste of money.”
By incentivizing the Rot Economy — making stocks disconnected from the value of the company beyond net income and future earnings guidance — companies have found ways to enshittify their own stocks, and shareholders will be the ones who suffer, all thanks to the very downstream pressure that they’ve chosen to ignore for decades.
You see, while one might (correctly) see that the deterioration of products like Facebook and Google Search was a sign of desperation, it’s important to also see it as the companies themselves orienting around what they believe analysts and investors want to see.
You can also interpret this as weakness, but I see it another way: stock manipulation, and a deliberate attempt to reshape what “value” means in the eyes of customers and investors. If the true value of a stock is meant to be based on the value of its business, cash flow, earnings and future growth, a company deliberately changing its products is an intentional interference with value itself, as are any and all deceptive accounting practices used to boost valuations.
But the real problem is that analysts do not…well…analyze, not, at least, if it goes against the market consensus. That’s why Goldman Sachs and JP Morgan and Futurum and Gartner and Forrester and McKinsey and Morgan Stanley all said that the metaverse was inevitable — because they do not actually care about the underlying business itself, just its ability to grow on paper.
Need proof that none of these people give a fuck about actual value? Mark Zuckerberg burned $77 billion on the metaverse, creating little revenue or shareholder value, without any real explanation as to where the money went.
The street didn’t give a shit because Meta’s existing ads business continued to grow, same as it didn’t give a shit that Mark Zuckerberg burned $70 billion on capex, even though we don’t really know where that went either.
In fact, we really have no idea where all this AI spending is going. These companies don’t tell us anything. They don’t tell us how many GPUs they have, or where those GPUs are, or how many of them are installed, or what their capacity is, or how much money they cost to run, or how much money they make. Why would they tell us? Analysts don’t even look at earnings beyond making sure they beat on estimates. They’ve been trained for 20 years to take a puddle-deep look at the numbers to make sure things look okay, look around their peers and make sure nobody else is saying something bad, and go on and collect fees.
The same goes for hedge funds and banks propping up these stocks rather than asking meaningful questions or demanding meaningful answers. In the last two years, every major hyperscaler has extended the “useful life” of its servers from 3 years to either 5.5 or 6 years — in simple terms, this allowed each of them to incur a smaller depreciation expense every quarter, boosting net income. Those who are meant to be critical — analysts and investors sinking money into these stocks — had effectively no reaction, despite the fact that Meta used (per the Wall Street Journal) this adjustment to reduce its expenses by $2.3 billion in the first three quarters of this year.
This is quite literally disconnected from reality, and done based on internal accounting that we are not party to. Every single tech firm buying GPUs did this and benefited to the tune of billions of dollars in decreased expenses, and analysts thought it was fine and dandy because number went up.
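If you want to see how the trick works mechanically, here’s a minimal sketch of straight-line depreciation. The numbers are made up for illustration (no company’s actual fleet cost), but the arithmetic is the whole game: stretch the useful life, and the per-quarter expense shrinks.

```python
# Straight-line depreciation spreads an asset's cost evenly over its
# "useful life." Extend the life and each quarter's expense shrinks,
# and every dollar of vanished expense flows straight into pre-tax income.
# All figures are illustrative, not any company's real numbers.

def quarterly_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation expense per quarter."""
    return cost / (useful_life_years * 4)

fleet_cost = 12e9  # a hypothetical $12 billion server fleet

old_expense = quarterly_depreciation(fleet_cost, 3)  # old 3-year life
new_expense = quarterly_depreciation(fleet_cost, 6)  # new 6-year life

print(f"3-year life: ${old_expense / 1e9:.1f}B expense per quarter")
print(f"6-year life: ${new_expense / 1e9:.1f}B expense per quarter")
print(f"Paper boost to quarterly pre-tax income: ${(old_expense - new_expense) / 1e9:.1f}B")
```

Nothing about the servers changed. The hardware ages exactly as fast as it did before; only the accounting assumption moved, which is why a single line in a filing can quietly add billions to net income.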
Shareholders are now subordinate to the shares themselves, reacting in the way that the shares demand they do, being happy for what the companies behind the shares give them, and analysts, investors and even the media spend far more energy fighting the doubters than they do showing these companies scrutiny.
Much like a user of an enshittified platform, investors and analysts are frogs in a pot, the experience of owning a stock deteriorating since Jack Welch and GE taught corporations that the markets are run with the kind of simplistic mindset built for grifter exploitation.
And much like those platforms, corporations have found as many ways as possible to abuse shareholders, seeing what they can get away with, seeing how far they can push things as long as the numbers look right, because analysts are no longer looking for sensible ideas.
Let me give you an example I’ve used before. Back in November 1998, Winstar Communications signed a “$2 billion equipment and finance agreement with Lucent Technologies” where Winstar would borrow money from Lucent to buy stuff from Lucent, all to create $100 million in revenue over 5 years.
In December 1999, Barron’s wrote a piece called “In 1999 Tech Ruled”:
George Gilbert, who manages the Northern Technology Fund, predicts the Web-centric worlds of consumer services and software will fare well next year, too.
"A lot of people are increasing their access to the Internet," says Gilbert. "And e-commerce and business networking are very high priorities for the Fortune 100."
Lawrence York, lead portfolio manager of the WWW Internet Fund, is bullish on semiconductors, telecommunications and business-to-business – or B2B – e-commerce software. But he's wary of online retailers. "That model won't work long term," he asserts.
His top B2B picks? Ariba and Official Payments. In wireless, he likes Winstar, Ciena and AirNet Communications, which went public earlier this month.
AirNet? Bankrupt. Winstar? Horribly bankrupt. While Ciena survived, it had spent over a billion dollars acquiring other companies (all stock, of course), only to see its revenue dwindle basically overnight from $1.6 billion to $300 million as the optical cable industry collapsed.
One could have worked out that Winstar was a dog, or that all of these companies were dogs, simply by looking at the numbers, such as “how much they made versus how much they were spending.” Instead, analysts, the media and banks chose to pump up these stocks because the numbers kept getting bigger, and when the collapse happened, rationalizations were immediately created: there were a few bad apples (Enron, Winstar, WorldCom), “the fiber was useful” and thus laying it was worthwhile, and otherwise everything was fine.
The problem, in everybody else’s mind, was that everybody had got a bit distracted and some companies that weren’t good would die. All of that lost money was only a problem because it didn’t pay off. This was a misplaced gamble, and it taught tech executives one powerful lesson: earnings must be good, without fail, by any means necessary, and otherwise nothing else matters to Wall Street.
It’s all about incentives. A sell-side analyst that tells you not to buy something is a problem. A journalist that is skeptical or critical of an industry in the midst of a growth or hype cycle is considered a “hater” — don’t I fucking know it. Analysts that do not sing the same tune as everybody else are marginalized, mocked and aggressively policed.
And I don’t fucking care. Stop being fucking cowards. By not being skeptical or critical you are going to lead regular people into the jaws of another collapse.
The dot com bubble was actually a great time to start reevaluating how and why we value stocks — to say “hey, wait, that $2 billion deal will only make $100 million in revenue?” or “this company spends $5 for every $1 it makes!” — but nobody, it appears, remained particularly suspicious of the tech industry, or a stock market that was increasingly orienting itself around conning shareholders.
And because shareholders, analysts and the media alike refused to retain a single shred of suspicion leaving the dot com era, the mania never actually subsided. Financial publications still found themselves dedicated to explaining why the latest hype cycle was real. Journalists still found themselves told by editors that they had to cover the latest fad, even if it was nonsensical or clearly rotten. Analysts still grabbed their swords and rushed to protect the very companies that have spent decades misleading them.
Much like we spent years saying that Facebook was a “good deal” because it was free, analysts and investors say tech stocks are “great to hold” because they keep growing, even if the reason they “keep growing” is a series of interlocking monopolies, difficult-to-leave platforms and impossible-to-fight traction and pricing, all of which have an eventual sell-by date.
I realize I’m pearl-clutching over the amoral status of capitalism and the stock market, but hear me out: what if we’re actually in a 15-to-20-year-long knife-catching competition? What if all anybody has done is look at cashflow, net income, future growth guidance, and called it a day? A lack of scrutiny has allowed these companies to do effectively anything they want, bereft of worrisome questions like "will this ever make a profit?"
What if we basically don’t know what the fuck is going on? What if all of this was utterly senseless?
As I wrote last year, the tech industry has run out of hypergrowth ideas, facing something I call “the Rot Com bubble.” In simple terms, they’re only “doing AI” because there do not appear to be any other viable ideas to continue the Rot Economy’s eternal growth-at-all-costs dance.
Yet because growth hasn’t slowed yet, analysts, the media and other investors are quick to claim that AI is “paying off,” even if nobody has ever said how much AI revenue is being generated, or, in the case of Salesforce, it can say “nearly $1.4 billion ARR,” which sounds really big until you realize a company with $10.9 billion in quarterly revenue is boasting about making roughly $116 million in revenue in a month.
Nevertheless, because Salesforce set a new revenue target of $60 billion by 2030, the stock jumped 4%. It doesn’t matter that most Agentforce customers don’t pay for the service, or that AI isn’t really making much money, or really anything, other than Number Go Up.
The era we live in is one of abject desperation, to the point that analysts and investors — and shareholders by extension — will take any abuse from management. They will allow companies to spend as much money as they want in whatever ways they want, as long as it continues the charade of “number go up.”
Let me spell it out a little more, using the latest earnings of various hyperscalers as an example.
We have no idea what any of that money actually bought, because analysts and investors are in an abusive relationship with tech stocks. It is fundamentally insane that Microsoft, Meta, Amazon and Google have spent $776 billion in capital expenditures in the space of three years, and even more so that analysts and investors, when faced with such egregious numbers, simply sit back and say “they’re building the infrastructure of the future, baby!” Analysts and traders and investors and reporters do not think too hard about the underlying numbers, because doing so immediately makes you run head-first into a number of worrying questions, such as “where did all that money go?” and “will any of this pay off?” and “how many GPUs do they actually own?”
Analysts have, on some level, become the fractional marketing team for the stocks they’re investing in. When Oracle announced its $300 billion deal with OpenAI in September — one that Oracle does not have the capacity to fill and OpenAI does not have the money to pay for – analysts heaved and stammered like horny teenagers seeing their first boob:
John DiFucci from Guggenheim Securities said he was “blown away.” TD Cowen’s Derrick Wood called it a “momentous quarter.” And Brad Zelnick of Deutsche Bank said, “We’re all kind of in shock, in a very good way.”
“There’s no better evidence of a seismic shift happening in computing than these results that you just put up,” Zelnick said on the earnings call.
These are the same people that retail and institutional investors rely upon for advice on what stocks to buy, all acting with the disregard for the truth that comes from years of never facing a consequence. Three months later, Oracle has lost basically all of the stock bump it saw from the OpenAI deal, meaning that any retail investor who YOLO’d into the trade because, say, analysts from major institutions said it was a good idea and news outlets acted like this deal was real, already got their ass kicked.
And please, spare me the “oh they shouldn’t trade off of analysts” bullshit. That’s the kind of victim-blaming that allows these revered fuckwits to continue farting out these meaningless calls.
In reality, we’re in an era of naked, blatant, shameless stock manipulation, both privately and publicly, because a “stock” no longer refers to a unit of ownership in a company so much as it is a chip at a casino where the house constantly changes the rules. Perhaps you’re able to occasionally catch the house showing its hand, and perhaps the house meant for you to see it. Either way, you are always behind, because the people responsible for buying and selling stocks at scale under the auspices of “knowing what’s going on” don’t seem to know what they’re talking about, or don’t care to find out.
Let’s walk through the latest surge of blatant stock manipulation, and how the media and analysts helped it happen.
Oracle announces its unfillable, unpayable $300 billion deal with OpenAI, leading to a 30%+ bump in its stock price. Analysts, who should ostensibly be able to count, call it “momentous” and say they’re “in shock.” On September 22, 2025, CEO Safra Catz steps down, and nobody seems to think that’s weird or suspicious.
Two months later, Oracle’s stock is down 40%, with investors worried about Oracle’s growing capex, which is surprising I suppose if you didn’t think about how Oracle would build the fucking data centers.
Basically anyone who traded into this got burned.
NVIDIA announced a “strategic partnership” to invest “up to $100 billion” and build 10GW of data centers with OpenAI, with the first gigawatt to be deployed in the second half of 2026. Where would the data centers go? How would OpenAI afford to build them? How would OpenAI build a gigawatt in less than a year? Don’t ask questions, pig!
NVIDIA’s stock bumped from $175.30 to $181 in the space of a day. The media wrote about the story as if the deal was done, with CNBC claiming that “the initial $10 billion tranche [was] expected to close within a month or so once the transaction has been finalized.” I read at least ten stories that said that “NVIDIA had invested $100 billion.”
Analysts would say that NVIDIA was “locking in OpenAI” to “remain the backbone of the next-gen AI infrastructure,” that “demand for NVIDIA GPUs is effectively baked into the development of frontier AI models,” that the deal “[strengthened] the partnership between the two companies…[and] validates NVIDIA’s long-term growth numbers with so much volume and compute capacity.” Others would say that NVIDIA was “enabling OpenAI to meet surging demand.”
Three analysts — Rasgon at Bernstein, Luria at D.A. Davidson and Wagner at Aptus Capital — all raised circular deal concerns, but they were the minority, and those concerns were still often buried under buoyant optimism about the prospects of the company.
One eensy weensy problem though, everyone! This was a “letter of intent” — it said so in the announcement! — and in its November earnings, NVIDIA said that it had “entered into a letter of intent with an opportunity to invest in OpenAI.”
It turns out the deal didn’t exist and everybody fell for it! NVIDIA hasn’t sent a dime and likely won’t. A letter of intent is a “concept of a plan.”
Back in October, Reuters reported that Samsung and SK Hynix had "signed letters of intent to supply memory chips for OpenAI's data centers," with South Korea's presidential office saying that said chip demand was expected to reach "900,000 wafers a month," with "much of that from Samsung and SK Hynix," which was quickly extrapolated to mean around 40% of global DRAM output.
Stocks in both companies, to quote Reuters, “soared,” with Samsung climbing 4% and SK Hynix more than 12% to an all-time high. Analyst Jeff Kim of KB Securities said that “there have been worries about high bandwidth memory prices falling next year on intensifying competition, but such worries will be easily resolved by the strategic partnership,” adding that “Since Stargate is a key project led by President Trump, there also is a possibility the partnership will have a positive impact on South Korea's trade negotiations with the U.S.”
Donald Trump is not “leading Stargate.” Stargate is a name used to refer to data centers built by OpenAI. KB Securities has around $43 billion of assets under management. This is the level of analysis you get from these analysts! This is how much they know!
On SK Hynix's October 29 2025 earnings call, weeks after the announcement, its CEO, Kim Woo-Hyun, was asked a question about High Bandwidth Memory growth by SK Kim from Daiwa Securities:
Kim: Thank you very much for taking my question. It is on demand. Now, there have been a series of announcements of GPU and ASIC supply cooperation between Big Techs and AI companies, fueling expectations of further AI market growth. Then, against this backdrop, what is the company's outlook on HBM demand growth, as well as a broadening of the customer base?
SK Hynix: Thank you for the question. Now, with upward adjustment in Big Tech's CapEx and increased investment by AI companies, the HBM market, even by a conservative estimate, will keep growing at an average of over 30% for the next five years.
I will point to our recent LOI with OpenAI for large-scale DRAM supply as an example of the very strong market demand for AI, as well as the need to secure AI memory based on HBM more than anything else when developing AI technology.
This is the only mention of OpenAI. Otherwise, SK Hynix has not added any guidance that would suggest that its DRAM sales will spike beyond overall growth, other than mentioning it had "completed year 2026 supply discussions with key customers." There is no mention of OpenAI in any earnings presentation.
On Samsung's October 30 2025 earnings call, Samsung mentioned the term "DRAM" 18 times, and neither mentioned OpenAI nor any letters of intent.
In its Q3 2025 earnings presentation, Samsung mentions it will "prioritize the expansion of the HBM4 [high bandwidth memory 4] business with differentiated performance to address increasing AI demand."
Analysts do not appear to have noticed a lack of revenue from an apparent deal for 40% of the world’s RAM! Oh well! Pobody’s nerfect!
Both Samsung and SK Hynix’s stocks have continued to rise since, and you’d be forgiven for thinking this deal was something to do with it, even though it wasn’t.
AMD announced that it had entered a “multi-year, multi-generation agreement” with OpenAI to build 6 GW of data centers, with “the first 1GW deployment set to begin in the second half of 2026,” calling the agreement “definitive” with terms that allowed OpenAI to buy up to 10% of AMD’s stock, vesting over “specific milestones” that started with the first gigawatt of data center development. Said data centers would also use AMD’s yet-to-be-released MI450 GPUs. The deal would, per Reuters, bring in “tens of billions of dollars of revenue.”
Where would those data centers go? How would OpenAI pay for them? Would the chips be ready in time? Silence, worm! How dare you ask questions? How dare you? Why are you asking questions? NUMBER GO UP!
AMD’s shares surged by 34%, with analyst Dan Ives of Wedbush saying that this was a “major valuation moment” for AMD. As an aside, Ives said that NVIDIA would benefit from the metaverse in 2021, and told CBS News on November 22, 2021 that “the metaverse [was] real and Wall Street [was] looking for winners.”
One would think that AMD’s November earnings — a month after the announcement — might be a barn-burner full of remaining performance obligations from OpenAI. In fact, CEO Lisa Su said that “[AMD expected] this partnership will significantly accelerate [its] data center AI business, with the potential to generate well over $100 billion in revenue over the next few years.”
Here’s how AMD’s 10-Q filing referred to it:
As of September 27, 2025, the aggregate transaction price allocated to remaining performance obligations under contracts with an original expected duration of more than one year was $279 million, of which $139 million is expected to be recognized in the next 12 months. The revenue allocated to remaining performance obligations does not include amounts which have an original expected duration of one year or less.
…so, no revenue from OpenAI at all, I guess? AMD raised guidance by 35% over the next five years; AMD’s trailing 12-month revenue is $32 billion. “Tens of billions of dollars” would surely lead to more than a 35% boost (an increase of $11.2 billion or so) over the next five years?
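As a sanity check on the gap between the rhetoric and the guidance, here’s the arithmetic spelled out. The five-year horizon for Su’s “well over $100 billion…over the next few years” is my assumption, since the quote never names one:

```python
# Sanity-checking the AMD numbers as quoted in the passage.
trailing_revenue = 32e9   # AMD's trailing-12-month revenue
guidance_raise = 0.35     # the 35% raise in guidance

implied_annual_increase = trailing_revenue * guidance_raise
print(f"Implied annual revenue increase: ${implied_annual_increase / 1e9:.1f}B")

# Versus Su's claim of "well over $100 billion ... over the next few years."
# "A few years" is assumed here to mean five; the quote doesn't specify.
claimed = 100e9
years = 5
print(f"Claimed OpenAI-driven revenue per year: ${claimed / years / 1e9:.0f}B")
```

If the OpenAI partnership alone were really worth something like $20 billion a year, an $11.2 billion raise doesn’t come close to reflecting it. The guidance and the press release describe two different companies.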
Guess all of that was for nothing. No follow-up from the media, no questions from analysts, just a shrug and we all move on.
Anyway, AMD’s stock is now down from a high of $259 at the end of October to around $214 as of writing this sentence. Everybody who traded in based on analyst and media comments got fucked.
So, back on September 5, Broadcom said on its earnings call that it had a $10 billion order from a mystery customer, which analysts quickly assumed was OpenAI. The stock popped 9%, gradually climbing to a high of $369 or so on September 10, before declining a little until October 13, when Broadcom announced its ridiculous 10 gigawatt deal with OpenAI, claiming that it would deploy 10GW of OpenAI-designed chips, with the first racks deploying in the second half of 2026 and the entire deployment completed by the end of 2029.
The same day, its president of semiconductor solutions, Charlie Kawwas, added that said mystery customer was actually somebody else:
“I would love to take a $10 billion [purchase order] from my good friend Greg [Brockman, COO of OpenAI],” Kawwas said. “He has not given me that PO yet.”
Nevertheless, Broadcom's stock popped by 9% on the news about the 10GW deal, with CNBC adding that "the companies have been working together for 18 months." Because it's OpenAI, nobody sat and thought about whether somebody at Broadcom saying "well, OpenAI has yet to order these chips" was a problem. In fact, the answer to “how does OpenAI afford this?” appeared to be “they’d afford it” when it came to analysts:
The 2026 timeline set out by OpenAI for the build-out is aggressive, but the startup is also best positioned to raise the funds required for the project, given the heights of investor confidence, said Gadjo Sevilla, an analyst at eMarketer.
"Financing such a large chip deal will likely require a combination of funding rounds, pre-orders, strategic investments, and support from Microsoft (MSFT.O), as well as leveraging future revenue streams and potential credit facilities."
Not to worry, OpenAI’s solution was far simpler: it didn’t order any chips. During its November earnings call, Broadcom revealed that the $10 billion order was actually from Anthropic — another LLM startup that burns billions of dollars, and one already buying Google's TPUs — and that Anthropic had booked another $11 billion in orders. Analysts somehow believed that Anthropic is “positioned to spend heavily” despite being another venture-backed welfare recipient in the same flavor as OpenAI.
Oh, right, that 10GW OpenAI deal. Broadcom CEO Hock Tan said that he did “not expect much in 2026” from the deal, and guidance did not change to reflect it.
Broadcom climbed to a high of $412 leading up to its earnings, and I imagine it did so based on people trading on the belief that OpenAI and Broadcom were doing a deal together, which does not appear to be happening. While there’s an alleged $73 billion backlog, every dollar from Anthropic is questionable.
Actually, yes we can.
Whenever a company says “letter of intent” — as NVIDIA and SK Hynix/Samsung did — it’s important to immediately stop taking the deal seriously until you get the word “contract” involved. Not “agreement” or “deal” or “announcement,” but “contract,” because contracts are the only thing that actually matters.
Similarly, it’s time for everybody — analysts, the media, members of Congress, the fucking pope, I don’t care — to start treating these companies with suspicion, and to start demanding timelines. NVIDIA and Microsoft announced their $15 billion investment in Anthropic over a month ago. Where’s the money? Why does the agreement say “up to $10 billion” from NVIDIA and “up to $5 billion” from Microsoft? These subtle details suggest that the deal is not going to be for $15 billion, and the lack of activity suggests it might not happen at all.
These deals are announced with the intention of suggesting there is more revenue and money in generative AI than actually exists. Furthermore, it is irresponsible and actively harmful for analysts and the media to continually act as if these deals will actually get paid when you consider the financial conditions of these companies.
As part of its alleged funding announcement with NVIDIA and Microsoft, Anthropic agreed to purchase $30 billion of Azure compute. It also agreed to spend "tens of billions of dollars" with Google Cloud. It ordered $10 billion in chips from Broadcom earlier in the year, and apparently placed another $11 billion order in its latest fiscal quarter. How does it pay for those? It allegedly will burn $2.8 billion this year (I believe it burned much, much more) and raised $16.5 billion in funding (before Microsoft and NVIDIA’s involvement, which we cannot confirm has actually happened). How are investors tolerating Broadcom not directly stating “the future financial condition of this company is questionable”? Has Broadcom created a reserve for this deal?
If not, why not? Anthropic will make no more than $5 billion this year, and has raised $17.5bn (with a further $2.5bn coming in the form of debt). How can it foreseeably afford to pay $10 billion, or $11 billion, or $21 billion, considering its already massive losses and all those other obligations mentioned? Will Jensen Huang hand over $10 billion so that Anthropic can hand it to Broadcom?
I realize the counter-argument is that companies aren’t responsible for their counterparties’ financial health, but my argument is that it’s the responsibility of any public company to give a realistic view of its financial health, which includes noting if a chunk of its revenue is from a startup that can’t afford to pay for its orders. There is no counter to that! Anthropic cannot afford to pay Broadcom $10 billion right now!
Nevertheless, the problem is that in any bubble, being really stupid and ignorant works right up until it doesn’t. However harsh the dot-com bust might have been, it wasn’t harsh enough: those responsible were left unpunished and unashamed, guaranteeing that this cycle would happen again.
I want to be really, abundantly clear about what’s happening: every single stock you see “growing because of AI” outside of those selling RAM and GPUs is actually growing because of something else. Microsoft, Amazon, Google and Meta all have other products that are making them money. AI is not doing it, and because analysts and investors do not think about things for two seconds, they have allowed themselves to be beaten down and turned into supplicants for public stocks.
Investors have allowed themselves to be played, and the results will be worse than the dot com bubble bursting by several orders of magnitude.
I’m gonna be really simplistic for a second.
I am skeptical of AI because everybody loses money. I believe every AI company is unprofitable with margins that are getting increasingly worse as they scale, and as a result that none of them will be able to either get acquired or go public.
Sidenote: This was always the way that venture worked — pump up an unprofitable startup, then sell it to a hyperscaler or take it public, and then let the rest of the world deal with the toxic asset until it either died or wasn’t toxic anymore.
This means that venture capitalists that have sunk money into AI stocks are going to be sitting on a bunch of assets under management (AUM) — the same assets they collect fees on — that will eventually crater or go to zero, because there will be no way for any liquidity event to occur.
This is at a time of historically-low liquidity for venture capitalists, with Pitchbook estimating there will only be $100.8 billion in venture capital funds available at the end of 2025.
Venture capitalists raise money from limited partners, who invest in venture capital with the hope of returns that outpace the public markets. Venture capital vastly overinvested during 2021 and 2022 (this was also a problem in private equity). In simple terms, these funds are sitting on tons of stock that they cannot shift, and the longer it takes for a company to either go public or get acquired, the more likely it is the VC or PE firm will have to mark down its value.
This is so bad that according to Carta, as of August 2024, less than 10% of VC funds raised in 2021 had made any distributions to their investors. In a piece from September, Carta revealed that “about 15% of funds” from 2023 had generated any distributions as of Q2 2025, and the median net internal rate of return was 0.1%, meaning that, at best, most investors got their money back and absolutely nothing else.
In fact, investing in venture capital has kinda fucking sucked. According to Carta, “As of the end of Q2, most VC funds across all recent vintages had a TVPI somewhere between 0.8x and 2x. But there are some areas where standout TVPIs are surfacing.” TVPI means Total Value to Paid-In Capital: the total value a fund has generated — both the cash it has distributed and the paper value of what it still holds — for each dollar invested.

This chart may seem confusing, but it tells you that, for the most part, VCs have struggled to provide even-money returns since 2017. A “decent” TVPI is 2.5x, and as you’ll see, things have effectively collapsed since 2021. Companies are not going public or being acquired at the same rate, meaning that investor capital is increasingly locked up, and limited partners are still waiting for a payoff from the last bubble, let alone this one.
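The arithmetic behind TVPI (and its realized cousin, DPI — Distributions to Paid-In) is simple enough to sketch. The numbers below are invented purely for illustration, not any real fund's figures, but they show how a fund can post a healthy-looking TVPI while its LPs have barely seen a dime:

```python
# A minimal sketch (hypothetical numbers) of how DPI, RVPI, and TVPI relate
# for a venture fund. TVPI = (distributions + residual value) / paid-in capital.
paid_in = 100.0        # capital called from LPs, in $M
distributions = 10.0   # cash actually returned to LPs, in $M
residual_value = 120.0 # the VC's own mark on the unsold portfolio, in $M

dpi = distributions / paid_in                       # realized return
rvpi = residual_value / paid_in                     # unrealized (paper) return
tvpi = (distributions + residual_value) / paid_in   # the headline figure

print(f"DPI {dpi:.1f}x, RVPI {rvpi:.1f}x, TVPI {tvpi:.1f}x")
# A 1.3x TVPI looks fine on paper, but LPs have only gotten 10 cents back
# per dollar — the rest depends entirely on exits that may never come.
```

The gap between DPI and TVPI is exactly the "locked-up capital" problem: the bigger the RVPI slice, the more the fund's reported performance rests on the VC's own markdowns-to-be.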
Carta updated the piece in December 2025, and things had somehow gotten worse. TVPI soured further, suggesting a further lack of exits across the board. The only slight improvement was that the median IRR rose to 0.5% for funds from 2021 and 0.1% for funds from 2022.
In simple terms, we are looking at years of locked-up capital leaving venture capital cash-starved and a little desperate.

The worst part? All of this is happening during a generational increase in the amounts that startups need to raise thanks to the ruinous costs of generative AI, and the negative margins of AI-powered services. To quote myself:
Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June (on revenues of $83 million in, I assume, October). Perplexity, one of the most “popular” AI companies, raised $200 million in September after raising $100 million in July after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed) after raising $500 million in December 2024. Cognition raised $400 million in September after raising $300 million in March. Cohere raised $100 million in September a month after it raised $500 million.
None of these companies are profitable, nor do they have any path to an acquisition or IPO. Why? Because even the most advanced AI software company is ultimately prompting Anthropic or OpenAI’s models, meaning that their only real intellectual property is those prompts and their staff, and whatever they can build around the models they don’t control, which has been obvious from the meager “acquisitions” we’ve seen so far.
Windsurf, which was allegedly being sold to OpenAI, ended up selling its assets to Cognition in July, with Google paying $2.4 billion for its co-founders and a “licensing agreement,” similar to its acquisition of Character.Ai, where it paid $2.7 billion to rehire Noam Shazeer, license its tech, and pay off the stock of its remaining staff. This is also exactly what Microsoft did with Inflection AI and its co-founder Mustafa Suleyman.
OpenAI’s acquisitions of Statsig ($1.1bn), Io Products ($6.5bn) and Neptune ($400m) were all-stock. Every other acquisition — Wiz, Confluent, Informatica, and so on (CRN has a great list here) — is either somebody trying to pretend that (for example) Wiz is related to AI, or trying to say that a data streaming platform is AI-related because AI needs that, which may be true, but doesn’t mean that any AI startups are actually selling.
And they’re not, which is a problem, as 41% of US venture dollars in 2025 have gone into AI as of August, and according to Axios, the global number was around 51%.
A crisis is brewing. Nerdlawyer, back in October, wrote about the explosive growth of secondary markets:
Enter the secondary market—a once-niche corner of venture capital that has transformed into a primary liquidity mechanism.
What's remarkable is how quickly this market has matured. At least five major venture funds have hired full-time staff dedicated to manufacturing non-traditional exits. As Hans Swildens, CEO of Industry Ventures, explained: "All the brand name funds are all staffing and thinking through liquidity structures."
And professional buyers have flooded in. Mega-funds specializing in secondaries have raised unprecedented amounts: Lexington raised a record $23 billion fund, while HarbourVest, Ardian, and Coller Capital have raised funds in the $10-20 billion range.
In simpler terms, there are now Hot Potato Funds, where either another limited partner buys another one’s allocation, the companies themselves buy back their stock, or the stock is resold to other private investors.
And they're not alone. The secondary market is projected to handle $122 billion in assets in 2025, yet that still represents just 1.9% of total unicorn value. There's $6+ trillion in untapped liquidity potential.
The transformation of the secondary market from emergency tool to standard operating procedure represents the most significant structural shift in venture capital since the rise of unicorns. It's not a temporary fix—it's a permanent evolution driven by misaligned timeframes between fund lifecycles (10 years) and company maturation (11+ years).
For better or worse, this is the new reality of startup funding. VCs can no longer afford to simply "spray and pray" and wait for exits. They need active liquidity management strategies. And that fundamentally changes what kinds of companies get funded and how.
While this piece frames this as a positive, the reality is far grimmer. Venture capitalists are sitting on piles of immovable equity in companies worth far less than they invested at, and the answer, it appears, is to find somebody else to buy the dead weight.
According to Newcomer, only 1,117 venture funds closed in 2025 (down from 2,100 in 2024), and 43% of dollars raised went to the largest venture funds, per The New York Times and PitchBook, suggesting limited partners are becoming less interested in pumping cash into the system at a time when AI startups are demanding more capital than has ever been raised.
How long can the venture capital industry keep handing out $100 million to $500 million to multiple startups a year? Because all signs suggest that the current pace of funding must continue in perpetuity, as nobody appears to have worked out that generative AI is inherently unprofitable, and thus every single company is on the Silicon Valley Welfare System until everybody gives up, or the system itself cannot sustain the pressure.
I’ve read too many people make off-handed comments about this “being like the dot com boom” and saying that “lots of startups might die but what’s left over will be good,” and I hate them for both their flippancy and ignorance.
None of the current stack of AI companies can survive on their own, meaning that the venture capital industry is holding them up. If even one of these companies falters and dies, the entire narrative will die. If that happens, it will be harder for AI companies to raise, and even harder to sell an AI company to someone else.
This is a punishment for a decade-plus of hubris, where companies were invested in without ever considering a path to profitability. Venture capital has made the same mistake again and again, believing that because Uber, or Facebook, or Airbnb, or any number of companies founded nearly twenty years ago were unprofitable (with paths to profitability in all three cases, mind), it was totally okay to keep pumping up companies that had no path to profitability, which eventually became “had no apparent business model” (see: the metaverse, web3), which eventually became “have negative margins so severe and valuations so high that we will need an IPO at a market cap higher than Netflix.”
This is Silicon Valley’s Rot Economy — the desperate, growth-at-all-costs attachment to startups where you “really like the founder,” where “the market could be huge” (who knows if it is!), where you just don’t need to worry about profitability because IPOs and exits were easy.
Venture capital also used to be easy, because we were still in the era of hypergrowth. You could be a stupid asshole who doesn’t know anything, but there were so many good deals, and the better-known you were, the more likely deals would be brought to you first, guaranteeing a bigger payout, guaranteeing more LP capital, guaranteeing more opportunities of a higher quality because you were a big name. It was easier to make a valuable company, easier to get funded, and easier to sell, because the goal was always “get funded, grow as large an audience as possible, and go public or get acquired.”
As a result, venture capital encouraged growth-at-all-costs thinking. In 2010, Ben Horowitz said that “the only thing worse for an entrepreneur than start-up hell (bankruptcy) is start-up purgatory”:
when you don’t go bankrupt, but you fail to build the No. 1 product in the space. You have enough money with your conservative burn rate to last for many years. You may even be cash-flow positive. However, you have zero chance of becoming a high-growth company. You have zero chance of being anything but a very small technology business (see Navisite). From the entrepreneur’s point of view, this can be worse than start-up hell since you are stuck with the small company.
This poisonous theory paid off, in that startups got used to building high-growth, low-margin companies that would easily sell to other companies or the markets themselves.
Until it didn’t, of course.
Per Nerdlawyer, IPOs have collapsed as an exit route, along with easy-to-raise capital.

Per PitchBook, since 2022, 70% of VC-backed exits were valued at less than the capital put in, with more than a third of them being startups buying other startups in 2024.
The money is drying up as the value of VCs’ assets is decreasing, at a time when VCs need more money than ever, because everybody is heavily leveraged in the single-most-expensive funding climate in history.
And as we hit this historic liquidity crisis, the two largest companies — OpenAI and Anthropic — are becoming drains on the system that, in a very real sense, are participating in a massive redistribution of capital reserved for startups to one of a few public companies.
No, really!
OpenAI is trying to raise as much as $100 billion in funding so it can continue to pass money to one of a few public companies — $38 billion to Amazon Web Services over seven years, $22.4 billion to CoreWeave over five years, and $250 billion over an indeterminate period on Microsoft Azure. If successful, OpenAI’s venture telethon will raise more money than has ever been raised in a single round, draining funds that actual startups need. Anthropic has agreed to $70 billion in compute and chip deals across Google, Amazon and Broadcom, and that’s not including the Hut8 compute deal that Google is backing.
This money will come from what remains of venture capital, private equity and hyperscaler generosity.
Yet elsewhere, even the money that goes to regular startups is ultimately being sent to hyperscalers. That AI startup that needs to keep raising $100 million in a single round isn’t sending that cash to other startups — it’s mostly going to OpenAI (Microsoft, Amazon, CoreWeave, Google), Anthropic (Google, Microsoft, Amazon), or one of the large hyperscalers for Azure, AWS or Google Cloud.
Silicon Valley didn’t birth the next big tech firm. It incubated yet another hyperscaler-level parasite, except instead of just spending money on hyperscaler services (and raising money to do so), both Anthropic and OpenAI actively drain the venture capital system as well, as they both burn billions of dollars.
By creating something that’s incredibly expensive to run, they naturally create startups more dependent on the venture capital system, and the venture capital system has no idea what to do other than say “just grow, baby!” Both OpenAI and Anthropic’s models might be getting cheaper on a per-million-token basis, but they use more tokens, increasing the cost of inference, which in turn increases the costs of startups doing business, which in turn means OpenAI, Anthropic, and all connected startups lose more money, which increases the burn on venture capital.
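To see how "cheaper per token" can still mean "more expensive per task," here's a toy calculation. Every number below is an assumption for illustration — not any vendor's actual pricing or token counts:

```python
# Hypothetical figures: per-token price falls 60%, but token usage per task
# grows 10x as "reasoning" models chew through far more tokens per request.
old_price = 10.0 / 1_000_000   # $ per token, at launch (assumed)
new_price = 4.0 / 1_000_000    # $ per token, a year later (assumed)

old_tokens_per_task = 2_000    # a short completion (assumed)
new_tokens_per_task = 20_000   # a long chain-of-thought response (assumed)

old_cost = old_price * old_tokens_per_task
new_cost = new_price * new_tokens_per_task

print(f"old ${old_cost:.3f}/task, new ${new_cost:.3f}/task")
# The per-token price cut is swamped by token growth: the cost of serving
# each task quadruples, and so does the burn of every startup built on top.
```

Under these assumptions, the task goes from $0.02 to $0.08 even as the sticker price per token drops — which is the doom-spiral mechanic in miniature.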
This is a doom-spiral, one that can only be reversed through the most magical and aggressive turnaround we will have seen in history, and it will have to happen next year, without fail.
It won’t.
So why did venture do this?
Folks, we haven’t seen values this big in a long time. These are the biggest numbers we’ve ever seen. They’re simply tremendous. OpenAI is maybe worth $830 billion dollars, can you believe that? They lose so much money but folks we don’t worry about that, because they’re growing so fast. We love that Clammy Sam Altman — they call him “Clamuel” — tells everybody he’s giving them one billion dollars. Data centers are going to have the biggest deals we’ve ever seen, even [tchhh sound through teeth] if we have to work with Dario.
You see, right now AI startups are big, exciting news for the limited partners funding LLM firms. Things feel exciting because the value of the assets under management (AUM) is going up — nothing dodgy in itself, it’s just how VCs value things — and because fees are charged on AUM, marking up AI holdings directly increases how much VCs get paid. Investing early in OpenAI allows a VC — or even an asset manager like Blackstone, which invested in 2024 — to say it has a big holding and a big increase in its AUM.

We are currently in the sowing stage.
Nevertheless, AI stocks make VCs who bet on them two years ago look like geniuses on paper. If you got in early on OpenAI, Anthropic, Cursor, Cognition, Perplexity or any other company that loves to burn several dollars per dollar of revenue, you have a big, beautiful number, the biggest you’ve ever seen, and your limited partners need to pay you a fee just to manage it.
Venture capital hasn’t seen valuations like this in a long time, and on paper, it feels like a lot of VCs got in on companies worth billions of dollars. On paper, Cognition is worth $10.2 billion, Perplexity $18 billion, Cursor $29.3 billion, Lovable $6.6 billion, Cohere $6.8 billion, Replit $3 billion, and Glean $7.2 billion — massive valuations for companies that all basically build products that OpenAI, Anthropic, Amazon, Google, or any number of Chinese companies are already working to clone. They are all losing tons of money and have no path to profitability.
But right now the numbers are simply tremendous. I’ve heard venture capitalists tell me that there are times when they have to agree to invest with little to no information, or else lose the opportunity to another sucker. I’ve heard venture capitalists say they don’t have any insight into finances.
Venture capitalists would, of course, claim I’m insane, saying that the “growth is obviously there” while pointing to whatever startup has made $100 million ARR ($8.3 million in a month), all while not discussing the underlying operating expenses. The idea, I believe, is that the current spate of AI spending is only set to increase next year, and that will…somehow lead to fixing margins? Venture capitalists staunchly refuse to learn anything other than “invest in growth and then profit from growth,” even if “profiting from growth” doesn’t seem to be happening anymore.
In reality, venture capital shouldn’t have touched LLMs with a fifteen-foot pole, because the margins were obviously, blatantly bad from the very beginning. We knew OpenAI would lose $5 billion in the middle of 2024. A sane venture capital climate would have fucking panicked, but instead it chose to double, triple and quadruple down.
I believe that massive valuation drawdowns are a certainty. There are losses coming.
Venture capitalists, I have to ask you: what happens if OpenAI dies? Do you think that this will make investors interested in funding or acquiring other AI startups? How much longer are we going to do this? When will venture capital realize it’s setting itself up for disaster?
And what, exactly, is the plan? OpenAI and Anthropic will suck the lakes dry like an NVIDIA GPU named after Nancy Reagan. How is this meant to continue, and what will be left when it does?
The answer is simple: there won’t be money for venture capital for a while. Those AI holdings are going to be worth, at best, 50%, if they retain any value at all. Once one of these startups die, a panic will ensue, sending venture capitalists scrambling to get their holdings acquired, until there’s little or no investor interest left.
Why would LPs ever trust venture capital after this? Why would anybody? Because based on the past four years, it doesn’t appear that venture capital is actually good at investing money — it just got lucky, year after year, until there were few ideas that could sell for hundreds of millions or billions of dollars.
Venture capital believed it knew better as it turned its back on basic business fundamentals, starting with Clubhouse, crypto, the metaverse, and now generative AI.
Yet they’re far from the only fuckwits on the dickhead express.
Per Bloomberg, there were at least $178.5 billion in data-center credit deals in the US in 2025, rivaling the $215.4 billion invested in US venture capital in 2024 and the $197.2 billion invested in US VC through August 7 2025, and over $100 billion more than the $60.69 billion of data center credit deals done in 2024.
I’m very worried, and I’m going to tell you why, using a company called CoreWeave that I’ve been actively warning people about since March.
CoreWeave is something called a “neocloud.” It’s a company that sells AI compute, and does so by renting out NVIDIA GPUs, and as I explained a few months ago, it does so by building data centers backed by endless debt:
That’s because setting up a neocloud is expensive. Even if the company in question already has data centers — as CoreWeave did with its cryptocurrency mining operation — AI requires completely new data center infrastructure to house and cool the GPUs, and those GPUs also need paying for, and then there’s the other stuff I mentioned earlier, like power, water, and the other bits of the computer (the CPU, the motherboard, the memory and storage, and the housing).
As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they use to buy more GPUs. CoreWeave, for example, has $25 billion in debt on estimated revenues of $5.35 billion, losing hundreds of millions of dollars a quarter.
You know who also invests in these neoclouds? NVIDIA!
NVIDIA is also one of CoreWeave’s largest customers (accounting for 15% of its revenue in 2024), and just signed a deal to buy $6.3 billion of any capacity that CoreWeave can’t otherwise sell to someone else through 2032, an extension of a $1.3 billion 2023 deal reported by The Information. It was the anchor investor ($250 million) in CoreWeave’s IPO, too.
CoreWeave is one of the largest providers of AI compute in the world, and its business model is indicative of how most data center companies make money, and to explain my concerns, I’m going to explain why using this chart from CoreWeave’s Q2 2025 earnings presentation.

First, CoreWeave signs contracts — such as its $14 billion deal with Meta and $22.4 billion deal with OpenAI — before it has the physical infrastructure to service them. It then raises debt using this contract as collateral, orders the GPUs from NVIDIA, which arrive after three months, and then take another three months to install, at which point monthly client payments begin.
To really simplify this: data center developers are raising money months — sometimes a year or more — before they ever expect to make a penny. In fact, I can find no consistent answer to “how long a data center takes to build,” and the answer here is pretty important, because that’s how the money is gonna get made from these things.
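The gap between drawing the debt and collecting the first customer dollar is where the pain lives, and it's simple to put numbers on. Everything below is invented for illustration — the debt size, rate, and revenue figures are assumptions, not CoreWeave's actual terms:

```python
# Hedged illustration (hypothetical numbers) of why time-to-live matters:
# interest on GPU-backed debt starts accruing immediately, but customer
# payments don't begin until the compute is actually delivered.
annual_rate = 0.11             # assumed ~11% on GPU-collateralized debt
monthly_interest = annual_rate / 12
debt = 1_000.0                 # $M drawn up front to buy GPUs and build

def cash_burn_before_revenue(months_to_live: int) -> float:
    """Interest paid before a single customer dollar arrives, in $M."""
    return debt * monthly_interest * months_to_live

for months in (6, 18, 30):
    burn = cash_burn_before_revenue(months)
    print(f"{months:>2} months to go live: ${burn:,.0f}M in interest, $0 in revenue")
```

On these assumptions, a six-month build costs $55M in dead interest, and a thirty-month build costs $275M — all before delays, cost overruns, or the customer's own ability to pay enters the picture.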
You may notice that “monthly payments” begin at 6 to 30 months, a curious and broad blob of time. You see, data centers are extremely difficult to build, and the concept of an “AI data center” is barely a few years old, with the concept of hundreds of megawatts in one data center campus entirely made up of AI GPUs barely two years old, which means basically everybody building one is doing so for the first time, and even experienced developers are running into problems.
For example, Core Scientific — CoreWeave’s weird partner organization that it tried and failed to buy — has been trying to convert its Denton, Texas cryptocurrency mining data center into an AI data center since November 2024, specifically so that CoreWeave can rent it to Microsoft for OpenAI. This hasn’t gone well, with the Wall Street Journal reporting a few weeks ago that Denton has been wracked with “several months” of delays thanks to rainstorms preventing contractors from pouring concrete. The cluster is apparently going to have 260MW of capacity.
What this means for CoreWeave is that it can’t start getting paid by OpenAI, because, per its contract, customers don’t have to start paying until the compute is actually available. This is a very important detail to know for literally any data center development you’ve ever seen.
As of its latest Q3 2025 earnings filing, CoreWeave is sitting on $1.1 billion in deferred revenue (income for services not yet rendered), up from $951 million in Q2 2025 and $436 million in Q1 2025. This means deposits have been made, but the contract has yet to be serviced.
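For anyone unfamiliar with the accounting, deferred revenue works like this — a sketch with made-up numbers, not CoreWeave's actual contract terms:

```python
# Simplified sketch (hypothetical contract) of deferred revenue accounting:
# a customer prepayment sits on the balance sheet as a liability, and only
# becomes income-statement revenue as the service is actually delivered.
deferred = 0.0     # balance-sheet liability, $M
recognized = 0.0   # income-statement revenue, $M

# Customer prepays $300M for capacity that doesn't exist yet:
deferred += 300.0

# Each month the compute is actually delivered, value moves across:
for month in range(3):
    delivered = 25.0        # $M of service rendered this month (assumed)
    deferred -= delivered
    recognized += delivered

print(f"deferred ${deferred:.0f}M, recognized ${recognized:.0f}M")
# A growing deferred balance means deposits are arriving faster than
# service is being delivered — i.e., contracts outpacing live capacity.
```

That's why the quarter-over-quarter climb ($436M to $951M to $1.1 billion) matters: it's the signature of a company signing deals faster than it can build the data centers to service them.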
Now, I’m a curious little critter, so I went and found the 921-page $2.6 billion DDTL 3.0 loan agreement between CoreWeave and banks including Morgan Stanley, MUFG Bank and Goldman Sachs, and in doing so learned the following:
I apologize — that phrasing suggests CoreWeave isn’t already in trouble. Buried inside NVIDIA’s latest earnings (page 17) there was a little clue:
In the third quarter of fiscal year 2026, we entered into an agreement to guarantee a partner's facility lease obligations in the event of their default. The agreement allows our partner to secure a limited-availability facility lease backed by our credit profile, in exchange for issuing us warrants. The maximum gross exposure is $860 million, which is reduced as the partner makes payments to the lessor over five years. The partner has placed $470 million in escrow and executed an agreement to sell the data center cloud capacity, mitigating our default risk.
Credit where credit is due — eagle-eyed analyst JustDario caught this in November — but in CoreWeave’s condensed consolidated balance sheets, there sits a $477.5 million line-item under “restricted cash and cash equivalents, non-current.” Though this might not be the NVIDIA escrow — this number shifted from $617m in Q1 to $340m in Q2 — it lines up all too precisely…and who else would NVIDIA be guaranteeing?
In any case, CoreWeave is likely getting the best deals in data center debt outside of Oracle. It has top-tier financiers (who I will get to shortly), the full backing of NVIDIA (an investor, customer, and apparent financial backstop all at once), and the ability to raise debt quickly. CoreWeave’s deals are likely indicative of how data center financing takes place, and those top-tier financiers? They’ve been in basically every deal.
In fact…
So, I went and dug through a pile of 26 prominent data center loan deals, including the proposed $38 billion debt package that Oracle and Vantage Data Center Partners are raising for Stargate Shackelford and Wisconsin, Stargate Abilene, New Mexico, SoftBank’s $15 billion bridge loan (which I included for a reason that will become obvious shortly) and multiple CoreWeave loans, and found a few commonalities:
I realize there are far more data center deals than these, but I wanted to show you exactly how centralized these deals are.
The largest deals — the $38 billion Stargate TX/WI deal and $18 billion Stargate New Mexico deal — both involved Goldman Sachs, BNP Paribas, SMBC and MUFG, and all four of those companies have, at some point, funded CoreWeave. In fact, everybody appears to have funded CoreWeave at some point — Citibank, Credit Agricole, Societe Generale, Wells Fargo, Carlyle, Blackstone, BlackRock, Barclays, Magnetar, and Jefferies to name a few.
Of the 40 banks and financial institutions I researched, 24 have, at some point, loaned to or organized debt for CoreWeave. Of those institutions, Blackstone, Deutsche Bank, JP Morgan Chase, Morgan Stanley, MUFG and Wells Fargo have done so multiple times.
CoreWeave is a deeply unprofitable company saddled with incredible debt and deteriorating margins, with one of its largest clients paying net 360, and, as I’ve said, is arguably the best-financed data center company in the world.
What I’m getting at is that most data center deals are likely much worse than the terms that CoreWeave faces, and are likely financed in a similar way, where a client is signed for data center capacity that doesn’t exist, such as when Nebius raised $4.3 billion through a share sale and convertible notes (read: loans) to handle its $17.4 billion data center contract with Microsoft, and guess what? Goldman Sachs acted as lead underwriter on the deal, with assistance from Bank of America, Citigroup, and Morgan Stanley, all three of which have invested in CoreWeave.
AI data centers are expensive, require debt due to the massive cost of construction and GPUs, and all take at least a year, if not two, to start generating revenue, at which point they also begin losing money, because it seems that renting out AI GPUs is really unprofitable.
Every single major bank and financial institution has piled hundreds of millions if not billions of dollars into building data centers that take forever to even start generating money, at which point they only seem to lose it. Worse still, NVIDIA sells GPUs on a one-year upgrade cycle, meaning that all of those data centers being built right now are being filled with Blackwell chips, and by the time they turn on, NVIDIA will be selling its next-generation Vera Rubin chips.
Now, you’ve probably heard that Vera Rubin will use the same racks (Oberon) as Blackwell, which is true to an extent, but won’t be true for long, as NVIDIA intends to shift to Kyber racks in 2027, hoping to build 1MW IT racks (which will involve entire racks full of power supplies!), meaning that all of those data centers you see today — whenever they get built! — will be full of racks incompatible with the next generation of GPUs.
This will also decrease the value of the assets inside the data centers, which will in turn decrease the value of the assets held by the firms investing. Stargate Abilene? The one invested in by JP Morgan, Blue Owl, Primary Digital Infrastructure and Societe Generale? The one that’s heavily delayed and won’t be ready until the end of 2026 at earliest? Full to the brim with two-year-old GB200 racks!
By the beginning of 2027, Stargate Abilene will be obsolete, as will any and all data centers filled with Blackwell GPUs, as will any and all data centers being built today. Every single one takes 1-3 years and hundreds of millions (or billions) in debt, every single one faces the same kinds of construction delays, and better yet, almost all of them will turn on in roughly the same time frame.
Now, I ain’t no economist, but I do know that “supply and demand” has an effect on pricing. What do you believe happens to the price of renting a Blackwell GPU when all of these data centers come on? Do you think it becomes more valuable? Or less?
And while we’re on the subject, what do you think happens if there isn’t sufficient demand?
Right now, OpenAI makes up a large chunk of the global sale of compute — at least $8.67 billion of Azure revenue through September 2025, $22.4 billion of CoreWeave’s backlog, $38 billion of Amazon’s backlog, and so on and so forth — and made, based on my reporting, just over $4.5 billion in that period. It cannot afford to pay anybody, and nowhere is that more obvious than when it negotiated year-long payment terms for CoreWeave.
Put another way: when you remove the contracts signed by hyperscalers and OpenAI (which I do not believe has paid anybody other than Microsoft yet), based on my analysis, there was less than a billion dollars of AI compute revenue in 2025, or 0.5831% of the money spent on data centers.
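That percentage is just simple division. Here's a minimal sketch, dividing that sub-$1 billion of compute revenue by the roughly $171.5 billion in data center debt tallied elsewhere in this piece:

```python
# Non-hyperscaler, non-OpenAI AI compute revenue as a share of
# data center spending (both figures come from this piece).
compute_revenue = 1_000_000_000       # under $1 billion of revenue in 2025
data_center_debt = 171_500_000_000    # ~$171.5 billion in data center debt

share = compute_revenue / data_center_debt
print(f"{share:.4%}")  # → 0.5831%
```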
Hyperscaler revenue is also immediately questionable. Microsoft’s deal with Nebius (per Nebius’s 6-K filing) can default in the event that Nebius cannot provide the capacity it sold out of its unfinished Vineland, New Jersey data center. That facility is being built by DataOne, a company that has never built an AI data center, whose CEO has his LinkedIn location set to “United Arab Emirates,” and which is funded in part by a concrete firm that is also a vendor on the construction project.
I also believe Microsoft is setting Nebius up to fail. Based on discussions with sources with direct knowledge of plans for the Vineland, New Jersey data center, Nebius has agreed to timelines that involve having 18,000 NVIDIA B200 and B300 GPUs by the end of January for a total of 50MW, with another 18,000 B300s due by the end of May. When I spoke with experts in the field about how viable these plans are, two laughed, and one told me to fuck off.
If Nebius fails to build the capacity, Microsoft can walk away, much like OpenAI can walk away from Stargate in the event that Oracle fails to build it on time (as reported by The Information in April), and I believe that this is the case for literally any data center provider that’s building a data center for any signed-up tenant. This is another layer of risk to data center development that nobody bothers to discuss, because everybody loves seeing these big, beautiful numbers.
Except the numbers might have become a little too beautiful for some.
A few weeks ago, the Financial Times reported that Blue Owl Capital had pulled out of the $10 billion Michigan Stargate Data Center project, citing “concerns about its rising debt and artificial intelligence spending.” To quote the FT, “Blue Owl had been in discussions with lenders and Oracle about investing in the planned 1 gigawatt data centre being built to serve OpenAI in Saline Township, Michigan.”
What debt, you ask? Well, Blue Owl — formerly the loosest legs in data center financing — was in CoreWeave’s $600 million and $750 million debt deals for its planned Virginia data center with Chirisa Technology Parks, as well as a $4 billion CoreWeave data center project in Lancaster, Pennsylvania, Stargate Abilene and Stargate New Mexico, Meta’s $30 billion Hyperion data center, and a $1.3 billion data center deal in Australia through Stack Infrastructure, a company it owns through its acquisition of IPI Partners.
To be clear, Blue Owl “pulling out” is not the same as a single lender walking away from a deal. It’s a BDC (Business Development Company) that both invests its own money and rallies together various banks, in this case SMBC, BNP Paribas, MUFG and Goldman Sachs (all part of Stargate New Mexico).
The private capital group has been the primary backer for Oracle’s largest data centre projects in the US, investing its own money and raising billions more in debt to build the facilities. Blue Owl typically sets up a special purpose vehicle, which owns the data centre and leases it to Oracle.
Blue Owl is incredibly well-connected and experienced in putting together these kinds of deals, and very likely went to the many banks it’s worked with over the years, who apparently had “concerns about its rising debt,” much of it issued by them! While rumors suggest that Blackstone may “step in,” the pool of banks that will actually back a $10 billion deal is fairly narrow, and “stepping in” would require billions of dollars and no small amount of legal logistics.
So, why are things looking shaky? Well, remember that thing about how this data center would be leased to Oracle? Well, Oracle had free cash flow of negative $13 billion on revenues of $16 billion, and its most recent earnings “beat” estimates only thanks to the sale of its $2.68 billion stake in Ampere. Its debt is exploding (with over a billion dollars in interest payments in its last quarter), its GPU gross margins are 14% (which does not mean profitable), its latest NVIDIA GB200 GPUs have a negative 100% gross margin, and it has $248 billion in upcoming data center leases yet to begin.
All, for the most part, to handle compute for one customer: OpenAI, which needs to raise $100 billion, I guess.

We’ve already got some signs of concern within the banking world around data center exposure.
In November, the FT reported that Deutsche Bank — which backed CoreWeave multiple times, along with several data centers — was “exploring ways to hedge its exposure to data centers after extending billions of dollars in debt,” including shorting a “basket of AI-related stocks” or buying default protection on some of its debt using synthetic risk transfers, which are when a bank sells the full or partial credit risk of a loan (or loans) to another party while keeping the loans on its books, paying a monthly fee to investors (this is a simplification).
In December, Fortune reported that Morgan Stanley (CoreWeave three times, IPI Partners, Hyperion, SoftBank Bridge Loan) was also considering synthetic risk transfers on “loans to businesses involved in AI infrastructure.”
Back in April, SMBC sold synthetic risk transfers tied to “private debt BDCs” — and while this predates the large data center deals done by Blue Owl, SMBC has overseen multiple Blue Owl deals in the past. In December, SMBC closed another SRT, selling off risk from “Australian and Asian project finance loans,” though I can’t confirm if any of them were data center related.
In December, Goldman Sachs paused a planned mortgage-bond sale for data center operator CyrusOne, with the intent to revive it in the first quarter of 2026. Oracle’s credit risk reached a 16-year high in the middle of December, with credit default swaps (basically, betting that Oracle will default on its debts, an unlikely yet no-longer-impossible event) climbing to their highest price since the great financial crisis.
While Morgan Stanley and Deutsche Bank’s SRTs have yet to close, it’s still notable that two of the largest players in data center financing feel the need to hedge their bets.
So, what exactly are they hedging against?
Simple! That tenants won’t arrive and debts won’t get paid.
I also believe they’re going to need bigger hedges, because I don’t think there is enough actual demand for AI to fill the data centers being built, and I think most data center loans will end up underwater within the next two years.
I realize we’ve taken a great deal of words to get here, but every single part was necessary to explain what I think happens next.
Let’s start by quoting my premium newsletter from a few weeks ago:
While many people talk about how circular the AI bubble may or may not be, the reality is that it's far more like a chain — a deeply vulnerable one held together by debt and venture capital.
A company buys GPUs from NVIDIA, at which point nobody is making any profit anymore.
These GPUs are purchased, for the most part, using debt provided by banks or financial institutions. While hyperscalers can and do fund GPUs using cashflow, even they have started to turn to debt.
At that point, the company that bought the GPUs sinks hundreds of millions of dollars to build a data center, and once it turns on, provides compute to a model provider, which then begins losing money selling access to those GPUs. For example, both OpenAI and Anthropic lose billions of dollars, and both rely on venture capital to fund their ability to continue paying for accessing those GPUs.
At that point, OpenAI and Anthropic offer either subscriptions — which cost far more to offer than the revenue they provide — or API access to their models on a per-million-token basis. AI startups pay to access these models to run their services, which end up costing more than the revenue they make, which means they have to raise venture capital to continue paying to access those models.
Outside of hyperscalers paying NVIDIA for GPUs out of cashflow, none of the AI industry is fueled by revenue. Every single part of the industry is fueled by some kind of subsidy.
As a result, the AI bubble is really a stress test of the global venture capital, private equity, private credit, institutional and banking system, and its willingness to fund all of this forever, because there isn't a single generative AI company that's got a path to profitability.
You see, every little link in the chain of pain is necessary to understand things.
In really simple terms, I believe that almost every investment in a data center or AI startup may go to zero.
Let me explain.
If we assume that 50% of the $171.5 billion in data center debt (so $85.75 billion) is in GPUs, that’s around 3.2GW of data center capacity, based on my model of NVIDIA’s approximate split of sales between different AI GPUs from my premium piece last week. The likelihood of the majority of these projects being A) completed within the next year and B) completed on budget is very, very small. Every delay increases the likelihood of default, as each of these projects is heavily debt-based.
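For what it's worth, those two numbers imply an all-in GPU cost per megawatt. This is a crude back-calculation from the figures above, not a claim about any actual project:

```python
# Crude back-calculation: if ~$85.75B of GPU spend buys ~3.2GW of
# capacity, what is the implied all-in GPU cost per megawatt?
gpu_spend = 0.5 * 171.5e9   # 50% of the $171.5B in data center debt
capacity_mw = 3.2 * 1000    # 3.2GW expressed in megawatts

cost_per_mw = gpu_spend / capacity_mw
print(f"${cost_per_mw / 1e6:.1f}M per MW")  # roughly $26.8M per MW of IT load
```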
The customers of these projects are either hyperscalers (who are only “doing AI” because they have no other hypergrowth ideas and because Wall Street currently approves) or AI startups, all of whom are unprofitable. While there are potentially hedge funds or other companies looking for “private AI” integrations, I think this is a very, very small market.
On top of that, AI compute itself may not be profitable, and because, by my estimate, everybody has spent about $85 billion filling data centers with the same GPUs, the aggregate price of renting out those GPUs will decline. The average price of renting a Blackwell GPU has already declined to $4.41 an hour according to Silicon Data, and that’s before the majority of Blackwell capacity comes online.
Yet the customer base shrinks from there, because the majority of AI startups aren’t actually renting GPUs — they build products on top of models built by OpenAI or Anthropic, who have made it clear they’re buying capacity from either hyperscalers or, in OpenAI’s case, getting Oracle or CoreWeave to build it for them. Why? Because building your own model is incredibly capital-intensive, and it’s hard to tell if the results will be worth it.
Now, let’s assume — I don’t actually believe it will, but let’s try anyway — that all of that 3.2GW of capacity comes online. How much compute does an AI company use? OpenAI claims it has 2GW of capacity as of the end of 2025, and is allegedly approaching 900 million weekly active users. I don’t think there are any AI companies with even 10% of that userbase, but even if there were, OpenAI spent $8.67 billion on inference through the end of September. Who can afford to pay even 10% of that a year? Or 5%?
Yet in reality, OpenAI is likely more indicative of the overall compute spend of the entire AI industry. As I’ve said, most companies are powered not by their own GPU-driven models, but by renting them from other providers.
OpenAI and Anthropic spent a combined $11.33 billion on Azure and AWS compute respectively through the first nine months of this year, and as the two largest consumers of AI compute, their spending suggests two things:
In fact, it would take sinking every single dollar of venture capital raised — over $200 billion — into AI compute every single year, and then some, just to provide the revenue to justify these deals.
In the space of a year, Microsoft Azure made $75 billion, Google Cloud $43 billion and Amazon Web Services $100 billion.
Need more proof? Still don’t believe me? Then skip to page 18 of NVIDIA’s most recent earnings:
Multi-year cloud service agreement commitments as of October 26, 2025, were $26 billion for which $1 billion, $6 billion, $6 billion, $5 billion, $4 billion, and $4 billion will be paid in fiscal years 2026 (fourth quarter), 2027, 2028, 2029, 2030, and 2031 & thereafter, respectively.
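As a quick arithmetic check, the per-period amounts in that disclosure do sum to the stated total (all figures in billions of dollars):

```python
# NVIDIA's disclosed multi-year cloud service agreement commitments,
# as of October 26, 2025 (figures in $B, from the filing quoted above).
commitments_billions = {
    "FY2026 (Q4)": 1,
    "FY2027": 6,
    "FY2028": 6,
    "FY2029": 5,
    "FY2030": 4,
    "FY2031 and thereafter": 4,
}

total = sum(commitments_billions.values())
print(total)  # → 26, matching the stated $26 billion total
```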
If there’s such incredible, surging demand, why exactly is NVIDIA spending six fucking billion dollars a year in 2026 and 2027 on cloud compute? NVIDIA doesn’t need the compute — it just shut down its AWS rival DGX Cloud! It looks far more like NVIDIA is propping up an industry with non-existent demand.
I’m afraid there is no secret AWS-sized spend waiting in the wings for the right moment to pounce. There is no secret demand wave, nor is there any capacity crunch that is holding back incredible swaths of revenue. Oracle’s $523 billion in remaining performance obligations are made up of OpenAI, Meta, and fucking NVIDIA.
For AI data centers to make sense, most startups would have to start becoming direct users of AI compute, while also spending more on cloud compute services than they’ve ever spent. The largest consumers of AI compute are both unprofitable, unsustainable monstrosities.
Eventually, reality will dawn on one or more of these banks. Projects will get delayed thanks to weather, budgetary issues, or customers walking away (as just happened to data center REIT Fermi). Loan payments will start going unpaid.
Elsewhere, AI startups will keep asking for money, again and again, and for a while they’ll keep raising, until the valuations get too high, or VC coffers get too low.
You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.
Anyway, I’m sure these numbers are great... oh my GOD!

In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what? 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.
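To put those filings in perspective, here's a sketch of the burn per dollar of revenue implied by the numbers above:

```python
# Loss per dollar of revenue implied by the two labs' filings
# (figures in millions of USD, taken from the text above).
zhipu_revenue, zhipu_loss = 27.0, 334.0        # H1 2025
minimax_revenue, minimax_burn = 53.4, 211.0    # first nine months of 2025

zhipu_ratio = zhipu_loss / zhipu_revenue        # dollars lost per dollar earned
minimax_ratio = minimax_burn / minimax_revenue  # dollars burned per dollar earned
print(round(zhipu_ratio, 1), round(minimax_ratio, 1))  # → 12.4 4.0
```

In other words, Zhipu lost over twelve dollars for every dollar of revenue it booked.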
It is time to wake up. These are the real-life costs of running an AI company. OpenAI and Anthropic are going to be even worse.
This is why nobody wants to take AI companies public. This is why nobody wants to talk about the actual costs of AI. This is why nobody wants you to know the hourly cost of running a GPU, and this is why OpenAI and Anthropic both burn billions of dollars — the margins fucking stink, every product is unprofitable, and none of these companies can afford their bills based on their actual cashflow.
Generative AI is not a functional industry, and once the money works that out, everything burns.
Though many AI data centers boast of having tenancy agreements, remember that these agreements are either with AI startups that will run out of money or hyperscalers with legal teams numbering in the thousands. Every single deal that Microsoft, Amazon, Meta, Google or NVIDIA signs is riddled with outs specifically hedging against this scenario, and there won’t be a damn thing that anybody can do if hyperscalers decide to walk away.
Before then, NVIDIA’s bubble is likely to burst. As I discussed a few weeks ago, NVIDIA claims to have shipped six million Blackwell GPUs, and while it may be employing very dodgy maths (claiming each Blackwell GPU is actually two GPUs because each one has two chips), my modeling of its last three quarters suggests that NVIDIA shipped around 5.33GW of GPUs — and based on reading about every single data center I can find, it doesn’t appear that many of them have been built and powered on.
Worse still, NVIDIA’s diversified revenue is collapsing. In Q1FY26, two customers represented 16% and 14% of revenue, in Q2FY26 two customers represented 23% and 16% of revenue, and in Q3FY26 four customers represented 22%, 15%, 13% and 11% of total revenue, with all that money going toward either GPUs or networking gear. I go into detail here, but I put it in a chart to show you why this is bad:

In simpler terms, NVIDIA’s revenue is no longer coming from a diverse swath of customers. In Q1FY26, NVIDIA had $30.84 billion of diversified revenue, Q2 $28.51 billion, and Q3 $22.23 billion.
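Here's how those diversified-revenue figures fall out of the customer-concentration disclosures. This is a sketch; the total quarterly revenue figures are NVIDIA's reported numbers, which I'm supplying here rather than taking from this piece:

```python
# Diversified revenue = reported quarterly revenue minus the share
# attributed to disclosed large customers (shares from NVIDIA's 10-Qs).
quarters = {
    # quarter: (total revenue in $B, disclosed large-customer shares)
    "Q1FY26": (44.06, [0.16, 0.14]),
    "Q2FY26": (46.74, [0.23, 0.16]),
    "Q3FY26": (57.01, [0.22, 0.15, 0.13, 0.11]),
}

diversified = {
    q: total * (1 - sum(shares)) for q, (total, shares) in quarters.items()
}
for q, value in diversified.items():
    print(q, f"${value:.2f}B")
# → Q1FY26 $30.84B, Q2FY26 $28.51B, Q3FY26 $22.23B
```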
NVIDIA GPUs are astronomically expensive — $4.5 million for a GB300 rack of 72 B300 GPUs, for example — and filling data centers full of them requires debt unless you’re a hyperscaler. While I can’t say for sure, I believe NVIDIA’s diversified revenue collapse is a sign that smaller data center projects are starting to have issues getting funded, and/or hyperscalers are pulling back on their GPU purchases.
To look through the eyes of an AI booster (all I’m seeing is blue and yellow, as usual!), one might say that these big customers are covering the loss of revenue, but the reality is that these big projects run on debt issued by banks that are increasingly worried about nobody paying them back.
The mistake that every investor, commentator, analyst and member of the media makes about NVIDIA is believing that its sales are an expression of demand for AI compute, when it’s really more of a statement about the availability of debt from banks and private credit.
Similarly, the continued existence of AI startups is an expression of the desperation of venture capital, and the continuing flow of massive funding rounds is a sign that they see no other avenues for growth.
Eventually, data centers are going to go unbuilt, and data center debt packages will begin to fall apart. Remember, Oracle’s $38 billion data center deal has actually yet to close, as has Stargate New Mexico. These deals, while seemingly trending positively, are both incredibly important to the future of the AI bubble, and any failure will spook an already-nervous market.
Only one link in the chain needs to break. Every part of the AI bubble — this fucking charade — is unprofitable, save for NVIDIA and the construction firms erecting future laser tag arenas full of negative-margin GPUs.
What happens if the debt stops flowing to data centers? How will NVIDIA sell those 20 million Blackwell and Vera Rubin GPUs?
What happens if venture capitalists start running low on funds, and can’t keep feeding hundreds of millions of dollars to AI startups so that they can feed them to Anthropic or OpenAI?
What happens to OpenAI and Anthropic’s already-negative-margin businesses when their customers run out of money?
What happens to Oracle or CoreWeave’s work-in-progress data centers if OpenAI can’t pay its bills? What happens to Anthropic’s $21 billion of Broadcom orders, or its tens of billions of dollars in Google Cloud spend?
In the last year, I estimate I’ve been asked the question “what if you’re wrong?” over 25 times. Every single time the question comes with an undercurrent of venom — the suggestion that I’m being an asshole for daring to question the wondrous AI bubble.
Every single person who has asked this has been poorly-read — both in terms of my work and the surrounding economics and technological possibilities of Large Language Models — and believes they’re defending technology, when in reality they’re defending growth, and the Rot Economy’s growth-at-all-costs mindset.
In many cases they are not excited about technology, but about the prospect of being first in line to lick an already-sparkling boot. This has never been about progress or productivity. If it was, we’d actually see progress, or productivity boosts, or anything other than the frothiest debt and venture markets of all time. Large Language Models do not create novel concepts, they are inconsistent and unreliable, and even the “good” things they do vary wildly thanks to the dramatic variance of a giant probability machine. LLMs are not good enough for people to pay regular software prices at any scale, and the consequence is that every single dollar spent on GPUs has served exactly one purpose: manipulating the value of these companies’ stocks. AI does not deliver the promised business returns and may have negative gross margins. It is inconsistent, ugly, unreliable, expensive and environmentally ruinous, pissing off a large chunk of consumers and underwhelming most of the rest, other than those convinced they’re smart for using it, or those who have resigned themselves to a confidence game sold by a tech industry that, some time ago, stopped making products primarily focused on solving the problems of consumers or businesses.
You may say that I’m wrong because Google, Microsoft, Meta and Amazon continue to post healthy net income and revenue growth, but as I previously said, these companies are not sharing their AI revenues, and their existing businesses are still growing thanks to the massive monopolies they’ve built.
And I want to make a plea to AI boosters and bullish analysts alike: you are being had. Satya Nadella, Sam Altman, Dario Amodei, Jensen Huang, Mark Zuckerberg, Larry Ellison, Safra Catz, Elon Musk, Clay Magouyrk, Mike Sicilia, Michael Truell, Aravind Srinivas — all of them are laughing at you behind your back, because they know that you are never going to ask the obvious questions that would defeat my arguments, and that you will never, ever push back on them.
The enshittification of the shareholder has the downstream effect of an enshittification of the media and Wall Street analysts writ large. These companies own you. They treat you with disdain and condescension, because they know you’ll let them. They know that no sell-side analyst will ever ask them “when will you be profitable?” or “how much are you spending?”; and if one does ask, they know the analyst will experience temporary amnesia and forget whatever answer they give, because these are the incentives of an enshittified stock market, where stocks are not extrapolations of shareholder value but chips in a fucking casino where the house always wins and changes the rules every three months.
They have changed the meaning of “stock” to mean “what the market will reward,” and when you allow companies to start dictating the terms of what will be rewarded — as neoliberalism, Friedman, Reagan, Nixon, NAFTA, Thatcher, and every other policy has, orienting everything exclusively around growth — companies eventually cut off any powers that may curtail any reevaluation of the fundamental terms of capitalism, and the incentives within.
Focusing on growth-at-all-costs thinking naturally encourages, enables, and empowers grifters, because all they ever have to promise is “more” — more users, more debt, more venture, more features, more everything.
The very institutions that are meant to hold companies accountable — analysts and the media — are far more desperate to trade scoops for interviews, to pull punches, to find ways to explain why a company is right rather than understand what the company is doing, and this is something pushed not by writers, but by editors that want to make sure they stay on the right side of the largest companies.
And if I’m right, OpenAI’s death will kill off most if not all other AI startups, Anthropic included. Every investor that invested in AI will take massive losses. Every startup that builds on the back of their models will see their company fold, if it hasn’t already due to the massive costs and upcoming price increases. The majority of GPU-based data centers — which really have no other revenue stream — will be left inert, likely powered down, waiting for the day that somebody works it all out, which they won’t, because literally everybody has these things now and I truly believe they’ve tried everything.
I don’t “hate on AI” because I am a hater, I hate on it because it fucking sucks and what I’m worried about happening seems to be happening. The tech industry has run out of hypergrowth ideas, and in its desperation hitched itself to the least-profitable hardware and software in history, then spent three straight years lying about what was possible to the media, analysts and shareholders.
And they were allowed to lie, because everybody lapped it the fuck up. They didn’t need to worry about convincing anybody. Financiers, editors, analysts and investors were already drafting reasons why they were excited about something they didn’t really understand or believe in, other than the fact it promised more.
This is what happens when you make everything about growth: everybody becomes stupid, ready to be conned, ready to hear what the next big growth thing is because asking nasty questions gets you fucking fired.
And what’s left is a tech industry that doesn’t build technology, but growth-focused startups.
Look at Silicon Valley. Do you see these fucking people ever building a new kind of computer? Do you believe these men are fit to even imagine a future? These men care about the status quo. They want to always have more software to sell, or more ways to increase advertising revenue, so that the stock number goes up and they receive more money in the form of stock compensation. They are concerned with neither actual business value, nor honest exchange of value, nor societal value. They exist only for shareholder value, which is how their boards of directors incentivize them.
And really, if you’re still defending AI: does it even matter to any of you that this software fucking sucks? If you think it’s good, you don’t know much about software! It does not reliably respond to a user’s or a programmer’s intent. That’s bad software. I don’t care that you have heard developers really like it, because that doesn’t fix the underlying economic and social poison in AI. I don’t care that it sort of replaced search for you. I don’t care if you “know a team of engineers that use it.” Every single AI app is subsidized, its price is fake, you are being lied to, and none of this is real.
When the collapse happens, do not let a single person that waved off the economics have a moment’s peace. Do not let anybody who sat in front of Dario Amodei or Sam Altman and squealed with delight at whatever vacuous talking points they burped out forget that they didn’t push them, they didn’t ask hard questions, they didn’t worry or wonder or feel any concern for investors or the general public. Do not let a single analyst that called AI skeptics “luddites” or equated them to flat Earthers hear the end of it. Do not let anybody who claimed that we “lost control of AI” or that models “blackmailed developers” go without their complimentary “Fell For It Again” badge.
When it happens, I promise I won’t be too insufferable, but I will be calling for accountability for anybody who boosted AI 2027, who sat in front of Sam Altman or Dario Amodei and refused to ask real questions, and for anyone who collected anything resembling “detailed notes” about me or any other AI skeptic. If you think I’m talking about you, I probably am, and I have a question: why didn’t you approach the AI companies with as much skepticism as you did the skeptics?
I also promise you, if I’m wrong, I’ll happily explain how and why, and I’ll do so at length, too. I will have links and citations, I’ll do podcast episodes. I will make a good faith effort to explain every single failing, because my concern is the truth, and I would love everybody else to follow suit.
Do you think any booster will have the same courtesy? Do you think they care about the truth? Or do they just want to get a fish biscuit from Sam Altman or Jensen Huang?
Pathetic.
It’s times like this where it’s necessary to make the point that there is absolutely “enough money” to end hunger or build enough affordable housing or have universal healthcare, but they would be “too expensive” or “not profitable enough,” despite having a blatant and obvious economic benefit in that more people would have happier, better lives and — if you must see the world in purely reptilian senses — enable many more people to have disposable income and the means of entering the economy on even terms.
By contrast, investments in AI do not appear to be driving much economic growth at all, other than in the revenue driven to NVIDIA from selling these GPUs, and the construction of data centers themselves. Had Microsoft, Google, Meta and Amazon sunk $776 billion into building housing and renting it out, the world would be uneven, we would have horrible new landlords, and it would still be a great deal better than one where nearly a trillion dollars is being wasted propping up a broken, doomed industry, all because the people in charge are fucking idiots obsessed with growth.
The future, I believe, spells chaos, and I am trying to rise to the occasion. My work has transformed from being critical of the tech industry to a larger critique of the global financial system. I’ve had to learn accountancy, the mechanics of venture and private equity, and all sorts of annoying debt-related language, all so that I can sufficiently explain what’s going on.
I see several worrying signs I have yet to fully understand. The Discount Window — where banks go when they need quick liquidity as a last resort — has seen a steady increase of loans on its books since September 2024, suggesting that financial institutions are facing liquidity issues, and the last few times that this has happened, financial crises followed.
There is also a brewing bullshit crisis in Private Equity, which is heavily invested in data centers.
In September, auto parts maker First Brands collapsed in a puff of fraud, with billions of dollars “vanishing” after it double-pledged the same collateral to multiple loans, hid liabilities off its balance sheet, falsified invoices, and even leased some of the parts it sold. This wasn’t a case where smaller lenders were swindled, either — global investment banks UBS and Jefferies both lost hundreds of millions of dollars, along with asset manager BlackRock through associated funds.
Subprime auto lender Tricolor collapsed in similar circumstances, burning JPMorgan, Jefferies, and Zions Bancorporation, who also loaned money to First Brands. A similar situation is currently brewing with solar company PosiGen, which recently filed for bankruptcy after, you guessed it, double-pledging collateral for loans. One of its equity financing backers is Magnetar Capital, which invested in CoreWeave.
What appears to be happening is simple: large financial institutions are issuing debt without doing the necessary due diligence or considering the future financial health of the companies involved. Private Equity firms are also heavily leveraged, saddling acquisitions with debt and playing silly games where they “volatility launder” — deliberately choosing not to regularly revalue the assets they hold, so that returns (or asset values) look better to their investors.
I don’t really know what this means right now, but I am worried that these data center loans have been entered into under similarly-questionable circumstances. Every single data center deal is based on the phony logic that AI will somehow become profitable one day, and if there’s even one First Brands situation, the entire thing collapses.
I realize this is the longest thing I’ve ever written (or should I say, written so far?), and I want to end it on a positive note, because hundreds of thousands of people now read and listen to my work, and it’s important to note how much support I’ve received and how awesome it is seeing people pick up my work and run with it.
I want to be clear that there is very little that separates you from the people running these companies, or many analysts. I have taught myself everything I know from scratch, and I believe you can too, and I hope I have been able to and will be able to teach you everything I know, which is why everything I write is so long. Well, that and I’m working out what I’m going to say as I write it.
The AI bubble is an inflation of capital and egos, of people emboldened and outright horny over the prospect of millions of people’s livelihoods being automated away. It is a global event where we’ve realized how the global elite are just as stupid and ignorant as anybody you’d meet on the street — Business Idiots that couldn’t think their way out of a paper bag, empowered by other Business Idiots that desperately need to believe that everything will grow forever.
I have had a tremendous amount of help in the last year — from my editor Matt Hughes, Robert and Sophie at Cool Zone Media, Better Offline producer Matt Osowski, Kakashii and JustDario (two pseudonymous analysts that know more about LLMs and finance than most people I read), Kasey Kagawa, Ed Ongweso Jr., Rob Smith, Bryce Elder and Tabby Kinder of the Financial Times, all of whom have been generous with their time, energy and support. A special shoutout to Caleb Wilson (Kill The Computer) and Arif Hasan (Wide Left), my cohosts on our NFL podcast 60 Minute Drill.
And I’ve heard from thousands of you about how frustrated you are, and how none of this makes sense, and how crazy you feel seeing AI get shoved into every product, how insane it makes you feel when somebody tells you that LLMs are amazing when their actual outputs fucking suck. We are all being lied to, we all feel gaslit and manipulated and punished for not pledging ourselves to Sam Altman’s graveyard smash, but I believe we are right.
In the last year, my work has gone from being relatively popular to being cited by multiple major international news organizations, hedge funds, and internal investor analyses. I was profiled by the Financial Times, went on the BBC twice, and watched as my Subreddit, r/BetterOffline, grew to around 80,000 visitors a week and became one of the 20 largest podcast Subreddits, which is a bigger deal than it sounds.
I believe there are millions of people that are tired of the state of the tech industry, and disgusted at what these people have done to the computer. I believe that they outnumber the boosters, the analysts and the hype-fiends that have propped up this era. I believe that a better world is possible by creating a meaningful consensus around making the powerful prove themselves to us rather than proving it for them.
I am honoured that you read me, and even more so if you read this far. I’ll see you in 2026.
2025-12-23 01:33:52
Hello and welcome to the final premium edition of Where's Your Ed At for the year. Since kicking off premium, we've had some incredible bangers that I recommend you revisit (or subscribe and read in the meantime!):
I pride myself on providing a ton of value in these pieces, and I really hope if you're on the fence about subscribing you'll give me a look.
Last week was a remarkably grim one for the AI industry, resplendent with terrible news and "positive stories" that still leave investors with a vile taste in their mouths.
Let's recount:
There are a few common threads between all of these stories:
And the other key thread is the year 2026.
Next year is meant to be the year that everything changes: the year OpenAI has a gigawatt of data centers built with Broadcom and AMD, the year Stargate Abilene's 8 buildings are fully built and energized, and the year OpenAI opens Stargate UAE.
Here in reality, absolutely none of this is happening, and I believe that 2026 is the year when everything begins to collapse.
In today's piece, I'm going to line up the sharp objects sitting right next to an increasingly-wobbling AI bubble, and explain why everything hinges on a looming cash crunch for OpenAI, AI data centers, those funding AI data centers, and venture capital itself.
2025-12-16 01:22:14
I keep trying to think of a cool or interesting introduction to this newsletter, and keep coming back to how fucking weird everything is getting.
Two days ago, cloud stalwart Oracle crapped its pants in public, missing analyst revenue estimates and revealing that it spent (to quote Matt Zeitlin of Heatmap News) more than $4 billion more on capital expenditures that quarter than analysts expected, for a total of $12 billion.
The "good" news? Oracle has remaining performance obligations (RPOs) of $523 billion. For those that aren’t fluent in financese, this is future contracted revenue that hasn’t been paid for, or even delivered:
"Remaining Performance Obligations (RPO) increased by $68 billion in Q2—up 15% sequentially to $523 billion—highlighted by new commitments from Meta, NVIDIA, and others," said Oracle Principal Financial Officer, Doug Kehring.
So we've got — per Kakashii on Twitter — $68 billion of new compute deals signed in the quarter, with $20 billion from Meta (announced in October), and a few other mystery clients that could include the ByteDance/TikTok deal.
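As a quick sanity check — using only the figures quoted above, not anything from Oracle's actual filings — the two disclosed numbers are at least consistent with each other:

```python
# Checking that Oracle's disclosed RPO figures hang together (all in $B).
rpo_now = 523      # end of Q2, as quoted
rpo_increase = 68  # new commitments signed in the quarter

rpo_prior = rpo_now - rpo_increase            # implied prior-quarter RPO: 455
sequential_growth = rpo_increase / rpo_prior  # ~0.149, i.e. the quoted "15%"

print(f"Implied prior RPO: ${rpo_prior}B, up {sequential_growth:.0%} sequentially")
```

Remember, though: RPO being internally consistent says nothing about whether any of that $523 billion ever turns into cash.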
But wait. Hold the fort — what was that? NVIDIA?
NVIDIA? The accelerated computing company? The largest company on the stock market? That NVIDIA? Why is NVIDIA buying cloud compute? The Information reported back in September that NVIDIA was "stepping back from its nascent cloud computing business," intending to use it "for its own researchers."
Well, I sure hope those researchers need compute! NVIDIA has, according to its November 10-Q, agreed to $26 billion in cloud compute deals, spending $6 billion a year in each of Fiscal Years 2027 and 2028, $5 billion in FY2029, $4 billion in FY2030, and $4 billion in FY2031.

AI boosters damn near ripped their jorts jumping for joy at the sight of this burst of new performance obligations, yet it seems that the reason that NVIDIA CEO Jensen Huang said back in October that AI compute demand had gone up "substantially" in the last six months was because NVIDIA had stepped in to increase it. It signed a deal to buy $6.3 billion of unused capacity from CoreWeave, another to buy $1.5 billion from Lambda, and now apparently needs to buy even more compute from Oracle, despite Huang saying in November that cloud GPUs are "sold out", which traditionally means you "can't rent them."
We are in the dynasty of bullshit, a deceptive epoch where analysts and journalists who are ostensibly burdened with telling the truth feel the need to continue pushing the Gospel According To Jensen. When all of this collapses there must be a reckoning with how little effort was made to truly investigate the things that executives are saying on the television, in press releases, in earnings filings and even on social media, all because the market consensus demanded that The Number Must Continue Going Up.
The AI era is one of mythology, where billions in GPUs are bought to create supply for imaginary demand, where software is sold based on things it cannot reliably do, where companies that burn billions of dollars are rewarded with glitzy headlines and not an ounce of cynicism, and where those that have pushed back against it have been treated with more skepticism and ire than those who would benefit the most from the propagation of propaganda and outright lies.
So today I'm giving you Mythbusters — AI Edition. This is the spiritual successor to How To Argue With An AI Booster, where I address the technical, financial and philosophical myths that underpin the endless sales of GPUs and ever-increasing valuation of OpenAI.
This is going to be fun, because I truly believe that both the financial and tech press take this all a little too seriously, in the sense that everything is so dull. With a handful of exceptions (The Register being the best example), most publications treat financial reporting as something that must be inherently separate from any kind of analysis or criticism. That’s why, if a publication calls bullshit on something insane, that call is almost always segmented away in its own little piece.
If you asked me why I thought this is the case, I’d say it’s probably because (excluding those cases of genuine malfeasance and fraud, like Enron and Worldcom and Nortel) we haven’t seen anything as egregiously offensive or dishonest as what’s emerged from the AI bubble. And so, reporters are accustomed to a level of civility that, frankly, isn’t warranted.
I also think the total lack of levity or self-awareness leads to less-effective analysis, too.
For example, lots of people are freaking out about Disney investing $1 billion for an equity stake in OpenAI, all while licensing its characters to be used in Sora, and I really think you can simmer the deal down to two points:
Oh, and while I'm here, let's talk about TIME naming the "Architects of AI" its person (people) of the year. Who fuckin' cares! Marc Benioff, one of the biggest AI boosters in the world, owns TIME, and has already run no less than three other pieces of booster propaganda, including everything from "researchers finding that AIs can scheme, deceive or blackmail," to the supposed existence of an "AI arms race" to "coding tools like Cursor and Claude Code becoming so powerful that engineers across top AI companies are using them for virtually every aspect of their work."
Are any of these points true? No! But that doesn't stop them being printed! Number must go up! AI bubble must inflate! No fact check! No investigation! Just print! Print AI Now! Make AI Go Big Now! Jensen Sell GPU! Ahhhhhhhhhhh!
Okay, alright, let's go into it. Let's bust some myths.
That sounded better in my head.
2025-12-09 01:02:17
If you enjoy this free newsletter, why not subscribe to Where's Your Ed At Premium? It's $7 a month or $70 a year, and helps support me putting out these giant free newsletters!
At the end of November, NVIDIA put out an internal memo (that was leaked to Barron's reporter Tae Kim, who is a huge NVIDIA fan and knows the company very well, so take from that what you will) that sought to get ahead of a few things that had been bubbling up in the news, a lot of which I covered in my Hater’s Guide To NVIDIA (which includes a generous free intro).
Long story short, people have a few concerns about NVIDIA, and guess what, you shouldn’t have any concerns, because NVIDIA’s very secret, not-to-be-leaked-immediately document spent thousands of words very specifically explaining how NVIDIA was fine and, most importantly, nothing like Enron.
As an aside: NVIDIA wrote this note as a response to both Michael Burry and a guy called “Shanaka Anslem Perera,” who wrote a piece called “The Algorithm That Detected a $610 Billion Fraud: How Machine Intelligence Exposed the AI Industry’s Circular Financing Scheme” that I’ve been sent about 11 times.
The reason I’m not linking to this piece is simple: it’s full of bullshit. In one part, Perera talks about “major semiconductor distributor Arrow Electronics” stating things in its Q3 2025 earnings about NVIDIA, yet Arrow made no statements about NVIDIA of any kind on its earnings call, in its 10-Q, or in its earnings presentation. If you need another example, Perera claims that when “Nvidia launched the Hopper H100 architecture in Q2 fiscal 2023—also amid reported supply constraints and strong demand—inventory declined 18% quarter-over-quarter as the company fulfilled backlogged orders.”
Actually looking at NVIDIA’s inventory for that period shows that inventory increased quarter over quarter. I have not heard of Perera before, but his LinkedIn says he is the “CEO at Pet Express Sri Lanka.” I would suggest getting your financial advice elsewhere, and at a minimum, making sure that you read outlets that actually source their data.
Anyway, all of this is fine and normal. Companies do this all the time, especially successful ones, and there is nothing to be worried about here, because after reading all seven pages of the document, we can all agree that NVIDIA is nothing like Enron.
No, really! NVIDIA is nothing like Enron, and it’s kind of weird that you’re saying that it is! Why would you say anything about Enron? NVIDIA didn’t say anything about Enron.
Okay, well now NVIDIA said something about Enron, but that’s because fools and vagabonds kept suggesting that NVIDIA was like Enron, and very normally, NVIDIA has decided it was time to set the record straight.
And I agree! I truly agree. NVIDIA is nothing like Enron.
Putting aside how I might feel about the ethics or underlying economics of generative AI, NVIDIA is an incredibly successful business with incredible profits, and it holds an effective monopoly through CUDA (explained here), the underlying software layer for running software on its GPUs — which, in practice, means generative AI, and not really much else with any kind of revenue potential.
And yes, while I believe that one day this will all be seen as one of the most egregious wastes of capital of all time, for the time being, Jensen Huang may be one of the most successful salespeople in business history.
Nevertheless, people have somewhat run away with the idea that NVIDIA is Enron, in part because of the weird, circular deals it’s built with Neoclouds — dedicated AI-focused cloud companies — like CoreWeave, Lambda and Nebius, who run data centers full of GPUs sold by NVIDIA, which they then use as collateral for loans to buy more GPUs from NVIDIA.
Yet as dodgy and weird and unsustainable as this is, it isn’t illegal, and it certainly isn’t Enron, because, as NVIDIA has been trying to tell you, it is nothing like Enron!
Now, you may be a little confused — I get it! — that NVIDIA is bringing up Enron at all. Nobody seriously thought that NVIDIA was like Enron before (though JustDario, who has been questioning its accounting practices for years, is a little suspicious), because Enron was one of the largest criminal enterprises in history, and NVIDIA is at worst, I believe, a big, dodgy entity that is doing whatever it can to survive.
Wait, what’s that? You still think NVIDIA is Enron? What’s it going to take to convince you? I just told you NVIDIA isn’t Enron! NVIDIA itself has shown it’s not Enron, and I’m not sure why you keep bringing up Enron all the time!
Stop being an asshole. NVIDIA is not Enron!
Look, NVIDIA’s own memo said that “NVIDIA does not resemble historical accounting frauds because NVIDIA's underlying business is economically sound, [its] reporting is complete and transparent, and [it] cares about [its] reputation for integrity.”
Now, I know what you’re thinking. Why is the largest company on the stock market having to reassure us about its underlying business economics and reporting? One might immediately begin to think — Streisand Effect style — that there might be something up with NVIDIA’s underlying business. But nevertheless, NVIDIA really is nothing like Enron.
But you know what? I’m good. I’m fine. NVIDIA, grab your coat, we’re going out, let’s forget any of this ever happened. Wait, what was that?
First, unlike Enron, NVIDIA does not use Special Purpose Entities to hide debt and inflate revenue. NVIDIA has one guarantee for which the maximum exposure is disclosed in Note 9 ($860M) and mitigated by $470M escrow. The fair value of the guarantee is accrued and disclosed as having an insignificant value. NVIDIA neither controls nor provides most of the financing for the companies in which NVIDIA invests.
Oh, okay! I wasn’t even thinking about that at all, I was literally just saying how you were nothing like Enron, we’re good. Let’s go home-
Second, the article claims that NVIDIA resembles WorldCom but provides no support for the analogy. WorldCom overstated earnings by capitalizing operating expenses as capital expenditures. We are not aware of any claims that NVIDIA has improperly capitalized operating expenses. Several commentators allege that customers have overstated earnings by extending GPU depreciation schedules beyond economic useful life. Rebutting this claim, some companies have increased useful life estimates to reflect the fact that GPUs remain useful and profitable for longer than originally anticipated; in many cases, for six years or more. We provide additional context on the depreciation topic below.
I…okay, NVIDIA is also not like WorldCom either. I wasn’t even thinking about WorldCom. I haven’t thought of them in a while.
On June 25, 2002, WorldCom, the second-largest telecommunications company in the United States, admitted that its accountants had overstated its 2001 and first quarter 2002 earnings by $3.8 billion. On July 21 of the same year, WorldCom filed for bankruptcy. On August 8, 2002, the company admitted that it had misclassified at least another $3.8 billion.
In the investigation that followed the initial revelations by WorldCom, it was revealed that the company had misstated earnings by approximately $11 billion. This remains one of the largest accounting scandals in United States history. The fall in the value of WorldCom stock after revelations about the massive accounting fraud led to over $180 billion in losses by WorldCom’s investors.
WorldCom, which began operating under the name Long Distance Discount Services in 1983, was led by one of its founders, CEO Bernard Ebbers, from 1985 to 2002. Under Ebbers’s leadership, the company engaged in a series of acquisitions, becoming one of the largest American telecommunications companies. In 1997, the company merged with MCI, making it the second largest telecom company after AT&T. In 1999, it attempted to merge with Sprint, which would have made it the largest in the industry. However, this merger was scrapped due to the intervention of the Department of Justice, which feared a WorldCom monopoly.
WorldCom stock, which rose more than 50 percent on rumors of this merger, began to fall. Ebbers then tried to grow his company through new customers rather than corporate mergers, but was unable to do so because the sector was saturated by 2000. He borrowed significantly so that WorldCom would have enough cash to cover anticipated margin calls, commonly used to prove that a company has funds to cover potential speculative losses. Desperate to keep his company’s stock prices high, Ebbers pressured company accountants to show robust growth on earnings statements.
…NVIDIA, are you doing something WorldCommy? Why are you bringing up WorldCom?
To be clear, WorldCom was doing capital F fraud, and its CEO Bernie Ebbers went to prison after an internal team of auditors led by WorldCom VP of internal auditing Cynthia Cooper reported $3.8 billion in “misallocated expenses and phony accounting entries.”
So, yeah, NVIDIA, you were really specific about saying you didn’t capitalize operating expenses as capital expenditures. You’re…not doing that, I guess? That’s great. Great stuff. I had literally never thought you had done that before. I genuinely agree that NVIDIA is nothing like WorldCom.
Anyway, also glad to hear about the depreciation stuff, looking forward to reading-
Third, unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years.
NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements. Our customers are subject to strict credit evaluation to ensure collectability. NVIDIA would disclose any receivable longer than one year in long-term other assets. The $632M "Other" balance as of Q3 does not include extended receivables; even if it did, the amount would be immaterial to revenue.
Erm…
Alright man, if anyone asks about whether you’re like famed dot-com crashout Lucent Technologies, I’ll be sure to correct them. After all, Lucent’s situation was really different — well…sort of. Lucent was a giant telecommunications equipment company, one that was, for a time, extremely successful, really really successful, in fact, turned around by the now-infamous Carly Fiorina.
From a 2010 profile in CNN:
Yet Fiorina’s campaign biography quickly skates over the stint that made her a star: her three-year run as a top executive at Lucent Technologies. That seems puzzling, since unlike her decidedly mixed record at HP, Fiorina’s tenure at Lucent has all the outward trappings of success.
Lucent reported a stream of great results beginning in 1996, after Fiorina, who had been a vice-president at AT&T (T), helped oversee the company’s spin-off from Ma Bell. By the time she left to run HP in 1999 revenues were up 58%, to $38 billion. Net income went from a small loss to $4.8 billion profit. Giddy investors bid up Lucent’s stock 10-fold. And unlike HP, where Fiorina instituted large layoffs—a fact Senator Boxer loves to mention whenever possible—Lucent added 22,000 jobs during Fiorina’s tenure.
NVIDIA, this sounds great — why wouldn’t you want to be compared to Lucen-
In 1997 Fiorina took over the group selling gear to such “service provider networks.” The company reported that sales to such networks climbed from $15.7 billion in fiscal 1997 to $19.1 billion in 1998. In 1999 they hit an amazing $23.6 billion. In the midst of this rise Fortune named Fiorina — then largely anonymous outside of telecom — to the top of its first list of the country’s most powerful women in business. A star was born.
As Wall Street became fixated on equipment companies’ growth, the whole industry entered a manic phase. With capital easy to come by, Qwest, Worldcom and their peers laid more fiber and installed far more capacity than customers needed. Much like the housing bubble that was just beginning to inflate, easy credit fed the telecom bubble.
Lucent and its major competitors all started goosing sales by lending money to their customers. In a neat bit of accounting magic, money from the loans began to appear on Lucent’s income statement as new revenue while the dicey debt got stashed on its balance sheet as an allegedly solid asset. It was nothing of the sort. Lucent said in its SEC filings that it had little choice to play the so-called vendor financing game, because all its competitors were too.
Oh.
So, to put it simply, Lucent was classifying debt as an asset (we're getting into technicalities here — it sort of was an asset, but Lucent was really counting money from loans as revenue, which is dodgy and bad and accountants hate it), and did something called “vendor financing,” which means you lend somebody money to buy something from you. It turns out Lucent did a lot of this.
In the giant PathNet deal that Fiorina oversaw, Lucent agreed to fund more than 100% of the company’s equipment purchases, meaning the small company would get both Lucent gear at no money down and extra cash to boot. Yet how could such a loan to PathNet make sense for Lucent, even based on the world as it appeared in the heady days of 1999? The smaller company had barely $100 million in equity (and that’s based on generous accounting assumptions) on top of which it had already balanced $350 million in junk bonds paying 12.25% interest. Adding $440 million in loans from Lucent to this already debt-heavy capital structure would jack the company’s leverage up to 8 to 1, and potentially even higher as they drew more of the loan.
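To spell out the arithmetic in that passage (using Fortune's quoted figures, which the piece itself admits rest on generous accounting assumptions):

```python
# PathNet's capital structure per the Fortune piece (all in $M, rough).
equity = 100        # "barely $100 million in equity"
junk_bonds = 350    # existing junk bonds paying 12.25% interest
lucent_loans = 440  # committed loans from Lucent

leverage = (junk_bonds + lucent_loans) / equity  # debt-to-equity ratio
print(f"Leverage: {leverage:.1f} to 1")  # 7.9 to 1 — the piece's "8 to 1"

# And the interest burden from the existing junk bonds alone:
print(f"Junk-bond interest: ${junk_bonds * 0.1225:.1f}M a year")  # ~$42.9M
```

Nearly $43 million a year in interest on the bonds alone, for a company Lucent was funding at more than 100% of its equipment purchases. You can see why the loan made no sense even in 1999 terms.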
Okay, NVIDIA, I hate to say this, but I kind of get why somebody might say you’re doing Lucent stuff. After all, rumour has it that your deal with OpenAI — a company that burns billions of dollars a year — will involve it leasing your GPUs, which sure sounds like you’re doing vendor financing...
-we do not disclose any vendor financing arrangements-
Fine! Fine.
Anyway, Lucent really fucked up big time, indulging in the dark art of circular vendor financing. In 1998 it signed its largest deal — a $2 billion “equipment and finance agreement” — with telecommunications company Winstar, which promised to bring in “$100 million in new business over the next five years” and build a giant wireless broadband network, along with expanding Winstar’s optical networking.
To quote The Wall Street Journal:
Winstar was one of scores of stand-alone, start-up companies created in the late 1990s to compete in the market for local telecom services. These firms, known as "competitive local exchange carriers," or CLECs, raised billions of dollars in debt and equity financing, and embarked upon ambitious plans to compete with "incumbent" carriers. For a time in the late 1990s, their stocks were hot properties, outpacing even Internet stocks.
In December 1999, WIRED would say that Winstar’s “small white dish antennas…[heralded] a new era and new mind-set in telecommunications,” and included this awesome quote about Lucent from CEO and founder Will Rouhana:
On one level we are a customer and they are a supplier. On another level they are a financier and we are a borrower. On yet another level they are providing services around the world to accelerate our development. They also want to use our service, and have guaranteed $100 million in business.
Fuck yeah!
But that’s not the only great part of this piece:
WinStar is publicly traded (Nasdaq: WCII), has more than 4,000 employees, and reports more than $300 million in annualized core revenues.
Annualized revenues, very nice. We love annualized revenues, don't we folks? A company making about $25 million a month, a year after taking on $2 billion in financing from Lucent. Weirdly, Winstar’s Wikipedia page says that revenues were $445.6 million for the year ending 1999 — or around $37.1 million a month.
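To show my working on those two monthly figures:

```python
# Winstar's revenue, two ways (all in $M).
wired_annualized = 300    # WIRED's "more than $300 million in annualized core revenues"
wikipedia_fy1999 = 445.6  # full-year 1999 revenue, per Wikipedia

print(f"WIRED's figure:   ~${wired_annualized / 12:.0f}M a month")  # ~$25M
print(f"Wikipedia figure: ~${wikipedia_fy1999 / 12:.1f}M a month")  # ~$37.1M
```

That's a roughly $145 million gap between "annualized core revenues" and what the full year actually looked like — which is exactly why you should squint whenever anyone leads with an annualized number.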
Winstar loved raising money — two years later, in November 2000, it would raise $1.02 billion, for example — and it raised a remarkable $5.6 billion between February 1999 and July 2001, according to the Wall Street Journal. $900 million of that came in December 1999 from an investment by Microsoft and “several investment firms,” with analyst Greg Miller of Jefferies & Co saying:
"The Microsoft investment is a significant endorsement that the technology will be used more aggressively in the future," said Greg Miller, an analyst at Jefferies & Co. "WinStar can use the capital."
Cool!
Another fun thing happened in November 2000, too: Lucent admitted it had overstated its fourth-quarter profits by improperly recording $125 million in sales, reducing that quarter’s results from “profitable” to “break-even.”
Things would eventually collapse when Winstar couldn’t pay its debts, filing for Chapter 11 bankruptcy protection on April 18, 2001, after failing to pay $75 million in interest payments to Lucent, which had cut access to the remaining $400 million of its $1 billion loan to Winstar as a result. Winstar would file a $10 billion lawsuit in bankruptcy court in Delaware the very same day, claiming that Lucent breached its contract and forced Winstar into bankruptcy by, well, not offering to give it more money that it couldn’t pay off.
Elsewhere, things had begun to unravel for Lucent. A January 2001 story from the New York Times told a strange story of Lucent, a company that had made over $33 billion in revenue in its previous fiscal year, asking to defer the final tranche of payment — $20 million — for an acquisition due to “accounting and financial reporting considerations.”
Why? Because Lucent needed to keep that money on the books to boost its earnings, as its stock was in the toilet, and was about to announce it was laying off 10,000 people and a quarterly loss of $1.02 billion.
Over the course of the next few years, Lucent would sell off various entities, and by the end of September 2005 it would have 30,500 staff — down from 157,000 — and a stock price of $2.99, down from a high of $75 a share at the end of 1999. According to VC Tomasz Tunguz, Lucent had $8.1 billion of vendor financing deals at its height.
Lucent was still a real company selling real things, but had massively overextended itself in an attempt to meet demand that didn’t really exist, and when Lucent realized that, it decided to create demand itself to please the markets. To quote MIT Tech Review (and author Lisa Endlich), it believed that “setting and meeting [the expectations of Wall Street] subsumed all other goals,” and that “Lucent had little choice but to ride the wave.”
To be clear, NVIDIA is quite different from Lucent. It has plenty of money, and the circular deals it does with CoreWeave and Lambda don’t involve the same levels of risk. NVIDIA is not (to my knowledge) backstopping CoreWeave’s business or providing it with loans, though NVIDIA has agreed to buy $6.3 billion of compute as the “buyer of last resort” of any unsold capacity. NVIDIA can actually afford this, and it isn’t illegal, though it is obviously propping up a company with flagging demand. NVIDIA also doesn’t appear to be taking on masses of debt to fund its empire, with over $56 billion in cash on hand and a mere $8.4 billion in long term debt.
Okay, phew. We got through this man. NVIDIA is nothing like Lucent either. Okay, maybe it’s got some similarities — but it’s different! No worries at all. I know I’m relaxed.
You still seem nervous, NVIDIA. I promise you, if anyone asks me if you’re like Lucent I’ll tell them you’re not. I’ll be sure to tell them you’re nothing like that. Are you okay, dude? When did you last sleep?
Inventory growth indicates waning demand
Claim: Growing inventory in Q3 (+32% QoQ) suggests that demand is weak and chips are accumulating unsold, or customers are accepting delivery without payment capability, causing inventory to convert to receivables rather than cash.
Woah, woah, woah, slow down. Who has been saying this? Oh, everybody? Did Michael Burry scare you? Did you watch The Big Short and say “ah, fuck, Christian Bale is going to get me! I can’t believe he played drums to Pantera! Ahh!”
Anyway, now you’ve woken up everybody else in the house and they’re all wondering why you’re talking about receivables. Shouldn’t that be fine? NVIDIA is a big business, and it’s totally reasonable to believe that a company planning to sell $63 billion of GPUs in the next quarter would have ballooning receivables ($33 billion, up from $27 billion last quarter) and growing inventory ($19.78 billion, up from $14.96 billion the last quarter). It’s a big, asset-heavy business, which means NVIDIA’s clients likely get decent payment terms — time to raise debt or move cash around — before they have to pay.
Everybody calm down! Like my buddy NVIDIA, who is nothing like Enron by the way, just said:
Response: First, growing inventory does not necessarily indicate weak demand. In addition to finished goods, inventory includes significant raw materials and work-in-progress. Companies with sophisticated supply chains typically build inventory in advance of new product launches to avoid stockouts. NVIDIA's current supply levels are consistent with historical trends and anticipate strong future growth.
Second, growing inventory does not indicate customers are accepting delivery without payment capability. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable. The shipment reduces inventory, which is not related to customer payments. Our customers are subject to strict credit evaluation to ensure collectability.
Payment is due shortly after product delivery; some customers prepay. NVIDIA's DSO actually decreased sequentially from 54 days to 53 days.
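For the curious, DSO (days sales outstanding) is simple arithmetic — roughly, receivables divided by the period’s revenue, times the days in the period. Here’s a minimal sketch in Python using the ~$33 billion of receivables above against NVIDIA’s roughly $57 billion quarter (the function name and the rounding are mine, and the revenue figure is approximate):

```python
def days_sales_outstanding(receivables, revenue, days_in_period=91):
    """Rough average days between booking a sale and collecting the cash."""
    return receivables / revenue * days_in_period

# ~$33B of receivables against roughly $57B of quarterly revenue
dso = days_sales_outstanding(33.0, 57.0)
print(round(dso))  # → 53
```

Which lands right at the 53 days NVIDIA reports — so the receivables number, big as it is, is at least growing in line with revenue.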
Haha, nice dude, you’re totally right, it’s pretty common for companies, especially large ones, to deliver something before they receive the cash, it happens, I’m being sincere. Sounds like companies are paying! Great!
But, you know, just, can you be a little more specific? Like about the whole “shipping things before they’re paid” thing.
NVIDIA recognizes revenue upon shipping a product and deeming collectability probable-
Alright, yeah, thought I heard you right the first time. What does “deeming collectability probable” mean? You could’ve just said “we get paid 95% of the time within 2 months” or whatever. Unless it’s not 95%? Or 90%? How often is it? Most companies don’t break this down by the way, but then again, most companies are not NVIDIA, the largest company on the stock market, and if I’m honest, nobody else has recently had to put out anything that said “I’m not like Enron,” and I want to be clear that NVIDIA is not like Enron.
For real, Enron was a criminal enterprise. It broke the law, it committed real deal, actual fraud, and NVIDIA is nothing like Enron. In fact, before NVIDIA put out a letter saying how it was nothing like Enron I would have staunchly defended the company against the Enron allegations, because I truly do not think NVIDIA is committing fraud.
That being said, it is very strange that NVIDIA wants somebody to think about how it’s nothing like Enron. This was, technically, an internal memo, so there’s a chance it was written only for internal NVIDIANs worried about the value of their stock, though we know it was definitely written to try and deflect Michael Burry’s criticism, as well as that of a random Substacker who clearly had AI help him write a right-adjacent piece that made all sorts of insane, made-up statements (including several about Arrow Electronics that did not happen) — and no, I won’t link it, it’s straight-up misinformation.
Nevertheless, I think it’s fair to ask: why does NVIDIA need you to know that it’s nothing like Enron? Did it do something like Enron? Is there a chance that I, or you, may mistakenly say “hey, is NVIDIA doing Enron?”
Heeeeeeyyyy NVIDIA. How’re you feeling? Yeah, haha, you had a rough night. You were saying all this crazy stuff about Enron last night, are you doing okay? No, no, I get it, you’re nothing like Enron, you said that a lot last night.
So, while you were asleep — yeah it’s been sixteen hours dude, you were pretty messed up, you brought up Lucent then puked in my sink — I did some digging and like, I get it, you are definitely not like Enron, Enron was breaking the law. NVIDIA is definitely not doing that.
But…you did kind of use Special Purpose Vehicles recently? I’m sorry, I know, you’re not like Enron! You’re investing $2 billion in Elon Musk’s special purpose vehicle that will then use that money to raise debt to buy GPUs from NVIDIA that will then be rented to Elon Musk.
This is very different to what Enron did! I am with you dude, don’t let the haters keep you down! No, I don’t think a t-shirt that says “NVIDIA is not like Enron for these specific reasons” helps.
Wait, wait, okay, look. One thing. You had this theoretical deal lined up with Sam Altman and OpenAI to invest $100 billion — and yes, you said in your latest earnings that "it was actually a Letter of Intent with the opportunity to invest," which doesn’t mean anything, got it — and the plan was that you would “lease the GPUs to OpenAI.”
Now how would you go about doing that NVIDIA? You’d probably need to do exactly the same deal as you just did with xAI. Right? Because you can’t very well rent these GPUs directly to Elon Musk, you need to sell them to somebody so that you can book the revenue, you were telling me that’s how you make money. I dunno, it’s either that or vendor financing.
Oh, you mentioned that already-
-unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements-
Let me stop you right there a second, you were on about this last night before you scared my cats when you were crying about something to do with “two nanometer.”
First of all, why are you bringing up typical vendor financing agreements? Do you have atypical ones?
Also I’m jazzed to hear you “disclose your standard payment terms,” but uh, standard payment terms for what exactly? Where can I find those? For every contract?
Also, you are straight up saying you don’t disclose any vendor financing arrangements, that’s not the same as “not having any vendor financing arrangements.” I “do not disclose” when I go to the bathroom but I absolutely do use the toilet.
Let’s not pretend like you don’t have a history in helping get your buddies funding. You have deals with both Lambda and CoreWeave to guarantee that they will have compute revenue, which they in turn use to raise debt, which is used to buy more of your GPUs. You have learned how to feed debt into yourself quite well, I’m genuinely impressed.
This is great stuff, I’m having the time of my life with how not like Enron you are, and I’m serious that I 100% do not believe you are like Enron.
But…what exactly are you doing man? What’re you going to do about what Wall Street wants?
Enron was a criminal enterprise! NVIDIA is not. More than likely NVIDIA is doing relatively boring vendor financing stuff and getting people to pay them on 50-60 day time scales — probably net 60, and, like it said, it gets paid upfront sometimes.
NVIDIA truly isn’t like Enron — after all, Meta is the one getting into ENERGY TRADING — to the point that I think it’s time to explain to you what exactly happened with Enron. Or, at least as much as is possible within the confines of a newsletter that isn’t exclusively about Enron…
The collapse of Enron wasn’t just — in retrospect — a large business that ultimately failed. If that was all it was, Enron wouldn’t command the same space in our heads as other failures from that era, like WorldCom (which I mentioned earlier) and Nortel (which I’ll get to later), both of whom were similarly considered giants in their fields.
It’s also not just about the fact that Enron failed because of proven business and accounting malfeasance. WorldCom entered bankruptcy due to similar circumstances (though, rather than being liquidated, it was acquired by Verizon as MCI — the name of a company that had previously merged with WorldCom, and which WorldCom renamed itself to after bankruptcy), and unlike Enron, isn’t the subject of flashy Academy-nominated films, or even a Broadway production.
Editor’s Note: Hi! It's Ed's editor Matt here! I actually saw the UK touring production of Enron in 2010 at Newcastle’s Theatre Royal. It was extremely good. From time to time, new productions of it show up (from what I can tell, the most recent one was in October at the London Barbican), and if you get a chance to watch it, you should.
It’s not the size of Enron that makes its downfall so intriguing. Nor, for that matter, is it the fact that Enron did a lot of legally and ethically dubious stuff to bring about its downfall.
No, what makes Enron special is the sheer gravity of its malfeasance, the rotten culture at the heart of the company that encouraged said malfeasance, and the creative ways Enron’s leaders crafted an image of success around what was, at its heart, a dog of a company.
Enron was born in 1985 on the foundations of two older, much less interesting businesses. The first, Houston Natural Gas (HNG), started life as a utility provider, pumping natural gas from the oilfields of Texas to customers throughout the region, before later exiting the industry to focus on other opportunities. The other, InterNorth, was based in Omaha, Nebraska and was in the same business — pipelines.
In the mid-1980s, HNG was the subject of a hostile take-over from Coastal Corporation (which, until 2001, operated a chain of refineries and gas stations throughout much of the US mainland). Unable to fend it off by itself, HNG merged with InterNorth, with the combined corporation renamed Enron.
The CEO of this new entity was Ken Lay, an economist by trade who spent most of his career in the energy sector and who enjoyed deep political connections with the Bush family. He co-chaired George H. W. Bush’s failed 1992 re-election campaign, and allowed Enron’s corporate jet to ferry Bush Sr. and Barbara Bush back and forth to Washington. Center for Public Integrity Director Charles Lewis said that “there was no company in America closer to George W. Bush than Enron.”
George W. Bush (the second one) even had a nickname for Lay. Kenny Boy.
Anyway, in 1987, Enron hired McKinsey — the world’s most evil management consultancy firm — to help the company create a futures market for natural gas. What that means isn’t particularly important to the story, but essentially, a futures contract is where a company agrees to buy or sell an asset in the future at a fixed price.
It’s a way of hedging against risk, whether that be from something like price or currency fluctuations, or from default. If you’re buying oil in dollars, for example, buying a futures contract for oil to be delivered in six months time at a predetermined price means that if your currency weakens against the dollar, your costs won’t spiral.
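The hedge itself is just locked-in arithmetic. A toy sketch, with every price invented purely for illustration:

```python
# Toy futures hedge: lock in a price today for oil delivered in six months.
futures_price = 80.0     # per barrel, agreed now (invented number)
spot_at_delivery = 95.0  # where the market actually lands (invented number)

# Unhedged, you pay the spot price; hedged, you pay the locked price regardless.
unhedged_cost = spot_at_delivery
hedged_cost = futures_price
saving_per_barrel = unhedged_cost - hedged_cost  # 15.0
```

If the spot price had instead fallen to $70, the hedge would have cost you $10 a barrel — the point isn’t to win the bet, it’s to make your costs predictable.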
That bit isn’t terribly important. What does matter is while working with McKinsey, Lay met someone called Jeff Skilling — a young engineer-turned-consultant who impressed the company’s CEO deeply, so much so that Lay decided to poach him from McKinsey in 1990 and give him the role of chairman and CEO of Enron Finance Group.
Sidenote: Enron had a bunch of subsidiaries, and some had their own CEOs and boards. I mention this because you may be a bit confused, as Lay was CEO of Enron writ large.
In essence, it’s a bit like how Sam Altman is CEO of OpenAI and Fidji Simo the CEO of Applications.
This bit isn’t important, but I want to be as explicit as possible.
Anyway, Skilling continued to impress Lay, who gave him greater and greater responsibility, eventually crowning him Chief Operating Officer (COO) of Enron.
With Skilling in a key leadership position, he was able to shape the organization’s culture. He appreciated those who took risks — even if those risks, when viewed with impartial eyes, were deemed reckless, or even criminal.
He introduced the practice of stack-ranking (also known as “rank and yank”) to Enron, which had previously been pioneered by Jack Welch at GE (see The Shareholder Supremacy from last year). Here, employees were graded on a scale, and those at the bottom of the scale were terminated. Managers had to place at least 10% (other reports say closer to 15%) of employees in the lowest bracket, which created an almost Darwinian drive to survive.
Staffers worked brutal hours. They cut corners. They did some really, really dodgy shit. None of this bothered Skilling in the slightest.
How dodgy, you ask? Well, in 2000 and 2001, California suffered a series of electricity blackouts. This shouldn’t have happened, because California’s total energy demand (at the time) was 28GW and its production capacity was 45GW.
California also shares a transmission grid with other states (and, for what it’s worth, the Canadian provinces of Alberta and British Columbia, as well as part of Baja California in Mexico), meaning that in the event of a shortage, it could simply draw capacity from elsewhere.
So, how did it happen?
Well, remember, Enron traded electricity like a commodity, and as a result, it was incentivized to get the highest possible price for that commodity. So, it took power plants offline during peak hours, and exported power to other states even when there was real domestic demand.
How does a company like Enron shut down a power station? Well, it just asked.
In one taped phone conversation released after the company’s collapse, an Enron employee called Bill called an official at a Las Vegas power plant (California shares the same grid with Nevada) and asked him to “get a little creative, and come up with a reason to go down. Anything you want to do over there? Any cleaning, anything like that?"
This power crisis had dramatic consequences — for the people of California, who faced outages and price hikes; for Governor Gray Davis, who was recalled by voters and later replaced by Arnold Schwarzenegger; for PG&E, which entered Chapter 11 bankruptcy in 2001; and for Southern California Edison, which was pushed to the brink of bankruptcy as a result.
This kind of stuff could only happen in an organization whose culture actively rewarded bad behavior.
In fact, Skilling was seemingly determined to elevate the dodgiest of characters to the highest positions within the company, and few were more ethically dubious than Andy Fastow, who Skilling mentored like a protégé, and who would later become Enron’s Chief Financial Officer.
Even before vaulting to the top of Enron’s nasty little empire, Fastow was able to shape its accounting practices, with the company adopting mark-to-market accounting practices in 1991.
Mark-to-market sounds complicated, but it’s really simple. When listing assets on a balance sheet, you don’t use the acquisition cost, but rather the fair-market value of that asset. So, if I buy a baseball card for a dollar, and I see that it’s currently selling for $10 on eBay, I’d say that said asset is worth $10, not the dollar I paid for it, even though I haven’t actually sold it yet.
This sounds simple — reasonable, even — but the problem is that the way you determine the value of that asset matters, and mark-to-market accounting allows companies and individuals to exercise some…creativity.
Sure, for publicly-traded companies (where the price of a share is verifiable, open knowledge), it’s not too bad, but for assets with limited liquidity, limited buyers, or where the price has to be engineered somehow, you have a lot of latitude for fraud.
Let’s go back to the baseball card example. How do you know it’s actually worth $10, and not $1? What if the “fair value” isn’t something you can check on eBay, but what somebody told me in-person it’s worth? What’s to stop me from lying and saying that the card is actually worth $100, or $1000? Well, other than the fact I’d be committing fraud.
What if I have ten $1 baseball cards, and I give my friend $10 and tell him to buy one of the cards using the $10 bill I just handed him, allowing me to say that I’ve realized a $9 profit on one of my $1 cards, and my other cards are worth $90 and not $9?
And then, what if I use the phony valuation of my remaining cards to get a $50 loan, using the cards as collateral, even though the collateral isn’t even one-fifth of the value of the loan?
You get the idea. While a lot of the things people can do to alter the mark-to-market value of an asset are illegal (and would be covered under generic fraud laws), it doesn’t change the fact that mark-to-market accounting allows for some shenanigans to take place.
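To make the card trick concrete, here’s the whole shell game as a few lines of arithmetic — the numbers come straight from the toy example above:

```python
# Ten baseball cards bought at $1 each.
cards = 10
cost_basis = 1.0
historical_value = cards * cost_basis      # $10 — what you actually paid

# The wash trade: hand a friend $10 so they "buy" one card at $10.
sale_price = 10.0
realized_profit = sale_price - cost_basis  # a $9 "profit" on paper
cards -= 1

# Mark-to-market: the remaining nine cards get marked at the engineered price.
marked_value = cards * sale_price          # $90 on paper, $9 in reality
```

Nothing here is an accounting error — every line follows the mark-to-market rules. The fraud lives entirely in where `sale_price` came from.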
Another trait of mark-to-market accounting, as employed by Enron, is that it would count all the long-term potential revenue from a deal as quarterly revenue — even if that revenue would be delivered over the course of a decades-long contract, or if the contract would be terminated before its intended expiration date.
It would also realize potential revenue as actual revenue, even before money changed hands, and when the conclusion of the deal wasn’t a certainty.
For example, in 1999, Enron sold a stake in four electricity-generating barges in Nigeria (essentially floating power stations) to Merrill Lynch, which allowed the company to register $12m in profit.
That sale was never real — Enron sold the pieces to Merrill Lynch, which, I’m not kidding, quickly sold them back to a Special Purpose Vehicle called “LJM2” controlled by Andrew Fastow. You’re gonna hear that name again.
Although the Merrill Lynch bankers who participated in the deal were eventually convicted of conspiracy and fraud charges (long after the collapse of Enron), their convictions were later quashed on appeal.
But still, for a moment, it gave a jolt to Enron’s quarterly earnings.
Anyway, Enron was incredibly creative when it came to how it valued its assets. Take, for example, fiber optic cables. As the Dot Com bubble swelled, Enron saw an opportunity, and wanted to be able to trade and control the supply of bandwidth, just like it does with other more conventional commodities (like oil and gas).
It built, bought, and leased fiber-optic cables throughout the country, and then, using exaggerated estimates of their value and potential long-term revenue, released glowing financial reports that made the company look a lot healthier and more successful than it actually was.
Sidenote: One of the funniest ironies of Enron is that it was, in many ways, ahead of its time. When most people were still connecting to the Internet through screeching 56k dial-up modems, it saw a future in edge and cloud computing (even if said terms didn’t exist at the time) and streaming video.
In 2000, it entered into a 20-year deal with Blockbuster Video to allow customers to stream films and TV shows through Enron’s fiber network, something that would take Netflix another decade to realize as a product.
This was despite it being unclear whether there was much of a market for it (remember, this was 2000, and broadband was a rarity — and what we defined as “broadband” was well below the standards of today’s Internet), or indeed, whether it was technologically possible.
Anyway, the deal collapsed after just one year, but that didn’t stop Enron’s creative accountants from booking the deal (based on its projected future revenue) as a profitable venture.
Mark-to-market accounting! You gotta love it.
Still, it’s hilarious to think that there’s a future world in which Blockbuster and Enron stuck it out, and the former didn’t collapse around the time of the Global Financial Crisis.
Probably not though.
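If you want to see why booking a long-term contract’s projected revenue upfront is so flattering, the arithmetic is trivial. A toy sketch of a Blockbuster-style deal, with entirely invented numbers:

```python
# A hypothetical 20-year content deal projected at $5M a year.
contract_years = 20
projected_annual_revenue = 5.0  # in $M, invented for illustration

# Conventional (ratable) recognition: book revenue as it's actually earned.
ratable_first_quarter = projected_annual_revenue / 4         # $1.25M

# Enron-style mark-to-market: book the entire projection on day one.
upfront_booking = contract_years * projected_annual_revenue  # $100M
```

That’s an 80x difference in reported first-quarter revenue from the same deal — and if the deal collapses after a year, the conventional books barely notice, while the Enron-style books have already banked two decades of money that will never arrive.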
Enron also loved to create special-purpose entities that existed either to generate revenue that didn’t exist, or to hold toxic assets that would otherwise need to be disclosed (with Enron then using its holdings in said entities to boost its balance sheet), or to disguise its debt.
One, Whitewing, was created and capitalized by Enron (and an outside investor), and pretty much exclusively bought assets from Enron — which allowed the company to recognize sales and profits on its balance sheets, even if they were fundamentally contrived.
Another set of entities — known as LJM, named after the first initial of Andy Fastow’s wife and two children, and which I mentioned earlier — did the same thing, allowing the company to hide risky or failing investments, to limit its perceived debt, and to generate artificial profits and revenues. LJM2 was, creatively, the second version of the idea.
Even though the assets that LJM held were, ultimately, dogshit, the distance that LJM provided, combined with Enron’s use of mark-to-market accounting, allowed the company to turn a multi-billion-dollar collective failure into a resounding and (on paper) profitable triumph.
So, how did this happen, and how did it go on for so long?
Well, first, Enron was, at its peak, worth $70bn. Its failure would be a failure for its investors and shareholders, and nobody — besides the press, that is — wanted to ask tough questions.
It had auditors, but they were paid handsomely to turn a blind eye to the criminal malfeasance at the heart of the company. That auditor, Arthur Andersen, surrendered its license in 2002, bringing an end to the firm — and resulting in 85,000 employees losing their jobs.
Well, it didn’t just turn a blind eye, so much as it turned on a big paper shredder, shredding tons — and I’m using that as a measure of weight, not figuratively — of documents as Enron started to implode, for which it was later convicted of obstruction of justice.
I’ve talked about Enron’s culture, but I’d be remiss if I didn’t mention that Enron’s highest-performers and its leadership received hefty bonuses in company equity, motivating them to keep the charade going.
Enron’s pension scheme, I might add, was basically entirely Enron stock, and employees were regularly encouraged to buy more, with Kenneth Lay telling employees weeks before the company’s collapse that “the company is fundamentally sound” and to “hang on to their stock.”
Hah. Yeah.
Additionally, per the terms of the Enron pension plan, employees were prevented from shifting their holdings into other pension funds, or other investments, until they turned 50. When the company collapsed, those people lost everything — even those who didn’t know anything about Enron’s criminality. George Maddox, a retired former Enron employee whose entire retirement was tied up in 14,000 Enron shares (at one point worth more than $1.3 million), was “forced to spend his golden years making ends meet by mowing pastures and living in a run-down East Texas farmhouse.”
The US Government brought criminal charges against Enron’s top leadership. Ken Lay was convicted of four counts of fraud and making false statements, but died while vacationing near Aspen before sentencing. May he burn in Hell.
Skilling was convicted on 24 counts of fraud and conspiracy and sentenced to 24 years in jail. This was reduced in 2013 on appeal to 14 years, and he was released to a halfway house in 2018, and then freed in 2019. He’s since tried to re-enter the energy sector — with one venture combining energy trading and, I kid you not, blockchain technology — although nothing really came of it.
Sidenote: Credit where credit’s due. This is the opening sentence to Quartz’s coverage of Skilling’s attempted comeback. “Jeffrey Skilling knows a thing or two about blocks and chains.”
Wooooo! Woooooooo!!!! Get his ass!
Andy Fastow pled guilty to two counts — one of manipulation of financial statements, and one of self-dealing — and received ten years in prison. This was later reduced to six years, including two years of probation, in part because he cooperated with the investigations against other Enron executives. He is now a public speaker and a tech investor in an AI company, KeenCorp.
His wife, Lea, who also worked at Enron, received twelve months for conspiracy to commit wire fraud and money laundering and for submitting false tax returns. She was released from custody in July 2005.
Enron’s implosion was entirely self-inflicted and horrifyingly, painfully criminal, yet, it had plenty of collateral damage — to the US economy, to those companies that had lent it money, to its employees who lost their jobs and their life savings and their retirements, and to those employees at companies most entangled with Enron, like those at auditing firm Arthur Andersen.
This isn’t unique among corporate failures. WorldCom had some dodgy accounting practices. Nortel too. Both companies failed, both companies wrecked the lives of their employees, and the failure of these companies had systemic economic consequences (especially in Canada, where Nortel, at its peak, accounted for one-third of the market cap of all companies on the Toronto Stock Exchange).
The reason why Enron remains captured in our imagination — and why NVIDIA is so vociferously opposed to being compared with Enron — is the extent to which Enron manipulated reality to appear stronger and more successful than it was, and how long it was able to get away with it.
While the memory of Enron may have faded — it happened over two decades ago, after all — we haven’t forgotten the instincts it gave us. It’s why our noses twitch when we see special-purpose vehicles being used to buy GPUs, and why we gag when we see mark-to-market accounting.
It’s entirely possible that everything NVIDIA is doing is above board. Great! But that doesn’t do anything for the deep pit of dread in my stomach.
A few weeks ago, I published the Hater’s Guide to NVIDIA, and included within it a guide to what this company does.
NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market.
This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services that are behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but it doesn’t really matter to the bigger picture.
In 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company.
The thing is, GPUs are great for parallel processing — essentially spreading a task across multiple (by which I mean thousands of) processor cores at the same time — which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example.
CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that Nvidia’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.
Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating off other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition would give NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem.
This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of specialized GPUs for AI workloads. As pseudonymous finance account JustDario connected (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies."
Though it took until November 2022 for ChatGPT to really start the fires, in March 2020, NVIDIA began the AI bubble with the launch of its "Ampere" architecture, and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "SuperPod." Per the press release:
A data center powered by five DGX A100 systems for AI training and inference running on just 28 kilowatts of power costing $1 million can do the work of a typical data center with 50 DGX-1 systems for AI training and 600 CPU systems consuming 630 kilowatts and costing over $11 million, Huang explained.
One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "Superpod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers and you'd be a god damn idiot to not do this and yes, it requires so much more money than you used to spend.
At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 per GPU at launch, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. The next generation SuperPod, launched in 2022, was made up of eight H100 GPUs (starting at $25,000 per GPU, the next generation "Hopper" chips were apparently 30 times more powerful than the A100), and retailed from $300,000.
You'll be shocked to hear the next generation Blackwell SuperPods started at $500,000 when launched in 2024. A single B200 GPU costs at least $30,000.
Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly, and yes, you can have a situation where a market has a monopoly, even if there is, at least in theory, competition. Once a particular brand — and particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive.
Why did I write this? Because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year.
If you’re looking at this through the cold, unthinking lenses of late-stage capitalism, this all sounds really good! I’ve basically described a company that has an essential monopoly in the one thing required for a high-growth (if we’re talking exclusively about capex spending) industry to exist.
Moreover, that monopoly is all-but assured, thanks to NVIDIA’s CUDA moat, its first-mover advantage, and the actual capabilities of the products themselves — thereby allowing the company to charge a pretty penny to customers.
And those customers? If we temporarily forget about the likes of Nebius and CoreWeave (oh, how I wish I could forget about CoreWeave permanently), we’re talking about the biggest companies on the planet. Ones that, surely, will have no problems paying their bills.
Back in February 2023, I wrote about The Rot Economy, and how everything in tech had become oriented around growth — even if it meant making products harder to use as a means of increasing user engagement or funnelling them toward more-profitable parts of an app.
Back in June 2024, I wrote about the Rot-Com Bubble, and my greater theory that the tech industry has run out of hypergrowth ideas:
Yet, without generative AI, what do these companies have left? What's the next big thing? For the best part of 15 years we've assumed that the tech industry would always have something up its sleeves, but what's become painfully apparent is that the tech industry might have run out of big, sexy things to sell us, and the "disruption" that tech has become so well-known for was predicated on there being markets for them to disrupt, and ideas that they could fund to do so. A paper from Nature from last year posited that the pace of disruptive research is slowing, and I believe the same might be happening in tech, except we've been conflating "innovation" and "finding new markets to add software and hardware to" for twenty years.
The net result of this creative stagnancy is the Rot Economy and the Rot-Com bubble — a tech industry laser-focused on finding markets to disrupt rather than needs to be met, where the biggest venture capital investments go into companies that can sell for massive multiples rather than stable, sustainable businesses. There is no reason that Google, or Meta, or Amazon couldn't build businesses that have flat, sustainable growth and respectable profitability. They just choose not to, in part because the markets would punish it, and partially because their DNA has been poisoned by rot that demands there must always be more.
In simple terms, big tech — Amazon, Google, Microsoft and Meta, but also a number of other companies — no longer has the “next big thing,” and jumped on AI out of an abundance of desperation.
Hell, look at Oracle. This company started off by selling databases and ERP systems to big companies, and then trapping said companies by making it really, really difficult to migrate to cheaper (and better) solutions, and then bleeding said companies with onerous licensing terms (including some where you pay by the number of CPU cores that use the application).
It doesn’t do anything new, or exciting, or impressive, and even when presented with the opportunity to do things that are useful or innovative (like when it bought Sun Microsystems), it turns away. I imagine that, deep down, it recognizes that its current model just isn’t viable in the long-term, and so, it needs something else.
When you haven’t thought about innovation… well… ever, it’s hard to start. Generative AI, on the face of it, probably seemed like a godsend to Larry Ellison.
We also live in an era where nobody knows what big tech CEOs do other than make nearly $100 million a year, meaning that somebody like Satya Nadella can get called a “thoughtful leader with striking humility” for pushing Copilot AI into every single part of your Microsoft experience, even Notepad, a place that no human being would want it, and accelerating capital expenditures from $28 billion across the entirety of FY 2023 to $34.9 billion in its latest quarter.
In simpler terms, spending money makes a CEO look busy. And at a time when there were no other potential growth avenues, AI was a convenient way to make everybody look busy. Every department can “have an AI strategy,” and every useless manager and executive can yell, as ServiceNow’s CEO did back in 2022, “let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI.”
I should also add that ChatGPT was the first real, meaningful hit that the American tech industry had produced in a long, long time — the last being, if I’m honest, Uber, and that’s if we allow “successful yet not particularly good businesses” into the pile.
If we insist on things like “profitability” and “sustainability,” US tech hasn’t done so great. Snowflake runs at a loss, Snap runs at a loss, and while Uber has turned things around somewhat, it’s hardly created the next cloud computing or smartphone.
Putting aside finances, the last major “hit” was probably Venmo or Zelle, and maybe, if I’m feeling generous, smart speakers like the Amazon Echo and Apple HomePod. Much like Uber, none of these were “the next big thing,” which would be fine, except big tech needs more growth forever right now, pig!
Aside: None of this is to say there has been no innovation. Just not something on the level of a smartphone or cloud computing.
This is why Google, Amazon and Meta all do 20 different things — although rarely for any length of time, with these “things” often having a shelf life shorter than a can of peaches — because The Rot Economy’s growth-at-all-costs mindset exists only to please the markets, and the markets demanded growth.
ChatGPT was different. Not only did it do something new, it did so in a way that made it relatively easy to get people to try it and “see the potential.” It was also really easy to convince people it would become something bigger and better, because that’s what tech does. To quote Bender and Hanna, AI is a “marketing term” — a squishy way of evoking futuristic visions of autonomous computers that can do anything and everything for us — and because both consumers and analysts have been primed to believe and trust the tech industry, everybody believed that whatever ChatGPT was would be the Next Big Thing.
And said “Next Big Thing” is powered by Large Language Models, which require GPUs sold by one company — NVIDIA.
AI became a very useful thing to do. If a company wanted to seem futuristic and attract investors, it could now “integrate AI.” If a hyperscaler wanted to seem enterprising and like it was “building for the future,” it could buy a bunch of GPUs, or invest in its own silicon, or, as Google, Microsoft, Amazon and Meta have done, shove AI in every imaginable crevice of the app.
Investors could invest in AI companies, retail investors (IE: regular people) could invest in AI stocks, tech reporters could write about something new in AI, LinkedIn perverts could write long screeds about AI, the markets could become obsessed with AI…
…and yeah, you can kind of see how things got out of control. Everybody now had something to do. An excuse to do AI, regardless of whether it made sense, because everybody else was doing it.
ChatGPT quickly became one of the most popular websites on the internet — all while OpenAI burned billions of dollars — and because the media effectively published every single thought that Sam Altman had (such as that GPT-4 would “automate away some jobs and create others” and that he was a “little bit scared of it”), AI, as an idea, technology, symbolic stock trope, marketing tool and myth became so powerful that it could do anything, replace anyone, and be worth anything, even the future of your company.
Amongst the hype, there was an assumption related to scaling laws (summarized well by Charlie Meyer):
In 2020, one of the most important papers in the development of AI was published: Scaling Laws for Neural Language Models, which came from a group at OpenAI.
This paper showed with just a few charts incredibly compelling evidence that increasing the size of large language models would increase their performance. This paper was a large driver in the creation of GPT-3 and today’s LLM revolution, and caused the movement of trillions of dollars in the stock market.
In simple terms, the paper suggested that shoving in more training data and using more compute power would dramatically increase a model’s ability to do stuff. And to make a model that did more stuff, you needed more GPUs and more data centers. Did it matter that there was compelling evidence in 2022 (Gary Marcus was right!) that there were limits to scaling laws, and that we would hit the point of diminishing returns?
Nah!
Amidst all this, NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence, only a few years after being worth $19.52 a share.
You see, Meta, Google, Microsoft and Amazon all wanted to be “part of the future,” so they sank a lot of their money into NVIDIA, together making up 42% of its revenue in its fiscal year 2025. Though there are some arguments about exactly how much of big tech’s billowing capital expenditures are spent on GPUs, some estimate somewhere between 41% and more than 50% of a data center’s capex goes to them.
If you’re wondering what the payoff is, well, you’re in good company. I estimate that there’s only around $61 billion in total generative AI revenue, and that includes every hyperscaler and neocloud. Large Language Models are limited, AI agents are a pipedream and simply do not work, AI-powered products are unreliable and coding LLMs make developers slower, and the cost of inference — the way in which a model produces its output — keeps going up.
So, due to the fact that so much money has now been piled into building AI infrastructure, and big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole.
How big a hole? Well, by the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4 billion in capital expenditures in 2024 and around $148bn in capital expenditures in 2023, for a total of around $776bn in the space of three years. They also intend to spend $400 billion or more in 2026.
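For those keeping score, that three-year total is just addition (a quick sketch using the rounded figures quoted above):

```python
# Rough running total of big tech capex (Microsoft, Amazon, Google, Meta),
# using the approximate figures quoted above, in billions of USD.
capex_by_year = {
    2023: 148.0,   # approximate
    2024: 228.4,
    2025: 400.0,   # projected by end of year
}

total = sum(capex_by_year.values())
print(f"Three-year total: ~${total:.1f}bn")  # ~$776.4bn
```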
As a result, based on my analysis, big tech needs to make $2 trillion in brand new revenue, specifically from AI by 2030, or all of this was for nothing. I go into detail here in my premium piece, but I’m going to give you a short explanation here.
Sadly, you’re going to have to learn stuff. I know! I’m sorry. Introducing a term: depreciation. From my October 31 newsletter:
So, when Microsoft buys, say, $100 million in GPUs, it immediately comes out of its capital expenditures, which is when a company uses money to invest in either buying or upgrading something. It then gets added to its Property, Plant and Equipment assets — PPE for short, although some companies list this on their annual and quarterly financials as “Property and Equipment.”
PPE sits on the balance sheet — it's an asset — as it’s the stuff the company actually owns or has leased.
GPUs "depreciate" — meaning they lose value — over time, and this depreciation is represented on the balance sheet and the income statement. Essentially, the goal is to represent the value of the assets that a company has. On the income statement, we see how much the assets have declined during that reporting period (whether that be a year, or a quarter, or something else), whereas the balance sheet shows the cumulative depreciation of every asset currently in play. Depreciation does two things. First, it allows a company to accurately (to an extent) represent the value of the things it owns. Second, it allows companies to deduct the cost of an asset from their taxes across the useful life of said object, right up until its eventual removal.
The way this depreciation is actually calculated can vary — there are several different methods available — with some allowing for greater deductions at the start of the term, which is useful for those items that’ll experience the biggest drop in value right after acquisition and initial usage. An example you’re probably familiar with is a new car, which loses a significant chunk of its value the moment it’s driven off the dealership lot.
Depreciation has become the big, ugly problem with GPUs, specifically because of their “useful life” — defined either as how long the thing is actually able to run before it dies, or how long until it becomes obsolete.
Nobody seems to be able to come to a consensus about how long this should be. In Microsoft’s case, depreciation for its servers is spread over six years — a convenient change it made in August 2022, a few months before the launch of ChatGPT. This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs bought in 2020, or the 450,000 H100 GPUs it bought in 2024, across six years, regardless of whether those are the years they will be either A) generating revenue or B) still functional.
CoreWeave, for what it’s worth, says the same thing — but largely because it’s betting that it’ll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire. The problem, as the CNBC article linked above points out, is that this is pretty much untested ground.
Whereas we know how long, say, a truck or a piece of heavy machinery will last, and how long it can deliver value to an organization, we don’t know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year. Any kind of depreciation schedule is based on, at best, assumptions, and at worst, hope.
The assumption that the cards won’t degrade with heavy usage. The assumption that future generations of GPUs won’t be so powerful and impressive, they’ll render the previous ones more obsolete than expected, kind of like how the first jet-powered planes of the 1950s did to those manufactured just one decade prior. The assumption that there will, in fact, be a market for older cards, and that there’ll be a way to lease them profitably.
What if those assumptions are wrong? What if that hope is, ultimately, irrational?
Mihir Kshirsagar of the Center for Information Technology Policy framed the problem well:
Here is the puzzle: the chips at the heart of the infrastructure buildout have a useful lifespan of one to three years due to rapid technological obsolescence and physical wear, but companies depreciate them over five to six years. In other words, they spread out the cost of their massive capital investments over a longer period than the facts warrant—what The Economist has referred to as the “$4trn accounting puzzle at the heart of the AI cloud.”
This is why Michael Burry brought it up recently — because spreading out these costs allows big tech to make their net income (IE: profits) look better. In simple terms, by spreading out costs over six years rather than three, hyperscalers are able to reduce a line item that eats into their earnings, which makes their companies look better to the markets.
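To make that concrete, here is a minimal straight-line depreciation sketch (the simplest of the methods mentioned earlier), comparing a three-year and a six-year useful life for a hypothetical $100 million GPU purchase:

```python
def straight_line_annual_expense(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: the cost is spread evenly across the
    asset's assumed useful life, and each year's slice is an expense
    that reduces reported net income."""
    return cost / useful_life_years

cost = 100_000_000  # hypothetical $100M GPU purchase

three_year = straight_line_annual_expense(cost, 3)  # ~$33.3M hits earnings per year
six_year = straight_line_annual_expense(cost, 6)    # ~$16.7M hits earnings per year

# Stretching the schedule from three years to six halves the annual expense.
print(f"3-year schedule: ${three_year:,.0f}/yr")
print(f"6-year schedule: ${six_year:,.0f}/yr")
```

Halving the annual expense doesn't change how much cash went out the door; it just changes when the cost shows up against earnings, which is exactly why the choice of useful life matters so much here.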
So, why does this create an artificial time limit?
In really, really simple terms:
So, now that you know this, there’s a fairly obvious question to ask: why are they still buying GPUs? Also…where the fuck are they going?
As I covered in the Hater’s Guide To NVIDIA:
Going off of [Stargate] Abilene’s [OpenAI’s giant data center project in Abilene, TX] mathematics — $40bn of chips across 8 buildings — that means each building is about $5 billion of chips (and I assume the associated hardware). Each building is 400,000 square feet, which is over 9 acres of space.
NVIDIA CEO Jensen Huang claims that NVIDIA has shipped 6 million Blackwell GPUs — and according to CNBC, that specifically refers to AI GPUs shipped. They have left NVIDIA’s warehouses. These chips are in flight. They are real. Where the fuck are they?
So, Stargate Abilene is meant to have 1.2GW of power, and each building is 440,000 square feet according to developer Lancium, and it appears based on some reporting that each building will be 100MW of IT load, though I’m having trouble getting a consistent answer here.
In any case, we can do some napkin maths! 100MW = 50,000 Blackwell GPUs (I’m going to guess B200s), making 6 million Blackwell GPUs somewhere in the region of 12GW of IT load, and because data centers need 30% or more power than their IT load (to cover for that “design day” I mentioned earlier), that means 15.6GW of power is required to make the last four quarters of NVIDIA GPUs sold turn on.
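Written out as code, the napkin math looks like this (same assumptions as above: 50,000 Blackwells per 100MW of IT load, plus roughly 30% of overhead on top):

```python
# Napkin math: how much power would six million Blackwell GPUs need?
# Assumptions from the text: 50,000 GPUs per 100MW of IT load
# (roughly 2kW per GPU, all-in), plus ~30% overhead beyond IT load.
GPUS_PER_100MW = 50_000
shipped_gpus = 6_000_000

it_load_gw = (shipped_gpus / GPUS_PER_100MW) * 100 / 1000  # MW -> GW
total_power_gw = it_load_gw * 1.3  # 30% on top for cooling, losses, "design day"

print(f"IT load: {it_load_gw:.1f}GW")          # 12.0GW
print(f"Total power: {total_power_gw:.2f}GW")  # 15.60GW
```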
While I’m not going to copy-paste my whole (premium) piece, I was only able to find, at most, a few hundred thousand Blackwell GPUs — many of which aren’t even online! — including OpenAI’s Stargate Abilene (allegedly 400,000, though only two buildings are handed over); a theoretical 131,000-GPU cluster owned by Oracle announced in March 2025; 5,000 Blackwell GPUs at the University of Texas at Austin; “more than 1,500” in a Lambda data center in Columbus, Ohio; the Department of Energy’s still-in-development 100,000-GPU supercluster, as well as “10,000 NVIDIA Blackwell GPUs” that are “expected to be available in 2026” in its “Equinox” cluster; 50,000 going into the still-unbuilt Musk-run Colossus 2 supercluster; CoreWeave’s “largest GB200 Blackwell cluster” of 2,496 Blackwell GPUs; “tens of thousands” deployed globally by Microsoft (including 4,600 Blackwell Ultra GPUs); and 260,000 GPUs for five AI data centers for the South Korean government…and I am still having trouble finding one million of these things that are actually allocated anywhere, let alone in a data center, let alone one with sufficient power.
I do not know where these six million Blackwell GPUs have gone, but they certainly haven’t gone into data centers that are powered and turned on. In fact, power has become one of the biggest issues with building these things, in that it’s really difficult (and maybe impossible!) to get the amount of power these things need.
In really simple terms: there isn’t enough power or built data centers for those six million Blackwell GPUs, in part because the data centers aren’t built, and in part because there isn’t enough power for the ones that are. Microsoft CEO Satya Nadella recently said on a podcast that his company “[didn’t] have the warm shells to plug into,” meaning buildings with sufficient power, and heavily suggested Microsoft “may actually have a bunch of chips sitting in inventory that [he] couldn’t plug in.”
The news that HPE’s (Hewlett Packard Enterprise) AI server business underperformed, and by a significant margin, only raises more questions about where these chips are going.
So why, pray tell, is Jensen Huang of NVIDIA saying that he has 20 million Blackwell and Vera Rubin GPUs ordered through the end of 2026? Where are they going to go?
I truly don’t know!
AI bulls will tell you about the “insatiable demand for AI” and that these massive amounts of orders are proof of something or other, and you know what, I’ll give them that — people sure are buying a lot of NVIDIA GPUs!
I just don’t know why.
Nobody has made a profit from AI, and those making revenue aren’t really making much.
For example, my reporting on OpenAI from a few weeks ago suggests that the company only made $4.329 billion in revenue through the end of September, extrapolated from the 20% revenue share that Microsoft receives from the company. As some people have argued with the figures, claiming they are either A) delayed or B) not inclusive of the revenue that OpenAI is paid by Microsoft as part of Bing’s AI integration and sales of OpenAI’s models via Microsoft Azure, I want to be clear about two things:
In the same period, it spent $8.67 billion on inference (the process in which an LLM creates an output).
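The extrapolation is simple arithmetic. A sketch, using the estimates above (the roughly $866 million revenue-share figure is implied by my $4.329 billion estimate, not a disclosed number):

```python
# Back out OpenAI's revenue from Microsoft's 20% revenue share, then
# compare it to inference spend over the same period.
# Amounts in billions of USD; the share figure is implied, not disclosed.
REVENUE_SHARE = 0.20
microsoft_share_received = 0.8658  # ~$866M, implied by the estimate above

implied_revenue = microsoft_share_received / REVENUE_SHARE  # ~$4.329bn

inference_spend = 8.67
burn_ratio = inference_spend / implied_revenue  # ~2x: $2 of inference per $1 of revenue
print(f"Implied revenue: ~${implied_revenue:.3f}bn, inference/revenue: {burn_ratio:.2f}x")
```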
This is the biggest company in the generative AI space, with 800 million weekly active users and the mandate of heaven in the eyes of the media. Anthropic, its largest competitor, alleges it will make $833 million in revenue in December 2025, and based on my estimates will end up having $5 billion in revenue by end of year.
Based on my reporting from October, Anthropic spent $2.66 billion on Amazon Web Services through the end of September, meaning that it (based on my own analysis of reported revenues) spent 104% of its $2.55 billion in revenue up until that point just on AWS, and likely spent just as much on Google Cloud.
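That 104% figure is just spend divided by revenue, using the estimates above:

```python
# Anthropic's AWS spend as a share of its revenue through the end of
# September, using the reported estimates above, in billions of USD.
aws_spend = 2.66
revenue = 2.55

ratio = aws_spend / revenue
# More than every dollar the company took in went to a single vendor.
print(f"AWS spend = {ratio:.0%} of revenue")  # 104%
```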
While everybody wants to tell the story of Anthropic’s “efficiency” and “only burning $2.8 billion this year,” one has to ask why a company that is allegedly “reducing costs” had to raise $13 billion in September 2025 after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.”
These are the two largest companies in the generative AI space, and by extension the two largest consumers of GPU compute. Both companies burn billions of dollars, and require an infinite amount of venture capital to keep alive at a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half. The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt. What happens if these sources dry up?
And, in all seriousness, who else is buying AI compute? What are they doing with it? Hyperscalers (other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed $13 billion in annualized AI revenue, or about $1 billion a month) don’t disclose anything about their AI revenue, which in turn means we have no real idea about how much real, actual money is coming in to justify these GPUs.
CoreWeave made $1.36 billion in revenue (and lost $110 million doing so) in its last quarter — and if that’s indicative of the kind of actual, real demand for AI compute, I think it’s time to start panicking about whether all of this was for nothing.
CoreWeave has a backlog of over $50 billion in compute, but $22 billion of that is OpenAI (a company that burns billions of dollars a year and lives on venture subsidies), $14 billion of that is Meta (which has yet to work out how to make any kind of real money from generative AI, and no, its “generative AI ads” are not the future, sorry), and the rest is likely a mixture of Microsoft and NVIDIA, which agreed to buy $6.3 billion of any unused compute from CoreWeave through 2032.
Sorry, I also forgot Google, which is renting capacity from CoreWeave to rent to OpenAI.
I also forgot to mention that CoreWeave’s backlog problem stems from data center construction delays. That, and CoreWeave has $14 billion in debt, mostly from buying GPUs, debt it was able to raise by using those GPUs as collateral and by pointing to contracts from customers willing to pay it, such as NVIDIA, which is also the company selling it the GPUs.
So, just to be abundantly clear: CoreWeave has bought all those GPUs to rent to OpenAI, Microsoft (for OpenAI), Meta, Google (OpenAI), and NVIDIA, which is the company that benefits from CoreWeave’s continued ability to buy GPUs.
Otherwise, where’s the fucking business, exactly? Who are the customers? Who are the people renting these GPUs, and for what purpose are they being rented? How much money is renting those GPUs? You can sit and waffle on about the supposedly glorious “AI revolution” all you want, but where’s the money, exactly?
And why, exactly, are we buying more GPUs?
What are they doing? To whom are they being rented? For what purpose? And why isn’t it creating the kind of revenue that is actually worth sharing?
Is it because the revenue sucks?
Is it because it’s unprofitable to provide it?
And why, at this point in history, do we not know? Hundreds of billions of dollars that have made NVIDIA the biggest company on the stock market and we still do not know why people are buying these fucking things.
NVIDIA is currently making hundreds of billions in revenue selling GPUs to companies that either plug them in and start losing money or, I assume, put them in a warehouse for safe keeping.
This brings me to my core anxiety: why, exactly, are companies pre-ordering GPUs? What benefit is there in doing so? Blackwell does not appear to be “more efficient” in a way that actually makes anybody a profit, and we’re potentially years from seeing these GPUs in operation in data centers at the scale they’re being shipped — so why would anybody be buying more?
I doubt these are new customers — they’re likely hyperscalers, neoclouds like CoreWeave and resellers like Dell and SuperMicro — because the only companies that can actually afford to buy them are those with massive amounts of cash or debt, to the point that even Google, Amazon, Meta and Oracle are taking on massive amounts of new debt, all without a plan to make a profit.
NVIDIA’s largest customers are increasingly unable to afford its GPUs, which appear to be increasing in price with every subsequent generation. NVIDIA’s GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that NVIDIA is dependent not on its customers, but on its customers’ credit ratings and financial backers.
To make matters worse, the key reason that one would buy a GPU is to either run services using it or rent it to somebody else, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars, and are thus dependent on venture capital and debt (remember, OpenAI has a $4 billion line of credit, and Anthropic a $2.5 billion one too).
In simple terms, NVIDIA’s customers rely on debt to buy its GPUs, and NVIDIA’s customers’ customers rely on debt to pay to rent them.
Yet it gets worse from there. Who, after all, are the biggest customers renting AI compute?
That’s right, AI startups, all of which are deeply unprofitable. Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June. Perplexity, one of the most “popular” AI companies, raised $200 million in September after raising $100 million in July after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed) after raising $500 million in December 2024. Cognition raised $400 million in September after raising $300 million in March. Cohere raised $100 million in September a month after it raised $500 million.
Venture capital is feeding money to either OpenAI or Anthropic to use their models, or in some cases to hyperscalers or neoclouds like CoreWeave or Lambda to rent NVIDIA GPUs. OpenAI and Anthropic then raise venture capital or debt to pay hyperscalers or neoclouds to rent NVIDIA GPUs. Hyperscalers and neoclouds then use either debt or existing cash flow (in the case of hyperscalers, though not for long!) to buy more NVIDIA GPUs.
Only one company actually makes a profit here: NVIDIA.
Aside: I should add there are also NVIDIA resellers like Dell and Supermicro, which buy NVIDIA GPUs, put them in servers, and sell them to neoclouds like Lambda or CoreWeave.
At some point, a link in this debt-backed chain breaks, because very little cashflow exists to prop it up. At some point, venture capitalists will be forced to stop funnelling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of those buying GPUs, which will make it harder for those companies buying GPUs to justify (or raise debt for) buying more GPUs.
And if I’m honest, none of NVIDIA’s success really makes any sense. Who is buying so many GPUs? Where are they going?
Why are inventories increasing? Is it really just pre-buying parts for future orders? Why are accounts receivable climbing, and how much product is NVIDIA shipping before it gets paid? While these are both explainable as “this is a big company and that’s how big companies do business” (which is true!), why do receivables not seem to be coming down?
And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money?
I worry about NVIDIA, not because I believe there’s a massive scandal, but because so much rides on its success, and its success rides on the back of dwindling amounts of venture capital and debt, because nobody is actually making money to pay for these GPUs.
In fact, I’m not even saying it goes tits up. Hell, it might even have another good quarter or two. It really comes down to how long people are willing to be stupid and how long Jensen Huang is able to call hyperscalers at three in the morning and say “buy one billion dollars of GPUs, pig.”
No, really! I think much of the US stock market’s growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs. At this point it’s barely about AI anymore, as AI revenue — real, actual cash made from selling services run on GPUs — doesn’t even cover its own costs, let alone create the cash flow necessary to buy $70,000 GPUs thousands at a time. It’s not like any actual innovation or progress is driving this bullshit!
In any case, the markets crave a healthy NVIDIA, as so many hundreds of billions of dollars of NVIDIA stock sit in the hands of retail investors and people’s 401ks, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry’s ability to innovate.
Once this pops — and it will pop, because there is simply not enough money to do this forever — there must be a referendum on those that chose to ignore the naked instability of this era, and the endless lies that inflated the AI bubble.
Until then, everybody is betting billions on the idea that Wile E. Coyote won’t look down.
2025-12-06 00:36:44
[Editor's Note: this piece previously said "Blackstone" instead of "Blackrock," which has now been fixed.]
I've been struggling to think about what to write this week, if only because I've written so much recently and because, if I'm honest, things aren't really making a lot of sense.
NVIDIA claims to have shipped six million Blackwell GPUs in the last four quarters — as I went into in my last premium piece — working out to somewhere between 10GW and 12GW of power (based on the power draw of B100 and B200 GPUs and GB200 and GB300 racks), which...does not make sense based on the amount of actual data center capacity brought online.
Similarly, Anthropic claims to be approaching $10 billion in annualized revenue — so around $833 million in a month — which would make it competitive with OpenAI's projected $13 billion in revenue, though I should add that based on my reporting extrapolating OpenAI's revenues from Microsoft's revenue share, I estimate the company will miss that projection by several billion dollars, especially now that Google's Gemini 3 launch has put OpenAI on a "Code Red," shortly after an internal memo revealed that Gemini 3 could “create some temporary economic headwinds for [OpenAI]."
Which leads me to another question: why?
Gemini 3 is "better," in the same way that every single new AI model is some indeterminate level of "better." Nano Banana Pro is, to Simon Willison, "the best available image generation model." But I can't find a clear, definitive answer as to why A) this is "so much better," B) why everybody is freaking out about Gemini 3, and C) why this would have created "headwinds" for OpenAI, headwinds so severe that it has had to rush out a model called Garlic "as soon as possible" according to The Information:
Last week, OpenAI’s chief research officer Mark Chen told some colleagues about the new model, which was performing well on the company’s evaluations, at least when compared to Gemini 3 and Anthropic’s Opus 4.5 in tasks involving coding and reasoning, according to a person with knowledge of the remarks.
But Garlic may be a bigger deal. Chen said OpenAI is looking to release a version of Garlic as soon as possible, which we think means people shouldn’t be surprised to see GPT-5.2 or GPT-5.5 release by early next year.
Garlic is a different model from Shallotpeat, a new large language model under development which Altman told staff in October would help OpenAI challenge Gemini 3. Garlic incorporates bug fixes that the company used in developing Shallotpeat during the pretraining process, the first stage of model training in which an LLM is shown data from the web and other sources so it can learn connections between them.
Right, sure, cool, another model. Again, why is Gemini 3 so much better and making OpenAI worried about "economic headwinds"? Could this simply be a convenient excuse to cover over, as Alex Heath reported a few weeks ago, ChatGPT's slowing download and usage growth?
Experts I've talked to arrived at two conclusions:
I don't know about garlic or shallotpeat or whatever, but one has to wonder at some point what it is that OpenAI is doing all day:
Altman said Monday in an internal Slack memo that he was directing more employees to focus on improving features of ChatGPT, such as personalizing the chatbot for the more than 800 million people who use it weekly, including letting each of those people customize the way it interacts with them.
Altman also said other key priorities covered by the code red included Imagegen, the image-generating AI that allows ChatGPT users to create anything from interior-design mockups to turning real-life photos into animated ones. Last month, Google released its own image generation model, Nano Banana Pro, to strong reviews.
Altman said other priorities consisted of improving “model behavior” so that people prefer the AI models that power ChatGPT more than models from competitors, including in public rankings such as LMArena; boosting ChatGPT’s speed and reliability; and minimizing overrefusals, a term that refers to when the chatbot refuses to answer a benign question.
So, OpenAI's big plan is to improve ChatGPT, make the image generation better, make people like the models better, improve rankings, make it faster, and make it answer more stuff.
I think it's fair to ask: what the fuck has OpenAI been doing this whole time if it isn't "make the model better" and "make people like ChatGPT more"? I guess the company shoved Sora 2 out the door — which, as of writing this sentence, is already off the top 30 free Android apps in the US and sits at 17 on the US free iPhone apps rankings, after everybody freaked out about it hitting number one. All that attention, and for what?
Indeed, signs seem to be pointing towards reduced demand for these services. As The Information reported a few days ago...
Multiple Microsoft divisions, for instance, have lowered how much salespeople are supposed to grow their sales of certain AI products after many of them missed sales-growth goals in the fiscal year that ended in June, according to two salespeople in Microsoft’s Azure cloud unit.
Microsoft, of course, disputed this, and said...
A Microsoft spokesperson said “aggregate sales quotas for AI products have not been lowered” but declined to comment specifically on the lowered growth targets. The spokesperson pointed to growth in the company’s overall cloud business, which has been lifted by rentals of AI servers by OpenAI and other AI developers.
Well, I don't think Microsoft has any problems selling compute to OpenAI — which paid it $8.67 billion just for inference between January and September — as I doubt there is any "sales team" having to sell compute to OpenAI.
But I also want to be clear that Microsoft added a word: "aggregate." The Information never used that word, and indeed nobody seems to have bothered to ask what "aggregate" means. I do, however, know that Microsoft has had trouble selling stuff. As I reported a few months ago, in August 2025 Redmond only had 8 million active paying licenses for Microsoft 365 Copilot out of the more-than-440 million people paying for Microsoft 365.
In fact, here's a rundown of how well AI is going for Microsoft:
Yet things are getting weird. Remember that OpenAI-NVIDIA deal? The supposedly "sealed" one where NVIDIA would invest $100 billion in OpenAI, with each tranche of $10 billion gated behind a gigawatt of compute? The one that never really seemed to have any fundament to it, but people reported as closed anyway? Well, per NVIDIA's most-recent 10-Q (emphasis mine):
Investment commitments are $6.5 billion as of October 26, 2025, including $5 billion in Intel Corporation which is subject to regulatory approval. In the third quarter of fiscal year 2026, we entered into a letter of intent with an opportunity to invest in OpenAI.
A letter of intent "with an opportunity" means jack diddly squat. My evidence? NVIDIA's follow-up mention of its investment in Anthropic:
In November 2025, we entered into an agreement, subject to certain closing conditions, to invest up to $10 billion in Anthropic.
This deal, as ever, was reported as effectively done, with NVIDIA investing $10 billion and Microsoft $5 billion, the word "will" deployed as if the money had been wired, despite the "closing conditions" and the words "up to" suggesting NVIDIA hasn't actually agreed how much it will invest. A few weeks later, the Financial Times reported that Anthropic is trying to go public as early as 2026 and that Microsoft and NVIDIA's money would "form part of a funding round expected to value the group between $300bn and $350bn."
For some reason, Anthropic is hailed as some sort of "efficient" competitor to OpenAI, at least based on what both The Information and Wall Street Journal have said, yet it appears to be raising and burning just as much as OpenAI. Why did a company that's allegedly “reducing costs” have to raise $13 billion in September 2025 after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.”
And if this company is so efficient and so good with money, why does it need another $15 billion, likely only a few months after it raised $13 billion? Though I doubt the $15 billion round closes this year, if it does, it would mean that Anthropic would have raised $31.5 billion in 2025 — which is, assuming the remaining $22.5 billion comes from SoftBank, not far from the $40.8 billion OpenAI would have raised this year.
In the event that SoftBank doesn't fund that money in 2025, Anthropic will have raised a little under $2 billion less ($16.5 billion) than OpenAI ($18.3 billion, consisting of $10 billion in June split between $7.5 billion from SoftBank and $2.5 billion from other investors, and an $8.3 billion round in August) this year.
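The tallies in the last two paragraphs work out like this (a sketch using only the round sizes cited above, as reported, not audited):

```python
# Tallying the 2025 fundraising figures cited above (all in billions of dollars).
anthropic_2025 = [3.5, 13.0]    # March and September rounds
anthropic_pending = 15.0        # the further round that may not close in 2025

openai_2025 = [7.5, 2.5, 8.3]   # June ($7.5B SoftBank + $2.5B others), August round
openai_pending = 22.5           # remaining SoftBank money, if it lands in 2025

anthropic_closed = sum(anthropic_2025)  # 16.5
openai_closed = sum(openai_2025)        # 18.3

print(f"Gap if neither pending tranche lands: ${openai_closed - anthropic_closed:.1f}B")
print(f"If both land: Anthropic ${anthropic_closed + anthropic_pending:.1f}B, "
      f"OpenAI ${openai_closed + openai_pending:.1f}B")
```

Either way you slice it — $16.5 billion versus $18.3 billion, or $31.5 billion versus $40.8 billion — the two companies are raising on the same order of magnitude.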
I think it's likely that Anthropic is just as disastrous a business as OpenAI, and I'm genuinely surprised that nobody has done the simple maths here, though at this point I think we're in the era of "not thinking too hard because when you do so everything feels crazy.”
Which is why I'm about to think harder than ever!
I feel like I'm asked multiple times a day both how and when the bubble will burst, and the truth is that it could be weeks or months or another year, because so little of this is based on actual, real stuff. While our markets are supported by NVIDIA's eternal growth engine, said growth engine isn't supported by revenues or real growth or really much of anything beyond vibes. As a result, it's hard to say exactly what the catalyst might be, or indeed what the bubble bursting might look like.
Today, I'm going to sit down and give you the scenarios — the systemic shocks — that would potentially start the unravelling of this era, as well as explain what a bubble bursting might actually look like, both for private and public companies.
This is the spiritual successor to August's AI Bubble 2027, except I'm going to have a little more fun and write out a few scenarios that range from likely to possible, and try and give you an enjoyable romp through the potential apocalypses waiting for us in 2026.
This piece has a generous 3000+ word introduction, because I want as many people to understand NVIDIA as possible. The (thousands of) words after the premium break get into arduous detail, but I’ve written this so that, ideally, most people can pick up the details early on and understand this clusterfuck.
I've reached a point with this whole era where there are many, many things that don't make sense, and I know I'm not alone. I've been sick since Friday last week, and thus I have had plenty of time to sit and think about stuff.
And by "stuff" I mean the largest company on the stock market: NVIDIA.
Look, I'm not an accountant, nor am I a "finance expert." I learned all of this stuff myself. I learn a great deal by coming to things from the perspective of being a dumbass, a valuable intellectual framework of "I need to make sure I understand each bit and explain it as simply as possible." In this piece, I'm going to try and explain both what this company is, how we got here, and ask questions that I, from the perspective of a dumbass, have about the company, and at least try and answer them.
Let's start with a very simple point: for a company of such remarkable size, very few people — myself included, at times! — seem to actually understand NVIDIA.
NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market.
This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services that are behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but it doesn’t really matter to the bigger picture.
As an aside, NVIDIA makes other things unrelated to the chips that power large language models, like the consumer graphics cards you'd find in a gaming PC or gaming console, but the reason I'm not going to discuss these things is that 90% of NVIDIA's revenue now comes from selling either GPUs for LLMs, or the associated software and hardware to make all of that stuff run.
Back in 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company.
The thing is, GPUs are great for parallel processing — essentially spreading a task across multiple (by which I mean thousands of) processor cores at the same time — which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example.
CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that Nvidia’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.
Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating out other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition would give NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem.
This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of specialized GPUs for AI workloads. As pseudonymous finance account JustDario connected (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies."
Though it took until November 2022 for ChatGPT to really start the fires, in March 2020, NVIDIA began the AI bubble with the launch of its "Ampere" architecture, and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "SuperPod." Per the press release:
A data center powered by five DGX A100 systems for AI training and inference running on just 28 kilowatts of power costing $1 million can do the work of a typical data center with 50 DGX-1 systems for AI training and 600 CPU systems consuming 630 kilowatts and costing over $11 million, Huang explained.
One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "Superpod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers and you'd be a god damn idiot to not do this and yes, it requires so much more money than you used to spend.
At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 per GPU at launch, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. Its successor, the DGX H100, launched in 2022 with eight H100 GPUs (starting at $25,000 per GPU, with the next-generation "Hopper" chips apparently up to 30 times more powerful than the A100), and retailed from $300,000.
You'll be shocked to hear that the next-generation Blackwell systems started at $500,000 when launched in 2024. A single B200 GPU costs at least $30,000.
Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly (edit: I wrote monopsony in a previous version, sorry), and yes, you can have a situation where a market has a monopoly, even if there is, at least in theory, competition. Once a particular brand — and particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive.
Anyway, the reason that I'm writing all of this out is because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year.
With Blackwell — the third generation of AI-specialized GPUs — came a problem, in that these things were so much more power-hungry, and required entirely new ways of building data centers, along with different cooling and servers to put them in, much of which was sold by NVIDIA. While you could kind of build around your current data centers to put A100s and H100s into production, Blackwell was...less cooperative, and ran much hotter.
To quote NVIDIA Employee Number 4 David Rosenthal:
The systems are estimated to be more than half the capex for a new data center. Much of its opex is power. Just as with mining rigs, the key feature of each successive generation of AI chips is that it is more efficient at using power. But that doesn't mean they use less power, they use more but less per operation. The need for enhanced power distribution and the concomitant cooling is what has prevented new AI systems being installed in legacy data centers. Presumably the next few generations will be compatible with current state of the art data center infrastructure, so they can directly replace their predecessors and thereby reduce costs.
In simple terms, Blackwell runs hot, so much hotter than Ampere (A100) or Hopper (H100) GPUs that it requires entirely different ways to cool it, meaning your current data center needs to be ripped apart to fit them.
Huang has confirmed that Vera Rubin, the next generation of GPUs, will use the same infrastructure as Blackwell. I would bet money that it's also much more expensive.
Anyway, all of this has been so good for NVIDIA. As the single vendor for the most important component in the entire AI boom, it has set the terms for how much you pay and how you build any and all AI infrastructure. While there are companies like Supermicro and Dell who buy NVIDIA GPUs and ship them in servers to customers, that's just fine for NVIDIA CEO Jensen Huang, as that's somebody else selling his GPUs for him.
NVIDIA has been printing money, quarter after quarter, going from a meager $7.192 billion in total revenue in the third (calendar year) quarter of 2023 to an astonishing $50 billion in just data center revenue (that's where the GPUs are) in its most recent quarter, for a total of $57 billion in revenue, and the company projects to make $63 billion to $67 billion in the next quarter.
Now, I'm going to stop you here, because this bit is really important, really simple, yet nobody thinks about it much: NVIDIA makes so much money, and it makes it from a much smaller customer base than most companies, because there are only so many entities that can buy thousands of chips that cost $50,000 or more each.
$35 billion, $39 billion, $44 billion, $46 billion and $57 billion are very large amounts of money, and the entities pumping those numbers into the stratosphere are collectively having to spend hundreds of billions of dollars to make it happen.
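For context, here's the quarter-over-quarter growth implied by that revenue sequence (a rough sketch; the figures are the ones cited above, in billions of dollars):

```python
# Sequential growth implied by the quarterly revenue figures above ($B).
quarterly_revenue = [35, 39, 44, 46, 57]

# Percentage change from each quarter to the next.
growth = [(b / a - 1) * 100 for a, b in zip(quarterly_revenue, quarterly_revenue[1:])]
print([f"{g:.1f}%" for g in growth])  # ['11.4%', '12.8%', '4.5%', '23.9%']
```

Double-digit sequential growth, quarter after quarter, funded by a handful of customers spending hundreds of billions of dollars.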
So, let me give you a theoretical example. I swear I'm going somewhere with this. You, a genius, have decided you are about to join the vaunted ranks of "AI data center ownership." You decide to build a "small" AI data center — 25MW (megawatts, which in this example refers to the combined power draw of the tech inside the data center). That can't be that much, right? OpenAI is building a 1.2GW one out in Abilene, Texas. How much could this tiny little thing cost?
Sidenote: It’s a minor thing, but I want to clarify something. I said “in this example” in the previous paragraph because when we talk about the power capacity of a data center, we could be referring to one of two things. The first is the power draw of the servers in the facility — which is called the IT Load — or the total amount of power that can be provided to that facility.
Here’s where it gets tricky. A facility that can draw, say, 25MW of power from the grid can’t just use all of that in one go. You need a reserve for what’s known as the “design day,” which is the hottest day of the year, when the facility’s cooling systems are under the most strain, and when power transmission losses are at their highest. That reserve is, from what I’ve been told, around 30% of the total available electricity.
Cooling systems are power-hungry! Who knew? Me. I did. I’ve been telling you for over a year.
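To put rough numbers on that reserve (a sketch using the ~30% figure I was given; these are illustrative numbers, not a site survey):

```python
# Rough illustration of grid power vs. usable IT load, per the ~30% design-day
# reserve described above.
grid_power_mw = 25.0        # what the facility can draw from the grid
design_day_reserve = 0.30   # headroom for cooling strain on the hottest day

usable_it_load_mw = grid_power_mw * (1 - design_day_reserve)
print(f"~{usable_it_load_mw:.1f}MW of usable IT load")  # ~17.5MW
```

In other words, a "25MW" facility measured at the grid can only commit something like 17.5MW to the servers themselves.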
Okay, well, let's start with those racks. You're gonna need to give Jensen Huang $600 million right away, as you need 200 GB200 racks. You're also gonna need a way to make them network together, because otherwise they aren't going to be able to handle all those big IT loads, so that's gonna be another $80 million or more, and you're going to need storage and servers to sync all of this up, which is, let's say, another $35 million.
So we're at $715 million. Should be fine, right? Everybody's cool and everybody's normal. This is just a small data center after all. Oops, forgot cooling and power delivery stuff — that's another $5 million. $720 million. Okay.
Anyway, sadly data centers require something called a "building." Construction costs for a data center are somewhere from $8 million to $12 million per megawatt, so, crap, okay. That's $250 million, but probably more like $300 million. We're now up to $1.02 billion, and we haven't even got the power yet.
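Adding up the line items so far (same figures as above, with construction taken at the $8 million to $12 million per megawatt range):

```python
# Summing the hypothetical 25MW build-out above (all figures in millions of dollars).
gpu_racks = 600           # 200 GB200 racks
networking = 80
storage_and_servers = 35
cooling_and_power = 5
equipment = gpu_racks + networking + storage_and_servers + cooling_and_power  # 720

# Construction at $8M-$12M per megawatt for a 25MW facility:
construction_low, construction_high = 25 * 8, 25 * 12  # 200 to 300

total_high = equipment + construction_high
print(f"Equipment: ${equipment}M; total: up to ${total_high / 1000:.2f}B")
```

That's how a "small" 25MW data center gets you to roughly a billion dollars before you've bought a single watt of power.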
Okay, sick. Do you have one billion dollars? You don't? No worries! Private credit — money loaned by non-banking entities — has been feeding more than $50 billion a quarter into the hungry mouths of anybody who desires to build a data center. You need $1.02 billion. You get $1.5 billion, because, you know, "stuff happens." Don't worry about those pesky high interest rates — you're about to be printing big money, AI style!
Now that you're done raising all that cash, it'll only take anywhere from 6 to 18 months for site selection, permitting, design, development, construction, and energy procurement. You're also going to need about 20 acres of land for that 100,000 square foot data center. You may wonder why 100,000 square feet needs that much space, and that's because all of the power and cooling equipment takes up an astonishing amount of room.
So, yeah, after two years and over a billion dollars, you too can own a data center with NVIDIA GPUs that turn on, and at that point, you will offer a service that is functionally identical to everybody else buying GPUs from NVIDIA.
Your competitors are Amazon, Google and Microsoft — the big hyperscalers, with brands that most people have heard of, like AWS and Azure — followed by neoclouds: AI compute companies selling the same thing as you, except they're frequently backed directly by NVIDIA.
Oh, also, this stuff costs an indeterminately-large amount of money to run. You may wonder why I can't tell you how much, and that's because nobody wants to actually discuss the cost of running GPUs, the thing that underpins our entire stock market.
There are good reasons, too. One does not just run "a GPU" — it's a GPU in a server of other GPUs with associated hardware, all drawing power in varying amounts, all running in sync with networking gear that also draws power, with varying amounts of user demand and shifts in the costs of power from the power company.
But what we can say is that the up front cost of buying these GPUs and their associated crap is such that it's unclear if they ever will generate a profit, because these GPUs run hot, all the time, and that causes some amount of them to die.
Here are some thoughts I have had:
The NVIDIA situation is one of the most insane things I've seen in my life.
The single-largest, single-most-valuable, single-most-profitable company on the stock market has got there through selling ultra-expensive hardware that takes hundreds of millions or billions of dollars (and years of construction in some cases) to start using, at which point it...doesn't make much revenue and doesn't seem to make a profit.
Said hardware is funded by a mixture of cashflow from healthy businesses (see: Microsoft) or massive amounts of debt (see: everybody who is not a hyperscaler, and, at this point, some hyperscalers). The response to the continued proof that generative AI is not making money is to buy more GPUs, and it doesn't appear anybody has ever worked out why.
This problem has been obvious for a long time, too.
Today I'm going to explain to you — simply, but at length — why I am deeply concerned, and how deeply insane this situation has become.