How Much Money Do OpenAI And Anthropic Actually Make?

2025-08-02 00:20:01

Hello and welcome to the latest premium edition of Where's Your Ed At, I appreciate any and all of your subscriptions. I work very hard on these, and they help pay for the costs of running Ghost and, well, my time investigating different things. If you're on the fence, subscribe! I promise it's worth it.

I also want to give a HUGE thank you to Westin Lee, a writer who covers business and the use of AI, and who originated the whole "what if we used ARR to work out what these people make?" idea. He's been a tremendous help, and I recommend you check out his work.


If you're an avid reader of the business and tech media, you'd be forgiven for thinking that OpenAI has made (or will make) in excess of $10 billion this year, and Anthropic in excess of $4 billion.

Why? Because both companies have intentionally reported or leaked their "annualized recurring revenue" – a month's revenue multiplied by 12. OpenAI leaked yesterday to The Information that it hit $12 billion in "annual recurring revenue" – suggesting that its July 2025 revenues were around $1 billion. The Information reported on July 1 2025 that Anthropic's annual run rate was $4 billion – meaning that its revenue for the month of June 2025 was around $333 million. Then, yesterday, it reported that the run rate was up to $5 billion. 

As a reminder, both of these companies burn billions of dollars – more than $5 billion each in 2024.

These figures do not, however, mean that their previous months were this high, nor do they mean that they've "made" anything close to these numbers. Annualized recurring revenue is one of the most regularly-abused statistics in the startup world, and can mean everything from "[actual month]x12" to "[30-day period of revenue]x12," and in most cases it's a number that doesn't factor in churn. Some companies even move around the start dates for contracts as a means of gaming this number. 

ARR also doesn’t factor the seasonality of revenue into the calculation. For example, you’d expect ChatGPT to have peaks and troughs that correspond with the academic year, with students cancelling their subscriptions during the summer break. If you use ARR, you’re essentially taking one month and treating it as representative of the entire calendar year, when it isn’t. 
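
To make the arithmetic concrete, here's a minimal sketch in Python, using entirely made-up monthly figures, of how the same company can report wildly different "annualized" numbers depending on which month gets multiplied by 12:

```python
# Hypothetical monthly revenues for one year, in millions of dollars,
# with a dip over the summer months as students cancel subscriptions.
monthly_revenue = [80, 85, 90, 95, 100, 70, 65, 70, 95, 100, 105, 110]

actual_annual = sum(monthly_revenue)      # what the company actually made
arr_after_may = monthly_revenue[4] * 12   # "ARR" announced off the May number
arr_after_july = monthly_revenue[6] * 12  # "ARR" announced off the July number

print(f"Actual annual revenue:    ${actual_annual}m")   # $1065m
print(f"ARR announced after May:  ${arr_after_may}m")   # $1200m
print(f"ARR announced after July: ${arr_after_july}m")  # $780m
```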

Sidenote: I want to make one thing especially obvious. When I described ARR as “one of the most regularly-abused statistics in the startup world,” I meant it. ARR is only really used by startups (and other non-public companies). It’s not considered a GAAP-standard accounting practice, and public companies (those traded on the stock market) generally don’t use it because they have to report actual figures, and so there’s no point. You can’t really use crafty trickery to obfuscate something that you have to, by law, state publicly and explicitly for all to see. 

These companies are sharing (or leaking) their annualized revenues for a few reasons:

  • So that the tech press reports them in a way that makes it sound like they'll make that much in a year.
  • So that the tech press reports a number that sounds bigger and better than the monthly amount. For example, calling a startup a "$100 million ARR" company (like vibe-coding platform Lovable) sounds way better than calling them an "$8.3 million a month company," in part because the number is smaller, and in part because, I imagine, it might mislead a reader into believing that's what they've made every month. Yes, saying the ARR figure does that already.
  • So that investors will believe the company looks bigger and more successful than it is.

In any case, I want to be clear this is a standard metric in non-public Software-as-a-Service (SaaS) businesses. Nothing is inherently wrong with the metric, save for its use and what's being interpreted from it.

Nevertheless, there has been a lot of reporting on both OpenAI and Anthropic's revenues that has created incredible confusion in the market, confusion that benefits both companies, making them seem far more successful than they really are and giving them credit for revenue they have yet to book.

Before I dive into this — and yes, before the premium break — I want to establish some facts.

OpenAI:

Anthropic:

The intention of either reporting or leaking their annualized revenue numbers was to make you think that OpenAI would hit its projected $12.7 billion revenue number, and Anthropic would hit its "optimistic" $4 billion number, because those "annualized revenue" figures sure seem to have the word "annual" in them.

A sidenote about ARR, and a potential way my analysis is actually too kind: In this analysis I have assumed that OpenAI and Anthropic's revenues have always gone up.

Annualized revenue is a one-month snapshot of a business. Though I have no way of proving it — which is why I don't try! — there is always a chance that one or more of the months I discuss here was lower than the one that followed. If I had to speculate, I’d wager that the summer months — those outside the normal academic calendar — see lower subscription revenue for these companies. Nevertheless, we do not have that information, and thus I will not factor it into the analysis. But even one "off" month would be bad for either Anthropic or OpenAI.

I will add that we've never had real reporting on OpenAI or Anthropic's actual full-year revenues before, which is why I am doing my best to work them out.

Yet through a historical analysis of reported annual recurring revenue numbers over the past three years, I've found things to be a little less certain. You see, when a company reports its "annual recurring revenue," what it's actually telling you is how much it made in a month. So I've sat down and found every single god damn bit of reporting about these numbers, calculating (based on the compound growth necessary between the months of reported monthly revenue) how much these companies are actually making in cash.
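
To illustrate the method (this is a sketch of the core calculation, not my actual spreadsheet): given two reported ARR figures some months apart, you can back out the implied compound monthly growth rate and estimate the cash actually earned in between.

```python
# Two reported ARR figures: $5.5bn annualized in December 2024 (month 0)
# and $10bn annualized in May 2025 (month 5). All figures in $bn.
arr_start, arr_end, months_apart = 5.5, 10.0, 5

monthly_start = arr_start / 12
monthly_end = arr_end / 12

# Compound monthly growth rate implied by the two data points.
growth = (monthly_end / monthly_start) ** (1 / months_apart) - 1
print(f"Implied monthly growth: {growth:.1%}")  # ~12.7%

# Interpolate each month's revenue between the two points, then sum
# to estimate the cash actually earned over the period.
revenues = [monthly_start * (1 + growth) ** m for m in range(months_apart + 1)]
print(f"Estimated cash earned Dec-May: ${sum(revenues):.2f}bn")  # ~$3.79bn
```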

My analysis, while imperfect (as we lack the data for certain months), aligns closely enough with projections that I am avoiding any egregious statements. OpenAI and Anthropic's previous projections were fairly accurate, though as I'll explain in this piece, I believe their new ones are egregious and ridiculous.

More importantly, in all of these stories, there was only one time that these companies openly shared their revenues — when OpenAI shared its $10 billion run rate in May, though the July $12 billion ARR leak is likely intentional too. In fact, I believe both were an intentional attempt to mislead the general public into believing the company was more successful than it is.

Based on my analysis, OpenAI made around $3.616 billion in revenue in 2024, and so far in 2025 has made, by my calculations, around $5.266 billion in revenue as of the end of July.

This is also a slower growth rate than it’s experienced so far in the year. Going from $5.5 billion in annualized revenue in December 2024 to $10 billion annualized in May 2025 was a compound growth rate of around 12.7% a month. The "jump" from $10 billion ARR to $12 billion ARR works out to 9.54% a month. While I realize this may not seem like a big drop, every single penny counts, and percentage point shifts are worth hundreds of millions (if not billions) of dollars.

OpenAI has been projected to make $12.7 billion in revenue in 2025. Hitting this number will be challenging, and will require OpenAI to grow by roughly 14%, every single month, without fail. For OpenAI to hit this number, it will need to make nearly $2 billion a month in revenue by the end of the year to account for the disparity with the earlier months of the year, when it made far, far less.
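
A rough sketch of that requirement, assuming my estimate of $5.266 billion earned through July and roughly $1 billion of revenue in July itself; with these rounded inputs, the search lands at about 13.5%, in the same ballpark as the 14% figure:

```python
revenue_so_far = 5.266  # my estimate of OpenAI's Jan-Jul 2025 revenue, $bn
july_monthly = 1.0      # $12bn annualized implies roughly $1bn for July
target = 12.7           # OpenAI's projected 2025 revenue, $bn

def full_year(g):
    # Full-year revenue if August through December grow at monthly rate g.
    return revenue_so_far + sum(july_monthly * (1 + g) ** m for m in range(1, 6))

# Find the compound monthly growth rate needed to reach the projection.
growth = 0.0
while full_year(growth) < target:
    growth += 0.0001

december = july_monthly * (1 + growth) ** 5
print(f"Required monthly growth: {growth:.1%}")       # ~13.5%
print(f"Required December revenue: ${december:.2f}bn")  # ~$1.9bn a month
```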

How Exactly Is OpenAI Calculating Annualized Revenue?

I also have serious suspicions about how much OpenAI actually made in May, June and July 2025. 

While The Information reported OpenAI hit $12 billion in annualized revenue, they did so in an obtuse way:

OpenAI roughly doubled its revenue in the first seven months of the year, reaching $12 billion in annualized revenue, according to a person who spoke to OpenAI executives.  

Yet the New York Times, mere days later, reported $13 billion annualized revenue:

OpenAI's business continues to surge. DealBook hears that the company's annual recurring revenue has soared to $13 billion, up from $10 billion in June — and is projected to surpass $20 billion by the end of the year.

First and foremost, it’s incredibly fucking suspicious that two very different numbers were reported so close together, and even more so that the June 9 2025 announcement of OpenAI hitting $10 billion in annualized revenue was not, as I had originally believed, discussing the month of May 2025.

This likely means that OpenAI is not using the standard annualized revenue metric, which would traditionally mean “the last month’s revenue multiplied by 12,” and is instead choosing “if all the monthly subscribers and contracts that are currently paying us on this day, June 9 2025, were to be multiplied by 12, we’d have $X annualized revenue.”

This is astronomically fucking dodgy. For the sake of this analysis, I am assuming any announcement of annualized revenue refers to the previous month. So, for example, when OpenAI announced it hit $10 billion in annualized revenue, I am going to assume this is for the month of May 2025.

This analysis is going to favour the companies in question. If OpenAI “hit $10 billion annualized” in or around June 9 2025, it likely means that its May revenues were lower than that. Similarly, OpenAI “hitting” $12 billion in annualized revenue (announced at the end of July 2025), which I have factored into my analysis, is treated as the revenue it hit in July 2025. 

In reality, this likely credits them with more revenue than they deserve. If June’s annualized revenue was $10 billion, that means they made $833 million that month, rather than the $939 million I credit them with. 

A company cannot hit $12 billion AND $13 billion annualized in one month unless it is playing extremely silly games with the numbers in question, such as moving around when you start a 30-day period to artificially inflate things. In any case, my analysis puts OpenAI’s annualized revenue for August at around $13.145 billion, so in line with a “$13 billion annualized” figure.

In any case, I am sticking with my analysis as it stands. However, the timing of these annualized revenue leaks now makes me doubt the veracity of their previous leaks, in the sense that there’s every chance that they too are either inflated or used in a deceptive manner.

Based on these numbers, OpenAI's current growth rate is around 9.54% a month — and at that current pace, it will finish the year at around $11.89 billion in revenue. This is an impressive number, meaning it’d be making over $1.5 billion a month in revenue by December 2025 — but such an impressive number will be difficult to reach, and would mean it has something in the region of $18 billion in annualized revenue by the end of the year.
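
Here's a sketch of where those figures come from, assuming July's revenue was around $1 billion ($12 billion annualized) and that the 9.54% monthly pace holds through December:

```python
july_monthly = 12.0 / 12  # $12bn ARR implies ~$1bn of revenue in July, $bn
growth = 0.0954           # monthly pace implied by $10bn -> $12bn ARR over two months
revenue_so_far = 5.266    # my estimate for January through July, $bn

# Project each remaining month of 2025 at the current pace.
aug_to_dec = [july_monthly * (1 + growth) ** m for m in range(1, 6)]
december = aug_to_dec[-1]

print(f"December monthly revenue: ${december:.2f}bn")                  # ~$1.58bn
print(f"Implied year-end annualized revenue: ${december * 12:.1f}bn")  # ~$18.9bn
full_year = revenue_so_far + sum(aug_to_dec)
print(f"Estimated full-year 2025 revenue: ${full_year:.2f}bn")         # ~$11.89bn
```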

I also question whether it can make it, and even if it does, how it could possibly afford to serve that revenue long-term.


In Anthropic's case, I am extremely confident, based on its well-reported annualized revenues, that Anthropic has, through July 2025, made around $1.5 billion in revenue. This is, of course, assuming that its annualized revenue leaks are for calendar months, and if they're not, this number could actually be lower.

This is not a question of opinion. Other than April, we have ARR for every single month of the year. Bloomberg is now reporting that Anthropic sees its revenue rate "maybe [going] to $9 billion annualized by year-end," which, to use a technical term, is total bullshit, especially as this number was leaked while Anthropic is fundraising.

In any case, I believe Anthropic can beat its base case estimates. It will almost certainly cross $2 billion in revenue, but I also believe that revenue growth is slowing for these companies, and the amount of cash we credit them as actually making is decidedly more average than "annualized revenue" would have you believe.

Is SoftBank Still Backing OpenAI?

2025-07-25 01:19:51

Earlier in the week, the Wall Street Journal reported that SoftBank and OpenAI's "$500 billion" "AI Project" was now setting a "more modest goal of building a small data center by year-end."

To quote:

A $500 billion effort unveiled at the White House to supercharge the U.S.’s artificial-intelligence ambitions has struggled to get off the ground and has sharply scaled back its near-term plans.

Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center.

One might be forgiven for being a little confused here, as there is, apparently, a Stargate data center being built in Abilene, Texas. Yet the Journal added another detail:

Altman has used the Stargate name, shared with a 1994 Kurt Russell film about aliens who teleport to ancient Egypt, on projects that aren’t being financed by the partnership between OpenAI and SoftBank. The trademark to Stargate is held by SoftBank, according to public filings.

For instance, OpenAI refers to a data center in Abilene, Texas, and another it agreed in March to use in Denton, Texas, as part of Stargate even though they are being done without SoftBank, some of the people familiar with the matter said.

Confusing, right? One might also be confused by the Bloomberg story called "Inside The First Stargate AI Data Center," which had the subheadline "OpenAI, Oracle and SoftBank hope that the site in Texas is the first of many across the US." More confusingly, the piece talked about Stargate LLC, which OpenAI, Oracle and SoftBank were (allegedly) shareholders of.

Yet I have confirmed that SoftBank never, ever had any involvement with the site in Abilene, Texas. It didn't fund it, it didn't build it, it didn't choose the site and, in fact, does not appear to have anything to do with any data center that OpenAI uses. The data center many, many reporters have referred to as "Stargate" has nothing to do with the "Stargate data center project." Any reports suggesting otherwise are wrong, and I believe that this is a conscious attempt by OpenAI and SoftBank to mislead the public.

I confirmed the following with a PR representative from Crusoe, one of the developers of the site in Abilene, Texas:

Funding for construction of [the] Abilene data center is a JV between Crusoe, Blue Owl and Primary Digital Infrastructure. Confirming that Softbank is not and has not been involved in the funding for its construction. 

And, as a reminder, Stargate as an entity was never formed, my source being Oracle CEO Safra Catz on Oracle’s earnings call.

This is an astonishing — and egregious — act of misinformation on the part of Sam Altman and OpenAI. By my count, at least 15 different stories attribute the Abilene, Texas data center to the Stargate project, despite the fact that SoftBank was never and has never been involved. One would forgive anyone who got this wrong, because OpenAI itself engaged in deliberate deception in its own announcement of the Stargate Project [emphasis mine]:

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.

You can weasel-word all you want about how nobody has directly reported that SoftBank was or was not part of Abilene. This is a deliberate, intentional deception, perpetrated by OpenAI and SoftBank, who deliberately misled both the public and the press as a means of keeping up the appearance that SoftBank was deeply involved in (and financially obligated to) the Abilene site.

Based on reporting that existed at the time but was never drawn together, it appears that Abilene was earmarked by Microsoft for OpenAI's use as early as July 2024, and never involved SoftBank in any way, shape or form. The "Stargate" Project, as reported, was over six months old when it was announced in January 2025, and there have been no additional sites added other than Abilene.

In simpler terms, Stargate does not exist other than as a name that Sam Altman gives things to make them feel more special than they are, and SoftBank was never involved. Stargate does not exist as reported.

The use of the term "Stargate" is an intentional act of deceit, but beneath the surface lies, in my mind, a much bigger story. Furthermore, I believe this deceit means that we should review any and all promises made by OpenAI and SoftBank, and reconsider any and all statements they've made, or that have been made about them.

Let's review.

According to reporting:

Yet based on my research, it appears that SoftBank may not be able to — or want to — proceed with any of these initiatives other than funding OpenAI's current round, and evidence suggests that even if it intends to, SoftBank may not be able to afford investing in OpenAI further.

I believe that SoftBank and OpenAI's relationship is an elaborate ruse, one created to give SoftBank the appearance of innovation, and OpenAI the appearance of a long-term partnership with a major financial institution that, from my research, is incapable of meeting the commitments it has made.

In simpler terms, OpenAI and SoftBank are bullshitting everyone.

I can find no tangible proof that SoftBank ever intended to seriously invest money in Stargate, and have evidence from its earnings calls that suggests SoftBank has no idea — or real strategy — behind its supposed $3-billion-a-year deployment of OpenAI software.

In fact, other than the $7.5 billion that SoftBank invested earlier in the year, I don't see a single dollar actually earmarked for anything to do with OpenAI at all.

SoftBank is allegedly going to send upwards of $20 billion to OpenAI by December 31 2025, and doesn't appear to have started any of the processes necessary to do so, or shown any signs it will. This is not a good situation for anybody involved.

The Hater's Guide To The AI Bubble

2025-07-22 00:07:38

Hey! Before we go any further — if you want to support my work, please sign up for the premium version of Where’s Your Ed At, it’s a $7-a-month (or $70-a-year) paid product where every week you get a premium newsletter, all while supporting my free work too. 

Also, subscribe to my podcast Better Offline, which is free. Go and subscribe then download every single episode. Here's parts 1, 2 and 3 of the audio version of the Hater's Guide.

One last thing: This newsletter is nearly 14,500 words. It’s long. Perhaps consider making a pot of coffee before you start reading. 


Good journalism means making sure that history is actively captured, appropriately described, and honestly assessed, and it's accurate to describe things as they currently stand as alarming.

And I am alarmed.

Alarm is not a state of weakness, or belligerence, or myopia. My concern does not dull my vision, even though it's convenient to frame it as somehow alarmist, like I have some hidden agenda or bias toward doom. I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and the consultancies loses money.

I also dislike the fact that I, and others like me, are held to a remarkably different standard than those who paint themselves as "optimists," which typically means "people who agree with what the market wishes were true." Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry, constantly having to prove themselves, as if somehow there's something malevolent or craven about criticism, that critics "do this for clicks" or "to be a contrarian."

I don't do anything for clicks. I don't have any stocks or short positions. My agenda is simple: I like writing, it comes to me naturally, I have a podcast, and it is, on some level, my job to try and understand what the tech industry is doing on a day-to-day basis. It is easy to dismiss what I say as going against the grain because "AI is big," but I've been railing against bullshit bubbles since 2021 — the anti-remote work push (and the people behind it), the Clubhouse and audio social networks bubble, the NFT bubble, the made-up quiet quitting panic, and I even, though not as clearly as I wished, called that something was up with FTX several months before it imploded.

This isn't "contrarianism."  It's the kind of skepticism of power and capital that's necessary to meet these moments, and if it's necessary to dismiss my work because it makes you feel icky inside, get a therapist or see a priest.

Nevertheless, I am alarmed, and while I have said some of these things separately, based on recent developments, I think it's necessary to say why. 

In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say "the AI bubble," I mean the entirety of the AI trade.

And it's alarmingly simple, too.

But this isn’t going to be saccharine, or whiny, or simply worrisome. I think at this point it’s become a little ridiculous to not see that we’re in a bubble. We’re in a god damn bubble, it is so obvious we’re in a bubble, it’s been so obvious we’re in a bubble, a bubble that seems strong but is actually very weak, with a central point of failure.

I may not be a contrarian, but I am a hater. I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer excitement that some executives and writers have that workers may be replaced by AI — and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so.

And so I present to you — the Hater’s Guide to the AI bubble, a comprehensive rundown of arguments I have against the current AI boom’s existence. Send it to your friends, your loved ones, or print it out and eat it.  

No, this isn’t gonna be a traditional guide, but something you can look at and say “oh that’s why the AI bubble is so bad.” And at this point, I know I’m tired of being gaslit by guys in gingham shirts who desperately want to curry favour with other guys in gingham shirts but who also have PhDs. I’m tired of reading people talk about how we’re “in the era of agents” that don’t fucking work and will never fucking work. I’m tired of hearing about “powerful AI” that is actually crap, and I’m tired of being told the future is here while having the world’s least-useful, most-expensive cloud software shoved down my throat.

Look, the generative AI boom is a mirage, it hasn’t got the revenue or the returns or the product efficacy for it to matter, everything you’re seeing is ridiculous and wasteful, and when it all goes tits up I want you to remember that I wrote this and tried to say something.

The Magnificent 7's Weakpoint: NVIDIA

As I write this, NVIDIA is currently sitting at $170 a share — a dramatic reversal of fate after the pummelling it took from the DeepSeek situation in January, which sent it tumbling to a brief late-April trip below $100 before things turned around. 

The Magnificent 7 stocks — NVIDIA, Microsoft, Alphabet (Google), Apple, Meta, Tesla and Amazon — make up around 35% of the value of the US stock market, and of that, NVIDIA's market value makes up about 19% of the Magnificent 7. This dominance is also why ordinary people ought to be deeply concerned about the AI bubble. The Magnificent 7 is almost certainly a big part of their retirement plans, even if they’re not directly invested.

Back in May, Yahoo Finance's Laura Bratton reported that Microsoft (18.9%), Amazon (7.5%), Meta (9.3%), Alphabet (5.6%), and Tesla (0.9%) alone make up 42.4% of NVIDIA's revenue. The breakdown makes things worse. Meta spends 25% — and Microsoft an alarming 47% — of its capital expenditures on NVIDIA chips, and as Bratton notes, Microsoft also spends money renting servers from CoreWeave, which analyst Gil Luria of D.A.Davidson estimates accounted for $8 billion (more than 6%) of NVIDIA's revenue in 2024. Luria also estimates that neocloud companies like CoreWeave and Crusoe — which exist only to provide AI compute services — account for as much as 10% of NVIDIA's revenue.

NVIDIA's climbing stock value comes from its continued revenue growth. In the last four quarters, NVIDIA has seen year-over-year growth of 101%, 94%, 78% and 69%, and, in the last quarter, a little statistic was carefully brushed under the rug: that NVIDIA missed, though narrowly, on data center revenue. This is exactly what it sounds like — GPUs that are used in servers, rather than gaming consoles and PCs. Analysts estimated it would make $39.4 billion from this category, and NVIDIA only (lol) brought in $39.1 billion. Then again, the miss could be attributed to its problems in China, especially as the H20 ban has only just been lifted. In any case, it was a miss!

NVIDIA's quarter-over-quarter growth has also become aggressively normal — from 69%, to 59%, to 12%, to 12% again each quarter, which, again, isn't bad (it's pretty great!), but when 88% of your revenue is based on one particular line in your earnings, it's a pretty big concern, at least for me. Look, I'm not a stock analyst, nor am I pretending to be one, so I am keeping this simple:

  • NVIDIA relies not only on selling lots of GPUs each quarter, but it must always, always sell more GPUs the next quarter.
  • 42% of NVIDIA's revenue comes from Microsoft, Amazon, Meta, Alphabet and Tesla continuing to buy more GPUs.
  • NVIDIA's value and continued growth is heavily reliant on hyperscaler purchases and continued interest in generative AI.
  • The US stock market's continued health relies, on some level, on five or six companies (it's unclear how much Apple buys GPU-wise) spending billions of dollars on GPUs from NVIDIA.
    • An analysis from portfolio manager Danke Wang from January found that the Magnificent 7 stocks accounted for 47.87% of the Russell 1000 Index's returns in 2024 (an index of the 1000 highest-ranked stocks on FTSE Russell’s index).

In simpler terms, 35% of the US stock market is held up by five or six companies buying GPUs. If NVIDIA's growth story stumbles, it will reverberate through the rest of the Magnificent 7, making them rely on their own AI trade stories.

And, as you will shortly find out, there is no AI trade, because generative AI is not making anybody any money.

The Hollow "AI Trade"

I'm so tired of people telling me that companies are "making tons of money on AI." Nobody is making a profit on generative AI other than NVIDIA. No, really, I’m serious. 

The Magnificent 7's AI Story Is Flawed, With $560 Billion of Capex Between 2024 and 2025 Leading To $35 Billion of Revenue, And No Profit

If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion.

This is egregiously fucking stupid.

Microsoft AI Revenue In 2025: $13 billion, with $10 billion from OpenAI, sold "at a heavily discounted rate that essentially only covers costs for operating the servers."

Capital Expenditures in 2025: $80 billion

As of January 2025, Microsoft's "annualized" — meaning [best month]x12 — revenue from artificial intelligence was around $13 billion, a number that it chose not to update in its last earnings, likely because it's either flat or not growing, though it could do so in its upcoming late-July earnings. Yet the problem is that $10 billion of that revenue, according to The Information, comes from OpenAI's spend on Microsoft's Azure cloud, and Microsoft offers preferential pricing — "a heavily discounted rental rate that essentially only covers Microsoft's costs for operating the servers," according to The Information.

In simpler terms, 76.9% of Microsoft's AI revenue comes from OpenAI, and is sold at just above or at cost, making Microsoft's "real" AI revenue about $3 billion, or around 3.75% of this year's capital expenditures, or 16.25% if you count OpenAI's revenue, which costs Microsoft more money than it earns.
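
Those percentages are easy to verify. A quick sketch, using the figures above:

```python
ai_revenue = 13.0      # Microsoft's annualized AI revenue, $bn
openai_portion = 10.0  # OpenAI's Azure spend, sold at roughly cost, $bn
capex_2025 = 80.0      # Microsoft's 2025 capital expenditures, $bn

real_revenue = ai_revenue - openai_portion  # AI revenue excluding OpenAI

print(f"OpenAI's share of AI revenue: {openai_portion / ai_revenue:.1%}")  # 76.9%
print(f"'Real' AI revenue vs. capex:  {real_revenue / capex_2025:.2%}")    # 3.75%
print(f"All AI revenue vs. capex:     {ai_revenue / capex_2025:.2%}")      # 16.25%
```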

The Information reports that Microsoft made $4.7 billion in "AI revenue" in 2024, of which OpenAI accounted for $2 billion, meaning that for the $135.7 billion that Microsoft has spent in the last two years on AI infrastructure, it has made $17.7 billion, of which OpenAI accounted for $12.7 billion.

Amazon AI Revenue In 2025: $5 billion

Capital Expenditures in 2025: $105 billion

Things do not improve elsewhere. An analyst estimates that Amazon, which plans to spend $105 billion in capital expenditures this year, will make $5 billion on AI in 2025, rising, and I quote, "as much as 80%," suggesting that Amazon may have made a measly $2.77 billion in 2024 on AI in a year when it spent $83 billion in capital expenditures. [editor's note: this piece originally said "$1 billion" instead of "$2.77 billion" due to a math error, sorry!]

Last year, Amazon CEO Andy Jassy said that “AI represents for sure the biggest opportunity since cloud and probably the biggest technology shift and opportunity in business since the internet." I think he's full of shit.

Google AI Revenue: $7.7 Billion (at most)

Capital Expenditures in 2025: $75 Billion

Bank of America analyst Justin Post estimated a few weeks ago that Google's AI revenue would be in the region of $7.7 billion, though his math is, if I'm honest, a little generous:

Google’s artificial intelligence model is set to drive $4.2 billion in subscription revenue within its Google Cloud segment in 2025, according to an analysis from Bank of America last week.

That includes $3.1 billion in revenue from subscribers to Google’s AI plans with its Google One service, Bank of America’s Justin Post estimates.

Post also expects that the integration of Google’s Gemini AI features within its Workspace service will drive $1.1 billion of the $7.7 billion in revenue he projects for that segment in 2025.

Google's "One" subscription includes increased cloud storage across Google Drive, Gmail and Google Photos, and added a $20-a-month "premium" plan in February 2024 that included access to Google's various AI models. Google has claimed that the "premium AI tier accounts for millions" of the 150 million subscribers to the service, though how many millions is impossible to estimate — but that won't stop me trying! 

Given that $3.1 billion in 2025 revenue works out to $258 million a month, that would mean there were 12.9 million Google One subscribers also paying for the premium AI tier. This isn't out of the realm of possibility — after all, OpenAI has 15.5 million paying subscribers — but Post is making a generous assumption here. Nevertheless, we'll accept the numbers as they are.
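
A quick sketch of that back-of-the-envelope math, using Post's estimate:

```python
ai_plan_revenue = 3.1e9  # Post's 2025 estimate for Google One AI-plan revenue, $
premium_price = 20       # the $20-a-month premium AI tier

monthly_revenue = ai_plan_revenue / 12
subscribers = monthly_revenue / premium_price

print(f"Implied monthly revenue: ${monthly_revenue / 1e6:.0f}m")  # ~$258m
print(f"Implied premium subscribers: {subscribers / 1e6:.1f}m")   # ~12.9m
```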

And the numbers fuckin' stink! Google's $1.1 billion in Workspace revenue came from a forced price hike on those who use Google services to run their businesses, meaning that this is likely not a number that can significantly increase without punishing them further.

$7.7 billion of revenue — not profit! — on $75 billion of capital expenditures. Nasty!

Meta AI Revenue: $2bn to $3bn

Capital Expenditures In 2025: $72 Billion

Someone's gonna get mad at me for saying this, but I believe that Meta is simply burning cash on generative AI. There is no product that Meta sells that monetizes Large Language Models, but every Meta product now has them shoved in, such as your Instagram DMs oinking at you to generate artwork based on your conversation.

Nevertheless, we do have some idea of what Meta is saying thanks to the copyright infringement case Kadrey v. Meta. Unsealed judgment briefs revealed in April that Meta claims "GenAI-driven revenue will be more than $2 billion," with estimates as high as $3 billion. The same document also claims that Meta expects to make $460 billion to $1.4 trillion in total revenue through 2035, the kind of thing that should get you fired in an iron ball into the sun.

Meta makes 99% of its revenue from advertising, and the unsealed documents state that it "[generates] revenue from [its] Llama models and will continue earning revenue from each iteration," and "share a percentage of the revenue that [it generates] from users of the Llama models...hosted by those companies," with the companies in question redacted. Max Zeff of TechCrunch adds that Meta lists host partners like AWS, NVIDIA, Databricks, Groq, Dell, Microsoft Azure, Google Cloud, and Snowflake, so it's possible that Meta makes money from licensing to those companies. Sadly, the exhibits further discussing these numbers are filed under seal.

Either way, we are now at $332 billion of capital expenditures in 2025 for $28.7 billion of revenue, of which $10 billion is OpenAI's "at-cost or just above cost" revenue. Not great.

Tesla Does Not Appear To Make Money From Generative AI

Capital Expenditures In 2025: $11 billion

Despite its prominence, Tesla is one of the least-exposed of the Magnificent 7 to the AI trade, as Elon Musk has turned it into a meme stock company. That doesn't mean, of course, that Musk isn't touching AI. xAI, the company that develops racist Large Language Model "Grok" and owns what remains of Twitter, apparently burns $1 billion a month, and The Information reports that it makes a whopping $100 million in annualized revenue — so, about $8.33 million a month. There is a shareholder vote for Tesla to potentially invest in xAI, which will probably happen, allowing Musk to continue to pull leverage from his Tesla stock until the company's decaying sales and brand eventually swallow him whole.

But we're not talking about Elon Musk today.

Apple's AI Story Is Weird

Capital Expenditures In 2025: around $11 billion

Apple Intelligence radicalized millions of people against AI, mostly because it fucking stank. Apple clearly got into AI reluctantly, and now faces stories about how it "fell behind in the AI race," which mostly means that Apple forcibly introduced people to the features of generative AI, and it turns out that people don't really want to summarize documents, write emails, or make "custom emoji," and anyone who thinks they would is a fucking alien.

In any case, Apple hasn't bet the farm on AI, insomuch as it hasn't spent two hundred billion dollars on infrastructure for a product with a limited market that only loses money.

The Fragile Five — Amazon, Google, Microsoft, Meta and Tesla — Are Holding Up The US Stock Market By Funding NVIDIA's Future Growth Story

To be clear, I am not saying that any of the Magnificent 7 are going to die — just that five companies' spend on NVIDIA GPUs largely dictate how stable the US stock market will be. If any of these companies (but especially NVIDIA) sneeze, your 401k or your kid’s college fund will catch a cold. 

I realize this sounds a little simplistic, but by my calculations, NVIDIA's value underpins around 8% of the value of the US stock market. At the time of writing, it accounts for roughly 7.5% of the S&P 500 — an index of the 500 largest US publicly-traded companies. A disturbing 88% of NVIDIA's revenue comes from enterprise-scale GPUs primarily used for generative AI, and five companies' spend makes up 42% of that revenue. In the event that any one of these companies makes significant changes to its investments in NVIDIA chips, it will eventually have a direct and meaningful negative impact on the wider market.

NVIDIA's earnings are, effectively, the US stock market's confidence, and everything rides on five companies — and if we're honest, really four companies — buying GPUs for generative AI services or to train generative AI models. Worse still, these services, while losing these companies massive amounts of money, don't produce much revenue, meaning that the AI trade is not driving any real, meaningful revenue growth.

But Ed, They Said Something About Points of Growth-

Silence!

Any of these companies talking about "growth from AI" or "the jobs that AI will replace" or "how AI has changed their organization" are hand-waving to avoid telling you how much money these services are actually making them. If they were making good money and experiencing real growth as a result of these services, they wouldn't shut the fuck up about it! They'd be in your ear and up your ass hooting about how much cash they were rolling in!

And they're not, because they aren't rolling in cash, and are in fact blowing nearly a hundred billion dollars each to build massive, power-hungry, costly data centers for no real reason.

Don’t watch the mouth — watch the hands. These companies are going to say they’re seeing growth from AI, but unless they actually show you the growth and enumerate it, they are hand-waving. 

Ed! Amazon Web Services Took Years To Become Profitable! People Said Amazon Would Fail!

So, one of the most annoying and consistent responses to my work is to say that either Amazon or Amazon Web Services “ran at a loss,” and that Amazon Web Services — the invention of modern cloud computing infrastructure — “lost money and then didn’t.” 

The thing is, this statement is one of those things that people say because it sounds rational. Amazon did lose money, and Amazon Web Services was expensive, that’s obvious, right? 

The thing is, I’ve never really had anyone explain this point to me, so I am finally going to sit down and deal with this criticism, because every single person who mentions it thinks they just pulled Excalibur from the stone and can now decapitate me. They claim that because people in the past doubted Amazon — because of, or in addition to, the burn rate of Amazon Web Services as the company built out its infrastructure — I too must be wrong, because those people were wrong about that.

This isn't Camelot, you rube! You are not King Arthur!

I will address both the argument itself and the "they" part of it too — because if the argument is that the people that got AWS wrong should not be trusted, then we should no longer trust them, the people actively propagandizing our supposed generative AI future.

Right?

So, I'm honestly not sure where this argument came from, because there is, to my knowledge, no story about Amazon Web Services where somebody suggested its burn rate would kill Amazon.

But let’s go back in time to the May 31 1999 piece that some might be thinking of, called "Amazon.bomb," and how writer Jacqueline Doherty was mocked soundly for "being wrong" about Amazon, which has now become quite profitable.

I also want to be clear that Amazon Web Services didn't launch until 2006, and Amazon itself would become reliably profitable in 2003. Technically Amazon had opened up Amazon.com's web services for developers to incorporate its content into their applications in 2002, but what we consider AWS today — cloud storage and compute — launched in 2006.

But okay, what did she actually say?

Unfortunately for Bezos, Amazon is now entering a stage in which investors will be less willing to rely on his charisma and more demanding of answers to tough questions like, when will this company actually turn a profit? And how will Amazon triumph over a slew of new competitors who have deep pockets and new technologies?

We tried to ask Bezos, but he declined to make himself or any other executives of the company available. He can ignore Barron's, but he can't ignore the questions.

Amazon last year posted a loss of $125 million [$242.6m in today's money] on revenues of $610 million [$1.183 billion in today's money]. And in this year's first quarter it got even worse, as the company posted a loss of $61.7 million [$119.75 million in today's money] on revenues of $293.6 million [$569.82 million in today's money].

Her argument, for the most part, is that Amazon was burning cash, had a ton of competition from other people doing similar things, and that analysts backed her up:

"The first mover does not always win. The importance of being first is a mantra in the Internet world, but it's wrong. The ones that are the most efficient will be successful," says one retail analyst. "In retailing, anyone can build a great-looking store. The hard part is building a great-looking store that makes money."

Fair arguments for the time, though perhaps a little narrow-minded. The assumption wasn't that what Amazon was building was a bad idea, but that Amazon wouldn't be the ones to build it, with one saying:

"Once Wal-Mart decides to go after Amazon, there's no contest," declares Kurt Barnard, president of Barnard's Retail Trend Report. "Wal-Mart has resources Amazon can't even dream about."

In simpler terms: Amazon's business model wasn't in question. People were buying shit online. In fact, this was just before the dot com bubble burst, and when optimism about the web was at a high point. Yet the comparison stops there — people obviously liked buying shit online, it was the business models of many of these companies — like Webvan — that sucked.

But Let's Talk About Amazon Web Services

Amazon Web Services was an outgrowth of Amazon's own infrastructure, which had to expand rapidly to deal with the influx of web traffic to Amazon.com, which had become one of the world's most popular websites and was becoming increasingly complex as it sold things other than books. Other companies had their own infrastructure, but if a smaller company wanted to scale, it’d basically need to build its own thing.

It's actually pretty cool what Amazon did! Remember, this was the early 2000s, before Facebook, Twitter, and a lot of the modern internet we know that runs on services like Amazon Web Services, Microsoft Azure and Google Cloud. It invented the modern concept of compute!

But we're here to talk about Amazon Web Services being dangerous for Amazon and people hating on it.

A November 2006 story from Bloomberg, "Jeff Bezos' Risky Bet," talked about his plan to "run your business with the technology behind his web site," saying that "Wall Street [wanted] him to mind the store." Bezos was referred to as a "one-time internet poster boy" who became "a post-dot-com piñata." Nevertheless, this article has what my haters crave:

But if techies are wowed by Bezos' grand plan, it's not likely to win many converts on Wall Street. To many observers, it conjures up the ghost of Amazon past. During the dot-com boom, Bezos spent hundreds of millions of dollars to build distribution centers and computer systems in the promise that they eventually would pay off with outsize returns. That helped set the stage for the world's biggest Web retail operation, with expected sales of $10.5 billion this year.

...

All that has investors restless and many analysts throwing up their hands wondering if Bezos is merely flailing around for an alternative to his retail operation. Eleven of 27 analysts who follow the company have underperform or sell ratings on the stock--a stunning vote of no confidence. That number of sell recommendations is matched among large companies only by Qwest Communications International Inc. (Q ), according to investment consultant StarMine Corp. It's more than even the eight sell opinions on struggling Ford Motor Co. (F )

Pretty bad, right? My goose is cooked? All those analysts seem pretty mad!

Except it's not, my goose is raw! Yours, however, has been in the oven for over a year! 

Emphasis mine:

By all accounts, Amazon's new businesses bring in a minuscule amount of revenue. Although its direct cost of providing them appears relatively low because the hardware and software are in place, Stifel Nicolaus & Co. (SF ) analyst Scott W. Devitt notes: "There's not going to be any economic return from any of these projects for the foreseeable future." Bezos himself admits as much. But with several years of heavy spending already, he's making this a priority for the long haul. "We think it's going to be a very meaningful business for us one day," he says. "What we've historically seen is that the seeds we plant can take anywhere from three, five, seven years."

That's right — the ongoing costs aren't the problem.

Hey wait a second, that's a name! I can look up a name! Scott W. Devitt now works at Wedbush as its managing director of equity research, and has said AI companies would enter a new stage in 2025...god, just read this:

The second stage is "the application phase of the cycle, which should benefit software companies as well as the cloud providers. And then, phase three of this will ultimately be the consumer-facing companies figuring out how to use the technology in ways that actually can drive increased interactions with consumers."

The analyst says the market will enter phase two in 2025, with software companies and cloud provider stocks expected to see gains. He adds that cybersecurity companies could also benefit as the technology evolves.

Devitt specifically calls out Palantir, Snowflake, and Salesforce as those who would "gain." In none of these cases am I able to see the actual revenue from AI, but Salesforce itself said that it will see no revenue growth from AI this year. Palantir also, as discovered by a recent Autonomy Institute study, added the following to its public disclosures:

There are significant risks involved in deploying AI and there can be no assurance that using AI in our platforms and products will enhance or be beneficial to our business, including our profitability.

What I'm saying is that analysts can be wrong! And they can be wrong at scale! There is no analyst consensus that agrees with me. In fact, most analysts appear to be bullish on AI, despite the significantly worse costs and total lack of growth!

Yet even in this Hater's Parade, the unnamed journalist makes a case for Amazon Web Services:

Sooner than that, those initiatives may provide a boost for Amazon's retail side. For one, they potentially make a profit center out of idle computing capacity needed for that retail operation. Like most computer networks, Amazon's uses as little as 10% of its capacity at any one time just to leave room for occasional spikes. It's the same story in the company's distribution centers. Keeping them humming at higher capacity means they operate more efficiently, besides giving customers a much broader selection of products. And the more stuff Amazon ships, both its own inventory or others', the better deals it can cut with shippers.

But Amazon Web Services Cost Money Ed, Now You Shall Meet Your End!

Nice try, chuckles!

In 2015, the year that Amazon Web Services became profitable, Morgan Stanley analyst Katy Huberty believed that it was running at a "material loss," suggesting that $5.5 billion of Amazon's "technology and content expenses" was actually AWS expenses, with a "negative contribution of $1.3 billion."

Here is Katy Huberty, the analyst in question, declaring six months ago that "2025 [will] be the year of Agentic AI, robust enterprise adoption, and broadening AI winners."

So, yes, analysts really got AWS wrong. But putting that aside, there might actually be a comparison here! Amazon Web Services absolutely created a capital expenditures drain on Amazon. From Forbes’s Chuck Jones:

In 2014 Amazon had $4.9 billion in capital expenditures, up 42% from 2013’s $3.4 billion. The company has a wide range of items that it buys to support and grow its business ranging from warehouses, robots and computer systems for its core retail business and AWS. While I don’t expect Amazon to detail how much goes to AWS I suspect it is a decent percentage, which means AWS needs to generate appropriate returns on the capital deployed.

In today's money, this means that Amazon spent $6.76 billion in capital expenditures on AWS in 2014. Assuming it was this much every year — it wasn't, but I want to make an example of every person claiming that this is a gotcha — it took $67.6 billion and ten years (though one could argue it was nine) of pure capital expenditures to turn Amazon Web Services into a business that now makes billions of dollars a quarter in profit.

That's $15.4 billion less than Amazon's capital expenditures for 2024, and less than one-fifteenth its projected capex spend for 2025. And to be clear, the actual capital expenditure numbers are likely much lower, but I want to make it clear that even when factoring in inflation, Amazon Web Services was A) a bargain and B) a fraction of the cost of what Amazon has spent in 2024 or 2025.
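
A quick sketch of that comparison, using the Forbes figure and a rough 1.38x inflation adjustment from 2014 to 2025 dollars:

```python
capex_2014 = 4.9          # Amazon's total 2014 capital expenditures, $bn (Forbes)
inflation_factor = 1.38   # rough CPI adjustment, 2014 dollars to 2025 dollars
years = 10                # 2006 launch through 2015 profitability
amazon_capex_2024 = 83.0  # Amazon's 2024 capital expenditures, $bn

annual_adjusted = capex_2014 * inflation_factor  # ~$6.76bn per year
total = annual_adjusted * years                  # ~$67.6bn over the decade

print(f"AWS-era capex in today's money: ${annual_adjusted:.2f}bn/year, ${total:.1f}bn total")
print(f"Shortfall vs. Amazon's 2024 capex alone: ${amazon_capex_2024 - total:.1f}bn")  # ~$15.4bn
```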

A fun aside: On March 30 2015, Kevin Roose wrote a piece for New York Magazine about the cloud compute wars, in which he claimed, and I quote, that "there's no reason to suspect that Amazon would ever need to raise prices on AWS, or turn the fabled 'profit switch' that pundits have been speculating about for years." Less than a month later, Amazon revealed Amazon Web Services was profitable. They don't call him "the most right man in tech journalism" for nothing!

Generative AI and Large Language Models Do Not Resemble Amazon Web Services or The Greater Cloud Compute Boom, As Generative AI Is Not Infrastructure

Some people compare Large Language Models and their associated services to Amazon Web Services, or services like Microsoft Azure or Google Cloud, and they are wrong to do so.

Amazon Web Services, when it launched, consisted of things like (and forgive how much I'm diluting this) Amazon's Elastic Compute Cloud (EC2), where you rent space on Amazon's servers to run applications in the cloud, and Amazon Simple Storage Service (S3), which is enterprise-level storage for applications. In simpler terms, if you were providing a cloud-based service, you used Amazon both to store the stuff that the service needed and to do the actual cloud-based processing (compute, as in the way your computer loads and runs applications, but delivered to thousands or millions of people). 
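
For a sense of what renting storage and compute actually looks like in practice, here's a minimal sketch using boto3, AWS's Python SDK (the bucket name and machine image ID are hypothetical placeholders):

```python
import boto3

# S3: store a file on Amazon's servers instead of running your own storage.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-app-assets", Key="hello.txt", Body=b"hello, cloud")

# EC2: rent a virtual server on which to run your application.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",          # a small, cheap instance size
    MinCount=1,
    MaxCount=1,
)
```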

This is a huge industry. Amazon Web Services alone brought in revenues of over $100 billion in 2024, and while Microsoft and Google don't break out their cloud revenues, they're similarly large parts of their revenue, and Microsoft has used Azure in the past to patch over shoddy growth.

These services are also selling infrastructure. You aren't just paying for the compute, but the ability to access storage and deliver services with low latency — so users have a snappy experience — wherever they are in the world. The subtle magic of the internet is that it works at all, and a large part of that is the cloud compute infrastructure and oligopoly of the main providers having such vast data centers. This is much cheaper than doing it yourself, until a certain point. Dropbox moved away from Amazon Web Services as it scaled. It also allows someone else to take care of maintenance of the hardware and make sure it actually gets to your customers. You also don't have to worry about spikes in usage, because these things are usage-based, and you can always add more compute to meet demand.

There is, of course, nuance — security-specific features, content-specific delivery services, database services — behind these clouds. You are buying into the infrastructure of the infrastructure provider, and the reason these products are so profitable is, in part, because you are handing off the problems and responsibility to somebody else. And based on that idea, there are multiple product categories you can build on top of it, because ultimately cloud services are about Amazon, Microsoft and Google running your infrastructure for you.

Large Language Models and their associated services are completely different, despite these companies attempting to prove otherwise, and it starts with a very simple problem: why did any of these companies build these giant data centers and fill them full of GPUs?

Amazon Web Services was created out of necessity — Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically anywhere, handling the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and, well, making sure things ran in a stable way. It didn't need to come up with a reason for people to run web applications — they were already doing so themselves, but in ways that cost a lot, were inflexible, and required specialist skills. AWS took something that people already did, and that there was proven demand for, and made it better. Eventually, Google and Microsoft would join the fray. 

And that appears to be the only similarity with generative AI — that due to the ridiculous costs of both the data centers and GPUs necessary to provide these services, it's largely impossible for others to even enter the market.

Yet after that, generative AI feels more like a feature of cloud infrastructure rather than infrastructure itself. AWS and similar megaclouds are versatile, flexible and multifaceted. Generative AI does what generative AI does, and that's about it.

You can run lots of different things on AWS. What are the different things you can run using Large Language Models? What are the different use cases, and, indeed, user requirements that make this the supposed "next big thing"?

Perhaps the argument is that generative AI is the next AWS or similar cloud service because you can build the next great companies on the infrastructure of others — the models of, say, OpenAI and Anthropic, and the servers of Microsoft. 

So, okay, let's humour this point too. You can build the next great AI startup, and you have to build it on one of the megaclouds because they're the only ones that can afford to build the infrastructure.

One small problem.

Companies Built On Top Of Large Language Models Don't Make Much Money (In Fact, They're Likely All Deeply Unprofitable)

Let's start by establishing a few facts:

  • Outside of one exception — Midjourney, which claimed it was profitable in 2022, though that may no longer be the case (I’ve reached out to ask) — every single Large Language Model company is unprofitable, often wildly so. 
  • Outside of OpenAI, Anthropic and Anysphere (which makes AI coding app Cursor), there are no Large Language Model companies — either building models or building services on top of others' models — that make more than $500 million in annualized revenue (meaning month x 12). And according to The Information's Generative AI database, outside of Midjourney ($200m ARR), Ironclad ($150m ARR) and Perplexity (which just announced it’s at $150m ARR), there are only twelve generative AI-powered companies making $100 million annualized (or $8.3 million a month) in revenue. Though the database doesn't have Replit (which recently announced it hit $100 million in annualized revenue), I've included it in my calculations for the sake of fairness.
    • Of these companies, two have been acquired — Moveworks (acquired by ServiceNow in March 2025) and Windsurf (acquired by Cognition in July 2025).
    • For the sake of simplicity, I've left out companies like Surge, Scale, Turing and Together, all of whom run consultancies selling services for training models.
  • Otherwise, there are seven companies that make $50 million or more ARR ($4.16 million a month).

None of this is to say that one hundred million dollars isn't a lot of money to you and me, but in the world of Software-as-a-Service or enterprise software, this is chump change. HubSpot had revenues of $2.63 billion in its 2024 financial year.

We're three years in, and generative AI's highest-grossing companies — outside of OpenAI ($10 billion annualized as of early June) and Anthropic ($4 billion annualized as of July), both of which lose billions a year after revenue — have three major problems:

  • Businesses powered by generative AI do not seem to be popular.
  • Those businesses that are remotely popular are deeply unprofitable...
  • ...and even the less-popular generative AI-powered businesses are deeply unprofitable.

But let's start with Anysphere and Cursor, its AI-powered coding app, and its $500 million of annualized revenue. Pretty great, right? It hit $200 million in annualized revenue in March, then hit $500 million annualized revenue in June after raising $900 million. That's amazing!

Sadly, it's a mirage. Cursor's growth was the result of an unsustainable business model that it's now had to replace with opaque terms of service, dramatically restricted access to models, and rate limits that effectively stop its users from using the product at the price point they were used to.

It’s also horribly unprofitable, and a sign of things to come for generative AI.

Cursor's $500 Million "Annualized Revenue" Was Earned With A Product It No Longer Offers, And Anthropic/OpenAI Just Raised Their Prices, Increasing Cursor’s Costs Dramatically

A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June in my premium newsletter, and discovered that they timed precisely with Anthropic (and OpenAI, to a lesser extent) adding "service tiers" and "priority processing," which is tech language for "pay us extra if you have a lot of customers, or face rate limits or service delays." These price shifts have also led to companies like Replit having to make significant changes to their pricing models that disfavor users.

I will now plagiarise myself:

  • In or around May 5, 2025 — Cursor closes a $500 million funding round.
  • May 22 2025 — Anthropic launches Claude 4 Opus and Sonnet, and on May 30, 2025 adds Service Tiers, including priority pricing specifically focused on cache-heavy products like Cursor.
  • May 30, 2025 — Reuters reports that Anthropic's "annualized revenue hit $3 billion," with a "key driver" being "code generation." This translates to around $250 million in monthly revenue.
  • June 9 2025 — CNBC reports OpenAI has hit $10 billion in annualized revenue. They say "annual recurring revenue," but they mean annualized.
  • On or around June 16 2025 — Cursor changes its pricing, adding a new $200-a-month "Ultra" tier that, in its own words, is "made possible by multi-year partnerships with OpenAI, Anthropic, Google and xAI," which translates to "multi-year commitments to spend, which can be amortized as monthly amounts."
  • A day later, Cursor dramatically changed its offering to a "usage-based" one where users got "at least" the value of their subscription — $20-a-month provided more than $20 of API calls — in compute, along with arbitrary rate limits and "unlimited" access to Cursor's own slow model that its users hate.
  • June 18 — Replit announces its "effort-based pricing" increases.
  • July 1 2025 — The Information reports Anthropic has hit a "$4 billion annual pace," meaning that it is making $333 million a month, an increase of $83 million a month over May's $250 million, or roughly 33%, in the space of a month (see the quick check below).
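
Since this run-rate arithmetic is the entire basis of the "milestone" coverage, here's a quick sanity check in a few lines of Python, using only the reported figures:

```python
# Run rates from the Reuters and The Information reports cited above.
may_run_rate = 3_000_000_000   # "$3 billion annualized," May 30
july_run_rate = 4_000_000_000  # "$4 billion annual pace," July 1

may_monthly = may_run_rate / 12    # $250M a month
july_monthly = july_run_rate / 12  # ~$333M a month

growth = (july_monthly - may_monthly) / may_monthly
print(f"+${(july_monthly - may_monthly) / 1e6:.0f}M a month, +{growth:.0%}")
# → +$83M a month, +33%
```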

In simpler terms, Cursor raised $900 million and very likely had to hand large amounts of that money over to OpenAI and Anthropic to keep doing business with them, and then immediately changed its terms of service to make them worse. As I said at the time:

While some may believe that both OpenAI and Anthropic hitting "annualized revenue" milestones is good news, you have to consider how these milestones were hit. Based on my reporting, I believe that both companies are effectively doing steroids, forcing massive infrastructural costs onto big customers as a means of covering the increasing costs of their own models.

There is simply no other way to read this situation. By making these changes, Anthropic is intentionally making it harder for its largest customer to do business, creating extra revenue by making Cursor's product worse by proxy. What's sickening about this particular situation is that it doesn't really matter if Cursor's customers are happy or sad — these deals, like OpenAI's enterprise Priority Access API, require a long-term commitment involving a minimum throughput of tokens per second as part of their tiered access programs.

If Cursor's customers drop off, both OpenAI and Anthropic still get their cut, and if Cursor's customers somehow outspend even those commitments, they'll either still get rate limited or Anysphere will incur more costs.

Cursor is the largest and most-successful generative AI company, and these aggressive and desperate changes to its product suggest A) that its product is deeply unprofitable and B) that its current growth was a result of offering a product that was not the one it would sell in the long term. Cursor misled its customers, and its current revenue is, as a result, highly unlikely to stay at this level.

Worse still, the two Anthropic engineers who left to join Cursor two weeks ago just returned to Anthropic. This heavily suggests that whatever they saw at Cursor wasn’t compelling enough to make them stay.

As I also said:

While Cursor may have raised $900 million, it was really OpenAI, Anthropic, xAI and Google that got that money.

At this point, there are no profitable enterprise AI startups, and it is highly unlikely that the new pricing models by both Cursor and Replit are going to help.

These are now the new terms of doing business with these companies — a shakedown, where you pay up for priority access or "tiers" or face indeterminate delays or rate limits. Any startup scaling into an "enterprise" integration of generative AI (which means, in this case, anything that requires a certain level of service uptime) has to commit to both a minimum number of months and a throughput of tokens, which means that the price of starting an AI startup that gets any kind of real market traction just dramatically increased.

While one could say "oh perhaps you don't need priority access," the "need" here is something that will be entirely judged by Anthropic and OpenAI in an utterly opaque manner. They can — and will! — throttle companies that are too demanding on their system, as proven by the fact that they've done this to Cursor multiple times.

Why Does Cursor Matter? Simple: Generative AI Has No Business Model If It Can't Do Software As A Service

I realize it's likely a little boring hearing about software as a service, but this is the only place where generative AI can really make money. Companies buying hundreds or thousands of seats are how industries that rely upon compute grow, and without that growth, they're going nowhere.

To give you some context, Netflix makes about $39 billion a year in subscription revenue, and Spotify about $18 billion. These are the single-most-popular consumer software subscriptions in the world — and OpenAI's 15.5 million subscribers suggest that it can't rely on them for the kind of growth that would actually make the company worth $300 billion (or more).
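
To make the gap concrete, here's a rough back-of-envelope sketch. It assumes (my assumption, purely for illustration) that every subscriber pays the $20-a-month Plus rate; some pay $200 a month for Pro, but that doesn't change the order of magnitude:

```python
# Rough annual gross from OpenAI's reported subscriber count, assuming
# everyone is on the $20/month Plus tier (an illustrative assumption).
subscribers = 15_500_000
monthly_price = 20  # USD

annual_subscription_revenue = subscribers * monthly_price * 12
print(f"~${annual_subscription_revenue / 1e9:.1f}B a year")  # ~$3.7B

netflix_annual = 39e9
print(f"Netflix is ~{netflix_annual / annual_subscription_revenue:.0f}x bigger")
```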

Cursor is, as it stands, the one example of a company thriving using generative AI, and it appears its rapid growth was the result of selling a product at a massive loss. As it stands today, Cursor's product is significantly worse, and its subreddit is full of people furious at the company for the changes.

In simpler terms, Cursor was the company that people mentioned to prove that startups could make money by building products on top of OpenAI and Anthropic's models, yet the truth is that the only way to do so and grow is to burn tons of money. While the tempting argument is to say that Cursor’s "customers are addicted," this is clearly not the case, nor is it a real business model.

This story also showed that Anthropic and OpenAI are the biggest threats to their customers, and will actively rent-seek and punish their success stories, looking to loot as much as they can from them.

To put it bluntly: Cursor's growth story was a lie. It reached $500 million in annualized revenue selling a product it can no longer afford to sell, suggesting material weakness in its own business and any and all coding startups.

It is also remarkable — and a shocking failure of journalism — that this isn’t in every single article about Anysphere.

No, Really, Where Are The Consumer AI Startups?

I'm serious! Perplexity? Perplexity only has $150 million in annualized revenue! It spent 167% of its 2024 revenue ($57m against $34m in revenue) on compute services from Anthropic, OpenAI, and Amazon! It lost $68 million!

And worse still, it has no path to profitability, and it's not even anything new! It's a search engine! Professional gasbag Alex Heath just did a flummoxing interview with Perplexity CEO Aravind Srinivas, who, when asked how it'd become profitable, appeared to experience a stroke:

Maybe let me give you another example. You want to put an ad on Meta, Instagram, and you want to look at ads done by similar brands, pull that, study that, or look at the AdWords pricing of a hundred different keywords and figure out how to price your thing competitively. These are tasks that could definitely save you hours and hours and maybe even give you an arbitrage over what you could do yourself, because AI is able to do a lot more. And at scale, if it helps you to make a few million bucks, does it not make sense to spend $2,000 for that prompt? It does, right? So I think we’re going to be able to monetize in many more interesting ways than chatbots for the browser.

Aravind, do you smell toast?

And don’t talk to me about “AI browsers,” I’m sorry, it’s not a business model. How are people going to make revenue on this, hm? What do these products actually do? Oh they can poorly automate accepting LinkedIn invites? It’s like God himself has personally blessed my computer. Big deal! 

In any case, it doesn't seem like you can really build a consumer AI startup that makes anything approaching a real company. Other than ChatGPT, I guess?

The Generative AI Software As A Service Market Is Small, With Little Room For Growth And No Profits To Be Seen

Arguably the biggest sign that things are troubling in the generative AI space is that we use "annualized revenue" at all, which, as I've mentioned repeatedly, means multiplying a month by 12 and saying "that's our annualized!"

The problem with this number is that, well, people cancel things. While your June might be great, if 10% of your subscribers churn in a bad month (due to a change in your terms of service), that's a chunk of your annualized revenue gone.
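
Here's a minimal sketch of how that plays out, assuming (purely for illustration) a company that annualizes one good month and then loses 10% of its subscription revenue to churn:

```python
def annualized(monthly_revenue: float) -> float:
    """The 'ARR' trick: take one month and multiply it by 12."""
    return monthly_revenue * 12

june = 8_330_000  # one good month at a "$100 million ARR" company
print(f"Headline ARR: ${annualized(june) / 1e6:.0f}M")  # ~$100M

# 10% of subscription revenue churns after a bad terms-of-service change,
# and things then hold flat for the remaining eleven months.
july_onward = june * 0.90
actual_year = june + july_onward * 11
print(f"Actual calendar-year revenue: ${actual_year / 1e6:.0f}M")  # ~$91M
```

And nobody ever re-announces a shrinking run rate.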

But the worst sign is that nobody is saying the monthly figures, mostly because the monthly figures kinda suck! $100 million of annualized revenue is $8.33 million a month. To give you some scale, Amazon Web Services hit $189 million ($15.75 million a month) in revenue in 2008, two years after founding, and while it took until 2015 to hit profitability, it actually hit break-even in 2009, though it invested cash in growth for a few years after.

Right now, not a single generative AI software company is profitable, and none of them are showing signs of the kind of hypergrowth that previous "big" software companies had. While Cursor is technically "the fastest growing SaaS of all time," it did so using what amounts to fake pricing. You can dress this up as "growth stage" or "enshittification" (it isn't, by the way — generally price changes make things profitable, which this did not), but Cursor lied. It lied to the public about what its product would do long-term. It isn't even obvious whether its current pricing is sustainable.

Outside of Cursor, what other software startups are there?

Glean?

Everyone loves to talk about enterprise search company Glean — a company that uses AI to search and generate answers from your company's files and documents.

In December 2024, Glean raised $260 million, proudly stating that it had over $550 million of cash in hand with "best-in-class ARR growth." A few months later in February 2025, Glean announced it’d "achieved $100 million in annual recurring revenue in fourth quarter FY25, cementing its position as one of the fastest-growing SaaS startups and reflecting a surging demand for AI-powered workplace intelligence." In this case, ARR could literally mean anything, as it appears to be based on quarters — meaning it could be an average of the last three months of the year, I guess?

Anywho, in June 2025, Glean announced it had raised another funding round, this time raising $150 million, and, troublingly, added that since its last round, it had "...surpassed $100M in ARR."

Five months into the fucking year and your monthly revenue is the same? That isn't good! That isn't good at all!

Also, what happened to that $550 million in cash? Why did Glean need more? Hey wait a second, Glean announced its raise on June 18 2025, two days after Cursor's pricing increase and the same day that Replit announced a similar hike!

It's almost as if its costs dramatically increased due to the introduction of Anthropic's Service Tiers and OpenAI's Priority Processing.

I'm guessing, but isn't it kind of weird that all of these companies raised money about the same time?

Hey, that reminds me.

There Are No Unique Generative AI Companies — And Building A Moat Based On Technology Is Near-Impossible

If you look at what generative AI companies do (note that the following is not a quality barometer), they're probably doing one or more of the following things:

  • A chatbot, either one you ask questions or "talk to"
    • This includes customer service bots
  • Searching, summarizing or comparing documents, with increasing amounts of complexity of documents or quantity of documents to be compared
    • This includes being able to "ask questions" of documents
  • Web Search
  • "Deep Research" — meaning long-form web search that generates a document
  • Generating text, images, voice, or in some rare cases video
  • Using generative AI to write, edit or "maintain" code
  • Transcription
  • Translation
  • Photo and video editing

Every single generative AI company that isn't OpenAI or Anthropic does one or a few of these things, and I mean every one of them, and it's because every single generative AI company uses Large Language Models, which have inherent limits on what they can do. LLMs can generate, they can search, they can edit (kind of!), they can transcribe (sometimes accurately!) and they can translate (often less accurately).

As a result, it's very, very difficult for a company to build something unique. Though Cursor is successful, it is ultimately a series of system prompts, a custom model that its users hate, a user interface, and connections to models by OpenAI and Anthropic, both of whom have competing products and make money from Cursor and its competitors. Within weeks of Cursor's changes to its services, Amazon and ByteDance released competitors that, for the most part, do the same thing. Sure, there are a few differences in how they're designed, but design is not a moat, especially in a high-cost, negative-profit business where your only way of growing is to offer a product you can't afford to sustain.

The only other moat you can build... is the services you provide, which, when your services are dependent on a Large Language Model, are dependent on the model developer — who, in the case of OpenAI and Anthropic, could simply clone your startup, because the only valuable intellectual property is theirs.

You may say "well, nobody else has any ideas either," to which I'll say that I fully agree. My Rot-Com Bubble thesis suggests we're out of hypergrowth ideas, and yeah, I think we're out of ideas related to Large Language Models too.

At this point, I think it's fair to ask — are there any good companies you can build on top of Large Language Models? I don't mean adding LLM-related features to an existing product, I mean an AI company that actually sells a product that people buy at scale that isn't called ChatGPT.

Established Large Language Models Are A Crutch

In previous tech booms, companies would make their own “models” — their own infrastructure, or the things that make them distinct from other companies — but the generative AI boom effectively changes that by making everybody build stuff on top of somebody else’s models, because training your own models is both extremely expensive and requires vast amounts of infrastructure.

As a result, much of this “boom” is about a few companies — really two, if we’re honest — getting other companies to try and build functional software for them. 

OpenAI And Anthropic Are Their Customers' Weak Point

I wanted to add one note: ultimately, OpenAI and Anthropic are bad for their customers. Their models are popular (by which I mean their customers' customers will expect access to them), meaning that OpenAI and Anthropic can (as they did with Cursor) arbitrarily change pricing, service availability or functionality based on how they feel that day. Don't believe me? Anthropic cut off access to AI coding platform Windsurf because it looked like it might get acquired by OpenAI.

Even by big tech standards this fucking sucks. And these companies will do it again!

The Limited Use Cases Are Because Large Language Models Are All Really Similar

Because all Large Language Models require more data than anyone has ever needed before, they all basically have to use the same data, either scraped from the internet or bought from one of a few companies (Scale, Surge, Turing, Together, etc.). While they can get customized data or do customized training and reinforcement learning, these models are all transformer-based and all function similarly, and the only way to differentiate them is by training them — which doesn't make them much different, just better at the things they already do.

Generative AI Is Simply Too Expensive To Build A Sustainable Business On Top Of It

I already mentioned OpenAI and Anthropic's costs, as well as Perplexity's $57 million bill to Anthropic, Amazon and OpenAI against a measly $34 million in revenue. These companies cost too much to run, and their functionality doesn't generate enough money to make them make sense.

The problem isn't just the pricing, but how unpredictable it is. As Matt Ashare wrote for CIO Dive last year, generative AI makes a lot of companies’ lives difficult through the massive spikes in costs that come from power users, with few ways to mitigate their costs. One of the ways that a company manages their cloud bills is by having some degree of predictability — which is difficult to do with the constant slew of new models and demands for new products to go with them, especially when said models can (and do) cost more with subsequent iterations.

As a result, it's hard for AI companies to actually budget.

Companies Are Using The Term "Agent" To Deceive Customers and Investors

"But Ed!" you cry, "What about AGENTS?"

Let me tell you about agents.

The term "agent" is one of the most egregious acts of fraud I've seen in my entire career writing about this crap, and that includes the metaverse.

When you hear the word "agent," you are meant to think of an autonomous AI that can go and do stuff without oversight, replacing somebody's job in the process, and companies have been pushing the boundaries of good taste and financial crimes in pursuit of them.

Most egregious of them is Salesforce's "Agentforce," which lets you "deploy AI agents at scale" and "brings digital labor to every employee, department and business process." This is a blatant fucking lie. Agentforce is a god damn chatbot platform, it's for launching chatbots, they can sometimes plug into APIs that allow them to access other information, but they are neither autonomous nor "agents" by any reasonable definition.

Not only does Salesforce not actually sell "agents," its own research shows that agents only achieve around a 58% success rate on single-step tasks, meaning, to quote The Register, "tasks that can be completed in a single step without needing follow-up actions or more information." On multi-step tasks — so, you know, most tasks — they succeed a depressing 35% of the time.
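
That multi-step figure is roughly what you'd get if each step were an independent coin-flip at the single-step rate (0.58 squared is about 0.34), and the compounding only gets uglier as tasks get longer. A small sketch of that maths, my simplification rather than Salesforce's methodology:

```python
# If each step succeeds independently at the reported 58% rate,
# success on an n-step task decays exponentially.
single_step = 0.58

for steps in (1, 2, 3, 5, 10):
    print(f"{steps}-step task: {single_step ** steps:.0%} chance of success")
# 1-step: 58%, 2-step: 34%, 3-step: 20%, 5-step: 7%, 10-step: 0%
```

Real work is almost never one step.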

Last week, OpenAI announced its own "ChatGPT agent" that can allegedly go "do tasks" on a "virtual computer." In its own demo, the agent took 21 or so minutes to spit out a plan for a wedding with destinations, a vague calendar and some suit options, and then showed a pre-prepared demo of the "agent" preparing an itinerary for visiting every major league ballpark. In that example's case, the "agent" took 23 minutes, and produced arguably the most confusing-looking map I've seen in my life.

It also missed out every single major league ballpark on the East Coast — including Yankee Stadium and Fenway Park — and added a random stadium in the middle of the Gulf of Mexico. What team is that, eh Sam? The Deepwater Horizon Devils? Is there a baseball team in North Dakota? 

I should also be clear this was the pre-prepared example. As with every Large Language Model-based product — and yes, that's what this is, even if OpenAI won't talk about what model — results are extremely variable.

Agents are difficult, because tasks are difficult, even if they can be completed by a human being that a CEO thinks is stupid. What OpenAI appears to be doing is using a virtual machine to run scripts that its models trigger. Regardless of how well it works (it works very very poorly and inconsistently), it's also likely very expensive.

In any case, every single company you see using the word agent is trying to mislead you. Glean's "AI agents" are chatbots with if-this-then-that functions that trigger events using APIs (the connectors between different software services), not taking actual actions, because that is not what LLMs can do.

ServiceNow's AI agents that allegedly "act autonomously and proactively on your behalf" are, despite claiming they "go beyond ‘better chatbots,’" still ultimately chatbots that use APIs to trigger different events using if-this-then-that functions. Sometimes these chatbots can also answer questions that people might have, or trigger an event somewhere. Oh, right, that's the same thing.
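
If you want to see how thin the trick is, here's a minimal sketch of the architecture being sold as "digital labor." Every name in it (call_llm, the tool table, the ticket and order functions) is a hypothetical stand-in of mine, not any vendor's actual API:

```python
import json

# The "agents": a lookup table of if-this-then-that API triggers.
TOOLS = {
    "create_ticket": lambda args: f"ticket opened: {args['title']}",
    "check_order": lambda args: f"order {args['order_id']} status: shipped",
}

def call_llm(prompt: str) -> str:
    """Stand-in for the model call a real system would make to OpenAI/Anthropic."""
    return '{"tool": "check_order", "args": {"order_id": "123"}}'

def run_agent(user_message: str) -> str:
    # Prompt the model to either name a tool as JSON or answer directly.
    reply = call_llm(f"Pick a tool from {list(TOOLS)} or answer directly: {user_message}")
    try:
        action = json.loads(reply)
        return TOOLS[action["tool"]](action["args"])  # the "autonomous" bit
    except (json.JSONDecodeError, KeyError):
        return reply  # no tool matched, so it's just a chatbot answer

print(run_agent("Where's my order?"))  # → order 123 status: shipped
```

A chatbot that calls an API when the model emits the right JSON. That's the product.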

The closest we have to an "agent" of any kind is a coding agent, which can make a list of things that you might do on a software project and then go and generate the code and push stuff to GitHub when you ask it to, and it can do so "autonomously," in the sense that you can let it run whatever task seems right. When I say "ask it to" or "go and," I mean that these agents are not remotely intelligent, and when let run rampant they fuck up everything and create a bunch of extra work. Also, a study found that AI coding tools made engineers 19% slower.

Nevertheless, none of these products are autonomous agents, and anybody using the term agent likely means "chatbot."

And it's working because the media keeps repeating everything these companies say.

But Really Though, Everybody Is Losing Money On Generative AI, And Nobody's Making A Profit

I realize we've taken kind of a scenic route here, but I needed to lay the groundwork, because I am well and truly alarmed.

According to a UBS report from the 26th of June, the public companies running AI services are making absolutely pathetic amounts of money from AI.

ServiceNow's use of "$250 million ACV" — so, annual contract value — may be one of the more honest explanations of revenue I've seen, putting it in the upper echelons of AI revenue, unless, of course, you think for two seconds about whether these are AI-specific contracts. Or, perhaps, are they contracts that merely include AI? Eh, who cares. These are also year-long agreements that could churn, and according to Gartner, over 40% of "agentic AI" projects will be canceled by the end of 2027.

And really, ya gotta laugh at Adobe and Salesforce, both of whom have talked so god damn much about generative AI and yet have only made around $100 million in annualized revenue from it. Pathetic! These aren't futuristic numbers! They're barely product categories! And none of this seems to include costs.

Oh well.

OpenAI and Anthropic Are The Generative AI Industry, Are Deeply Unstable and Unsustainable, and Are Critical To The AI Trade Continuing

I haven't really spent time on my favourite subject — OpenAI being a systemic risk to the tech industry.

To recap:

  • OpenAI and Anthropic both lose billions of dollars a year after revenue, and their stories do not mirror any other startup in history, not Uber, not Amazon Web Services, nothing. I address the Uber point in this article.
  • SoftBank is putting itself in dire straits simply to fund OpenAI once. This deal threatens its credit rating, with SoftBank having to take on what will be multiple loans to fund the remaining $30 billion of OpenAI's $40 billion round — a round that has yet to close, and that OpenAI is, in fact, still raising.
    • This is before you consider the other $19 billion that SoftBank has agreed to contribute to the Stargate data center project, money that it does not currently have available.
  • OpenAI has promised $19 billion to the Stargate data center project, money it does not have and cannot get without SoftBank's funds.
    • Again, neither SoftBank nor OpenAI has the money for Stargate right now.
  • OpenAI must convert to a for-profit by the end of 2025, or it loses $20 billion of the remaining $30 billion of funding. If it does not convert by October 2026, its current funding converts to debt. It is demanding remarkable, unreasonable concessions from Microsoft, which is refusing to budge and is willing to walk away from the negotiations necessary to convert.
  • OpenAI does not have a path to profitability, and its future, like Anthropic's, is dependent on a continual flow of capital from venture capitalists and big tech, who must also continue to expand infrastructure.

Anthropic is in a similar, but slightly better, position — it is set to lose $3 billion this year on $4 billion of revenue. It also has no path to profitability, recently jacked up prices on Cursor, its largest customer, and had to put restraints on Claude Code after allowing users to burn compute worth anywhere from 100% to 10,000% of the revenue they brought in. These are the actions of a desperate company.

Nevertheless, OpenAI and Anthropic's revenues amount to, by my estimates, more than half of the entire revenue of the generative AI industry, including the hyperscalers.

To be abundantly clear: the two companies that amount to around half of all generative artificial intelligence revenue are ONLY LOSING MONEY.

I've said a lot of this before, which is why I'm not harping on about it, but the most important company in the entire AI industry needs to convert by the end of the year or it's effectively dead, and even if it does, it burns billions and billions of dollars a year and will die without continual funding. It has no path to profitability, and anyone telling you otherwise is a liar or a fantasist.

Worse still, outside of OpenAI...what is there, really?

There Is No Real AI Adoption, Nor Is There Any Significant Revenue

As I wrote earlier in the year, there is really no significant adoption of generative AI services or products. ChatGPT has 500 million weekly users, and otherwise, it seems that other services struggle to get 15 million of them. And while the 500 million weekly users sounds — and, in fairness, is — impressive, there’s a world of difference between someone using a product as part of their job, and someone dicking around with an image generator, or a college student trying to cheat on their homework.

Sidebar: Google cheated by combining Google Gemini with Google Assistant to claim that it has 350 million users. Don't care, sorry.

This is worrying on so many levels, chief of which is that everybody has been talking about AI for three god damn years, everybody has said "AI" in every earnings and media appearance and exhausting blog post, and we still can't scrape together the bits needed to make a functional industry.

I know some of you will probably read this and point to ChatGPT's users, and I quote myself here:

It has, allegedly, 500 million weekly active users — and, by the last count, only 15.5 million paying subscribers, an absolutely putrid conversion rate even before you realize that the actual conversion rate would be monthly active subscribers. That’s how any real software company actually defines its metrics, by the fucking way.

Why is this impressive? Because it grew fast? It literally had more PR and more marketing and more attention and more opportunities to sell to more people than any company has ever had in the history of anything. Every single industry has been told to think about AI for three years, and they’ve been told to do so because of a company called OpenAI. There isn’t a single god damn product since Google or Facebook that has had this level of media pressure, and both of those companies launched without the massive amount of media (and social media) that we have today. 

ChatGPT is a very successful growth product and an absolutely horrifying business. OpenAI is a banana republic that cannot function on its own, it does not resemble Uber, Amazon Web Services, or any other business in the past other than WeWork, the other company that SoftBank spent way too much money on.

And outside of ChatGPT, there really isn't anything else.

Yes, Generative AI "Does Something," But AI Is Predominantly Marketed Based On Lies

Before I wrap up — I'm tired, and I imagine you are too — I want to address something.

Yes, generative AI has functionality. There are coding products and search products that people like and pay for. As I have discussed above, none of these companies are profitable, and until one of them is profitable, generative AI-based companies are not real businesses.

In any case, the problem isn't so much that LLMs "don't do anything," but that people talk about them doing things they can't do.

  • The use of the word "agent" is a deliberate attempt to suggest that LLMs are autonomous.
  • Any and all stories about AI replacing jobs are intentionally manipulative attempts to boost stock valuations and suggest that models are capable of replacing human workers at scale. Allison Morrow of CNN has an excellent piece about this. As I discussed in this piece, this is one of the more egregious failures of the tech media I've ever seen, willingly publishing Dario Amodei outright making stuff up.
  • The discussion of the term "AGI" is an attempt to suggest that Large Language Models can create conscious intelligence, a fictional concept that Meta's chief AI scientist says won't come from scaling up LLMs.
    • Members of the media: every time you talk about the "really smart engineers they're paying," know that you are doing marketing for these companies, when what's really happening is people are giving tens of millions of dollars to guys who will work on teams that are pursuing a totally-unproven concept.
  • The use of the word "singularity" is similarly manipulative.
  • The stories about models "lying, cheating and stealing to reach goals" or "stopping themselves from being turned off" are intentionally deceptive, as these models can be (and clearly are being) prompted to take these actions.
    • To be abundantly clear, the manipulative suggestion here is that these models are autonomous or conscious in some way, which they are not.

I believe that the generative AI market is a $50 billion revenue industry masquerading as a $1 trillion one, and the media is helping.

The AI Trade Is Entirely About GPUs, And Is Incredibly Brittle As A Result

As I've explained at length, the AI trade is not one based on revenue, user growth, the efficacy of tools or significance of any technological breakthrough. Stocks are not moving based on whether they are making money on AI, because if they were, they'd be moving downward. However, due to the vibes-based nature of the AI trade, companies are benefiting from the press inexplicably crediting growth to AI with no proof that that's the case.

OpenAI is a terrible business, and the only businesses worse than OpenAI are the companies built on top of it. Large Language Models are too expensive to run, and have limited abilities beyond the ones I've named previously, and because everybody is running models that all, on some level, do the same thing, it's very hard for people to build really innovative products on top of them.

And, ultimately, this entire trade hinges on GPUs.

CoreWeave was initially funded by NVIDIA, its IPO funded partially by NVIDIA, NVIDIA is one of its customers, and CoreWeave raises debt on the GPUs it buys from NVIDIA to build more data centers, while also using the money to buy GPUs from NVIDIA. This isn’t me being polemic or hysterical — this is quite literally what is happening, and how CoreWeave operates. If you aren’t alarmed by that, I’m not sure what to tell you.

Elsewhere, Oracle is buying $40 billion in GPUs for the still-unformed Stargate data center project, and Meta is building a Manhattan-sized data center to fill with NVIDIA GPUs.

OpenAI is Microsoft's largest Azure client — an insanely risky proposition on multiple levels, not simply because Microsoft is serving that revenue at cost, but because Microsoft executives believed OpenAI would fail in the long term when they invested in 2023 — and Microsoft is NVIDIA's largest client for GPUs, meaning that any changes to Microsoft's future interest in OpenAI, such as reducing its data center expansion, would eventually hit NVIDIA's revenue.

Why do you think DeepSeek shocked the market? It wasn't because of any clunky story around training techniques. It was because it said to the market that NVIDIA might not sell more GPUs every single quarter in perpetuity.

Microsoft, Meta, Google, Apple, Amazon and Tesla aren't making much money from AI — in fact, they're losing billions of dollars on whatever revenues they do make from it. Their stock growth is not coming from actual revenue, but the vibes around "being an AI company," which means absolutely jack shit when you don't have the users, finances, or products to back them up.

So, really, everything comes down to NVIDIA's ability to sell GPUs, and this industry, if we're really honest, at this point only exists to do so. Generative AI products do not provide significant revenue growth, they are not useful in a way that unlocks significant business value, and the products that have some adoption run at a grotesque loss.

I'm Alarmed!

I realize I've thrown a lot at you, and, for the second time this year, written the longest thing I've ever written.

But I needed to write this, because I'm really worried.

We're in a bubble. If you do not think we're in a bubble, you are not looking outside. Apollo Global Chief Economist Torsten Slok said it last week. Well, okay, what he said was much worse:

“The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” Slok wrote in a recent research note that was widely shared across social media and financial circles.

We are in a bubble. Generative AI does not do the things that it's being sold as doing, and the things it can actually do aren't the kind of things that create business returns, automate labor, or really amount to much more than an extension of a cloud software platform. The money isn't there, the users aren't there, every company seems to lose money, and some companies lose so much money that it's impossible to tell how they'll survive.

Worse still, this bubble is entirely symbolic. The bailouts of the Great Financial Crisis were focused on banks and funds that had failed because they ran out of money, and the TARP initiative existed to plug the holes with low-interest loans.

There are few holes to plug here, because even if OpenAI and Anthropic were somehow kept alive as eternal money-burners, the AI trade exists based on the continued and continually-increasing sale and use of GPUs. There are limited amounts of capital, and limited numbers of data centers to actually put GPUs in, and on top of that, at some point growth will slow at one of the Magnificent 7, at which point costs will have to come down — starting with the things that lose them tons of money, such as generative AI.

Before you ask-

But, Isn’t The Cost Of Inference Going Down?

You do not have proof for this statement! The cost of tokens going down is not the same thing as the cost of inference going down! Everyone saying this is saying it because a guy once said it to them! You don't have proof! I have more proof for what I am saying!

While it theoretically might be, all evidence points to larger models costing more money, especially reasoning-heavy ones like Claude Opus 4. Inference is not the only thing happening, and if this is your one response, you are a big bozo and doofus and should go back to making squeaky noises when you see tech executives or hear my name.

But Ed, What About ASICs?

Okay, so one argument is that these companies will use ASICs — customized chips for specific operations — to reduce the amount they're spending.

A few thoughts:

  • When? Say OpenAI and Broadcom actually build their ASIC in 2026 (they won't) — how many of them will they build? Do they have contracts with companies that can actually produce high-performance silicon, of which there are only three (Samsung, TSMC, and arguably SMIC, which is currently sanctioned)? These companies typically have their capacity booked well in advance, and even starting a production run of a semiconductor product can take weeks. Do they have the server architecture prepared? Have they tested it? Does it work? Is the performance actually good? Microsoft has failed to create a workable, reliable ASIC. What makes OpenAI special?
  • It takes a lot of money to build these chips, and they are yet to be proven better than NVIDIA GPUs for AI compute — and even if they are, are these companies going to retrofit every data center? Can they build enough?
  • If this actually happens, it still fucks up the AI trade. NVIDIA STILL NEEDS TO SELL GPUs!

I am worried because despite all of these obvious, brutal and near-unfixable problems, everybody is walking around acting like things are going great with AI. The New York Times claims everybody is using AI for everything — a blatant lie, one that exists to prop up an industry that has categorically failed to deliver the innovations or returns that it promised, yet still receives glowing press from a tech and business media that refuses to look outside and see that the sky is red and frogs are landing everywhere.

Other than the frog thing, I'm not even being dramatic. Everywhere you look in the AI trade, things get worse — no revenue, billions being burned, no moat, no infrastructure play, no comparables in history other than the dot com bubble and WeWork, and a series of flagrant lies spouted by the powerful and members of the press that are afraid of moving against market consensus.

Worse still, despite NVIDIA's strength, NVIDIA is the market's weakness, through no fault of its own, really. Jensen Huang sells GPUs, people want to buy GPUs, and now the rest of the market is leaning aggressively on one company, feeding it billions of dollars in the hopes that the things they're buying start making them a profit.

And that really is the most ridiculous thing. At the center of the AI trade sits GPUs that, on installation, immediately start losing the company in question money. Large Language Models burn cash for negative returns to build products that all kind of work the same way.

If you're going to say I'm wrong, sit and think carefully about why. Is it because you don't want me to be right? Is it because you think "these companies will work it out"? This isn't anything like Uber, AWS, or any other situation. It is its own monstrosity, a creature of hubris and ignorance caused by a tech industry that's run out of ideas, built on top of one company.

You can plead with me all you want about how there are actual people using AI. You've probably read the "My AI Skeptic Friends Are All Nuts" blog, and if you're gonna send it to me, read the response from Nik Suresh first. If you're going to say that I "don't speak to people who actually use these products," you are categorically wrong and in denial.

I am only writing with this aggressive tone because, for the best part of two years, I have been made to repeatedly explain myself in a way that no AI "optimist" ever is, and I admit I resent it. I have written hundreds of thousands of words with hundreds of citations, and still, to this day, there are people who claim I am somehow flawed in my analysis, that I'm missing something, that I am somehow failing to make my case.

The only people failing to make their case are the AI optimists still claiming that these companies are making "powerful AI." And once this bubble pops, I will be asking for an apology.

I Don't Like What's Happening

I love ending pieces with personal thoughts about stuff because I am an emotional and overly honest person, and I enjoy writing a lot.

I do not, however, enjoy telling you at length how brittle everything is. An ideal tech industry would be one built on innovation, revenue, real growth based on actual business returns that helped humans be better, not outright lie about replacing them. All that generative AI has done is show how much lust there is in both the markets and the media for replacing human labor — and yes, it is in the media too. I truly believe there are multiple reporters who feel genuine excitement when they write scary stories about how Dario Amodei says white collar workers will be fired in the next few years in favour of "agents" that will never exist.

Everything I’m discussing is the result of the Rot Economy thesis I wrote back in 2023 — the growth-at-all-costs mindset that has driven every tech company to focus on increasing quarterly revenue numbers, even if the products suck, or are deeply unprofitable, or, in the case of generative AI, both.

Nowhere has there been a more pungent version of the Rot Economy than in Large Language Models, or more specifically GPUs. By making everything about growth, you inevitably reach a point where the only thing you know how to do is spend money, and both LLMs and GPUs allowed big tech to do the thing that worked before — building a bunch of data centers and buying a bunch of chips — without making sure they’d done the crucial work of “making sure this would create products people like.” As a result, we’re now sitting on top of one of the most brittle situations in economic history — our markets held up by whether four or five companies will continue to buy chips that start losing them money the second they’re installed.

I am disgusted by how many people are unwilling or unable to engage with the truth, favouring instead a scornful, contemptuous tone toward anybody who doesn't believe that generative AI is the future. If you are a writer that writes about AI smarmily insulting people who "don't understand AI," you are a shitty fucking writer, because either AI isn't that good or you're not good at explaining why it's good. Perhaps it's both.

If you want to know my true agenda, it's that I see in generative AI and its boosters something I truly dislike. Large Language Models authoritatively state things that are incorrect because they have no concept of right or wrong. I believe that the writers, managers and executives that find it exciting do so because it gives them the ability to pretend to be intelligent without actually learning anything, to do everything they can to avoid actual work or responsibility for themselves or others.

There is an overwhelming condescension that comes from fans of generative AI — the sense that they know something you don't, something they double down on. We are being forced to use it by bosses, or services we like that now insist it's part of our documents or our search engines, not because it does something, but because those pushing it need us to use it to prove that they know what's going on.

To quote my editor Matt Hughes: "...generative AI...is an expression of contempt towards people, one that considers them to be a commodity at best, and a rapidly-depreciating asset at worst."

I haven't quite cracked why, but generative AI also brings out the worst in some people. By giving the illusion of labor, it excites those who are desperate to replace or commoditize it. By giving the illusion of education, it excites those who are too idle to actually learn things by convincing them that in a few minutes they can learn quantum physics. By giving the illusion of activity, it allows the gluttony of Business Idiots that control everything to pretend that they do something. By giving the illusion of futurity, it gives reporters that have long-since disconnected from actual software and hardware the ability to pretend that they know what's happening in the tech industry.

And, fundamentally, its biggest illusion is economic activity, because despite being questionably-useful and burning billions of dollars, its need to do so creates a justification for spending billions of dollars on GPUs and data center sprawl, which allows big tech to sink money into something and give the illusion of growth.

I love writing, but I don't love writing this. I think I'm right, and it's not something I'm necessarily happy about. If I'm wrong, I'll explain how I'm wrong in great detail, and not shy away from taking accountability, but I really do not think I am, and that's why I'm so alarmed.

What I am describing is a bubble, and one with an obvious weakness: one company's ability to sell hardware to four or five other companies, all to run services that lose billions of dollars.

At some point the momentum behind NVIDIA slows. Maybe it won't even be sales slowing — maybe it'll just be the suggestion that one of its largest customers won't be buying as many GPUs. Perception matters just as much as actual numbers, and sometimes more, and a shift in sentiment could start a chain of events that knocks down the entire house of cards. 

I don't know when, I don't know how, but I really, really don't know how I'm wrong.

I hate that so many people will see their retirements wrecked, and that so many people intentionally or accidentally helped steer the market in this reckless, needless and wasteful direction, all because big tech didn’t have a new way to show quarterly growth. I hate that so many people have lost their jobs because companies are spending the equivalent of the entire GDP of some European countries on data centers and GPUs that won’t actually deliver any value. 

But my purpose here is to explain to you, no matter your background or interests or creed or whatever way you found my work, why it happened. As you watch this collapse, I want you to tell your friends about why — the people responsible and the decisions they made — and make sure it’s clear that there are people responsible.

Sam Altman, Dario Amodei, Satya Nadella, Sundar Pichai, Tim Cook, Elon Musk, Mark Zuckerberg and Andy Jassy have overseen a needless, wasteful and destructive economic force that will harm our markets (and by a larger extension our economy) and the tech industry writ large, and when this is over, they must be held accountable.

And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools. You are smarter than they reckon and stronger than they know, and a better future is one where you recognize this, and realize that power and money doesn’t make a man righteous, right, or smart.

I started writing this newsletter with 300 subscribers, and I now have 67,000 and a growing premium subscriber base. I am grateful for the time you’ve given me, and really hope that I continue to help you see the tech industry for what it currently is — captured almost entirely by people that have no interest in building the future.

The Remarkable Incompetence At The Heart Of Tech

2025-07-18 22:51:41

Hello premium subscribers! Today I have the first guest post I've ever commissioned (read: paid) on Where's Your Ed At - Nik Suresh, one of the greatest living business and tech writers, best-known for his piece I Will Fucking Piledrive You If You Mention AI Again, probably my favourite piece of the AI era.

I want to be clear that I take any guest writing on here very seriously, and do not intend to do this regularly. The quality bar is very high, which is why I started with Nik. I cannot express enough how much I love his work. Brainwash An Executive Today is amazing, as is Contra Ptacek, his teardown of "My AI Skeptic Friends Are All Nuts." Nik is a software engineer, the executive director of an IT consultancy, and in general someone who actually understands software and the industries built around selling it.

You can check out his work here and check out his team here.


Ed asked me to write about why leaders around the world are constantly buying software they don't need. He probably had a few high-profile companies in mind, like Snowflake. Put aside whether Snowflake is a good product – most people don't know what a database is, so why on earth does a company selling a specialized and very expensive database have a market cap of $71B? 

That’s a fair question – and being both a software engineer and the managing director at a tech consultancy, I can talk about what’s happening on the ground. And yes, people are buying software that they don’t need.

I wish that was the extent of our problems.

Pointless software purchases are a comparatively minor symptom of the seething rot and stunning incompetence at the core of most companies’ technical operations. Things are bad to a degree that sounds unbelievable to people that don’t have the background to witness or understand it firsthand.

Here is my thesis:

Most enterprise SaaS purchases are simply a distraction – total wishful thinking – for leaders that hope waving a credit card is going to absolve them of the need to understand and manage the true crisis in software engineering. Buying software has many desirable characteristics – everyone else is doing it, it can stall having to deliver results for years, and it allows leaders to adopt a thin veneer of innovation. In reality, they're settling for totally conservative failure. The real crisis, the one they're ignoring, is only resolved by deep systems thinking, emotional awareness, and an actual understanding of the domain they operate in.

And that crisis, succinctly stated, is thus: our institutions are filled to burst with incompetents cosplaying as software engineers, forked-tongue vermin-consultants hawking lies to the desperate, and leaders who think reading Malcolm Gladwell makes you a profound intellectual (if you don't understand why this is a problem, please report to my office for immediate disciplinary action). 

I'm going to try and explain what things are actually like at normal companies. Welcome to my Hell, and hold your screams until the end.

# I. The Industry is Sick in a Way That Can’t Be Solved by SaaS Spend

The typical team at a large organization – in a truly random office, of the sort that buys products like Salesforce but will otherwise never be in the news – might literally deliver nothing of value for years at a time. I know, I know, how can people be doing nothing for years? A day? Sure, everyone has an off-day. Weeks? Maybe. But years? Someone’s going to notice eventually, right?

Most industries have long-since been seized by a variety of tedious managerialism that’s utterly divorced from actually accomplishing any work, but the abstract nature of programming allows for software teams to do nothing to a degree that stretches credulity. Code can be reported as 90% done in meetings for years; there’s no physical artifact that non-programmers can use to verify it. There’s no wall with half the bricks laid, just lines of incomprehensible text which someone assures you constitutes Value. 

This is a real, well-known phenomenon amongst software engineers, but no one believes us when we bring it up, because surely there’s no way profit-obsessed capitalists are spending millions of dollars on teams with no visible output.

I know it sounds wild. I feel like I’ve been taking crazy pills for years. But enjoy some anecdotes:

My first tech job was “data scientist,” a software engineering subspecialty focused on advanced statistical methods (or “AI” if you are inclined towards grifting). When a data scientist successfully applies statistical methods to solve a business problem, it’s called producing a “model.” My team produced no models in two years, but nonetheless received an innovation award from leadership, and they kept paying me six figures. I know small squads of data scientists with salaries totaling millions that haven’t deployed working models for twice that long.

During my next job, at an entirely unrelated organization, I was tasked with finishing a website that had been "almost done" for a few years, whose main purpose was for a team to do some data entry. This is something that takes a competent team about two weeks – my current team regularly does more complicated things in that time. I finished in good time and handed it to the IT department to host, a task that should take a day if done very efficiently, or perhaps three months if you were dealing with extreme bureaucracy and a history of bad technical decisions. It's been five years and the organization has yet to deploy the finished product. I later discovered that the company had spent four years trying before I joined. It's just a website! When the internet was coming up, people famously hired teenagers to do this!

I’m not even going to get into my third and fourth jobs, except to say they involved some truly spectacular displays of technical brilliance, such as discovering a team burning hundreds of thousands of dollars on Snowflake because they didn’t take thirty seconds to double-check any settings. I suspect that Snowflake’s annual revenue would drop by more than 20% if every team in the world spent five minutes (actually five minutes, it was that easy) to make the change I did – editing a single number in the settings that has no negative side-effects for the typical business – but they’re also staffed by people that don’t read or study, so there’s no way to reach them.

I warn every single friend who enters the software industry that unless they land a role with the top 1% of software engineering organizations, they are about to witness true madness. Without fail, they report back in six months with something along the lines of “I thought you were exaggerating.” In a private conversation about a year ago, an employee that left a well-known unicorn start-up confided:

“After leaving that company, I couldn’t believe that the rest of the world works this way.”

There are places where this doesn’t happen, but this madness is overwhelmingly the experience at companies that purchase huge enterprise products like Salesforce – the relationship between astonishing inefficiency and buying these products is so strong that it’s a core part of how my current team handles sales. We don’t waste time trying to sell to companies that use this stuff – it’s usually too late to save them – and I spend a lot of time tracking down companies in the process of being pitched this stuff by competing vendors.

In 2023, software engineer Emmanuel Maggiore wrote:

“When Twitter fired half of its employees in 2022, and most tech giants followed suit, I wasn’t surprised. In fact, I think little will change for those companies. After being employed in the tech sector for years, I have come to the conclusion that most people in tech don’t work. I don’t mean we don’t work hard; I mean we almost don’t work at all. Nada. Zilch. And when we do get to do some work, it often brings low added value to the company and its customers. All of this while being paid an amount of money some people wouldn’t even dream of.”

This will be totally unrecognizable to about half the software people in the world – those working at companies like Netflix which are famous for their software engineering cultures, or some of those working at startups where there isn’t enough slack to obfuscate. For everyone else, what I’ve described is Tuesday.


Anthropic Is Bleeding Out

2025-07-12 00:32:04

Hello premium customers! Feel free to get in touch at [email protected] if you're ever feeling chatty. And if you're not one yet, please subscribe and support my independent brain madness.

Also, thank you to Kasey Kagawa for helping with the maths on this.

Soundtrack: Killer Be Killed - Melting Of My Marrow


Earlier in the week, I put out a piece about how Anthropic had begun cranking up prices on its enterprise customers, most notably Cursor, a $500 million Annualised Recurring Revenue (meaning one month’s revenue multiplied by 12) startup that is also Anthropic’s largest customer for API access to models like Claude Sonnet 4 and Opus 4.
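
To make the run-rate arithmetic concrete, here’s a minimal sketch (with rounded, illustrative figures, not Anthropic’s or Cursor’s actual books):

```python
# Annualized run rate is just one month's revenue multiplied by 12.
# It ignores churn and seasonality, which is why the number flatters.

def annualized_run_rate(monthly_revenue: float) -> float:
    return monthly_revenue * 12

# A "$500 million ARR" startup works back to roughly $41.7M a month:
print(annualized_run_rate(41.7e6))  # ~500,400,000, i.e. ≈ $500M annualized
```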

As a result, Cursor had to make massive changes to the business model that had let it grow so large in the first place, replacing (on June 17 2025, a few weeks after Anthropic’s May 22 launch of its Claude Opus 4 and Sonnet 4 models) a relatively limitless $20-a-month offering with a much-more-limited $20-a-month package and a less-limited-but-still-worse-than-the-old-$20-tier $200-a-month subscription, pissing off customers and turning much of the Cursor subreddit into people complaining or saying they’d cancel their subscriptions.

Though I recommend you go and read the previous analysis, the long and short of it is that Anthropic increased the costs on its largest customer — a coding startup — about 8 days (on May 30 2025) after launching two models (Sonnet 4 and Claude Opus 4) specifically dedicated to coding.

I concluded with the following:

What I have described in this newsletter is one of the most dramatic and aggressive price increases in the history of software, with effectively no historical comparison. No infrastructure provider in the history of Silicon Valley has so distinctly and aggressively upped its prices on customers, let alone their largest and most prominent ones, and doing so is an act of desperation that suggests fundamental weaknesses in their business models.

Worse still, these changes will begin to kneecap an already-shaky enterprise revenue story for two companies desperate to maintain one. OpenAI's priority pricing is basic rent-seeking, jacking up prices to guarantee access. Anthropic's pricing changes are intentional, mob-like attempts to increase revenue by hitting its most-active customers exactly where it hurts, launching a model for coding startups to integrate that’s specifically priced to increase costs on enterprise coding startups.

But the whole time I kept coming back to a question: why, exactly, would Anthropic do this? Was this rent seeking? A desperate attempt to boost revenue? An attempt to bring its largest customer’s compute demands under control as it regularly pushed Anthropic’s capacity to the limit?

Or, perhaps, it was a little simpler: was Anthropic having its own issues with capacity, and maybe even cash flow?

Another announcement happened on May 22 2025 — Anthropic launched Claude Code, a version of Claude that runs directly in your terminal (or integrates into your IDE) and uses Anthropic’s models to write and manage code. This is, I realize, a bit of an oversimplification, but the actual efficacy or ability of Claude Code is largely irrelevant other than in the sheer amount of cloud compute it requires.

As a reminder, Anthropic also launched its Claude Sonnet 4 and Opus 4 models on May 22 2025, shortly followed by its Service Tiers, and then both Cursor and vibe-coding startup Replit’s price changes, which I covered last week. These are not the moves of a company brimming with confidence about its infrastructure or financial position, which made me want to work out why things might have got more expensive.

And then I found out, and it was really, really fucking bad.

Claude Code, as a product, is quite popular, along with its Sonnet 4 and Opus 4 models. It’s accessible via Anthropic’s $20-a-month “Pro” subscription (but only using the Claude Sonnet 4 model), or the $100 (5x the usage of Pro) and $200 (20x the usage of Pro) “Max” subscriptions. While people hit rate limits, they seem to be getting a lot out of using it, to the point that you have people on Reddit boasting about running eight parallel instances of Claude Code.

Something to know about software engineers is that they’re animals, and I mean that with respect. If something can be automated, a software engineer is at the very least going to take a look at automating it, and Claude Code, when poked in the right way, can automate a lot of things, though to what quality level or success rate I have no real idea. And while there are limits and restrictions, software engineers absolutely fucking love testing limits and getting around restrictions, and many I know see them as challenges to overcome.

As a result, software engineers are running Claude Code at full tilt, to the limits, to the point that some set alarms to wake up during the night when their limits reset after five hours to maximize their usage, along with specialised dashboards to help them do so. One other note: Claude Code creates detailed logs of the input and output tokens it uses throughout the day to complete its tasks, including whether said tokens were written to or read from the cache.

Software engineers also love numbers, and they also love deals, and thus somebody created CCusage, a tool that, using those logs, allows Claude Code users to see exactly how much compute they’re burning through their subscription, even though Anthropic is only charging them $20, $100 or $200 a month. CCusage compares these logs (which contain both how tokens were used and which models were run) to the up-to-date prices of Anthropic’s API, and tells you exactly how much you’ve spent in compute (here’s a more detailed run-down).
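
To give a sense of the mechanics, here’s a minimal sketch of that kind of calculation. The per-million-token prices below are illustrative placeholders, not Anthropic’s actual rate card, and the log format is simplified:

```python
# A ccusage-style estimate: multiply the token counts recorded in
# Claude Code's logs by per-token API prices. All prices here are
# assumed placeholders for illustration, not Anthropic's rate card.

ASSUMED_PRICES_PER_MILLION = {
    # model: (input, output, cache_write, cache_read) in USD
    "example-large-model": (15.00, 75.00, 18.75, 1.50),
    "example-small-model": (3.00, 15.00, 3.75, 0.30),
}

def estimated_cost(log_entries: list[dict]) -> float:
    """Sum the implied API cost of every logged request."""
    total = 0.0
    for entry in log_entries:
        p_in, p_out, p_cw, p_cr = ASSUMED_PRICES_PER_MILLION[entry["model"]]
        total += (
            entry["input_tokens"] * p_in
            + entry["output_tokens"] * p_out
            + entry["cache_write_tokens"] * p_cw
            + entry["cache_read_tokens"] * p_cr
        ) / 1_000_000
    return total

# One hypothetical heavy request against the larger model:
print(estimated_cost([{
    "model": "example-large-model",
    "input_tokens": 200_000, "output_tokens": 30_000,
    "cache_write_tokens": 50_000, "cache_read_tokens": 400_000,
}]))  # ≈ $6.79
```

Run that per-request arithmetic over a full day of logs and you get the dashboards people post: hundreds of dollars of implied compute against a $20 subscription.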

In simpler terms, CCusage is a relatively accurate barometer of how much you are costing Anthropic at any given time, with the understanding that its costs may (we truly have no idea) be lower than the API prices it charges, though I’ll add that, given Anthropic is expected to lose $3 billion this year (that’s after revenue!), there’s a chance it’s actually losing money on every API call.

Nevertheless, there’s one much, much, much bigger problem: Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.


Some caveats before we continue:

  • CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how it charges customers, or any efficiencies it may have.
  • Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base. 
  • Nevertheless, the number of cases I’ve found online of egregious, unrelentingly unprofitable burn is deeply concerning, and it’s hard to imagine that these examples are outliers.
  • We do not know if the current, unrestricted version of Claude Code will last.

The reason I’m leading with these caveats is because the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking. 

In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3000% of what each Pro or Max customer pays when they interact with Claude Code, and at every price point I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some customers on a $200-a-month subscription burning as much as $10,000 worth of compute.

Sidenote: While researching this piece, I decided to send my editor, Matt Hughes, £20 and told him to create an Anthropic Pro account and install Claude Code with the aim of seeing how much of Anthropic’s money he could burn over the course of an hour or so. 

Matt, a developer-turned-journalist, told Claude Code to build the scaffolding for a browser-based game using the phaser.js library — a simple, incredibly accessible tool for creating HTML5 games. Just creating that scaffolding ended up burning around $2.50, and that was over the course of just over an hour.

It’s easy to see how someone using Claude Code as part of their job, or as part of creating their side-project, could end up burning way more money than they paid as part of their package. 
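
As a rough, hedged extrapolation from that experiment (assuming Matt’s burn rate is typical, which we can’t know), the arithmetic looks like this:

```python
# Extrapolating from Matt's test: roughly $2.50 of compute in about an
# hour, on a $20-a-month Pro subscription. Assumes a constant burn
# rate, which real-world usage won't have; figures are illustrative.

hourly_burn = 2.50    # USD of compute per hour, observed once
subscription = 20.00  # Pro plan, USD per month

# Hours of use before the whole month's payment is burned:
print(subscription / hourly_burn)  # 8.0 hours

# A developer running it four hours a day, 20 workdays a month:
print(hourly_burn * 4 * 20)  # $200 of compute against a $20 plan
```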

Furthermore, it’s important to know that Anthropic’s models are synonymous with using generative AI to write or manage code. Anthropic’s models make up more than half of all programming-related tokens that flow through OpenRouter, a popular unified API for integrating models into services, and currently lead on major LLM coding benchmarks. Much of Cursor’s downfall has come from integrating both of these models, in particular the expensive (and, I imagine, compute-intensive) Opus 4 model, which Anthropic only allows users of its $100-a-month or $200-a-month “Max” subscriptions to access.

In my research, I have found more than thirty different reported instances of users whose spending exceeded the amount they pay Anthropic by no less than 100%. For the sake of your sanity, I will split them up by their paid subscription, and by how much more than it they spent.

Anthropic and OpenAI Have Begun The Subprime AI Crisis

2025-07-08 00:34:48

Hello premium customers! Feel free to get in touch at [email protected] if you're ever feeling chatty. And if you're not one yet, I'm sorry that I paywalled this, but it took me so much effort and drove me a little insane.


Back in September 2024 I wrote about a phenomenon I call The Subprime AI Crisis — that companies like Anthropic and OpenAI are providing their services at a massive loss, and that at some point they would have to start recouping their costs by raising the prices of their services, which in turn would force those connecting to their APIs to start doing the same to their customers.

As an aside, I also made the following prediction:

I believe that, at the very least, Microsoft will begin reducing costs in other areas of its business as a means of helping sustain the AI boom. In an email shared with me by a source from earlier this year, Microsoft's senior leadership team requested (in a plan that was eventually scrapped) reducing power requirements from multiple areas within the company as a means of freeing up power for GPUs, including moving other services' compute to other countries as a means of freeing up capacity for AI.

Microsoft laid off 9000 people last week, one of the largest rounds of layoffs in its history, hitting its Xbox division hardest, about a month and a half after laying off 6000 people.

But really, my biggest prediction was this:

I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at a vastly-discounted rate, heavily-centralized and subsidized by big tech. At some point, the incredible, toxic burn-rate of generative AI is going to catch up with them, which in turn will lead to price increases, or companies releasing new products and features with wildly onerous rates — like the egregious $2-a-conversation rate for Salesforce’s “Agentforce” product — that will make even stalwart enterprise customers with budget to burn unable to justify the expense.

What happens when the entire tech industry relies on the success of a kind of software that only loses money, and doesn’t create much value to begin with? And what happens when the heat gets too much, and these AI products become impossible to reconcile with, and these companies have nothing else to sell?

We may be about to find out.

The Enshittification Of Cursor

Last week, it came out that Anthropic, whose Claude models compete with those made by OpenAI, had hit $4 billion in annualized revenue, meaning [whatever month it is] multiplied by twelve, and expects to lose $3 billion in 2025 because of how utterly unprofitable its models are, though The Information adds that this is an improvement over a loss of $5.6 billion in 2024, which Anthropic claims was due to "a one-off payment to access the data centers that power its technology."

Hey, wait a second. Isn't Anthropic running its services on Amazon Web Services and Google Cloud? Didn't both Google and Amazon fund them? Is Anthropic just handing their money back to them? Weird! Kinda reminds me of how Microsoft is booking the revenue it gets from OpenAI handing it cloud credits. Weird!

Anyway, The Information added in its piece that Anthropic also lost the lead developer of its Claude Code product (Boris Cherny) to Anysphere, the buzzy startup behind Cursor, along with Cat Wu, one of the product managers on Claude Code, with both allegedly going on to develop "agent-like features," a thing that, to quote The Information, involves "automating complex coding tasks involving multiple steps."

I wouldn't usually just write down exactly what a startup has told The Information, but these details are important, as are the following:

Cursor’s growth is also accelerating thanks to advances in Anthropic’s models and what developers say is an easy-to-use interface. The company said last month that it has surpassed $500 million in annual recurring revenue, or $42 million in revenue per month. That’s more than double its pace of $200 million in annual recurring revenue as of March. Anysphere’s valuation is $9.9 billion, up from $2.6 billion in December.

Anysphere has become precious to Silicon Valley — proof that startups other than OpenAI and Anthropic can build actual products that use AI in some way and that people will pay for. According to The Information's Generative AI database, it has the most recurring revenue of any private AI-powered software-as-a-service startup (outside of the aforementioned big two).

Cursor and Anysphere are symbolic. Cursor is a tool that developers actually like, that actually makes money, that grew organically based on people talking about how much they liked it, and it proliferated one of generative AI's only real use cases — being able to generate or edit code quickly. 

To get a little more specific, Cursor is something called an IDE — an integrated development environment, which allows a developer to write code, run tests, manage projects, and so on — but with AI integrations that can predict what your next change to the code might be (which Cursor calls "tab completions"), and the ability to generate code and take actions across an entire project rather than in separate requests. If you want a deeper dive (as I'm not a software developer), I recommend reading this piece from Random Coding.

A note about coding startups and AI-generated code: Code is character-heavy, as I'll get into later, which means that coding startups in general use way more generative AI services than, say, a company generating text or images. Code is extremely verbose, bad code often more so, and changes to it are nuanced, requiring the generative AI model in question to ingest and output a great deal of stuff.

This is, of course, compounded by these models' propensity for hallucinations. Basically, relying on AI-generated code to any degree means knowing that you're going to generate a certain amount of crap that you'll have to fix. While you save time in the aggregate, you are still burning extra tokens on the mistakes a model might make.
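
To put rough numbers on why code burns so many tokens, here's a sketch using the common "about four characters per token" rule of thumb. Every figure here is an assumption for illustration:

```python
# Why code is token-heavy: a back-of-the-envelope estimate. The ~4
# characters-per-token ratio is a rough heuristic that varies by
# tokenizer and content; the file size and failure rate are assumed.

CHARS_PER_TOKEN = 4

def approx_tokens(chars: int) -> int:
    return chars // CHARS_PER_TOKEN

# A model editing a 2,000-line file (~60 characters per line) re-ingests
# the whole thing as context on each request:
file_tokens = approx_tokens(2_000 * 60)
print(file_tokens)  # 30,000 tokens of input per request

# If one request in three produces broken code that has to be redone,
# the expected number of attempts per successful change is 1.5:
retry_multiplier = 1 / (1 - 1 / 3)
print(file_tokens * retry_multiplier)  # 45,000.0 effective input tokens
```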

You will eventually realise why that's bad.

Nevertheless, the long and short of it is that Cursor is a well-liked product for using AI to build software, with the ability to ask it to take distinct actions using natural language, specifically using Cursor's (sigh) "agent," which can be told to do something and then work on it in the background as you go and do something else. Nothing about what I'm saying is an endorsement of the product, but it's hard to deny that software developers generally liked Cursor, and that it's become extremely popular as a result.

Or, perhaps, I should've said "liked."