2025-11-06 08:00:00
Just how much are we spending on AI?
Compared to other massive infrastructure & mobilization projects, AI is the sixth largest in US history so far, measured as a share of GDP.
World War II dwarfs everything else at 37.8% of GDP. World War I consumed 12.3%. The New Deal peaked at 7.7%. Railroads during the Gilded Age reached 6.0%.
AI infrastructure today sits at 1.6%, just above the telecom bubble’s 1.2% & well below the major historical mobilizations.
| Project | Year | Spending (2025$) | % of GDP |
|---|---|---|---|
| World War II | 1944 | $1,152B | 37.8% |
| World War I | 1918 | $138B | 12.3% |
| New Deal | 1936 | $150B | 7.7% |
| Railroads (peak) | 1870 | $18B | 6.0% |
| Interstate Highways | 1964 | $142B | 2.0% |
| AI Infrastructure | 2024 | $500B | 1.6% |
| Telecom Bubble | 2000 | $226B | 1.2% |
| Manhattan Project | 1945 | $36B | 0.9% |
| Apollo Program | 1966 | $59B | 0.7% |
Companies like Microsoft, Google, & Meta are investing $140B, $92B, & $71B respectively in data centers & GPUs. OpenAI plans to spend $295B in 2030 alone.
If we assume OpenAI represents 30% of the market, total AI infrastructure spending would reach $983B annually by 2030, or 2.8% of GDP.1
| Scenario | 2024 | 2030 | % of GDP |
|---|---|---|---|
| Current AI Infrastructure | $500B | - | 1.6% |
| OpenAI Projected Spending | - | $295B | 0.8% |
| Total Market (projected) | - | $983B | 2.8% |
To match the railroad era’s 6% of GDP, AI spending would need to reach $2.1T per year by 2030 (6% of projected $35.4T GDP), a 320% increase from today’s $500B. That would require Google, Meta, OpenAI, & Microsoft each investing $500-700B per year, a 5-7x increase from today’s levels.
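If you want to check the math yourself, here's a small Python sketch of the projections above. The 2030 GDP figure, the 30% OpenAI market share, & the $295B spending estimate are the post's assumptions, not reported data.

```python
# Sketch of the projection math above. The 2030 GDP, the 30% OpenAI market
# share, & the $295B spending figure are assumptions from the post.

current_spend_b = 500        # 2024 AI infrastructure spending, $B
openai_2030_b = 295          # OpenAI's projected 2030 spending, $B
openai_share = 0.30          # assumed OpenAI share of total market spend
gdp_2030_t = 35.4            # projected 2030 GDP, $T (2.5% annual growth)

market_2030_b = openai_2030_b / openai_share
print(f"Projected 2030 market: ${market_2030_b:,.0f}B "
      f"({market_2030_b / (gdp_2030_t * 1000):.1%} of GDP)")            # ~$983B, 2.8%

railroad_parity_b = 0.06 * gdp_2030_t * 1000   # 6% of GDP, the railroad-era peak
print(f"Railroad parity: ${railroad_parity_b / 1000:.1f}T per year, "
      f"{railroad_parity_b / current_spend_b:.1f}x today's ${current_spend_b}B")  # ~$2.1T, 4.2x
```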
And that should give you a sense of how much we were spending on railroads 150 years ago!
World War I & II:
New Deal:
Railroads:
Telecom Bubble:
Apollo Program:
Manhattan Project:
AI Infrastructure:
All historical spending figures are adjusted to 2025 dollars using Consumer Price Index (CPI) inflation data. Each figure represents peak single-year spending in the year indicated. Percentages show spending as a share of GDP in that specific year, not as a percentage of today’s GDP.
For example, WWII’s $1,152B represents actual 1944 defense spending ($63B nominal) adjusted for inflation, which consumed 37.8% of 1944’s GDP ($175B). This differs from asking “what would 37.8% of today’s $30.5T GDP cost?” which would yield $11.5T.
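Here's the same distinction in a few lines of Python, using only the figures in this post; the inflation multiplier is implied by the nominal & 2025-dollar figures rather than looked up in a CPI table.

```python
# All figures come from the post; the inflation multiplier is implied by the
# nominal & 2025-dollar amounts rather than taken from a CPI table.

nominal_1944_b = 63              # WWII defense spending, nominal $B
cpi_multiplier = 1152 / 63       # implied inflation factor to 2025 dollars (~18x)
share_of_1944_gdp = 0.378        # peak-year share of GDP, from the table
gdp_today_t = 30.5               # today's GDP, $T

real_2025_b = nominal_1944_b * cpi_multiplier
today_equivalent_t = share_of_1944_gdp * gdp_today_t
print(f"${real_2025_b:,.0f}B in 2025 dollars")                                        # $1,152B
print(f"{share_of_1944_gdp:.1%} of today's GDP would be ${today_equivalent_t:.1f}T")  # $11.5T
```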
Assuming 2.5% annual GDP growth to $35.4T in 2030 ↩︎
2025-11-05 08:00:00
I wrote a post titled Congratulations, Robot. You’ve Been Promoted! about OpenAI declaring that its AI coders were no longer junior engineers but mid-level engineers.
The post triggered the largest unsubscription rate in this blog’s history. It was a 4-sigma event.
A Nassim Taleb black swan, this was something that should happen once every 700 years of a blog author’s career.
Clearly, the post struck a nerve.
In a job market 13% smaller for recent grads than in recent years, a subtle fear persists that positive developments in AI accuracy & performance will accelerate job losses. Stanford’s research found :
“young workers (ages 22–25) in the most AI-exposed occupations, such as software developers & customer service reps, have experienced a 13% relative decline in employment since generative AI became widely used.”
The whispered question beneath all this data : are AI advances a zero-sum game for jobs? Are we in the modern era hearing the same refrain as Springsteen lamenting the impact of globalization on his hometown :
They’re closing down the textile mill across the railroad tracks. Foreman says “These jobs are going, boys, and they ain’t coming back”
Let’s go to the data.
Software engineering employment grew steadily from 3.2m developers in 2010 to 4.7m in 2022 during the ZIRP (Zero Interest Rate Policy) era. The 2020-2022 period was the hottest tech jobs market of all time, with demand doubling since 2020.
Then the Fed raised rates aggressively, increasing the cost of capital & triggering a 4.3% contraction that hit younger workers.
But layoff data suggests this trend isn’t accelerating.
Tech companies laid off 264,220 employees in 2023. The 2025 data (annualized from 11 months through November) projects 122,890 layoffs for the full year. There’s no acceleration yet in the data.
The data doesn’t yet show what readers are clearly feeling : a trepidation that AI advances will accelerate job losses.
2025-11-03 08:00:00
OpenAI has committed to spending $1.15 trillion on hardware & cloud infrastructure between 2025 & 2035.1
The spending breaks down across seven major vendors: Broadcom ($350B), Oracle ($300B), Microsoft ($250B), Nvidia ($100B), AMD ($90B), Amazon AWS ($38B), & CoreWeave ($22B).2
Using some assumptions, we can generate a basic spending plan through contract completion (all figures in billions of dollars).3
| Year | MSFT | ORCL | AVGO | NVDA | AMD | AWS | CRWE | Annual Total |
|---|---|---|---|---|---|---|---|---|
| 2025 | $2 | $0 | $0 | $0 | $0 | $2 | $2 | $6 |
| 2026 | $3 | $0 | $2 | $2 | $1 | $3 | $3 | $14 |
| 2027 | $5 | $25 | $4 | $6 | $3 | $4 | $3 | $50 |
| 2028 | $10 | $60 | $10 | $12 | $8 | $5 | $7 | $112 |
| 2029 | $20 | $60 | $25 | $31 | $24 | $6 | $7 | $173 |
| 2030 | $60 | $60 | $64 | $49 | $54 | $8 | $0 | $295 |
| TOTAL | $250 | $300 | $350 | $100 | $90 | $38 | $22 | $1,150 |
Across these vendors, estimated annual compute spending grows from $6B in 2025 to $173B in 2029, reaching $295B in 2030. We built a constrained allocation model with the boundary conditions defined in the appendix below, but this is just an educated guess. The resulting growth rates are 124% (2027→2028), 54% (2028→2029), & 70% (2029→2030).
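For transparency, here's a minimal sketch that reproduces the annual totals & approximate growth rates from the table above. The per-vendor figures are our estimates, & the vendor contract totals also include 2031-2032 spending not shown in the table.

```python
# Minimal check of the spending schedule above (all figures in $B). The per-vendor
# values are modeled estimates, not disclosed terms; vendor contract totals also
# include 2031-2032 spending not shown here.

schedule = {
    2025: {"MSFT": 2,  "ORCL": 0,  "AVGO": 0,  "NVDA": 0,  "AMD": 0,  "AWS": 2, "CRWE": 2},
    2026: {"MSFT": 3,  "ORCL": 0,  "AVGO": 2,  "NVDA": 2,  "AMD": 1,  "AWS": 3, "CRWE": 3},
    2027: {"MSFT": 5,  "ORCL": 25, "AVGO": 4,  "NVDA": 6,  "AMD": 3,  "AWS": 4, "CRWE": 3},
    2028: {"MSFT": 10, "ORCL": 60, "AVGO": 10, "NVDA": 12, "AMD": 8,  "AWS": 5, "CRWE": 7},
    2029: {"MSFT": 20, "ORCL": 60, "AVGO": 25, "NVDA": 31, "AMD": 24, "AWS": 6, "CRWE": 7},
    2030: {"MSFT": 60, "ORCL": 60, "AVGO": 64, "NVDA": 49, "AMD": 54, "AWS": 8, "CRWE": 0},
}

totals = {year: sum(vendors.values()) for year, vendors in schedule.items()}
print(totals)  # {2025: 6, 2026: 14, 2027: 50, 2028: 112, 2029: 173, 2030: 295}

years = sorted(totals)
for prev, nxt in zip(years, years[1:]):
    print(f"{prev}→{nxt}: {totals[nxt] / totals[prev] - 1:.1%} growth")
```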
Coincidentally, OpenAI announced today that it expects to hit $100B in revenue in 2027, earlier than expected.4 This gives us another data point to help us understand the business’s trajectory.
OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029.5 If we assume all infrastructure spending flows through cost of goods sold (COGS), we can calculate the implied revenue needed to support these spending levels at OpenAI’s target margins.
| Year | Annual Spending (COGS) | Gross Margin | Implied Revenue |
|---|---|---|---|
| 2025 | $6B | 48% | $12B |
| 2026 | $14B | 48% | $27B |
| 2027 | $50B | 55% | $111B |
| 2028 | $112B | 62% | $295B |
| 2029 | $173B | 70% | $577B |
| 2030 | $295B | 70% | $983B |
The calculation assumes linear margin improvement from 48% (2025) to 70% (2029), then holds at 70% for 2030-2032. Revenue is calculated as: Spending / (1 - Gross Margin).
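In code, the calculation is a single division per year. The margins here are taken directly from the table above rather than re-derived.

```python
# Implied revenue, one division per year : revenue = COGS / (1 - gross margin).
# Spending & margin figures are taken directly from the table above.

plan = {  # year: (annual spending in $B, assumed gross margin)
    2025: (6, 0.48), 2026: (14, 0.48), 2027: (50, 0.55),
    2028: (112, 0.62), 2029: (173, 0.70), 2030: (295, 0.70),
}

for year, (cogs, margin) in plan.items():
    revenue = cogs / (1 - margin)
    print(f"{year}: ${cogs}B COGS at {margin:.0%} margin → ${revenue:,.0f}B implied revenue")
```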
These implied revenue figures suggest OpenAI would need to grow from ~$10B in 2024 revenue to $577B by 2029, roughly the size of Google’s revenue in the same year (assuming Google grows from $350B in 2024 at ~12% annually).
If nothing else, the estimated annual spending & commitments convey an absolutely enormous level of potential & ambition.
| Vendor | Total Value | Contract Duration | Notes |
|---|---|---|---|
| Broadcom | $350B | 7 years (2026-2032, estimated) | 10 GW deployment, financial terms not disclosed |
| Oracle | $300B | 6 years (2027-2032, estimated) | $60B annually for 5 years plus ramp-up |
| Microsoft | $250B | 7 years (2025-2031, estimated) | Based on cloud service contract structure |
| Nvidia | $100B | Not disclosed | Deployment begins H2 2026 |
| AMD | $90B | Not disclosed | Deployment begins H2 2026 |
| Amazon AWS | $38B | 7 years (2025-2031) | Explicitly stated in announcement |
| CoreWeave | $22.4B | ~5 years (2025-2029) | Based on contract expansions |
Average duration of disclosed deals: 5.7 years (rounded to 6 years for estimation purposes)
Broadcom : OpenAI commits to deploying 10 gigawatts of custom AI accelerators designed by OpenAI & developed in partnership with Broadcom. Estimated value of $350B based on industry benchmarks ($35B per gigawatt). Deployment begins H2 2026. We estimate a 7-year deployment timeline (2026-2032) consistent with the scale & complexity of custom chip manufacturing & data center buildout. The systems will be deployed across OpenAI’s facilities & partner data centers.6
Microsoft : OpenAI commits to purchasing an incremental $250 billion in Azure cloud services over an estimated 7 years (2025-2031).7
Nvidia : Nvidia invests up to $100 billion in OpenAI for non-voting shares. OpenAI commits to spending on Nvidia chips across at least 10 gigawatts of AI data centers. First deployment begins in H2 2026 using the Nvidia Vera Rubin platform.8
AMD : AMD provides OpenAI with warrants to purchase up to 160 million AMD shares (approximately 10% of the company) at one cent per share. In exchange, OpenAI commits to purchasing 6 gigawatts of AMD Instinct GPUs, representing $90 billion in cumulative hardware revenue potential. The first 1 gigawatt deployment starts in H2 2026.9
Oracle : OpenAI commits to paying Oracle $60 billion annually for five years (2027-2031) for cloud infrastructure, totaling $300 billion. The contract is part of Oracle’s $500 billion Stargate data center buildout. Larry Ellison stated in Oracle’s earnings call : “The capability we have is to build these huge AI clusters with technology that actually runs faster & more economically than our competitors.”10
Amazon AWS : OpenAI commits to $38 billion over seven years (2025-2031) for cloud infrastructure & compute capacity on Amazon Web Services. The agreement, signed November 3, 2025, provides immediate access to hundreds of thousands of Nvidia GB200 & GB300 GPUs running on Amazon EC2 UltraServers. All planned capacity is targeted to come online by the end of 2026, with room to expand through 2027 & beyond. Sam Altman stated : “Scaling frontier AI requires massive, reliable compute.”11 This is OpenAI’s first major partnership with AWS, adding to its multi-cloud infrastructure.
CoreWeave : $22.4 billion in committed spending for data center usage rights through 2029, consisting of $11.9B initial contract, $4B expansion, & $6.5B September 2025 expansion.12
The year-by-year breakdowns above are estimates based on publicly announced deal terms & deployment schedules. Here’s how we calculated them :
It’s hard to model the payments because some of the contracts are hardware spending (Nvidia, AMD, Broadcom) while others are cloud services (Microsoft Azure, Oracle Cloud, AWS), each with different payment structures & deployment timelines. Additionally, some contracts include chip design costs (like Broadcom’s custom AI accelerators), further complicating the spending distribution.
Contract Structures : The estimates reflect accelerating deployment starting after 2027, with 2025-2027 representing the ramp-up period & 2028-2030 showing peak deployment, with growth rates of 124%, 54%, & 70% respectively.
Oracle’s $300B contract : We assume a ramp-up period in 2027 ($25B) as infrastructure comes online, reaching the full $60B annual run rate in 2028-2031, then completing with $35B in 2032. This assumption reflects realistic deployment timelines : Oracle’s massive data center buildout requires initial site preparation & infrastructure scaling before reaching full capacity.
All other vendors follow deployment-based patterns starting from small initial commitments ($2B-$4B) & accelerating as large-scale infrastructure deployments come online. The spending curves reflect physical & financial realities : you can’t deploy 10 gigawatts of infrastructure overnight.
Microsoft ($250B total) : Based on incremental Azure services commitment announced in October 2025. Contract duration not disclosed. We estimated 7 years (2025-2031) consistent with AWS’s 7-year contract structure. Spending starts at $2B in 2025 & accelerates after 2027 : $10B (2028), $20B (2029), $60B (2030), with the remaining spend allocated to 2031 as large-scale deployments peak.7
Nvidia ($100B total) : Based on Nvidia’s commitment to invest up to $100 billion in OpenAI & OpenAI’s commitment to deploy at least 10 gigawatts of Nvidia systems, with first deployment beginning H2 2026 on the Nvidia Vera Rubin platform.8
AMD ($90B total) : Based on 6 gigawatt commitment & H2 2026 deployment start. AMD’s partnership announcement explicitly states “$90 billion in cumulative hardware revenue potential” from this agreement.9
Oracle ($300B total) : The most concrete, $60B annually for five years, as stated in multiple Oracle earnings calls & confirmed by CEO Safra Catz. We model this as a ramp-up period in 2027 ($25B) as infrastructure comes online, reaching full $60B annual rate in 2028-2031, then $35B in 2032 to reach the $300B total. This reflects Oracle’s Stargate data center buildout timeline & realistic deployment constraints.13
Amazon AWS ($38B total) : Based on announced 7-year agreement signed November 3, 2025. OpenAI commits to $38B over seven years for access to hundreds of thousands of Nvidia GB200 & GB300 GPUs on Amazon EC2 UltraServers. Deployment begins immediately with all capacity targeted for end of 2026.11 We estimated deployment spending with geometric growth : $2B in 2025 (partial year starting November), ramping through 2027-2030 ($4B → $6B → $10B → $11B), then completing with $2B in 2031.
CoreWeave ($22.4B total) : Based on reported $11.9B initial contract, $4B expansion in May 2025, plus $6.5B expansion in September 2025, bringing total contract value to $22.4B.14 Note : CoreWeave also provides compute capacity to Google Cloud, creating an interesting three-way dynamic where Google resells CoreWeave’s Nvidia-powered infrastructure.15
These estimates carry ±30-50% error margins. Actual spending depends on deployment pace, hardware costs, & contract amendments.
A critical complication in estimating OpenAI’s cost structure is determining how much of chip-maker deals like Broadcom represent design services versus manufactured hardware, & how each flows through the income statement.
The Broadcom Deal Structure :
OpenAI & Broadcom collaborated for 18 months designing custom AI accelerators optimized for inference. OpenAI designs the chips, Broadcom provides IP licensing & engineering services, & TSMC manufactures using 3nm process technology. The $350B estimated value represents deployment through 2029, but financial terms weren’t disclosed.
Two Different Accounting Treatments :
Phase 1 : Design & Development (R&D Expense). The custom chip design work & Broadcom’s engineering services are expensed as R&D when incurred.
Phase 2 : Manufacturing & Deployment (Capitalized → COGS). The manufactured chips & deployed systems are capitalized as assets & depreciated into COGS over their useful lives.
Why This Matters for Gross Margins :
The table showing implied revenue at OpenAI’s target margins assumes all infrastructure spending flows through COGS, a simplification that works reasonably well for a first-order estimate.
However, the true accounting is more complex : upfront design costs hit R&D immediately (worsening near-term operating margins), while manufactured chips depreciate over 3-5 years (smoothing COGS impact). Without disclosed contract terms splitting design services from hardware purchases, precise gross margin modeling remains challenging.
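Here's a toy sketch of the two treatments, using a hypothetical $10B of spend & an assumed 4-year useful life (within the 3-5 year range above). It isn't OpenAI's actual accounting, just an illustration of why the split matters.

```python
# Toy illustration, not OpenAI's actual accounting : how the same $10B of spend
# hits the P&L under the two treatments. The 4-year straight-line schedule is an
# assumption within the 3-5 year range mentioned above.

def cloud_services_cogs(spend_b: float) -> list[float]:
    """Cloud contracts : expensed as incurred, all in year one."""
    return [spend_b]

def capitalized_hardware_cogs(spend_b: float, useful_life_years: int = 4) -> list[float]:
    """Purchased chips : capitalized, then depreciated into COGS over the useful life."""
    return [spend_b / useful_life_years] * useful_life_years

print(cloud_services_cogs(10.0))         # [10.0] -> full gross-margin hit now
print(capitalized_hardware_cogs(10.0))   # [2.5, 2.5, 2.5, 2.5] -> smoothed COGS impact
```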
Comparison to Cloud Deals :
AWS ($38B/7 years) & Oracle ($60B/year) are cloud services, immediate COGS expenses with no capitalization benefit. The AWS deal alone represents ~$5.4B/year in direct COGS, making it particularly impactful for gross margins despite being a smaller absolute dollar commitment than hardware contracts.
Calculated from announced deals with Broadcom, Microsoft, Nvidia, AMD, Oracle, Amazon AWS & CoreWeave. CNBC, “A guide to the $1 trillion-worth of AI deals between OpenAI, Nvidia & others,” October 15, 2025. https://www.cnbc.com/2025/10/15/a-guide-to-1-trillion-worth-of-ai-deals-between-openai-nvidia.html ↩︎
Deal breakdowns : Broadcom ($350B estimated for 10 GW), Oracle ($300B contract), Microsoft ($250B Azure commitment), Nvidia ($100B commitment), AMD ($90B for 6 GW), Amazon AWS ($38B), CoreWeave ($22B). The American Prospect, “The AI Ouroboros,” October 15, 2025. https://prospect.org/power/2025-10-15-nvidia-openai-ai-oracle-chips/ ↩︎
See Appendix : Estimation Methodology section below for detailed assumptions & methodology. ↩︎
The Information, “OpenAI’s Revenue Could Reach $100 Billion in 2027, Altman Suggests,” November 3, 2025. https://www.theinformation.com/briefings/openais-revenue-reach-100-billion-2027-altman-suggests Sam Altman said on a podcast with Brad Gerstner that OpenAI’s revenue could reach $100B in 2027, earlier than the company’s previous 2028 projection. ↩︎
OpenAI projects a 48% gross profit margin in 2025, improving to 70% by 2029. The Information, “Investors Float Deal Valuing Anthropic at $100 Billion,” November 2025. https://www.theinformation.com/articles/investors-float-deal-valuing-anthropic-100-billion For comparison, Google Q3 2024 gross margin was 57.7% & Meta Q3 2024 was 81%. https://abc.xyz/assets/94/0e/637c7ab7438fab95911fdc9c2517/2024q3-alphabet-earnings-release.pdf https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-Third-Quarter-2024-Results/default.aspx ↩︎
OpenAI & Broadcom, “OpenAI & Broadcom announce strategic collaboration to deploy 10 gigawatts of OpenAI-designed AI accelerators,” October 13, 2025. https://openai.com/index/openai-and-broadcom-announce-strategic-collaboration/ Financial terms not disclosed; estimated value of $350B based on industry benchmarks of $35B per gigawatt. We estimate 7-year deployment (2026-2032) based on custom chip manufacturing timelines & data center buildout complexity. ↩︎
OpenAI, “The next chapter of the Microsoft–OpenAI partnership,” October 2025. https://openai.com/index/next-chapter-of-microsoft-openai-partnership/ ↩︎ ↩︎
NVIDIA Newsroom, “OpenAI & NVIDIA Announce Strategic Partnership to Deploy 10 Gigawatts of NVIDIA Systems,” October 2025. https://nvidianews.nvidia.com/news/openai-and-nvidia-announce-strategic-partnership-to-deploy-10gw-of-nvidia-systems ↩︎ ↩︎
AMD Press Release, “AMD & OpenAI Announce Strategic Partnership to Deploy 6 Gigawatts of AMD GPUs,” October 6, 2025. https://www.amd.com/en/newsroom/press-releases/2025-10-6-amd-and-openai-announce-strategic-partnership-to-d.html ↩︎ ↩︎
Qz, “Oracle’s massive AI power play,” September 2025. https://qz.com/oracle-earnings-ai-openai-cloud-power-larry-ellison Larry Ellison earnings call quote on technology & economic advantages. ↩︎
Amazon Web Services, “AWS announces new partnership to power OpenAI’s AI workloads,” November 3, 2025. https://www.aboutamazon.com/news/aws/aws-open-ai-workloads-compute-infrastructure OpenAI signs $38 billion deal with Amazon AWS over seven years for hundreds of thousands of Nvidia GB200 & GB300 GPUs. ↩︎ ↩︎
Bloomberg, “CoreWeave Expands OpenAI Deals to as Much as $22.4 Billion,” September 25, 2025. https://www.bloomberg.com/news/articles/2025-09-25/coreweave-expands-deals-with-openai-to-as-much-as-22-4-billion ↩︎
CNBC, “‘We’re all kind of in shock.’ Oracle’s revenue projections leave analysts slack-jawed,” September 9, 2025. https://www.cnbc.com/2025/09/09/were-all-kind-of-in-shock-oracle-projections-analysts-slackjawed.html Oracle CEO Safra Catz confirmed multiple large cloud contracts including $60B annual starting FY2028. ↩︎
CoreWeave expansions with OpenAI : $11.9B initial contract (March 2025), $4B expansion (May 2025), $6.5B expansion (September 2025), totaling $22.4B. Bloomberg, “CoreWeave Expands OpenAI Deals to as Much as $22.4 Billion,” September 2025. ↩︎
Reuters, “CoreWeave to offer compute capacity in Google’s new cloud deal with OpenAI,” June 2025. CoreWeave signed Google as customer in Q1 2025, creating three-way infrastructure arrangement. ↩︎
2025-10-31 08:00:00
I sleep better knowing my agents work through the night. Less work for me in the morning.
My podcast processor transcribes & analyzes conversations. I started on my laptop, needed a little database to collect podcast data & metadata, & booted up a DuckDB instance.
But then the data started to grow, & I wanted the podcast processor to run by itself. I changed two little letters, & the database moved to the cloud :
```python
import duckdb

# Before : local only
conn = duckdb.connect('podcasts.db')

# After : cloud-native
conn = duckdb.connect('md:podcasts.db')
```
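The rest of the pipeline is plain SQL over that same connection. Here's a hypothetical sketch of the kind of table the processor writes to; the schema is illustrative, not my actual one, & the md: connection assumes a MotherDuck token is configured.

```python
from datetime import date
import duckdb

conn = duckdb.connect('md:podcasts.db')  # assumes a MotherDuck token is configured

# Illustrative schema, not my actual one
conn.execute("""
    CREATE TABLE IF NOT EXISTS episodes (
        show       VARCHAR,
        title      VARCHAR,
        published  DATE,
        summary    VARCHAR
    )
""")

conn.execute(
    "INSERT INTO episodes VALUES (?, ?, ?, ?)",
    ["Example Show", "Example Episode", date(2025, 10, 31), "agent-written summary"],
)

print(conn.execute("SELECT show, count(*) FROM episodes GROUP BY show").fetchall())
```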
Now, in the small hours, 10 robots listen & summarize podcasts for me while I sleep.
As I collect more & more podcast information, my data has grown. I’m using a larger instance of MotherDuck.
Source : ClickBench
Aside from ease of use, there are real price-performance advantages. MotherDuck systems are two to four times faster than a Snowflake 3XL & from a tenth to a hundredth of the price.
Source : ClickBench
As the amount of data expands & I process more technology podcasts every day, I’m sure I’ll need a data lake. At that point, I can migrate to DuckLake.
Small data becomes big data faster than you know it.
Two letters changed everything. In this era, when those letters aren’t AI, it’s worth paying attention.
2025-10-30 08:00:00
Microsoft & Google both announced earnings yesterday, & the scale of AI adoption remains staggering. The infrastructure businesses are growing at accelerating growth rates that are the envy of businesses one-hundredth the size.
Capital expenditures were $34.9 billion, driven by growing demand for our Cloud and AI offerings. This quarter, roughly half of our spend was on short-lived assets, primarily GPUs and CPUs…. We will increase our total AI capacity by over 80% this year and roughly double our total data center footprint over the next 2 years, reflecting the demand signals we see. Just this quarter, we announced the world’s most powerful AI data center, Fairwater in Wisconsin, which will go online next year and scale to 2 gigawatts alone. - Microsoft
We now expect CapEx to be in the range of $91 billion to $93 billion in 2025, up from our previous estimate of $85 billion. - Google
Both companies are increasing their data center buildouts as a result of demand. We could see $100B run rate on data centers in the next 12 months for each of these companies, up 33% from their projections at the beginning of the year.
From Microsoft : our commercial RPO increased over 50% to nearly $400 billion with a weighted average duration of only 2 years.
Google Cloud’s backlog increased 46% sequentially and 82% year-over-year, reaching $155 billion at the end of the third quarter.
Remaining performance obligations, or RPO, called backlog by Google, represent commitments customers make to spend dollars on infrastructure. Both metrics are experiencing tremendous growth, which drives predictability in demand & ultimately confidence in capital expenditure investments, now totaling $555 billion across these two companies.
We increased the token throughput for GPT-4.1 and GPT-5, two of the most widely used models by over 30% per GPU.
Our Phi family of SLMs, which now have been downloaded over 60 million times, up 3x year-over-year.
Earlier this year, Microsoft announced that they were generating 90% more tokens per GPU. This increase appears to be concentrated in smaller or non-thinking models, given the statistics around GPT-4.1 & GPT-5.
We now have 900 million monthly active users of our AI features across our products. - Microsoft
The Gemini app now has over 650 million monthly active users, and queries increased by 3x from Q2. - Google
With OpenAI at close to a billion monthly active users, Microsoft & Google are not far behind. These figures are not deduplicated, so many users likely overlap. Nevertheless, the broader adoption of AI, especially by consumers, is incontrovertible.
GitHub is now home to over 180 million developers and the platform is growing at the fastest rate in its history, adding a developer every second. 80% of new developers on GitHub start with Copilot within the first week.
And more than 13 million developers have built with our generative models.
In my market sizing for developer tools, I used to estimate the total developer market to be around twenty-seven million developers. I now see that number approximates two hundred million, or about 3% of the global population, which is staggering.
Commercial bookings increased 112% and 111% in constant currency and were significantly ahead of expectations, driven by Azure commitments from OpenAI as well as continued growth in the number of $100 million-plus contracts for both Azure and M365.
Cloud has signed more billion-dollar deals in the first 9 months of 2025 than in the past 2 years combined.
Despite all the concerns about circular financings & over-exuberance in the ecosystem, the financial results are incredibly compelling.
| Company | Cash & Equivalents | Long-Term Debt | Net Cash Position | Annualized Capex | Capex as % of Cash |
|---|---|---|---|---|---|
| Microsoft | $102B | $35.4B | $66.6B | $140B | 137% |
| Google | $98.5B | $21.6B | $76.9B | $92B | 93% |
Both companies are spending at or above their cash positions on AI infrastructure. Microsoft is committing 137% of its cash reserves to annualized capex, while Google’s capex represents 93% of its cash position.
Despite healthy net cash positions ($66.6B for Microsoft, $76.9B for Google), these aggressive investment levels signal extraordinary confidence in AI demand & may require additional debt to finance the AI boom.
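For those who like to check the math, here's the arithmetic behind that table, computed from the reported figures.

```python
# The arithmetic behind the table above ($B), using the reported figures.

companies = {
    "Microsoft": {"cash": 102.0, "lt_debt": 35.4, "capex": 140.0},
    "Google":    {"cash": 98.5,  "lt_debt": 21.6, "capex": 92.0},
}

for name, c in companies.items():
    net_cash = c["cash"] - c["lt_debt"]
    capex_to_cash = c["capex"] / c["cash"]
    print(f"{name}: net cash ${net_cash:.1f}B, capex at {capex_to_cash:.0%} of cash")
```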
2025-10-27 08:00:00
As the world follows every whisper & rumor from AI, what has happened to public SaaS?
The answer is not much! All the fun is in the private markets.
Multiples haven’t moved outside of a narrow band since the post-Covid crash in 2022.
Even here, AI is the story : AI SaaS companies are valued primarily on revenue growth, while non-AI SaaS companies are valued more broadly, with efficiency metrics factored in.
The scarcity of hyper-growth companies in 2025 tells its own story. Public SaaS companies have matured. The median company in our dataset grows at 13.7% annually, with a median multiple of 5.5x.
Most companies cluster near the median, with a few outliers driving exceptional growth rates. The distribution shows a steep drop-off from the top performers & a cluster of shrinking businesses.
The private market contains dozens of unicorns, in both AI & classic SaaS, growing several times faster. At some point soon, the largest AI companies’ capital requirements will push them to IPO.
At that point, the data will look very different!