2025-11-07 02:34:39
Sam Altman says OpenAI will do “well more than $13 billion” in sales this year. Moreover, he suggested that the company might reach $100 billion in revenues by 2027. If one consulted my thesaurus for a word to describe this latter claim, one might end up with one of these:
Outlandish, ludicrous, farcical, nonsensical, cockamamie, balderdash, harebrained, risible, inane, asinine, patent lunacy, utter madness, sheer insanity.
But I don’t pretend to know it all, so let’s figure out whether that $100 billion claim is feasible. Today, for members of Exponential View, we work backwards to understand how realistic Sam’s expectations are AND where the money could come from.
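Before diving in, a quick sanity check on the arithmetic: what growth rate does the claim imply? This is a back-of-envelope sketch using the two figures above; the two-year horizon (2025 base to 2027 target) is our assumption.

```python
# Implied compound annual growth rate (CAGR) to get from ~$13B (2025)
# to $100B by 2027. Figures from the newsletter; the two-year horizon
# is an assumption.
base_2025 = 13e9      # "well more than $13 billion" this year
target_2027 = 100e9   # the claimed 2027 revenue
years = 2

cagr = (target_2027 / base_2025) ** (1 / years) - 1
print(f"Required CAGR: {cagr:.0%}")  # roughly 177% per year, two years running
```

Nearly tripling revenue each year for two consecutive years is the bar the claim sets, which frames the feasibility question below.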
2025-11-03 20:40:12
Hi all,
Here’s your Monday round-up of data driving conversations this week in less than 250 words.
Let’s go!
White-collar countries ↑ AI usage is clustered in service-based economies—with UAE (59.4%) and Singapore (58.6%) in the lead.
AI in business ↑ 37% of corporations are already using AI in production, according to Goldman Sachs.
Data center uncertainty ↑ BCG’s 2030 US data-center electricity forecast is over 5x the lowest estimate.
Asymmetric markets ↓ AI is chipping away at markets where sellers know more than buyers (healthcare, mortgages, services) by helping people read contracts, compare prices and push back when treated unfairly. 25% of US consumer spending lies in these markets.
2025-11-02 09:37:26
Good morning from London!
In today’s briefing:
75% of big US firms are already making money from AI.
OpenAI rewires itself for a trillion-dollar infrastructure push.
Plus: AI’s self-awareness, a floating solar farm, semi-autonomous cities & Beijing 2035.
Let’s go!
⚡️ Today’s edition is brought to you by Lindy.
Your next hire won’t be human – it will be AI. Lindy lets businesses create AI agents with just a prompt. These AI employees handle sales, support, and ops 24/7, so you can focus on growth.
Seventy-five percent of large US firms already report a return on investment from AI, according to Wharton Business School’s latest update from its three-year enterprise study. Roughly two-thirds of firms report spending more than $5 million annually on generative AI budgets. More than one in ten are spending $20 million.
Extrapolated across some 20,000 comparable American firms, this suggests a conservative floor of $66 billion in annual genAI spending. This is slightly above our estimates for revenues in the global generative AI ecosystem, excluding China. Therefore, I suspect that the Wharton respondents included internal budget allocations in their estimates, which we typically do not. The study used a solid cross-sectional survey, though it remained dependent on self-reported responses.1 Either way, 88% expect to increase spending.
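The $66 billion floor follows directly from the survey shares above. A minimal sketch of that extrapolation, treating $5 million as each qualifying firm's floor (our simplifying assumption):

```python
# Back-of-envelope check on the implied genAI spend floor described above.
# The 20,000-firm universe and the two-thirds share come from the text;
# using $5M as each qualifying firm's minimum spend is an assumption.
firms = 20_000
share_spending_5m_plus = 2 / 3
floor_per_firm = 5e6

conservative_floor = firms * share_spending_5m_plus * floor_per_firm
print(f"${conservative_floor / 1e9:.0f}B")  # ≈ $67B, in line with the ~$66B cited
```

It is conservative because firms above $5 million (including the one-in-ten spending $20 million or more) are counted only at the floor.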
It’s remarkable that most firms are seeing positive returns within three years of ChatGPT’s debut. Usage is deepening rapidly – 82% of respondents engage with generative AI weekly, 46% daily – but the pattern is uneven. Middle managers lag their executives, and hiring expectations vary widely across levels. Whether this early surge can sustain itself remains to be seen.
This study triangulates well with other indicators. JPMorgan reports that 150,000 employees use its LLM tools daily. Results from the hyperscalers show that AI demand continues to drive growth in their cloud businesses. (Microsoft Azure, for example, grew 40% year-over-year to nearly $85 billion in revenue, with perhaps one-fifth attributable to AI services.)
See also:
Goldman Sachs’ bankers reckon that 37% of their clients are at production scale, with uptake expected to hit 50% this year. (This is lower than the Wharton study, but still an impressive proportion considering the immaturity of this market.)
A new preprint finds that even frontier-level agents can complete only 2.5% of real remote work projects at human-acceptable quality. AI is an augmenting force for now.
OpenAI is now a for-profit, public benefit corporation, which allows it to raise capital. Lucky for OpenAI – they need that investment. The firm already has commitments to about 30 GW of compute (about $1.4 trillion in infrastructure) and ultimately wants to add 1 GW per week. That is at a scale of national infrastructure; in fact, the commitments are nearly double the value of the US Interstate Highway System.
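The two headline figures imply a striking unit cost. A rough division, using only the numbers above (the real mix of chips, buildings and power varies by site):

```python
# Implied infrastructure cost per gigawatt of OpenAI's compute
# commitments, from the figures above (30 GW, ~$1.4 trillion).
committed_gw = 30
committed_usd = 1.4e12

cost_per_gw = committed_usd / committed_gw
print(f"${cost_per_gw / 1e9:.0f}B per GW")  # ≈ $47B of infrastructure per gigawatt
```

At that rate, the stated ambition of adding 1 GW per week would mean committing tens of billions of dollars of new infrastructure every seven days.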
With all this compute, OpenAI needs to change tactics. Ambitiously, they want to build a platform. In Sam Altman’s words:
Traditionally, that’s looked like an AI super-assistant inside ChatGPT. But we’re now evolving into a platform that others can build on top of. The pieces that need to fit into the broader world will be built by others.
Where exactly this platform is located is a little ambiguous. OpenAI’s strongest vertical is ChatGPT, with more than 800 million weekly users (Gemini is second with 650 million monthly users). Now, they want people to build on their platform, likely via APIs. OpenAI is currently second to Anthropic on API market share, but they’re right to move into this zone. The closer a company sits to the bottom of the stack, the more control it has, especially as compute and energy become the real bottlenecks. Few players have the clout to commit $1.4 trillion to infrastructure spending, and that bill will only rise.
Anthropic researchers ran an unusual experiment to test whether their Claude models could detect what was happening inside their own “minds.” In about 20% of cases, the models noticed when scientists added specific patterns of neural activity, like slipping a new thought into their “head” (“Don’t think of an elephant!”). When researchers inserted a pattern linked to ALL CAPS, Claude reported a sense of loudness or shouting before its text changed.
This suggests that models can, in some sense, notice what is happening inside themselves, and that ability appears to scale with capability. In principle, you could ask a model why it made a decision and receive a meaningful, if partial, account of its decision process. Today, a model’s values are typically encoded in a “constitution” that instructs it how to balance helpfulness, honesty and harm avoidance. Yet when you stress-test these specifications, you often find that their principles collide. A model told to “assume good intentions” while also being instructed to “avoid harm” cannot always satisfy both. When values clash, models do not reflect on the tension. They simply pick a side. If introspection continues to scale with capability, future systems might be able to recognize these conflicts and reason about them, rather than silently picking a side.
In 1933, John Maynard Keynes warned that free trade no longer guaranteed peace. Britain’s embrace of globalization, he said, reflected power rather than virtue and had turned life into “a parody of an accountant’s nightmare.” That nightmare replayed in the late 20th century. The great wager that open markets and global integration would spread democracy instead exported jobs and imported fragility. The logic of comparative advantage worked too well: it rewarded efficiency and scale but concentrated production. This hollowed out industrial bases that once anchored resilience.
The response has been to bring the state back in as a player, not a referee. Industrial policy, what I call catalytic government, is now a tool of strategy. The question is no longer whether governments should intervene, but how far. Each subsidy or tariff buys security at the cost of dynamism. Unlike Keynes, today’s pragmatists do not seek to abandon globalization, only to discipline it, to treat openness as an instrument of national power rather than an article of faith.
A new Capabilities Index combines benchmarks to track model performance over time and help avoid benchmark obsolescence.
The obituary for pretraining was written too soon. This is a good rundown of why compute-intensive scaling may return.
Thirty-eight million farmers in India got a month’s head start using AI weather forecasting for an unusually chaotic monsoon season.
A 1.8 MW vertical floating PV plant is now online in Germany, floating in the sun.
Is AI erotica a private decision or a social one? Leah Libresco Sargeant argues that it will shape cultural norms around sex and intimacy and affect even non-users. See also, Character.ai has blocked under-18s from chatting with its chatbots.
The Economist argues that AI gives consumers expert-level leverage by expanding access to information and making markets more transparent.
Beijing is prioritizing tech self-reliance and manufacturing dominance toward “socialist modernization” by 2035.
Semi-autonomous city-building is moving from the fringes into realization, but these projects’ success will depend less on the ability to build and more on the ability to govern. Good survey here.
Economists have developed a cheap and reliable way to recreate how people thought about the economy across decades, using large language models to simulate historical survey responses.
Today’s edition is supported by Lindy.
If ChatGPT could actually do the work, not just talk about it, you’d have Lindy.
Just describe what you need in plain English. Lindy builds the agent and gets it done.
→ “Create a booking platform for my business”
→ “Handle inbound leads and follow-ups”
→ “Send weekly performance recaps to my team.”
Save hours. Automate tasks. Scale your business.
It is more robust than the weak “MIT” Nanda study earlier this year. I invite the authors of that study to challenge my assertion.
2025-11-01 00:52:35
In today’s live session I asked whether the cloud and chip surge is exuberance or something deeper. The answer is that we’re seeing a profound, structural shift: the economy is moving into a computational fabric alongside the physical elements of the real economy.
I’ll publish more on this topic this weekend.
Happy Halloween!
Azeem
2025-10-30 23:26:42
A month ago, we released our framework for assessing whether AI is a bubble. The framework uses five key gauges which measure various industry stressors and whether they are in a safe, cautious or danger zone. These zones have been back‑tested against several previous boom‑and‑bust cycles. As a reminder, we track:
Economic strain (capex as a share of GDP)
Industry strain (investment relative to revenue)
Revenue momentum (doubling time in years)
Valuation heat (Nasdaq‑100 P/E ratio)
Funding quality (strength of funding sources)
The framework has circulated through boardrooms, investment memos, and policy circles – and today, we’re taking it a step further.
We are launching v1 of a live dashboard, updated in real time as new data comes in.
The economic strain gauge measures how much of the US economy is being consumed by AI infrastructure spend. We look at AI‑related capital expenditure in the US as a share of US GDP. This gauge is green if capex/GDP is below 1%; amber at 1–2%; and red once it crosses 2%. Historically, the three American railroad busts of the 19th century all exceeded 3%. The ratio was roughly 1% during the late 1990s telecoms expansion and a little higher during the dotcom bubble.
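The zoning above is simple enough to state as code. A minimal sketch of the economic-strain gauge's thresholds as described (the function name and structure are ours, not the dashboard's internals):

```python
# Classify AI capex as a share of US GDP into the gauge's zones,
# using the thresholds stated above: green below 1%, amber at 1-2%,
# red once it crosses 2%.
def economic_strain_zone(capex_share_of_gdp: float) -> str:
    """Map capex/GDP (as a fraction, e.g. 0.015 for 1.5%) to a zone."""
    if capex_share_of_gdp < 0.01:
        return "green"   # safe
    elif capex_share_of_gdp <= 0.02:
        return "amber"   # cautious
    return "red"         # danger

print(economic_strain_zone(0.01))   # late-1990s telecoms level -> amber
print(economic_strain_zone(0.03))   # 19th-century railroad busts -> red
```

By these thresholds, the historical railroad busts (over 3% of GDP) sat deep in the red zone, while the late-1990s telecoms expansion hovered at the green/amber boundary.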
Since we last updated the dashboard, economic strain has increased but remains in safe territory. Google, Microsoft and Meta increased their collective capex by 11% compared to the previous quarter. Hyperscalers’ spending on AI infrastructure shows no sign of slowing.
Revenue is one of the key metrics we track to judge whether AI is a boom or a bubble. It feeds into two of our five gauges: revenue momentum (revenue doubling time) and industry strain, which measures whether revenue is keeping pace with investment. Investment usually comes before revenue. It’s a sign of optimism. But that optimism must be grounded in results: real customers spending real money.
This was one of the most challenging pieces of analysis to assemble, as reliable revenue data in the generative AI sector remains scarce and fragmented. Most companies disclose little detail, and what does exist is often inflated, duplicated or buried within broader cloud and software lines. Our model tackles this by tracing only de‑duplicated revenue: money actually changing hands for generative AI products and services. That means triangulating filings, disclosures and secondary datasets to isolate the signal. The result is a conservative but more realistic picture of the sector’s underlying economics. The simplified Sankey diagram below shows how we think about those flows.
Consumers and businesses pay for generative AI services, including chatbots, productivity tools such as Fyxer or Granola and direct API access.
Third‑party apps may rely on models from Anthropic, Google and others, or host their own.
Big tech firms, particularly Google and Meta, deploy generative AI internally to improve ad performance and productivity, blending proprietary and third‑party systems.
In this simplified public version, we group hyperscalers and neoclouds together and collapse smaller cost categories into “Other.” Flow sizes here are illustrative, but our full model tracks them precisely. (Get in touch if you want institutional access to our revenue data and modeling.)
Back in September, we estimated that revenue covered about one‑sixth of the proposed industry capex. Our historical modeling put this in deep amber territory. We have now updated our models with an improved methodology and more recent data, and the results have changed the look of our dashboard.
It turns out industry strain is the first gauge to cross into red. Remember, zero or one reds indicate a boom. Two reds are cautionary. Three or more reds are imminent trouble and definite bubble territory.
The change in this indicator reflects our improved methodology. We now measure capex each quarter as a look‑back on the previous 12 months’ capex commitments, and revenue on the same basis. In September, we relied on our forecast for 2025 generative‑AI revenue, which included estimates through year‑end. The revised approach allows for more real‑time updates each quarter and helps us smooth short‑term volatility in revenue estimates.
We believe this indicator is improving, and recent events point in that direction. AI startups report rapid ARR growth, while hyperscalers attribute much of their recent gains to AI; Microsoft’s Azure revenue, for instance, rose 40% year over year.
Our estimates of generative‑AI revenue now support quarterly (and even more fine‑grained) updates. The chart shows how trailing 12‑month revenue has grown over the past year. Our forecast for full‑year 2025 is $58 to $63 billion, likely near the higher end.
Revenue momentum estimates revenue doubling time in years. As we have said many times before, real revenue from real customers is what ultimately validates a technology. In our initial update in September, we showed revenue doubling every year. The new data now shows it doubling every 0.8 years, a further improvement. We describe it as “safe but worsening,” so let’s unpack the “worsening” part.
In our gauge, “worsening” simply means the doubling time is lengthening. In other words, it now takes longer for these revenues to double. As the sector expands and matures, a gradual increase in doubling time is expected. However, a rapid slowdown could signal emerging risk if growth cools before the market reaches maturity. This gauge works best in tandem with the industry‑strain indicator: high strain can be offset by exceptionally fast revenue doubling (as is the case now), and conversely, a sharp slowdown can push even moderate strain into red territory.
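The doubling-time arithmetic behind this gauge can be sketched in a few lines. The input figures below are illustrative, not the dashboard's actual data:

```python
import math

# Doubling time implied by the growth in trailing-12-month revenue
# over an observation window. Illustrative sketch of the revenue-
# momentum metric; the revenue figures here are made up.
def doubling_time_years(revenue_then: float, revenue_now: float,
                        span_years: float = 1.0) -> float:
    """Years for revenue to double at the observed growth rate."""
    growth = revenue_now / revenue_then
    return span_years * math.log(2) / math.log(growth)

# e.g. revenue rising ~2.4x in a year implies doubling every ~0.8 years
print(round(doubling_time_years(25e9, 60e9), 2))  # 0.79
```

A larger return value means slower momentum, which is why a lengthening doubling time reads as “worsening” even while the absolute growth remains strong.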
Valuation heat measures how far investor optimism is running ahead of earnings reality. It captures when price‑to‑earnings multiples stretch beyond underlying profits. Extended multiples detached from earnings power are classic bubble signatures, while elevated but still anchored multiples are consistent with an installation phase of investment. This gauge has slightly worsened, rising from 32 to 35, but remains far below the dotcom peak of 72. The market is running hot but not yet irrational.
Funding quality has also slightly worsened. On our qualitative metric, it rose from 1.1 to 1.4, reflecting several events that raise questions about the stability of financing. These include Oracle’s $38 billion debt deal for two data centers, the subsequent spike in the cost to insure Oracle’s debt and Nvidia’s support for xAI’s $20 billion chip‑linked capital raise. Collectively, these moves suggest funding conditions are becoming more complex and carry slightly higher risk, even as underlying fundamentals, like cash flow coverage, remain broadly stable.
Over the coming weeks and months, we’ll keep tracking the gauges and refining the model. In version 2, we plan to add several sub‑indicators to track AI startup valuations, circularity and GPU depreciation. We want the dashboard to be useful day‑to‑day for sense‑making, so we are internally testing a news feed that tracks changes in the gauges alongside the latest market events. We’ll roll that out as soon as it’s ready.
Tell us what would make the dashboard most useful to you.
If you are interested in institutional access to the modeling and data, get in touch.
2025-10-28 00:06:52
Hi all,
Here’s your Monday round-up of data driving conversations this week in less than 250 words.
Let’s go!
AI industrialization ↑ Anthropic has committed to 1 million Google TPUs to bring over 1GW of compute capacity online in 2026. Competition runs on gigawatts.
Parallel cloud emerging ↑ Neoclouds (AI and GPU-first) grew 205% YoY, on pace for $23 billion in 2025.
mRNA & cancer survival ↑ Getting an mRNA vaccine before immunotherapy boosted the three-year survival rate by 40-60% for lung cancer patients compared to no vaccine.