Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

🔮 The demand for infinite compute

2025-11-01 00:52:35

In today’s live session, I asked whether the cloud and chip surge is exuberance or something deeper. The answer is that we’re seeing a profound, structural shift: the economy is moving into a computational fabric alongside the physical elements of the real economy.

I’ll publish more on this topic this weekend.

Happy Halloween!

Azeem

Subscribe now

📈 The AI dashboard

2025-10-30 23:26:42

OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites | OpenAI
OpenAI’s Stargate data centers

A month ago, we released our framework for assessing whether AI is a bubble. The framework uses five key gauges which measure various industry stressors and whether they are in a safe, cautious or danger zone. These zones have been back‑tested against several previous boom‑and‑bust cycles. As a reminder, we track:

  • Economic strain (capex as a share of GDP)

  • Industry strain (investment relative to revenue)

  • Revenue momentum (doubling time in years)

  • Valuation heat (Nasdaq‑100 P/E ratio)

  • Funding quality (strength of funding sources)

The framework has circulated through boardrooms, investment memos, and policy circles – and today, we’re taking it a step further.

We are launching v1 of a live dashboard, updated in real time as new data comes in. Paying members get first access for the initial 24 hours and can access it below; after that, we’ll release it to the general public.

In today’s post, we’ll introduce paying members to the dashboard and share an update on the changes we have observed since our September essay.

Subscribe now

A month later, what’s different?

Economic strain

The economic strain gauge measures how much of the US economy is being consumed by AI infrastructure spend. We look at AI‑related capital expenditure in the US as a share of US GDP. This gauge is green if capex/GDP is below 1%; amber at 1–2%; and red once it crosses 2%. Historically, the three American railroad busts of the 19th century all exceeded 3%. The ratio was roughly 1% during the late 1990s telecoms expansion and a little higher during the dotcom bubble.
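As a minimal sketch, the zone logic for this gauge is a simple threshold check. The function name and the dollar figures below are illustrative placeholders, not the dashboard’s actual code or data:

```python
def economic_strain_zone(ai_capex_usd: float, us_gdp_usd: float) -> str:
    """Classify AI capex as a share of US GDP into the gauge's zones."""
    ratio = ai_capex_usd / us_gdp_usd
    if ratio < 0.01:        # below 1% of GDP: safe
        return "green"
    if ratio <= 0.02:       # 1-2% of GDP: cautious
        return "amber"
    return "red"            # above 2% of GDP: danger

# Illustrative figures only: ~$250bn of AI-related capex against ~$29tn of GDP.
print(economic_strain_zone(250e9, 29e12))  # -> "green" (~0.9% of GDP)
```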

Since we last updated the dashboard, economic strain has increased but remains in safe territory. Google, Microsoft and Meta increased their collective capex by 11% compared to the previous quarter. Hyperscalers’ spending on AI infrastructure shows no sign of slowing.

Subscribe now

Industry strain

Revenue is one of the key metrics we track to judge whether AI is a boom or a bubble. It feeds into two of our five gauges: revenue momentum (revenue doubling time) and industry strain, which measures whether revenue is keeping pace with investment. Investment usually comes before revenue. It’s a sign of optimism. But that optimism must be grounded in results: real customers spending real money.

This was one of the most challenging pieces of analysis to assemble, as reliable revenue data in the generative‑AI sector remains scarce and fragmented. Most companies disclose little detail, and what does exist is often inflated, duplicated or buried within broader cloud and software lines. Our model tackles this by tracing only de‑duplicated revenue: money actually changing hands for generative‑AI products and services. That means triangulating filings, disclosures and secondary datasets to isolate the signal. The result is a conservative but more realistic picture of the sector’s underlying economics. The simplified Sankey diagram below shows how we think about those flows.

  • Consumers and businesses pay for generative‑AI services, including chatbots, productivity tools such as Fyxer or Granola, and direct API access.

  • Third‑party apps may rely on models from Anthropic, Google and others, or host their own.

  • Big tech firms, particularly Google and Meta, deploy generative AI internally to improve ad performance and productivity, blending proprietary and third‑party systems.

In this simplified public version, we group hyperscalers and neoclouds together and collapse smaller cost categories into “Other.” Flow sizes here are illustrative, but our full model tracks them precisely. (Get in touch if you want institutional access to our revenue data and modeling.)

Back in September, we estimated that revenue covered about one‑sixth of the proposed industry capex. Our historical modeling put this in deep amber territory. We have now updated our models with an improved methodology and more recent data, and the results have changed the look of our dashboard.

It turns out industry strain is the first gauge to cross into red. Remember, zero or one reds indicate a boom. Two reds are cautionary. Three or more reds are imminent trouble and definite bubble territory.

The change in this indicator reflects our improved methodology. We now measure capex each quarter as a look‑back on the previous 12 months’ capex commitments, and revenue on the same basis. In September, we relied on our forecast for 2025 generative‑AI revenue, which included estimates through year‑end. The revised approach allows for more real‑time updates each quarter and helps us smooth short‑term volatility in revenue estimates.
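A rough sketch of that look-back calculation, assuming hypothetical quarterly series (the function name and figures are illustrative, not our actual capex or revenue data):

```python
from typing import Sequence

def industry_strain(quarterly_capex: Sequence[float],
                    quarterly_revenue: Sequence[float]) -> float:
    """Trailing-12-month capex commitments divided by trailing-12-month
    de-duplicated generative-AI revenue, recomputed each quarter."""
    ttm_capex = sum(quarterly_capex[-4:])
    ttm_revenue = sum(quarterly_revenue[-4:])
    return ttm_capex / ttm_revenue

# Illustrative: revenue covering roughly one-sixth of capex gives a ratio near 6.
print(round(industry_strain([80, 90, 100, 110], [10, 13, 17, 22]), 1))  # -> 6.1
```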

We believe this indicator is improving, and recent events point in that direction. AI startups report rapid ARR growth, while hyperscalers attribute much of their recent gains to AI; Microsoft’s Azure revenue, for instance, rose 40% year over year.

Link to the dashboard

Our estimates of generative‑AI revenue now support quarterly (and even more fine‑grained) updates. The chart shows how trailing 12‑month revenue has grown over the past year. Our forecast for full‑year 2025 is $58 to $63 billion, likely near the higher end.

Revenue momentum

Revenue momentum estimates revenue doubling time in years. As we have said many times before, real revenue from real customers is what ultimately validates a technology. In our initial update in September, we showed revenue doubling every year. The new data now shows it doubling every 0.8 years, a further improvement. We describe it as “safe but worsening,” so let’s unpack the “worsening” part.

In our gauge, “worsening” simply means the doubling time is lengthening. In other words, it now takes longer for these revenues to double. As the sector expands and matures, a gradual increase in doubling time is expected. However, a rapid slowdown could signal emerging risk if growth cools before the market reaches maturity. This gauge works best in tandem with the industry‑strain indicator: high strain can be offset by exceptionally fast revenue doubling (as is the case now), and conversely, a sharp slowdown can push even moderate strain into red territory.
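For readers who want the arithmetic, a doubling time can be backed out of two trailing‑12‑month revenue readings by assuming constant exponential growth between them. The revenue figures below are illustrative only, not our estimates:

```python
import math

def doubling_time_years(rev_start: float, rev_end: float, years_elapsed: float) -> float:
    """Implied doubling time given growth from rev_start to rev_end
    over years_elapsed, assuming constant exponential growth."""
    return years_elapsed * math.log(2) / math.log(rev_end / rev_start)

# Illustrative: trailing revenue growing ~2.4x over one year
# implies a doubling time of roughly 0.8 years.
print(round(doubling_time_years(25e9, 60e9, 1.0), 2))  # -> 0.79
```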

Link to the dashboard

Valuation heat

Valuation heat measures how far investor optimism is running ahead of earnings reality. It captures when price‑to‑earnings multiples stretch well beyond what underlying profits can justify. Extended multiples detached from earnings power are classic bubble signatures, while elevated but still anchored multiples are consistent with an installation phase of investment. This gauge has slightly worsened, rising from 32 to 35, but remains far below the dotcom peak of 72. The market is running hot but not yet irrational.

Link to the dashboard

Funding quality

Funding quality has also slightly worsened. On our qualitative metric, it rose from 1.1 to 1.4, reflecting several events that raise questions about the stability of financing. These include Oracle’s $38 billion debt deal for two data centers, the subsequent spike in the cost to insure Oracle’s debt and Nvidia’s support for xAI’s $20 billion chip‑linked capital raise. Collectively, these moves suggest funding conditions are becoming more complex and carry slightly higher risk, even as underlying fundamentals, like cash flow coverage, remain broadly stable.

Link to the dashboard

What’s next

Over the coming weeks and months, we’ll keep tracking the gauges and refining the model. In version 2, we plan to add several sub‑indicators to track AI startup valuations, circularity and GPU depreciation. We want the dashboard to be useful day‑to‑day for sense‑making, so we are internally testing a news feed that tracks changes in the gauges alongside the latest market events. We’ll roll that out as soon as it’s ready.

Tell us what would make the dashboard most useful to you.

If you are interested in institutional access to the modeling and data, get in touch.

📈 Data to start your week

2025-10-28 00:06:52

Hi all,

Here’s your Monday round-up of the data driving this week’s conversations, in less than 250 words.

Let’s go!


  1. AI industrialization. Anthropic has committed to 1 million Google TPUs to bring over 1 GW of compute capacity online in 2026. Competition runs on gigawatts.

  2. Parallel cloud emerging ↑. Neoclouds (AI- and GPU-first) grew 205% YoY, on pace for $23 billion in 2025.

  3. mRNA & cancer survival. Getting an mRNA vaccine before immunotherapy boosted the three-year survival rate by 40-60% for lung cancer patients, compared with no vaccine.

Read more

🔮 Exponential View #547: When AI invests $10k. Atlas unleashed. Anticipation, cheap robots & bionic eyes++

2025-10-26 08:30:26

Atlas holding the celestial sphere, “Farnese Atlas”, 2nd c. CE Roman marble sculpture

Good morning from London!

In today’s briefing:

  • Why simpler AI may be the next frontier in AI capability

  • The rise of anticipation markets and new ways to price the future

  • One AI model invests better than all the rest

  • Bionic eyes are here…

Let’s go!

Subscribe now


Simpler, smarter AI

Today’s frontier models have gorged on the internet’s noise. They are brilliant mimics with blurry reasoning, as OpenAI founding member Andrej Karpathy argues. The problem is that true reliability cannot come from these feats of memory; it has to come from deeper understanding, and future AI systems will need it.

Andrej proposes an austere remedy: reduce memorisation, preserve the reasoning machinery and pull in facts only when needed. He pictures a “cognitive core” at the 1-billion-parameter scale that plans, decomposes problems and queries knowledge. It is a librarian, not a library.

Philosopher Toby Ord points out that the very approach that has given us the surprising capabilities of “reasoning models” like o1 is reaching its own limits. These systems extract gains from post-training reinforcement learning (refining answers through trial and error) and extended inference-time reasoning. Compute is paid per query, not once during pre-training. Ord estimates that this burns 1,000 to 1,000,000 times more compute per insight than traditional training, and the returns are shrinking: each new milestone costs more to reach. Even OpenAI’s o1 reasoning model improves only when it is given more RL cycles and longer “thinking time,” which raises the cost per task.

How should we make sense of this? Technological progress advances through overlapping S-curves and rarely follows a smooth exponential. Both Ord and Karpathy point in a similar direction: less brute memorisation, more search and recursion. Less unlimited inference, more careful allocation of reasoning budgets. A move away from monolithic models and toward tool-using, modular agents.

As the cost of using AI systems (rather than training them) becomes dominant, pricing will shift to usage-based models. Firms that deploy AI with precision will be rewarded. As a result, we could see a broad, rapid seep of AI into many corners of the economy, rather than a sudden leap in GDP.


The weight of the Web

OpenAI has entered the browser arena with its own browser, Atlas. We have argued a few times in the past that…

[t]he company that owns the browser owns the user session, valuable behavioral data and the ability to steer the revenue funnel. Whoever captures the front door to the web gets to watch, and eventually automate, everything we do online.

Read more

🔮 Inside the US-China decoupling: AI, rare earths, Taiwan, trade

2025-10-23 21:28:55

Watch on YouTube
Listen on Spotify or Apple Podcasts

I got together with China expert Jordan Schneider to understand his view on the new phase of the US-China competition. Both countries are using trade policy, export controls and industrial strategy to shift the balance of global power. Just earlier this month, China rolled out its toughest-ever curbs on rare earths and related tech. Yet the US and Chinese economies remain tightly bound. Jordan and I sit down to make sense of this.

Jump to the best parts

(01:34) The US and China’s decoupling explained

(08:51) Understanding the Oct 9 “rare earth rules”

(14:23) Is decoupling a strategy to avoid weaponisation?

(26:03) AI incumbents aren’t entrenched – yet

(43:14) Imagining an improved US-China relationship

Subscribe now

Conversation notes:

To accompany this week’s discussion on the US-China decoupling, we’ve pulled together a short set of research notes. These figures and developments sketch how trade, technology and energy are changing and where to watch next.

  • Rare earths. China controls about 70% of mining, about 90% of refining and separation, and about 93% of high-strength magnet production; export licences required for products with ≥0.1% rare earth content from 1 December; 12 of 17 metals now restricted; licence decisions can take up to 45 business days; likely adds about $500-1,500 to EV prices in the short run.

  • Chips. Nvidia moved to a one-year GPU cadence; US rules tightened in 2025 then loosened with revenue sharing on China sales; China responded by steering buyers to domestic silicon; SMIC produced 7 nm via DUV multi-patterning at low tens of thousands of wafers per month; Huawei Ascend adoption is growing.

  • AI models. Chinese developers lead on open-weight releases; the gap between top closed and top open models narrowed to low single digits on key benchmarks in 2025; many of the most used open models now come from China. Anecdotally, it seems that even some of the leading US firms choose Chinese open-source models over others. Airbnb’s Brian Chesky just shared that his company ‘relies heavily’ on Alibaba’s Qwen models.

  • Manufacturing. EV strategy described as scale up, flood in, starve out; China exports about 7 million vehicles a year and reached about 30% of the UK market within two years; robotics deployments reached a majority of global installs, yet many precision components still come from Japan and Europe.

  • Energy. China maintains large reserve margins and is adding massive solar, storage, and data-centre power; data-centre demand could reach 400 to 600 TWh by 2030.

Controls and workarounds

Export controls and tool restrictions are most effective at the frontier of technology. Coordinated measures across the US, Japan, and the Netherlands have delayed China’s access to the most advanced compute and semiconductor-manufacturing equipment.

Below that frontier, the effects weaken. China has adapted by relying on “good-enough” chips, improving packaging and integration, developing domestic design tools, and re-routing supplies through friendly intermediaries. These measures sustain progress in deployment, even without cutting-edge inputs.

Both systems are now adjusting in parallel. The US and China are investing heavily in local fabrication and packaging capacity, tightening investment and capital rules, and screening outbound flows. The outcome is not isolation but duplication: two partly mirrored ecosystems built for resilience.

Perspectives worth reading

  • Scott Bessent interview. Allied response to rare earths, targeted reshoring, price floors, and strategic reserves; vigilance with time-bound goals. (Read here)

  • Kaiser Kuo, The Great Reckoning. Performance legitimacy; China as a principal architect of modernity; the West should measure outcomes and learn without denial. (Read here)

  • ChinaTalk analysis on synthetic diamonds. Why lab-grown diamond controls matter for wafer slicing, optics, and thermal management; leverage is real but not absolute due to alternative producers. (Read here)

  • Abundance and China, podcast with Dan Wang and others. Abundance framing for state capacity, risk pricing for tail scenarios, and learning from Chinese speed without importing ideology. (Read and listen here)

Thanks for reading!

Share

🔴 A messier AI story

2025-10-22 20:00:42

The latest WSJ report about Sam Altman’s influence on the leading tech companies got a lot of attention because it claims that Altman showed interest in using Google’s homegrown TPU chips alongside Nvidia’s GPUs.

The flirtation apparently triggered Nvidia’s boss Jensen Huang.

What matters is the financing architecture forming around OpenAI’s compute build-out. As the WSJ writes:

As part of the deal, Nvidia is also discussing guaranteeing some of the loans that OpenAI plans to take out to build its own data centers, people familiar with the matter said—a move that could saddle the chip giant with billions of dollars in debt obligations if the startup can’t pay for them. The arrangement hasn’t been previously reported.

We’ve known that OpenAI is creating a web of relationships with its suppliers beyond simple cash-for-services. This has been the firm’s métier since its deal with Microsoft, and while complicated and unorthodox, it can be seen as expedient in a fast-moving market.

When I reviewed that 11 days ago, I cautioned that while there weren’t real issues yet, we should be watchful of temptations to use

increasingly complex deal structures that blend credit lines, equity stakes, and multi-year purchase commitments.

Well, if the Journal’s reports are correct, we’re seeing exactly that: Nvidia discussing guarantees of OpenAI’s data-centre loans. The chip shop would effectively agree to repay OpenAI’s creditors if OpenAI cannot. In other words, Nvidia is extending its balance sheet to backstop a customer’s debt.

In our framework, this would mark a deterioration in our funding quality gauge – the measure of how resilient the sector’s financing structures are. It worsens funding quality for both OpenAI and Nvidia, but the overall gauge for the sector stays in low amber territory, still far from red.

Last Saturday, I made the point that booms tip towards bubbles when balance sheet gymnastics become the norm. Well, this isn’t the norm yet… but if these reports are accurate it would be another early sign of micro-level deterioration in funding quality.

The counter case

In the counter case, what else would Sam Altman do?

If you assume that artificial intelligence is the next big thing, then the last thing you’ll want to do is sit this out. Being confident in your story and momentum could lead you to do exactly this kind of deal. Of course, being desperate could take you down the same path.

A world leader with Prime Minister Keir Starmer

Read more