By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

📈 Data to start your week

2025-12-15 19:26:12

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

Subscribe now


  1. AI at work ↑ Adoption grew from 40% to 45% between Q2 and Q3 this year. But the real growth is in light, frequent use throughout the week.

  2. Enterprise sales ↑ OpenAI reported 8x year-on-year growth in its ChatGPT enterprise use, led by technology, manufacturing and healthcare.

  3. Age of construction ↓ Construction productivity has been flat or falling across most rich countries since the 1990s, despite decades of automation and evolving manufacturing practices.

  4. The cost of letting chips go ↑ Without any exports or smuggling, the US is projected to hold a 21x-49x lead in AI compute over China in 2026. Allowing unrestricted Hopper exports shrinks that lead to as little as ~1.2x-6.7x. (See EV#554)


🔮 Exponential View #554: Intelligence gets cheaper. Adoption gets harder. Jet engines become power plants; tiny brain chips & emotional AI++

2025-12-14 11:17:27

You help me keep my finger on the pulse of what is happening and what may be to come. – Joanna P., a paying member

Hi all,

Welcome to the Sunday edition.

Inside:

  • GPT 5.2 shows how rapidly the cost of intelligence is collapsing.

  • AI adoption has hit its first plateau – why breadth is flattening and where intensity will drive the next wave.

  • China’s H200 trap: why limited US chip access could shrink America’s compute lead and still fail to deliver leverage.

  • Plus: AI for teen support, a jet engine turned 42MW data center turbine, and hair‑thin brain chips streaming at ultra‑high bandwidth.

In my latest talk, I break down what AI needs to scale

Listen on Apple Podcasts or Spotify


Takeaways from OpenAI’s Code Red

OpenAI’s GPT 5.2 shows how rapidly the cost of intelligence is collapsing. GDPval is a benchmark that tests professional tasks in fields like finance and healthcare. GPT 5.2 beats industry experts in 70% of matchups, at more than 11x the speed and less than 1% of the cost of human experts. To get a sense of model progress, that is double the win rate of GPT 5.1, which launched a month ago.

This latest model scores 90.5% on the ARC-AGI benchmark for $11.54, more than 390x cheaper than the previous high score, set by o3 one year ago. On ARC-AGI-2, a harder test, GPT 5.2 is substantially more capable than the four-month-old GPT-5.
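As a back-of-envelope check on that cost collapse, here is a quick sketch. The $11.54 figure and the "more than 390x cheaper" multiplier come from the text above; the implied o3 cost is my own derivation, not a published number:

```python
# Back-of-envelope: what the "390x cheaper" claim implies about o3's cost.
# The $11.54 and the 390x multiplier are stated in the text above;
# the implied o3 figure is derived from them, not a published number.

GPT_52_COST = 11.54   # $ to reach 90.5% on ARC-AGI (stated above)
COST_MULTIPLE = 390   # "more than 390x cheaper" than o3's earlier run

implied_o3_cost = GPT_52_COST * COST_MULTIPLE
print(f"Implied o3 cost for its high score: ~${implied_o3_cost:,.0f}")
# Roughly $4,500: a year of progress moved the same benchmark regime
# from thousands of dollars per run to about ten dollars.
```

In other words, the multiplier implies o3's run cost on the order of $4,500, which is the scale of the collapse the text describes.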

Both the GDPval and ARC-AGI benchmarks point to a continued acceleration in model progress.

GPT 5.2 reinforces my argument that we’re going to see plenty of model variety. Claude Opus 4.5 remains a better coding model. GPT 5.2 is especially powerful when you run it in Pro Mode, but this can take 15-20 minutes. So Gemini 3 Pro still has an important role, especially for tasks that don’t need that depth. I’ve not yet had a chance to pit Gemini 3 Pro in thinking mode against GPT 5.2 Pro. Let me know in the comments if you’ve done it and what you found!

Who did it better?

The first plateau?

The Ramp AI Index, which tracks AI adoption through AI subscription spend, flattened in November and has effectively plateaued since July. That can look like a slowdown, but I see a transition.

Here’s what’s likely going on. Enterprise AI is already a $37 billion market growing 3.2x per year – what Menlo Ventures calls the fastest-scaling software category on record. But that growth is uneven and the market’s character is shifting.


🔮 What it will take for AI to scale (energy, compute, talent)

2025-12-12 00:42:23

I recently set out my macro view on the next 24 months of AI. The response was strong and many of you wrote in with questions. In this episode, I build on that analysis and answer your questions.

Some highlights:

(03:36) The biggest AI constraint right now

(10:43) Why mid-2026 is a crucial turning point

(18:41) The market’s reaction to OpenAI’s code red

(20:51) The best strategy for middling powers?

🔔 Subscribe on YouTube for every video – including the ones we don’t publish here.

📈 Data to start your week

2025-12-08 23:36:24

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

  1. Agentic AI ↑ Orchestration systems (like Poetiq) significantly boost frontier-model performance on ARC-AGI-2.

  2. VC screening ↑ LLMs cut venture capital deal screening time from 2 hours to ~13 seconds.

  3. Underestimating competitors ↑ 93% of companies misjudge how quickly rivals are adopting AI and robotics.

  4. Sourcing from China ↓ A third of the members of the European Chamber in China are looking to shift sourcing away from China due to tight export controls.


🔮 Exponential View #553: The story of 100 trillion tokens; China’s chip win; superhuman persuasion, Waymo ethics, Polymarket & hydrology++

2025-12-07 10:59:49

I haven’t read anyone as thorough as you on the AI bubble and related topics. – Miguel O., a paying member

Hi all,

In this edition:

  • Superhuman persuasion. New data shows that AI is more effective than TV ads at changing minds, even when it lies.

  • The chip sanctions failed. How “stacking” old tech is neutralizing the US blockade (and validating my 2020 thesis).

  • Solving the unsolved. Terence Tao calls AI “routine” and autonomous agents are finally cracking the backlog of neglected science.

  • The end of Hollywood. Financialization, not AI, may be the root cause.

In my latest video, I break down how I think about the overlapping technology S-curves that are driving the market upheaval:

There are more exponentials in this AI wave doing their work than just ChatGPT and large language models. These new technologies make new things possible: we start to do things we didn’t do before, either because they weren’t possible or because they were too expensive.

Listen on Apple Podcasts or Spotify


RIP chatbot, hello glass slipper

OpenRouter, an aggregator routing traffic across 300+ AI models for five million developers, released an analysis of 100 trillion tokens of usage. Their data offers a unique, cross-platform look at the market. It’s worth a deep read, but for now I’d highlight two directions:

First, the “glass slipper” effect – retention is driven by “first-to-solve,” not “first-to-market.” When a model is the first to unlock a specific, high-friction workload (like a complex reasoning chain), the cohort that adopts it shows effectively zero churn, even when cheaper or faster competitors emerge. This confirms my long-held view: customers don’t buy benchmarks; they buy solutions. Once a model fits the problem, like a glass slipper, switching costs become irrelevant.

Second, the shift to agentic inference is undeniable. In less than 12 months, reasoning-optimised models have surged from negligible to over 50% of all token volume. Consequently, average prompt lengths have quadrupled to over 6,000 tokens, while completion lengths have tripled. The insight here is that users aren’t becoming more verbose; the systems are. We are seeing the mechanical footprint of agentic loops iterating in the background.
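That mechanical footprint is easy to reproduce. A toy agentic loop (all names, the word-count token proxy, and the numbers are illustrative, not OpenRouter's data) shows why prompt length balloons even when the human's input never changes:

```python
# Toy agentic loop: each iteration feeds the full transcript back in,
# so the prompt compounds while the user's actual input stays fixed.
# Word count stands in for a real tokenizer; everything is illustrative.

def fake_tool_call(step: int) -> str:
    """Stand-in for a tool/LLM response appended to the context."""
    return f"[step {step}] observation: " + "data " * 50  # ~53 words

user_request = "Summarise last quarter's sales figures."  # never grows
context = user_request
prompt_sizes = []

for step in range(1, 6):                        # five agentic iterations
    prompt_sizes.append(len(context.split()))   # "tokens" sent this turn
    context += "\n" + fake_tool_call(step)      # loop appends its own output

print(prompt_sizes)  # strictly increasing: the system, not the user, is verbose
```

The prompt grows linearly per iteration even though the user typed one short sentence, which is exactly the signature of agentic loops iterating in the background.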


China’s chip tipping point

China’s drive for semiconductor independence is accelerating faster than predicted. The recent Shanghai IPO of Moore Threads, a leading AI chipmaker, surged 425% on debut, signaling voracious domestic capital support for “China’s Nvidia” alternatives. This aligns with a bold forecast from Bernstein, that China is on track to produce more AI chips than it consumes by 2026, effectively neutralizing the intended chokehold of US export controls.


🔮 The real bottlenecks in AI + Q&A

2025-12-06 01:22:57

In today’s session I reflected on why AI’s bottleneck is no longer the models but the systems expected to absorb them.

I followed with a Q&A that touched on

  • OpenAI’s competitive position

  • Where value will accrue in the stack

  • The role of energy and grid limits

  • The impact of cybersecurity risks

Enjoy!

Azeem
