
Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

📈 Data to start your week

2026-01-06 01:47:05

Hi all,

Happy New Year and welcome back to our usual programming! This is the Monday snapshot of market-moving data, in under 250 words.


I am hosting a live member-only 2026 briefing with Q&A this Wednesday at 12pm ET. If you’re a member, register here.

Subscribe now


  1. African solar ↑ In November, Africa imported over 2GW of solar panels for the first time.1

  2. The US outperformed economists’ expectations again. The economy grew at an annualised rate of 4.3% in Q3 of last year. Nearly 40% of this is attributed to net trade effects post-tariffs.

  3. Stack overflowed. Posts to Stack Overflow have fallen to levels last seen in its early years. Answers have moved from public forums to chatbots.

  4. GPU prices ↑ Demand for Nvidia chips continues to outpace supply. H200 AWS rental prices climbed ~35% since July and B200 prices nearly tripled.

  5. Valuation-margin correlation ↑ AI companies’ valuations are more closely correlated with token margins than with raw usage.2

Read more

🔮 Exponential View’s year in review

2025-12-31 10:32:16

2025 was the year the compute economy became real.

Massive infrastructure commitments. Data centres acquiring power plants. AI models that reason. The central question went from “what can AI do?” to “how do we build the energy systems, governance frameworks, and coordination mechanisms to absorb what’s coming?”

That question – how intelligence, energy, and coordination collide – is the organising logic of Exponential View. Nearly every meaningful story we tracked this year sat somewhere on that fault line. You couldn’t understand one without the others.

Here’s a curated map of our most durable work from 2025 – your fastest way to catch up and revisit the ideas that matter.

Begin with the stars (★): the pieces most likely to outrun the news cycle by far. Choose two or three that fit your current challenges to make the most of today’s review.

And if you’ve been reading the free editions, wondering whether to go deeper, now is the time – it’s only going to get weirder!

Subscribe now

Give a gift subscription

Big picture / thesis pieces

The next 24 months in AI (Dec 3) A unified view of AI’s critical pressure points – energy, compute, talent – and what to watch. This is the framework piece.

The great value inversion (Sep 20) “Has value broken?” Electricity at negative prices, credentials losing worth, abundance creating crisis. Distinctive EV thinking on why the economic grammar is out of date.

Is AI a bubble? (Sep 17) + the live dashboard The five-gauge framework for assessing boom vs. bubble. Circulated through boardrooms and policy circles.

Energy is technology (Sep 17 2024) Our evergreen primer on how energy went from being a commodity to being a technology and why this changes everything.

The shift to computation (Nov 7) Zooming out to understand what the infinite compute demand means for the rest of the economy.

China / Geopolitics

The Chinese Sputnik that demands our attention (Dec 31 2024) We were the first to contextualize the DeepSeek surprise in December 2024.

72 hours in China (Jul 2) A firsthand look at daily life, technology, and pace inside contemporary China.

Paul Krugman and Azeem on the world in 2025 (Jan 31) The conversation that framed the year: tariffs, trade wars, the new information ecosystem, AI and debt.

Kimi K2 is the model that should worry Silicon Valley (Jul 15) “AI has its Vostok 1 moment.” The strategic analysis of what Chinese open-source means for US incumbents.

Why China builds while the US debates, with Dan Wang (Oct 1) Why America’s constraints are largely internal, and why China’s advantage lies in coordinated execution.

Under-the-radar

How Europe outsourced its future to fear (Nov 19) Europe is being held back by a precautionary reflex that treats inaction as the safest choice, even as the costs of standing still rise sharply.

The $100 trillion productivity puzzle (Jun 19) The capability-absorption gap: labs racing ahead faster than firms can absorb. Why macro-productivity numbers remain flat despite AI breakthroughs.

Did OpenAI’s $100 billion path just get narrower? (Dec 4) Sam Altman’s internal pivot and what it tells us about the AI market right now.

Unpicking OpenAI’s real revenues (Nov 14) How the misreading of cash flows is spooking investors.

Learning & accountability

Seven lessons from building with AI (Apr 16) Practical takeaways from automating our work with AI, and a stack of 50+ tools that we have used.

What I was right (and wrong) about (Jul 4) Marking your own work. AI capability, solar records, EVs, workforce impact. What held up and what didn’t.

The cheat codes of technological progress (May 31) What still governs technological advance. Revisiting the ‘laws’ of progress. How, when and why they work (or don’t).

The exponential thinker’s reading list: 10 timeless essays (Sep 10) A collection of essays that explain how and why technologies reshape the world.

Subscribe now

🔮 Six mental models for working with AI

2025-12-30 00:03:36

The question of whether AI is “good enough” for serious knowledge work has been answered. The models crossed that threshold this year. What’s slowing organizations down now isn’t capability, but the need to redesign work around what these systems can do.

We’ve spent the past 18 months figuring out how. We made plenty of mistakes but today I want to share what survived. Six mental models that can genuinely change the quality of work you get from generative AI. Together with the seven lessons we shared earlier in the year, this is the operating manual we wish we had all along.

At the end, you’ll also get access to our internal stack of 50+ AI tools. We’ve documented everything we’re actively using, testing or intend to test, to help you decide what tools might work for you.

Enjoy!

Subscribe now


1. The 50x reframe

Most people start working with AI by asking something along the lines of: how do I speed up what I’m already doing?

That question is comfortable and wrong. I find that it anchors me to existing constraints.

A more useful question is:

What would I do if I had 50 people working on this?

Then work backwards.

The 50x reframe forces you to imagine the ideal outcome unconstrained by time or labor. Only then do you ask which parts of that hypothetical organization can be simulated with software. I now encourage our team members to think about who they would hire, what work that person would do, and how they’d know whether that hire was succeeding.

If you’ve not had the experience of hiring fifty people for a project (fair enough!), use this prompt to get started and identify what you might need:

A prompt you could use:
I currently [describe your task/process]. Walk me through what this would look like if I had a team of 50 people dedicated to doing this comprehensively and systematically. What would each role focus on? What would the ideal output look like? Then help me identify which parts of that hypothetical team’s work could be automated or assisted by AI tools.

For example, we use this approach for podcast guest prospecting and research. We used to rely on our network and serendipity to identify 20-30 strong candidates for each season – a mix of the right expertise, timing and editorial fit that consistently delivered good conversations, but left too much to chance. Instead, 50x thinking asks: what if we could systematically evaluate the top 1,000 potential guests? What if we could track the people we’re interested in so they surface when they’re most relevant? We built a workflow that researches each candidate, classifies expertise, identifies timely angles, and suggests the most relevant names for any given week’s news cycle.
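As a rough sketch of what that kind of pipeline can look like, here is a minimal Python outline. Everything in it is illustrative: the `call_model` stub, the role split and the prompts are assumptions rather than our actual workflow, and you would wire `call_model` to whichever LLM provider you use.

```python
# Illustrative sketch only: a 50x-style guest-prospecting pipeline.
# `call_model` is a placeholder; connect it to your preferred LLM SDK.
from dataclasses import dataclass

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    raise NotImplementedError("Wire this to your provider's SDK.")

@dataclass
class Candidate:
    name: str
    bio: str
    notes: str = ""

def research(candidate: Candidate) -> Candidate:
    # The "virtual researcher": classify expertise and find a timely angle.
    candidate.notes = call_model(
        f"Profile this potential podcast guest.\n"
        f"Name: {candidate.name}\nBio: {candidate.bio}\n"
        "Return their main expertise, one timely angle tied to this week's news, "
        "and a 1-5 editorial-fit score with a one-line justification."
    )
    return candidate

def shortlist(candidates: list[Candidate], week_theme: str) -> str:
    # The "virtual booker": rank the researched pool against this week's theme.
    dossier = "\n\n".join(f"{c.name}: {c.notes}" for c in candidates)
    return call_model(
        f"This week's theme: {week_theme}\n\nResearched candidates:\n{dossier}\n\n"
        "Recommend the three most relevant guests and explain why."
    )
```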

2. Adversarial synthesis

Even experienced operators have blind spots. We internalize standards of “good” based on limited exposure. No one has seen great outputs across every domain, but the models, collectively, have come closer than anyone else.

To make the most of this superpower, I give Gemini, Claude and ChatGPT the same task – and make them argue: I have each critique the others’ answers. You’ll quickly surface gaps in your framing, assumptions you didn’t realise you were making, and higher quality bars than you expected.
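A minimal sketch of that cross-critique loop is below. The model labels and the `ask` helper are placeholders for whichever SDKs you actually use; this is one way to structure it, not a prescribed implementation.

```python
# Illustrative sketch: adversarial synthesis across several models.
MODELS = ["gemini", "claude", "chatgpt"]  # labels only; map to real API clients

def ask(model: str, prompt: str) -> str:
    """Placeholder: route the prompt to the provider behind `model`."""
    raise NotImplementedError

def adversarial_synthesis(task: str) -> dict:
    # Round 1: each model attempts the same task independently.
    answers = {m: ask(m, task) for m in MODELS}

    # Round 2: each model critiques the others' answers.
    critiques = {}
    for reviewer in MODELS:
        others = "\n\n".join(f"[{m}]\n{a}" for m, a in answers.items() if m != reviewer)
        critiques[reviewer] = ask(
            reviewer,
            f"Task: {task}\n\nOther models' answers:\n{others}\n\n"
            "Point out gaps in the framing, unstated assumptions, "
            "and where the quality bar should be higher.",
        )
    return {"answers": answers, "critiques": critiques}
```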


When models disagree, it’s usually a sign that the task is underspecified, or that there are real trade-offs you haven’t surfaced yet. Which brings us to the next point.

3. Productize the conversation

If you’re having the same conversation with AI repeatedly, turn it into a tool. Every repeated prompt is a signal. Your workflow is basically telling you that this (t)ask is valuable enough to formalize.

I found that when I productize a conversation by turning it into a dedicated app or agentic workflow, my process gets better at the core and my tool evolves over time. So the benefits of the original conversation end up compounding in a completely new way.

A prompt you could use:
## Context
I have a recurring [FREQUENCY] task: [BRIEF DESCRIPTION].

Currently I do it manually by [CURRENT PROCESS - 1-2 sentences]. 

Here’s an example of this task in action:
<example>
Input I provided: [PASTE ACTUAL INPUT]
Output I needed: [PASTE ACTUAL OUTPUT OR DESCRIBE]
</example>

## What I Need
Turn this into a reusable system with:
1. **Input specification**: What information must I provide each time?
2. **Processing instructions**: What should the AI do, step by step?
3. **Output structure**: Consistent format for results
4. **Quality criteria**: How to know if the output is good

## Constraints
- Time/effort budget: [e.g., “should take <5 min to run”]
- Depth: [e.g., “verify top 10 claims, not exhaustive”]
- Tools available: [e.g., “has web search” or “no external lookups”]
- Error handling: [e.g., “flag uncertain items vs. skip them”]

## Desired Format
Deliver this as a: [CHOOSE ONE]
- [ ] System prompt I can paste into Claude/ChatGPT
- [ ] Zapier/Make automation spec
- [ ] Python script (Replit-ready)
- [ ] Lindy/agent configuration
- [ ] All of the above with tradeoffs explained

## Success Looks Like
A good output will: [2-3 bullet points describing what “done well” means]

I kept asking LLMs for editorial feedback multiple times a week, for weeks. After some fifteen hours of repeat prompting, I built a virtual editor panel in Replit and Lindy.
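To make “productizing” concrete, here is a toy sketch of what an editor-panel wrapper could look like. The personas, prompts and `call_model` stub are hypothetical; this is not the actual Replit/Lindy build, just the shape of turning a repeated conversation into a reusable tool.

```python
# Illustrative sketch: a repeated "give me editorial feedback" conversation
# formalized into a small reusable tool with fixed personas and output format.
PERSONAS = {
    "Structure": "You are a structural editor. Flag weak openings, buried ledes and sections that drag.",
    "Facts": "You are a fact-checker. List the ten claims most in need of verification.",
    "Style": "You are a line editor. Point out jargon, repetition and awkward phrasing.",
}

def call_model(system_prompt: str, text: str) -> str:
    """Placeholder for a real LLM call that accepts a system prompt."""
    raise NotImplementedError

def editor_panel(draft: str) -> str:
    # Run the draft past each persona and stitch the feedback into one report.
    sections = []
    for name, system_prompt in PERSONAS.items():
        feedback = call_model(system_prompt, f"Review this draft:\n\n{draft}")
        sections.append(f"## {name} editor\n{feedback}")
    return "\n\n".join(sections)
```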

Read more

🔮 The best of my 2025 conversations: Kevin Kelly, Tyler Cowen, Steve Hsu, Dan Wang, Matthew Prince & others

2025-12-26 18:10:18

This year, I recorded 30 conversations on AI, technology, geopolitics, and the economy.

What follows is a 20-minute edit, a way in. If you only had 20 minutes to orient on key ideas that were top of mind in 2025 and will hold as we step into the new year, these are the ideas I’d start with:

Jump to the key ideas

Part 1 - AI as a general purpose tech
2:27: Why this matters
3:06: Kevin Weil: The test for a startup’s longevity
4:01: Matthew Prince: The “Socialist” pricing debate
4:45: This will stifle the AI boom
7:42: The “NBA-ification” of journalism
8:13: From utopia to protopia

Part 2 - How work is changing
10:13: An evolving labor market
10:45: Steve Hsu: The future of education
11:27: Thomas Dohmke: The inspectability turning point
12:09: Ben Zweig: The new role for entry-level workers
13:16: Ben Zweig: The eroding signal of higher education

Part 3 - The physical world, compute, and energy
14:05: Setting the stakes
14:51: Greg Jackson: “We’re halfway across the road. We have to get across as fast as we can.”
15:27: Greg Jackson: Building a “show, don’t tell” company
16:12: The physical reality of AI

Part 4 - The changing US-China landscape
16:57: A new era
18:09: Dan Wang: The West’s hidden tech gap
18:38: The two types of accelerationism
19:38: Jordan Schneider: What the US can learn from China

Go deeper into the full conversations

🔔 Subscribe on YouTube for every video, including the ones we don’t publish here.

Give a gift subscription

🧠 We are all spiky

2025-12-23 12:17:18


“I subscribe because you are naming fractures that matter.” – Lars, a paying member

Subscribe now


Andrej Karpathy argues we’ve got one common AI metaphor wrong. We talk about training AI as if we’re training animals, building instincts and ultimately shaping behaviour. No, he says in his year-end review:

We’re not “evolving/growing animals”, we are “summoning ghosts”. Everything about the LLM stack is different (neural architecture, training data, training algorithms, and especially optimization pressure) so it should be no surprise that we are getting very different entities in the intelligence space, which are inappropriate to think about through an animal lens.

He goes on to say that the “ghosts”

are at the same time a genius polymath and a confused and cognitively challenged grade schooler, seconds away from getting tricked by a jailbreak to exfiltrate your data.

We’re familiar with that jaggedness. A field experiment two years ago put management consultants to work with GPT-4. Inside the model’s competence zone, people went faster (25%) and their outputs were better (40%). But on a task outside the frontier, participants became far more likely to be incorrect. Same tool, opposite effects.

This jaggedness has more recently surfaced in Salesforce’s Agentforce itself, with the senior VP of product marketing noting that they “had more trust in the LLM a year ago”. Agentforce now uses more deterministic approaches to “eliminate the inherent randomness” of LLMs and make sure critical processes work as intended every time.

Andrej’s observation cuts deeper than it may initially appear. The capability gaps he describes are not growing pains of early AI. They are signatures of how training works. When a frontier lab builds a model, it rewards what it can measure: tasks with clear benchmarks, skills that climb leaderboards and get breathlessly tweeted with screenshots of whatever table they top.

The result is a landscape of jagged peaks and sudden valleys. GPT-4 can pass the bar exam but struggles to count the letters in “strawberry.” And yes, the better models can count the “r”s in strawberry, but even the best have their baffling failure modes. I call them out regularly, and recently Claude Opus 4.5 face-planted so badly that I had to warn it that Dario might need to get personally involved.

What gets measured gets optimised; what doesn’t, doesn’t. Or, as Charlie Munger put it: show me the incentive and I’ll show you the outcome.

The benchmarks a system trains on sculpt its mind. This is what happened in the shift to Reinforcement Learning with Verifiable Rewards (RLVR) – training against objective, programmatically checkable rewards that should, in principle, be more resistant to reward hacking. Andrej refers to this as the fourth stage of training. Change the pressure, change the shape.
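To make the idea of a verifiable reward concrete, here is a toy sketch (my illustration, not any lab’s training code): the reward is computed by checking the model’s output against an answer that can be verified in code, rather than by a learned preference model.

```python
# Toy sketch of a verifiable reward: the answer can be checked in code,
# so the reward signal is objective rather than a learned preference score.
def verifiable_reward(model_answer: str, expected: int) -> float:
    """1.0 if the final token of the answer matches the known result, else 0.0."""
    try:
        return 1.0 if int(model_answer.strip().split()[-1]) == expected else 0.0
    except ValueError:
        return 0.0

print(verifiable_reward("The answer is 42", expected=42))    # 1.0
print(verifiable_reward("Probably around 40", expected=42))  # 0.0
```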

The essay has raised an awkward question for me. Are these “ghosts” something alien, or a mirror? Human minds, after all, are also optimised: by evolution, by exam syllabi, by the metrics our institutions reward, by our deep family backgrounds. We have our own jagged profiles, our own blind spots hiding behind our idiosyncratic peaks.

So, are these “ghosts” that Andrej describes, or something more familiar? Our extended family?

Psychometricians have argued for a century about whether there is a general factor of intelligence, which they call g, or whether intelligence is a bundle of capabilities. The durable finding is that both are true. Scores on cognitive tests are positively correlated, and this common factor often explains up to two-thirds of the variation in academic and job performance.
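A toy numerical illustration of that finding (simulated data, not real psychometrics): if every test score mixes one shared “general” factor with test-specific noise, the first principal component of the scores soaks up most of the shared variance, while the remainder stays test-specific.

```python
# Toy illustration with simulated test scores: one shared factor plus noise.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)  # the shared "general" ability
# Six tests, each loading 0.8 on g, with independent test-specific noise.
tests = np.column_stack([0.8 * g + 0.6 * rng.normal(size=n) for _ in range(6)])

eigvals = np.linalg.eigvalsh(np.cov(tests, rowvar=False))[::-1]
print(round(eigvals[0] / eigvals.sum(), 2))      # ~0.7: one factor dominates
print(round(eigvals[1:].sum() / eigvals.sum(), 2))  # ~0.3: test-specific spread
```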

But plenty of specialized strengths and weaknesses remain beneath that statistical umbrella. Humans are not uniformly smart. We are unevenly capable in ways we can all recognize: brilliant at matching faces, hopeless at intuiting exponential growth.

The oddity is not that AI is spiky – it’s that we expected it wouldn’t be.

We carried over the mental model of intelligence as a single dial; turn it up, and everything rises together. But intelligence, human or machine, is not a scalar. Evolution optimized life for survival over deep time in embodied creatures navigating physical, increasingly crowded and ultimately social worlds. The result was a broadly general substrate which retains individuated expression.

Today’s LLMs are selected under different constraints: imitating text, and concentrating reward in domains where answers can be cheaply verified. Different selection pressure; different spikes. But spikes nonetheless.

Spikiness & AGI

What does spikiness imply for AGI as a single system that supposedly does everything?

My bet remains that we won’t experience AGI as a monolithic artefact arriving on some future Tuesday. It will feel more like a change in weather: it’s getting warmer, and we gradually wear fewer layers. Our baseline expectation of what “intelligent behaviour” looks like will change. It will be an ambient condition.

The beauty is that we already know how to organise such spiky minds. We’re the living proof.

“Evolution optimized life for survival over deep time in embodied creatures navigating physical, increasingly crowded and ultimately social worlds. The result was a broadly general substrate which retains individuated expression.”

Human society is one coordination machine for spiky minds. It’s the surgeon with no bedside manner. The mathematician baffled by small talk. The sales rep who can close a deal on far better terms than the Excel model ever allowed.

Read more

📈 2025 in 25 stats

2025-12-22 21:26:58

Hi all,

Over the past year, we reviewed thousands of data points on AI and exponential technologies to identify the signals that mattered most. Across almost 50 editions of Monday Data, we shared ~450 insights to start each week.

Today, we’ve distilled them into 25 numbers that defined the year, across five themes:

  1. The state of intelligence

  2. The state of work

  3. The state of infrastructure

  4. The state of capital

  5. The state of society

If you enjoy today’s survey, send it to a friend.

Subscribe now

Give a gift subscription

Let’s jump in!


Intelligence got cheaper, faster and more open

  1. Margin gains. OpenAI’s compute profit margin – revenue after deducting model operation costs – reached 68% in October 2025, roughly doubling from January 2024 (36%) as efficiency improved.

  2. Demand curve. Following the Gemini 3 release, Google has been processing more than 1 trillion tokens per day. Moreover, token output at both Alibaba and OpenRouter has been doubling every 2-3 months.

  3. Open beats on price. Shifting from closed models to open models was estimated to reduce average prices by over 70%, representing an estimated $24.8 billion in consumer savings across 2025 when extrapolated to the total addressable market.

  4. Trust gap. Twice as many Chinese citizens (83%) as Americans (37.5%) trust that AI systems serve the best interests of society.

  5. Where it’s used most. AI usage is clustered in service-based economies – with UAE (59.4%) and Singapore (58.6%) in the lead; the US ranks ~23rd.

Work reorganized around AI

  1. Near universal adoption. Close to 90% of organizations now use AI in at least one business function, according to McKinsey.

  2. Time saved. ChatGPT Enterprise users report 40-60 minutes of saved time per day thanks to AI use. Data science, engineering and communications workers save more than average at 60-80 minutes per day.

  3. Hiring tilts to AI. Overall job postings for the whole labor market dropped ~8% YoY1, but AI roles are on the rise. Machine learning engineer postings grew almost 40%.

  4. Automation pressure. In areas with autonomous taxis, human drivers’ pay has fallen. Down 6.9% in San Francisco and 4.7% in Los Angeles year-on-year.

  5. Early-career squeeze. Early-career workers between 22-25 in the most AI-exposed occupations (including software developers) saw a 16% relative decline in employment since late 2022.2

Compute hit the real world

Read more