Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

🔮 Exponential View #556: When execution gets cheap. Capital gains, labor pains. AI buys the grid. CRASH clock, taming complexity & new zones of influence++

2026-01-11 11:55:21

Hi all,

Welcome to the Sunday edition.

Inside:

  • What building two dozen apps over the holidays taught me about the shrinking distance between “Chief Question Officer” and no officer at all.

  • Labor pains, capital gains: US GDP is up, employment not so much. What is going on?

  • The data center, a microgrid: AI labs outran the grid, then hit the turbine factories. Now they’re buying the infrastructure companies themselves.

  • Plus: Utah’s bet on AI prescriptions, taming complexity, robots performing surgeries, and new spheres of influence…

Subscribe now


In the latest member briefing, I share my expectations for 2026. It’s the year AI stops feeling like a tool and starts feeling like a workforce. Plus Q&A.


Execution is cheap

Over the break, I spun up multiple agents in parallel – one building a fact-checker, another an EV slide-deck maker and a third a solar forecast model. All ran simultaneously in the background while I did other work.1 I described the problems, LLMs created detailed product specs and the agents built the apps. In my first meeting back, I demoed two dozen working tools to the team. This follows my rule of 5x: if I do something more than five times, I build an app for it. A year ago, each of those apps would have cost a developer weeks to build.

My friend Erik calls the human in this arrangement the “Chief Question Officer.” We ask, machines do, we evaluate. Erik’s framing is elegant, but I don’t think it’s true to the moment; we’ve moved even further. I used to check every output against a strict spec; now I mostly trust the agent to catch and fix its own mistakes – and it usually does. Before Opus 4.5, I had to rescue the model from dead ends. Now it asks good clarifying questions, corrects itself and rarely stalls.

This velocity changes behaviors. For instance, I used to frame briefs carefully; now I leave them a bit looser because the agent fills the gaps. I remain the Chief, yet the role feels like a pilot toggling autopilot ever higher. If progress continues, will I always occupy the cockpit? Would stepping aside, ceding the questions as well as the answers, actually increase what gets built?

Erik warns of the “Turing Trap,” the temptation for firms to use AI to mimic and replace humans. He frames this as a societal choice between augmentation and replacement. I agree we shouldn’t drift into replacement. But my holiday build sprint made it clear that convenience pulls hard. The pressure isn’t just from companies; it’s from us, users making choices. When each small handoff to the agent feels free, can we really resist going all the way to full automation?

Here’s this weekend’s essay on the work after work, the value of human judgement and authenticity:



Capital gains, labor pains

The US economy has decoupled growth from hiring (see EV#545). While the Atlanta Fed projects a massive 5.4% GDP surge for Q4 2025, the labor market has effectively stalled, adding only 584,000 jobs all year. This is the worst non-recession performance since 2003. This divergence is driven by acceleration in productivity and back-to-back quarterly declines in unit labor costs. Is AI the cause?

Read more

🔮 The work after work

2026-01-10 17:37:45

I became a member of Exponential View because you are consistently insightful and forward-thinking. — Stephen B.



Nine years ago, I told an audience that the future of human work is artisanal cheese. I got a laugh. GPT-4 didn’t exist yet, the trend line was invisible to most, and “knowledge work” still felt impregnable.

My opening slide from 2017

In a follow-up essay about the path to the artisan economy, I wrote:

In the world of the future, automated perfection is going to be common. Machines will bake perfect cakes, perfectly schedule appointments and keep an eye on your house. What is going to be scarce is human imperfection.

The argument was that when machines handle the specifiable, value migrates to what resists specification: the deliberate imperfection woven into a Persian rug as an act of theological modesty, a flaw introduced because perfection is not for humans to claim; the texture, the idiosyncrasy, the detail that comes from being human – from judgment and the discretion that no training manual can encode.

There’s a belief that Persian weavers would deliberately leave small flaws in their rugs, trusting that only God’s creations are truly perfect – and that making a perfect rug would be an affront to God.

When I wrote my first book, Exponential, I spent months wrestling with some chapters. The value wasn’t just in the output but in the burning; my 3am rewrites, the discarded frameworks, the specific frustration of trying to explain exponential curves to readers who had never graphed one.

Ben Thompson is the latest to revive this idea that “true value will come from uniqueness and imperfections downstream from a human.” Call it authenticity – proof that a person was actually here.

The paradox of authenticity

But pseudo-authenticity is about to become very easy and cheap. The AI tools can fake texture and simulate idiosyncrasy. They can produce writing that is OK. With enough context, they might even evoke a sense of interiority. So if authenticity can be manufactured, what makes the real thing real?

The distinction cannot be whether AI was used. That line has already collapsed. AI has long been embedded in our searches and writing tools (Google, Grammarly); today it also serves as a brainstormer, a thesaurus, an assistant and more. The question becomes what I, Azeem, bring to the interaction with AI that the machine, as my tool/assistant/collaborator, cannot supply on its own. And how can anyone tell the difference?

A spectrum is emerging in how people use these tools. At one end, you prompt and publish, the model’s output is treated as finished. At the other, you treat the output as a draft and reshape it with your own judgment and experience. Both use AI, but only one has a human signature.

When I first wrote about artisanal cheese, I imagined this shift unfolding more slowly, alongside the automation of routine office work and the arrival of more robots on assembly lines. I didn’t anticipate that, nine years later, I could build custom software in an hour or produce work that once required entire teams while walking through customs.


The done list

Over Christmas, I built three applications for my DJ workflow. AudioScoop scans my hard drives for the 4,000 or so pieces of music I have, finds duplicates, and queues them for processing. Another, which I called Psychic Octopus, enriches each track with metadata about percussion density, vocal presence, and drop locations. Shoal generates playlists based on mood trajectories and genre destinations.

All of it works. It took perhaps an hour in front of the machine all told, broken up into a couple of chunks. It cost a couple of bucks. And all of it had sat at the bottom of my to-do list for months, maybe years. No one was going to build it for me, and I certainly wasn’t paying a dev shop $15,000 for the privilege. All of it works technically. Shoal has appalling taste.
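To give a sense of how little code these tools need: the duplicate-scanning core of something like AudioScoop could be sketched as below. This is an illustrative sketch, not the actual app's internals – group files by a hash of their contents and flag any hash that appears more than once.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

AUDIO_EXTS = {".mp3", ".wav", ".flac", ".aiff"}

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group audio files under `root` by SHA-256 of their contents;
    return only the groups containing more than one file."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in AUDIO_EXTS:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Whole-file hashing only catches exact copies; near-duplicates (different encodings of the same track) would need an audio fingerprint instead.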

Tom Tunguz calls it moving from the to-do list to the done list. The backlog of things I would never get around to, I now clear in a day.

I am not alone in this. The lead developer of Claude Code revealed that in December, 100% of his code contributions were written by AI. His job is now editing and directing rather than typing syntax. BCG consultants report building 36,000 custom GPTs to assist in work. Stack Overflow, once the repository of engineering knowledge, has sharply declined because people no longer ask questions when the machine answers them in context.

Even the translator function, that thing software engineers did, converting real-world needs into code, is becoming obsolete. I can now build a tool faster than it takes to tell someone the spec. We’re starting to communicate through working prototypes.

In terms of actual velocity, working in this way feels like having 50, maybe 100 people in my team. Putting a specific number on it, I had Claude Code work on a project overnight. Thousands of lines of code, passing hundreds of tests, were ready for me to run when I got back to my desk.

Read more

The year ahead

2026-01-08 02:28:11

In today’s members-only session, I looked ahead to 2026. This is the year when AI will feel less like a set of tools and more like a workforce, where the advantage shifts to those who can orchestrate it into reliable outcomes.

I followed with a Q&A, which covered:

  • How far AI coding and agentic systems really go in practice

  • Where value accrues as building becomes cheaper and faster

  • What this shift means for careers, education, and skills

  • Whether China’s deployment-focused approach could outperform frontier-led strategies

  • How energy, infrastructure, and capital shape the durability of the AI boom

I will be publishing more on these themes later this week.

Happy New Year! Azeem


Read more

📈 Data to start your week

2026-01-06 01:47:05

Hi all,

Happy New Year and welcome back to our usual programming! This is the Monday snapshot of market-moving data, in under 250 words.


I am hosting a live member-only 2026 briefing with Q&A this Wednesday at 12pm ET. If you’re a member, register here.



  1. African solar ↑ In November, Africa imported over 2GW of solar panels for the first time.1

  2. The US outperformed economists’ expectations again. The economy grew at an annualised rate of 4.3% in Q3 of last year. Nearly 40% of this is attributed to net trade effects post-tariffs.

  3. Stack overflowed. Posts to Stack Overflow have fallen to levels last seen in its early years. Answers have moved from public forums to chatbots.

  4. GPU prices ↑ Demand for Nvidia chips continues to outpace supply. H200 AWS rental prices climbed ~35% since July and B200 prices nearly tripled.

  5. Valuation-margin correlation ↑ AI companies’ valuations are more closely correlated with token margins than with raw usage.2

Read more

🔮 Exponential View’s year in review

2025-12-31 10:32:16

2025 was the year the compute economy became real.

Massive infrastructure commitments. Data centres acquiring power plants. AI models that reason. The central question went from “what can AI do?” to “how do we build the energy systems, governance frameworks, and coordination mechanisms to absorb what’s coming?”

That question – how intelligence, energy, and coordination collide – is the organising logic of Exponential View. Nearly every meaningful story we tracked this year sat somewhere on that fault line. You couldn’t understand one without the others.

Here’s a curated map of our most durable work from 2025 – your fastest way to catch up and revisit the ideas that matter.

Begin with the stars (★): the pieces most likely to outrun the news cycle by far. Choose two or three that fit your current challenges to make the most of today’s review.

And if you’ve been reading the free editions, wondering whether to go deeper, now is the time – it’s only going to get weirder!



Big picture / thesis pieces

The next 24 months in AI (Dec 3) A unified view of AI’s critical pressure points – energy, compute, talent – and what to watch. This is the framework piece.

The great value inversion (Sep 20) “Has value broken?” Electricity at negative prices, credentials losing worth, abundance creating crisis. Distinctive EV thinking: the economic grammar is out of date.

Is AI a bubble? (Sep 17) + the live dashboard The five-gauge framework for assessing boom vs. bubble. Circulated through boardrooms and policy circles.

Energy is technology (Sep 17 2024) Our evergreen primer on how energy went from being a commodity to being a technology and why this changes everything.

The shift to computation (Nov 7) Zooming out to understand what the infinite compute demand means for the rest of the economy.

China / Geopolitics

The Chinese Sputnik that demands our attention (Dec 31 2024) We were the first to contextualize the DeepSeek surprise in December 2024.

72 hours in China (2 Jul) A firsthand look at daily life, technology, and pace inside contemporary China.

Paul Krugman and Azeem on the world in 2025 (Jan 31) The conversation that framed the year: tariffs, trade wars, the new information ecosystem, AI and debt.

Kimi K2 is the model that should worry Silicon Valley (Jul 15) “AI has its Vostok 1 moment.” The strategic analysis of what Chinese open-source means for US incumbents.

Why China builds while the US debates, with Dan Wang (Oct 1) Why America’s constraints are largely internal, and why China’s advantage lies in coordinated execution.

Under-the-radar

How Europe outsourced its future to fear (Nov 19) Europe is being held back by a precautionary reflex that treats inaction as the safest choice, even as the costs of standing still rise sharply.

The $100 trillion productivity puzzle (Jun 19) The capability-absorption gap: labs racing ahead faster than firms can absorb. Why macro-productivity numbers remain flat despite AI breakthroughs.

Did OpenAI’s $100 billion path just get narrower? (Dec 4) Sam Altman’s internal pivot and what it tells us about the AI market right now.

Unpicking OpenAI’s real revenues (Nov 14) How the misreading of cash flows is spooking investors.

Learning & accountability

Seven lessons from building with AI (Apr 16) Practical takeaways from automating our work with AI, and a stack of 50+ tools that we have used.

What I was right (and wrong) about (Jul 4) Marking your own work. AI capability, solar records, EVs, workforce impact. What held up and what didn’t.

The cheat codes of technological progress (May 31) What still governs technological advance. Revisiting the ‘laws’ of progress. How, when and why they work (or don’t).

The exponential thinker’s reading list: 10 timeless essays (Sep 10) A collection of essays that explain how and why technologies reshape the world.


🔮 Six mental models for working with AI

2025-12-30 00:03:36

The question of whether AI is “good enough” for serious knowledge work has been answered. The models crossed that threshold this year. What’s slowing organizations down now isn’t capability, but the need to redesign work around what these systems can do.

We’ve spent the past 18 months figuring out how. We made plenty of mistakes but today I want to share what survived. Six mental models that can genuinely change the quality of work you get from generative AI. Together with the seven lessons we shared earlier in the year, this is the operating manual we wish we had all along.

At the end, you’ll also get access to our internal stack of 50+ AI tools. We’ve documented everything we’re actively using, testing or intend to test, to help you decide what tools might work for you.

Enjoy!



1. The 50x reframe

Most people start working with AI by asking something along the lines of: how do I speed up what I’m already doing?

That question is comfortable and wrong. I find that it anchors me to existing constraints.

A more useful question is:

What would I do if I had 50 people working on this?

Then work backwards.

The 50x reframe forces you to imagine the ideal outcome unconstrained by time or labor. Only then do you ask which parts of that hypothetical organization can be simulated with software. I now encourage our team members to think about who they would hire, what work that person would do, and how they’d know whether that person was succeeding.

If you’ve never had the experience of hiring fifty people for a project (fair enough!), use this prompt to get started and identify what you might need:

A prompt you could use:
I currently [describe your task/process]. Walk me through what this would look like if I had a team of 50 people dedicated to doing this comprehensively and systematically. What would each role focus on? What would the ideal output look like? Then help me identify which parts of that hypothetical team’s work could be automated or assisted by AI tools.

For example, we use this approach for podcast guest prospecting and research. We used to rely on our network and serendipity to identify 20-30 strong candidates for each season – a mix of the right expertise, timing and editorial fit that consistently delivered good conversations, but left too much to chance. 50x thinking instead asks: what if we could systematically evaluate the top 1,000 potential guests? What if we could track the people we’re interested in so they surface when they’re most relevant? We built a workflow that researches each candidate, classifies expertise, identifies timely angles, and suggests the most relevant names for any given week’s news cycle.
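The skeleton of that workflow is simple to sketch. Everything below is illustrative – the names, fields and scoring are hypothetical, not our production system: classify each candidate’s expertise, check for a timely angle, and rank against the week’s topic.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    expertise: list[str]   # e.g. topic tags produced by an LLM classifier
    timely_angle: bool     # does this week's news give them a hook?

def rank_guests(candidates: list[Candidate], topic: str) -> list[Candidate]:
    """Rank candidates for this week's episode: an expertise match
    counts double, a timely angle adds one bonus point."""
    def score(c: Candidate) -> int:
        match = 2 if topic in c.expertise else 0
        return match + (1 if c.timely_angle else 0)
    return sorted(candidates, key=score, reverse=True)
```

The point isn’t the scoring function, which is trivial, but that a pool of 1,000 candidates can be re-ranked against every news cycle at effectively zero cost.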

2. Adversarial synthesis

Even experienced operators have blind spots. We internalize standards of “good” based on limited exposure. No one has seen great outputs across every domain, but the models, collectively, have come closer to it than anyone else.

To make the most of this superpower, I give Gemini, Claude and ChatGPT the same task and make them argue: each model critiques the others’ answers. You’ll quickly surface gaps in your framing, assumptions you didn’t realise you were making, and higher quality bars than you expected.
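Mechanically, the loop looks like this. `ask` below is a stub – in practice you would swap in the real API client for each provider – but the two-round structure (independent drafts, then cross-critique) is the whole trick.

```python
def ask(model: str, prompt: str) -> str:
    """Stub for a model call; replace with the real client per provider."""
    return f"[{model}] answer to: {prompt[:50]}"

def adversarial_synthesis(task: str, models=("gemini", "claude", "chatgpt")):
    # Round 1: each model drafts an answer to the same task, independently.
    drafts = {m: ask(m, task) for m in models}
    # Round 2: each model critiques the other models' drafts.
    critiques = {}
    for critic in models:
        rivals = "\n\n".join(f"{m}:\n{d}" for m, d in drafts.items() if m != critic)
        critiques[critic] = ask(critic, f"Critique these answers to: {task}\n\n{rivals}")
    return drafts, critiques
```

A third round – asking one model to synthesize the drafts in light of the critiques – is a natural extension of the same pattern.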

Generated using Midjourney

When models disagree, it’s usually a sign that the task is underspecified, or that there are real trade-offs you haven’t surfaced yet. Which brings us to the next point.

3. Productize the conversation

If you’re having the same conversation with AI repeatedly, turn it into a tool. Every repeated prompt is a signal. Your workflow is basically telling you that this (t)ask is valuable enough to formalize.

I found that when I productize a conversation by turning it into a dedicated app or agentic workflow, my process gets better at the core and my tool evolves over time. So the benefits of the original conversation end up compounding in a completely new way.

A prompt you could use:
## Context
I have a recurring [FREQUENCY] task: [BRIEF DESCRIPTION].

Currently I do it manually by [CURRENT PROCESS - 1-2 sentences]. 

Here’s an example of this task in action:
<example>
Input I provided: [PASTE ACTUAL INPUT]
Output I needed: [PASTE ACTUAL OUTPUT OR DESCRIBE]
</example>

## What I Need
Turn this into a reusable system with:
1. **Input specification**: What information must I provide each time?
2. **Processing instructions**: What should the AI do, step by step?
3. **Output structure**: Consistent format for results
4. **Quality criteria**: How to know if the output is good

## Constraints
- Time/effort budget: [e.g., “should take <5 min to run”]
- Depth: [e.g., “verify top 10 claims, not exhaustive”]
- Tools available: [e.g., “has web search” or “no external lookups”]
- Error handling: [e.g., “flag uncertain items vs. skip them”]

## Desired Format
Deliver this as a: [CHOOSE ONE]
- [ ] System prompt I can paste into Claude/ChatGPT
- [ ] Zapier/Make automation spec
- [ ] Python script (Replit-ready)
- [ ] Lindy/agent configuration
- [ ] All of the above with tradeoffs explained

## Success Looks Like
A good output will: [2-3 bullet points describing what “done well” means]

I kept asking LLMs for editorial feedback multiple times a week, for weeks. After some fifteen hours of repeat prompting, I built a virtual editor panel in Replit and Lindy.

Read more