Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

📈 Data to start your week

2026-01-19 23:53:16

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

  1. Silicon détente ↑ The US and Taiwan signed the largest chip reshoring deal in history, $500 billion in US-bound capital plus tariff cuts for Taiwan.

  2. AI lifts TSMC ↑ The chip giant posted a record Q4 profit of $16 billion, up 35% YoY, with high-performance computing making up 58% of total revenue.

  3. Revenue follows compute ↑ OpenAI shared that compute scaled ~3x YoY from 2023 through 2025, while annualized revenue climbed from $2 billion to $20 billion – a near 1:1 correlation. It’s to be seen if this is a durable law or a boom artifact.
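The arithmetic behind that near 1:1 claim is worth a quick sanity check. A back-of-the-envelope sketch, using only the figures in the item above (nothing here is an official OpenAI statistic):

```python
# Compute scaling ~3x per year over 2023-2025 compounds to roughly 9x,
# while annualized revenue grew from $2B to $20B, i.e. 10x.
compute_growth_per_year = 3.0
years = 2  # 2023 -> 2025
compute_multiple = compute_growth_per_year ** years  # ~9x

revenue_multiple = 20 / 2  # $2B -> $20B annualized

ratio = revenue_multiple / compute_multiple
print(f"compute ~{compute_multiple:.0f}x, revenue {revenue_multiple:.0f}x, ratio {ratio:.2f}")
```

A ratio near 1 is what "revenue follows compute" predicts; whether it holds as compute keeps scaling is the open question.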

Read more

🔮 Exponential View #557: Starlink, Iran & abundance; AI’s exploration deficit; AGI vibes, aliens, Merge Labs & regulating companions++

2026-01-18 10:23:21

“I love the sharpness of your insights and perspectives.” — Satya C., a paying member

Good morning from Davos. I got into Switzerland yesterday for a week of meetings and, if past years are any guide, a lot of conversations about AI. As in previous years, I’ll be checking in live most days from the Forum to share what I’m hearing behind the scenes.

And now, let’s get into today’s weekend briefing…


An antidote to forced scarcity

Iran’s regime tried to cement control with a near‑total internet shutdown. Even so, Starlink, smuggled in after sanctions eased in 2022, gave citizens a channel the state couldn’t fully police. The regime has tried to block Starlink by jamming GPS, but as always with anything on the internet, there are workarounds. It created real capacity for coordination, even if a full‑scale revolution didn’t follow.

“Commenting from IRAN. It works just FINE” via Reddit

Something similar happened to energy in Pakistan. In 2024, solar panel imports hit 22 gigawatts, nearly half its total generation capacity of 46 GW in 2023 (see our analysis here). Consumers opted out of a grid charging 27 rupees per kWh; rooftop solar comes in at roughly one‑third of that.

Energy is wealth

In both cases – authoritarian or merely dysfunctional – centralized infrastructure created demand for decentralized workarounds: solar panels in Lahore and Starlink dishes in Tehran. In corrupt or dysfunctional systems, power usually flows from dependency, not from legitimacy. Control the grid, control information, control money and compliance follows. When hardware breaks that dependency, only legitimacy remains – and these states have little. Energy, intelligence and coordination underpin growth; two of the three are now slipping beyond centralized control.

Escaping the explore-exploit trap

Researchers who adopted AI, tracked across 41 million publications in a Nature study, published three times more papers and received 4.8 times more citations. The trade-off is a 4.6% contraction in topics studied and a 22% drop in researcher collaboration.

AI pushes work toward data‑rich problems, while foundational questions where data is sparse go unexplored. We face an exploration deficit: AI does well at exploiting what we already know, but it is eroding the incentive to discover what we don’t. Four separate autonomous-research attempts show that models excel at literature synthesis but fail at proposing experiments that could falsify their own hypotheses, or at identifying which variables to manipulate next.

My own experience hits a related wall. I’ve used AI tools extensively for research on my new book; they’re excellent at spotting cross‑field patterns. But safety theatre has hobbled them. When I test plausible tech scenarios built on well‑understood trends, models smother answers in caveats for every stakeholder — permitting bodies, developing economies, future generations, and so on. The labs bake this in during post-training, with reinforcement learning tuned to hedge and cover. That’s the opposite of exploration. So we have tools that synthesise brilliantly, if bureaucratically, but flinch from the leaps that research needs.

As in scientific labs, so in offices, where junior employment may already be in decline1. Companies chase payroll savings, but are they also cutting off the pipeline to expertise? Anthropic’s latest data suggests that AI succeeds at about 66% of complex tasks; the remaining 34% falls into “O-ring” territory, where one weak link causes the entire system to fail.
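The “O-ring” point has a simple shape: if a workflow chains several tasks and an agent succeeds at each with probability ~0.66 (the Anthropic figure cited above), end-to-end reliability decays geometrically. A minimal sketch:

```python
# One weak link sinks the run: success of a chain of n independent tasks,
# each succeeding with probability p, is p**n.
def chain_success(p: float, n: int) -> float:
    return p ** n

p = 0.66
for n in (1, 3, 5):
    print(f"{n} chained tasks: {chain_success(p, n):.0%} end-to-end success")
```

At five chained tasks, end-to-end success is already down near one in eight, which is why the remaining 34% matters so much more than the 66%.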

Execution is very cheap right now, as I wrote last week. And the remaining bottleneck tasks will likely be strategic judgment, context integration and error correction. In other words, the more we automate routine work, the higher the premium on precise human expertise. Yet by eliminating the junior roles where that expertise is forged, corporations are dismantling the supply line for the very resource they need most.

I unpack this and other trends with Anthropic’s Peter McCrory (dropping next week on YouTube and podcast platforms).


Feel the AGI yet?

Sequoia Capital, the tony Silicon Valley investor, says that AGI is already here, defining it simply as “the ability to figure things out”. The near‑term future they describe is already showing up in my day‑to‑day (see my essay on The work after work):

The AI applications of 2026 and 2027 will be doers. They will feel like colleagues. Usage will go from a few times a day to all-day, every day, with multiple instances running in parallel. Users won’t save a few hours here and there – they’ll go from working as an IC to managing a team of agents.

By that standard, long-horizon agents such as Claude Code would already qualify. The models provide the brain, and the scaffolding – memory, tools and decision-making – lets them act. Together, they can figure out a great deal.

As long-time readers know, I think the term “AGI” is too blurry to be meaningful, let alone carry much practical weight. Unburdened by that term, Sequoia is pointing out something many of us feel. We can expand our cognitive capacity for mere dollars a day. It feels like a watershed.

This week, Cursor used more than 100 agents to build a browser with three million lines of code, parsing HTML, CSS and JavaScript in parallel. It did so for an estimated $10,000 in API calls2. It will not replace Chrome, but it also does not require hundreds of millions of dollars.

The bigger question for 2026 for me is whether the emerging capability will translate across domains.

Read more

🔮 The new moat in 2026

2026-01-17 00:39:19

Earlier this month, I briefed members of Exponential View on the year ahead. I explored how the act of making has been transformed, why authenticity and meaning will become the new scarcity, and whether the foundations of energy and capital can hold. I also addressed the question I was asked most in 2025: when will the AI bubble burst?

Paying members can access the full Q&A session here.

Skip to the best bits:

06:43 From execution to orchestration

09:02 The agentic coding revolution

11:10 The Chief Question Officer

20:30 The new moat in 2026

26:10 How does solar growth affect AI?

28:53 Revisiting the bubble or boom question

Enjoy!

Azeem

🔮 Anthropic's Head of Economics Peter McCrory on their new Economic Index

2026-01-16 02:52:02

Anthropic have just released a new Economic Index report. They’ve analysed millions of real Claude conversations to map exactly where AI is augmenting human work today, and where it isn’t. This is the best empirical window we have into how AI is reshaping work right now.

I spoke with Peter McCrory, their Head of Economics, who led this research.

You can watch the recording here, or wait until next week when we’ll have the edited version out on YouTube, Spotify and Apple Podcasts.

In the meantime, here are three things Peter said that stuck with me:

On AI as meta-innovation: “AI might very well be an innovation in the method of innovation itself.” (38:26)

On human expertise becoming more important, not less: “For the most complex tasks, that’s where the model struggles. That suggests that human expertise to evaluate the quality of the output is more important and you need more human delegation and direction and managerial oversight.” (15:25)

On the risk of de-skilling: “For some jobs, there might be de-skilling where Claude’s taking over the most complex tasks in your job. And that could lead to a greater risk of job displacement or lower wages for that type of role.” (49:13)

Enjoy!

Azeem

📈 Data to start your week

2026-01-12 22:34:01

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

  1. Power is the moat ↑ OpenAI and SoftBank are putting a combined $1 billion into SB Energy1, which will secure purpose-built data centers to scale OpenAI’s compute. (See SNL#556).

  2. Renewable gap ↑ At today’s build rates, China is on track to reach 100% renewables by 2051. The US by 2148 unless it solves permitting, grid build-out and siting.

  3. Battery build-out ↑ Also, China commissioned more than 65 GWh of grid-scale battery storage in December alone – over 15 GWh more than the US added in all of 2025.

Read more

🔮 Exponential View #556: When execution gets cheap. Capital gains, labor pains. AI buys the grid. CRASH clock, taming complexity & new zones of influence++

2026-01-11 11:55:21

Hi all,

Welcome to the Sunday edition.

Inside:

  • What building two dozen apps over the holidays taught me about the shrinking distance between “Chief Question Officer” and no officer at all.

  • Labor pains, capital gains: US GDP is up, employment not so much. What is going on?

  • The data center, a microgrid: AI labs outran the grid, then hit the turbine factories. Now they’re buying the infrastructure companies themselves.

  • Plus: Utah’s bet on AI prescriptions, taming complexity, robots performing surgeries, and new spheres of influence…

In the latest member briefing, I share my expectations for 2026. It’s the year AI stops feeling like a tool and starts feeling like a workforce. Plus Q&A.


Execution is cheap

Over the break, I spun up multiple agents in parallel — one building a fact-checker, another an EV slide deck maker and a third a solar forecast model. All ran simultaneously in the background while I did other work.1 I described the problems, LLMs created detailed product specs and the agents built the apps. In my first meeting back, I demoed two dozen working tools to the team. This follows my rule of 5x – if I do something more than five times, I build an app for it. A year ago, each of those apps would have cost a developer weeks to build.
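For the curious, the fan-out pattern above is easy to sketch. This is a toy illustration, not any real agent API: `build_app` is a hypothetical stand-in for handing a brief to a coding agent.

```python
from concurrent.futures import ThreadPoolExecutor

def build_app(brief: str) -> str:
    # Hypothetical stand-in: in practice this would dispatch the brief to a
    # coding agent and block until the finished app comes back.
    return f"app built from brief: {brief}"

# Three briefs dispatched in parallel, like the holiday build sprint above.
briefs = ["fact-checker", "EV slide deck maker", "solar forecast model"]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(build_app, briefs))

for r in results:
    print(r)
```

The point of the pattern is that the human supplies briefs and evaluates results while the agents run unattended in between.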

My friend Erik calls the human in this arrangement the “Chief Question Officer.” We ask, machines do, we evaluate. Erik’s framing is elegant, but I don’t think it’s true to the moment; we’ve moved even further. I used to check every output against a strict spec; now I mostly trust the agent to catch and fix its own mistakes – and it usually does. Before Opus 4.5, I had to rescue the model from dead ends. Now it asks good clarifying questions, corrects itself and rarely stalls.

This velocity changes behaviors. For instance, I used to frame briefs carefully; now I leave them a bit looser because the agent fills the gaps. I remain the Chief, yet the role feels like a pilot toggling autopilot ever higher. If progress continues, will I always occupy the cockpit? Would stepping aside, ceding the questions as well as the answers, actually increase what gets built?

Erik warns of the “Turing Trap,” the temptation for firms to use AI to mimic and replace humans. He frames this as a societal choice between augmentation and replacement. I agree we shouldn’t drift into replacement. But my holiday build sprint made it clear that convenience pulls hard. The pressure isn’t just from companies; it’s from us, users making choices. When each small handoff to the agent feels free, can we really resist going all the way to full automation?

Here’s this weekend’s essay on the work after work, the value of human judgement and authenticity:

See also:


Capital gains, labor pains

The US economy has decoupled growth from hiring (see EV#545). While the Atlanta Fed projects a massive 5.4% GDP surge for Q4 2025, the labor market has effectively stalled, adding only 584,000 jobs all year – the worst non-recession performance since 2003. The divergence is driven by an acceleration in productivity and back-to-back quarterly declines in unit labor costs. Is AI the cause?

Read more