Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

🔮 Exponential View #557: Starlink, Iran & abundance; AI’s exploration deficit; AGI vibes, aliens, Merge Labs & regulating companions++

2026-01-18 10:23:21

“I love the sharpness of your insights and perspectives.” — Satya C., a paying member

Subscribe now


Good morning from Davos. I got into Switzerland yesterday for a week of meetings and, if past years are any guide, a lot of conversations about AI. As in previous years, I’ll be checking in live most days from the Forum to share what I’m hearing behind the scenes.

And now, let’s get into today’s weekend briefing…


An antidote to forced scarcity

Iran’s regime tried to cement control with a near‑total internet shutdown. Even so, Starlink, smuggled in after sanctions eased in 2022, gave citizens a channel the state couldn’t fully police. The regime has tried to block Starlink by jamming GPS but, as always with anything on the internet, there are workarounds. Starlink created real capacity for coordination, even if a full‑scale revolution didn’t follow.

“Commenting from IRAN. It works just FINE” via Reddit

Something similar happened to energy in Pakistan. In 2024, solar panel imports hit 22 gigawatts, nearly half its total generation capacity of 46 GW in 2023 (see our analysis here). Consumers opted out of a grid charging 27 rupees per kWh; rooftop solar comes in at roughly one‑third of that.
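
A quick check on the arithmetic behind those two claims, using only the numbers quoted above:

```python
# Sanity check on the Pakistan figures; no inputs beyond those quoted above.
solar_imports_gw = 22        # 2024 solar panel imports
grid_capacity_gw = 46        # total generation capacity, 2023
print(f"imports vs grid capacity: {solar_imports_gw / grid_capacity_gw:.0%}")  # ~48%, "nearly half"

grid_price_pkr_kwh = 27                           # grid tariff per kWh
rooftop_price_pkr_kwh = grid_price_pkr_kwh / 3    # "roughly one-third of that"
print(f"rooftop solar: ~{rooftop_price_pkr_kwh:.0f} vs {grid_price_pkr_kwh} PKR/kWh on the grid")
```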

Energy is wealth

In both cases – authoritarian or merely dysfunctional – centralized infrastructure created demand for decentralized workarounds: solar panels in Lahore and Starlink dishes in Tehran. In corrupt or dysfunctional systems, power usually flows from dependency, not from legitimacy. Control the grid, control information, control money and compliance follows. When hardware breaks that dependency, only legitimacy remains – and these states have little. Energy, intelligence and coordination underpin growth; two of the three are now slipping beyond centralized control.

Subscribe now


Escaping the explore-exploit trap

Researchers who adopted AI, tracked across 41 million publications in a Nature study, published three times more papers and received 4.8 times more citations. The trade-off is a 4.6% contraction in topics studied and a 22% drop in researcher collaboration.

AI pushes work toward data‑rich problems, while foundational questions where data is sparse go unexplored. We face an exploration deficit: AI does well at exploiting what we already know, but it is eroding the incentive to discover what we don’t. Four other autonomous research attempts show that models excel at literature synthesis but fail at proposing experiments that could falsify their own hypotheses or at identifying which variables to manipulate next.

My own experience hits a related wall. I’ve used AI tools extensively for research on my new book; they’re excellent at spotting cross‑field patterns. But safety theatre has hobbled them. When I test plausible tech scenarios built on well‑understood trends, models smother answers in caveats for every stakeholder — permitting bodies, developing economies, future generations, and so on. The labs bake this in during post-training through reinforcement learning, optimizing models to hedge and cover. That’s the opposite of exploration. So we have tools that synthesise brilliantly, if bureaucratically, but flinch from the leaps that research needs.

As in scientific labs, so in offices, with the possible decline in junior employment1. Companies chase payroll savings, but are they also cutting off the pipeline to expertise? Anthropic’s latest data suggests that AI succeeds at about 66% of complex tasks; the remaining 34% falls into “O-ring” territory, where one weak link causes the entire system to fail.
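
To see why that 34% matters so much, note that the O-ring logic is multiplicative: a workflow succeeds only if every task in it does. A minimal sketch, using the 66% per-task figure cited above; the chain lengths are my own illustrative assumption:

```python
# O-ring arithmetic: a pipeline succeeds only if every chained task succeeds,
# so per-task success rates multiply. 0.66 is the figure cited above;
# the chain lengths are assumptions for illustration.
per_task_success = 0.66

for n_tasks in (1, 2, 5, 10):
    pipeline_success = per_task_success ** n_tasks
    print(f"{n_tasks:>2} chained tasks -> {pipeline_success:.1%} end-to-end success")

# Output:
#  1 chained tasks -> 66.0% end-to-end success
#  2 chained tasks -> 43.6% end-to-end success
#  5 chained tasks -> 12.5% end-to-end success
# 10 chained tasks -> 1.6% end-to-end success
```

Even a decent per-task rate collapses as tasks chain, which is why the bottleneck tasks, not the average ones, set the value of automation.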

Execution is very cheap right now, as I wrote last week. And the remaining bottleneck tasks will likely be strategic judgment, context integration and error correction. In other words, the more we automate routine work, the higher the premium on precise human expertise. Yet by eliminating the junior roles where that expertise is forged, corporations are dismantling the supply line for the very resource they need most.

I unpack this and other trends with Anthropic’s Peter McCrory (dropping next week on YouTube and podcast platforms).


Feel the AGI yet?

Sequoia Capital, the tony Silicon Valley investor, says that AGI is already here, defining it simply as “the ability to figure things out”. The near‑term future they describe is already showing up in my day‑to‑day (see my essay on The work after work):

The AI applications of 2026 and 2027 will be doers. They will feel like colleagues. Usage will go from a few times a day to all-day, every day, with multiple instances running in parallel. Users won’t save a few hours here and there – they’ll go from working as an IC to managing a team of agents.

By that standard, long-horizon agents such as Claude Code would already qualify. The models provide the brain, and the scaffolding – memory, tools and decision-making – lets them act. Together, they can figure out a great deal.
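
What does “scaffolding” mean in practice? Here is a minimal, hypothetical sketch of the loop such agents run; every name in it (call_model, tools, the message format) is my illustration, not any lab’s actual API:

```python
# Hypothetical agent loop: the model is the "brain"; the scaffolding supplies
# memory (the message history), tools, and the decision of when to stop.
def run_agent(goal, call_model, tools, max_turns=20):
    memory = [{"role": "user", "content": goal}]
    for _ in range(max_turns):
        step = call_model(memory)            # model proposes the next action
        if step["type"] == "final":          # model decides it is done
            return step["content"]
        result = tools[step["tool"]](**step["args"])  # scaffolding executes a tool...
        memory.append({"role": "tool", "content": result})  # ...and remembers the result
    return None  # turn budget exhausted
```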

As long-time readers know, I think the term “AGI” is too blurry to be meaningful, let alone carry much practical weight. Unburdened by that term, Sequoia is pointing out something many of us feel. We can expand our cognitive capacity for mere dollars a day. It feels like a watershed.

This week, Cursor used more than 100 agents to build a browser with three million lines of code, parsing HTML, CSS and JavaScript in parallel. It did so for an estimated $10,000 in API calls2. It will not replace Chrome, but it also does not require hundreds of millions of dollars.

The bigger question for me in 2026 is whether this emerging capability will translate across domains.

Read more

🔮 The new moat in 2026

2026-01-17 00:39:19

Earlier this month, I briefed members of Exponential View on the year ahead. I explored how the act of making has been transformed, why authenticity and meaning will become the new scarcity, and whether the foundations of energy and capital can hold. I also addressed the question I was asked most in 2025: when will the AI bubble burst?

Paying members can access the full Q&A session here.

Skip to the best bits:

06:43 From execution to orchestration

09:02 The agentic coding revolution

11:10 The Chief Question Officer

20:30 The new moat in 2026

26:10 How does solar growth affect AI?

28:53 Revisiting the bubble or boom question

Enjoy!

Azeem

🔮 Anthropic's Head of Economics Peter McCrory on their new Economic Index

2026-01-16 02:52:02

Anthropic have just released a new Economic Index report. They’ve analysed millions of real Claude conversations to map exactly where AI is augmenting human work today, and where it isn’t. This is the best empirical window we have into how AI is reshaping work right now.

I spoke with Peter McCrory, their Head of Economics, who led this research.

You can watch the recording here, or wait until next week when we’ll have the edited version out on YouTube, Spotify and Apple Podcasts.

In the meantime, here are three things Peter said that stuck with me:

On AI as meta-innovation: “AI might very well be an innovation in the method of innovation itself.” (38:26)

On human expertise becoming more important, not less: “For the most complex tasks, that’s where the model struggles. That suggests that human expertise to evaluate the quality of the output is more important and you need more human delegation and direction and managerial oversight.” (15:25)

On the risk of de-skilling: “For some jobs, there might be de-skilling where Claude’s taking over the most complex tasks in your job. And that could lead to a greater risk of job displacement or lower wages for that type of role.” (49:13)

Enjoy!

Azeem

📈 Data to start your week

2026-01-12 22:34:01

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

Subscribe now


  1. Power is the moat ↑ OpenAI and SoftBank are putting a combined $1 billion into SB Energy1, which will secure purpose-built data centers to scale OpenAI’s compute. (See SNL#556).

  2. Renewable gap ↑ At today’s build rates, China is on track to reach 100% renewables by 2051. The US by 2148 unless it solves permitting, grid build-out and siting.

  3. Battery build-out ↑ Also, China commissioned more than 65 GWh of grid-scale battery storage in December alone – over 15 GWh more than the US added in all of 2025.

Read more

🔮 Exponential View #556: When execution gets cheap. Capital gains, labor pains. AI buys the grid. CRASH clock, taming complexity & new zones of influence++

2026-01-11 11:55:21

Hi all,

Welcome to the Sunday edition.

Inside:

  • What building two dozen apps over the holidays taught me about the shrinking distance between “Chief Question Officer” and no officer at all.

  • Labor pains, capital gains: US GDP is up, employment not so much. What is going on?

  • The data center, a microgrid: AI labs outran the grid, then hit the turbine factories. Now they’re buying the infrastructure companies themselves.

  • Plus: Utah’s bet on AI prescriptions, taming complexity, robots performing surgeries, and new spheres of influence…

Subscribe now


In the latest member briefing, I share my expectations for 2026. It’s the year AI stops feeling like a tool and starts feeling like a workforce. Plus Q&A.


Execution is cheap

Over the break, I spun up multiple agents in parallel — one building a fact-checker, another an EV slide deck maker and a third a solar forecast model. All ran simultaneously in the background while I did other work.1 I described the problems, LLMs created detailed product specs and the agents made the apps. In my first meeting back, I demoed two dozen working tools to the team. This follows my rule of 5x – if I do something more than five times, I build an app for it. A year ago, each of those apps would have cost a developer weeks to build.
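
The shape of that workflow is easy to sketch. Below, build_app() is a hypothetical stand-in for one background agent (spec, code, tests, fixes); the names are mine, and the concurrent orchestration is the point:

```python
import asyncio

# Hypothetical: build_app() stands in for one background coding agent that
# takes a plain-language brief and returns a working tool.
async def build_app(description: str) -> str:
    ...  # agent drafts a spec, writes code, runs tests, fixes failures
    return f"app built for: {description}"

async def main():
    briefs = ["fact-checker", "EV slide deck maker", "solar forecast model"]
    # All three agents run concurrently while the human does other work.
    apps = await asyncio.gather(*(build_app(b) for b in briefs))
    print(apps)

asyncio.run(main())
```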

My friend Erik calls the human in this arrangement the “Chief Question Officer.” We ask, machines do, we evaluate. His framing is elegant, but I don’t think it’s true to the moment; we’ve moved even further. I used to check every output against a strict spec; now I mostly trust the agent to catch and fix its own mistakes – and it usually does. Before Opus 4.5, I had to rescue the model from dead ends. Now it asks good clarifying questions, corrects itself and rarely stalls.

This velocity changes behaviors. For instance, I used to frame briefs carefully; now I leave them a bit looser because the agent fills the gaps. I remain the Chief, yet the role feels like a pilot toggling autopilot ever higher. If progress continues, will I always occupy the cockpit? Would stepping aside, ceding the questions as well as the answers, actually increase what gets built?

Erik warns of the “Turing Trap,” the temptation for firms to use AI to mimic and replace humans. He frames this as a societal choice between augmentation and replacement. I agree we shouldn’t drift into replacement. But my holiday build sprint made it clear that convenience pulls hard. The pressure isn’t just from companies; it’s from us, users making choices. When each small handoff to the agent feels free, can we really resist going all the way to full automation?

Here’s this weekend’s essay on the work after work, the value of human judgement and authenticity:



Capital gains, labor pains

The US economy has decoupled growth from hiring (see EV#545). While the Atlanta Fed projects a massive 5.4% GDP surge for Q4 2025, the labor market has effectively stalled, adding only 584,000 jobs all year – the worst non-recession performance since 2003. The divergence is driven by accelerating productivity and back-to-back quarterly declines in unit labor costs. Is AI the cause?

Read more

🔮 The work after work

2026-01-10 17:37:45

“I became a member of Exponential View because you are consistently insightful and forward-thinking.” — Stephen B.

Subscribe now


Nine years ago, I told an audience that the future of human work is artisanal cheese. I got a laugh. GPT-4 didn’t exist yet, the trend line was invisible to most, and “knowledge work” still felt impregnable.

My opening slide from 2017

In a follow-up essay about the path to the artisan economy, I wrote:

In the world of the future, automated perfection is going to be common. Machines will bake perfect cakes, perfectly schedule appointments and keep an eye on your house. What is going to be scarce is human imperfection.

The argument was that when machines handle the specifiable, value migrates to what resists specification. Think of the deliberate imperfection woven into a Persian rug, an act of theological modesty: a flaw introduced because perfection is not for humans to claim. What endures is the texture, the idiosyncrasy, the detail that comes from being human – judgment and discretion that no training manual can encode.

There’s a belief that Persian weavers would deliberately leave small flaws in their rugs, trusting that only God’s creations are truly perfect – and that making a perfect rug would be an affront to God.

When I wrote my first book, Exponential, I spent months wrestling with some chapters. The value wasn’t just in the output but in the burning: my 3am rewrites, the discarded frameworks, the specific frustration of trying to explain exponential curves to readers who had never graphed one.

Ben Thompson is the latest to revive this idea that “true value will come from uniqueness and imperfections downstream from a human.” Call it authenticity – proof that a person was actually here.

The paradox of authenticity

But pseudo-authenticity is about to become very easy and cheap. The AI tools can fake texture and simulate idiosyncrasy. They can produce writing that is OK. With enough context, they might even evoke a sense of interiority. So if authenticity can be manufactured, what makes the real thing real?

The distinction cannot be whether AI was used. That line has already collapsed. AI has long been embedded in our searches and writing tools (e.g., Google or Grammarly), but today it also serves as a brainstormer, a thesaurus, an assistant and more. The question becomes what I, Azeem, bring to the interaction with AI that the machine, as my tool/assistant/collaborator, cannot supply on its own. And how can anyone tell the difference?

A spectrum is emerging in how people use these tools. At one end, you prompt and publish: the model’s output is treated as finished. At the other, you treat the output as a draft and reshape it with your own judgment and experience. Both use AI, but only one has a human signature.

When I first wrote about artisanal cheese, I imagined this shift unfolding more slowly, alongside the automation of routine office work and the spread of robots on assembly lines. I didn’t anticipate that, nine years later, I could build custom software in an hour or produce work that once required entire teams while walking through customs.

Subscribe now

The done list

Over Christmas, I built three applications for my DJ workflow. AudioScoop scans my hard drives for the 4,000 or so pieces of music I have, finds duplicates, and queues them for processing. Another, which I called Psychic Octopus, enriches each track with metadata about percussion density, vocal presence, and drop locations. Shoal generates playlists based on mood trajectories and genre destinations.
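
For the curious, here is a minimal sketch of what the duplicate-finding step in a tool like AudioScoop might look like. The approach (hashing file contents) is my assumption for illustration, not the app’s actual implementation:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

AUDIO_EXTS = {".mp3", ".flac", ".wav", ".aiff", ".m4a"}

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group audio files under `root` by content hash; groups >1 are duplicates."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in AUDIO_EXTS:
            # Whole-file read is fine for a sketch; chunked reads scale better.
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

# dupes = find_duplicates("/path/to/music")  # queue these for processing
```

A byte-level hash only catches exact copies; real music deduplication would more likely use audio fingerprinting, since two encodings of the same track differ byte for byte.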

All of it works. It took perhaps an hour in front of the machine all told, broken up into a couple of chunks. It cost a couple of bucks. And all of it had sat at the bottom of my to-do list for months, maybe years. No one was going to build it for me, and I certainly wasn’t paying a dev shop $15,000 for the privilege. Or rather: all of it works technically. Shoal has appalling taste.

Tom Tunguz calls it moving from the to-do list to the done list. The backlog of things I would never get around to, I now clear in a day.

I am not alone in this. The lead developer of Claude Code revealed that in December, 100% of his code contributions were written by AI. His job is now editing and directing rather than typing syntax. BCG consultants report building 36,000 custom GPTs to assist in work. Stack Overflow, once the repository of engineering knowledge, has sharply declined because people no longer ask questions when the machine answers them in context.

Even the translator function, that thing software engineers did, converting real-world needs into code, is becoming obsolete. I can now build a tool faster than it takes to tell someone the spec. We’re starting to communicate through working prototypes.

In terms of actual velocity, working in this way feels like having 50, maybe 100 people on my team. To give one concrete data point: I had Claude Code work on a project overnight. Thousands of lines of code, passing hundreds of tests, were ready for me to run when I got back to my desk.

Read more