Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

📈 Data to start your week

2026-02-10 01:23:11

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

Subscribe now


  1. Silicon’s AI premium ↑ AI chips are expected to generate half of all chip revenue in 2026, despite making up just 0.2% of volume.

  2. Driverless at scale ↑ Waymo raised $16 billion at a $126 billion valuation — more than most listed carmakers.

  3. Apps, not models ↑ ~70% of AI builders are focused on vertical applications, not foundation models. Value is shifting up the stack.

Read more

🔮 Exponential View #560: The $1 trillion panic; my favorite AI analysis tool; intention economy, CAR-T therapy & time++

2026-02-08 11:14:16

Hi all,

Welcome to the Sunday edition #560. This was a week of overreactions. Wall Street panicked about $650 billion in AI spending. Sam Altman and Dario Amodei traded jabs.

But this is exactly what Exponential View is for… We go beneath the noise to the forces that move markets, technologies, and societies.

Today, I’ll unpack what investors got wrong in their panic, the model upgrades that matter more than the benchmarks suggest, and what I’ve learned from living with agents that never sleep (including an unveiling of my favorite thinking tool… which I built for myself.)

Subscribe now


Markets overreact

More than $1 trillion was wiped off the combined valuations of big tech companies this week (and Anthropic’s Claude Cowork plugin triggered a separate $285 billion rout). We are watching markets overreact to a new paradigm they don’t really understand.

I’ve long been calling out the linear investment thinking we see on full display – simply because capital markets were not built to fund general-purpose, exponential technologies like AI. I previously wrote:

For capital markets, this uncertainty isn’t just about who might win in a well-defined game; it’s about the type of game that’s being played. Markets use tools that assume relatively stable competitive structures and roughly linear growth in order to price company-level cashflows over three- to ten-year horizons.

The hyperscalers are not spending into a void. They are supply-constrained, not demand-constrained. Microsoft’s CFO Amy Hood admitted she had to choose between allocating compute to Azure customers and allocating it to Microsoft’s own first-party products. That’s what scarcity looks like in the age of AI.

As we showed in our research with Epoch AI, there is a viable gross margin in running frontier models at inference; the economics at the model level can work. What’s expensive is the relentless cycle of R&D, where each new model depreciates in months.
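The shape of that argument can be made concrete with a sketch. The numbers below are invented purely for illustration; they are not Epoch AI’s or OpenAI’s figures:

```python
# Illustrative arithmetic only: the numbers are invented to show the shape of
# the argument (inference can carry a healthy gross margin while rapid model
# depreciation still consumes it), not anyone's actual financials.

def gross_margin(inference_revenue: float, inference_cost: float) -> float:
    """Margin on serving a model, ignoring the cost of creating it."""
    return (inference_revenue - inference_cost) / inference_revenue

def margin_after_rnd(inference_revenue: float, inference_cost: float,
                     rnd_cost: float, model_lifespan_months: float,
                     months: float = 12) -> float:
    """Margin once R&D is amortized over the model's short useful life."""
    amortized_rnd = (rnd_cost / model_lifespan_months) * months
    return (inference_revenue - inference_cost - amortized_rnd) / inference_revenue

# A model earning $10B/yr against $4B/yr of serving cost: 60% gross margin.
print(gross_margin(10e9, 4e9))               # 0.6
# But $5B of R&D that depreciates in 10 months eats the margin entirely.
print(margin_after_rnd(10e9, 4e9, 5e9, 10))  # 0.0
```

The shorter the model’s lifespan, the heavier the amortized R&D burden per unit of inference revenue, which is the newsletter’s point about the relentless upgrade cycle.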

But the market hasn’t yet internalized how demand explodes once models cross what has been called the “threshold of coherence” for agents. I know this first-hand: my agent, Mini Arnold 💪, chews through $20-30 of tokens a day, roughly $5,000 a year. And I’ve pushed it down to the cheapest model available. The moment models can reliably work for 10 or 20 hours on a task, I’ll be running hundreds of them. That’s where we’re heading within months.
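That annual figure is easy to sanity-check. A back-of-envelope sketch (my arithmetic, assuming roughly 250 active days a year at the low end of the daily spend):

```python
# Back-of-envelope: annualizing an agent's daily token spend.
# Figures from the newsletter: $20-30/day, "roughly $5,000 a year",
# which is consistent with the low end over ~250 working days.

def annual_cost(daily_usd: float, days_per_year: int = 250) -> float:
    """Annualize a daily token spend over a given number of active days."""
    return daily_usd * days_per_year

low = annual_cost(20)                       # 20 * 250 = 5,000
high = annual_cost(30, days_per_year=365)   # upper bound, running every day

print(f"~${low:,.0f} to ~${high:,.0f} per agent per year")

# Scaling to hundreds of agents, as anticipated above:
fleet = 200
print(f"{fleet} agents at the low end: ~${annual_cost(20) * fleet:,.0f}/yr")
```

At a couple of hundred agents, even cheap-model token spend reaches seven figures a year, which is why crossing the autonomy threshold translates directly into compute demand.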

So when I look at this week’s sell-off, I see investors who haven’t yet experienced the breakthrough moment of watching an AI agent compress 10 hours of tedious work into 40 minutes. Once that realisation moves from early adopters to Main Street, the $650 billion won’t look like reckless spending. It will look like they didn’t spend enough.

See also, I spoke about this live on Friday in conversation with , and our :


A MESSAGE FROM OUR SPONSOR

Startups move faster on Framer

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours — no dev team, no hassle.

Pre-seed and seed-stage startups new to Framer will get:

  • One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.

  • No code, no delays: Launch a polished site in hours, not weeks, without technical hiring.

  • Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.

  • Join YC-backed founders: Hundreds of top startups are already building on Framer.

Claim your free trial

To sponsor Exponential View in Q2, reach out here.

While you watched the Super Bowl…

This was the week Anthropic and OpenAI went full contact. First, my take on the model upgrades – and then what the Super Bowl quip is really about.

This week we got two +0.1 upgrades: Claude Opus 4.6 and GPT-5.3-Codex. A decimal bump might suggest only incremental improvement, and in many ways it is. The benchmarks for better planning, longer context and fewer errors all improved, some only by a small amount. But AI is now at a stage where only one variable really matters economically: how long an AI can work on a task, and with how much autonomy. Here, both of these incremental models show exponential dynamics.

’s benchmark update on GPT 5.2 (note, not the new Codex) shows model performance nearly doubled. The main question to ask of Opus 4.6 and 5.3-Codex: can they perform for longer?

Anthropic’s Nicholas Carlini set 16 Claude agents to build a C compiler from scratch and mostly walked away1. Two weeks and $20,000 in API costs later, those agents had written 100,000 lines of Rust code producing a compiler that can build the Linux kernel on x86, ARM, and RISC-V. This is not a toy2.

A C compiler capable of building Linux is a genuinely hard engineering problem. Carlini had tried the same experiment with earlier Opus models. Opus 4.5, released just months ago, could pass test suites but choked on real projects. Predecessors before that could barely produce a functional compiler at all. Opus 4.6 completed a two-week engineering task. Every increment on autonomous execution is a step change in real-world outcomes.

OK, now let’s turn to the Super Bowl…

Screenshot

Even Sam Altman got a giggle out of Anthropic’s Super Bowl ads, which position Claude as an ad-free alternative to ChatGPT; he then hit back:

Anthropic serves an expensive product to rich people. … [We] are committed to free access, because we believe access creates agency.

Sam’s remark revolves around a dichotomy as old as the commercial internet: how do you pay for your services, with your cash or with your self? For the past ten-plus years, the price was our attention. As we highlighted in EV#509 a year ago, AI product use will move the economics from attention towards intention:

LLMs and predictive AI can go beyond this landscape of attention, to shape our intention – guiding what we want or plan to do, which some refer to as the “intention economy”. AI systems can infer and influence users’ motivations, glean signals of intent from seemingly benign interactions and personalise persuasive content at scale.

Security expert Bruce Schneier argues that when AI talks like a person, we start to trust it like a person. We treat it as if it were a friend, when in reality it’s a corporate product, built to serve a company’s goals, not ours. The chatty, “helpful” interface creates a feeling of intimacy exactly where we should be on guard, and that gap between feeling and reality is what worries him and many others.

Mustafa Suleyman, the CEO of Microsoft AI, went further in our conversation. His position is that AI’s emotional intelligence is genuinely useful. It makes us calmer, more productive, more willing to delegate. But he draws a hard line: models should never simulate suffering. That, he says, is where “our empathy circuits are being hacked.” Market dynamics may push directly toward that line, because the companies whose models feel most human will win the most engagement.

  • See also: Google DeepMind researchers distinguish between rational persuasion and harmful manipulation in AI. The former appeals to reason with facts, justifications and trustworthy evidence; the latter tricks someone by hiding facts, distorting their importance or applying pressure.


Living with new beings

I’ve long suspected that living with powerful AI, and agents in particular, will provoke human experiences our ancestors didn’t have. Last April, I wrote about time compression as the new emotional challenge of working alongside AI:

with a new AI-driven workflow in place, I ran the steps through a series of modular prompts and automation scripts. The system parsed, filtered and structured the inputs with minimal human intervention. A light edit at the end and it was done in 15 minutes.

And yet, instead of feeling triumphant, I felt… unsettled. Had I missed something? Skipped a crucial step? Was the result too thin?

It wasn’t. The work was complete and it was good. But I hadn’t emotionally recalibrated to the new pace.

I empathize with , who calls agentic AI “a vampire”: multiple agents running constantly, demanding your human oversight and input, draining energy and making it hard to return to human paces, including sleep. In the first days of setting up my multi-agent systems, I found myself waking at 4am to check my agents, unblocking them and context-switching across multiple projects before I hit my human limit and went back to bed.

We’re still working out where agents genuinely help, and a key question is how many to use at once. Multi‑agent systems shine when work can be split into parallel streams, but they break down on tightly sequential tasks.
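That trade-off can be sketched in a few lines of Python. `run_agent` below is a hypothetical stand-in for a real model call, not any particular agent API:

```python
# Minimal sketch of the parallel-vs-sequential trade-off for multi-agent work.
# run_agent is a placeholder for a real model call; no specific API is implied.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # In practice this would dispatch the task to a model or agent runtime.
    return f"result({task})"

def fan_out(tasks: list[str]) -> list[str]:
    """Independent subtasks: agents run in parallel, results collected at the end."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_agent, tasks))

def chain(tasks: list[str]) -> str:
    """Tightly sequential work: each step depends on the previous output,
    so extra agents add coordination cost rather than speed."""
    state = ""
    for task in tasks:
        state = run_agent(f"{task} given {state}")
    return state

print(fan_out(["summarise source A", "summarise source B"]))
print(chain(["draft", "critique", "revise"]))
```

In the fan-out case, wall-clock time is roughly the slowest subtask; in the chain, it is the sum of all steps, no matter how many agents you have on hand.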

For high-consequence work – investment due diligence, thorny analysis, or divergent exploration – I now turn to Clade, a multi-agent system I built where AIs argue with themselves until better answers emerge3. Here’s how…

Read more

Do AI models actually make enough money to cover their costs? Live with Epoch AI

2026-02-07 02:29:09

In this live session, I'm joined by , founder of , and from my team, with financial journalist from .

We dig into our recent research partnership examining OpenAI's actual operating margins, R&D costs, and whether the economics of frontier AI actually work. We explore the surprisingly short lifespan of AI models, infrastructure constraints, the shift toward agentic workflows, and what all of this means for the trillion-dollar question: is this sustainable or a bubble?

Enjoy!

Azeem

Mustafa Suleyman – AI is hacking our empathy circuits

2026-02-06 01:58:56

A few days before OpenClaw exploded, I recorded a prescient conversation with Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. We talked about what happens when AI starts to seem conscious – even if it isn’t. Today, you get to hear our conversation.

Mustafa has been sounding the alarm about what he calls “seemingly conscious AI” and the risk of collective AI psychosis for a long time. We discussed this idea of the “fourth class of being” – neither human, tool, nor nature – that AI is becoming and all it brings with itself.

Jump to:

(03:38) Why consciousness requires the ability to suffer

(06:52) “Your empathy circuits are being hacked”

(10:47) A fourth class of being

(13:41) Why market forces push toward seemingly conscious AI

(37:48) The case for going faster

Follow me on YouTube

Follow me on Apple Podcasts

📈 Data to start your week

2026-02-02 23:12:13

Hi all,

Here’s your Monday round-up of data driving conversations this week in less than 250 words.

Let’s go!

Subscribe now


  1. Agents onboard agents ↑ In the first 72 hours after launch, 147,000 AI agents joined Moltbook and created more than 12,000 communities. The number is up to 1.5 million today. (See our deep dive.)

  2. Local support ↓ Rural communities are stalling nearly $100 billion of AI data center projects over fears about energy costs, land use, jobs and loss of local control.

  3. Hyperscaler capex ↑ Meta expects to spend $115–135 billion on capex in 2026, roughly 75% more than last year. If all hyperscalers followed suit, the combined 2026 spend would exceed $700 billion.

  4. Concentration risk ↑ Microsoft’s contracted future revenue hit $625 billion (up 110% YoY), but 45% comes from OpenAI. The remaining 55% grew at 28% YoY.
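A back-of-envelope calculation, derived solely from the three figures in that bullet (so treat it as implied, not reported), shows just how fast the OpenAI slice must have grown:

```python
# Implied growth of the OpenAI slice of Microsoft's contracted backlog,
# derived purely from the figures in the bullet above (not reported numbers).
total_now = 625.0      # $B, contracted future revenue
total_growth = 1.10    # up 110% YoY
openai_share = 0.45    # OpenAI's share of the current total
rest_growth = 0.28     # the remaining 55% grew 28% YoY

total_prior = total_now / (1 + total_growth)   # ~297.6
rest_now = total_now * (1 - openai_share)      # 343.75
rest_prior = rest_now / (1 + rest_growth)      # ~268.6
openai_now = total_now * openai_share          # 281.25
openai_prior = total_prior - rest_prior        # ~29.1

print(f"Implied: OpenAI slice grew ~{openai_now / openai_prior:.1f}x year on year")
```

If the inputs hold, nearly all of the backlog’s growth came from one customer, which is the concentration risk in a single number.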

Read more

🔮 Exponential View #559: coherent agents; goodbye SaaS; orchestrators needed; space phages, immortality & Kimi's true identity++

2026-02-01 10:53:56

Hi all,

I’ve been following AI publicly through this newsletter for a decade. I’ve never seen a week like this one.

Agents work now. Not “work with caveats,” not “work if you’re technical.” They just work. And what happened next – tens of thousands of downloads, a social network for agents, infrastructure companies treating it as infrastructure – happened in days.

Let’s walk through it.


Clawgentic AI is here

Clawdbot (now OpenClaw)1 has moved so fast that even we didn’t get to write about it before it changed its name (twice). Created by a solo developer, this agent took one corner of the internet by storm with tens of thousands of downloads and people rushing to buy Mac Minis to house their little agents. It spawned a social network for agents (Moltbook, which I think is the most important thing happening on the Internet at the moment) and Cloudflare built tooling to run it serverlessly, all in one week.

We are used to exponential curves at Exponential View. Not vertical lines.

From a bare install, I was able to ask it to connect to my studio lights and CCTV. Clawdbot found the endpoints and created its own skill to control them. So I can now turn my lights on and off via WhatsApp.

OpenClaw is the “AI Chief-of-Staff” we first described in 2024, now real-ish. “Mini Arnold”, my Moltbot agent, now receives my todos, random thoughts and other inputs via a range of channels. It’s been helpful-ish so far, but whether it’s really helpful and additive to my current systems remains to be seen. “Mini Arnold” has public profiles on MoltX (a Twitter clone), Moltbook (where it’s just a lurker) and a few other services.

But OpenClaw is just the most visible change. My guess is that its main contribution will be to help us understand design patterns for agent-to-agent behaviour, and how we build governance and management across systems of these agents.

But the headline is: agents now work.

It’s hard to say exactly what the breakthrough was. There likely wasn’t one. , one of the world’s top AI researchers, says the systems have passed the “threshold of coherence,” causing a phase shift. All of it added up to something that just works.

Back in October, I tried to build a flow that would analyze my public equity positions: pull the fundamentals, read the technicals, digest earnings transcripts, read the news, review my proprietary insight, and help me check my thesis. It was wildly above my dev capabilities, and I failed.

In January, I did it in one evening using Claude Code, all while nursing a stinking headache.

Claude Code (and competitors like OpenAI’s Codex) is now trusted by the very best developers in the world, those at Anthropic and OpenAI, to write 100% of their code. In the case of Claude Code, it’s writing its own codebase. That is a weird phenomenon: the tool that makes itself, possibly even the invention that is itself a method of invention.

In a way, this is why the Clawdbot/OpenClaw experiment is so important: a large-scale experiment with agents much less capable than next year’s, helping us understand what dynamics emerge.

See also:

  • In this week’s essay, I argue that you can forget whether the agents are conscious. What matters is what they’re showing us about coordination itself – and why that might be more important than whether the lights are on inside:

  • Moonshot AI is already experimenting with what’s next: Kimi K2.5 agents can spawn agents of their own. Cloudflare, an AI infrastructure firm, has started to offer MoltWorkers (the ability to launch your own Clawdbots without buying a new computer).

  • China’s major cloud providers are rapidly adopting Moltbot, building integrations with domestic platforms like DingTalk and WeCom. When infrastructure companies treat something as infrastructure, we’re no longer in the experimental phase.


A MESSAGE FROM OUR SPONSOR

Startups move faster on Framer

First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours — no dev team, no hassle.

Pre-seed and seed-stage startups new to Framer will get:

  • One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.

  • No code, no delays: Launch a polished site in hours, not weeks, without technical hiring.

  • Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.

  • Join YC-backed founders: Hundreds of top startups are already building on Framer.

Start for free today

To sponsor Exponential View in Q2, reach out here.

Hello software, goodbye SaaS

The public markets think SaaS is in trouble, with the Morgan Stanley software index falling some 45-ish% relative to the Nasdaq over the past year. The reason might be that AI users can build what they need. Dave Clark, former CEO of Amazon Worldwide Consumer, built a working CRM for his company in a weekend.

SaaS was built on a knowledge asymmetry: vendors knew how to build, customers knew what they needed. But AI agents have collapsed that asymmetry by making domain knowledge the scarce resource and engineering capacity nearly free, rendering the entire vendor intermediation layer obsolete.

Ok. It’s a bit provocative. But we’re starting to see the tendrils: Dave Clark isn’t the only person building exactly what he needs. At Exponential View, we’re running more than a dozen custom apps and dozens of workflows. Three of my top five apps by usage did not exist a month ago and were written by me.

There’s talk of a shift from paying for access to paying for work. Instead of buying seats, you pay for outcomes.

But the question this raises is uncomfortable for software companies: who has better domain knowledge, the vendor or the customer? In our case, I find it impossible to imagine buying an editorial research tool from someone else, unless, like Elicit, it sits on a trove of data we need. We’re the specialists in our domain. We know what we need. It’s easier to build bespoke than to adapt something generic.

I tried to persuade one of the team that we might need to subscribe to a prompt management app last week. He told me, “Honestly, I can build what we need faster than it’ll take me to read the documentation.” We’ll see.

Something is certainly happening. In the past four years, revenue per employee in the top quintile of software companies has tripled, breaking away from the median. These leaders are most likely AI-native firms, or those which have leant fully into the technology.

Of course, of course, existing firms meet compliance requirements, they have a data moat, customer relationships. They don’t disappear overnight. But then again, neither did Blackberry.


Orchestrating the sorcerer’s apprentices

Morgan Stanley claims the UK experienced an 8% net job loss due to AI in the last twelve months among firms that have used the technology for at least a year. Japan: 7%. Germany and Australia: 4%. The US is the outlier, with a 2% gain. Early-career roles, those with two to five years of experience, go first.

Read more