Exponential View

By Azeem Azhar, an expert on artificial intelligence and exponential technologies.

🤑 Which companies generate stock market returns?

2025-06-23 10:29:16

Hendrik Bessembinder is an economist who has conducted fascinating research on the determinants of stock market returns in the United States over the past century. His conclusions are pretty simple. Most companies (and by extension most managers) destroy value. Stock market returns are overwhelmingly concentrated: just 2% of public companies drove 90% of wealth creation.

I was reminded of the work after a venture investor retweeted it with the observation that “long-term public market investing is venture capital investing, whether you like it or not.”

The outcomes are more stark than that. Last year, while helping some asset allocators think through the AI cycle, I analysed Bessembinder’s results through my frameworks.

I’ll share a brief excerpt here, as it’s super relevant. Just two caveats: this analysis is over a year old, and Bessembinder’s data only ran to the end of 2022. Nvidia alone has added roughly $3 trillion in market value since 2022, which only strengthens the case I made.

Regarding concentration: 2% of firms, approximately 600, accounted for 90% of all wealth creation. If your portfolio missed that 2% and you had backed the rest of the market, you’d have lost money. And the concentration continues: 23 firms, less than 0.1% of the sample, account for 30% of returns.
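The mechanics of that concentration can be illustrated with a toy Monte Carlo sketch. To be clear, this is not Bessembinder’s method, just a heavy-tailed lognormal draw with parameters chosen to produce a similar skew:

```python
import random

# Toy illustration (not Bessembinder's methodology): draw lifetime wealth
# outcomes for 5,000 hypothetical firms from a heavy-tailed lognormal
# distribution, then measure the share of total wealth created by the top 2%.
random.seed(42)
N = 5000
outcomes = [random.lognormvariate(mu=0.0, sigma=3.0) for _ in range(N)]

outcomes.sort(reverse=True)
top = outcomes[: N * 2 // 100]  # the top 2% of firms: 100 draws
top_share = sum(top) / sum(outcomes)

print(f"Top 2% of firms account for {top_share:.0%} of total wealth created")
```

With a sufficiently fat tail, a tiny minority of draws dominates the total, which is the qualitative shape of Bessembinder’s result; his actual analysis nets returns against Treasury bills, which is why most firms come out negative.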

Selection bias

The majority of top performers share a common trait: they are built upon the breakthrough general-purpose technology of the era. Consider the general-purpose technologies of the past 120 years or so: the internal combustion engine, telephony, electricity, and computing. As the table below shows, returns skew heavily towards those firms based on GPTs.

Read more

🔮 Sunday edition #529: Mainframe-to-Mac; routine cognition commoditized; Meta’s AI bet; the LLM OS frontier++

2025-06-22 11:53:15

Hi, it’s Azeem.

Could large language models form the basis of a new operating system? Meanwhile, Mark Zuckerberg isn’t guessing; he is in beast mode, willing to spend what it takes to secure Meta’s future. The battle to dominate this new computing paradigm isn’t about incremental improvements—it’s about survival. Here’s what you need to know.



Today’s edition is brought to you by Attio.

Attio is the CRM for the AI era. Sync your emails and Attio instantly builds your CRM with all contacts, companies, and interactions enriched with actionable data.


The new OS

We’re entering a new era of computing where LLMs could become the operating systems, argues Andrej Karpathy. His latest keynote frames this clearly: software has moved from hand-coded logic (1.0), through learned neural weights (2.0), and now into prompt-driven LLMs (3.0). His provocation is simple: treat the model as an operating system, a 1960s mainframe with superpowers and cognitive blind spots.

The open question is how you best interact with this operating system. For a mainframe in the 1960s, it was a punch card. Then came the command line. Neither particularly fun nor intuitive. For LLMs, we have already seen interesting, thought-provoking experiments – NotebookLM, for instance, led by Steven Johnson, is in his words “a conduit between my own creativity and the knowledge buried in my sources – stress-testing ideas, spotting patterns, nudging my memory.”1 Another example, noted by Karpathy himself, uses Gemini 2.5 Flash to write the code for a UI and its contents, based solely on what appears on the previous screen. These are still experimental products, and many more will follow. But expect a UX reset as abrupt as DOS giving way to Windows.
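Karpathy’s 1.0-to-3.0 progression is easy to sketch. In this illustrative snippet, `call_llm` is a hypothetical stand-in for any model API (an assumption, not a real library); the point is that the Software 3.0 “program” is an English prompt rather than explicit logic:

```python
# Software 1.0: behaviour is hand-coded logic.
def sentiment_v1(text: str) -> str:
    negative_words = {"bad", "awful", "terrible", "hate"}
    hits = sum(word in text.lower() for word in negative_words)
    return "negative" if hits > 0 else "positive"

# Software 3.0: behaviour is specified in natural language and delegated
# to an LLM. `call_llm` is a hypothetical stand-in for a real model API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model provider here")

def sentiment_v3(text: str) -> str:
    return call_llm(
        "Classify the sentiment of the following text as exactly one word, "
        f"'positive' or 'negative':\n\n{text}"
    )

print(sentiment_v1("what an awful day"))  # → negative
```

The 1.0 version is auditable but brittle; the 3.0 version is flexible but stochastic, which is exactly the trade-off Karpathy’s “cognitive blind spots” caveat points at.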


Mainframe-to-Mac

If today’s large-language-model clusters resemble the 1960s mainframe—powerful but forbidding—the next strategic prize is the equivalent of the original Macintosh: a human-sized device that makes an AI operating system feel intimate. The economics now make that leap plausible. A cluster of eight M-series Macs – about the cost of a family car – can now run frontier models that would have cost about $300,000 on Nvidia H100s just a year ago. Chinese upstart MiniMax even boasts that its new M1 model bested DeepSeek while using only one-third the compute, a data point that hints at personal AI slipping into everyday reach.

Apple smells this inflection – as I discussed in my Live last week. Its “Liquid Glass” interface—dismissed by analysts as animated eye‑candy—looks more like a prototype for ambient computing: an assistant that listens all day, whispers answers, and leaves the screen dark.

Early experiments, however, have struggled to find their footing. Humane’s “AI pin” and Rabbit R1 evoke memories of ambitious yet failed early computing form factors, like Xerox’s groundbreaking but commercially unsuccessful Alto. Perhaps Sam Altman’s and Jony Ive’s upcoming product will chart a different course. What we do know is that these personal AI devices are trending toward local execution.

Just as Apple understood in the 1980s that a GUI demanded the form factor of the Mac, the form factor must again evolve to match this new operating system.


Is Zuck paranoid or desperate?

Intel founder Andy Grove popularised the phrase “only the paranoid survive”. It is the strategic gait you need during a “strategic inflection point”, when the fundamentals of the market change so much that adaptation or obsolescence are the only paths.

So, what to make of Zuckerberg’s $14bn deal to acquire less than half of Scale AI, placing its 28-year-old founder, Alexandr Wang, in charge of his firm's ambitious new “super-intelligence” lab? Is it Grovian paranoia? Or a desperate last gasp, the final lurch of a singleton to pair up before time runs out?

It’s a bit of both—a third strategic boldness mixed with two-thirds desperation. Zuck is bold: his move towards the metaverse four years ago was just that. It was just wrong.

AI is the real deal, and Facebook (as it was then) had made strides in pursuing the technology. But I’ve heard for more than a year that Meta’s Llama team was unhappy and underperforming, and just a couple of months ago, Joelle Pineau, its boss, left.

Cue Meta falling behind the other major firms in offerings and mind share. And Zuck pursuing Perplexity, Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines in an attempt to catch up. Combined with astonishing pay packets for AI researchers, Meta looks desperate.

AI is a “strategic inflection point”, per Grove. It doesn’t matter whether the optics are desperate or brave; what matters is being able to play the new game or face the lingering irrelevancy that technology bestows upon former titans that don’t grok the shift. For Zuckerberg, there is almost no price he won’t pay to play in the next innings.


The price of labor

The cost of routine cognitive tasks is collapsing toward zero. That single shift dissolves scarcity-based business models and severs the old link between hiring talent and corporate growth.

OpenAI’s ex-research chief Bob McGrew argues the scarcity era for knowledge work is ending: as routine cognition prices out at raw compute cost, “agents” will gut the hire-more-brains-to-grow playbook and push value toward proprietary data, deep context and durable networks. Amazon CEO Andy Jassy is already bracing—generative agents, he says, will shrink the corporate back office. The near-term story isn’t robo-layoffs so much as a violent repricing of skills: expense reports and log scraping vanish first, while synthesis, judgment and relationship-building command a premium. For firms, moats now depend less on owning intelligence than on integrating it uniquely and securely; advantage will flow to those who harden cyber-posture, accelerate agent pilots and turn abundant cognition into defensible leverage.



Elsewhere

  • Iran is self-sabotaging its internet infrastructure to slow Israeli attacks. The nationwide shutdown during heightened tensions with Israel shows that kinetic conflict now triggers pre-emptive digital blackouts.

  • Korean researchers say a single high-end AI chip could draw 15,000 watts by 2035 – up from about 800 W today. At that scale, electricity and cooling – not chip fabrication – become the main growth bottlenecks. Missing from their analysis, though, are novel approaches to energy-efficient computing, such as reversible architectures.

  • Waymo shows that motion-forecasting accuracy in its autonomous vehicles follows a power law with training compute – scaling data and parameters lets the cars cope with trickier road chaos.

  • Solar-plus-storage can now undercut grid prices for heavy industry – provided the plants can be run intermittently. Terraform plans 1 MW micro-plants that run intermittently at $200 per kW, claiming a fivefold green dividend for steel, ammonia and related sectors.

  • China’s next five‑year plan (2026–30) makes building its own chip‑making machines a top goal. That includes the EUV lithography tools that print the chip patterns, carve the circuits and check they meet specs—equipment usually produced by Dutch giant ASML. By pouring government money into this gear, Beijing aims to cut its reliance on foreign suppliers and weaken US export‑control pressure in just one product cycle. It will be hard; EUV tools are probably the most advanced machines you can buy today.
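For scale, the chip-power projection a few bullets up (about 800 W today to 15,000 W by 2035) implies a striking compound growth rate, assuming a ten-year horizon:

```python
# Implied annual growth rate if a high-end AI chip's power draw rises
# from ~800 W today to ~15,000 W by 2035 (a roughly ten-year horizon).
p_now, p_2035, years = 800, 15_000, 10
cagr = (p_2035 / p_now) ** (1 / years) - 1
print(f"Implied power growth: {cagr:.0%} per year")  # roughly 34% per year
```

A third of extra wattage per chip, every year for a decade, is why the researchers see electricity and cooling, not fabrication, as the binding constraint.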


Today’s edition is brought to you by Attio:

Attio is the AI-native CRM built for the next era of companies.

Sync your emails and watch Attio build your CRM instantly - with every company, contact, and interaction you’ve ever had, enriched and organized. Join industry leaders like Lovable, Replicate, Flatfile and more.

Start for free today.

1. See here for my conversation with Steven where we discuss this topic.

🔮 Can AI finally clean my inbox?

2025-06-21 11:32:19

In 1954, Dwight Eisenhower articulated a truth that most of us live viscerally: “What is important is seldom urgent and what is urgent is seldom important.” Long before the first email was sent, professionals struggled with this paradox—the ringing telephone, the knock on the door, the crisis meeting that devoured afternoons earmarked for deep thinking. Other people’s priorities have always had a peculiar talent for masquerading as emergencies.

Today, that masquerade has become a 24-hour, 7-day carnival. You unlock your phone to check the weather; 205 taps later, it’s lunchtime and the clouds have rolled in anyway. The digital age didn’t create the urgency trap—it has simply mechanized, digitized and exponentialized it. What was once an occasional ambush is now a constant barrage. The average worker now processes 117 emails daily while smartphones deliver 146 push notifications (181 for Gen Z). Microsoft’s telemetry reveals an interruption every two minutes during work hours. We’ve become Sisyphus, but instead of pushing one boulder uphill, we’re juggling dozens while climbing.

The productivity-industrial complex has responded with libraries of solutions: Getting Things Done, Atomic Habits, The 4-Hour Workweek. We’ve downloaded the apps, bought the planners, attended the seminars. Eat that pomodoro frog.

Yet the to-do lists get longer, the search for clarity grows more frantic, and we remain owned by our inboxes.

Why? Because these systems demand heroic willpower precisely when our cognitive resources are most depleted. Each interruption costs 23 minutes of refocusing time—a tax we pay dozens of times daily. Email and Slack haven’t invented the phenomenon of other people’s priorities invading our day; they’ve simply made it frictionless. Without a mediator between us and the demands of others, we remain the bottleneck in our own lives.

Produced by ChatGPT under my instruction.

AI to the rescue

Read more

💡 The $100 trillion productivity puzzle

2025-06-20 02:17:19

We are only six months into the year, yet AI has already outpaced two decades of ordinary tech cycles. In January, DeepSeek shook the world. Google, OpenAI and Anthropic quickly followed with next-generation models, ones which could command software tools on the internet.

The first wave of agents then appeared: ManusAI—until recently an unknown start-up—unveiled an agent that can autonomously tackle complex tasks, while Anthropic launched Claude Code, a multi-agent system many developers call a dream. The lab race is heating up, but two years after ChatGPT’s debut, the macro-productivity numbers remain stubbornly flat.

This is the capability-absorption gap: frontier labs are racing ahead faster than the traditional economy can keep pace. The current generation of AI is already powerful enough to remake how we work, yet firms are absorbing these capabilities far more slowly than labs are enhancing them. I don’t blame them. Prices halve every six months; models remain stochastic, which complicates reliability; and managerial know-how is scarce.

In today’s post, we will explore this gap and what businesses can do about it.

How far capabilities have already outpaced us

The scoreboard is unambiguous: nearly every public benchmark has moved upward since last year, making previous standards of excellence look decidedly ordinary.

Personal experience underscores this vividly. Occasionally, I run models locally on my laptop, particularly when trapped on flights with poor Wi-Fi. These local models approximate the capabilities of GPT-4 roughly a year ago—lacking the reasoning and tool-use features we have witnessed since. Using these models offline now feels painfully limited compared with the current state of the art.

Practical gains are clear, especially in real-world workflows. In software engineering, AI-powered coding tools have swiftly evolved from basic code hints to managing entire processes—planning, writing, testing and submitting finished work for human review. Systems such as Claude Code and OpenAI’s Codex now automate these tasks end-to-end, reducing humans to supervisory roles. Though not flawless, their rapid improvement streamlines workflows by converting tedious coding into manageable reviews.

These advancements are possible because models can now handle increasingly complex tasks. Consider METR’s latest agent-endurance benchmark, which measures how long AI models can sustain intricate multistep workflows. On this test, top-performing models last three to five times longer than they did only six months ago. This dramatic improvement signals deeper planning capabilities and more reliable tool use.

In short, the capability curve remains steep—and it continues to climb rapidly across multiple dimensions.

Perhaps even more crucially, the unit costs of AI are plunging exponentially. This is central to my definition of exponential technology—not solely about improving performance but about rapidly collapsing costs for a given capability.

Consider ChatGPT’s inference prices, which have roughly halved every six months—outpacing even the historical cost declines in DRAM and solar power. This steep drop stems from relentless algorithmic improvements and fierce competition among providers. Lower prices in turn drive wider adoption: the cheaper an AI agent becomes, the more extensively it can be deployed. (Though clearly there is still room for improvement—I recently burned through $80 of Replit credits in a single evening.)
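A six-month halving compounds fast; here is a minimal sketch of the implied decline, assuming a constant halving period:

```python
def price_after(years: float, start_price: float = 1.0,
                halving_period_years: float = 0.5) -> float:
    """Price after `years`, assuming it halves every `halving_period_years`."""
    return start_price * 0.5 ** (years / halving_period_years)

# Two years of six-month halvings cuts the price 16-fold.
for y in (0.5, 1, 2, 3):
    print(f"after {y} years: {price_after(y):.4f}x the starting price")
```

Three years at this rate is a 64-fold reduction, which is why "too expensive to deploy widely" is rarely a stable objection for long.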

Importantly, even if AI labs suddenly stopped frontier research and model scaling overnight, these cost improvements would continue accumulating. In my back-room conversations, experts are roughly split 50/50 on whether scaling laws alone can carry us all the way to AGI—whatever that ultimately entails. If scaling alone is not sufficient, most believe we might still be only one or two significant conceptual breakthroughs away.

Yet debates about scaling do not fundamentally change the core argument. Future capability enhancements, while valuable, are additive—not prerequisites—for substantial economic transformation. Existing models already surpass what most enterprises can effectively absorb or leverage. At Exponential View, we are still figuring out how to redefine our workflows around o3; I expect most organizations are still navigating how to integrate GPT-4o fully.

McKinsey reports that nearly every company is investing in AI, yet only 1% claim they have fully integrated it into workflows and achieved meaningful business outcomes. And honestly, even that 1% is probably just PR.

Why absorption lags – the three frictions

This sluggish absorption, rather than frontier innovation, is the main reason we can see the AI boom everywhere except in the economic statistics. That remains true even as AI startups rack up millions to tens of billions in revenue at record speed. But the global economy is huge—about $100 trillion a year—so that is a lot of OpenAIs. It will take several years, not a few quarters, before even the fastest-growing AI startups contribute one percent to global income. (Incidentally, I have little doubt they will, and that AI-native newcomers will replace many incumbents across industries over the next two decades, just not in the next two years.)

The rest of the economy is dominated by incumbents. Those incumbents are laced with friction. They need to tackle it. Three distinct institutional frictions underpin this capability-absorption gap, and each one is structural rather than technological.

Read more

📈 Data to start your week

2025-06-16 23:35:59

Hi all,

Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.





  1. AI accelerator boom ↑ AMD CEO Lisa Su projects the AI accelerator market will surge beyond $500 billion by 2028, about four times today’s size.

  2. OpenAI’s rapid ascent ↑ ChatGPT’s explosive growth has propelled OpenAI to $10 billion in annual recurring revenue in under three years.

  3. Efficient chat ↑ The average ChatGPT query uses only 0.0003 kWh of energy, the same as a Google search in 2009.

  4. UK AI investment ↑ Prime Minister Starmer has pledged £1 billion for AI infrastructure alongside £187 million for skills programmes—modest compared with the multi-billion-dollar AI spending by Gulf states.

  5. AI-generated revenue ↑ Brazil’s largest ed-tech company, Qconcursos, earned $3 million in 48 hours after launching a premium app built with the AI low-code platform Lovable.

  6. YC’s AI bet ↑ AI-agent startups now make up nearly half (47%) of Y Combinator’s Spring 2025 batch.

  7. Stablecoin surge ↑ US Treasury Secretary Scott Bessent says dollar-linked stablecoins could reach a $2 trillion market, 28% of today’s US money-market funds.

  8. China leads science ↑ The Nature Index 2025 ranks China (32,122 share points)1 well above the United States (22,083), widening the gap fourfold in just a year.

  9. Methane munchers ↓ Windfall Bio’s methane-eating microbes eliminated more than 85% of methane emissions from a California dairy farm.

Thanks for reading!



1. Share points measure a country’s or institution’s fractional contribution to high-quality research. Each paper counts as 1 point, divided equally among its authors. The total share is the sum of these fractions, reflecting real participation—not just paper counts.

🔮 Sunday edition #528: Superintelligent states; thinking parrots; paywalls vs overviews; stochastic chips++

2025-06-15 12:38:05

Hi, it’s Azeem.

This week’s edition opens with a question that has gone from sci-fi to strategy: who’s ready for superintelligence? Sam Altman says a gentle takeoff has begun. I agree, but the gentleness of this singularity will depend on how quickly institutions, norms, and minds adapt.

From Apple’s quiet pivot toward ambient computing, to Gulf states reengineering their political economies for AI supremacy, to students offloading cognition onto ChatGPT – every system is under pressure to evolve. What fails to adapt, breaks. What adapts too late, loses relevance.

Here’s another Sunday edition to help you make sense of things.





Who’s (not) ready for superintelligence

Sam Altman wrote about “The Gentle Singularity” earlier in the week, arguing that AI has already crossed an event horizon. He maintains the transition will feel manageable. I agree with Sam on the overall trajectory but the “gentleness” will depend on several factors. Unless institutions adapt, governance acts proactively and individuals reshape their cognitive models, the singularity will be jarring for many.

A leading VC investor says his firm’s core thesis is simple: “Incumbents will be nuked; everything will be rebuilt.” In recent conversations with other top Silicon Valley investors, I’ve heard a similar refrain. A new consensus is forming: AI is a sufficiently general-purpose technology to disrupt, invert, and ultimately reinvent nearly every sector of the economy. I share this view. Based on my recent discussions with senior executives at dozens of public companies, I’m convinced that incumbents have not yet grasped the scale or imminence of what is coming. Many are sleepwalking into a series of Blockbuster–BlackBerry moments that will unfold over the next two decades.

Governments need to rise to the challenge. Policy analyst Ed de Minckwitz argues that existing state machinery—reactive, siloed, trained on yesterday’s problems—cannot withstand agentic, self-improving AGI. I proposed the same in my book Exponential four years ago. And in Will MacAskill’s “compressed century,” decades of change condense into a few years.

Some states are acting. Gulf countries are racing to refit institutions and grids. All the while, Europe remains stuck in pre-AI assumptions. Democracies, in particular, must guide voters through this upheaval, as non-democracies will not wait.

Making sense of Apple

Apple’s WWDC announcements left many underwhelmed, revealing how far behind the company is in the AI race. Apple’s new AI study published this week reinforces this positioning. They tested large reasoning models on classic puzzles, such as the Tower of Hanoi and River Crossing, finding that models often fail when solution steps exceed 100, even with the right algorithms. Many read this as confirmation that today’s AIs are mere pattern matchers.

However, a closer look reveals that these breakdowns likely stem from architectural quirks – token-level noise, sampling instability, and an overemphasis on short-form confidence. o4-mini stunned mathematicians at Berkeley by cracking previously unpublished tier-4 FrontierMath proofs – problems that take human experts weeks. These networks can generate genuine insight, albeit through a cognitive architecture that may be alien to ours. The key takeaway of this paper is not in the result but in Apple’s role: it’s analyzing from the outside. Apple is observing limits that others are already working around – less a leader in capability, more a commentator with interface ambitions.

So can Apple stay relevant? The company’s new design language, Liquid Glass, is a radical move toward ambient, post-phone computing. This could be Apple’s path to staying relevant, playing to its unique core strength: being the bridge into a new computing era where interfaces are no longer screens but intelligent agents embedded in our environment. I discussed this future at some length in my Friday live.

This is your brain on AI

An MIT-Media Lab study found that students using ChatGPT to draft essays showed significantly lower α- and β-band EEG connectivity (a proxy for task engagement) than those who wrote unaided, and ChatGPT users later struggled to recall or quote their own work. Generative tools reduce effort, but also offload memory and erode ownership of ideas. So how do we fix this? One answer may be intentional friction: adding pauses, confirmations or limits to restore agency and reduce overload. Designers are beginning to frame friction as a feature, not a flaw. In China, the response has been more blunt – Alibaba, Tencent and ByteDance disabled photo-recognition genAI features during college-entrance exams to prevent cheating. That protects evaluation, but not learning.

To blunt the temptation to “just hit generate,” we have to grade the process, not only the product. Require students to submit their prompts and drafts, defend revisions aloud and pass surprise, AI-free recall checks.


Elsewhere

  • Ray Dalio says the US is at Stage 5 in his disorder cycle—fiscal strain, value gaps, tech shocks—and one step from civil war unless reforms include everyone.

  • Washington and Beijing have struck a temporary six-month agreement that lets US automakers and manufacturers resume rare-earth imports from China.

  • Meanwhile, Brussels is investing €9.5 million in Horizon iBot4CRMs, an AI-driven pilot program that strips neodymium, cobalt and gold from e-waste.

  • OpenAI ARR hits $10 billion. Timely — while compute currently dominates the bill, each GPT-4o query burns just 0.34 Wh (≈0.003¢ at US industrial rates), which means a Plus subscriber must fire off over 400,000 prompts a month before electricity nibbles at the $20 fee.

  • AI’s “move-fast” era is tilting toward a licensing-heavy phase that rewards firms with deep pockets and airtight provenance tools. Meta doubled Scale AI’s valuation with its $15 billion investment for a 49% stake.

  • AWS will cool 120 US facilities with reclaimed wastewater.

  • Worldcoin will deploy “Orb” scanners in major UK cities this year, offering biometric proof-of-human credentials.

  • Researchers have woven hair-thin tellurium wires into a flexible “solar panel” that slides under the retina to restore vision in mice and non-human primates.
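The GPT-4o energy arithmetic in the bullet above holds up; the sketch below assumes a US industrial rate of $0.08/kWh, which is roughly what the quoted ≈0.003¢ per query implies:

```python
WH_PER_QUERY = 0.34   # Wh per GPT-4o query, the figure cited above
USD_PER_KWH = 0.08    # assumed US industrial electricity rate

usd_per_query = WH_PER_QUERY / 1000 * USD_PER_KWH
cents_per_query = usd_per_query * 100
print(f"~{cents_per_query:.4f} cents per query")  # prints ~0.0027

queries_to_eat_fee = 20 / usd_per_query
print(f"~{queries_to_eat_fee:,.0f} queries to consume the $20 Plus fee")
```

At 400,000 prompts a month the electricity bill comes to roughly $11, so comfortably "over 400,000" before the full $20 is consumed; it takes around 735,000 to eat the whole fee.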

Thanks for reading!

Azeem

P.S. Do follow me on my new YouTube channel.

