2025-08-20 22:58:31
Hi, Azeem here with a special guest opinion piece.
Nowhere are today’s global rifts more visible than in the US. To help make sense of this moment, I’ve invited one of the sharpest macro thinkers on system change, Peter Leyden, to share his perspective on why America’s future could be brighter than the fractured present suggests.1
Peter is a veteran futurist. He came to San Francisco at the dawn of the digital revolution to work with the founders of WIRED magazine, and has spent decades at the front edge of technological and societal change. Peter now writes The Great Progression newsletter, where he explores in further depth the ideas presented here.
Enjoy Peter’s essay – and do forward it if you find it useful.
Azeem
By Peter Leyden
America is entering its fourth great reinvention. Each time in the past, the pattern has been the same: a wave of transformative technologies arrives and the old order cracks apart in conflict. After a bruising struggle, new rules, institutions and operating models take hold, reshaping the deep structures of the economy and society.
The Revolution, the Civil War and the Great Depression were all moments when Americans tore down an exhausted model and replaced it with one fit for a new age. We are there again today. Donald Trump and the MAGA movement are acting as a demolition crew in Washington, smashing institutions that no longer fit.
The problem is, they are not builders.
In San Francisco, technologists and systems thinkers are sketching the outlines of the next American operating system, built around artificial intelligence, clean energy and biotechnology.
I focus here on technology because it is the accelerant: the force that speeds up and magnifies change.2 Technology opens the door to reinvention, but it does not dictate what walks through. AI can be a tool of empowerment or surveillance; clean energy can heal the planet or scar it anew; bioengineering can widen inequality or cure disease.
Invention is relatively easy. Steering it is hard. The old order is collapsing; the contest is over what replaces it.
A wave of general‑purpose technologies sparked each of America’s three past reinventions, and each lit a political firestorm before settling into decades‑long booms. Steam engines and mechanized mills in the early Republic multiplied human muscle with coal power and sent the economy surging. Railroads, Bessemer steel and the telegraph shrank the continent, creating a national market overnight. After World War II, the internal‑combustion engine, petrochemicals and nuclear science underwrote suburbia, the interstate highway system and a rules‑based world order.
Every time, the economic consequences of new technology collided with entrenched systems. Patriots faced loyalists. The Union fought the Confederacy. Roosevelt’s New Deal internationalists battled America First isolationists. These were not gentle transitions. It took a Civil War and 750,000 dead to uproot an economy built on slavery and clear space for free‑labor manufacturing.
We should expect similar passions today, though we have so far avoided comparable violence. The red line now is carbon. Oil states back Trump so fiercely because clean energy threatens to dismantle their world, just as slavery’s end dismantled the South.
Yet history also shows what comes after the struggle: an era of widespread innovation and broad‑based prosperity. First, the early Republic boom brought canals, turnpikes and the cotton‑gin productivity leap, ushering in the first great American economic expansion. Second came the Gilded Age: US steel output overtook Britain’s, continental rail coverage hammered freight costs, and the second great economic expansion gave rise to the idea of the American dream.
Lastly, after the conflict of the 1930s and World War II came the Golden Age of Capitalism. Here we had an average growth rate of 4%, the interstate highway system and a Marshall Plan world order that lifted all boats.
In each case, these reinventions were enabled by a new generation of builders often labeled “progressives.” Regardless of the era or the party they were identified with, they were pro‑tech, pro‑economic growth, pro‑change and pro‑innovation – in other words, pro‑progress.
Not every upheaval reaches that scale. Reagan’s revolution in the 1980s revitalized the postwar order but did not replace it. And reinventions can unravel. Reconstruction’s promise of racial equality was violently reversed; the Progressive Era’s reforms were rolled back in the 1920s. Renewal is never destiny. It is a possibility, one that depends on the old order being cleared away and the new one being deliberately built.
The best way to understand Trump and the MAGA movement is as a demolition crew. They are tearing down old systems, not building new ones. Trump is out to dismantle the bureaucratic welfare state, abandon the Pax Americana and strip away constitutional constraints that have framed American politics for eighty years.
How did we get here? Look at the big picture of the last quarter‑century. An $850 billion Pentagon budget, larger than the next seven nations combined. More than $35 trillion in federal debt, swelling by $2 trillion a year. Medicare and Social Security, designed for a 1940s‑sized elderly population, now strain under a baby‑boomer bulge twice as large. The Cold War security machine and mid‑century welfare state were pillars that served for decades; now they are relics.
For much of the 21st century’s first quarter, voters watched politicians from both parties paper over these cracks. Anger festered. Young progressives rallied to Bernie Sanders after the 2008 crash and Wall Street bailouts. White working‑class voters, promised that globalization would “lift all boats,” instead saw factories shutter and wages stagnate, and so turned to Trump. Left and right alike smelled the same rot: systems that served their custodians, not their citizens.
Out of this anger came the populists, and Trump arrived with a wrecking ball, tearing down the Pax Americana abroad and the welfare state at home. Trade pacts? Rip them up. Foreign commitments like NATO? Pull back unilaterally. Civil‑service protections? Freeze hiring, gut agency budgets, and outsource what’s left.
The point to keep in mind is that American populists focus on one thing: channeling anger at old systems and tearing them down. They do not build what comes next. Progressives do.
To put it charitably, today’s progressives are in a transition phase. For the past 40 years they have mostly been squeezing the last juice out of earlier ideas: rewiring the New Deal into the Green New Deal, pushing civil rights into new frontiers from race to gender to sexuality.
The new 21st‑century progressives likely to emerge will echo earlier eras: pro‑tech, pro‑growth, system‑builders at heart. Like Lincoln’s Republicans or Roosevelt’s Democrats, they aim to channel the values of equality and opportunity into the 21st century using transformative technologies: AI, clean energy and bioengineering. These technologies aren’t inherently progressive. The challenge is to bend these forces toward humane ends.
You can already glimpse the rebuild in San Francisco. Don’t be fooled by high‑profile tech titans who jumped on the Trump train. Much of the Bay Area’s innovation economy remains left of center and offers prototypes of the new progressive movement. The idea of universal basic income, for instance, re-emerged in tech circles in the last decade when early AI pioneers foresaw job displacement. Many in tech are now deeply involved in next‑generation progressive causes: the YIMBY housing push and the abundance movement, which asks how to expand prosperity in a high‑growth economy rather than fight over slices of a static pie.
Much of the energy isn’t about the tools themselves but the systems around them. Coders sit alongside economists, strategists and intellectuals in a constant churn of meetups, salons and summits. The debates are fundamental: should AI wealth be pre‑distributed by taxing tokens? How do young workers climb when AI removes the bottom rung of knowledge work? Should education be rebuilt around AI tutors that follow students for life? We are already seeing these questions reshape Californian politics. Environmental laws once championed by progressives are being dismantled to make way for housing and clean‑energy infrastructure.
California has often been America’s future; it is playing that role again. The story unfolding here is largely missed by outsiders, especially the media on the East Coast and in Europe. In Washington, the Old World is fighting for its survival through the very figure tearing it down: Trump. But while attention is fixed on this spectacle of destruction, the next story – one of reinvention – is already emerging in San Francisco. If you want to see the future of America, there is only one place to look.
Peter Leyden is a tech expert on AI and other transformative technologies, and a thought leader on a more positive future. He is the author of The Great Progression: 2025 to 2050, to be published by HarperCollins, as well as a Substack series of the same name. Peter came to Silicon Valley to work with the founders of WIRED at the start of the Digital Age and later founded two of his own media startups as he followed the front edge of technological change.
The views expressed in the guest commentary are the author’s own and do not necessarily reflect those of Exponential View.
History shows reinvention is never only about machines. The Revolution carried Enlightenment ideals; the Civil War was a moral reckoning with slavery; the New Deal was forged in response to global economic collapse.
2025-08-18 22:05:12
Hi all,
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
Data centers ↑ Morgan Stanley expects global spending on data centers to hit nearly $3 trillion between now and 2029 – about the size of France’s economy. Approximately half of this is expected to come from Big Tech capex.
Cobots ↑ Collaborative robots, designed to operate safely alongside humans in a shared workspace, made up 24% of all robot orders in North America in Q2 and 15% of revenue.
Tech funding ↑ In July, tech firms pulled in a record 50 mega-rounds, each one worth $100 million or more.
Renewables’ costs ↓ In 2024, over 90% of new renewable capacity was cheaper than fossil alternatives.
2025-08-17 10:00:22
“The level of deep insight into technological trends and forward-looking info you share is unmatched. No one is doing it like you.” – Susan, a paying member
Hi all,
Welcome to our Sunday edition, where we explore the latest developments, ideas, and questions shaping the exponential economy.
Enjoy the weekend reading!
Azeem
Four of the top ten YouTube channels in May featured AI-generated content in every video. This isn’t inherently bad. But distortion becomes a problem when it’s so pervasive and cheap that it erodes trust in a society where trust is already scarce.
The AI Security Institute, with Oxford University and MIT, found that top AI chatbots can sway political opinions in under 10 minutes, and 36-42% of those shifts persisted a month later. AI is probably being used to shape opinions under the radar.
A new paper further illustrates the systemic impacts of this. A randomized field experiment with Süddeutsche Zeitung (SZ), one of Germany’s largest and most influential daily newspapers, found that when readers learned how hard it was to distinguish AI-generated images from real ones, their trust in news content dropped – even for SZ. Yet those same readers visited SZ more in the days after (up roughly 2.5%) and were more likely to keep their subscriptions five months later!
Trust in content isn’t the same as trust in a source’s ability to navigate misinformation. At the system level, this could change the structure of the media landscape. Baseline trust sinks, but credibility becomes the scarce asset. A few outlets may consolidate audience loyalty, while the rest are lost in noise.
This is fundamentally different from the “firehose of falsehood” strategy. The Russian propaganda machine used it to overwhelm audiences and erode their ability to distinguish truth from falsehood. Steve Bannon adapted and amplified this approach in the US, describing it as “flooding the zone with shit,” to keep the media reactive and off-balance. His goal was to bury audiences in chaos so it becomes easier for a particular faction to control the narrative. Today’s AI-driven distortion could breed chaos, but it may also raise the competitive value of any source that can convincingly cut through it.
See also my conversation about the future of media and AI with Nicholas Thompson, CEO of The Atlantic.
Unemployment is increasing, but it’s not because of artificial intelligence. If you’re in an AI-exposed occupation, your unemployment rate is dramatically lower than that of workers not exposed. This supports an idea from economist David Autor that I’ve long promoted: AI is likely having a bigger effect on tasks than on employment.
If we take this view, organizations should make it a priority to use AI to boost cognitive abilities, while also protecting what makes us uniquely human.
In our reflection on GPT-5, we wrote:
Some of GPT‑5’s biggest gains are invisible. When it anticipates needs and makes decisions for you, you do not feel the friction it removes; you just experience the smoother path. But that invisibility makes it risky: the more the model guides you, the more your own curiosity and agency can atrophy.
A new study out this week makes that risk tangible.
2025-08-15 01:47:33
The Financial Times quoted me last week calling GPT-5 “evolutionary rather than revolutionary.” Then it twisted the knife: “Release of eagerly awaited system upgrade has been met with a mixed response, with some users calling the gains ‘modest.’”
“Modest” is one of those quietly loaded words. The Oxford English Dictionary defines it as ‘relatively moderate, limited or small’. ‘Relatively’ does a lot of work there. Compared with GPT-4 in March 2023, GPT-5 is a huge leap. Compared with bleeding-edge models released just months ago, it feels incremental. And compared with the sci-fi hopes, it’s restrained.
The funny thing is, GPT‑5 does what no model before it could, yet in the same breath makes its own shortcomings impossible to ignore.
Over the past week of using GPT‑5, I’ve been tracking these tensions. Today, I’ll break down the five paradoxes that define GPT-5’s release and help explain why so many people find it confusing.
These five paradoxes show how this can be the most capable model so far, yet still earn that stubborn label: ‘modest’.
The smarter AI gets at our chosen benchmarks, the less we treat those benchmarks as proof of intelligence.
We measure machine intelligence through goalposts – tests, benchmarks and milestones that promise to tell us when a system has crossed from ‘mere software’ into something more. Sometimes these are symbolic challenges, like beating a human at chess or passing the Turing test. Other times they are technical benchmarks: scoring highly on standardised exams, solving logic puzzles or writing code.1
These goalposts serve two purposes: they give researchers something to aim for and they give the rest of us a way to judge whether progress is real. But they are not fixed. The moment AI reaches one goalpost, we often decide it was never a real measure of intelligence after all.
The first goalpost to shift was the Turing test.
Proposed in 1950 by Alan Turing, the “imitation game” offered a practical way to sidestep the slippery question “can machines think?”. Instead of debating definitions, Turing suggested testing whether a machine could respond in conversation so convincingly that an evaluator could not reliably tell it from a human.
I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’ The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used, it is difficult to escape the conclusion that the meaning and the answer to the question ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the ‘imitation game.’
For decades the test stood as the symbolic summit of AI achievement. Then, in June 2014 – four years before GPT‑1 – a chatbot named Eugene Goostman became the first to “pass”. Disguised as a 13‑year‑old Ukrainian boy, it fooled 33% of judges in a five‑minute exchange. Its strategy was theatrical misdirection: deflect tricky questions, lean on broken English and exploit the forgiving expectations we have of a teenager. As the cognitive scientist Gary Marcus observed at the time:
The winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence.
Earlier this year, a paper claimed that GPT‑4.5 passed a more rigorous, three‑party Turing test, with judges rating it human in 73% of five‑minute conversations. Whether this counts as a pass is still contested. On the one hand, Turing’s test measured substitutability – how well a machine can stand in for a human – not genuine understanding. On the other, critics argued that short exchanges are too forgiving and that a meaningful pass would require longer, open‑ended dialogue.
But if we say that AI has passed the Turing test, what does that even mean? The victory feels hollow. Once systems beat the Turing test, we moved the bar: from conversation to formal benchmarks. LLMs went on to crush many of these, too. Yet the same pattern holds: the smarter the system, the less its achievements feel like proof of intelligence.
Here the conversation shifts from tests we can name to a target we cannot agree on. Part of the problem is definitional: there is no consensus on what artificial general intelligence is. Is it matching human cognition across all domains, or being a flexible, self‑improving agent? Intelligence resists collapsing into a single score. Decades of IQ debates show that. Is AGI a universal problem‑solver, an architecture mirroring human thought, or a form of consciousness? With such a hazy target, success will always feel provisional.
Sam Altman now calls AGI ‘not a super useful term.’ I’ve long found the term problematic; it’s not an accurate descriptor of what LLMs are or their usefulness. Suppose a system were truly “intelligent” in the human sense. Couldn’t we train it only on knowledge up to Isaac Newton and watch it rediscover everything humanity has learned in the 300 years since? By that standard, GPT‑5 is nowhere close – and I did not expect it to be. Its goal was not raw knowledge accumulation, which arguably defined GPT-4’s leap. GPT-5’s focus was on action: better tool use and more agentic reasoning.
GPT-5 performs better on some benchmarks measuring agentic tasks and is on a par with others.2 And compared with the last generation, the raw jumps are striking: on GPQA Diamond (advanced science questions), GPT-4 scored 38.8%, GPT-5 scored 85.7%;3 on ARC-AGI-1, GPT-4o managed 4.5%, GPT-5 hit 65.7%.
Yet the wow factor is muted. Most people are not measuring GPT‑5 against GPT‑4 from March 2023. They are stacking it against o3 from just a few months ago. Frontier models arrive in rapid succession, the baseline shifts at speed and each breakthrough lands half‑forgotten. In that light, even a giant’s stride can feel like treading water.
As systems grow more reliable, their rare failures become less predictable and more jarring. Trust can stagnate – or even decline – despite falling error rates.
On paper, GPT‑5 should be more reliable than previous LLMs. OpenAI’s launch benchmarks suggest it hallucinates far less than o3, especially on conceptual and object‑level reasoning.
In my own use, hallucinations feel rarer. GPT‑5 Thinking aced a 51‑item, nine‑data‑point analysis I gave it and added a derivative analysis I had not asked for. Claude Opus 4.1, by contrast, miscounted the items and gave weaker recommendations. GPT‑5’s output took me 30 minutes to verify in Excel – not because it was wrong, but because the data format was awkward. Across simpler tasks, this is the pattern: more accurate, more often.
The problem is when it is wrong. During a recent trip to Tokyo, I asked GPT‑5 to name the city’s oldest Italian restaurant while standing in it. It named a different place, yet also knew the full history of the restaurant I was in when I prompted it harder. The same kind of jarring mistake popped up in OpenAI’s live demo, where GPT‑5 botched the Bernoulli effect. These errors are not frequent, but they are unpredictable, and that makes them dangerous.
Psychologists call this automation complacency: the more reliable a system is, the less closely we watch it, and the more likely rare errors are to slip through. With GPT‑4‑level error rates, I stayed alert for slip‑ups; with GPT‑5, I can feel myself letting my guard down. The brain’s ‘error detection’ system habituates, so vigilance drops.
This risk compounds in agentic workflows. Even with a 1% hallucination rate, a 25-step autonomous process has roughly a 22% chance of at least one major error. For enterprise use, that is still too high. Last week, AWS released Automated Reasoning Checks, a formal-verification safeguard that encodes domain rules into logic and mathematically tests AI outputs against them. They tout “up to 99% accuracy.” This will help, but it’s not the last word.
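The compounding arithmetic behind that 22% figure is worth seeing. A minimal sketch, assuming each step fails independently at the same rate (real agent errors are often correlated, so treat this as illustrative):

```python
# Chance of at least one error in an n-step agent workflow,
# assuming each step fails independently with probability p.
def chain_error_rate(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(f"{chain_error_rate(0.01, 25):.1%}")  # ~22.2% for 25 steps at 1% per step
```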
Nevertheless, when mistakes are rarer yet less predictable, perceived reliability does not climb as much as the benchmarks suggest. That is why GPT‑5’s improved accuracy can still feel like a modest leap. The progress is real, but it does not fully translate into user confidence.
The more capable the assistant, the more its “helpful” defaults shape our choices – turning empowerment into subtle control.
Walking through Tokyo, I remarked to GPT-5 how empty the streets felt for a metro area of 37 million, especially compared with London’s 15 million. It explained the factors: rail-first transport moving 40 million passenger trips daily, vertical land use and zoning policies that suppress car dominance. Then it added: “If you like, I can map a time-and-place ‘hidden density’ walk for Minato so you can actually see where the people are – without it, you’re just walking the surface layer of a multi-layered machine.”
I hadn’t thought to ask. Within moments it generated a route that revealed the city’s hidden bustle. This is GPT‑5 at its best: it anticipates needs and surfaces options I did not know I wanted.
Some of GPT‑5’s biggest gains are invisible. When it anticipates needs and makes decisions for you, you do not feel the friction it removes; you just experience the smoother path. But that invisibility makes it risky: the more the model guides you, the more your own curiosity and agency can atrophy. In the case of the Tokyo guide, GPT-5 offered a seamless extension to my day. But the smoother the path it lays, the less likely we are to notice how our exploration has been framed by its defaults.
The trade‑off is subtle but important. Technologies that take over part of the thinking process can degrade the skills they replace. GPS is the classic example. Dahmani and Bohbot’s three‑year study found heavy GPS users showed steeper declines in hippocampus‑dependent spatial memory, weaker cognitive‑map formation and less landmark encoding.
A meta-analysis of 58 “default effect” studies, spanning 73,000 participants, found the pre-selected option was chosen 27 percentage points more often than the alternative. In other words, defaults chart our course a great deal of the time. With GPT-5 as the default architect of your intellectual journey, that steering happens in how you frame problems, what answers you see first and which ideas you never think to explore. The MIT Media Lab recently found that people writing with ChatGPT showed reduced EEG activity in executive-control regions and produced more formulaic text. Over time, the risk we run into is asking fewer of our own questions.
A friend of mine, a physicist and founder, found that even when fed strong counter-arguments, GPT-5 struggled to adapt to reasoning outside its learned patterns.
This means that even if you prompt GPT-5 for broader thinking – ‘Give me five vastly different perspectives on Japanese demographics’ – the expansion stays within its training distribution.
Here, there is a subtle dependency loop. If we offload more exploratory thinking to the AI, our own skills in framing problems, critical evaluation and creative inquiry might atrophy. In this case, the offload target is our entire conceptual map of the world. The less we notice how these models steer us, the more likely we are to mistake a narrow, AI-framed view of the world for the broad richness and texture of reality.
Technical benchmarks show GPT‑5’s biggest gains on the most demanding, edge‑case tasks (raising the ceiling), but most people notice the improvement in everyday, low‑complexity use (raising the floor).
In our initial impressions of GPT‑5 we highlighted:
The benchmark we’re paying most attention to is METR’s ‘Measuring AI Ability to Complete Long Tasks’ benchmark. GPT-5 can now complete software tasks averaging 2 hours and 17 minutes in length with a 50% completion rate, up from 1 hour and 30 minutes for o3. But if you want an 80% completion rate, the maximum task length drops to tasks of about 25 minutes — only slightly longer than o3 and Claude Opus 4, which average around 20 minutes.
Yet an EV member replied:
I actually had the opposite reaction on raise the floor vs. ceiling. GPT-5 feels more like raise the floor than raise the ceiling. It’s excellent at getting things right one-shot. It follows instructions and adds some flourishes that are helpful rather than overdoing it. It also extrapolates from minimal prompts a lot better.
Both are true because they measure different things. METR’s test gauges how often a model can complete long tasks successfully. That is useful for spotting how high the ceiling has moved. But it misses everyday reality: most users neither work at the benchmark’s ceiling nor on 25-minute tasks. They work in the zone of one-shot tasks, minimal-prompt extrapolation and instruction-following – where GPT-5’s improvements are most tangible.
Scale into agentic applications with extended task chains and the limits reappear. Reliability still drops off too sharply for fully autonomous, multi-hour workflows. The ‘floor-raising’ effect is real, but invisible to METR, which looks at the ceiling rather than the fluency or accuracy of everyday exchanges. Benchmarks and lived experience can both be true, but they often describe different domains.
In daily use, GPT‑5 is noticeably better. For quick, contained tasks, it feels faster, sharper and less needy. But for long, unsupervised chains of reasoning and action, the higher ceiling is there, perched on a floor that has not risen enough to make full autonomy dependable.
As AI capabilities expand, the remaining gaps become more visible and salient, making genuine progress feel smaller than it is.
If you’ve read this far, you’ve probably noticed that most of these paradoxes do not emerge in spite of progress; they emerge because of it. GPT‑5 is faster, more reliable and more capable than any model before it. Yet each gain sharpens the outline of what it still cannot do.
This is the fundamental flaw with AGI: it is a self‑erasing target.
The closer we get, the less progress feels like progress, because our focus shifts to what is still missing. Each new capability makes the gaps more visible. The gaps that remain are disproportionately hard. Early progress knocks off the easy‑to‑scale capabilities. What is left are the hardest parts of ‘general intelligence’, and they may not emerge from the current paradigm at all.
Today, these gaps include:
No reliable long-term memory
Lack of common-sense reasoning
Struggles to update beyond pretraining priors, no matter how compelling the new evidence
Some gaps may not close through scale alone. LLMs are extraordinary pattern machines, but that does not guarantee they can sustain memory, reason across time or adapt to new environments. Those capabilities may require different architectures entirely: world models, richer agent frameworks or hybrids that fuse symbolic reasoning with neural networks. Many have called for this, including Yann LeCun, Gary Marcus, Francois Chollet and Fei-Fei Li.
Seen this way, ‘modest’ progress is a signal. It tells us not only how far the current paradigm has carried us, but where it may be running out of road. The next breakthroughs could look less like ‘GPT‑6’ and more like something we do not yet have a name for. Perhaps world models; perhaps something we have yet to discover.
Which is why, over the past week, the same paradoxes kept surfacing in practice. Each one shows how progress and perception can move in opposite directions.
To dig deeper into this, check out our essay from a couple of years ago when we wrote about AI’s benchmarking challenge.
For example, on Tau2-bench, which tests agentic performance across domains, GPT-5 outperformed o3 in telecoms tasks, was only marginally better in retail and performed worse in airline tasks.
Without tools.
2025-08-11 21:09:40
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
Nvidia on top ↑ Nvidia now makes up nearly 8% of the S&P 500, the highest weighting of any stock in the index’s history.
Market concentration ↑ The net income of the S&P 500’s ten largest companies has grown by ~180% since 2019, while the rest of the index grew by just ~45%.
Cloud competition ↑ Microsoft Azure captured 44.5% of new cloud revenue in Q2, outpacing the current market leader AWS (30%).
2025-08-10 10:09:58
“Always insightful and refreshingly free of an agenda other than the intellectual pursuit of knowledge in how tech shapes our world.” – Vincent, a paying member
Hi all,
Welcome to our Sunday edition, where we explore the latest developments, ideas, and questions shaping the exponential economy.
Enjoy the weekend reading!
Azeem
This week’s big news was the release of GPT-5 (my initial take here, quoted in the FT here), but something bigger is brewing beneath the surface: a fundamental transformation of how the Web itself operates.
A paper I read this week takes stock of the transition from the recommendation paradigm to the action paradigm – a shift toward an agentic Web with fundamentally different incentive structures.
Yang et al. outline three enablers of this change: intelligence, interaction and a nascent economy. The first two form a technical layer: agents that can reason and plan, and protocols that let them communicate. The intelligence pillar is progressing – models can handle longer tasks over time, although reliability remains a concern.
The interaction pillar has some firm foundations. Agents need a shared grammar to talk to websites, APIs and one another. Nascent protocols such as MCP (agents ⇄ tools) and A2A (agent ⇄ agent discovery) promise a common interaction layer. Security remains a worry (see the lethal trifecta), but protection is improving.
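To make the “shared grammar” idea concrete: MCP messages ride on JSON-RPC 2.0, so an agent invoking a tool a site exposes looks roughly like the sketch below. Only the envelope and the tools/call method come from the MCP spec; the tool name and arguments are hypothetical.

```python
# A minimal sketch of an MCP tool invocation. MCP uses JSON-RPC 2.0;
# "tools/call" is part of the spec, while the tool name and
# arguments below are hypothetical examples.
import json

mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_article",  # hypothetical tool exposed by a publisher
        "arguments": {"url": "https://example.com/story"},
    },
}
print(json.dumps(mcp_request, indent=2))
```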
The economy pillar is by far the least developed. Attention (and money) flowed through search links and social feeds in the previous era. Now, in some cases, AI answers are swallowing up as much as half that traffic. Publishers have responded by blocking crawlers or charging for access. Cloudflare introduced ‘pay-to-crawl’ gates. While it’s a good experiment, it has some flaws, as we discussed a few weeks ago:
A $0.01 crawl fee might sound small, but it’s ~20x more than the average revenue a human visit generates. It can get much worse: an AI might need 10 pages to answer a question, making it 200x more expensive. So realistically, will AI companies pay that much per page? Probably not. More likely, they’ll keep striking licensing deals or stick to scraping public-domain content.
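For the curious, those multiples fall out of simple division. A back-of-the-envelope sketch; the ~$0.0005 average revenue per human pageview is an assumption implied by the 20x figure, not a reported number:

```python
# Back-of-envelope: a $0.01 per-page crawl fee vs. the revenue a
# human pageview generates (~$0.0005 here, an assumed figure implied
# by the "20x" multiple quoted above).
CRAWL_FEE = 0.01               # dollars per page crawled
HUMAN_VISIT_REVENUE = 0.0005   # dollars per human pageview (assumption)

one_page = CRAWL_FEE / HUMAN_VISIT_REVENUE        # ≈ 20x
ten_pages = 10 * CRAWL_FEE / HUMAN_VISIT_REVENUE  # ≈ 200x
print(f"1 page: {one_page:.0f}x, 10 pages: {ten_pages:.0f}x")
```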
But the market will not settle until a standard primitive emerges. The paper envisions an Agent Attention market where the scarce resource is an agent’s choice of which API, tool, or external service to invoke when completing a task for a human user. And crucially…