2026-01-25 11:28:19
Hi all,
I just got back from Davos, and this year was different. The AI discussion was practical – CEOs asking each other what’s actually happening with their workforces, which skills matter now. At the same time, I saw leaders struggling to name the deeper shifts reshaping our societies. Mark Carney came closest, and in this week’s essay I pick up his argument and extend it through the Exponential View lens.
Enjoy!
Mark Carney delivered a speech that will echo for a long time, about “the end of a pleasant fiction and the beginning of a harsh reality.” Carney was talking about treaties and trade, but the fictions unravelling go much deeper.
Between 2010 and 2017, three fundamental inputs to human progress – energy, intelligence, and biology – crossed a threshold. Each moved from extraction to learning, from “find it and control it” to “build it and improve it.” This is not a small shift. It is an upgrade to the operating system of civilization. For most of history, humanity ran on what I call the Scarcity OS – resources are limited, so the game is about finding them, controlling them, defending your share. This changed with the three crossings. As I write in my essay this weekend:
In each of the three crossings, a fundamental input to human flourishing moved from a regime of extraction, where the resource is fixed, contested, and depleting, to a regime of learning curves, where the resource improves with investment and scales with production.
At Davos, I saw three responses: the Hoarder who concludes the game is zero-sum (guess who), the Manager who tries to patch the system (Carney), and the Builder who sees that the pie is growing and the game is not about dividing but creating more. The loudest voices in public right now are hoarders, the most respectable are managers, and the builders are too busy building to fight the political battle. The invitation of this moment? Not to mourn the fictions, but to ask: what was I actually doing that mattered, and how much more of it can I do now?
Full reflections in this week’s essay:
OpenAI was the dominant player in the chatbot economy, but we’re in the agent economy now. This economy will be huge – arguably thousands of times bigger1 – but it’s an area OpenAI is currently not winning: Anthropic is. Claude Code reached a $1 billion run rate within six months – likely even higher now, after its Christmas social-media storm.
OpenAI is still looking for other revenue pathways. In February, ChatGPT will start showing ads to its 900 million users – betting more on network effects than pure token volume. This could backfire, though. At Davos, Demis Hassabis said he was “surprised” by the decision and that Google had “no plans” to run ads in Gemini. In his view, AI assistants act on behalf of the user – but when your agent has third-party interests, it’s not your agent anymore.

Sarah Friar, OpenAI’s CFO, wants maximum optionality, and one of the bets will be taking profit-sharing stakes in discoveries made using their technology. In drug discovery, for example, OpenAI could take a “license to the drug that is discovered,” essentially claiming royalties on customer breakthroughs. Both Anthropic and Google2 are already there and have arguably shown more for it. Google’s Isomorphic Labs, built on Nobel Prize-winning AlphaFold technology, already has ~$3 billion in pharma partnerships with Eli Lilly and Novartis, and is entering human clinical trials for AI-designed drugs this year3. Then there are OpenAI’s hardware ambitions.
OpenAI needs a new alpha. Their main advantage was being the first mover, but the alpha has shifted from models to agents – and there, Anthropic made the first real move with Claude Code. It’s hard to see how OpenAI can sustain its projection of $110 billion in free cash outflow through 2028 in a market it isn’t clearly winning. Anthropic, meanwhile, projects burning only a tenth of what OpenAI will before turning cashflow positive in 2027 (although their cloud costs for running models ended up 23% higher in 2025 than forecast).
Perhaps this is why Dario Amodei, CEO of Anthropic, told me at Davos that research-led AI companies like Anthropic and Google will succeed going forward. Researchers generate the alpha, and research requires time, patience and not a lot of pressure from your product team. OpenAI has built itself around timelines and product pressure, and that has an impact on culture and talent. Jerry Tworek, the reasoning architect behind o1, departed recently to do research he felt he couldn’t do at OpenAI (more in this great conversation).
None of this means that OpenAI is out for the count. They still have 900 million users, $20 billion in revenue, and Stargate. But they’re currently in a more perilous position than the competition.
See also:
Apple v OpenAI… The iPhone maker is developing a wearable AI pin about the size of an AirTag, with release expected in 2027. They also plan to replace Siri later this year with a genAI chatbot, code-named Campos, in partnership with Google.
and highlight how AI agents could transform “matching markets” – hiring, dating, specialized services – by helping people articulate what they actually want.
First impressions matter. With Framer, early-stage founders can launch a beautiful, production-ready site in hours — no dev team, no hassle.
Pre-seed and seed-stage startups new to Framer will get:
One year free: Save $360 with a full year of Framer Pro, free for early-stage startups.
No code, no delays: Launch a polished site in hours, not weeks, without technical hiring.
Built to grow: Scale your site from MVP to full product with CMS, analytics, and AI localization.
Join YC-backed founders: Hundreds of top startups are already building on Framer.
The conventional story treats alignment as a tax on capability. Labs face a prisoner’s dilemma: race fast or slow down for safety while someone else beats you to market. At Davos, Dario Amodei said if it were only Demis and him, they could agree to move slowly. But there are other players. Demis told me the same after dinner.
This framing might suggest to some that we’re in a race toward misaligned superintelligence. But I’ve noticed something in recent dynamics that makes me more hopeful. A coordination mechanism exists – and, paradoxically, it runs through the market.
When users deploy an agent with file system access and code execution, they cede control. An agent with full permissions can corrupt your computer and exfiltrate secrets. But to use agents to their full potential, you have to grant such permissions. You have to let them rip4.
, a Senior Fellow at the Foundation for American Innovation, noticed that the only lab that lets AI agents take over your entire computer is the “safety-focused” lab, Anthropic. OpenAI’s Codex and Gemini CLI seek permission more often. Why would the safety-focused lab allow models to do the most dangerous thing they’re currently capable of? Because their investment in alignment produced a model that can be trusted with autonomy5.
Meanwhile, the one company whose models have become more misaligned over time, xAI, has encountered deepfake scandals, regulatory attention, and enterprise users unwilling to deploy for consequential work.
Alignment generates trust, trust enables autonomy, and autonomy unlocks market value. The most aligned model becomes the most productive model because of the safety investment.
See also:
Anthropic researchers have discovered the “Assistant Axis,” a direction in an LLM’s internal representations that encodes the default helpful persona, and introduced a method to prevent the AI from drifting into harmful personas.
Signal Foundation President Meredith Whittaker warns that root-level access required by autonomous AI agents compromises the security integrity of encrypted applications. The deep system integration creates a single point of failure.
Robotics has two of Exponential View’s favourite forces working for it: scaling laws and Wright’s law. In this beautiful essay worth your time, software engineer Jacob Rintamaki shows how those dynamics push robotics toward becoming general-purpose – and doing so much faster than most people expect.
Robotics needs a lot of data. Vision-language-action models are expected to benefit from scaling laws similar to LLMs.6 The problem is data scarcity: language has a lot of data, but vision-language-action data is scarce. Robotics is roughly at the GPT-2 stage of development. But each robot that starts working in the real world becomes a data generator for the specific actions it performs – this creates a flywheel. More deployed robots generate more varied action data. The next generation of models absorbs this variety and becomes more capable, unlocking larger markets worth serving. That’s scaling laws. And Wright’s law compounds the effect: each doubling of cumulative production drives down costs. Already, the cheapest humanoid robots today cost only $5,000 per unit. Rintamaki argues they’ll eventually cost “closer to an iPhone than a car”; they require fewer raw materials than vehicles and need no safety certifications for 100mph travel.
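To make the compounding concrete, here is a back-of-envelope sketch. The 20% learning rate and the ~$1,000 “iPhone-like” target price are my assumptions for illustration, not figures from Rintamaki’s essay:

```python
# Back-of-envelope sketch (illustrative assumptions, not Rintamaki's model):
# how many doublings of cumulative production a 20% learning rate
# (cost x0.8 per doubling) needs to take a $5,000 humanoid to an
# assumed "iPhone-like" price point of ~$1,000.
import math

start_cost = 5_000   # cheapest humanoids today, per the essay
target_cost = 1_000  # assumed target price (not from the source)
per_doubling = 0.8   # Wright's Law with a 20% learning rate

doublings = math.log(start_cost / target_cost) / math.log(1 / per_doubling)
print(f"~{math.ceil(doublings)} doublings needed")  # ~8, i.e. ~256x today's cumulative output
```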
AI datacenter construction will kick off the flywheels. Post-shell work (installing HVAC systems and running cables) is 30-40% of construction costs and is repetitive enough for current robotics capabilities. The buyers are sophisticated, the environments standardised, and the labour genuinely scarce: electricians and construction crews are in short supply.
See also: World Labs launched the World API for generating explorable 3D worlds from text and images programmatically. A potential training environment for robots.
MIT researchers built a “zero-trained” computational model of the brain that independently discovered a new type of neuron in an old dataset. See EV#553 on the long tail of the unsolved science.
X’s recommendation algorithm is now fully dependent on Grok AI.
Blaise Agüera y Arcas argues that reasoning models spontaneously develop “societies of thought,” internal multi-agent debates that mirror human collective intelligence, where diverse perspectives, structured in dialogue, outperform any single viewpoint.
China’s government has tracked every generative AI tool deployed in the country since 2023.
South Korea launched an “AI Squid Game,” where tech giants and startups compete in a government-sponsored tournament to identify and fund a leading domestic AI foundation model.
Shanghai designated 46% of its city area as free-flying zones for consumer drones, hoping to encourage a “low-altitude economy.”
argues that the “death of reading” has been greatly exaggerated. Book sales are up, independent bookstores are thriving, and most of the decline in reading time happened between 2003 and 2011.
Portugal moved to block Polymarket after a sharp rise in election-related betting.
Ship-tracking data reveals China secretly mobilized thousands of civilian fishing vessels to create a 200-mile-long blockade in the East China Sea.
For instance, based on Simon P. Couch’s analysis, his median Claude Code session consumes 41 Wh, 138x more than a “typical query” of 0.3 Wh. On a median day, he estimates consuming 1,300 Wh through Claude Code, equivalent to 4,400 typical queries. Even if you run 100 queries a day, that is over 40 times more usage. And this is probably still not the most you can push out of agents in a day.
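A quick arithmetic check of those figures, using Couch’s estimates as given:

```python
# Sanity-check the footnote's arithmetic, using Couch's estimates.
typical_query_wh = 0.3     # energy of a "typical query"
session_wh = 41            # median Claude Code session
daily_claude_wh = 1_300    # median day of Claude Code use

print(session_wh / typical_query_wh)               # ~137x a typical query
print(daily_claude_wh / typical_query_wh)          # ~4,333 typical queries
print(daily_claude_wh / (100 * typical_query_wh))  # ~43x a 100-query day
```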
Google can afford to play the game more patiently. They have the money and the data-crawling advantage from their dominant position in online advertising – publishers want Google’s bots to crawl their sites to send search traffic. This advantage has concerned competition authorities around the world, most recently the UK CMA.
Although the ambition was for this to happen in 2025.
This is by no means a recommendation – current systems should not be fully trusted yet. There are ways to give agents more permissive environments while limiting damage (e.g. sandboxing).
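As a minimal sketch of what sandboxing can look like – a hypothetical setup, not any lab’s actual harness – an agent’s shell commands can run in a throwaway container with no network and a read-only filesystem:

```python
# Hypothetical sandbox for agent-issued shell commands: a throwaway
# Docker container with no network access, a read-only root filesystem
# and a single writable scratch directory. Illustrative only.
import subprocess

def run_sandboxed(cmd: str, scratch_dir: str) -> str:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",           # blocks exfiltration over the network
            "--read-only",                 # root filesystem cannot be modified
            "-v", f"{scratch_dir}:/work",  # the only writable surface
            "-w", "/work",
            "python:3.12-slim",            # assumed base image
            "sh", "-c", cmd,
        ],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout

print(run_sandboxed("echo hello from the sandbox", "/tmp/agent-scratch"))
```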
You can read Claude’s constitution to see the ethics framework it operates under.
Extracting clear scaling laws is harder for robotics than for LLMs, though: different embodiments, environments and tasks make a single “log-linear curve of destiny” elusive.
2026-01-24 20:04:42
I just got back to London after a week at the Annual Meeting at Davos. For the past few years, the World Economic Forum had become a kind of parody of itself, a place where billionaires flew in on private jets to discuss climate change and “stakeholder capitalism” while nothing much seemed to happen. But this year was different.
The AI discussion at the Forum alone was proof of change. It was practical, with CEOs asking each other: what’s actually happening with your workforce? Which skills matter now? Why is that company pulling ahead while everyone else flounders?
And on politics, things moved to the heart of the matter: the fragmentation, the end of the old world order. But neither Davos woman nor Davos man seemed deracinated in the face of the crumbling rules-based order. Rather, they made use of the simple fact that many of those who hold the world’s power were gathered in that one place: to speak and, in many cases, to listen.
The gathering met, nay exceeded, its purpose. Davos showed why it matters, why it is necessary in a world that is fraying rather than cohering.
Canada’s prime minister, Mark Carney, gave a speech that will echo for a long time. He spoke of “the end of a pleasant fiction and the beginning of a harsh reality.” He was referring to the unraveling of the post-war geopolitical settlement, the fading authority of the rules-based order, the growing irrelevance of multilateral institutions designed for a slower, more stable world. If you haven’t seen it yet, I really recommend watching.
Carney was talking about treaties, trade, and power. But these aren’t the only norms that are unravelling.
Today’s reflection originates from the research I’m doing for my second book. There’s a long way to go before it lands on your shelf, but that work is already tracing a similar unraveling – in domains much closer to Exponential View’s home. So let’s get to it.
Between 2010 and 2017, three fundamental inputs to human progress – energy, intelligence and biology – crossed a threshold. Each moved from extraction to engineering and learning, from “find it and control it” to “build it and improve it.”
Energy became a technology. For most of human history, energy meant finding something in the ground and burning it. Coal seams, oil fields, natural gas deposits. The logic was geological: as reserves deplete, access is contested, and the nation that controls the supply controls the game. Wars were fought over this. Empires rose and fell on it. Then, solar costs fell below the threshold where photovoltaics could compete with new fossil generation in sunny regions. Wright’s Law in action, as every doubling of cumulative production drops costs by roughly 20%. The learning curve, once it takes hold, is relentless, as Exponential View readers know.
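In symbols (the standard form of Wright’s Law; the 20% figure is the illustrative learning rate above, not a universal constant):

```latex
% Wright's Law: unit cost C falls as a power of cumulative production x.
% A 20% learning rate means cost is multiplied by 0.8 per doubling.
C(x) = C_0 \left(\frac{x}{x_0}\right)^{-b}, \qquad
2^{-b} = 0.8 \;\Rightarrow\; b = \log_2 \frac{1}{0.8} \approx 0.32
```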

Intelligence became engineerable. For decades, artificial intelligence was a research curiosity plagued by “winters,” periods of hype followed by disappointment. Neural networks worked, sort of, but scaling them was brutal. Progress was uncertain. Capability gains were unpredictable. Then, in 2017, a team at Google published “Attention Is All You Need”, the paper that introduced the transformer architecture. The insight was technical – a new way to process sequences in parallel using self-attention – but the consequence was civilizational. For the first time, there was a reliable scaling law for intelligence: more compute and more data yielded more capability, predictably. AI became a tractable engineering problem. We crossed from uncertainty to a learning curve.
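The canonical statement comes from Kaplan et al. (2020): test loss falls as a power law in model parameters and dataset size, with exponents that are empirical fits rather than theoretical constants:

```latex
% Neural scaling laws (Kaplan et al., 2020): loss L falls as a power law
% in parameter count N and dataset size D; N_c, D_c and the exponents
% alpha_N, alpha_D are empirical constants fitted to training runs.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
```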
Biology became readable. The human genome was first sequenced in 2003 after roughly $3 billion and thirteen years of work. By the mid-2010s, sequencing costs had fallen to a few thousand dollars, and they were dropping faster than Moore’s Law.
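A rough check of that claim, assuming costs reached ~$3,000 by 2015 and treating Moore’s Law as a halving roughly every two years:

```python
# Rough comparison of sequencing-cost decline vs Moore's Law.
# Assumes ~$3B in 2003 falling to ~$3,000 by 2015 (the "few thousand
# dollars" in the text), i.e. a ~10^6-fold drop in 12 years.
import math

fold_drop = 3e9 / 3e3            # ~1,000,000x, about 2^20
halvings = math.log2(fold_drop)  # ~20 halvings
years = 2015 - 2003
print(f"one halving every ~{years / halvings:.1f} years (Moore: ~2 years)")
# -> roughly one halving every 0.6 years
```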
The technologies involved (next-generation sequencing, computational genomics) followed their own improvement curves, and they were steeper than anyone predicted. For the first time in the history of life on Earth, a species could read and begin to edit its own source code. Biology moved from evolutionary timescales to engineering timescales, and as a result we got mRNA vaccines and CRISPR.
In each of the three crossings, a fundamental input to human flourishing moved from a regime of extraction, where the resource is fixed, contested, and depleting, to a regime of learning curves, where the resource improves with investment and scales with production.
This is not a small shift. It is an upgrade to the operating system of civilization.
For most of history, humanity ran on what I call the Scarcity OS. Resources are limited in this system, so the game is about finding them, controlling them, and defending your share. This logic shaped everything – our institutions, our economics, our social structures, our sense of what’s possible.
Under Scarcity OS, certain “fictions” emerged. And I use this word carefully. These fictions weren’t lies; they were social technologies, coordination mechanisms that worked brilliantly in a world of genuine constraint.
Take jobs… Jobs were a fiction. Not in the sense that work wasn’t real, but in the sense that bundling tasks, identity, healthcare, social status, and income into a single institution called “employment” was a specific solution to a specific problem: how do you distribute resources and organize production when information is expensive and coordination is hard? The job was an answer to that question. It was a brilliant answer. But it was an answer to a question that is now changing.
Likewise, credentials were a fiction. When evaluating someone’s capability was expensive, we outsourced the judgment to institutions. A degree from a prestigious university wasn’t proof that you could do anything in particular – it was proof that you had survived a sorting mechanism. The credential was a proxy, a compression algorithm for trust. It worked when the cost of direct evaluation was prohibitive. That cost is collapsing.
Expertise was a fiction. Not the knowledge itself, but the social construct of the “expert” – the person whose authority derived from scarcity of information and difficulty of access. When knowledge was locked in libraries, accumulated through years of study, and distributed through gatekept institutions, expertise was a genuine bottleneck. The expert was a bridge between the uninformed and the truth. That bridge is being bypassed.
These fictions were functional adaptations to real constraints. The job, the credential, the expert, each solved a genuine problem in a high-friction world. But the constraints changed. And now the adaptations are decaying.
At Davos, I saw three different responses to this decay playing out in real time.
The Hoarder sees the old fictions crumbling and concludes that the game is zero-sum. If the pie is fixed, the only strategy is to take more of it. Build walls. Impose tariffs. Retreat to the nation-state. Punish the outgroup. This is Trump’s instinct, and it resonates precisely because it matches the Scarcity OS that most people still run internally. The hoarder isn’t stupid; he’s applying legacy software to a changed environment.
The Manager sees the same decay and tries to patch the system. Redistribute more fairly. Strengthen institutions. Negotiate better deals within the existing framework. This is Mark Carney’s instinct. It’s more sophisticated than hoarding, but it shares an assumption that the pie is still fixed, just poorly divided. The manager wants to optimize the Scarcity OS, not replace it.
The Builder would see something different. If the fundamental inputs are now on learning curves – if energy, biology, and intelligence are becoming cheaper and more abundant – then the pie is not fixed, it’s growing. The game would then be about accelerating abundance. The builder’s question would not be “how do I get my share?” but “how do I help make more?”
The tragedy of this moment is that the loudest voices are the hoarders, the most respectable voices are the managers, and the builders are too busy building to fight the political battle.
If you’ve built your identity on the old fictions, this transition is terrifying.
2026-01-23 00:26:02
Live from Davos: Today, the conversations have turned practical. Geopolitics feels less like a forecast and more like a constraint, while AI is no longer about distant futures but about whether it actually works inside businesses today. The question here is not whether these shifts are coming, but who adapts fast enough when the ground starts to move.
2026-01-22 02:34:20
Listen on Apple Podcasts or Spotify
We understand the coal industry better than the AI economy right now. Anthropic’s Economics team, led by Peter McCrory, is changing that. I invited Peter to break down their latest findings in the Economic Index report.
(00:00) Anthropic’s Economic Index report
(01:20) Claude’s two distinct usage patterns
(06:22) Examining AI’s impact on the labor market
(09:20) Where most businesses think too small
(12:03) Why extracting tacit knowledge is so important
(20:33) How do we create the next generation of experts?
(23:22) Why people need to develop cognitive endurance
(29:55) Long-term vs. short-term productivity
(35:56) The future of human knowledge
(37:46) Could AI’s greatest impact go unmeasured?
(41:55) How task bottlenecks have moved
(46:09) Implementation resembles a staircase – not a curve
(50:47) “Capability doesn't instantly deliver adoption”
Here are eleven papers to deepen your understanding and complement the episode.
1. Anthropic Economic Index Report (2026) - The primary subject of discussion. Analyzes millions of Claude conversations to map where AI augments vs automates work.
2. Paul Romer, “Endogenous Technological Change” (1990) - Foundational paper on how technological progress arises from within the economic system.
2026-01-21 20:44:21
Live from Davos: Trump lands in Zurich as a queue like I’ve never seen forms at the Congress Center. The mood is expectant, edgy, nervous. Carney captured it yesterday when he declared the rules-based order dead – that this isn’t decline but rupture. The Europeans are waking up to the fact that they can’t make their own chips or train their own models.
2026-01-20 22:52:35
Live from Davos: On his 25th trip – with the USA House looming large and Trump’s arrival imminent – Eric Schmidt and I dig into this “Shakespearean moment,” where AI is heading fastest and why he’s an optimist on the technology but a pessimist on the politics.