2025-11-21 23:10:39
Listen on Apple Podcasts
The AI industry is sending mixed signals – markets are turning red while teams report real productivity gains. Today, I examine the latest research to understand this split reality.
(02:53) Unpacking three years of AI productivity data
(09:54) The surprising group benefitting from AI
(14:33) Anthropic’s alarming discovery
(17:29) The counterintuitive truth about AI productivity
There is a lot of opinion in this space and not every hot take is built on reliable data. I stick to carefully executed research. To help you ground your stance, we’ve curated the studies behind this analysis and included the full talk transcript, available to Exponential View members.
Sarkar, S. K. (2025). AI Agents, Productivity, and Higher-Order Thinking: Early Evidence From Software Development. → Senior developers gain the most because they know how to direct and evaluate AI, which drives the biggest productivity jumps.
2025-11-20 00:48:17
Hi, it’s Azeem, here with a special guest essay.
Europe once stood alongside the United States as a central force shaping global technology and industry. Its relative decline in the digital era is often pinned on regulation and bureaucracy.
But our guest, Brian Williamson – Partner at Communications Chambers and a long-time observer of the intersection of technology, economics and policy – argues the deeper issue is a precautionary reflex that treats inaction as the safest choice, even as the costs of standing still rise sharply.
Over to Brian.
If you’re an EV member, jump into the comments and share your perspective.
“Progress, as was realized early on, inevitably entails risks and costs. But the alternative, then as now, is always worse.” — Joel Mokyr in Progress Isn’t Natural
Europe’s defining instinct today is precaution. On AI, climate, and biotech, the prevailing stance is ‘better safe than sorry’ – enshrined in EU law as the precautionary principle. In a century of rapid technological change, excess precaution can cause more harm than it prevents.
The 2025 Nobel laureates in Economic Sciences, Joel Mokyr, Philippe Aghion, and Peter Howitt, showed that sustained growth depends on societies that welcome technological change and bind science to production; Europe’s precautionary reflex pulls us the other way.
In today’s essay, I’ll trace the principle’s origins, its rise into EU law, the costs of its asymmetric application across energy and innovation, and the case for changing course.
The precautionary principle originated in Germany’s 1970s environmental movement as Vorsorgeprinzip (literally, ‘foresight principle’). It reflected the belief that society should act to prevent environmental harm before scientific certainty existed. Errors are to be avoided altogether.
The German Greens later elevated Vorsorgeprinzip into a political creed, portraying nuclear energy as an intolerable, irreversible risk.
The principle did not remain confined to Germany. It was incorporated at the EU level through the environmental chapter of the 1992 Maastricht Treaty, albeit as a non‑binding provision. By 2000, the European Commission had issued its Communication on the Precautionary Principle, formalizing it as a general doctrine that guides EU risk regulation across environmental, food and health policy.
Caution may be justified when uncertainty is coupled with the risk of irreversible harm. But harm doesn’t only come from what’s new and uncertain; the status quo can be dangerous too.
In the late 1950s, thalidomide was marketed as a harmless sedative, widely prescribed to pregnant women for nausea and sleep. Early warnings from a few clinicians were dismissed, and the drug’s rapid adoption outpaced proper scrutiny. As a result of thalidomide use, thousands of babies were born with limb malformations and other severe defects across Europe, Canada, Australia, New Zealand and parts of Asia. This forced a reckoning with lax standards and fragmented oversight.
In the US, a single FDA reviewer’s insistence on more data kept the drug off the market – an act of caution that became a model for evidence‑led regulation. In this instance, demanding better evidence was justified.
Irreversible harm can also arise where innovations that have the potential to reduce risk are delayed or prohibited. Germany’s nuclear shutdown is the clearest example. Following the Chernobyl and Fukushima accidents — each involving different reactor designs and, in the latter case, a tsunami — an evidence‑based reassessment of risk would have been reasonable. Instead, these events were used to advance a political drive for nuclear phase‑out which was undertaken without a rigorous evaluation of trade‑offs.
Germany’s zero‑emission share of electricity generation was about 61% in 2024; one industry analysis found that, had nuclear remained, it could have approached 94%. The missing third was largely replaced by coal and gas, which raises CO₂ emissions and has been linked to higher air‑pollution mortality (about 17 life‑years lost per 100,000 people).
In Japan, all nuclear plants were initially shut after Fukushima. Regulators then overhauled oversight and approved restarts on a case-by-case basis under new, stringent safety standards. Japan never codified a legalistic ‘precautionary principle’ and has been better able to adapt. Europe often seeks to eliminate uncertainty; Japan manages it.
A deeper problem emerges when caution is applied in a way that systematically favours the status quo, even when doing so delays innovations that could prevent harm.
A Swedish company, I‑Tech AB, developed a marine paint that prevents barnacle formation, which could improve ships’ fuel efficiency and cut emissions. Sixteen years after its initial application for approval, the paint has not been cleared for use in the EU, though it is widely used elsewhere. The EU’s biocides approval timelines are among the longest globally. Evaluations are carried out in isolation rather than comparatively, so new substances are not judged against the risks of existing alternatives. Inaction is rewarded over improvement.
This attitude of precaution has contributed to Europe’s economic lag. Tight ex‑ante rules, low risk tolerance and burdensome approvals are ill‑suited to an economy that must rapidly expand clean energy infrastructure and invest in frontier technologies where China and the United States are racing ahead. The 2024 Draghi Report on European competitiveness recognized that the EU’s regulatory culture is designed for “stability” rather than transformation:
[W]e claim to favour innovation, but we continue to add regulatory burdens onto European companies, which are especially costly for SMEs and self-defeating for those in the digital sectors.
Yet nothing about Europe’s present circumstances is stable. Energy systems are being remade, supply chains redrawn and the technological frontier is advancing at a pace unseen since the Industrial Revolution.
Like nuclear energy, AI may carry risks, but also holds the potential to dramatically reduce others – and the greater harm may lie in not deploying AI applications rapidly and widely.
This summer, 38 million Indian farmers received AI‑powered rainfall forecasts predicting the onset of the monsoon up to 30 days in advance. For the first time, forecasts were tailored to local conditions and crop plans, helping farmers decide what, when, and how much to plant – and avoid damage and loss.
2025-11-17 20:13:49
Hi all,
Here’s your Monday round-up of data driving conversations this week in less than 250 words.
Let’s go!
Faster than fiber ↑ AI adoption in the US has increased 10% over the 12 months since August last year.
New task length records ↑ An AI system, MAKER, solved a 20‑disk Towers of Hanoi puzzle, executing more than a million consecutive steps without a mistake.
China’s emissions are flat. CO₂ emissions from China have been flat for the last year, with almost 90% of new electricity demand met by renewables.
Electricity demand ↑ China’s per capita electricity consumption is now 30% higher than Europe’s.
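A quick sanity check on the MAKER item above: a Towers of Hanoi puzzle with n disks requires exactly 2ⁿ − 1 moves, so a 20‑disk puzzle means 1,048,575 consecutive error‑free steps. A minimal sketch of that arithmetic (my own illustration, not from the MAKER paper):

```python
def hanoi_moves(n: int) -> int:
    # The minimal solution to an n-disk Towers of Hanoi puzzle takes 2^n - 1 moves.
    return 2**n - 1

print(hanoi_moves(20))  # 1048575 — "more than a million consecutive steps"
```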
2025-11-16 10:41:35
The level of deep insight into technological trends and forward-looking info you share is unmatched. No one is doing it like you. – Susan, a paying member
Hi,
Welcome to our Sunday edition, where we explore the latest developments, ideas, and questions shaping the exponential economy.
Enjoy the weekend reading!
The St. Louis Fed estimates that generative AI may have increased labor productivity by up to 1.3% – and has had positive effects on industry-level productivity. While that may sound modest, a sustained 1.3% annual productivity boost would be transformative.
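To see why a modest-sounding rate is transformative, consider compounding. A back-of-the-envelope sketch (my own illustration, not from the Fed study) shows that 1.3% a year, sustained for three decades, lifts output per worker by nearly half:

```python
def compound_growth(annual_rate: float, years: int) -> float:
    # Cumulative output multiplier from a constant annual productivity growth rate.
    return (1 + annual_rate) ** years

print(round(compound_growth(0.013, 30), 2))  # ~1.47: roughly 47% more output per worker
```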
We’re seeing the micro‑signals of this shift inside firms. A new paper found that AI agents increased code output by 39% without any decline in short-term quality. More telling: for every standard deviation increase in experience, developers accept 6% more agent‑generated code. Senior engineers are simply better at delegation, and delegation is a skill that compounds with time. As AI takes on more execution, the premium – particularly for those entering the workforce – will move from raw output to orchestration.
I dug into this question with Ben Zweig of Revelio Labs, which builds its own data-driven view of the labor market. Ben and I share the view that it’s probably too early to see AI’s impact clearly in early-career hiring. Ben’s perspective is that the softening in early-career hiring is coming from somewhere else:
there’s a lot of uncertainty in the labor market, partially due to AI, partially due to policy. As a result, employers have a high discount rate. Young workers are the workers that are more uncertain. So, it’s a kind of high-risk, high-reward type of bet, whereas more experienced hires are the safer bets.
And to the million-dollar question for parents of “what should my kid study?”, Ben gave two concrete recommendations, which you’ll find at the end of our conversation.
📈 Quick read: AI remains in expansion mode, but funding quality is deteriorating. Details on the week’s movers and dislocations below.
Boom/Bubble dashboard update: We’ve added projected trajectories for each of the five key indicators between now and 2029 – the indicator history and outlook is interactive and you can access it for free on the Boom/Bubble website.
Movers:
CoreWeave cut its 2025 revenue forecast to $5.05–$5.15 billion after delays in building new data centers. Investors did not like that; its shares fell by nearly a quarter. The cost of protecting against a CoreWeave default, measured by a 10‑year CDS, has jumped more than 60% over the last six weeks to 700 basis points. Meta, likewise, signaled AI spending could exceed $100 billion next year; its shares fell 12.6%.
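For context on what a 700-basis-point spread implies, a common rule of thumb (the “credit triangle”) converts a CDS spread into a rough annual default intensity by dividing by the loss given default. This sketch is my own illustration, assuming a conventional 40% recovery rate, not a figure from the market data above:

```python
def implied_default_intensity(spread_bps: float, recovery: float = 0.4) -> float:
    # Credit-triangle approximation: annual default intensity ~ spread / (1 - recovery).
    # The 40% recovery rate is a conventional assumption, not observed data.
    return (spread_bps / 10_000) / (1 - recovery)

print(round(implied_default_intensity(700), 3))  # 0.117 -> roughly 12% per year
```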
2025-11-15 17:02:01
Listen on Spotify or Apple Podcasts
Something important is happening with the labor market.
US employers announced over 153,000 job cuts in October, the highest monthly total in more than two decades. Amazon announced about 14,000 corporate job cuts as it pivots towards AI-driven operations. At the same time, research shows that entry-level opportunities are shrinking and new entrants to the job market have it harder than most.
To further make sense of what’s happening, I spoke with Ben Zweig, economist and CEO of Revelio Labs. His team analyzes millions of worker profiles to track hiring and job flows – so he sees data most people don’t.
Ben and I cross‑checked Revelio’s data with what I’m hearing on the ground. We don’t normally do career advice – and our audience isn’t entry‑level. But many of you have kids and mentees stepping into this market, and the data is too relevant to skip. If you’re advising a new grad, here’s the concise playbook.
First, understand what’s going on:
Entry-level roles in AI-adjacent fields are contracting.
Managers are also risk-averse because they expect workflows to change again next quarter: firms don’t want to hire someone into a process they know will be redesigned.
How to break into an AI‑shaped job market
Ship end‑to‑end projects: choose or create multi‑step projects with real stakeholders, and practice owning the plan and delivering it to a finish line. As AI takes on more of the execution work, the value for humans shifts to orchestration: deciding what needs to be done, in what order and with which tools, and then keeping the project moving.
2025-11-15 07:09:21
In today’s live, I explored why AI feels transformative for individuals but frustratingly slow at the organisational level. It’s the exponential gap that I dissect in my book: organisations struggle to update old processes in the face of rapidly improving technology.
Enjoy!
Azeem