2025-10-28 00:06:52
Hi all,
Here’s your Monday round-up of the data driving conversations this week, in less than 250 words.
Let’s go!
AI industrialization ↑ Anthropic has committed to 1 million Google TPUs to bring over 1GW of compute capacity online in 2026. Competition runs on gigawatts.
Parallel cloud emerging ↑ Neoclouds (AI- and GPU-first clouds) grew 205% YoY, on pace for $23 billion in 2025.
mRNA & cancer survival ↑ Getting an mRNA vaccine before immunotherapy boosted the three-year survival rate by 40-60% for lung cancer patients compared to no vaccine. Via
2025-10-26 08:30:26
Good morning from London!
In today’s briefing:
Why simpler AI may be the next frontier in AI capability
The rise of anticipation markets and new ways to price the future
One AI model invests better than all the rest
Bionic eyes are here…
Let’s go!
Today’s frontier models have gorged on the internet’s noise. They are brilliant mimics with blurry reasoning, as OpenAI founding member Andrej Karpathy argues. True reliability can’t come from these feats of memory; it has to come from deeper understanding, and future AI systems will need it.
Andrej proposes an austere remedy: reduce memorisation, preserve the reasoning machinery and pull in facts only when needed. He pictures a “cognitive core” at the 1-billion-parameter scale that plans, decomposes problems and queries knowledge. It is a librarian, not a library.
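To make the librarian idea concrete, here is a minimal sketch of what such a system might look like: a small model that keeps planning and reasoning in its weights and pulls facts from an external store only at query time. The class names and lookup interface below are hypothetical, invented purely for illustration; this is a sketch of the pattern, not Karpathy’s design.

```python
# Hypothetical sketch: a small "cognitive core" that retrieves facts on demand
# instead of memorising them in its weights. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    """Stand-in for an external index (a search engine, vector DB or wiki dump)."""
    documents: dict = field(default_factory=dict)

    def lookup(self, query: str) -> str:
        # A real system would do embedding or keyword search here.
        return self.documents.get(query, "no result")

class CognitiveCore:
    """A small planner: decomposes the task and decides when to look things up."""
    def __init__(self, store: KnowledgeStore):
        self.store = store

    def answer(self, question: str) -> str:
        sub_questions = [question]                              # 1. plan (stubbed)
        facts = [self.store.lookup(q) for q in sub_questions]   # 2. retrieve only when needed
        return f"Answer composed from: {facts}"                 # 3. reason over the facts (stubbed)

store = KnowledgeStore(documents={"capital of France?": "Paris"})
print(CognitiveCore(store).answer("capital of France?"))
```

The point of the pattern is that the facts live outside the model, so the weights only have to carry the reasoning.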
Philosopher Toby Ord points out that the very approach that’s given us the surprising capabilities of “reasoning models” like o1 is reaching its own limits. These systems extract gains from post-training reinforcement learning (refining answers through trial-and-error) and extended inference-time reasoning. Compute is paid per query, not once during pre-training. Ord estimates that this burns 1,000 to 1,000,000 times more compute per insight than traditional training. Returns shrink, and each new milestone costs disproportionately more to reach. Even OpenAI’s o1 reasoning model improves only when it’s given more RL cycles and longer “thinking time,” which raises the cost per task.
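A back-of-the-envelope way to see the economics: pre-training compute is paid once and amortised over every future query, while inference-time reasoning is paid again on each task. The numbers below are placeholders made up for illustration; they are not Ord’s estimates.

```python
# Purely illustrative placeholder numbers, not real estimates.
pretraining_flops = 1e25        # paid once, up front
lifetime_queries = 1e9          # queries served over the model's life
short_answer_flops = 1e12       # per-query cost of a quick answer
thinking_multiplier = 1e4       # extra "thinking time" per query with RL-style reasoning

amortised_pretraining = pretraining_flops / lifetime_queries
reasoning_per_query = short_answer_flops * thinking_multiplier

print(f"Amortised pre-training per query: {amortised_pretraining:.0e} FLOPs")
print(f"Extended reasoning per query:     {reasoning_per_query:.0e} FLOPs")
# With these placeholders, the recurring reasoning cost matches the amortised
# pre-training cost, and unlike pre-training it is paid again on every query.
```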
How should we make sense of this? Technological progress advances through overlapping S-curves and rarely follows a smooth exponential. Both Ord and Karpathy are pointing in a similar direction: less brute memorisation, more search and recursion; less unlimited inference, more careful allocation of reasoning budgets; away from monolithic models and toward tool-using, modular agents.
As the cost of using AI systems (rather than training them) becomes dominant, pricing will shift to usage-based models. Firms that deploy AI with precision will be rewarded. And as a result we could see a broad, rapid seep of AI into many corners of the economy, rather than a sudden leap in GDP.
OpenAI has entered the browser arena with its own browser, Atlas. We argued a few times in the past that…
[t]he company that owns the browser owns the user session, valuable behavioral data and the ability to steer the revenue funnel. Whoever captures the front door to the web gets to watch, and eventually automate, everything we do online.
2025-10-23 21:28:55
Watch on YouTube
Listen on Spotify or Apple Podcasts
I got together with China expert to understand his view on the new phase of the US-China competition. Both countries are using trade policy, export controls and industrial strategy to shift the balance of global power. Just earlier this month, China rolled out its toughest-ever curbs on rare earths and related tech. Yet, the US and China economies remain tightly bound. Jordan and I sit down to make sense of this.
(01:34) The US and China’s decoupling explained
(08:51) Understanding the Oct 9 “rare earth rules”
(14:23) Is decoupling a strategy to avoid weaponisation?
(26:03) AI incumbents aren’t entrenched – yet
(43:14) Imagining an improved US-China relationship
To accompany this week’s discussion on the US-China decoupling, we’ve pulled together a short set of research notes. These figures and developments sketch how trade, technology and energy are changing and where to watch next.
Rare earths. China controls about 70% of mining, about 90% of refining and separation, and about 93% of high-strength magnet production; export licences required for products with ≥0.1% rare earth content from 1 December; 12 of 17 metals now restricted; licence decisions can take up to 45 business days; likely adds about $500-1,500 to EV prices in the short run.
Chips. Nvidia moved to a one-year GPU cadence; US rules tightened in 2025 then loosened with revenue sharing on China sales; China responded by steering buyers to domestic silicon; SMIC produced 7 nm via DUV multi-patterning at low tens of thousands of wafers per month; Huawei Ascend adoption is growing.
AI models. Chinese developers lead on open-weight releases; the gap between top closed and top open narrowed to low single digits on key benchmarks in 2025; many of the most used open models now come from China. Anecdotally, it seems that even some of the leading US firms choose Chinese open-source models over others. Airbnb’s Brian Chesky just shared that his company ‘relies heavily’ on Alibaba’s Qwen models.
Manufacturing. EV strategy described as scale up, flood in, starve out; China exports about 7 million vehicles a year and reached about 30% of the UK market within two years; robotics deployments reached a majority of global installs, yet many precision components still come from Japan and Europe.
Energy. China maintains large reserve margins and is adding massive solar, storage, and data-centre power; data-centre demand could reach 400 to 600 TWh by 2030.
Export controls and tool restrictions are most effective at the frontier of technology. Coordinated measures across the US, Japan, and the Netherlands have delayed China’s access to the most advanced compute and semiconductor-manufacturing equipment.
Below that frontier, the effects weaken. China has adapted by relying on “good-enough” chips, improving packaging and integration, developing domestic design tools, and re-routing supplies through friendly intermediaries. These measures sustain progress in deployment, even without cutting-edge inputs.
Both systems are now adjusting in parallel. The US and China are investing heavily in local fabrication and packaging capacity, tightening investment and capital rules, and screening outbound flows. The outcome is not isolation but duplication: two partly mirrored ecosystems built for resilience.
Scott Bessent interview. Allied response to rare earths, targeted reshoring, price floors, and strategic reserves; vigilance with time-bound goals. (Read here)
Kaiser Kuo, The Great Reckoning. Performance legitimacy; China as a principal architect of modernity; the West should measure outcomes and learn without denial. (Read here)
ChinaTalk analysis on synthetic diamonds. Why lab-grown diamond controls matter for wafer slicing, optics, and thermal management; leverage is real but not absolute due to alternative producers. (Read here)
Abundance and China, podcast with , and Dan Wang. Abundance framing for state capacity, risk pricing for tail scenarios, and learning from Chinese speed without importing ideology. (Read and listen here)
Thanks for reading!
2025-10-22 20:00:42
The latest WSJ report about Sam Altman’s influence on the leading tech companies got a lot of attention because it claims that Altman showed interest in using Google’s homegrown TPU chips alongside Nvidia’s GPUs.
The flirtation apparently triggered Nvidia’s boss Jensen Huang.
What matters is the financing architecture forming around OpenAI’s compute build-out. As the WSJ writes:
As part of the deal, Nvidia is also discussing guaranteeing some of the loans that OpenAI plans to take out to build its own data centers, people familiar with the matter said—a move that could saddle the chip giant with billions of dollars in debt obligations if the startup can’t pay for them. The arrangement hasn’t been previously reported.
We’ve known that OpenAI is creating a web of relationships with its suppliers beyond simple cash-for-services. This has been the firm’s métier since its deal with Microsoft, and while complicated and unorthodox, it can be seen as expedient in a fast-moving market.
When I reviewed that 11 days ago, I cautioned that while there weren’t real issues yet, we should be watchful of temptations to use
increasingly complex deal structures that blend credit lines, equity stakes, and multi-year purchase commitments.
Well, if the Journal’s reports are correct, we’re seeing exactly that: Nvidia discussing guarantees of OpenAI’s data-centre loans. The chipmaker would, in effect, agree to repay OpenAI’s creditors if OpenAI cannot. In other words, Nvidia is extending its balance sheet to backstop a customer’s debt.
In our framework, this would mark a deterioration in our funding quality gauge – the measure of how resilient the sector’s financing structures are. It worsens funding quality for both OpenAI and Nvidia, though overall funding quality for the sector stays in low amber territory, still far from red.
Last Saturday, I made the point that booms tip towards bubbles when balance-sheet gymnastics become the norm. Well, this isn’t the norm yet… but if these reports are accurate, it would be another early sign of micro-level deterioration in funding quality.
As a counterpoint: what else would Sam Altman do?
If you assume that artificial intelligence is the next big thing, then the last thing you’ll want to do is sit this out. Being confident in your story and momentum could lead you to do exactly this kind of deal. Of course, being desperate could take you down the same path.
2025-10-20 23:17:48
Hi all,
Here’s your Monday round-up of the data driving conversations this week, in less than 250 words.
Let’s go!
Premium subscriptions ↑ About 70% of OpenAI’s $13 billion in annualized recurring revenue comes from roughly 40 million paying subscribers.
GenAI goes mainstream ↑ China’s user base doubled to 515 million in six months, now over a third of the population. Across six major markets, weekly usage jumped from 18% to 34% year-on-year. Behavior has flipped from curiosity to habit.
Science investments ↑ Venture-capital funding for autonomous labs now rivals the National Science Foundation’s yearly budget for materials and chemistry. Via
2025-10-19 09:15:15
Good morning!
Grid-scale batteries are rewriting the logic of the energy system. In California, they already supply more than a quarter of peak summer demand and have cut gas generation by 37% since 2023. They may scale faster because of AI.
In today’s briefing:
How AI is becoming the accidental accelerator of the energy transition,
What replaces scale as the driving force of AI progress,
How China is tightening control over pessimism,
But first: AI boom, bust… or a third way?
Last month, we laid out one of the most rigorous frameworks yet for assessing whether artificial intelligence is in a bubble. It struck a chord because it was measured and thoughtful. (And because it isn’t clickbait, please take a moment to share it.)
This week I was on ’s podcast to discuss the research and the five gauges we’re tracking to know what’s going on. In the course of the conversation, I offered a “third door”, a scenario in which the AI boom turns into a bust… and that’s not necessarily a bad thing:
In a funny way, we might be grateful for it. Of course, there will be stock market prices going down, but what would have happened is that there will be a lot of GPU infrastructure, computing infrastructure that organisations with less money could pick up at fire sale prices. And those assets will go to smaller players who might have newer approaches. They may prefer open-source, they may decide they don’t want to chase after the machine god. They may decide that pricing needs to be more sensible. We might even see faster innovation alongside democratisation.
When the dotcom bubble burst, it didn’t hurt the real economy much; the 2001 downturn was brief and shallow, and growth soon resumed. The housing bust really hurt.
If an AI bust happened, it would feel more like dotcom than housing. In fact, more so, because right now alternative approaches to AI are likely being crowded out by the “supermajors.” A bust might broaden innovation and change the nature of deployment in ways that could ultimately feel more beneficial than our current trajectory.
You can listen to our conversation here.
See also:
Meta and Blue Owl are striking what is likely the largest private-capital deal ever in tech: nearly $30 billion in a special purpose vehicle to build a hyperscale data center. Meta would retain just 20% ownership and offload the balance to Blue Owl. I explain what it might mean here.
For much of the 2010s, AI progress followed a sort of rule: more compute meant bigger models, and bigger models meant better performance.
But by late 2024, the frontier labs found that this no longer held as cleanly. Models like GPT-4.5 met a lukewarm reaction: the performance gains were there, but the models’ size made them more expensive and slow. Scaling – or pre-training scaling, to be precise – seemed to have hit a wall in practical terms. At the same time, another emerging approach, scaling reinforcement learning (RL), was delivering exceptional performance gains. In this setting, RL means prompting an LLM to answer, judging the accuracy of its answers, and having it learn from the result. This loop powered the performance leaps of OpenAI’s o1 model and DeepSeek R1. So we arrived at a new paradigm in AI progress, but with an open question: does scaling still apply?
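For readers who want to see the shape of that loop, here is a minimal, self-contained sketch of verifier-based RL: sample answers, score them against a checker, and nudge the model towards answers that score above the group average. The toy model and verifier are stand-ins invented for illustration, not any lab’s actual pipeline.

```python
# Toy sketch of the answer -> judge -> learn loop; all components are stand-ins.
import random

class ToyModel:
    """Stand-in for an LLM: samples candidate answers and accepts updates."""
    def sample(self, prompt: str) -> str:
        return str(random.randint(0, 10))      # guess an answer

    def update(self, prompt: str, answer: str, advantage: float) -> None:
        pass                                   # a real trainer would adjust weights here

def verifier(prompt: str, answer: str) -> bool:
    return answer == "4"                       # e.g. checking "what is 2 + 2?"

def rl_step(model: ToyModel, prompt: str, num_samples: int = 8) -> None:
    """One reinforcement step: reward the answers the verifier accepts."""
    samples = [model.sample(prompt) for _ in range(num_samples)]
    rewards = [1.0 if verifier(prompt, s) else 0.0 for s in samples]
    baseline = sum(rewards) / len(rewards)     # group-relative baseline
    for s, r in zip(samples, rewards):
        model.update(prompt, s, advantage=r - baseline)

rl_step(ToyModel(), "what is 2 + 2?")
```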
In a new paper this week, researchers find that RL doesn’t follow an open-ended power law like pre-training. Instead, it traces a sigmoidal, S-shaped curve.
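The distinction matters because the two curve shapes imply very different returns to extra compute: a power law keeps giving smaller but non-zero gains, while a sigmoid flattens towards a ceiling. A rough illustration of the functional forms (parameters made up for illustration, not the paper’s fitted values):

```python
# Illustrative curve shapes only; parameters are invented, not fitted values.
def power_law_performance(compute: float, a: float = 0.5, b: float = 0.1) -> float:
    """Pre-training-style scaling: gains keep coming, just ever more slowly."""
    return 1.0 - a * compute ** (-b)

def sigmoid_performance(compute: float, ceiling: float = 0.9,
                        midpoint: float = 1e3, steepness: float = 1.5) -> float:
    """RL-style scaling: gains saturate towards a ceiling."""
    return ceiling / (1.0 + (midpoint / compute) ** steepness)

for c in [1e1, 1e2, 1e3, 1e4, 1e5, 1e6]:
    print(f"compute={c:.0e}  power-law={power_law_performance(c):.3f}  "
          f"sigmoid={sigmoid_performance(c):.3f}")
```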
The main bottleneck in AI has moved from raw computational power to the “method” – how we train and adapt models. As a result, progress has become less predictable by calendar or budget and more dependent on conceptual breakthroughs. Now is the time for ingenuity, for recalibration.
We’ve known for a while that certain domains remain stubbornly hard for AI. A group of (serious) AI researchers this week formalized this into a new definition of AGI: an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.
I love the ambition. I love the practicality – it provides a diagnostic frame that can guide research roadmaps. And they resist the urge to turn AI into a purely economic endeavour. But there are two drawbacks to address.