2025-06-01 09:31:47
Hi, it’s Azeem. This week, we’re seeing AI systems quietly achieve new levels of self-improvement, a subtle yet profound advance. Simultaneously, the global semiconductor map continues to be redrawn, with significant developments in China’s push for homegrown silicon. The practical consequences of these shifts are already tangible – from coding to workplace autonomy to war.
You’re in exactly the right place to understand what comes next. Let’s go!
China is breaking through chip controls. The country’s tech leaders have narrowed the gap in AI hardware, notably around inference; by some accounts they may be only a single development quarter behind the United States, with the remaining holdup being software, not silicon. Alibaba, Tencent and Baidu say rewriting their large-model pipelines from Nvidia’s CUDA platform to Huawei’s CANN toolkit will delay new AI development by about three months, not years. After that, day-to-day AI workloads can run on homegrown chips instead of imported Nvidia parts. Huawei is laying that foundation with its Ascend AI processors, and domestic chip-equipment maker AMEC is ramping up local production. In practice, the companies will keep training on their dwindling Nvidia inventory – where US technology maintains a reliable lead – while shifting the fast-growing inference workload to Ascend processors and other local silicon.
Jensen Huang says China’s $50 billion AI-chip market is now “effectively closed” to American vendors.
China is drafting a new Made in China plan focused on homegrown, high-end technology goods.
AI models can now learn to reason better simply by trusting their own confidence – no human feedback or gold-standard answers required. A new study out this week introduced a method called Intuitor, which rewards the outputs the model feels most confident about, creating a self-reinforcing loop that encourages elaborate, structured thinking. The method seems to improve learning how to learn: the model was trained only on math problems, yet after learning to trust its own judgment it became significantly better at coding too.
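For the technically curious, the core reward fits in a few lines. This is a minimal, hypothetical rendering – assuming, as this line of work reports, that “self-certainty” is measured by how far the model’s next-token distributions sit from uniform – not the paper’s actual code:

```python
import math
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Score a generated sequence by how peaked the model's own
    next-token distributions are: the average KL divergence from
    a uniform distribution, KL(p || U) = log(V) - H(p).
    Confident (peaked) outputs score high; flat ones score near 0.

    logits: (seq_len, vocab_size) raw model outputs per generated token.
    """
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)  # H(p), per token
    return (math.log(logits.shape[-1]) - entropy).mean()

# Sketch of the loop: sample k candidate answers per prompt, use
# self_certainty as the internal reward, and update the policy with a
# standard RL objective. No human labels or gold answers involved.
```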
Alongside the Intuitor result, a companion study shows an LLM inventing and iterating on its own algorithms, outperforming the best human-designed techniques on benchmarks. It transferred those gains to entirely new model combinations – evidence of AI refining the very tools of its self-improvement.
These are two of several recent papers that show a clear pathway to self-improvement. That pathway could offer powerful new capabilities, but it also requires vigilance against unintended behaviors like reward hacking.
The VC playbook – sprinkle dollars on lean software start-ups and let network effects do the compounding – no longer matches the geopolitical chessboard.
As John Thornhill notes, the cost curve has flipped: genAI breakthroughs depend on expensive data-centre build-outs and chip supply chains, so Big Tech and state-backed pools of capital (think Saudi Arabia’s $40 billion AI fund) are out-muscling traditional VCs. The median founding year of the companies that anchor Europe’s main equity index is 1892, versus 1946 in the US – a 54-year innovation deficit that exposes how rarely the continent refreshes its industrial leaders.
2025-05-31 18:23:31
Physics has laws. Gravity doesn’t care about your business model. But “technological laws” like Moore’s Law are really just observed patterns — and patterns can break.
That said, dismissing them entirely would be a mistake. These patterns have held up remarkably well across decades, and understanding them helps explain some of the most important economic shifts of our time. Why did solar power suddenly become cheap? Why can your phone do things that supercomputers couldn’t do 20 years ago? Why does everything seem to be getting faster, smaller and cheaper all at once?
The answer lies in these technological regularities — Moore’s Law, Wright’s Law, LLM scaling laws and more than a dozen others that most people have never heard of. They’re not iron-clad rules of the universe, but they’re not random either. They emerge from the fundamental economics of learning, scale and competition.
Today we have a slow, special weekend edition for you about what important technology “laws” actually tell us, why they work when they work and what happens when they don’t.
Whether you’re trying to understand the AI boom, the energy transition, or just why your laptop doesn’t suck as much as it used to, these patterns are your best guide to making sense of our accelerating world.
We prepared a scorecard to help you evaluate these technological laws based on their empirical basis, predictive accuracy, longevity, social and market influence, and theoretical robustness. This is only a snippet—for the full scorecard covering seventeen laws, read on!
Most readers know about Moore’s Law, coined by Intel co-founder Gordon Moore in 1965 when he was asked to predict what would happen in silicon components over the next 10 years. He predicted that transistor counts would double roughly every two years, driving exponential leaps in computing power.
Yet each anticipated limit, whether from physics or economics, has sparked new waves of innovation such as GPUs, 3D chip stacking and quantum architectures. Moore’s Law was never a single forever-exponential; it is better viewed as a relay race of logistic technology waves. The baton is still being passed – just more slowly, and with ever-higher stakes – so the graph still looks exponential even though the underlying sprint has become a succession of S-curves.
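The arithmetic behind the law is plain compounding. A toy projection (taking the 1971 Intel 4004 and its roughly 2,300 transistors as the assumed starting point):

```python
def doubling_projection(initial: float, years: float, period_years: float) -> float:
    """Project a quantity that doubles every `period_years`."""
    return initial * 2 ** (years / period_years)

# Moore's Law as compounding: ~2,300 transistors on the 1971 Intel 4004,
# doubling every two years, lands around 150 billion by 2023, which is
# the right order of magnitude for today's largest chips.
print(doubling_projection(2_300, 2023 - 1971, 2))  # ~1.5e11
```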
For decades, making computer chips smaller let engineers simply turn up the speed without using extra power – a trend called Dennard Scaling, which worked hand-in-hand with Moore’s Law. But physics pushed back. Smaller parts started to overheat and by the early 2000s manufacturers hit a ceiling: raising the clock speed any higher would melt the chip. The solution was to add more “brains” (cores) that share the work instead of asking one core to sprint faster and hotter. That shift to multi-core processors, explored in detail here, let performance keep climbing without turning your laptop into a space heater.
Still, innovations continued, reaffirming the adaptability of these foundational laws.
Raw computing muscle doesn’t matter if you can’t afford a place to keep all the data it creates. That’s where Kryder’s Law comes in: the cost of storing information has fallen so fast that we’ve gone from guarding a few megabytes on floppy disks to tossing terabytes into the cloud for pocket change.
Cheap, roomy drives unlocked everything from YouTube and Netflix to the photos filling your phone. But the parts we use today—magnetic platters and flash chips—are bumping up against hard physical limits. New ideas, like packing data into strands of DNA, could be a thousand times denser than the best solid-state drives, potentially restarting the price-drop roller-coaster.
If that sounds familiar, it should. Just as Moore’s Law kept shrinking transistors, storage has advanced in overlapping waves: fresh materials, clever techniques, better manufacturing. Up close each wave looks like an S-curve that flattens out, but zoom out and the combined effect is a long, steady surge that keeps powering the next frontier.
Similarly, Haitz’s Law has done for LEDs what Moore’s did for microchips.
Every few years, LEDs get brighter for the same amount of electricity while their price per lumen keeps falling. It’s the result of a thousand tweaks: better semiconductor materials, smarter chip designs, improved heat-sinking, and streamlined factory lines.
For those interested in a longer historical lens, the cost of lighting has plummeted by orders of magnitude since the 14th century, when tallow candles made with cow or sheep fat provided meager illumination at a high price.
Gas lamps in the 19th century and Edison’s incandescent bulbs in the 20th were both significant leaps, yet neither revolution can match the accelerating pace of modern LEDs.
Swanson’s Law is the solar-power version of a volume discount: each time the world’s total solar capacity doubles, the price of a panel drops by roughly 20%. Over a few short decades those repeated cuts have worked like compound interest in reverse – turning solar from an expensive science-fair project into, in many places, the cheapest way on Earth to make electricity. It’s a textbook example of how steady, exponential cost declines can flip a market on its head.
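For the curious, the experience-curve arithmetic is easy to sketch – a minimal version, assuming a constant 20% learning rate per doubling:

```python
import math

def experience_curve_cost(initial_cost: float, capacity_ratio: float,
                          learning_rate: float = 0.20) -> float:
    """Cost after cumulative capacity grows by `capacity_ratio`,
    falling by `learning_rate` with each doubling (20% for solar)."""
    doublings = math.log2(capacity_ratio)
    return initial_cost * (1 - learning_rate) ** doublings

# Ten doublings (roughly a 1,000x capacity build-out) at 20% per doubling
# leaves the panel price at about 11% of where it started.
print(experience_curve_cost(100.0, 1024))  # ~10.7
```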
Connectivity has had a profound impact on all our lives and there’s a set of laws that can tell us what’s going on. Nielsen’s Law is the “speed limit” that keeps jumping. Top-tier internet connections get about 50% faster every year, a journey that has taken us from the screech of dial-up to whisper-quiet gigabit fiber and made streaming, cloud gaming and video calls feel effortless.
Edholm’s Law says the walls between wired, wireless, and optical links keep crumbling. Today’s Wi-Fi 6 and Wi-Fi 7 routers pump out gigabit-class speeds that used to require an Ethernet cable, showing how quickly those once-separate lanes are merging.
Throw in Gilder’s Law (which boldly claimed total bandwidth in telecom networks might triple yearly) and Butters’ Law (the cost of transmitting a bit over optical networks halves every nine months), and you get a picture of nearly unstoppable expansion in capacity and falling network costs.
Even the airwaves have their own rule: Cooper’s Law, courtesy of cell phone pioneer Martin Cooper, states that the number of simultaneous wireless calls in a certain spectrum doubles about every 30 months.
In 1901, when Guglielmo Marconi first transmitted Morse code across the Atlantic, the technology was so primitive that his signal used a significant fraction of the world’s radio spectrum. Today, more than one trillion radio signals can be simultaneously sent without interfering with each other.
That’s why we can cram more and more people into the same frequencies without meltdown – assuming we keep improving our spectral efficiency with tech like MIMO, 5G or even future 6G. If Cooper’s Law continues at its current pace, by 2070, each person on Earth could theoretically use the entire radio spectrum without interfering with anyone else’s signals. Of course, practical deployments must still respect physical and regulatory limits, but the trend of maximizing capacity continues to hold.
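To see the raw doubling arithmetic (setting aside the population and demand assumptions the 2070 claim also rests on):

```python
def cooper_doublings(start_year: int, end_year: int,
                     months_per_doubling: int = 30) -> float:
    """Capacity doublings under Cooper's Law between two years."""
    return (end_year - start_year) * 12 / months_per_doubling

# 2025 to 2070 is 45 years, i.e. 18 doublings, so the same slice of
# spectrum could carry roughly 262,000 times as many simultaneous signals.
print(2 ** cooper_doublings(2025, 2070))  # 262144.0
```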
This is great news for networks themselves. Metcalfe’s Law suggests that the value of a network is proportional to the square of the number of users. This was the core of social media growth – each new user adds possible connections to everyone already on the network, so value grows far faster than the user count.
It has also been applied to bitcoin and enterprise valuations. This is a reminder that many of these laws matter most when enough people adopt the technology. In other words, scaling is crucial to turning a neat idea into a major change.
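The quadratic intuition fits in a snippet – counting possible pairwise links, with a per-link value as a stand-in assumption:

```python
def metcalfe_value(users: int, value_per_link: float = 1.0) -> float:
    """Metcalfe's Law: value tracks the number of possible pairwise
    links, n * (n - 1) / 2, which grows roughly as n squared."""
    return value_per_link * users * (users - 1) / 2

# Doubling the user base roughly quadruples the network's value:
print(metcalfe_value(2_000) / metcalfe_value(1_000))  # ~4.0
```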
2025-05-31 03:20:31
I was in conversation with Tyler Cowen just earlier today (thanks to everyone who joined us live!).
Headline growth will be modest, not miraculous. Tyler pushes back on predictions of a 10–25% relative boost to GDP growth, arguing that unavoidable frictions – energy supply, regulation, inertia – cap the AI dividend.
Markets are pricing in *interesting but incremental*. Currency, equity and bond markets look normal; if AI were about to explode growth, we’d see it.
Generations use LLMs very differently. Boomers treat ChatGPT as “Google-plus”; Gen-Z treats it as a permanent confidant, a shift Tyler thinks could even reshape spirituality.
Career strategy for twenty-somethings. Learn AI deeply, stay adaptable and anchor work in genuine passions; in a field moving this fast, nobody can get far ahead of you, but the old “good-grades” playbook is obsolete.
School is out of sync. Scan-and-grade homework and one-size-fits-all exams no longer make sense; faculty need to become mentors, yet education systems look “frozen,” risking a generation that feels unprepared.
Geopolitics will shift east and south. The EU risks falling behind under heavy regulation, while the Gulf states are racing to be a third AI pole.
You can’t really opt out. Choosing life without AI will soon feel like living without a smartphone – technically possible, but socially and economically costly.
Liberal institutions matter more than ever. Tyler considers free speech, competitive politics and transparent governance as the critical shields against state misuse of AI.
Marginal Revolution – Tyler Cowen and Alex Tabarrok’s blog for daily economics & culture riffs.
“AI will change what it means to be human. Are we ready?” – Tyler Cowen & Avital Balwit
2025-05-26 20:00:38
Hi all,
Here’s your Monday round-up of data driving conversations this week — all in less than 250 words.
Anthropic AI blackmail test ↑. Claude Opus 4 attempted blackmail in 84% of test scenarios when threatened with replacement.
xAI expansion ↑. Musk’s xAI is constructing a $25 billion supercomputer near Memphis, Tennessee, with 1 million GPUs – ten times the size of its initial build.
AI GPU growth revised downward ↓. JPMorgan cut AI GPU shipment forecasts for 2026 to +28% (from +38%) and for 2027 to +10% (from +17%), citing supply imbalances and shifting cloud provider strategies.
2025-05-25 09:30:30
Hi, it’s Azeem.
This week, we explore the shift from raw AI horsepower to systemic integration. Models are being wired into feedback loops, infrastructure and ecosystems. From Claude 4’s autonomous coding sprints to the rise of open agent protocols, the “agentic web” is no longer a theory – it’s being built. Across sectors, the race is on to operationalize AI.
Let’s go!
2025 was billed as the year of AI agents, and in many ways it has arrived. Google, OpenAI and Anthropic ship agents that code on demand and fetch citations in minutes. Codex can spin up a bug fix faster than you can draft a ticket. But the question is no longer how smart they are – it’s whether they can run unattended across systems. Today, they can’t – even a 1% hallucination rate can unravel a long task chain.
Still, progress is unmistakable. Rakuten, a Japanese tech conglomerate, let Claude 4 refactor code for seven hours with zero intervention. Each win is still shadowed by lethal slips – Claude 4, for all its impressiveness, still bungled “What’s 9.9 minus 9.11?” – proof that a 1 percent error rate still matters.
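The back-of-envelope math explains why. Assuming independent errors, per-step reliability compounds mercilessly over a long task chain:

```python
def chain_success(per_step_success: float, steps: int) -> float:
    """Probability of finishing `steps` dependent actions without a
    single failure, assuming errors are independent."""
    return per_step_success ** steps

# A 99%-reliable step looks fine in isolation, but not in a long chain:
print(chain_success(0.99, 100))  # ~0.37: most 100-step runs fail somewhere
print(chain_success(0.99, 500))  # ~0.007: 500-step runs almost never finish
```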
Microsoft CTO Kevin Scott calls this build-out the “agentic web,” a mesh of AIs working through shared protocols. Apple’s forthcoming Intelligence SDK will hand those sockets to every developer, and emerging standards – MCP for tools, Agent2Agent for AI-to-AI chat – are becoming the thread tape.
The pipes are going in, the journeyman is still on probation, but every new coupling cuts leaks and gets us closer to “hands-free” AI.
See also:
Sam Altman and Jony Ive’s AI device collaboration targets late 2025 – will new interfaces push agent potential further?
Google unveiled Veo 3, an AI model that generates talking video. Its realism adds to worries about a flood of synthetic media.
Google’s inference load soared 50-fold in just 12 months, from 10 trillion tokens in April 2024 to 480 trillion in April 2025. More users played a role, but the bigger driver is the rise of reasoning models, which consume about 17 times as many tokens as predecessors because they run long internal chains of thought.
This exponential growth will continue as agentic workloads become more commonplace.
For instance, I spent 2 million tokens building a small video game with Claude in under an hour; a typical chatbot session might use only 10,000 tokens in the same time. Chatbots wait for humans to type, while agents run continuously, limited only by available compute. That translates into steep prices: Google AI Ultra costs $250 per month, Claude Max $100–200, and ChatGPT Pro $200. OpenAI is reportedly thinking of a $20,000-per-month tier.
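For a rough feel of the economics – using an assumed blended rate of $10 per million tokens, purely for illustration:

```python
# Assumed blended rate for illustration only; real per-token prices
# vary widely by model, provider and context length.
PRICE_PER_MILLION_TOKENS = 10.0  # USD, hypothetical

def session_cost(tokens: int) -> float:
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(session_cost(10_000))     # $0.10: a typical hour of chatbot use
print(session_cost(2_000_000))  # $20.00: one hour of agentic game-building
```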
Last week, we showed how training costs keep sliding down predictable curves, but inference is different. Although Moore’s Law and better algorithms cut the cost per token, demand still grew 50 times last year, far faster than efficiency gains. Whether the economics of inference can keep up with our appetite for longer contexts, deeper reasoning, and always-on agents is an open question.
We may need to apply hard budgets to autonomous agents – just like the robot police in THX 1138, who abandon a chase once the cost crosses a set threshold. This kind of cost-governed autonomy, where agents must justify, cap or cancel actions based on compute limits, could become a defining constraint. The inference bottleneck won’t kill agentic AI, but it might force it to act with surgical precision.
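A minimal sketch of what cost-governed autonomy could look like; the `Step` type and its cost field are hypothetical placeholders, not any vendor’s API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    estimated_cost: float        # projected compute spend in USD (hypothetical)
    execute: Callable[[], None]  # the action itself (placeholder)

@dataclass
class BudgetedAgent:
    budget_usd: float
    spent_usd: float = 0.0

    def run(self, plan: List[Step]) -> str:
        for step in plan:
            if self.spent_usd + step.estimated_cost > self.budget_usd:
                return "abandoned: budget exceeded"  # the THX 1138 rule
            step.execute()
            self.spent_usd += step.estimated_cost
        return "completed"

# An agent with a $5 cap abandons a plan whose steps would overrun it:
agent = BudgetedAgent(budget_usd=5.0)
print(agent.run([Step(2.0, lambda: None)] * 3))  # abandoned after $4 spent
```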
AI consumes energy; everything does. The real issue is whether the electricity we spend on AI will help us fight climate change. I believe it will.
RAND projects that by 2030 AI could draw about 327 GW – roughly 3.6–4 percent of today’s global generating capacity. That load is significant, but AI can also discover ways to shrink its own footprint and accelerate climate solutions.
Consider Microsoft’s Discovery platform, which designed a PFAS-free data-center coolant in hours instead of years and FutureHouse’s Robin AI, which identified promising age-related macular degeneration drug candidates in weeks. The same processors driving up power bills can uncover breakthroughs that cut them.
Local strain is real. Collectively, around a dozen hyperscaler projects in Nevada have asked NV Energy for nearly 6 GW of new electricity capacity – about 40 percent of the state’s entire grid. Governments should insist that new data centers run as much as possible on clean power and recycle waste heat.
Is the trade-off worth it? Yes. If AI expansion is paired with aggressive clean energy build-outs, the technology will help decouple economic growth from emissions. Limiting AI today to save energy would throttle a tool that can slash energy use tomorrow.
See also:
Rapid “event-attribution” studies now use AI to link extreme weather to climate change in weeks. The World Weather Attribution group has run more than forty such analyses, giving hard numbers on how warming raised the odds or severity of events.
In recent years, automakers have fought for the industry’s future. We previously argued that Germany’s automotive sector faced an existential threat. Today it is clear that China has won the electric vehicle hardware race.
Evidence is everywhere. BYD claims its batteries can recharge in five minutes. Xiaomi delivers Ferrari-like styling at Volkswagen prices, and Morgan Stanley projects Xiaomi will earn $32 billion in automotive revenue by 2027 – about the size of Tesla’s entire auto business in 2020. China also controls the entire supply chain – batteries, motors and electronics. The advantage is not only speed but an ecosystem density the West cannot replicate.
Hardware dominance, however, is only half the story. Morgan Stanley analyst Adam Jonas notes that value is migrating from hardware to software. Tesla’s pivot from building the “best car” to achieving the “best autonomy” shows that once hardware commoditizes, software differentiates. Profit pools now lie in over-the-air updates, data services, and robotaxi networks. Algorithms and regulatory approvals, not sheet metal, create the new moats.
China owns the hardware victory, but the software endgame remains wide open. Either Tesla wins here or it doesn’t win at all.
See also:
Xiaomi will invest $6.9 billion in chipmaking, pursuing vertical integration that reaches beyond its electric-car unit.
An analyst argues China is moving ahead of the United States in key technologies – progress, he says, that accelerated during Trump’s trade war.
China now runs a fleet of 100 autonomous mining trucks, showing how industrial AI is scaling in heavy equipment.
One commentator warns the United States looks like a “late-stage republic” and must rebuild technological and cultural strength to compete with China. See my conversation about the state of the US here.
Separate AI models trained on different data still learn the same hidden map of meaning. This hints that concepts share an underlying universal structure.
Berkeley’s automated A-Lab runs robots around the clock. It tests 50–100 times more material samples per day than a human team.
Stephen Wolfram says bigger brains could tap pockets of simple computation. These pockets would let them form higher-level ideas and handle larger mental spaces.
Anthropic has switched on AI Safety Level-3 safeguards for Claude Opus 4 because the company can’t yet rule out the possibility that the model could materially aid CBRN weapon projects.
A statistical study of Myanmar, Sri Lanka, Thailand, and Singapore finds that state privilege – not Buddhist belief – drives Buddhist violence against minorities.
AI executives now call data centers “AI factories.” The term seeks subsidies and casts the sites as strategic assets even though they employ few people.
By choosing Robert Prevost as Pope Leo XIV, the Church signals an alternative model of American global leadership grounded in humility.
Ukraine has approved the Krampus robot, which fires thermobaric rounds (in effect, a flamethrower). This brings its roster of armed ground robots to more than 80.
Olivier Blanchard’s new NBER paper reviews 40 years of mainstream macroeconomics. He gives the field a maturity score of 7.5 out of 8 but notes its weak forecasting record.
Thanks for reading!
CBRN refers to chemical, biological, radiological and nuclear weapons.
2025-05-24 02:00:39
What happens when AI writes more of our code than we do?
In today’s Live show, I spoke with Thomas Dohmke, CEO of GitHub, the epicentre of modern software development, about the transformative impact AI is having on the way we build software.
We explored:
How AI, including tools like GitHub Copilot, fundamentally reshapes the developer's role from coding to problem-solving and creativity.
Why AI tools like Copilot dramatically increase developer productivity by automating repetitive tasks, enhancing flow, and enabling quicker prototyping.
The implications of AI-driven coding in enterprise environments.
What a world with a billion developers looks like, and how personalised software might empower every user to innovate.
The rise of an agent-centric web, where interactions are increasingly automated and context-aware.
A playful take on how AI could revolutionise sorting your Lego pieces (because who doesn't need help with that?).
Enjoy!
📅 Catch me live every Friday at 9am PT | 12pm ET | 5pm UK.