Uncharted Territories

By Tomas Pueyo. Understand the world of today to prepare for the world of tomorrow: AI, tech; the future of democracy, energy, education, and more.

Peak Oil Is Coming

2026-02-03 21:02:43

Renewable electricity is so cheap that it’s taking over the world. It will replace most fossil fuels: in power generation, car propulsion, heating… When it does, the budgets of dozens of countries will be destroyed because they mostly rely on oil and gas production today. What will happen to these countries? To global geopolitics? This is what we’ll explore in this series, starting by asking ourselves: When will oil sales start shrinking?

When the Oil Runs Out

Energy has driven the geopolitics of the last two centuries:

First, the expansion of coal in the 19th Century drove the Industrial Revolution. Then, the expansion of oil and gas in the 20th fueled the world’s wealth explosion.

Everybody won, but the suppliers of oil and gas won an outsized return. The USSR was only viable as long as oil prices were high. Today, Russia’s war in Ukraine is financed by Russian gas.

Many countries are dead without oil and gas income, as their entire governments’ budgets depend on these resources:

So what happens if fossil fuel incomes crater? That’s entirely possible. The share of all world energy coming from fossil fuels is shrinking:

And this shrinking is accelerating because of electricity.

The world is electrifying, and that will accelerate: Unfortunately for fossil fuel countries—and fortunately for the world’s climate—renewable energy will completely take over electricity generation.

The share of electricity generation from wind and solar was 13.5% in 2023, 15% in 2024, and 18% in 2025. This trend is accelerating! Source.

The fact that solar generation in particular is accelerating can be seen through the installed base of solar capacity:

This exponential is fueled by the virtuous cycle of production and costs:

Indeed, solar costs keep shrinking:

That’s just for solar panels, but overall solar electricity generation costs are also shrinking and will continue to do so.

The Sun only shines during the day, but batteries bring sunshine to the night, and their cost is shrinking too.

Which is why battery installations are exploding too.

It doesn’t take a genius to connect the dots:

  1. Solar is already the cheapest source of electricity, and its costs keep shrinking. Together with batteries, their combined cost will keep falling relative to the alternatives.

  2. Installed capacity will continue soaring globally to cover demand. What we’re seeing in China will happen everywhere.

  3. This will further accelerate electrification: Everybody wants cheap energy! Electric vehicles will replace internal combustion engines (ICE) faster, electric heat pumps will replace gas-fueled heaters, electric arc furnaces will replace combustion ones…

  4. As electricity eats up global energy consumption, via solar and batteries, the demand for oil & gas will plummet.

  5. The countries whose economies and government budgets depend on oil & gas…

What happens to them? Well, it depends on when all of this happens.

When Peak Oil?

This is no easy calculation. Dozens of organizations project demand for oil and gas in the coming decades, but this is the type of stupid mistake they make:

And conversely, for coal:

Why are these forecasts so flawed? I think one reason is vested interests: For example, of course OPEC forecasts see an increase in oil demand by 2045!1 Its existence depends on it.

The other reason is that people assume the world will continue with business as usual. They don’t realize that, in energy, transitions can be extremely fast.

Fast Transitions

Look at transportation:

Here’s industrial heating:

Here’s lighting:

Lighting went from less than 5% electric to over 90% in less than 20 years!

At the beginning, new technologies take some time to figure out, but once they’re ready, uptake can be vertiginous.

Solar, wind, and batteries seem to be on that path.

If we assume that’s indeed the case, how will oil demand change in the coming decades?

1. Total Energy Demand

Energy demand growth has been quite steady for decades. Let’s assume it continues.

Here we’re assuming energy consumption will continue the same path as in the last 6 decades. Is that fair? On one side, population growth has already started declining, so GDP growth will decelerate, and that will shrink energy demand. We’re also becoming more efficient. However, AI is also arriving, which will dramatically increase electricity demand. All in all, I think it’s a fair bet that energy will continue growing as it has.

2. Electrification

Here’s where we face a much harder problem.

It’s clear to me that electrification will start accelerating due to cheaper electricity prices from renewables, which will prop up electric vehicles (EVs), heat pumps, electric arc furnaces, and other electrification technologies. How do you model this?

The future is already here, it’s just not evenly distributed.—William Gibson

According to this, China’s consumption of O&G will shrink by 40% by 2050 and 60% by 2060. I personally think that the transition will be much faster, because of solar.

3. The Solar Revolution

If you just project the growth of the last few years into the future, you get this:

The amount of solar capacity that China is installing is so massive that, if the annual growth continues, solar electricity would surpass the trend line of all electricity generation within 10 years! Of course that’s impossible, so what would happen instead is that:

  1. This would accelerate electrification. That’s why China is a leader in solar panels, batteries, and EVs: The three technologies go hand in hand.

  2. Primary energy would also grow faster, given such cheap electricity prices.

  3. China will flood the world market with these electric devices.

  4. Solar capacity growth will have to slow down in the coming years.
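The dynamic in point 4 is just compound growth outrunning a slower trend. A minimal sketch, with made-up growth rates (30%/year for solar, 3%/year for total generation) standing in for the real figures:

```python
# Compound growth outrunning a slower trend. All numbers are
# illustrative assumptions, not the article's actual data.
def project(initial, annual_growth, years):
    """Project a quantity forward at a constant annual growth rate."""
    return [initial * (1 + annual_growth) ** t for t in range(years + 1)]

# Hypothetical: solar starts at 10% of today's generation and grows
# 30%/year; total electricity generation grows 3%/year.
solar = project(10.0, 0.30, 10)
total = project(100.0, 0.03, 10)

crossover = next(t for t, (s, tot) in enumerate(zip(solar, total)) if s >= tot)
print(f"Solar overtakes the total-generation trend in year {crossover}")
```

With these toy numbers the crossover lands in year 10; whatever the exact rates, an exponential eventually has to bend rather than pass total demand.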

For that to be true, we should be seeing exponential growth in China’s solar generation, electricity generation, EVs, heat pumps… Is that what we see?

Yes for solar.

Yes for electricity (although of course its growth must look less aggressive because solar is still just a tiny part). To put this in context:

In 2024, the total installed electricity capacity of the planet—every coal, gas, hydro, and nuclear plant and all of the renewables—was about 10 TW. The Chinese solar supply chain can now pump out 1 TW of panels every year. Source

Let me repeat that: Every year, China can produce solar capacity equivalent to 10% of all the electricity capacity installed in the world today!

This is why China (and India) have cut emissions from electricity generation for the first time in decades.

What about cars?

Yes for electric vehicles. For the first time, sales of EVs have surpassed those of ICE.

What about heat pumps?

Annual sales of heat pumps (in millions of units) around the world

It’s less true for heat pumps, although I’d assume these would take longer to penetrate the market, because:

  • Electricity wasn’t so cheap in the past

  • The real estate market crashed in 2020-21

  • Once a heating system is installed, replacing it only pays off with huge energy savings, and electricity prices are not quite there yet (but will be).

I think what’s clear is this: By 2050, the share of China’s primary energy coming from O&G will be tiny.

And if you think this is just China…

Exponentials everywhere! Look at these lines and try to project them into the future. No, actually, let me do that for you:

After a great start, EV sales in Europe didn’t look so good for the last few years, but now it’s finally true: There are more EV sales than ICE.

I think the slowdown in EV sales is because of a series of one-off issues:

  • Richer customers bought Teslas fast, but Elon Musk’s politicization slowed that down

  • EV fiscal support has shrunk, making them more expensive to buy for citizens

  • The charging infrastructure isn’t quite there yet

  • Europeans can’t yet feel the reduction in electricity prices from solar

  • Car makers focused on premium models (which competed against Tesla), when the true market was in cheap EVs like those of China’s BYD

But it’s not just China and the rich world. Medium income and poor countries are seeing similar trends. The Pakistani story is especially interesting: In a country where electricity is expensive and unreliable, people have flocked to solar so quickly that it has gone from less than 1% of electricity generation to more than 10% in 5 years! The more cheap electricity there is, the more EVs people buy.

In Turkey, sales of EVs and hybrids2 are soaring so much that, even in a growing overall car market, gasoline and diesel cars are shrinking.

Look at Indonesia!

Dots are actuals, the red line is the estimate. Indonesia is on track to go from 20% of car sales as EVs to 80% in less than two years! Other countries that will transition in less than 10 years include Singapore, Denmark, Uruguay, Thailand, Malaysia, Albania, Poland, Turkey, Brazil, Norway, Belgium, Finland, Sweden, Chile, Luxembourg, China, Portugal, and the Netherlands. That’s as of today. I believe many more countries will qualify: They’re laggards to start, but will accelerate as the entire world starts buying EVs. Source.

This will happen across the world, especially in places where electricity from the grid is expensive and unreliable but the Sun shines a lot, like Africa.3

It might take a bit longer there, because poor countries don’t buy new cars (only second-hand ones), and the EV car market is not old enough yet,4 but it will happen within a decade.

All these trends are the reason why so many models predicting O&G demand are off:

  • They try to figure out primary energy and electricity from past trends

  • Except you can’t do that because renewables are coming in like a bullet train in a china shop. They will upend everything, drive prices to the bottom, and with that electricity consumption will grow faster than in the past.

  • This will drive a massive electrification of the world, which will increase overall energy consumption, but it will shrink the share of that coming from O&G.

What if you take the share of all energy coming from renewables and assume it keeps accelerating at the same pace as in the last 20 years? What would fossil fuel energy look like in that case?


According to this, fossil fuels would peak by the early 2030s:

  • The first to crash would be coal, later followed by oil & gas.

  • Gas remains stable for the longest.

  • Peak oil seems to happen in the early 2030s.
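That extrapolation exercise can be sketched in a few lines. Every parameter below (starting share, doubling time, total energy growth) is an illustrative assumption, not a fit to real data; the point is only that an accelerating renewable share mechanically produces a fossil peak:

```python
# Sketch of the projection described above: extrapolate the renewable
# share of primary energy, then back out fossil demand. Every parameter
# is an illustrative assumption, not fitted to real data.
def fossil_demand(year, base_year=2025, total0=600.0, total_growth=0.015,
                  share0=0.08, share_doubling_years=7.0):
    t = year - base_year
    total = total0 * (1 + total_growth) ** t                     # total energy, EJ
    share = min(1.0, share0 * 2 ** (t / share_doubling_years))   # renewable share
    return total * (1 - share)                                   # fossil energy, EJ

peak = max(range(2025, 2046), key=fossil_demand)
print("Fossil demand peaks around", peak)
```

With these toy numbers the peak lands around 2030; the qualitative shape (rise, peak, decline) survives a wide range of parameter choices.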

The Net Zero Proxy

For years, society has been dreaming of Net Zero: A world where all governments get together to limit their own emissions. It turns out the naiveté of rich countries was exposed by poor countries when they said: “No way we’re remaining poor.”

But what we’re saying here is that something akin to that is going to happen. Not because governments can agree—they can’t agree on anything—but because the economics of new tech overwhelm politics. What it means is we can take the Net Zero modeling exercises and use them as a proxy for what might happen. BP has a good one:

This model shows how carbon emissions were going to peak in the coming years regardless, and shrink from here on. The question is whether the decline will be small or large, and my model suggests it’s closer to the blue line than the green.

The shrinking would be due to renewables and batteries (“power” below), industry (both electricity and heat, which can be achieved through heat pumps and electric arcs), and transportation (EVs):

So it doesn’t look like we’re far off.

Takeaways

If solar, wind, batteries, and EVs keep going as they have been, peak oil will come soon, and by 2050, demand for oil and gas will have shrunk considerably. It’s the dusk of the age of fossil fuels.

This will be great for the environment!

But the consequences for geopolitics are up in the air.

What will happen then to countries like Russia, Venezuela, or Saudi Arabia?
Will their economies crater?
Will this reorganize global geopolitics?

That’s what we’re going to explore in the next articles.


1

Just to give you an example: This OPEC study projects the demand for oil & gas (O&G) to increase from now to 2045. Why? It assumes:

  • An optimistic increase in global population

  • That electrification won’t go as fast as China suggests—notably OPEC assumes that in poor countries, people will buy lots of cars with internal combustion engines (ICE)!

  • That oil & gas electricity generation will barely budge!

  • That electric heat pumps won’t take over the heating market!

Of course, OPEC has a vested interest in O&G demand increasing, so we can’t blame them. But everybody has vested interests, making it really hard to estimate when it will peak and shrink.

2

In early markets, hybrids always prevail because there’s not enough electric infrastructure. As it develops and people can charge their cars more easily, the share of EVs increases.

3

And as we know, warmer countries are poorer.

4

Plus, battery aging will be a problem there. What’s most likely to happen is that 10-year-old EVs will find themselves in places like Africa, coupled with new batteries, which will be much cheaper then.

The $100M Worker

2026-01-30 16:01:48

Tech companies are trying to attract workers with over $100M compensation packages. How can this make sense?

It does.

After today’s article, not only will you understand the logic. You’ll wonder why there aren’t more companies doing the same, and spending even more money.

Not only that. You’ll also understand how fast AI will progress in the coming years, what types of improvements we can expect, and by when we’re likely to reach AGI.


Here’s what we’ve said so far in AI:

First: We’re investing a lot in AI, but there’s massive demand for it. This makes sense if you believe we’re approaching AGI.1 So it’s unlikely that there is an AI bubble.

Second: It’s likely that we’re approaching AGI because we’re improving AI intelligence by orders of magnitude every few years. As long as we can keep improving the effective compute of AI, its intelligence will continue growing proportionally. We should hit full automation of AI researchers within a couple of years, and from there, AGI won’t be far off.

Third: So how likely is it that we’re going to continue growing our effective compute? It depends on how much better our computers are, how much more money we spend every year, and how much better our algorithms are. On the first two, it looks like we can keep going until 2030.

  • Computers will keep improving about 2.5x every year, so that by the end of 2030 they will be 100x better.

  • We will also continue spending 2x more money every year, which adds up to 32x more investment by the end of 2030.

These two together mean AI will get 3,000x better by 2030, just through more (quantity) and more efficient (quality) computers.
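The arithmetic behind those numbers, spelled out:

```python
# The back-of-the-envelope math above: hardware improves ~2.5x/year and
# spending doubles each year, compounded over the 5 years to 2030.
years = 5
hardware = 2.5 ** years   # ~98x, i.e. "about 100x better" computers
spending = 2.0 ** years   # 32x more investment
combined = hardware * spending
print(f"hardware ~{hardware:.0f}x, spending {spending:.0f}x, combined ~{combined:,.0f}x")
```

2.5^5 ≈ 98 and 2^5 = 32, so the combined ~3,125x is what the article rounds to 3,000x.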

Fourth: But we also get better at how we use these machines. We still need to make sure that our algorithms can continue improving as well as they have until now. Will they? That’s what we’re going to discuss today.

In other words, today we’re going to focus on the two upper areas of the graph below.

Algorithm optimization and unhobbling refer to similar things:

  • Algorithm optimization means improving them little by little, through intelligent tweaks here and there.

  • Algo unhobbling means big, radical changes that are uncommon but yield massive improvements.2

Let’s start by looking at how things have evolved in the past.

1. How Much Have We Optimized Algorithms?

This is how much algorithms for image classification improved in the 9 years between 2012 and 2021:

This paper recorded how well AIs progressed in their ability to classify images, based on a given amount of compute and data. Each year formed a line. And every year, these AI algorithms got better and better. On average, in 9 years, they improved by ~4.5 orders of magnitude (OOMs), for ~0.5 OOM improvements per year.

Crucially, you can see the distance between lines is either the same or increases over time, suggesting that algorithmic improvements don’t slow down; they actually accelerate!

That’s for images. This paper found that, between 2012 and 2023, every 8 months LLM3 algorithms required half as much compute for the same performance (so they doubled their efficiency). That translates to ~0.5 OOM improvements per year, too.

As you can see, they report a doubling every ~8.7 months or so, but if you look at the other papers they quote across fields, doubling looks more like every 10-15 months. If we take 12 months, that’s 0.3 OOMs per year. That said, most of these fields had only a fraction of the investment we’re making in LLMs today, so it’s quite reasonable to expect the higher ~0.5 OOMs per year figure. Source for the paper.

This paper looked at general AI efficiency improvements between 2012 and 2023 and saw a 22,000x improvement, which backs out to 0.4 OOMs per year.4

So across the board, it looks like we can get ~0.4 to 0.5 OOMs of algorithmic optimization per year. We won’t have that forever, but we have had it for a bit over a decade, so it’s likely that we’ll continue enjoying it for at least 5 years. If so, by 2030, we should have optimized our algorithms by ~300x.
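Converting between “efficiency doubles every N months” and OOMs per year is a one-line calculation:

```python
import math

# Converting "efficiency doubles every N months" into orders of
# magnitude (OOMs) per year, as in the estimates above.
def ooms_per_year(doubling_months):
    doublings_per_year = 12 / doubling_months
    return doublings_per_year * math.log10(2)

print(f"8-month doubling:  {ooms_per_year(8):.2f} OOMs/year")   # ~0.45
print(f"12-month doubling: {ooms_per_year(12):.2f} OOMs/year")  # ~0.30
```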

2. What about Unhobblings?

The three papers I mentioned above quantify improvements at the heart of algorithms, but not the things we do on top of them to make them better, called unhobblings.5 These include things like reinforcement learning, chain-of-thought, distillation, mixture of experts, synthetic data, tool use, using more compute in inference rather than pre-training…

I covered some of these in the past when I discussed DeepSeek, but I think it’s quite important to understand these big paradigm shifts: They are at the core of how AIs work today, so they are basically the foundations of god-making. So I’m going to describe some of the most important ones. If you know all this, just jump to the next section.

Most Important Unhobblings

1. Supervised Instruction Fine-Tuning (SFT)

LLMs are at their core language prediction machines. They take an existing text and try to predict what comes next. This was ideal for things like translations. They were trained to do this by taking all the content from the Internet (and millions of books) and trying to predict the next word.

The problem is that text online is not usually Q&A, but a chatbot is mostly Q&A. So LLMs didn’t usually interpret requests from users as questions they had to answer.

Supervised instruction fine-tuning does that: It gives LLMs plenty of examples of questions and answers in the format that we expect from them, and LLMs learn to copy it.
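A minimal sketch of what that data preparation looks like; the “User:/Assistant:” chat template below is made up for illustration (real labs use their own formats):

```python
# A minimal sketch of SFT data preparation: raw Q&A pairs are rendered
# into the chat format the model should learn to imitate.
examples = [
    {"question": "What is the capital of France?", "answer": "Paris."},
    {"question": "Translate 'hello' to Spanish.", "answer": "Hola."},
]

def to_training_text(example):
    # During fine-tuning, the model is trained to predict the text
    # that follows "Assistant:".
    return f"User: {example['question']}\nAssistant: {example['answer']}"

corpus = [to_training_text(e) for e in examples]
print(corpus[0])
```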

2. Reinforcement Learning with Human Feedback (RLHF)

Even after instruction tuning, the models often gave answers that sounded fine but humans disliked: too long, evasive, unsafe, or subtly misleading.

So humans were shown multiple answers from the model and asked which one they preferred, and we trained the AIs on that.

What’s the difference between these two? The way I think about them is that SFT was the theory for AIs: It gave plenty of examples, but didn’t let them try. RLHF asks AIs to try, and corrects them when they’re making mistakes. RLHF is practice.

3. Direct Preference Optimization (Reinforcement Learning Without Human Feedback)

We then took all the questions and answers (both good and bad) from the previous approach (RLHF) and other sources and fed them to models,6 telling them: “See this question and these two answers? Your answer should be more like the first one, and less like the second one.”
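The standard loss behind this idea (Direct Preference Optimization) can be computed by hand. The log-probabilities below are made-up numbers standing in for real model outputs:

```python
import math

# Sketch of the Direct Preference Optimization loss for one example:
# raise the model's probability of the preferred answer relative to a
# frozen reference model, and lower the rejected one.
def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1 / (1 + math.exp(-margin)))  # -log(sigmoid(margin))

# A positive margin (the model prefers the chosen answer more strongly
# than the reference does) yields a small loss.
print(round(dpo_loss(-5.0, -9.0, -6.0, -8.0), 4))
```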

4. Reinforcement Learning on High-Quality Data

In scientific fields, answers are either right or wrong, and you can produce them programmatically. For example, you create a program that calculates large multiplications, so you know the questions and answers perfectly. You then feed them to an AI for training. You can also generate bad answers to help the AI discern good from bad.

The result is that, the more scientific a field, the better LLMs are. That’s why Claude Code is so superhuman, and why LLMs are starting to solve math problems nobody has solved before.

5. Constitutional AI

Human feedback was expensive and inconsistent, especially for safety and norms, so we gave AIs explicit written principles (e.g. be honest, avoid harm) and trained them to critique and revise their own answers using those rules.

For example, Anthropic recently updated Claude’s constitution.

6. Chain-of-Thought

When you’re asked a question and you must answer with the first thing that comes to mind, how well do you respond? Usually, pretty badly. AIs too.

Chain-of-thought asks them to break down the questions being asked, then proceed step by step to gather an answer. Only when the reasoning is done does the AI give a final answer, and the reasoning is usually somewhat hidden.

Why does this work? The thinking part is simply a section where the LLM is told “The next few words should look like you’re thinking. They should have all the hallmarks of a thinking paragraph.” And the LLM goes and does exactly that. It basically produces paragraphs that mimic thinking in humans, and that mimicry turns out to have a lot of the value of the thinking itself!

Here’s another example:

Chain-of-thought basically means that you force the AI to think step by step like an intelligent human being, rather than like your wasted uncle spouting nonsense at a family dinner.

Before this was integrated into the models, you could kind of hack it with proper prompting. For example, by telling the AI “Please first think about the core principles of this question, then identify all the assumptions, then question all the assumptions. Then summarize these principles and validated assumptions, until you can finally answer the question.”
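That prompting hack can be sketched as a simple wrapper; the exact wording below is illustrative:

```python
# Sketch of the pre-reasoning-model hack described above: wrap the
# question in explicit step-by-step instructions.
def chain_of_thought_prompt(question):
    return (
        "First, think about the core principles behind this question.\n"
        "Then identify and question every assumption.\n"
        "Reason step by step, and only then give a final answer.\n\n"
        f"Question: {question}"
    )

print(chain_of_thought_prompt("Why is the sky blue?"))
```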

Now start combining some of the principles we’ve outlined already, and you can see how well they work together. For example, it’s very hard to learn math just by getting a question and the answer. But if you get a step-by-step answer, you’re much more likely to actually understand math. That’s what’s happening.

One consequence of this is that we no longer use compute mostly to train new models. We also use compute to run existing models, to help them think through their answers.

7. Distillation

Distillation is a nice word for extracting intelligence from another LLM.

Researchers ask a ton of questions to another AI and record the answers. They then feed the pairs of question-answers into their new model to train it with great examples of how to answer. That way they train the new AI with the intelligence of an existing AI, at little cost.

This is one way of producing synthetic data, and it’s especially useful for cheap models to get a lot of the value of expensive ones, like DeepSeek distilling OpenAI’s ChatGPT:

The Chinese keep doing it.7 Apparently the Chinese model Kimi K2.5 is as good as the latest Claude Opus 4.5, it’s 8x cheaper, and it believes it’s Claude when you ask it.

This is interesting: The better our AIs are, the more we can use them to train the next AIs, in an accelerating virtuous cycle of improvement.

It’s especially valuable for chain-of-thought, because if you copy the reasoning of an intelligent AI, you will become more intelligent.

8. Mixture of Experts

Do you have a PhD in physics and gender studies? Are you also a jet pilot? No, each of these requires substantial human specialization. Why wouldn’t AIs also specialize?

That’s the concept behind mixture of experts: Instead of creating one massive model, developers create a bunch of smaller models, each specialized in one area like math, coding, and the like. Having many small models is cheaper and allows each one to focus on one area, becoming proficient in it without tradeoffs from attempting other specializations. For example, if you need to talk like a mathematician and a historian, you will have a hard time predicting the next word. But if you know you’re a mathematician, it’ll be much easier. Then, the LLM just needs to identify the right expert and call it when you’re asking it a question.

Note that this is something you could partially achieve with simple prompting before. You might have heard advice to tell LLMs things like You’re a worldwide expert in social media marketing, used to make the best ads for Apple. This goes in the same direction.
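The routing idea can be made concrete with a toy router: a gating function scores each expert for the input and the top one answers. The keyword-based gate and canned experts below are stand-ins for learned networks:

```python
import math

# Toy mixture-of-experts routing: a gate scores each expert for the
# input and only the top-scoring expert runs.
EXPERTS = {
    "math": lambda q: "math-expert answer",
    "history": lambda q: "history-expert answer",
}

def gate_scores(question):
    # Stand-in for a learned gating network: keyword-based scores.
    return {
        "math": 2.0 if any(c.isdigit() for c in question) else 0.1,
        "history": 1.0,
    }

def route(question):
    scores = gate_scores(question)
    z = sum(math.exp(s) for s in scores.values())            # softmax, for
    probs = {k: math.exp(s) / z for k, s in scores.items()}  # illustration
    best = max(probs, key=probs.get)                         # top-1 routing
    return best, EXPERTS[best](question)

print(route("What is 12 * 7?"))
print(route("Who unified Germany?"))
```

In a real model the gate and experts are trained jointly, and several experts can be active at once; top-1 routing just keeps the sketch small.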

9. Basic Tool Use

What’s better, to calculate 19,073*108,935 mentally, or to use a calculator? The same is true for AI. If they can use a calculator in this situation, the result will always be right.

This also works for searches. Yes, LLMs have been trained on all the history of Internet content, but all of that blurs a bit. If they can look up data in search engines, they’ll stop hallucinating false facts.
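A toy version of tool dispatch: the “model” spots an arithmetic question, calls a calculator instead of guessing token by token, and splices in the exact result. The dispatch logic and format here are made up:

```python
import re

# Toy tool use: detect an arithmetic question, call a calculator tool,
# and return the exact result instead of a token-by-token guess.
def calculator(expression):
    # Restricted evaluator: only digits, commas, whitespace, and * + - / ( ).
    if not re.fullmatch(r"[\d,\s*+\-/().]+", expression):
        raise ValueError("unsupported expression")
    return eval(expression.replace(",", ""))  # charset checked above

def answer(question):
    match = re.search(r"[\d,]+\s*\*\s*[\d,]+", question)
    if match:  # this question needs the calculator tool
        return calculator(match.group(0))
    return "no tool needed"

print(answer("What is 19,073*108,935?"))
```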

10. Context Window

Before, LLMs could only hold so much information from the conversation in their brain. After a few paragraphs, they would forget what was said before. Now, context windows can reach into the millions of words.8

11. Memory

If every time you start a new conversation your LLM doesn’t remember what you discussed in your last one, it’s as if you hired an intern to help you work, and after the first conversation, you fired her and hired a new one. Terrible. So LLMs now have some memory of the key facts of your conversations.

12. Scaffolding

This coordinates across many of the tools outlined above. For example, it forces the LLM to begin by planning the answer. Then, it pushes it to use tools to seek information. Then, it registers that information into its context window. Then, it uses chain-of-thought to reason with the data, and finally it replies.

13. Agents

With agents, you can turn scaffolding into a superpower. Instead of having only a few steps, you can have several AIs take lots of different roles, and interact with each other. One can plan, another records information, another calls tools, several others answer with the data, others assess the answers, others rate them, others pick the right one, which they send to the planner to go to the next step, etc.
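A toy agent loop in that spirit, with plain functions standing in for the separate LLM roles:

```python
# A toy agent loop: a planner proposes the next step, a worker executes
# it, and a critic decides when to stop. Plain functions stand in for
# separate LLM calls.
def planner(done):
    steps = ["gather facts", "draft answer", "review answer"]
    return steps[len(done)] if len(done) < len(steps) else None

def worker(step, notes):
    notes.append(f"result of '{step}'")

def critic(notes):
    return len(notes) >= 3  # stop once every planned step has run

def run_agent(goal):
    notes, done = [], []
    while True:
        step = planner(done)
        if step is None or critic(notes):
            return notes
        worker(step, notes)
        done.append(step)

print(run_agent("explain peak oil"))
```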

So these are the biggest unhobblings. How powerful have they been?

How Much Have Unhobblings Improved Our Algorithms?

Reinforcement Learning achieved an improvement over 100x (InstructGPT). Chain-of-thought: over 10x. Source for the graph.

As you can see, many of these provide gains from 3-100x in effective compute.


How Much Improvement per Year Then?

Let’s take 10x as the average improvement per unhobbling mentioned above. AFAIK, they were all released between 2020 and 2023, so in just four years, over a dozen unhobblings were described just in this article! If each one improved effective compute by 10x, that would compound to roughly 10^13, or about 1,800x per year! Obviously, this is not true because you can’t simply multiply the performance gains of all these unhobblings, but it gives you a sense of how impactful they have been.

The estimation of 0.5 OOMs per year from Situational Awareness sounds quite reasonable, even conservative given all this.9

Will we continue finding these breakthroughs? How much will AI improve overall in the coming years? And how does that justify salaries over $100M?


Not Sustainable ➡️ Abundant

2026-01-27 22:05:29

The solution to pollution is dilution.

That’s what people thought in the mid 20th Century.
Then, we realized it wasn’t.

I grew up in a world where fish in the sea were being decimated.
Where forests were burning.
Where oil was extracted at breakneck speed.
Where CO2 accumulated dangerously in the atmosphere.
Where cropland invaded forests and drove animals to extinction.
Where rivers became sewers.
Where cities sank as we drank their water tables.
Where garbage became mountains we just hid with a skin of mud.
Where plastics colonized the oceans and then colonized our brains.

That’s when Gaia appeared.

Gaia

Gaia, we were told, was weak. She was delicate. She had finite resources that humans were depleting at an ever-accelerating pace. She was a unique and beautiful blue marble to protect from the dangerous actions of her human parasites, which were spreading like an infection that had started gangrening her limbs and would eventually kill her.

If you have a world with finite resources, you want it to be sustainable. You need to protect these resources from depletion. You need to stop the disease, the gangrene. You need to halt humans’ propensity to keep growing and consuming.

That’s what sustainability is: Keep what is there.
It implies: Shrink your population, shrink your consumption.
The motto had changed.

The solution to pollution is reduction.

But then I started studying each one of these resources, and the role of humans in depleting them, and the more I learned, the more my intuition for the world changed.

The Dramatic Changes of the World

I discovered that, in the beginning, the atmosphere had no oxygen. Then, cyanobacteria and plants started releasing oxygen in a completely unsustainable way:

First, this oxygen oxidized the seas and the earth’s crust, literally changing the composition of everything we see today. Oxygen is a highly aggressive gas, and it attacks everything it touches. That red dirt everywhere on Earth? It wasn’t always red.

Pyrite (iron sulfide, FeS2) can’t form where there’s lots of oxygen

As oxygen accumulated, CO2 was catastrophically depleted.

Plants need CO2 to live, and yet 99% of it has been depleted from the atmosphere! How would we have reacted if we had seen this depletion happen during our existence? We would have panicked: CO2 is disappearing! We’re releasing too much of this toxic O2! This is all unsustainable!

Even things we consider immutable are changing all the time. When the Earth formed, days didn’t last 24h; they lasted as little as 4 to 10 hours! This means a year may have had over 2,000 days! And that’s after the proto-Earth and Theia collided, forming the Earth and the Moon!

This was a world where continents were nothing like today.

And this is in just 150M years!

And yet these geological changes aren’t all ancient. Sea levels were 120 m lower just 20k years ago!

Of course, that’s because at the time, a huge chunk of the Earth was frozen.

But that’s not the most frozen the Earth might have ever been. That could have been during its Snowball Earth period, about 600M years ago.1

The Earth is a world where species have disappeared in five mass extinctions already, in an ongoing cycle of destruction and explosion.

Dinosaurs came and went.

There have been mammals bigger than the T-Rex.

But humans have driven huge mammals to extinction.

Humans have witnessed a green Sahara turn dry, oceans come and go, rivers shift, forests appear and disappear…

And yet, despite all these cataclysmic changes, despite these brutal forces that completely transformed the face of the Earth, here we all are, with our forests and our animals and our oxygen and our CO2 and our magnetic field. The Earth is not a delicate ecosystem; it is an engine, a mechanism with enough sunlight and heat and water and atmosphere for life and humans to thrive.

In this world, the most beautiful thing has emerged: intelligence. First, prokaryotes, followed by eukaryotes, and then vertebrates, and little by little, the complexity of life and intelligence exploded, reaching mammals, apes, and now humans. With humans, brains were good enough that they started improving intelligence faster than evolution. First through culture, then societies, books, media, the Internet, and now AI. Humans now shape sand to capture light and think, so that they become God.2

From Chaotic Cataclysms to Controlled Engineering

In our ignorance, we had no clue how to optimize nature. But with our capabilities, we now can. The world is a chaotic system we can improve.

This difference is crucial, because a delicate system must be preserved, while a complex mechanism can be optimized. A system that must be preserved is one that has a limited amount of resources. That's what sustainability means: there's only so much we have, so we should make sure we keep as much as we can untouched. There's only so much we can use, so we must make do with what we have.

But that’s not the reality. The reality is that there is so much on Earth that we can get substantially more from it than we have.

We can see it with GDP per capita, which increases all the time, showing that our ideas get better and better, and they allow us to make more with less.

We do that while working less.

We produce more food with less land.

We generate more and more wealth with every watt.

Solar panels are getting better all the time.

Every two years, we double our capacity for intelligence, and this is accelerating with AI.

Transportation speeds get better.

We see it with every system in the world: If we are free to optimize it, it gets better, and we get more with less.

That’s what abundance is: We can create a world of plenty. We can take what exists and make much more with it. Our only limit is our intelligence.

Why, then, are so many people fearful of running out of everything, of exhausting the Earth?

The Fear of Ourselves

Part of it is our evolution: We evolved in a world of scarcity. That’s the other side of the coin: We can only get better because we were worse. As we evolved, we never had enough food, we never had enough shelter, enough water, enough safety. And so when we see resources, we crave them, we protect them, for fear that we will exhaust them—or worse, somebody else will take them from us.

Another reason is history: We used to love technological progress.

But then, we used it to kill each other.

WW1 was bad, WW2 was worse.

We discovered the power we held, and we grew scared.

And we saw it in the environment, too. As we followed the motto "The solution to pollution is dilution," we diluted all our pollutants in nature until we realized that it was not, in fact, the solution.

Sustainable ➡️ Abundant

Since then, we’ve overcorrected. That’s good! Humans work like a pendulum, overcorrecting over and over again until we reach the optimal point. That’s why forest area is increasing in Europe.

That’s why China might have reached peak emissions.

From a chaotic planet evolved life.
From life evolved intelligence.
Intelligence birthed humans.
Humans accelerated intelligence.
We created culture.
We created societies.
We created the Internet.
We are creating gods.

The Earth is no longer a chaotic system with limited resources that we barely understand and must protect to keep it sustainable. It is a machine that we understand pretty well, and that we're getting better at improving every day when we treat it as a system.

Fishery depletion is not a problem of eating too much fish. It’s a problem of eating too much wild fish.

CO2 emissions are not a problem of too much fossil fuel burning. They’re a problem of rapid temperature increases, which can be reversed.

Fresh water scarcity is not a problem of drinking too much. It’s a problem of getting the water from unsustainable underground wells instead of from the sea.

The Earth is not a stable goddess, it’s a system, and it has changed dramatically over its history—much more wildly than we’ve ever experienced.

As we mindlessly tinkered with the planet, our excesses surfaced these mechanisms. We thought the answer was “Don’t touch it!” But that lesson is outdated. “Sustainable” is defeatist because it says “I don’t know how this works, I can’t know, just don’t touch it.” It’s a bit like an old religion that tries to placate the gods.


But all this tinkering revealed the mechanisms of the Earth. Now, we know much better how it works. And that’s why we can finally optimize for abundance. We can engineer the world. We can have more food, more forests, more money, more people, more animals, more houses, more nature. But only if we strive for abundance instead of sustainability.

The solution to pollution is profusion.


1. Currently a hypothesis.

2. Solar panels are made primarily of silicon, which is basically sand. That's also the main component of transistors and chips, which make our computers. And we're using computers to build gods: artificial superintelligence (ASI).

AI in 2026

2026-01-24 21:02:15

Note: the last article updating on robotaxis (from two days ago) has new data that changes the conclusions. Read it here (premium).

Things are moving so fast in AI, and there is so much to digest, that it's hard to write a good article on it. But that's also why it's so important. Here's my update on the most important trends right now, covering who might win the AI bubble, how fast progress is going, how we're doing on AI self-improvement, the emergence of consciousness, and the impact on jobs.

Next up in AI:

  • Can algorithms keep improving?

  • Blockers to AI progress

  • Can we make AIs aligned to our goals, or will they kill us all?

  • How should we manage a world after AGI?

  • Is AI changing our culture?



At night, a group of friends is drinking and laughing at the bar, with its warm light flooding the street. They are enjoying their Friday night together, as they have every week for over a decade.

On the other side of the street, behind metal detectors, security passes, gated doors, and closed rooms, another group of people is huddled around a computer screen. They’re all still, listening as the AI researcher has an existential crisis:

They are summoning God from silicon. From fucking sand! The world is about to be completely upended! Nothing will be the same! Will we survive? I don’t know! Will I have a job? I don’t know! Will we be rich enough to enjoy life? I don’t know! How many planets will the great great grand-children of Sam Altman and Dario Amodei and Elon Musk own? What should we do? What if it’s not aligned? Do we really know? It’s so nice, but is it really? Or is it faking it so we release it and it turns against us? Are we really going to fucking release this thing into the wild?

Silence falls over the room and the group, murmuring, slowly disperses. One of the engineers takes his badge, passes the three security gates, and leaves the building. He looks to the other side of the street and thinks:

They have no idea.

AI Bubble?

AI is now so big that its mental structure can be seen from space:

The top white building does the thinking, while the bottom one contains the memory, controls, and other things. Source.

These are the revenue projections for OpenAI and Anthropic:

Anthropic’s optimistic forecasts are higher than the most optimistic ones from 2024!

And look at OpenAI’s! $100B in revenue! As a result, it’s planning this in terms of spending and investments:

OpenAI expects its cloud spending to reach hundreds of billions per year! It will have to raise tens of billions of dollars every year until 2030, when free cash flow might finally compensate for the cost of cloud compute. Staggering.

The danger, though, is that a lot of this demand is subsidized: A task might cost them a few dollars but nothing to you. This only works if our demand for AI keeps increasing and their costs keep decreasing. Will they?

Builders are certainly using it like never before to create more and more apps.

Who Wins in the AI Bubble?

In 2014, Google purchased 40,000 NVIDIA GPUs for $130M to power their first AI workloads. However, they quickly recognized the high cost of running those AI workloads at scale, so they hired a highly talented team to develop TPUs, focusing on making them significantly more performant and cost-effective for AI workloads than GPUs. This is the real sustainable first-mover advantage in the AI race that investors should focus on. Source.

This is one of the reasons I don’t have all my money invested in NVIDIA.1 I can see the demand continuing to grow tremendously, but I can’t see NVIDIA always being in the near-monopoly situation it has been in until now. More on this when we discuss potential blockers in a future article.

Rate of Progress

The key is the rate of progress. It’s been running very fast, but for how long can it be maintained?

In the article about compute, I explained how compute has been progressing by 10x every two years, and must continue doing so. And here we are: The new NVIDIA architecture, Vera Rubin, reduces token (“thinking”) costs by 10x over the previous architecture (Blackwell) and training costs by 4x.
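That growth rate compounds faster than intuition suggests. As a rough sanity check, here's a tiny sketch; only the 10x-per-two-years rate comes from the article, the horizons are arbitrary:

```python
# Compute growing 10x every two years compounds fast.
# Only the 10x-per-2-years rate comes from the article; horizons are arbitrary.
GROWTH_PER_2_YEARS = 10

def compute_multiplier(years: float) -> float:
    """Total compute growth after `years` at 10x per two years."""
    return GROWTH_PER_2_YEARS ** (years / 2)

print(compute_multiplier(1))  # ~3.16x in a single year
print(compute_multiplier(6))  # 1,000x over six years
```

In other words, sustaining the trend means roughly tripling available compute every single year.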

As a result of this and algorithmic improvements, models keep improving at an incredible rate. The cost per task has shrunk by 300x in one year, and scores on the very hard ARC-AGI-2 test2 are now up from below 20% a year ago, to 45% in November, to 55% now.

I’ve reported in the past when AIs passed the ARC-AGI test. Now they’ve passed ARC-AGI 2. This doesn’t mean AGI for the creators of ARC-AGI, though. Source.

Frontier AI capabilities have not just continued progressing. Over the last 2 years, they’ve accelerated!

The applications are crazy. This elite astrophysicist observed ChatGPT match his level on an obscure, unpublished question.

AI now solves math problems that were previously unsolved.

In Is There an AI Bubble?, I highlighted how important it is for error rates to decline to near zero to reach AGI, because then you can chain an infinite number of reasoning steps, effectively becoming a superintelligence. A good benchmark for this is the length of tasks AIs can complete with a 50% error rate. This is what that looks like right now:

But can we do better than a 50% error rate on 5h tasks? This paper claims so. It explains how to solve a million-step task with zero errors: break the task down into very tiny microtasks and have a series of agents vote on the solution to each one to ensure quality. The reliability comes from the process, not the model, which was a cheaper, older version of ChatGPT.
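To see why decomposition plus voting can work at all, here's a minimal sketch of the underlying probability argument. The voter counts and error rates below are illustrative assumptions, not figures from the paper:

```python
import math

def majority_error(per_vote_error: float, voters: int) -> float:
    """Probability that a majority of independent voters picks the wrong answer."""
    majority = voters // 2 + 1
    return sum(
        math.comb(voters, k)
        * per_vote_error**k
        * (1 - per_vote_error) ** (voters - k)
        for k in range(majority, voters + 1)
    )

def chain_success(per_vote_error: float, voters: int, steps: int) -> float:
    """Probability that every microtask in a long chain is resolved correctly."""
    return (1 - majority_error(per_vote_error, voters)) ** steps

# A single model that errs 10% of the time almost surely fails a
# million-step task, but voting drives the per-step error low enough
# for the whole chain to survive.
print(chain_success(0.10, 1, 1_000_000))   # effectively 0
print(chain_success(0.10, 31, 1_000_000))  # ~0.99
```

The exponent is the whole story: chaining a million steps punishes any per-step error, so you need a mechanism, like voting, that pushes it toward zero.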

But one AI stands out, and it might well be AGI.


Updated Evidence on Robotaxis

2026-01-22 22:32:17

This article was edited a few hours after posting, on news from Tesla’s Unsupervised Driving.

This is a quick premium note adding more information about the robotaxi market, which I analyzed in these three articles. In those articles, I concluded that:

  1. Waymo is growing too slowly, probably because each car is too expensive.

  2. Tesla’s Robotaxi will likely sta…


Is the World a Better Place After the Capture of Maduro?

2026-01-20 03:39:57

If the US’s capture of Venezuela’s leader, Maduro, was not for its oil, what was it for?
Some people say the attack was great, others terrible. Who is right?
How has the world changed since then?
That’s what we’ll answer today.

These were the US’s main adversaries in mid-2024:

This is an AI-generated map for illustrative purposes only. Don’t take it literall…
