Uncharted Territories

By Tomas Pueyo. Understand the world of today to prepare for the world of tomorrow: AI, tech; the future of democracy, energy, education, and more

If I Were King, How Would I Prepare for AI?

2025-11-28 00:58:41

Yesterday, we discussed how societies might collapse as AI replaces jobs… unless we do something about it. What could we do?

What if we were the ruling monarch of a medium-sized developed country? This eliminates all the politicking, and helps us focus on the measures we need to take.

Mel Medarda from Arcane, weighing how technology will impact her society.

Read more

AI: How Do We Avoid Societal Collapse and Build a Utopia?

2025-11-26 21:02:43

In the previous article, we saw that we’ll eventually live in a utopia, a world of full abundance and without the conflicts over scarce resources that have plagued humanity throughout history.

But on our path to get there, we will face a difficult transition, a social unraveling due to fewer people working while those who do work make millions. This will usher in an inequality that will push taxes up, but the makers will have freedom of movement and will want to avoid taxation. This will lead to social conflict.

How will we get through to the other side? This is what we’re going to answer in this article.

The World Just Before Full Abundance

It’ll help if we fast forward to a world just before full abundance.

In this world, all human work has been automated. Machines are working for us to eliminate the last few scarcities:

  • Energy: Machines are still building huge fusion reactors and plenty of solar panels in space, beaming energy to Earth.

  • Land: Machines are reshaping Earth. They are making deserts habitable, reshaping coasts for more beaches, creating new seas, creating new cities underwater, building taller and taller buildings, erecting new polar cities, constructing the first few space habitats. They’re also terraforming Mars.

  • Raw Materials: Machines are mining deeper and deeper parts of the crust. Spaceships have brought asteroids from the asteroid belt to the Earth’s orbit to harvest them.

Underwater city
An O’Neill Cylinder space habitat, concept by Blue Origin. Source.

Why will these things (land, energy, raw materials) be the last ones to be fully automated? Because it’s harder to move atoms than to move bits. Even a superintelligence will have a hard time making all these things at once, and will have to prioritize them and their downstream applications. For example, if it can access one trillion tons of iron, and a few million tons of rarer minerals, how much of that does it allocate to building more compute vs more energy vs more land?

Also, intelligence will not be infinite until there’s infinite energy and infinite compute, which will also need plenty of raw materials and land. So the scarcity of intelligence might not even be completely eliminated until more of these inputs are sufficiently abundant.

Assuming AIs still serve humans, how will they prioritize? They will need a signal of what matters most to humans. How will humans convey that? Through money.

Money is like a vote. You get to spend it however you want, and that determines where companies put their efforts. If everybody wants a second home at the beach, they will bid them up, prices will go up, and AIs will know to build more coasts. If everybody wants more gold jewelry, it will know to mine for more, or maybe to invest more in energy to transmute other elements into gold. AIs will be able to prioritize more energy for gold transmutation against more machinery production for mining or more machinery for beachmaking.

So it’s very unlikely that we’re going to get rid of capitalism. We need the price signal to convey the optimal allocation of capital.
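To make the mechanism concrete, here is a toy sketch of spending as an allocation signal, with machine capacity allocated in proportion to where the money goes. Every number and product in it is invented for illustration:

```python
# Toy model: consumer spending as an allocation signal.
# Every number and product here is invented for illustration.

spending = {              # dollars consumers spend on each end product
    "beach_homes": 9_000_000,
    "gold_jewelry": 3_000_000,
    "compute": 6_000_000,
}

machine_hours = 1_000_000  # total machine capacity available

# Allocate capacity in proportion to where the money goes
total = sum(spending.values())
allocation = {good: machine_hours * dollars / total
              for good, dollars in spending.items()}

for good, hours in allocation.items():
    print(f"{good}: {hours:,.0f} machine-hours")
# beach_homes: 500,000 machine-hours
# gold_jewelry: 166,667 machine-hours
# compute: 333,333 machine-hours
```

A real allocator would follow prices and marginal returns rather than simple proportions, but the signal is the same: spending tells the machines what to build more of.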

But in that world, where humans are not working anymore because they’ve been fully automated by AIs more intelligent than themselves, how do you decide how much money each person should have?

Today, capitalism works because it says: Your claim on scarce resources is equivalent to how much you contribute scarce resources to the world. The more you add, the more you can consume. So if your skills are extremely valuable, you’re going to have a huge salary. For example, if you are an entrepreneur and build something that the entire world covets, you’re going to make a lot of money. If you already have money and you place it in the right places so that your investment is multiplied many times over, you will make more money. That’s the “contribute to the world” part of the sentence. Then, you can have a “claim on scarce resources” with that money: Have more houses, consume more energy, travel more, pay for other humans to serve you (home assistants, coaches, drivers, service employees…).1

Or, even simpler: What you can take from the world is capped by what you give to it.

In a world where AIs are much more intelligent than any human (including dexterity, so physical labor is also automated), the “contribute to the world” part doesn’t exist anymore.

So you need another way to distribute scarce resources that is independent from what you can add to the world.

The famous Communist slogan “From each according to his ability, to each according to his needs” reveals its own naiveté: It’s beautiful, but it disregards that humans are lazy. What motivates most people to contribute to the world is getting something in return, so if there’s nothing, they don’t work.2 And if they can consume endlessly, they will, so the “needs” part just keeps growing.

But in a post-scarcity world, you don’t need to produce anymore; you can just consume. However, you’ll want to consume everything (why not 50 beach houses?), so there must be a limit to your desires. How can we limit people’s consumption while unshackling it from their production?

With a UBI, a Universal Basic Income: Give everybody the same amount of money (the same claim on scarce resources), because now we don’t need an incentive to work, and every human is fundamentally equal in terms of value.

Imagine that each person gets $10,000 per month as UBI. Food, transportation, energy, and most objects cost next to nothing. Humanoid support robots cost $1,000 apiece. A five-bedroom house in the suburbs of Paris costs $100,000, while an apartment in the middle of Manhattan costs $100,000,000. In other words, with this UBI, you can get nearly everything you want, with the only exception being the stuff that is still scarce.

Where does the money come from? Let’s imagine for now a world government that is coordinated by machines. It takes in what you spend, uses it as a pricing signal to know what to build, and then redistributes that cash in the form of UBI. If need be, it can print money,3 which would cause inflation, but that’s not too problematic, as only the prices of the truly scarce stuff would go up, and the rest not so much. That’s actually what you want.

Now, UBI is not quite according to their needs. Some people have more needs than others—for example, disabled people will need more support. So we might want to correct a UBI with some other income that compensates for this type of need. Let’s call it a disability benefit. For example, an additional $2,000 per month to get more androids to help or whatever.

Also, that world might be one where fertility is through the roof, because the cost of having children and childcare has disappeared, but the emotional benefit is still there. In that case, after enough generations, and before we colonize space, we might want to limit fertility. For example, by only bestowing UBI when children become adults, or by actively penalizing (taxing) childrearing. Conversely, if fertility is too low, we might want to have an incentive to procreate.

Humans Will Still Make Money

Even in that world, however, it’s likely that humans will still transact with each other. Today, we pay more for the imperfect objects made by an artisan. We prefer playing videogames against other humans rather than against AIs. Even if robots became much better drivers than humans, F1 drivers would still be humans, because we want to admire other humans.

It’s likely that in a future of superintelligences, we might still want to pay for services from other humans: art, artisans, massages, companionship…

People might want to work a little bit, especially if they enjoy their job, to top up their UBI.

OK, so we’ve gone back in time a bit from utopia, and we’ve realized that we still have jobs and we still need to allocate capital, so we still want money and some sort of capitalism. But we also have UBIs that untie production from consumption, enabling a pretty cool utopian world where everybody can live comfortably. We also probably need other incomes for things like disability benefits, and we might need to incentivize or disincentivize some behaviors, like having children. This will require keeping some sort of state and taxation.

Now let’s walk back a bit further.

Social World

Now we’re in a weird world, where superintelligence is here, and for some time it’s been infinitely superior to any human mind, and yet we haven’t been able to solve all scarcity because we don’t agree on what we want in non-economic relationships.

Some people have opinions on what others should be able to do. Should abortion be legal? Should you be able to marry your cousin? Should women wear a veil at all times on the street? Should you be able to insult somebody else? Should we build taller buildings in San Francisco? Should streets have more greenery? More traditional architecture? Should we remove all limits on immigration? Should drugs be legal? Suicide? Euthanasia? What should children learn? Are artificial wombs acceptable? What about human genetic engineering?

Status, control, identity, moral order, fairness, meaning, self-actualization… These are all things that AIs won’t be able to solve with infinite abundance because they’re inherently relational. They rule our interactions with each other, and no amount of physical abundance will solve that, because humans will never be infinitely abundant, and people will always have an opinion on how others should behave.

For many of these questions, different beliefs are incompatible. Sometimes, there is a strictly optimal solution (eg, artificial wombs can solve fertility issues) and hopefully a superintelligence will help us see it. In other cases, there isn’t a solution: What type of street is more beautiful? When does life start (which determines your stance on abortion)?

The only way to solve this is to have different societies live in different places under different rules. Maybe there is some sort of global government (ideally not4), but there will definitely be different jurisdictions.

The moment you have those, you have problems, because each jurisdiction has a different claim on scarce resources. Eg, coasts are more valuable than landlocked areas, some have easier access to good mining resources than others… So polities will fight each other to claim better resources, and potentially to impose their views on others.

And of course, some jurisdictions will force superintelligences to manage things in a suboptimal way. For example, NIMBYs might preempt more housing, which would continue driving prices through the roof in some places, making housing unaffordable.

This, in turn, will cause migration, as some people will want to go from poorly-managed places to better-managed ones, or to places that better represent their values. For example, some areas with more freedom might attract people from stricter jurisdictions. But immigration is very problematic in a world of UBI, because the point of UBI is to give an equal claim on scarce resources to all the people in a jurisdiction: The more people you accept, the smaller the claim for each. This is not always true today, as in many cases immigrants produce more than they consume. So limits on immigration will have to continue existing, yet unlike today, there won’t be an economic case for accepting migrants.

OK so we’ve reinvented jurisdictions, fights between them, immigration, and limits to it.

Now, we’re ready for the crucial step.

Humans in the Loop

Now let’s go to a world where human work is not quite fully automated yet: say, 90% of jobs are automated, but 10% of humans still toil alongside the AIs and robots. What does that look like?

Since the majority of the population is unemployed, we need a UBI. But we can’t make do with just that, because then why would the remaining 10% still work?

Let’s make it concrete. Imagine that the remaining jobs are the hard-to-automate ones: nurses, electricians, janitors, CEOs, entrepreneurs, and elite computer scientists and designers. There’s a clear distinction between two groups:

  1. Service people, who work physically and locally with their bodies, like nurses or plumbers. Their services aren’t tradeable (they can’t export them) and don’t scale.

  2. Automators: CEOs, entrepreneurs, software developers and designers, who will work on further automating the economy. Their jobs scale immensely and they can work from anywhere.

1. Service Jobs

The service people will be needed and available everywhere in the world. These will be the fundamental middle-class jobs. Unemployment will be so high that many will try to get into these trades, driving wages down—as long as these wages are much higher than UBI.

So now we have a problem with the UBI amount: Too much, and we don’t get the service workers society needs, because UBI is enough to live comfortably. Too little, and most people starve, because they can’t find a job.

Let’s imagine that UBI is fixed at this point at $1,000 per month. This doesn’t sound like much, but remember that we’re in a world where most services are now automated!

  • A full lunch might cost $1

  • A transportation ride might cost $0.01

  • An android might cost $3,000

  • An apartment in a low-cost area might cost $200/month

  • Electricity, phone, and Internet service might cost, all together, $30/month

$1,000 would be plenty!
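As a quick sanity check of that claim, here’s the budget under the assumed prices above (the consumption quantities are my own guesses):

```python
# Sanity check of the $1,000 UBI under the assumed prices above.
# The consumption quantities (30 lunches, 60 rides) are my own guesses.

ubi = 1_000
monthly_costs = {
    "lunches": 30 * 1.00,   # one $1 full lunch per day
    "rides": 60 * 0.01,     # two $0.01 rides per day
    "rent": 200.00,         # apartment in a low-cost area
    "utilities": 30.00,     # electricity, phone, and Internet
}

total = sum(monthly_costs.values())
print(f"Spent: ${total:,.2f}, left over: ${ubi - total:,.2f}")
# Spent: $260.60, left over: $739.40
```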

So if an electrician makes $10,000 per month, she’ll be rich. Even with $5,000, or $3,000, it could be enough to incentivize people to get into these trades.

Conversely, if UBI is $10,000 and the monthly income of a nurse is $3,000, a couple of things might happen:

  1. This huge UBI would mean people consume a lot, and scarcity would still be hit, bringing prices up.

  2. This would be painful in the short term for all service workers if their income can’t adjust quickly to inflation.

  3. Because $3,000 is not much, people might not think it’s worth it to become a nurse, and there might be a shortage. If wages are set by the free market, they would shoot up until they’re high enough to motivate newcomers.

2. Automators

The mechanics of automators are completely different. If they automate a part of the economy successfully, they can get millions of customers paying them every month, so they can make a fortune from anywhere in the world.

Knowing this, a ton of people will try to automate parts of the economy! The competition will be brutal, and most people will actually not succeed. But those who do will be super rich.

You actually want them to be super rich! That’s the siren call that attracts others into this market to copy the product, drop prices, and compete profits away. The more millionaires are minted this way, the more competition appears, accelerating automation for the benefit of all.

To make this tangible, let’s take legal services. Imagine somebody makes a flawless AI that is better than all existing human lawyers. You want that company to be worth billions, and plaster pictures of the amazing entrepreneurs behind it across social media. One entrepreneurial lawyer who has lost her job might see that success and think “I know this tool, it doesn’t work so well in this part of law that I know well, I can do better.” So she partners with a handful of designers and software developers, and together they build a smaller AI that is better in this particular aspect of law. They make millions.

Another ex-lawyer sees this and thinks: “I can do better…”

So the key here is to keep the current mechanisms of compensation intact, to push automation ever further.

Since they’re motivated by making fortunes, though, you can’t tax those fortunes away: Automators are mobile, and they’ll go elsewhere to build their AIs. So how do you tax them, and how much?

The Role of the Government

The problem we’re facing here is that 10% of the population would sustain the remaining 90%.

Today, the share of the population that works is ~45% in the Western World.5 We’re talking about nearly doubling those who don’t produce and dividing those who do by more than 4, making the system roughly 8x less viable than it is today. Not great.

Just to give you orders of magnitude, I asked Grok what the free cash flow of NVIDIA, Meta, Alphabet, Amazon, Apple, Microsoft, and Broadcom is going to be in 2025, and it estimates $450B, which is about $110 per US resident per month. Not enough to live on.
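For the curious, here’s the arithmetic behind those two claims, with the ~340M US population being my assumption rather than a figure from the text:

```python
# The arithmetic behind the two claims above.
# The ~340M US population is my assumption; the rest are the article's numbers.

# 1) Non-workers per worker: today vs. a 10%-employed world
today = 0.55 / 0.45           # ~1.2 non-workers per worker
future = 0.90 / 0.10          # 9 non-workers per worker
print(f"{future / today:.1f}x more dependents per worker")  # 7.4x, call it ~8x

# 2) Big Tech free cash flow spread across the US population
fcf = 450e9                   # ~$450B estimated 2025 free cash flow
population = 340e6            # ~340M US residents (assumption)
print(f"${fcf / population / 12:,.0f} per resident per month")  # $110
```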

And what about regions that don’t even have these companies to tax, like European countries or Japan? They’re also smaller than the US or China, so they have less leverage to compel companies into taxation.

This is why Europe is playing with a tax on consumption of digital services, since Europe is not producing much AI, but it’s consuming lots of it. It’s also why it’s exploring global taxation (taxing European nationals abroad, something the US already does).

This is also why some people have suggested taxing robots: If we tax these things that generate so much abundance, maybe we can pay for human consumption! But this is stupid because:

  • It’s impossible to enforce: What qualifies as a robot? A humanoid robot? What if it works half as fast as a human? 10x as fast? What if the robot doesn’t look human? What if it’s a big machine that does the work of 10 humans? How would you even know it replaces 10 jobs, when it just makes existing employees more productive? What if it’s a digital agent? Or a series of millions of them, interconnected?

  • It disincentivizes the very thing that we want! We do want as much automation, as fast as possible, to bring all costs down! If we tax them, we increase their cost, and reduce the return on their investment. We will build fewer of them. They will remain more expensive, and will only replace a few tasks.

What about the automators? They will move to cheaper jurisdictions, many of which are purposefully targeting digital entrepreneurs with low taxes. These include countries like the UAE, Singapore, Estonia, Malta, some Caribbean islands, Special Economic Zones… Some people are already moving, and I expect more to follow.

The main source of capital that can’t move as easily is land. And most developed countries are endowed with great, valuable land. So you can expect that land to be taxed much more heavily than it has been in the past. This would have the benefit that people would try to optimize its value in a way that isn’t done today. I think the cleverest way to achieve this is through Harberger Taxes. Other types of assets that could be similarly taxed include intermodal transportation hubs, factories, and the like.
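For readers unfamiliar with the mechanism, here’s a minimal sketch of a Harberger tax: owners publicly self-assess their land’s value, pay an annual tax on that value, and must sell to anyone who offers that price. The names and the 5% rate are illustrative assumptions:

```python
# Minimal sketch of a Harberger tax on land. The names and the 5% rate
# are illustrative assumptions, not a concrete proposal from the article.

from dataclasses import dataclass

TAX_RATE = 0.05  # annual tax on the self-assessed value


@dataclass
class Parcel:
    owner: str
    self_assessed_value: float  # the owner declares this publicly


def annual_tax(parcel: Parcel) -> float:
    # Rule 1: you pay taxes on whatever value you declare.
    return parcel.self_assessed_value * TAX_RATE


def force_buy(parcel: Parcel, buyer: str, offer: float) -> bool:
    # Rule 2: anyone may buy the parcel at its self-assessed price.
    if offer >= parcel.self_assessed_value:
        parcel.owner = buyer
        parcel.self_assessed_value = offer
        return True
    return False


plot = Parcel(owner="alice", self_assessed_value=100_000)
print(annual_tax(plot))                       # 5000.0
force_buy(plot, "bob", offer=120_000)         # Bob values the land more
print(plot.owner, plot.self_assessed_value)   # bob 120000
```

The two rules pull against each other: declare a high value and you pay more tax; declare a low one and anyone can take the land from you cheaply. That tension pushes owners toward honest valuations and pushes land toward whoever can use it best.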

Another way governments could approach this is with a wealth fund: Take taxes today and invest them into AI companies, using their dividends afterwards to fund UBI. This is a bit like what Norway or Saudi Arabia are doing today. We could give every newborn in a country an endowment invested in all the companies in the stock market, and they could use the proceeds as they please.
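As a rough sketch of what that endowment could look like (the $10,000 grant and 5% real return are my assumptions, not the article’s):

```python
# Rough sketch of the newborn-endowment idea. The $10,000 grant and the
# 5% real return are assumptions, not figures from the article.

endowment = 10_000        # invested at birth in a broad market index
real_return = 0.05        # assumed annual real return

value_at_18 = endowment * (1 + real_return) ** 18
print(f"Value at age 18: ${value_at_18:,.0f}")         # ~$24,066

# If only the returns are spent, the principal throws off income forever
monthly = value_at_18 * real_return / 12
print(f"Dividend-only income: ${monthly:,.0f}/month")  # ~$100
```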

The problem I see with this is that “visionary” and “fiscally disciplined” don’t belong in the same sentence as “government”. I don’t foresee any party pushing for this. Yet.

What’s the Most Likely to Happen, Unless We Do Something?

We’re about to enter an even more chaotic time: awe-inspiring AIs delivering amazing services for a fraction of the current price while people lose their jobs, a few automators get rich, and governments in the middle have to redistribute with something like a UBI, handicapped by the automators’ mobility. So what can we do?

I think we’re already on our way to this world, both its problems and its solutions.

In an interview6, Tyler Cowen said something like:

I used to think that the solution to automation was going to be UBI, but now I’m not so sure. I changed my mind in 2009, when an obvious solution to the problem of the real estate bubble burst was to help those that had bad mortgages. But people hated that. They said: ‘Why would this person, who clearly didn’t manage their money well, benefit from help with their mortgage, and the ownership of a new house, when I managed my money better, but I wouldn’t get that benefit? That’s unfair!’ So we ended up just bailing out big corporations. That led me to think that we people won’t accept a UBI.

So what are we going to get?

Disability benefits. Retirement benefits. Earned income tax credits. Child tax credits. Homeless care. Free education. Free healthcare. In a way, we’re already on the path to UBI; we’re just calling it by different names and making it a bit more complicated.

I don’t think people will reject UBI. UBI is a very simple thing to sell: Everybody gets the same amount of money every month, no matter what. It’s fair, and it eliminates the biggest downsides in life. You’ll always have a cushion for any fall; you can take risks.

But I think Cowen is spot on in his interpretation that we’re already on our way to UBI under a different name. It’s just less fair and more complicated right now.

So governments need to simplify these entitlements and make them fairer. For example:

  • Retirement benefits are too high in many countries today.7 They need to go down.

  • The retirement age must increase: 65 year old people can still produce.8

  • Free or highly subsidized university and graduate education is unfair. Students should pay for it.9

But another key piece of the puzzle is that all costs need to go down. If some things become dirt cheap while others remain super expensive, the expensive things will become the lion’s share of people’s expenses, and they will feel miserable because they can’t afford them. This is happening today in high-quality healthcare and education, and especially in real estate, across many countries but especially in the US. This won’t fly: If you can afford everything but shelter, you’re still going to be irate. So governments must obsess about reducing the cost of these services.10

This suggests how the world will evolve in the coming years, if an AI superintelligence doesn’t kill us all, and the speed of job destruction is faster than that of job creation:

  • Every year, we will have more automation.

  • This will eliminate jobs.

  • Some people will reinvent themselves in the new economy: influencers, AI entrepreneurs, and the like.

  • Others will change trades.

  • Governments will increase entitlements, converging towards UBI, in name or in function.

  • This might come forced by socialist voting, or by visionary leaders who see the writing on the wall.

  • To fund this, governments will issue debt, print money, and try to tax automators.

  • This will generate inflation in jurisdictions that can’t easily tax automators.

  • Automators will move to low-tax jurisdictions to avoid this taxation.

  • Lots of new millionaires, untethered from any particular land, will be minted, living in the jurisdictions that welcome them. More and more jurisdictions will compete for them, and the ones that emerge now will have an advantage later.

  • Automator communities will emerge to push for their rights, which include low taxation.

  • Countries will react by improving their tracking of citizens, inside and outside their countries, to know how much they have and how much they make, and try to tax them even from abroad. Conflict between countries will be intense: They’ll all compete for the same tax dollars.

  • Countries will be torn between existing populations that don’t produce much but demand UBI or equivalents, and automators who demand low taxes.

  • Taxation will have to target the consumption of automation somehow, less so its production.

  • Governments with strong automator ecosystems like the US or China will have a strong advantage and more stability.

  • Governments with other benefits (like countries with high standards of living, safe streets, beautiful architecture, strong social life) might also have advantages to attract and tax automators, and will be able to tax their assets, too.

  • Governments will have to work to lower the cost of the most expensive items, especially real estate, education, and healthcare.

  • Over time, the more automation kicks in, the cheaper everything is going to be, making everything affordable.

  • In parallel, UBI (or effectively similar redistribution schemes) will keep growing, until they’re high enough that everybody can live comfortably.

  • Developed countries will close themselves to immigrants. The more automation there is, the less valuable new workers are, and the more expensive their entitlements become.


Based on all this, what would I do if I were the leader of a developed nation that is not the US? I’ll discuss this in a premium article tomorrow. After that, in AI, I’ll cover:

  • How much further can AI algorithms take us? (premium)

  • The potential blockers to AGI: electricity? Data? Something else? (premium)

  • The geopolitical ramifications: Who will win, the US or China?

Subscribe to read all these AI articles, plus others upcoming about Argentina under Milei, how real estate is the source of all problems, and more.

1

This system doesn’t always work perfectly, and that’s why the field of economics is important, to correct for all the times it goes awry.

2

The compensation doesn’t need to be monetary. It can be status, simple gratitude from recipients… But they need to get something in return. Monetary compensation is just the simplest, clearest signal, because it’s scalable, hard to game, fungible, scarce, divisible, and portable, and it carries a clear claim on scarce resources…

3

I.e., create money from thin air. Press a button and generate more money.

4

A global government is extremely dangerous, because it has a single point of failure. If it’s taken over by bad actors, the result is infinite misery forever. Better to have different countries that can act as checks and balances on each other.

5

The employment rate is the share of 15-65 year olds who work, and that’s ~70%. The share of the working-age population over the total population is about 65%. Multiply the two and you get ~45% (0.70 × 0.65 ≈ 0.45).

6

I think it’s this one where he interviewed Sam Altman. Cowen is so good that sometimes he knows more than his interviewees on some aspects in their fields, so he gets to share insights as interesting as his guests’.

7

I will write about this. But Spain and France are two examples of ridiculously generous pensions.

8

If they can’t, they should get disability compensation, not retirement.

9

That is, if Bryan Caplan is right in The Case Against Education that most higher education is signaling. The portion that isn’t (eg STEM) can be financed privately if its returns are really that high, although I’m more amenable to public financing of this type of education.

10

Caveat: Real estate costs might drop naturally. But that might take too long.

When AI Takes Our Jobs

2025-11-21 21:03:39

The last premium article covers how much compute will increase in the coming years, to assess how close we are to AGI. Read it here.


2035, New York.
The doorbell rings.

“Can you open the door?” says Alice.
Silence.
“NOAH!!!”
A lazy and guttural voice emerges from the basement: “I’m busyyyyyyy!”
“Playing videogames?! Go get the door!” commands his mother.
Silence.
“Ughhhh!!”

Alice opens the door. The android delivers two packages: One is a huge box with the words Optimus and Tesla emblazoned on it. The other is a pink slip.

She has to sit on her sofa for a minute. How is this possible? Her job as director of performance marketing was going well: She had replaced her entire team and was single-handedly running the advertising campaigns across Google, Facebook, Instagram, LinkedIn, TikTok, ChatGPT, and Claude. She had set up a pipeline to generate images and copy, craft hundreds of ad variants from them, target audiences, set prices… All of it automated. She had been making millions for her company!

How is she going to find another job with 20% unemployment? How will she be able to pay rent? Support her son who never found work? Pay for the installments of the Tesla android that was just delivered?


AI is taking over the world, but we’re acting as if it’s just another trend to monitor. It isn’t. It’s everything. We need to prepare.

If we survive superintelligence, we might lose our jobs. What will happen then?

This is from the US FEDERAL RESERVE!! Source.

Will we end up in a world of bounty, where all scarcity has ended and we can pursue our dreams assisted by robots?

Or will it lead to a world of inequality, where the elites swim in abundance while the masses starve?

Most economists think this won’t happen in the future because it didn’t happen in the past. But it actually did: The Luddites were right to oppose automation, because they lost their livelihoods, and those jobs never came back.

The Swing Riots, the largest wave of protest in England’s history, saw workers attack threshing machines, destroy barns and workhouses, burn ricks, and maim cows. Source.

It was mostly not them but their descendants who benefited from automation. The difference is that now the speed of automation will be on steroids, while AI will hinder the creation of massive new sources of jobs. More job destruction and less creation.

Another argument against this is: “If AGI really happens as you say, we would see GDP per capita grow 5%, 10%, maybe 20% per year, and that’s impossible, it has never happened.”

Between 1934 and 1944, GDP per capita grew an average of 7.2% per year, doubling in 10 years (rule of 72: 72/7.2 ≈ 10 years to double). Of course, it was an exceptional time. But isn’t creating gods an exceptional time?

Also, if you told somebody in 1700 that some economies would start growing at 2% per year per capita for the following 250 years, they would have laughed. And yet…

The strongest argument against a major upheaval that I know of comes from Tyler Cowen, who would say something like “The world after AGI won’t look too dissimilar from ours, because there are many other limiting factors to development. Whenever something stops being a bottleneck and its limitations are released, the process improves until the new bottlenecks emerge. When we unleash intelligence, all the places where it’s the limiting factor to development will disappear, and the remaining limiting factors will take over. For example, housing regulations will still limit the number of homes we can build, so real estate prices won’t crash. FDA regulations will continue making new drug discovery slow. NEPA (environment) reviews will still take an eternity, etc.”

This is why economists predict a slight increase in GDP growth of ~0.5% annually, while AI practitioners expect more like 3-5%.

But why are regulations still in place? Because interest groups care more about them than the broader public, so they invest a lot of time and resources to keep them in place, or to add even more. It takes forever for the rest of society to react and redress these inefficiencies. But in a world where lobbyists, political influencers, communicators, community managers, and all the people who keep the current regulatory status quo can be automated, will these regulations remain in place as long as today, or will they fall faster?

In other words: In the early 1800s, we released a bottleneck to growth through the Industrial Revolution, and we’ve lived in that unleashed world ever since. Why can’t we release a new bottleneck now, with the help of our gods?

So it’s very possible that we do end up losing massive amounts of jobs to AI. Maybe we don’t, but maybe we do. We need to be prepared for the case in which we do, meaning we need to face that scenario now. If it does happen, it might be the biggest social fight of our lives. This is what we’ll do in this article: Explore how we can structure society in a world after superintelligence, if most human work has become irrelevant.

Scarcity in a Post-Scarcity World

Most economic activity, most politics, and most conflicts in the world are about scarce resources:

  • The economy handles housing, energy, food, computers, travel, entertainment, mining, healthcare, education…

  • Politics try to channel the economy, and also redistribute the wealth generated by the economy, for a fairer, safer, and happier world.

  • Conflicts emerge when two people or groups want the same thing, whether it’s land, money, status…

In a world of superintelligence, nearly all these sources of scarcity will have disappeared. You can boil down most scarcity today to the scarcity of human labor, land, energy, and raw materials, since you can combine these things to make anything else.

  • Human labor will have been replaced by AI.

  • With enough intelligence, energy can be made nearly free and infinite through fission, fusion, and renewables—on Earth or in space.

  • With enough intelligence and energy, raw materials can be better mined, better recycled, transmuted, or mined from outside of Earth, to be made nearly infinitely cheap and available.

  • Land is actually dirt cheap with the current population. We have plenty. What’s expensive is living in specific points of the planet, mostly cities and coasts. Without the need for work, cities become less useful. With infinite AI entertainment and the ability to live anywhere with your friends, demand for cities might fall. And the coastline can be dramatically increased, as Dubai has shown.

  • And if we need more land, we can always build O’Neill Cylinders or terraform other planets like Mars.

In such a world, we won’t need money, we won’t need taxes, we won’t need redistribution, because robots will just be doing everything we want.

We might still have some scarcity: a limited number of humans to have relationships with, or against whom to fight for status, for example. This might justify some sort of currency. But this type of economy, where scarcity is just other people’s feelings and attention, is not one facing a crisis over how to distribute scarce physical resources.

So the problem is not what we do then—that will be a utopia—but rather what we do in the interim, while robots have replaced only some people’s jobs.

Scarcity in a Few Years

Now instead of fast forwarding to a distant future, let’s go forward just a few years—for example, a year or two after AGI, but before Superintelligence, when AIs can do all human tasks (including physical ones with robots), but don’t actually do them yet. What does that world look like?

AI Makers

The companies building AI services and robots will continue automating parts of the economy. They will partner with business leaders to give them tools that can do in minutes what their employees used to do in weeks. Maybe senior executives won’t need to hire dozens of analysts like before, and will instead orchestrate their work with a dozen agents, focusing their core value on dreaming up new strategies for the company’s growth. Foremen won’t need a crew; they’ll just coordinate dozens of robots, leaving them time to explore how to do a better job, faster, cheaper, with a more beautiful outcome. Customer service managers will check the performance metrics of their legions of call center agents to tweak them with the AI company that provides them. These business leaders will fire hundreds of thousands of employees. This will happen to auditors, consultants, lawyers, customer service representatives, financial analysts, copywriters, graphic designers, translators, insurance underwriters, loan officers, real estate listing agents, procurement analysts…

A dozen AI companies with a few dozen employees each will emerge to specialize in each of these disciplines, making their owners and employees rich.

Within these industries, highly productive humans will become even more productive thanks to AI tools, so they’ll keep their jobs and even get huge wage increases. Meanwhile, low-productivity people, who are less competitive against robots and agents, will lose their jobs. Maybe that’s why new graduates are not finding jobs today. Existing employees who are not very productive might keep their jobs longer, but as the tech gets better, either their companies will get more efficient and lay them off, or new AI-native companies will replace them, accelerating inequality.

Some of those without a job might take their destiny into their own hands and build something with AI. Others will try to become influencers, and the supply of content (both with and without AI) will skyrocket. Most of them won’t be able to make a living that way.

Others will try to retrain into the jobs that seem safest, like nurse or electrician. Many won’t succeed: Even in the past, changing industries was hard. Imagine now, when you’ll have to compete with AI agents. These people might reach for jobs that pay less but fulfill them more, like coach or pilates teacher. Others will simply downgrade their jobs and income.

The first waves of automation will go for new graduates and low-skill individual contributors. But the next wave will be middle managers, like Alice above. Little by little, the middle class will hollow out.

A Million Mamdanis

The more that happens, the more people will sour on the economic system.

And they might elect Mamdanis: politicians like the elected mayor of New York City, whose socialist program sounds very nice but will backfire. Taxes will increase on the remaining workers and entrepreneurs, who are now making more money—but these people are mobile. They will look for the best jurisdiction to minimize their taxes, whether that’s Texas, Puerto Rico, Dubai, or Singapore.

The more they escape, the more those who lost their jobs will be angry at the existing inequality, and the more they might vote for socialism, which will be a short term gain for long term pain, accelerating the cycle of inequality and social erosion. The more these governments raise taxes, the more AI companies, entrepreneurs, and employees will move away.

So tax revenue will go down, but socialist governments won’t be keen to reduce entitlements in a world of increased inequality, so they’ll either issue more debt or print more money. While that will increase the value of Bitcoin and gold, it will cause inflation, balloon the cost of servicing the debt, and lead to currency devaluations and government defaults.

The cycles of inflation, eroding political confidence, and instability will devastate industries. Many will underinvest or go out of business, furthering the hollowing out of the middle classes and eroding the tax base.

The Precedent: The Spring of Nations

Unfortunately, something similar happened in the past, as I shared in this article:

In 1848, the economic conditions of French workers were so bad that people were irate. The French government tried to legislate better conditions for workers, but it wasn’t enough, and citizens revolted.

Soon, those revolts spread to the rest of France, and then to Europe. There were Revolutions of 1848 across France, Germany, Italy, Austria-Hungary, Poland, Moldova, the Ottoman Empire…

Red indicates centers of revolutionary outbreaks. Source.

It is not a coincidence that, in the middle of the European Revolutions of 1848, this appeared:

The Communist Manifesto, Karl Marx and Friedrich Engels. Source. Here’s the entire manifesto if you’re interested.

Let’s pause and think about this. The technological innovations of the Industrial Revolution created such an economic dislocation that the many losers were furious. They revolted everywhere and came up with an entirely different political system to take power from the winners and distribute it to the losers.

So if the techno-foolish tell you that this time it’s the same, start panicking.

The economic trigger then was different, but that wave too grew out of the poverty of the common people in an unfair system in the middle of a big economic shift.

So the same way Marx introduced the world to Communism in this context, so might we see equivalent economic theories emerge now. I’m not sure that’s what we want.

Where Will This Happen?

The last two countries where this will happen are China and the US:

  • They both have huge, well-diversified economies.

  • The US:

    • Controls most of the AI value generation, which will be taxed.

    • Is so big and isolated geographically that its investors and workers will be reluctant to leave.

    • Has enough competition between states to pressure against high taxation.

    • Has a strong individualist, capitalist, anti-socialist ethos.

  • China:

    • Is already used to wealth redistribution.

    • Has a very strong government.

    • Already limits capital and people’s movement.

    • Already has a decent AI industry.

So all this will apply first to other wealthy countries, such as those in Europe, Canada, or Australia.

How Do We Go from Short-Term Collapse to Long-Term Utopia?

Alice, from our introduction, might be you or me in a few years. And if she can’t find a job, she will be an angry elite who turns anti-elite, fighting the system that brought us there.

Automation might end up solving all our problems of scarcity, but on our path to that point, our society might collapse in a runaway process of fewer jobs, more inequality, more taxes, capital and expert/entrepreneur flight, government debt and money overprinting, defaults, economic crises, fewer jobs…


So how do we smooth out the path from here to there?
This is what we’re going to explore in the next article.

Subscribe to read it

Compute—the Oil of the 21st Century

2025-11-18 21:01:59

In the last two articles we’ve covered the massive scale-up of AI investments:

  • First, whether we’re in an AI Bubble

  • Second, how hyperscalers believe they’re close to making God: They’re scaling investments to automate AI research, and from there reach superintelligence.

But what investment are we talking about? It’s hard to figure out whether we’re overbui…

Read more

When Will We Make God?

2025-11-11 21:02:44

Hyperscalers believe they might build God within the next few years.
That’s one of the main reasons they’re spending billions on AI, soon trillions.1

They think it will take us just a handful of years to get to AGI—Artificial General Intelligence, the moment when an AI can do nearly all virtual human tasks better than nearly any human.

They think it’s a straight shot from there to superintelligence—an AI that is so much more intelligent than humans that we can’t even fathom how it thinks. A God.

If hyperscalers thought it would take them 10, 20, 30 years to get there, they wouldn’t be investing so much money. The race wouldn’t be as cut-throat.

The key question becomes why. Why do they think it’s coming in the next few years, and not much later? If they’re right, you should conclude that we’re not in an AI bubble. If anything, we’re underinvested. You should double down on your investments, and more importantly, you should be making intense preparations, because your life and those of all your loved ones are about to be upended. If they’re wrong, it would be useful to know, for you could become a millionaire shorting their stock—or at least not lose your money when the bubble pops.

So what is the argument they’re following to believe AGI is coming now, and superintelligence soon after?

What Does AGI Look Like?

Today, LLMs are very intelligent, but you can’t ask them to do a job. You can ask them questions, and they can answer better than most humans, but they’re missing a lot of skills that would allow them to replace a job: understanding the context of what’s going on, learning by doing, taking initiative, making things happen, interacting with others, not making dumb mistakes…

We could define AGI as solving all that: “Now an AI is good enough that you can tell it to do a task that would take a human several weeks of tough, independent work, and they go and actually make it happen.” This is quite similar to the most typical definition of AGI, which says something like “AGI is when AIs will be able to do nearly all tasks that humans can do.” When we reach that point, AIs will start taking over full jobs, accelerate the economy, create abundance where there was scarcity, and change society as we know it.

For some jobs, this will be extremely hard—like a janitor, who needs to do hundreds of very different tasks, many of which are physical, and require specialization in lots of different fields.

For some jobs, it might be much easier. For example, AI researcher:

The jobs of AI researchers and engineers at leading labs can be done fully virtually and don’t run into real-world bottlenecks [like robots]. And the job of an AI researcher is fairly straightforward, in the grand scheme of things: read ML literature and come up with new questions or ideas, implement experiments to test those ideas, interpret the results, and repeat.—Leopold Aschenbrenner, Situational Awareness

The thing that’s special about AI researchers is not just that they seem highly automatable, but also that:

  • AI researchers know how to automate tasks with AI really really well

  • AI labs have an extremely high incentive to automate as much of that job as possible

Automate the Godcrafters, or God Inventors, and you find God. I like thinking of the Godcrafters’ work as a very long incantation: Teams of humans tap on their keyboards infinitely long strings of runes that end up forming a God.

AGI—Automating God Inventors

Once you automate AI researchers, you can speed up AI research, which will make AI better much faster, accelerating our path to superintelligence, and automating many other disciplines along the way. This is why hyperscalers believe there’s a straight shot from AGI to superintelligence. We’ll explore this process in another article, but for now, this leads us to a key reframing of our original question, because a more practical way to define AGI is an AI that’s good enough to replace AI researchers, as this will accelerate the process to automate everything else.

When Can We Automate AI Researchers?

The first step to replacing AI researchers is to be intelligent enough. A key insight on this topic came from one of the most interesting, important, and weird graphs I’ve ever seen:

Source: Scaling Laws for Neural Language Models, Kaplan et al., 2020

“Test loss” is a way to measure the mistakes that Large Language Models (LLMs) make.2 The lower, the better. What this graph tells you is that predictions get linearly better as you add orders of magnitude of compute, data, and parameters to an LLM. And we can predict how this will keep going, because the trend has remained valid over seven orders of magnitude!3
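To see why straight lines on this kind of plot are such a big deal: they are the signature of a power law, which you can extrapolate with a single exponent. A minimal sketch, with illustrative constants rather than the paper’s fitted values:

```python
# Why the scaling-law plot shows straight lines: a power law,
#     loss = a * compute**(-alpha),
# becomes linear on log-log axes, since
#     log(loss) = log(a) - alpha * log(compute).
# The constants a and alpha here are illustrative, not the paper's fits.

import numpy as np

a, alpha = 10.0, 0.05
compute = np.logspace(0, 7, 8)     # seven orders of magnitude
loss = a * compute ** (-alpha)

for c, l in zip(compute, loss):
    print(f"compute = {c:>12,.0f}   test loss = {l:.2f}")
# Each 10x of compute multiplies the loss by 10**(-alpha) ≈ 0.89,
# which is why the trend is so easy to extrapolate.
```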

So the bet here is simple: Throw more resources at these models and they’ll eventually get there. We don’t need magic, just more (and more efficient) resources.

The previous graph is from a 2020 paper, but we are witnessing something tangible in the wild akin to that: The length of tasks AIs are doing is improving very consistently.

Every few months, AIs can do longer and longer human tasks at the same success rate.


<Interlude: Why task length improvements also entail mistake reductions>

Here, I took the 80% success rate because last time I shared the 50% equivalent, some readers came at me saying “Who cares about 50%? We’ll talk when they’re at 99%!” But if you understand how this works, you’ll understand that the progress in task length at 50%, 80%, or 90% accuracy is basically the same.

The reason an AI might not be able to do a longer task is that mistakes accumulate. If it makes a small mistake in minute 2, then in minute 5 it’ll build on that early mistake to make another one, and now you have two mistakes. The more mistakes you accumulate, the more they snowball.

One way to visualize this is with the graph below:

The above graph represents how successful a 100-step process can be, depending on how good each step is. The horizontal axis shows the success rate of each step. The vertical axis shows what happens if you chain 100 steps. If your success rate in one task is 90%, it looks amazing, but if you chain two tasks it’s now just 81%. Add another one, and it’s 73%. By the 100th step, it’s basically 0%. The success of the 100th step is very sensitive to improvements in the success rate of the average step: If you get to 98% success in every step, you still only succeed after 100 steps 13% of the time. But if you get to 99.9%, now it’s 90% over 100 steps.
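You can check those numbers directly. Assuming each step fails independently, success over n steps is just p to the power n; a two-line sketch:

```python
# Checking the numbers above: if each step succeeds independently with
# probability p, the chance of completing n steps is p**n.

for p in (0.90, 0.98, 0.999):
    print(f"per-step success {p:.1%} -> 100-step success {p**100:.1%}")
# per-step success 90.0% -> 100-step success 0.0%
# per-step success 98.0% -> 100-step success 13.3%
# per-step success 99.9% -> 100-step success 90.5%
```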

Conversely, if you need fewer steps, you accumulate fewer mistakes.

All of this is just to give you an intuition of how mistakes compound. In reality, it won’t be quite like this, because LLMs can self-correct.

<End of Interlude>


This is why there’s a pretty clear tradeoff between the success rate and the task length in real life:

Source: Measuring AI Ability to Complete Long Tasks, Kwa et al. Different colors represent different evals.

So if you’re keeping the success rate constant, and your task length is doubling every few months, it means the entire line above is moving to the right pretty consistently every few months.

OK and when will that be enough? Do we need to keep improving for one year? Five years? 100?

The Researcher Threshold

It helps to see how AIs perform across domains. What we see time and again is that AIs get to human levels faster and faster, and then surpass them.

Look at how good GPT got in just one year:

The following benchmarks were designed to compare AIs to experts across different fields:

These AI performance evaluations (“evals”) were designed to last a very long time, but they last less and less. The ultimate eval (named “Humanity’s Last Exam”) was meant to last years or decades by making questions extremely hard for AIs. It hoped to remain relevant until AGI. Instead, in less than a year, GPT has gone from a ~3% score to ~32%.

The race to 100% is inexorable.

But you know what I think is most revealing? The evals of AI research replacement. Yes, that exists!

What this graph tells you is that AIs (in this case, the “very old” Claude 3.5 and o1-preview) beat AI researchers at tasks of up to 2-4 hours, but as AIs and humans are given more time, AI performance increases slowly whereas AI researcher performance goes through the roof. This is why it’s so valuable that AIs’ task length keeps improving.

Something similar was discovered in this paper from OpenAI:

We’re Starting to See It: AI Improving AI

It’s not like AIs improving AIs is a pipe dream; they’ve been doing it for years, and every year it accelerates:

  • Neural Architecture Search (NAS) uses AI to optimize AI neural networks

  • AutoML automates the entire neural network creation process, including tasks like data preprocessing, feature engineering, and hyperparameter tuning.

  • AlphaProof (2024) does math at the International Mathematical Olympiad level, generating proofs that inform AI algorithm design.

  • And of course, coding:

    • Anthropic: AI agents now write 90% of code!

    • OpenAI: “Almost all new code written at OpenAI today is written by Codex users”

    • Google: AI agents write 50% of code characters.

    • Meta and Microsoft: ~50% next year and 20-30% today respectively

Seeing these data points, one wonders how AI research is not fully automated yet! This Reddit comment sheds some light on it:

I and several people I know went from zero to near zero usage of AI a year or two ago to using it everyday. It saves me time, and even more than that it saves me from doing boring work.

Now, did my productivity increase in any way? Not really. I just have more down time. First: I don’t want to advertise my now available time to my manager because they still think in the old way. And I can guarantee that once they find out how productive I can be, I will get more work, not a raise. So I have little incentive to advertise my newly found productivity increase. Second: even if I could do more, most of the things I work for require other people somewhere in the workflow. And those people have not started using AI at work or don’t use it extensively. It doesn’t matter if it took me 5 minutes instead of two days for a document. It will still take an old geezer somewhere 5 days to get back to me after reviewing it.

AI will get better every day, and AI managers will get better too, maybe automating not just tasks, but their orchestration, until both researchers and managers are all automated, and the speed of progress increases by orders of magnitude.

The Market’s Assessment

To figure out when precisely that might happen, it’s always good to look at prediction markets. According to Metaculus, we will reach their definition of weak AGI by the end of 2027 (as of writing this article):4

Note that the mode for this prediction is September 2026, in 10 months!

The graph below shows when the market guesses we will reach a more demanding definition of AGI:

AGI in 2033. Here again the mode is much earlier, April 2029, in 4 years

The definition of AGI in this second market includes robotics, though, which I think is the reason why the market believes it will take much longer. Without robotics, I believe the market would predict this date to arrive much faster.

The Current Progress Speed Won’t Last

A key argument that Aschenbrenner makes for concluding that AGI will happen within the decade is that we’re increasing our machines’ ability to think much faster now than we will in the future:

  • The ramp up in AI investment we’re making is unsustainable. We can’t keep dedicating more and more money to AI forever.

  • We’re picking the low-hanging fruit of compute and algorithm optimization.

These two together suggest that, if we are to find AGI, it’s either in the next few years, or it will take dramatically longer.

Matthew Barnett illustrated it visually:

When Do Hyperscalers Think We Will Reach AGI?

Why does all this matter? What we’re saying here is that we’re just a few years away from AGI, and that’s a key reason why hyperscalers are spending so much money on AI. If this were true, then hyperscaler leaders should manifest their belief that AGI will be reached in the next few years, probably before 2030. Do they?

Elon Musk thinks AGI will be reached at the end of this year, or the beginning of next year:

The market thinks Grok 5 will be released in Q1 2026. If the link doesn’t work, Polymarket is blocked in your country.

Dario Amodei, CEO of Anthropic, thinks it’s going to be in 2026-2027.

“We will just fall short of 2030 to achieve AGI, and it’ll take a bit longer. But by 2030 there’ll be such dramatic progress… And we’ll be dealing with the consequences, both positive and negative, by 2030.”—Sundar Pichai, CEO of Google and Alphabet

“Over the next 5 to 10 years, we’ll start moving towards what we call artificial general intelligence.”—Demis Hassabis, CEO of DeepMind

Sam Altman, CEO of OpenAI, believes the path to AGI is solved, and that we will reach it in 2028.

“We’re closer than anyone publicly admits.”—Sam Altman

So they think it’s coming in 1-10 years, with most of them in the 2-5 range.


Takeaways

The arguments to claim we’re about to make gods are:

  • AI expertise is growing inexorably. Threshold after threshold, discipline after discipline, AI masters the field and then beats humans at it.

  • We’re now tackling the PhD level.

  • In the current trajectory, we should reach AI Researcher levels soon.

  • Once we do, we can automate AI research and turbo-boost it.

  • If we do that, superintelligence should be around the corner.

How soon should we expect to reach AI Researcher level?

  • The past progression suggests it’s a matter of a few years.

  • We can see this happening, with parts of the AI workflows being automated as we speak.

  • Markets are consistent with this belief.

  • What hyperscalers say and what they do is consistent, too.

  • We’re about to increase effective compute so much that, if it doesn’t happen in the next few years, it might take decades.

You can have two types of disagreements with these statements:

  1. Effective compute won’t keep increasing as it has until now

  2. Even if it does, it won’t be enough to reach AGI

Let’s focus on them in the coming articles and answer the following questions:

  1. Can our compute keep getting better as fast as it has in the past?

    1. What’s the relationship between that and the AI bubble? Does it clarify whether we’re in one?

  2. Can algorithms keep getting better as fast as they have in the past?

    1. What about the data issue?

  3. What other obstacles are there to AGI?

  4. If we do reach AGI, will we reach superintelligence? Will we reach God?

  5. If we do, what happens next?

Half of this content will be premium. Subscribe to receive it all!

1

As we discussed in the previous article, another reason is simply that revenue justifies it. But these things are entangled: If we reach AGI, of course money will justify massive amounts of investment, since AGI will take over huge parts of the economy.

2

LLMs are really just word prediction machines. Here, test loss measures how well the LLM can predict the next word in a text it has never seen.

3

This is so weird to me because it’s a sign of how intelligence can emerge in animals, and especially in human brains: it’s just a matter of having enough neurons and enough connections between them. It means intelligence emerges, rather than requiring delicate design. That makes humans much less magical than people might think. We’ve just stumbled upon the first path to intelligence that nature found, and nature found it not because it was especially difficult, but because it optimized for intelligence for long enough to surf down these lines.

4

This AGI definition is easier to achieve than ours, but it’s pretty telling we’re getting into the realm of what was defined as AGI a few years ago. The definition in Metaculus: “We define AGI as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.

  • Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.

  • Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the “Winogrande” challenge or comparable data set for which human performance is at 90+%

  • Be able to score 75th percentile (as compared to the corresponding year’s human students; this was a score of 600 in 2016) on the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages.

  • Be able to learn the classic Atari game “Montezuma’s Revenge” (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play.”

Is There an AI Bubble?

2025-11-04 20:42:52

The Financial Times has written about it. The Economist. The New York Times. The Atlantic. It was a big theme at a few recent conferences I went to. The investment community thinks we’re in a bubble:

The fear is that we’re in a similar situation to the tech bubble in 2000-2001, or the railroads in the late 19th – early 20th century. In both cases, the investment (in Internet cables, websites, railroads) was amazing in the long term, but there was initial overinvestment. When early demand didn’t materialize, lots of companies went down and investors lost their money.

A market correction of the same magnitude as the dotcom crash could wipe out over $20trn in wealth for American households, equivalent to roughly 70% of American GDP in 2024. Foreign investors could face wealth losses exceeding $15trn, or about 20% of the rest of the world’s GDP. For comparison, the dotcom crash resulted in foreign losses of around $2trn, roughly $4trn in today’s money and less than 10% of the rest of the world’s GDP at the time.Gita Gopinath on the crash that could torch $35trn of wealth

This is what happened to top tech stocks in the dotcom crash:

If you invest in stocks and like having money, you might not want your portfolio to look like this. So is this happening now?

The Signs of a Bubble

They’re everywhere if you pay any attention.

Exuberant Valuations

If you take a huge step back, we should be in a recession: interest rates went up dramatically after COVID, there’s a trade war around the world, and the European economy is in tatters because of overspending on social programs, the high cost of energy, and competition from China. Yet the economy is booming, and the stock market with it.

This is single-handedly driven by the impact of AI, which accounted for 85% of the gain in US stocks so far in 2025!1 AI companies now make up half of the S&P 500!

This viral chart puts it in context:

“12-month forward P/E ratio” means: “What is the price of this company, compared to its expected earnings over the next 12 months?” If the price is very high compared to the earnings, the company is not making much money yet, but its stock price is still high. This can happen when the long-term earnings potential of a company is very high, so investors are willing to pay a lot for the company even if it’s not making money yet. The crux is: Will the company actually make money according to its potential?

From peak (2000) to trough (2002), the S&P lost 64% of its value. Yet the stock market is more expensive today (vs its actual earnings) compared to the equivalent numbers in 2000, just before the dotcom burst.

If instead of the P/E ratio you look at price-to-book value (the price compared to the value of the company on its books, i.e., its assets minus liabilities):
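To make these two ratios concrete, here’s a minimal sketch with made-up numbers (purely illustrative; not any real company’s financials):

```python
# Toy numbers, purely illustrative: not any real company's financials.

def forward_pe(price_per_share: float, expected_eps_next_12m: float) -> float:
    """Share price divided by expected earnings per share over the next 12 months."""
    return price_per_share / expected_eps_next_12m

def price_to_book(market_cap: float, assets: float, liabilities: float) -> float:
    """Market cap divided by book value (assets minus liabilities)."""
    return market_cap / (assets - liabilities)

# A $300 stock expected to earn $10/share next year trades at 30x forward earnings:
print(forward_pe(300, 10))                 # 30.0

# A $1T company with $400B in assets and $300B in liabilities trades at 10x book:
print(price_to_book(1e12, 400e9, 300e9))   # 10.0
```

The higher either multiple, the more future growth investors are paying for upfront.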

People love comparing the evolution of stock prices:

Warren Buffett smells blood and is accumulating cash, anticipating a buying spree when the markets crash.

He’s been selling stocks for four straight years to build up this war chest:

Huge Investments

The outsized valuations Buffett is preying on are based on the promise that AI will make these companies a ton of money. For that, they need to invest a lot upfront in chips, data centers, servers, software… This is so much investment that ~45% of the US economy’s growth so far in 2025 is due to the AI build-up! A lot of it comes from the four biggest AI companies.

They are not the only ones investing.

There’s a projected $1.5T of debt for AI data centers by 2028. Together, all these investments account for a higher share of GDP than in the dotcom bubble!

CapEx (Capital Expenditure, a type of investment in physical stuff) of the S&P 500 as a share of the US GDP

It’s not yet at the level of railroads or electricity, but if it keeps growing as planned, it might reach ~4% of GDP, competing with the electrification craze.

But also, everybody is investing in everybody else.

Circular Investments

Microsoft has invested at least $13B in OpenAI, which uses Microsoft’s Azure cloud infrastructure. Amazon has committed $8B to Anthropic, which uses Amazon’s AWS cloud infrastructure, and Amazon plans to insert Anthropic into its products. Amazon and OpenAI just announced a $38B deal in which Amazon will power OpenAI compute on NVIDIA machines. NVIDIA will invest up to $100B in OpenAI, which will use it to buy NVIDIA systems. OpenAI has signed a $22B deal with CoreWeave (an AI infrastructure company in which I had invested pre-IPO), including a $350M stock investment…

The fear here is that companies might be inflating valuations by investing in and spending with each other. There’s also the fear that if one fails, many others might, too.

So are all these investments supported by sound financials?

Return on Investment

How can a company with $13B in revenue make $1.4T of spending commitments?—Brad Gerstner, All Things AI podcast

Sam Altman says OpenAI makes more than $13B, and revenue is growing fast, but still: revenue covers barely 1% of those spending commitments ($13B of $1.4T ≈ 0.9%).

Most of what we’re building out at this point is the inference [...] We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.Sam Altman

What this means is that the upfront investment costs are so humongous that OpenAI, the company that makes the most money from end consumers of AI, loses money. In fact, it’s valued at $500B with no profits expected for 5+ years! The company is currently burning about $12B per quarter, with 700M weekly users.

OpenAI is burning $12B per quarter
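To put that burn in perspective, here’s a back-of-the-envelope calculation using the figures above (the per-user number is my own derivation, and it lumps training capex in with the cost of serving users):

```python
# Back-of-the-envelope, using the figures above. Illustrative only:
# it lumps training capex in with the cost of serving current users.
quarterly_burn = 12e9    # ~$12B burned per quarter
weekly_users = 700e6     # ~700M weekly users

annual_burn = quarterly_burn * 4
burn_per_user_per_year = annual_burn / weekly_users
print(f"~${burn_per_user_per_year:.0f} per user per year")  # ~$69
```

Roughly $69 of losses per user per year: each user would need to generate about that much more revenue (or cost that much less to serve) just for OpenAI to break even at today’s spending.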

And what happens if chips become outdated in 3-5 years? Wouldn’t that be a way to burn trillions in value?

Profitability

All of this would be OK if customers were seeing a lot of value in AI and couldn’t stop buying more and more: OpenAI and its peers could learn to spend their money better over time, amortize their huge investments over decades, and print money. But this doesn’t seem to be the case:

95% of organizations are getting zero return in their GenAI investments.—State of AI in Business, MIT.

If end customers like Unilever or Walmart don’t see a return on their investment into AI companies like OpenAI or Anthropic, will they keep investing? If they don’t, what will happen to the hundreds of billions in investment commitments the AI companies have with the likes of NVIDIA and CoreWeave?

Demand Is Lagging

That’s why it’s a worrying sign that corporate AI adoption remains low and is shrinking!

Remember that the big four tech companies are investing hundreds of billions per year in their AI build-up. They would need trillions in revenue to make this back!
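Here’s a rough sanity check on that claim, with assumed round numbers (my assumptions, not the companies’ actual figures):

```python
# Rough sanity check with assumed round numbers (not actual company figures).
annual_capex = 400e9   # assume ~$400B/yr of AI capex across the biggest players
gross_margin = 0.5     # assume AI revenue carries a ~50% gross margin

# If hardware depreciates roughly as fast as it's replaced, yearly capex must
# be covered by yearly gross profit just to break even:
required_annual_revenue = annual_capex / gross_margin
print(f"~${required_annual_revenue / 1e9:,.0f}B per year")  # ~$800B/yr
```

Under these assumptions, that’s ~$800B of AI revenue needed every single year just to stand still — trillions over just a few years, from customers who mostly report zero return so far.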

This reminds me of Internet infrastructure.

By the early 2000s, telecom companies had raised almost $2T in equity and $600B in debt, mostly to build over 80M miles of fiber optic cable in the U.S. alone. By 2005, as much as 85% of those cables were still unused (“dark fiber”). This led to plummeting bandwidth prices and eventually, serious financial losses and bankruptcies in the telecom sector.—Akshat Shrivastava, checked here

The supply was built too far ahead of demand. Worryingly, Microsoft is canceling data center leases due to fear of oversupply, cancellations specifically targeted at limiting OpenAI’s training of better AIs.

Competition

Maybe you could imagine a return to reason, where players realize they might lose trillions, and decide to lower investments and increase prices? But this will not happen here because these are the biggest companies in the world fighting to be the first ones to create a god, and harness all the power and value that come from it. Luckily for them, they’re already making money hand over fist, guaranteeing that they can keep financing this at steep losses for a long time.

These companies produce hundreds of billions of dollars every year. They can keep plowing through this money for a long time, which means they can operate at a loss for a long time, which means this industry might not be profitable for a long time… Until they decide to pull the plug.

So hold on a second: The bulk of GDP growth and market exuberance comes from a handful of companies with circular investments that lose money servicing every customer. Meanwhile, these customers don’t understand these AI products well enough to make them useful, so their adoption has slowed down, but this whole castle is built on estimates of revenue growing exponentially?! Which must happen through more customers, because more revenue per customer might be hard, as all these companies are competing on price, and their pockets are so deep they can keep pushing prices down for decades?

Is it a bubble then? Here’s why I don’t think it is.

Read more