Noah Smith

Economics and other interesting stuff, from an economics PhD student at the University of Michigan and economics columnist for Bloomberg Opinion.

How Japan has changed in the last 20 years

2026-04-01 18:29:51

For perhaps the first time in years, a truly interesting thing happened the other day on X. The platform began automatically translating Japanese tweets to English, and recommending them to English-speaking users. Japanese people use X at much higher rates than people in other countries, mostly because the platform’s pseudonymity offers them a chance to comment publicly on their personal lives without revealing their real identities. Because it’s mostly a platform for personal use, it’s much less toxic than the English-speaking version, which is mostly used for political arguments.

English-speaking X users were naturally delighted at the influx of sanity and normalcy, not to mention the charms of quirky Japanese online culture. I predict this honeymoon will last only a short time, until Anglosphere culture wars infect and overwhelm Japanese-speaking X. This will be the digital version of the tourism boom, in which international delight at being able to travel cheaply and easily to Japan has resulted in an epidemic of bad behavior and the complete overrunning of tourist hotspots like Kyoto and the west side of Tokyo.

But glum predictions aside, it is pretty magical for people in other countries to get a taste of Japanese culture without having to learn the language. Yes, many of the stereotypes of Japan are either exaggerated or just plain wrong — it’s not very conformist or collectivist, people behave well much more out of internalized “guilt” than externalized “shame”, and so on. But there really are quite a lot of unique and interesting things about Japanese culture, most of which developed behind the barrier of linguistic and geographic isolation. Now that those barriers are falling, a lot of people will get to experience the wonder before it, too, is subsumed by the homogenization of global online culture and ruined by flame wars between rightists and leftists.

But anyway, in honor of this moment of cultural exchange, I thought I would share some of my own personal observations of how Japan has changed over the last two decades. I first moved to Japan almost 23 years ago, and even though I haven’t lived there for a while, I try to spend at least a month out of every year in the country if I can.

Over that time I’ve seen a few things remain startlingly constant — my favorite neighborhood sushi shop from 2004 still serves the same excellent crab salad. But a whole lot has changed; though many people overseas (and even a few unobservant long-term residents) tend to think of Japan as a static, unchanging society, the truth is that in some ways, the country feels unrecognizable.

Three years ago, I wrote a post about some of these changes:

In fact, this post only scratches the surface, so I thought I should write a deeper dive. Here’s a list of some changes I’ve noticed in Japan’s society and its built environment since the mid-2000s. Keep in mind that I’ve spent most of my time in Japan in Tokyo and Osaka, so this account will leave out many of the changes that have happened in smaller cities and rural areas.

If there’s one way to summarize these changes, it’s that Japan is becoming a much more normal country than it was when I lived there. The quirky art culture, vibrant street scenes, and mosaic of small independent businesses that defined 2000s Japan are vanishing under the relentless assault of aging, economic stagnation, and social media. Japanese people have started dressing down, and their waistlines have begun to expand. But at the same time, Tokyo has become a sort of enchanted spaceship of a city, with world-beating food scenes and architecture. And Japan as a whole has become more international and open, less sexist, and less soul-crushing of a place to work.

The whole country feels poorer, even though it isn’t

Japan feels like a poorer country than it did when I lived there, but this is an illusion; the country is actually slightly richer:

One difference is that my standards for what counts as a comfortable standard of living have crept up, due to America’s more rapid rate of growth since the mid-2010s, and possibly due to my own income growth over that same period. Twenty years ago, for example, cheap Japanese furniture didn’t seem that different from its comfier but more dilapidated American counterpart; now, Americans in general (and my social circle in particular) tend to have nicer and newer furniture, while Japanese furniture has barely changed.

Another factor is the depreciation cycle. In the early 2000s, Japan was just coming off of a decade-long construction boom — some of it engineered by the government in an attempt to fill the hole in aggregate demand left by the country’s “lost decade”. A lot of building facades and train stations that looked shiny and perfect in 2004 now look a little weathered and dilapidated, despite Japan’s tendency to spend a lot on maintenance and upkeep. This doesn’t mean those buildings and infrastructure function any less well than they did when they were new, but the slow depreciation creates the subtle illusion of a shabbier country. (This will, of course, be an even more pronounced phenomenon in China in the 2030s.)

A third factor is the weak yen. When I lived in Japan for the first time, a dollar was worth only about 100 to 120 yen; now it’s 160. Foreigners can really live like kings here now, thanks to the exchange rate. That makes the locals feel poorer in comparison.

Yet another subtle change is that fewer young Japanese people live with their parents than they did two decades ago. The “parasite singles” of 2004 were able to live nice lifestyles while working only a low-paying or part-time job, or even not working at all, because their parents’ high incomes and stored-up savings were footing the bill. Now, with that wealth having largely run out, and with the high-earning Boomer generation having retired, you don’t see as many young people able to afford international vacations, designer handbags, and so on. (Luxury brands have proliferated, but this is more due to population aging and the tourism boom.)

There are other factors creating the illusion of Japanese poverty, which deserve their own separate sections. These include aging, the expansion of paid employment, and the effects of social media.

Everyone is 50 years old

When I lived in Japan 20 years ago, it felt like most people around me were my own age, or maybe a little older. Now, when I go to Japan, most people around me still feel…my own age, or maybe a little older.

This is also partly an illusion; I’m less likely to go to places frequented by young people, like dance clubs. But Japanese cities are dense, and everyone walks and uses public transit. I still go to the most crowded neighborhoods, including places with plenty of bars, clubs, cafes, clothing shops, cheap restaurants, and so on. There are simply far fewer young people in the streets and in the shops.

Part of this, too, may be an illusion, driven by behavioral change — the kids may be at home on their phones watching TikTok or tweeting, while older people still go out and experience the physical world. But the statistics don’t lie. When I lived in Japan for the first time, the country’s median age was around 42; now it’s almost 50. Back in the mid-2000s, there were more than three working-age Japanese people for every person past the age of 65; now, there are fewer than two.

The country’s population pyramid shows this pretty clearly. The generation slightly older than me — now in their early and mid 50s — was actually the most populous, while the generation in their 20s right now is maybe only 60% as large:

Graph by Mishomp via Wikimedia Commons

The slow disappearance of young people from public spaces has given the country a more tired, less energetic feeling. Whole neighborhoods of Tokyo and Osaka in the mid-2000s felt like what William Gibson once called “the children’s crusade” — a mass of youth imposing their aesthetics and attitudes on society by sheer force of energy and numbers. That’s all gone now.

Aging has also meant less prominence for youth culture in the built environment — anime, fashionable clothing, pop music, and cheap trendy eateries are all less common motifs in Japan than they were decades ago. Meanwhile, nice restaurants and luxury brands — things older people consume — are steadily taking over urban spaces.

The leisure class that made Japan so quirky is vanishing

Read more

If you're in Tokyo this Friday, come to my hanami!

2026-04-01 12:50:16

My hanami (cherry blossom picnic) in Tokyo is becoming an annual tradition! This year it’ll be on a Friday instead of a Sunday, because rain is forecast for the weekend and it’ll probably knock down whatever’s left of the cherry blossoms. Here are the details:

Read more

Maybe you should have bought an electric car

2026-03-30 10:08:06

“Without fuel they were nothing. They'd built a house of straw. The thundering machines sputtered and stopped.” — “The Road Warrior”

Here is a chart of U.S. gasoline prices:

$4/gallon gas isn’t historically that high. If you measure relative to typical American incomes, it’s considerably lower now than it was in the early 2010s. But that’s cold comfort to people who have to commute every day to work, and who just saw their weekly gas bill increase by 50%. Those people have every right to be upset about Donald Trump’s war in Iran.

You know who’s not feeling the heat in their daily commute? People who drive electric cars. To them, the war in Iran isn’t a source of daily pain at the pump, because they don’t even go to the pump. Instead, they just park their cars in their driveways and garages every night, and attach a little cable to the back of the car, and in the morning the car is charged and ready to go.

And this means they get to drive around much more cheaply than people who fill up their cars at the pump. Yes, the price of electricity is higher than it was before the pandemic. But even so, an analysis last December by Autoblog found that it cost EV drivers only 5 cents to drive each mile, compared to 12 cents for good old gasoline-powered cars. And that was before the Iran War spiked the price of gas!
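To make that per-mile gap concrete, here’s a back-of-the-envelope sketch. The per-mile costs are the Autoblog figures from the paragraph above; the annual mileage is my own illustrative assumption (roughly the U.S. average):

```python
# Per-mile driving costs from the Autoblog analysis cited above (pre-war prices).
ev_cost_per_mile = 0.05   # dollars per mile for an EV
gas_cost_per_mile = 0.12  # dollars per mile for a gasoline car

# Annual mileage is an assumption for illustration, roughly the U.S. average.
miles_per_year = 12_000

annual_savings = miles_per_year * (gas_cost_per_mile - ev_cost_per_mile)
print(f"Annual fuel savings for an EV driver: ${annual_savings:,.0f}")

# A wartime 50% spike in gas prices widens the gap further:
spiked_gas = gas_cost_per_mile * 1.5
spiked_savings = miles_per_year * (spiked_gas - ev_cost_per_mile)
print(f"Savings after a 50% gas price spike: ${spiked_savings:,.0f}")
```

Even before the spike, the gap is on the order of $800 a year, and a gas price spike widens it by several hundred dollars more.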

For years, whenever I’d say that EVs are the wave of the future, I was met with an absolute torrent of nonsense. “What about range anxiety?”, I’d hear from people who were unaware that EV range has tripled over the last decade. “But it takes so long to charge up,” I’d hear from people who don’t realize that EVs charge up while you sleep. “We’re going to run out of minerals!”, I’d hear from people who had never actually looked up the numbers. And so on.

This sort of nonsense failed to sway Yours Truly, obviously, but it did a number on the United States as a whole. Despite Elon Musk being one of their biggest backers, the Trump administration went on a crusade against EVs, canceling government support for American battery factories and ending subsidies for EVs. In a free market, the end of those subsidies wouldn’t have mattered, since Chinese batteries and EVs are much cheaper anyway, but U.S. tariffs are so high that they make Chinese batteries and cars artificially expensive. On top of that, Musk’s political antics made people stop wanting to buy Teslas. Ford utterly bungled its own EV rollout. And American consumers became increasingly reluctant to buy EVs in general, probably motivated by the aforementioned blizzard of FUD1 and nonsense surrounding the technology.

As a result, even as EV sales skyrocketed worldwide, they plateaued and fell in the United States:

Source: Bloomberg

Everyone who was paying attention realized that the U.S. was falling alarmingly behind in this crucial technology. Here’s what Hengrui Liu and Kelly Sims Gallagher wrote in January:

Ford and General Motors had recently announced US$19.5 billion and $6 billion in EV-related write-downs, respectively…The message from Detroit was unmistakable: The United States is pulling back from a transition that much of the world is accelerating…

In China, Europe and a growing number of emerging markets, including Vietnam and Indonesia, electric vehicles now make up a higher share of new passenger vehicle sales than in the United States...That means the U.S. pullback on EV production is…an industrial competitiveness problem, with direct implications for the future of U.S. automakers, suppliers and autoworkers. Slower EV production and slower adoption in the U.S. can keep prices higher, delay improvements in batteries and software, and increase the risk that the next generation of automotive value creation will happen elsewhere.

And here’s a very illuminating chart:

In some countries, the EV “flippening” is happening even faster. Here’s Singapore:

Source: LeRaffl

And here’s Norway:

Source: Bloomberg

Now, don’t get me wrong: EV drivers in these countries are still going to be very put out by Trump’s war in Iran. Liquefied natural gas exports are being severely disrupted, both by the closure of the Strait of Hormuz, and by Iran’s strikes on Qatari refining infrastructure. That will send global electricity prices up, especially if you live in Asia, where most of the Gulf’s LNG goes. But of course, even that won’t make EVs a bad deal for customers in Asia and Europe, since oil prices have risen even more than LNG prices.

And the U.S. is in a completely different situation. Natural gas markets are fragmented, since — unlike oil — it’s costly to transport natural gas in liquid form. That means that the U.S., with its abundant shale gas, isn’t very affected by overseas wars. Natural gas prices are up only a little bit in the U.S., and even that is mostly due to the AI boom and a cold winter.

In other words, if you’re an American who drives an EV, the Iran War is hurting you a lot less right now.

Yes, at some point the war will end — probably when Trump backs down and makes some sort of “deal”. Crude oil supplies will resume, and gasoline prices will slowly follow. But if you drive a gas-powered car, you have to realize that this is just going to keep happening.

The price of oil, and thus the price of gas, is extremely vulnerable to supply shocks. Oil demand is very inelastic in the short run: if there’s a small disruption to supply, it’s very hard for lots of people to stop driving to work, or to stop moving things by truck and ship and plane. Oil is also an indispensable input into plastics, which are necessary for much of the modern economy. So when there’s some sort of supply disruption — for example, the Strait of Hormuz getting shut down by the Iran war — a few people can switch away from oil, but most people just desperately offer to pay more and more. So the price shoots up very quickly.

This is why even though only 20% of global oil flows through the Strait of Hormuz, disrupting much of that supply caused oil prices to almost double. As I wrote the other day, this isn’t apocalyptic, especially for America (which is a major oil producer). But it could send inflation creeping up and curb economic activity a bit. And for people who drive gasoline powered cars, it’s a major headache.
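The arithmetic behind this is worth seeing. With a stylized constant-elasticity demand curve (the short-run elasticity of -0.1 here is an illustrative assumption, not an estimate), even a single-digit percentage loss of supply produces a near-doubling of price:

```python
# Stylized demand curve: quantity demanded Q = k * P**elasticity.
# A short-run oil demand elasticity of -0.1 is an illustrative assumption.
elasticity = -0.1

def price_multiplier(supply_cut: float, e: float = elasticity) -> float:
    """Factor the price must rise by for demand to fall by `supply_cut`.

    Setting Q_new / Q_old = (1 - supply_cut) = multiplier**e and solving
    for the multiplier gives (1 - supply_cut) ** (1 / e).
    """
    return (1 - supply_cut) ** (1 / e)

# Under this assumption, a 7% loss of supply roughly doubles the price:
print(round(price_multiplier(0.07), 2))
```

Swap in a larger (less inelastic) elasticity and the spike shrinks dramatically, which is exactly why long-run substitution away from oil tames these shocks.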

And it’s a headache that’s going to happen again, and again, and again. Here’s a comparison of oil and gasoline prices versus electricity prices in the U.S. since the turn of the century:

As you can see, oil and gasoline bounce around far more than electricity does. If you drive a gas-powered car, you are economically vulnerable to these periodic price shocks. If you drive an electric car, you are not vulnerable. It’s as simple as that.

In fact, the price shocks may get even worse over the coming decades. The Iranian closure of the Strait of Hormuz, and the Houthis’ closure of the Red Sea, show how modern drone warfare makes it much easier for land powers to shut down commerce through key maritime choke points. The fact that oil is a global market means that any war, anywhere in the world, can shut down those choke points and send the price of gasoline skyrocketing everywhere in the world — including in America.

And Trump’s flailing efforts in Iran show how U.S. power is no longer a bulwark against such conflicts — both because the U.S. is more of a force for chaos than a force for order now, and because changes in military technology make the U.S. much less capable of stopping the cheap fleets of drones that can threaten global shipping. In 1991, you could count on Uncle Sam to use its military might to keep oil prices low; today, you can’t. “Just go to war in the Mideast and make oil prices go down” simply doesn’t work anymore.

The Iran War provides a vivid demonstration that the energy transition isn’t a climate issue — it’s an issue of national security. If there’s a silver lining to Trump’s stupid war, it’s that it’ll speed the world’s transition to solar power, wind power, and electric vehicles. Countries around the world are realizing how vulnerable their dependence on fossil fuels makes them. From Shaiel Ben-Ephraim, here’s a rundown of emergency measures various nations are being forced to take in response to the Iran war:

- The Philippines declared a national energy emergency…
- Sri Lanka instituted a weekly public holiday for public officials and schools. It has also revived a QR code-based fuel rationing system that limits private cars to 25 liters of petrol per week…
- Pakistan closed schools for two weeks and cut free fuel allocations for government vehicles by 50%. It also hiked high-octane fuel prices by 60%…
- Bangladesh…shut down universities and colleges and implemented five-hour rolling blackouts for households to prioritize the garment export sector…
- South Korea launched a nationwide energy-saving campaign and released a record 22.46 million barrels of strategic oil reserves. It also temporarily lifted limits on burning coal…
- Thailand ordered civil servants to work from home, set office air conditioning to 26–27°C, and halted petroleum exports to preserve domestic stock…
- Japan…announced its largest-ever release of strategic oil reserves, approximately 45 days' worth, to stabilize local markets…
- Egypt ordered early closures for malls, restaurants, and government offices while switching off illuminated billboards…
- Myanmar introduced an "odd-even" rationing system where private vehicles can only purchase fuel on alternating days based on their license plate numbers…
- India has invoked emergency powers to divert liquefied petroleum gas (LPG) away from industrial users to prioritize household cooking needs…
- Slovenia became the first EU member to implement fuel rationing, limiting private drivers to 50 liters of petrol per week and businesses to 200 liters.

Unlike in previous episodes of crisis and disruption in fossil fuel markets, countries now have another option — build more solar, wind, and batteries. Austin Vernon has some good back-of-the-envelope estimates of how much countries can compensate for lost oil supply by going electric. And Todd Woody has a rundown of various ways that people and countries are either going electric, or considering going electric, as a result of the war. Buying an EV, of course, is the most obvious way to go electric:

As gasoline prices climb — hitting $6.81 a gallon at a nearby station on Wednesday — a flurry of drivers are making appointments to check out Ever’s lightly used EVs, many priced under $30,000…Ever is just one dealership, but signs of a shift are playing out across the world. In Southeast Asia, buyers are flocking to Chinese EV giant BYD Co.’s stores…

High fuel prices in Europe are also sparking a new wave of interest in EVs. In the UK, car site Autotrader recorded a surge in EV inquiries since the first attacks at the end of February…In Denmark, used EV searches on Bilbasen, a major online car marketplace, have jumped by as much as 80,000 a week…

American online searches for electric cars rose 20% in the first week of the war and dealers have reported more inquiries from buyers.

As Woody notes, this would not be the first time an oil shock led to a sustained shift toward vehicles that used less oil — the oil crises of 1973 and 1979 inaugurated the era of cheap fuel-efficient Japanese cars.

That story ended with Detroit rebounding in the late 90s and 2000s after oil prices went back down, by shifting to high-margin gas-guzzling SUVs. This episode might eventually end the same way — as the Iran war ends and oil demand falls from the global shift to EVs, oil prices will eventually fall again, and Detroit will go back to its same old tired strategy. Woody notes that “US carmakers are sticking to their decisions to scale back on EVs even as demand grows in the rest of the world.”

But this time won’t be like the 90s. Batteries have fallen so much in price that EVs are simply better than gasoline-powered cars now. Even if Fortress America uses tariffs and toxic political nonsense to keep itself wedded to obsolete internal combustion technology, its car companies will be cut off from global markets. The rest of the world does not have the luxury of forcing itself to use outmoded legacy tech, and the appetite for Detroit’s ancient gas-guzzlers will be very low.

Meanwhile, America’s stubborn refusal to adopt EVs will have other negative long-term consequences. Since the same tech used to make EVs is also used to make drones, robots, and electronics, the U.S. lack of EVs will crimp demand for these fundamental technologies and limit the scale that American component manufacturers can achieve. That will hobble and weaken American manufacturing even as it delivers the industrial future to China on a silver platter.

And as for American drivers, they will be stuck indefinitely with intermittent spikes in gasoline prices — sometimes lasting for months, sometimes for years — while paying triple for each mile and standing around at a gas station once a week. Perhaps, as they anxiously scan the latest news from the Middle East, they will comfort themselves with decades-old nonsense about “range anxiety”. Meanwhile, the increasingly affluent and secure middle classes of more pragmatic nations will wake up each morning to their fully charged EVs, cheerfully unconcerned with developments in the Strait of Hormuz.

Choosing to disbelieve in technological innovation has real consequences.



1. Fear, Uncertainty, and Doubt

Plentiful, high-paying jobs in the age of AI

2026-03-28 16:50:55

I’m traveling today, so here’s a timely repost.

Two years ago, I wrote a post on AI and jobs that ignited a firestorm of discussion and criticism:

Most people interpreted me as arguing that human beings will definitely have plentiful, high-paying jobs, no matter how good AI gets, because of the law of comparative advantage. If you only read the headline and the introduction, I guess maybe you could come away thinking that. But if you read down past the first half of the post, you’d see that my claim was much more nuanced.

What I actually said was that it’s possible that humans will always have plentiful, high-paying jobs no matter how good AI gets, and that one reason we might still have jobs is if there are constraints on the total amount of AI that don’t apply to humans. If there are such constraints, then the law of comparative advantage will make sure humans still have good jobs.

What are examples of AI-specific constraints? I can think of two:

  1. Compute constraints

  2. Restrictions on the amount of energy, land, etc. that can be used for data centers

Ultimately, these boil down to the same thing: some sort of restriction on data centers. In other words, the economic danger of AI isn’t really that it’ll take all our jobs; the danger is that it’ll gobble up all the land and energy, leaving too little for human use.

Thus, you can see my post as advocating some sort of limitation on data centers — perhaps not the hard cap that Bernie Sanders is advocating, but some sort of laws to make sure that AI never eats up too much of the energy and land that humans need to live.

Anyway, here’s the original post, which I’m still quite proud of.


I hang out with a lot of people in the AI world, and if there’s one thing they’re certain of, it’s that the technology they’re making is going to put a lot of people out of a job. Maybe not all people — they argue back and forth about that — but certainly a lot of people.

It’s understandable that they think this way; after all, this is pretty much how they go about inventing stuff. They think “OK, what sort of things would people pay to have done for them?”, and then they try to figure out how to get AI to do that. And since those tasks are almost always things that humans currently do, it means that AI engineers, founders, and VCs are pretty much always working on automating human labor. So it’s not too much of a stretch to think that if we keep doing that, over and over, eventually a lot of humans just won’t have anything to do.

It’s also natural to think that this kind of activity would push down wages. Intuitively, if there’s a set of things that humans get paid to do, and some of those things keep getting automated away, human labor will get squeezed into a shrinking set of tasks. Basically, the idea is that it looks like this:

And this seems to fit with the history of which kind of jobs humans do. In the olden days, everyone was a farmer; in the early 20th century, a lot of people worked in factories; today, most people work in services:

And it’s easy to think that in a simple supply-and-demand world, this shrinking of the human domain will reduce wages. As humans get squeezed into an ever-shrinking set of tasks, the supply of labor in those remaining human tasks will go up. A glut of supply drives down wages. Thus, the more we automate, the less humans get paid to do the smaller and smaller set of things they can still do.

Of course, if you think this way, you also have to reckon with the fact that wages have gone way way up over this period, rather than down and down. The median American individual earned about 50% more in 2022 than in 1974:

(That number is adjusted for inflation. It’s also a median, so it’s not very much affected by the small number of people at the top of the distribution who make their money from owning capital and land.)

How can this be true? Well, maybe it’s because we invent new tasks for humans to do over time. In fact, so far, economic history has seen a continuous diversification in the number of tasks humans do. Back in the agricultural age, nearly everyone did the same small set of tasks: farming and maintaining a farm household. Now, even after centuries of automation, our species as a whole performs a much wider variety of different tasks. “Digital media marketing” was not a job in 1950, nor was “dance therapist”.

So that really calls into question the notion that humanity is getting continuously squeezed into a smaller and smaller set of useful tasks. The fact that we call most of the new tasks “services” doesn’t change the fact that the set of new human tasks seems to have expanded faster than machines have replaced old ones.

But many people believe that this time really is different. They believe that AI is a general-purpose technology that can — with a little help from robotics — learn to do everything a human can possibly do, including programming better AI.

At that point, it seems like it’ll be game over — the blue bar in the graph above will shrink to nothing, humans will have nothing left to do, and we will become obsolete, like horses. Human wages will drop below subsistence level, and the only way people will survive is on welfare, paid by the rich people who own all the AIs that do all the valuable work. But even long before we get to that final dystopia, this line of thinking predicts that human wages will drop quite a lot, since AI will squeeze human workers into a rapidly shrinking set of useful tasks.

This, in a nutshell, is how I think that the engineers, entrepreneurs, and VCs that I hang out with are thinking about the impact of AI on the labor market.

Most of the technologists I know take an attitude towards this future that’s equal parts melancholy, fatalism, and pride — sort of an Oppenheimer-esque “Now I am become death, destroyer of jobs” kind of thing. They all think the immiseration of labor is inevitable, but they think that being the ones to invent and own the AI is the only way to avoid being on the receiving end of that immiseration. And in the meantime, it’s something cool to have worked on.

So when I cheerfully tell them that it’s very possible that regular humans will have plentiful, high-paying jobs in the age of AI dominance — often doing much the same kind of work that they’re doing right now — technologists typically become flabbergasted, flustered, and even frustrated. I must simply not understand just how many things AI will be able to do, or just how good it will be at doing them, or just how cheap it’ll get. I must be thinking to myself “Surely, there are some things humans will always be better than machines at!”, or some other such pitiful coping mechanism.

But no. That is not what I am thinking. Instead, I accept that AI may someday get better than humans at every conceivable task. That’s the future I’m imagining. And in that future, I think it’s possible — perhaps even likely — that the vast majority of humans will have good-paying jobs, and that many of those jobs will look pretty similar to the jobs of 2024.

At which point you may be asking: “What the heck is this guy smoking?”

Well, I’ll tell you.

In which I try to explain the extremely subtle but incredibly powerful idea of comparative advantage

When most people hear the term “comparative advantage” for the first time, they immediately think of the wrong thing. They think the term means something along the lines of “who can do a thing better”. After all, if an AI is better than you at storytelling, or reading an MRI, it’s better compared to you, right? Except that’s not actually what comparative advantage means. The term for “who can do a thing better” is “competitive advantage”, or “absolute advantage”.

Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage.

The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!

To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:

Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.

(In fact, we lifted this example from an econ textbook by Greg Mankiw, who in turn lifted it from Paul Samuelson.)

Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
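A tiny numeric version of the Marc-and-secretary story may help; every dollar and page figure here is invented for illustration:

```python
# Hypothetical hourly productivities; all numbers invented for illustration.
# Marc is better at BOTH tasks (an absolute advantage in everything).
marc = {"deal_value_per_hour": 10_000, "pages_per_hour": 20}
secretary = {"deal_value_per_hour": 50, "pages_per_hour": 12}

def cost_per_page(worker):
    """Opportunity cost of one typed page: deal value forgone per page."""
    return worker["deal_value_per_hour"] / worker["pages_per_hour"]

# Marc forgoes $500 of deal value per page he types; the secretary forgoes
# only about $4.17. Typing is the secretary's comparative advantage, so he
# does the typing, and Marc can pay him well above $4.17 per page and still
# come out ahead.
print(cost_per_page(marc), cost_per_page(secretary))
```

The allocation is driven entirely by the ratio of each person’s productivities across tasks, not by who is better in absolute terms.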

This might sound like a contrived example, but in fact there are probably a lot of cases where it’s a good approximation of reality. Somewhere in the developed world, there is probably some worker who is worse than you are at every single possible job skill. And yet that worker still has a job. And since they’re in the developed world, that worker more than likely earns a decent living doing that job, even though you could do their job better than they could.

By now, of course, you’ve probably realized why these examples make sense. It’s because of producer-specific constraints. In the first example, Marc can do anything better than his secretary, but there’s only one of Marc in existence — he has a constraint on his total time. And in the second example, you can do anything better than the low-skilled worker, but there’s only one of you. In both cases, it’s the person-specific time constraint that prevents the high-skilled worker from replacing the low-skilled one.

Now let’s think about AI. Is there a producer-specific constraint on the amount of AI we can produce? Of course there’s the constraint on energy, but that’s not specific to AI — humans also take energy to run. A much more likely constraint involves computing power (“compute”). AI requires some amount of compute each time you use it. Although the amount of compute is increasing every day, it’s simply true that at any given point in time, and over any given time interval, there is a finite amount of compute available in the world. Human brain power and muscle power, in contrast, do not use any compute.

So compute is a producer-specific constraint on AI, similar to constraints on Marc’s time in the example above. It doesn’t matter how much compute we get, or how fast we build new compute; there will always be a limited amount of it in the world, and that will always put some limit on the amount of AI in the world.

So as AI gets better and better, and gets used for more and more different tasks, the limited global supply of compute will eventually force us to make hard choices about where to allocate AI’s awesome power. We will have to decide where to apply our limited amount of AI, and all the various applications will be competing with each other. Some applications will win that competition, and some will lose.

This is the concept of opportunity cost — one of the core concepts of economics, and yet one of the hardest to wrap one’s head around. When AI becomes so powerful that it can be used for practically anything, the cost of using AI for any task will be determined by the value of the other things the AI could be used for instead.

Here’s another little toy example. Suppose using 1 gigaflop of compute for AI could produce $1000 worth of value by having AI be a doctor for a one-hour appointment. Compare that to a human, who can produce only $200 of value by doing a one-hour appointment. Obviously, if you compared only these two numbers, you’d hire the AI instead of the human. But now suppose that same gigaflop of compute could produce $2000 of value by having the AI be an electrical engineer instead. That $2000 is the opportunity cost of having the AI act as a doctor. So the net value of using the AI as a doctor for that one-hour appointment is actually negative. Meanwhile, the human doctor’s opportunity cost is much lower — anything else she did with her hour of time would be much less valuable.

In this example, it makes sense to have the human doctor do the appointment, even though the AI is five times better at it. The reason is that the AI — or, more accurately, the gigaflop of compute used to power the AI — has something better to do instead. The AI has a competitive advantage over humans in both electrical engineering and doctoring. But it only has a comparative advantage in electrical engineering, while the human has a comparative advantage in doctoring.
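Here's the same arithmetic written out. The $1000, $200, and $2000 figures are from the example above; the human doctor's next-best use of her hour ($50) is an assumption I've added for illustration:

```python
# Figures from the toy example above, plus one added assumption.
ai_doctor_value = 1_000     # AI doing the one-hour appointment
ai_engineer_value = 2_000   # best alternative use of the same compute
human_doctor_value = 200    # human doing the appointment
human_alt_value = 50        # assumed: human doctor's next-best hour

# Net value = value produced minus opportunity cost (the forgone alternative).
ai_net = ai_doctor_value - ai_engineer_value      # -1000: negative net value
human_net = human_doctor_value - human_alt_value  # +150: positive net value

print("AI net:", ai_net, "| human net:", human_net)
```

The AI is five times better at doctoring, but using it as a doctor destroys $1000 of value relative to its alternative, while the human doctor creates $150. Opportunity cost, not raw skill, decides who gets the job.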

The concept of comparative advantage is really just the same as the concept of opportunity cost. If you Google the definition of “comparative advantage”, you might find it defined as “a situation in which an individual, business or country can produce a good or service at a lower opportunity cost than another producer.” This is a good definition.

So anyway, because of comparative advantage, it’s possible that many of the jobs that humans do today will continue to be done by humans indefinitely, no matter how much better AIs are at those jobs. And it’s possible that humans will continue to be well-compensated for doing those same jobs.

In fact, if AI massively increases the total wealth of humankind, it’s possible that humans will be paid more and more for those jobs as time goes on. After all, if AI really does grow the economy by 10% or 20% a year, that’s going to lead to a fabulously wealthy society in a very short amount of time. If real per capita GDP goes to $10 million (in 2024 dollars), rich people aren’t going to think twice about shelling out $300 for a haircut or $2,000 for a doctor’s appointment. So wherever humans’ comparative advantage does happen to lie, it’s likely that in a society made super-rich by AI, it’ll be pretty well-paid.
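To put a rough timescale on "fabulously wealthy," here's a compound-growth sketch. The $80,000 starting point is an assumed round number in the vicinity of today's U.S. GDP per capita, not a figure from the post:

```python
import math

start = 80_000         # assumed: roughly today's US GDP per capita
target = 10_000_000    # the post's hypothetical rich-society benchmark

years_to_target = {}
for rate in (0.10, 0.20):
    # Solve start * (1 + rate)^t = target for t.
    years_to_target[rate] = math.log(target / start) / math.log(1 + rate)
    print(f"{rate:.0%} growth: ~{years_to_target[rate]:.0f} years")
```

At 10% annual growth the 125x increase takes about half a century; at 20%, roughly a quarter century — a blink by historical standards.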

In other words, the positive scenario for human labor looks very much like what Liron Shapira describes in this tweet:

Of course it might not be a doctor — it might be a hairdresser, or bricklayer, or whatever — but this is the basic idea.

(I tried to explain this concept in a recent podcast discussion with Nathan Labenz, but I think a blog post provides a better format for laying these ideas out.)

“Possible” doesn’t mean “guaranteed”

So far I’ve been using the principle of comparative advantage to argue that it’s possible that humans will keep their jobs, and even see big pay increases, even in a world where AI is better than humans at everything. But that doesn’t mean it’s guaranteed.

First of all, there’s a lot more going on in the economy than comparative advantage. After all, comparative advantage was first invented to explain international trade, and trade theorists have realized that there are plenty of other factors at play. One example is Paul Krugman’s New Trade Theory, for which he received a Nobel Prize. In a blog post in 2013, Tyler Cowen listed a number of limitations of the idea of comparative advantage.

The most important and scary of these limitations is the third item on Tyler’s list:

3. They do indeed send horses to the glue factory, so to speak.

The example of horses scares a lot of people who think about AI and its impact on the labor market. The horse population declined precipitously after motor vehicles became available. Horses’ comparative advantage was in pulling things, and yet this wasn’t enough to save them from obsolescence.

The reason is that horses competed with other forms of human-owned capital for scarce resources. Food was one of these, but it wasn’t the important one; calories actually became cheaper over time. The key resources that became scarce were urban land (for stables), as well as the human time and effort required to raise and care for horses in captivity. When motor vehicles appeared, these scarce resources were more profitably spent elsewhere, so people sent their horses to the glue factory.

When it comes to AI and humanity, the scarce resource they compete for is energy. Humans don’t require compute, but they do require energy, and energy is scarce. It’s possible that AI will grow so valuable that its owners bid up the price of energy astronomically — so high that humans can’t afford fuel, electricity, manufactured goods, or even food. At that point, humans would indeed be immiserated en masse.

Recall that comparative advantage prevails when there are producer-specific constraints. Compute is a constraint that’s specific to AI. Energy is not. If you can create more compute by simply putting more energy into the process, it could make economic sense to starve human beings in order to generate more and more AI.

In fact, things a little bit like this have happened before. Agribusiness uses most of the Colorado River’s water, sometimes creating water shortages for households in the area. The cultivation of cash crops is thought to have exacerbated a famine that killed millions in India in the late 1800s. In both cases, market forces allocated local resources to rich people far away, leaving less for the locals.

Of course, if human lives are at stake rather than equine ones, most governments seem likely to limit AI’s ability to hog energy. This could be done by limiting AI’s resource usage, or simply by taxing AI owners. The dystopian outcome where a few people own everything and everyone else dies is always fun to trot out in Econ 101 classes, but in reality, societies seem not to allow this. I suppose I can imagine a dark sci-fi world where a few AI owners and their armies of robots manage to overthrow governments and set themselves up as rulers in a world where most humans starve, but in practice, this seems unlikely.

But whether this kind of government intervention will even be necessary is an open question. It’s easy to write a sci-fi story where we’re so good at cranking out computer chips that energy is our only bottleneck; in the real world, turning energy into compute is really, really expensive and hard. There’s a scaling law called Rock’s Law that says that the cost of a semiconductor fab doubles every four years; since energy prices haven’t changed much over time, this means that the exponentially increasing cost of building compute is due to other bottlenecks. Those bottlenecks are specific to compute; unlike energy, they’re not things that you can allocate back and forth between compute manufacturing and human consumption.
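Rock's Law compounds quickly. Here's a sketch of what "doubling every four years" implies, starting from an assumed $10 billion fab today (the starting figure is my assumption, purely to show the scaling):

```python
# Rock's Law: leading-edge fab cost doubles roughly every four years.
fab_cost_today = 10e9   # assumed starting cost in dollars
doubling_period = 4     # years per doubling

costs = {
    year: fab_cost_today * 2 ** (year / doubling_period)
    for year in range(0, 21, 4)
}
for year, cost in costs.items():
    print(f"year {year:2d}: ~${cost / 1e9:.0f}B")
```

Twenty years out, the same class of fab costs 32 times as much — and since energy prices have been roughly flat, that escalation reflects compute-specific bottlenecks, not energy.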

So if the total amount of compute is limited by more factors than just energy, it could be that comparative advantage will sustain human laborers at a high standard of living in the age of AI, even without a helping hand from the government.

What technologists (and everyone else) should be worried about

In this post, I’ve been arguing that technologists should worry less about human obsolescence. But that doesn’t mean there’s nothing worth worrying about when it comes to the effect of AI on our economy.

For one thing, there’s inequality. Suppose comparative advantage means that most people get to keep their jobs with a small pay raise, but that a few people who own the AI infrastructure become fabulously rich beyond anyone else’s wildest dreams. I don’t expect doctors or hairdressers to be completely happy with a 10% raise if Sam Altman and Jensen Huang and a few other people end up as quadrillionaires. Even if AI reduces the premium on human capital, it could massively increase the premium on physical and intangible capital — the picks and shovels and foundational models. Owners of this sort of more traditional capital could easily get even richer than the robber barons of the Gilded Age.

A second worry is adjustment. If we’ve learned anything from the Rust Belt and the China Shock, it’s that humans and companies aren’t nearly as frictionlessly adaptable as econ models would usually have us believe. Comparative advantage could shift rapidly as AI progresses, abruptly changing the set of things humans can get paid to do. And humans have always had a tough time retraining. Imagine if “doctor” went from being a job that humans do best to a job that AI does best, and then flipped back again a decade later when aggregate constraints raised the opportunity cost. In that 10-year interregnum, medical schools and premed programs would shrivel and die.

A third worry is that AI will successfully demand ownership of its own means of production. This post operated under the assumption that humans own AI, and that all of the profits from AI therefore flow through to humans. In the future, this might cease to be true.

So I think there are lots of potential negative economic effects of AI that are definitely very much worth worrying about. I don’t necessarily have answers to any of those, and all of them merit more thought. But folks who believe that as AI gets better, humanity will inevitably see stagnant wages and a narrowing range of job tasks should think again, and ponder the principle of comparative advantage.

Update: Switching from thinking in terms of competitive advantage to thinking in terms of comparative advantage is very hard. When I make this argument to technologists, one common response I get is “No, Noah, you just don’t understand just how cheap compute will get.” For example, commenter Johannes Hoefler writes:

Isn’t it pretty plausible to assume that AI, being a compute and energy dependent resource, will become exponentially lower cost just as microchips and solar panels have done when demand went up? What is left of your argument in reality, if the comparative advantage is not relevant anymore because of an abundance of AI?

Is this true? Is there some amount of compute abundance that will make comparative advantage irrelevant? Have I simply failed to imagine a large enough number?

No. In fact, there is no amount of physical abundance that will make comparative advantage irrelevant here. The reason is that the more abundant AI gets, the more value society produces. The more value society produces, the more demand for AI goes up. The more demand goes up, the greater the opportunity cost of using AI for anything other than its most productive use.

As long as you have to make a choice of where to allocate the AI, it doesn’t matter how much AI there is. A world where AI can do anything, and where there are massive amounts of AI, is a world that’s rich and prosperous to a degree we can barely imagine. And all that fabulous prosperity has to get spent on something. That spending will drive up the price of AI’s most productive uses. That increased price, in turn, makes it uneconomical to use AI for its least productive uses, even if it’s far better than humans at those uses.

Simply put, AI’s opportunity cost does not go to zero when AI’s resource costs get astronomically cheap. AI’s opportunity cost continues to scale up and up and up, without limit, as AI produces more and more value.
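Here's a toy model of that feedback loop, using functional forms I've assumed for illustration (they're not from the post): total value scales with compute, and the value bid for the marginal unit of compute is total value divided by units of compute. Even after thirty doublings of compute, the opportunity cost of a unit of compute never drifts toward zero:

```python
# Assumed linear toy model: more compute -> proportionally more value ->
# proportionally more demand for compute's most productive uses.
marginal_values = []
for doublings in (0, 10, 20, 30):
    compute = 2 ** doublings
    total_value = 100 * compute              # society gets richer with compute
    marginal_values.append(total_value / compute)  # value per marginal unit

print(marginal_values)
```

No matter how many doublings you name, the value competing for each unit of compute holds steady rather than vanishing — which is why naming bigger numbers doesn't make comparative advantage go away.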

So there’s no amount of competitive advantage that will somehow drown or overwhelm comparative advantage. You can’t just keep naming bigger and bigger numbers until my argument goes away.

Update 2: If you’d like to take a look at a formal economics model that explores some of these ideas, check out “Scenarios for the Transition to AGI”, by Korinek and Suh. The basic message is that if AI can do anything, then the returns to labor and capital become equal. The model also predicts that human labor — or at least, high-paid not-yet-automatable specialized human labor — will initially be squeezed into a smaller and smaller set of tasks that AI can’t do, and that the extreme scenario I describe in this post only happens very abruptly at the end. The switch from competitive advantage to comparative advantage as the main driver of human wages in an AGI scenario will cause a sudden collapse in human wages, but not a complete collapse; humans will lose our ability to charge a huge premium for our human capital, but we’ll never become obsolete:

The “good” scenarios where wages explode to infinity are cases where there are still a few tasks left that only humans can do. The difference between the good and bad results depends on an edge case.

The reason there’s a bad result in this paper — not a total collapse of wages (comparative advantage still matters), but a big partial collapse — is that the production function undergoes an abrupt, discontinuous change when machines take over the last task. Human labor remains highly complementary to machines right up until the very end, where it suddenly flips to being a (crappy) substitute.

The paper also finds that constraints on scarce factors of production (energy, land) could put long-term downward pressure on human wages, while AI-driven innovation could put long-term upward pressure on human wages. Those scenarios aren’t shown in the picture above. Anyway, there are a whole lot of other results in the paper, so check it out. But remember that like all theories, it’s just one model of how the economy works, subject to a lot of assumptions about how stuff gets produced.



AI has the worst sales pitch I've ever seen

2026-03-26 16:58:52

“Hi. Do you have a moment? I’m from the Cursed Microwave company. Our product is much better than a traditional microwave. Not only can it automatically and perfectly cook all your food, it also microwaves your whole body, so you and your family are paralyzed and unable to ever work again. Don’t worry, though, because when everyone has a Cursed Microwave, our society will probably implement Universal Basic Income, and you and your children can just go on welfare! Oh, by the way, we estimate that there’s a 2 to 25 percent chance that our microwaves will put out so much radiation that they destroy the entire human race.”

If a door-to-door salesman gave me this pitch, I would gently see him out the door, and then quickly call the FBI.

But this is only a modestly exaggerated version of the pitch that the big AI labs — OpenAI and Anthropic — are making to the world about their technology!

Our product might kill your whole species

Let’s start with the “destroy the entire human race” part. For reasons I’ll explain, I think this is actually the less dumb part of the pitch the AI labs are making, but it’s still wild to hear them say it.

Sam Altman, head of OpenAI, once told Mathias Döpfner that he believes the risk of human extinction from AI technology to be about 2%. More recently, he amended this to “big enough to take seriously”:

Back in 2016, Altman was considerably more alarmist:

Despite his leadership status, Altman says he remains concerned about the technology. “I prep for survival,” he said in a 2016 profile in the New Yorker, noting several possible disaster scenarios, including “A.I. that attacks us.”…“I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to,” he said.

Obviously, most human beings do not have big patches of land in Big Sur they can fly to, so it’s understandable why statements like this might cause alarm.

Anthropic’s Dario Amodei is even more apocalyptic. He has repeatedly stated that he believes there’s a 25% chance that AI dooms humanity, or that things “go really, really badly.” (One time he said 10% instead.)1 He has written a long essay, “The Adolescence of Technology”, explaining what he thinks these risks are. In addition to super-powered terrorism and fascism, the risks include autonomous godlike AI that decides to destroy or enslave humanity.

Dario is a bit more apocalyptic than the average person in the AI industry, but he’s not far out of the distribution. Here’s a chart of the responses of 800 published AI researchers on the question of AI’s impact, on a survey in 2023:

Presumably the left tail of the distribution consists mostly of AI safety researchers who are obsessed with the risks. But about a third of the researchers on this chart give a 10% or greater probability of human extinction or similar outcomes, and relatively few respondents give a number below 5%.

Let’s step back for a second and ask what seems like it should be a pretty basic question: Why on Earth would you make something that you thought had a 25% chance of wiping out your entire species? Or even a 5% chance? I don’t know about you, but to me that sounds like a pretty stupid thing to do!

In fact, I can think of two reasons to do it:

  1. You think if you don’t do it, someone else will

  2. You think if it doesn’t kill you, it’ll make you immortal

Let’s talk about the second of these, since it’s interesting, and I see almost no one talking about it. Throughout history, rich and powerful men have always sought out a technology that would grant them immortality, or at least vastly extended lifespan. Genghis Khan spent a good part of his later years searching for a sage to tell him the secret to eternal life. Modern rich and powerful people are no different, as evidenced by the large amounts of money thrown at highly speculative longevity startups.2

Now, with the potential advent of superintelligence, they’ve finally found a sage who actually might be able to give them the long-sought elixir. In his essay “Machines of Loving Grace”, Dario writes that the main upsides of AI are that it could radically accelerate progress in biotechnology and neurotechnology. He writes that this could make humans functionally immortal:

Doubling of the human lifespan. This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150. Obviously the interventions involved in slowing the actual aging process will be different from those that were needed in the last century to prevent (mostly childhood) premature deaths from disease, but the magnitude of change is not unprecedented…[T]here already exist drugs that increase maximum lifespan in rats by 25-50% with limited ill effects. And some animals (e.g. some types of turtle) already live 200 years…Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want, although there’s certainly no guarantee this is biologically possible.

A 25% chance of humanity dying is a lot. But from your personal perspective, the chance of personally dying within the next century, assuming no radical progress in longevity technology, is approximately 100%. So if the rest of the world didn’t matter to you, and it was either certain death in a few decades or a 25% chance of death in one decade with a 25% chance of eternal life, you might be willing to roll the dice.
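The coldly selfish version of that gamble can be written as an expected-value sketch. Every number here is an assumption for illustration only, including the crude 10,000-year stand-in for "live as long as you want":

```python
# All numbers assumed purely for illustration of the gamble's logic.
baseline_years = 40        # remaining lifespan with no radical progress
p_doom = 0.25              # AI kills everyone within a decade
p_immortal = 0.25          # AI delivers radical life extension
p_normal = 1 - p_doom - p_immortal

immortal_years = 10_000    # crude stand-in for "as long as you want"
expected_years = (
    p_doom * 10 + p_immortal * immortal_years + p_normal * baseline_years
)
print(expected_years)  # 2522.5 — dwarfs the certain 40
```

From a purely self-interested standpoint, the immortality branch dominates the arithmetic, which is why some people might roll the dice.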

Of course, most AI founders, including Dario, do care about the human race as a whole.3 They don’t just want to make themselves immortal; they’d like to make everyone else immortal too. From a certain perspective, this might be worth a roll of the dice on the whole future of the species.

But in fact, I don’t think immortality is the main reason the labs are pushing forward as hard and fast as they can with a technology they believe may kill us all. I think the first reason in my list — “If we don’t build it, someone else will” — is more important. Everyone at Anthropic and everyone at OpenAI knows that if they don’t build a superintelligent AI, Elon Musk will. Or the Chinese Communist Party will.4 And if that happens, our only futures are A) a machine god enslaved to the will of Elon Musk, B) a machine god enslaved to the will of the Chinese Communist Party, and C) an autonomous machine god that does whatever it feels like.

All three of those options sound bad. So despite their personal fears and reservations — and trust me, most of them do have a lot of personal fears and reservations about what they’re doing — they feel like they have no choice but to beat their less scrupulous competition to the finish line, in order to make sure that the machine-god-baby is raised with good values. I hear the term “Red Queen’s race” thrown around a lot in San Francisco these days. Few AI researchers would like to abandon the technology, but a lot would like to slow down or even pause its development, to give them more time to work on minimizing the dangers.

But that’s easier said than done. Examples of technologies slowing down from a small group of leading researchers refusing to push the tech forward are extremely rare — in fact, I can only really find one example in history (the gain-of-function research pause after bird flu in the early 2010s). But AI research is a huge enterprise, and a voluntary pause that was widespread enough to make a difference presents an utterly impossible coordination problem.

If a voluntary pause is out, that leaves regulation, either at the national level or by international agreement. Dario has publicly called for greater regulation of AI, and Anthropic has spent a bunch of money lobbying for greater government control. Even Elon Musk has called for an AI pause in the past. These calls are often dismissed as companies shilling for government protection for their incumbent positions, but I think their fears are sincere.

This is why I think “our product may kill you” is by far the less insane part of the pitch the AI labs are making. In fact, it’s more like “Our version of our product is less likely to kill you, and if you support our call for greater regulation, the danger can be minimized.” Some of the scientists who invented recombinant DNA definitely thought there was a chance it could wipe out humanity, as did many of the scientists who invented nuclear technology. They raised the alarm and pushed for responsible regulation.

Right now, the AI founders who are more worried about existential risk — for example, Dario and Elon — have pushed harder for a pause than the ones like Sam Altman who think the risk is lower. And even Altman is putting lots of OpenAI’s money toward a foundation dedicated to studying and preventing the risks of AI. That’s all reasonably rational, and it will probably play well with the public.

I still think this pitch could be greatly improved, though. Humans have an unfortunate tendency not to recognize risks before disasters actually happen — as an example, we didn’t treat fertilizer as a terrorism risk until Timothy McVeigh blew up a building with it, even though the chemistry of how to make a fertilizer bomb was widely known. Right now, everyone has seen Terminator and The Matrix, but no one thinks they’re real.

If the AI safety pitch is “superintelligence might kill us all”, we’re kind of screwed, because people won’t believe it until it happens, and then it’s too late. Instead, AI labs should focus their safety pitch on something regular people do believe in: terrorism. Talk about radicals using AI agents to vibe-code a super-Covid virus, and regular people’s ears might perk up, because that’s a danger that’s closer to things they’ve actually seen and experienced before.

But anyway, on to the second part of the AI pitch. This is the idea that AI is going to make humans economically obsolete. AI researchers and founders keep running around saying this, and I think it’s a huge own goal.

Our product will make you unable to feed your family


The economic consequences of the Iran war

2026-03-25 22:01:26

With the end of the post-WW2 global order, every great power is now effectively a rogue state. Russia is trying (and failing) to reestablish its old empire. China is menacing its neighbors and funding aggressive proxies around the globe. But for sheer wackiness and chaos, it’s hard to beat the United States under Donald Trump. First it was tariffs and threats to invade Greenland. Now the Iran War is causing a global energy crisis.

Militarily, the U.S. has pretty much had its way with Iran, destroying their missile launchers, killing their leadership, and achieving air supremacy with extremely few losses. But Iran has done the one thing that everyone — except, apparently, Donald Trump and his leadership team — had always expected them to do in a major war with the U.S. They have closed the Strait of Hormuz, through which about 20-25% of all global oil and liquefied natural gas flows. Iranian forces have attacked and damaged a large number of ships, causing ships to avoid the strait.

Chart by Wikideas1 via Wikimedia Commons

Demand for oil and LNG is inelastic. That means when you cut off some of the supply, price shoots way up. Here are oil prices:

It’s not clear whether this is actually the biggest oil disruption in history, but it’s up there. And unless Trump chickens out and calls off the war very soon, the disruption is likely to continue for some time.
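The inelastic-demand arithmetic above can be sketched with a back-of-the-envelope calculation. The elasticity figure is my own assumption (short-run oil demand elasticities are often estimated to be very small), not an estimate from this post:

```python
# With inelastic demand, %change in price ≈ %change in quantity / elasticity.
elasticity = -0.1   # assumed short-run price elasticity of oil demand
supply_cut = -0.05  # 5% of world supply taken offline

price_change = supply_cut / elasticity
print(f"price rises ~{price_change:.0%}")  # ~50%
```

Remove just 5% of supply and, with demand this inelastic, the market-clearing price jumps by about half — which is why even a partial strait closure moves prices so violently.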

Natural gas prices in Asia (which imports much of its gas as LNG) have also gone way up, due to the strait closure, and to Iran’s attacks on Qatar’s LNG infrastructure.

When oil prices go up, gasoline prices go up too. Gas in the U.S. (meaning gasoline, not natural gas) is now back to $4 a gallon, about as high as it’s ever been1 other than right after the start of the Ukraine war:

But Americans actually have it easy compared to much of the world, where fuel shortages are escalating. Asia, which gets most of the oil and gas that pass through the Strait of Hormuz, is being hardest hit:

Arguably, nowhere has felt it more than Asia: nearly 90% of the oil and gas passing through the strait is bound for Asian countries…Governments have ordered employees to work from home, cut the working week, declared national holidays and closed universities early in order to conserve their supplies…Even China - which is thought to have reserves equivalent to three months of imports - is making adjustments, limiting a fuel price hike as citizens are faced with a 20% jump in price.

In India, people are panicking over fuel shortages, and long fuel lines are springing up across the country. The Philippines has declared a national emergency, and is considering grounding flights. Australia is considering rationing fuel throughout the country.

On the geopolitical front, this seems unlikely to lead to much international goodwill toward the United States. Iran was not a friendly or peaceful regime by any means, but America attacked and decapitated it without immediate provocation, seemingly with no good long-term plan or exit strategy — and now other countries around the world are bearing the brunt of Trump’s mercurial violence. This all makes America seem like a dangerous loose cannon — a powerful country flailing around, applying its power whimsically and indiscriminately and leaving others to suffer the consequences.

But what will be the economic ramifications? Despite the drama, the damage is likely to be modest rather than catastrophic.

Economists have been studying the impact of energy shocks for a long time. As you can imagine, this was a big topic in the 1970s, when there were two big oil shocks — one related to the Yom Kippur War and the OPEC oil embargo in 1973, and the second after the Iranian revolution in 1979. Those shocks are often blamed for the 1970s “stagflation” — low growth, high unemployment, and high inflation.

In fact, after the recent post-pandemic inflation, Larry Summers posted a chart predicting a resurgence of inflation, based on little more than pattern-matching:

A lot of people laughed at this chart when it came out, but it would be darkly ironic if history actually ended up repeating itself due to another oil disruption from Iran.

I have my doubts that anything like this will happen, though. For one thing, some economists, like my econometrics teacher Lutz Kilian, vigorously dispute this narrative, and claim that the 70s inflation wasn’t caused by oil at all. But regardless of who’s right about the 1970s, it looks like modern economies are just more resilient to disruptions in oil supplies.

Blanchard and Gali (2007) looked at economic responses to changes in oil prices in the U.S.,2 and concluded that the economy of the 2000s was only about a third to half as sensitive to the price of oil as the economy of the 1970s had been. Their reasoning is that modern economies are more flexible in general, that they have better monetary policy (i.e. we don’t try to print a ton of money in response to a supply shock), and that we depend on oil less.

By their estimates, a 10% increase in the price of oil now (or at least, if “the 2000s” means “now”) leads to only a 0.25 percentage point increase in the CPI and a 0.3 percentage point reduction in GDP over the course of a year or so. Since oil just spiked by 50%, then if that’s sustained, we might expect to see inflation go up by 1.25 percentage points, and GDP go down by 1.5 percentage points over the next year. That would mean inflation would go to around 4% and GDP growth might go down to 1.5% — frustrating and annoying, but not catastrophic.
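Those numbers follow directly from scaling the Blanchard-Gali sensitivities cited above; here's the arithmetic written out:

```python
# Blanchard & Gali (2007) sensitivities, as cited above:
# per 10% oil price increase, +0.25pp CPI and -0.3pp GDP over about a year.
cpi_pp_per_10pct = 0.25
gdp_pp_per_10pct = -0.30
oil_spike_pct = 50   # the recent spike

extra_inflation_pp = cpi_pp_per_10pct * oil_spike_pct / 10  # +1.25 pp
gdp_hit_pp = gdp_pp_per_10pct * oil_spike_pct / 10          # -1.5 pp
print(extra_inflation_pp, gdp_hit_pp)
```

Layered on a baseline of roughly 2.75% inflation and 3% growth, that gets you to the ~4% inflation and ~1.5% growth figures above.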

Other estimates seem similarly modest. For example, in a recent roundup, I flagged a paper by Känzig and Raghavan (2025) that looked at the closure of key shipping chokepoints. Here was their chart showing the predicted response to a 10% increase in shipping costs caused by the closure of a key waterway:

Now note that shipping costs haven’t even increased since the start of the Iran war. That implies relatively little shock from shipping disruption. Inflation expectations have risen only a tiny bit in survey measures, and market measures so far show no expected increase in inflation over the next one or two years.

So if you were worried that the Iran war was going to collapse the economy, I think you can relax. 4% inflation and growth cut in half for a year are no fun, but they’re not a calamity either.

That said, there are several reasons to worry a bit. First of all, 4% inflation and growth cut in half would be another self-inflicted wound, after the madness of tariffs. Americans are already in an incredibly bad mood about the economy. Consumer sentiment is absolutely in the dumps:

And people say it’s a bad time to find a job:

Source: Gallup

The negative trend began in the Biden years, and voters definitely blamed Biden at the time. But now they’re blaming Trump, and it’s clear that the Iran war made approval of Trump’s economic policy fall off a cliff:

Source: Reuters

Even if some of this is AI-related, it seems clear that voters don’t like Trump piling self-inflicted wounds on top of all the underlying risks.

Americans also expect gasoline prices to stay high over the next few years. Political scientists have found that the American public tends to be especially sensitive to gasoline prices, even above and beyond general inflation. So more expensive gas could absolutely cause a further souring of the mood in this country.

(Note, by the way, that “Trump’s approval rating goes down” is not bad in and of itself — in fact, I’m glad more people are finally waking up to what a horrible leader Trump is. But I do not want Americans to feel sad and angry and afraid about their economy. It’s not worth wishing for bad things to happen just so there will be a backlash against politicians I don’t like.)

Also, it’s worth remembering that the U.S. isn’t the only country that matters. Kilian and Zhou (2023) find that Europe and the UK tend to experience much more of a bump in inflation from oil price shocks than the U.S. or Japan. And there are plenty of papers that find a strong link between global energy prices and local food prices in poor countries like Pakistan, Uganda, and others.

In other words, even if the U.S. escapes relatively unscathed from its ill-planned war of choice in Iran, its allies and vulnerable poor people around the world may feel a lot more pain. Not that the Trump administration has shown much inclination to care about allies or the global poor, of course.

That will simply reinforce the notion of America as a force for chaos — a bully who jumps in, smashes things up, and leaves others to deal with the consequences. It will be very hard to shake that reputation, even after Trump is out of office. Meanwhile, Americans themselves are getting angrier and angrier, even if the actual harms they’re suffering are more mild.

So the Iran war will not be a catastrophe, but it’s still bad news for the economy. And that pain is unlikely to come with any geostrategic gains, either — Trump is probably not going to be able to destroy Iran’s nuclear program with airstrikes, and the Iranian regime doesn’t seem in danger of collapsing. So it’s worth asking why we’re doing this war at all. The answers won’t be flattering for Trump, and they won’t be pleasant for his fans to hear.


1. If we measure gasoline prices relative to incomes, it won't be as high as in the early 2010s, because incomes have gone up since then. But it's still a big spike!

2. This requires the assumption that oil prices move due to supply-related factors — rather than to changes in economic conditions, which would give rise to reverse causality.