2025-05-20 04:55:41
“I set no value on objects strange or ingenious” — The Qianlong Emperor
Ten years ago, when I was still writing freelance articles in my spare time, I wrote a post for The Week in which I mused about America becoming like China’s Ming Dynasty — powerful but insular, rich but stagnant, arrogantly disdainful of science and technology, and ignorant of progress being made in the world outside:
Ming China was by far the greatest nation on the planet for most of the 15th and 16th centuries…But with the hindsight of history, the Ming doesn't look so awesome. While China was basking in seemingly timeless stability, Europe was seething with new ideas and technological progress. Even as the Chinese government banned oceanic shipping and heavily restricted foreign trade, European countries were discovering the New World and building trading empires…Another likely reason for the Ming's decline was disrespect of science…[T]he Ming education system de-emphasized science and technical studies, and instead forced aspiring bureaucrats to learn "Confucianist" philosophy…
Why did the Ming allow itself to become isolationist, stagnant, and backward-looking? Historians are divided, but the leading explanation is what…Mark Elvin calls the "high-level equilibrium trap." Simply put, when a country thinks it's in a golden age, it stops focusing on progress…
America shows signs of falling into this trap…We gape and gawk when we first travel to Japan or Switzerland and find that all the trains run perfectly on time — not to mention the fact that there are trains in the first place. We ignore our sky-high infrastructure costs…never pausing to wonder why West Europe and East Asia don't have these problems…America had an extraordinary run of success in the 20th century…But other countries have been racing to catch up with us, and in some ways they have already succeeded.
At the time, this post was not particularly well-received. Experts on Chinese history scoffed at the analogy and told me to stay in my lane. The broader public simply yawned, seemingly secure in the belief that America would continue to be the world’s leading nation, as it had been for their entire lives.
I agree that sweeping historical analogies are overwrought and rarely useful, especially when they draw parallels between modern times and the agricultural age. To be honest, at the time, I wanted the post to be more of a warning than a prediction. I didn’t think the U.S. was well on its way to becoming the Ming dynasty, but I saw a few troubling signs of complacency and insularity, and I wanted Americans to be more proactive about fixing our country’s problems and pushing progress forward. We have a long and hallowed tradition of using declinist histrionics as a form of self-motivation.
These days, however, I often find myself thinking about that post, and it feels like the Ming analogy fits a bit better than it did in 2015. I thought about it again when I saw the results of a recent Ipsos poll, which asked countries around the world about their attitudes toward AI. Of all the countries surveyed, Americans were among the most negative toward the technology, and Chinese people were among the most positive:
This survey closely parallels my own experience. When I hear other Americans talk about AI, it’s usually disapprovingly — many seem to think of AI as primarily something that threatens their jobs while producing little that they need or want. Over on X, I decided to see what would happen if I expressed bland, anodyne, positive sentiments about AI:
Here were some of the representative responses that I got:
It’s not just AI, though. Look at the two countries’ different approaches to nuclear power. Over the past decade, China has nearly tripled its nuclear capacity, while America’s has declined:
And this divergence is accelerating. China just approved 10 new nuclear plants, which will catapult it past the U.S. and France to become the world’s top producer of nuclear energy. Meanwhile, in the U.S., the Republicans in the House of Representatives are poised to wreck the domestic nuclear industry. Thomas Hochman and Pavan Venkatakrishnan write:
Earlier this week, two House committees released their sections of the [budget] bill, which includes cuts to key energy loan and tax credit programs. If enacted, the provisions would constitute the biggest setback to U.S. energy security in a generation — and nuclear energy would be hardest hit…Nuclear projects face massive up-front capital costs…As a result, private lenders either charge prohibitive interest rates…This is why every commercial nuclear reactor to enter service since the turn of the century has relied on…the Department of Energy’s Loan Programs Office — save one, which was built with similar federal support via the Tennessee Valley Authority.
The Trump White House has previously recognized the importance of these loans…But the draft reconciliation language would wipe that progress away. By limiting both funding for administrative costs and virtually all unobligated credit subsidies…the bill could leave the office unable to pay staff or originate most new deals…Tax credits are on the chopping block, too…[the GOP bill] would also terminate transferability, the mechanism that lets a project developer sell its energy tax credits…Its elimination would kneecap next-generation projects just as they’re getting started.
I don’t think nuclear will be our most important energy technology over the next half century, but this is still a very bad sign.
Why are Americans so much more negative about AI and nuclear power than Chinese people? We’re always tempted to blame partisan ideology, and I admit that sometimes it is the root of the problem (as with mRNA vaccines). In general, there’s a recent pattern where Democrats fear the software industry while Republicans are apprehensive about new physical technologies.
But in these cases, I just don’t think that explanation fits. Nuclear power is traditionally thought of as being Republican-coded, and yet this time it’s Republicans who are now voting to gut it. And fear of AI is very bipartisan:
The most likely reason, I think, comes from the two countries’ recent histories. We know that people’s life experiences deeply shape their macroeconomic expectations; why should attitudes toward technology and progress be any different?
The U.S. has been a slow-growing country at the technological frontier for as long as almost all of us have been alive. If your country generally grows at 2%, you can expect to see your living standards quadruple over your lifetime. That’s much better than nothing, but it means that in the shorter term — over a five-year or ten-year period — your economic fortunes will be primarily determined by random shocks, not by the slow and steady march of technological improvement. A spell of unemployment, a medical bankruptcy, a decline in the price of your house, the loss of a government contract or a big customer — any of these could wipe out many years of slow improvement in living standards.
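That arithmetic is easy to verify. Here’s a minimal sketch of the compound-growth math, assuming a roughly 70-year adult horizon (my number, chosen purely for illustration):

```python
# How long does 2% annual growth take to quadruple living standards?
growth = 0.02
level, years = 1.0, 0
while level < 4.0:
    level *= 1 + growth
    years += 1
print(years)  # 70 -- roughly one lifetime

# For contrast: at the ~8% annual growth China once sustained,
# quadrupling takes only about 18 years (log 4 / log 1.08 is about 18).
```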
In other words, to most Americans, risks loom larger than opportunities. If everything stays the same, then they’ll continue to be wealthy and comfortable; if something changes, they might not. In an environment like that, it makes sense to be afraid of change, because change means risk.
For most of Americans’ lives, technological progress has been a major source of risk. The advent of the internet put encyclopedia salesmen and term life insurance salesmen out of a job. Hybrid cars from Japan put competitive pressure on traditional carmakers. Flip-phone makers were wiped out by smartphones. Electronic trading made many human “specialists” obsolete. And so on and so forth, throughout the economy. At the aggregate level, these innovations drove growth in living standards, but at an individual level, having the technology in your industry change was generally a source of peril.
Someone who grew up in modern China has experienced something utterly different. Over the course of their lifetime, rapid technological progress has radically transformed their lives and the lives of the people around them, allowing them to experience a level of comfort and security utterly undreamt of by their grandparents.
Meanwhile, the risks from new technology were pretty low. In a fast-growing economy, if your job gets automated, you can often just go get a better one. If your industry gets destroyed by competition from a new technology, you can often just go work for the winners, since businesses everywhere are expanding so quickly.
You can see this pattern in all sorts of polls. A decade ago, Pew did a survey and found that fast-growing countries tend to be much more optimistic about the future than slow-growing ones:
China’s once-torrid growth has now slowed, of course; it has caught up to the advanced economies technologically, and has less room to grow. In 30 years, as the young generation grows up in a world of slower growth,1 Chinese attitudes toward things like AI — or whatever the next cutting-edge thing is — may converge toward those of their rich-world peers.2 But 30 years is a long time, by modern standards — long enough to determine whether the U.S. or China will be the world’s premier civilization.
The example technologies I cited above — AI and nuclear — aren’t actually the best illustration of the divergence between a pessimistic, backward-looking America and an optimistic, forward-looking China. America invented AI, and has most of the world’s premier AI companies and researchers. Despite popular anxieties, both the government and large companies recognize the crucial importance of the technology, and are investing heavily to stay at the cutting edge. Meanwhile, the U.S. still has more nuclear power per capita than China, and that will continue to be true for a while. If we were just talking about AI and nuclear, then this post would be more of a warning than an alarm — much like my post about the Ming dynasty back in 2015.
But we’re not just talking about AI and nuclear. In fact, there’s a crucial set of technologies that has recently experienced revolutionary breakthroughs, and is poised to reorder global power and wealth over the next few decades. And when it comes to this set of technologies, the U.S. is moving backwards, while China has seized global leadership.
I’m talking, of course, about electrical technology.
Over the past few years, I’ve come to understand that the world is undergoing a profound technological shift — the kind of thing that only happens once a century, and perhaps only a few times in all of human history. I tried to explain it in this post:
During the 19th century, the Industrial Revolution was powered by combustion technology — you burned coal to boil water into steam, and steam turned a series of gears that powered trains, ships, and factories. In the 20th century, internal combustion gave us cars, planes, and rockets. But in the 20th century, we also harnessed an alternative method of generating power — electricity. When we needed fine control instead of raw oomph — for example, to power the parallel workstations of a modern factory, or to power a delicate device like a TV or a computer — we pushed electrons through wires instead of creating a controlled explosion.
Then, at the close of the 20th century and the beginning of the 21st, there were three key technological breakthroughs that changed the balance between combustion and electricity. These were:
Rare-earth permanent magnets
Gallium nitride and silicon carbide transistors
Lithium-ion batteries
In my experience, very few Americans know that the first and second of these breakthroughs even happened — or even what these technologies are. That in itself is shameful, and represents a failure of our education system. In brief:
Rare-earth permanent magnets can sustain a much stronger magnetic field than traditional magnets, like the magnetic toys you played with as a kid or the little magnets you stick on your refrigerator. You can use these super-powerful magnets to create electric motors with very high torque, to power cars, drones, and other machines that could previously only be powered by combustion engines. These were developed in the 1980s and 1990s:
GaN and SiC transistors are able to operate at much higher levels of current and voltage than the transistors that run your computer or your phone. This allows them to work in things like cars and drones that operate at very high power:
They were developed in the 2000s and 2010s.
Lithium-ion batteries were developed in the late 20th century, and a series of incremental innovations led to a huge increase in the amount of energy they can hold, and a huge decrease in their manufacturing cost:
Together, these three inventions have made electricity better than combustion for a wide range of physical technologies. They’ve made electric cars better than combustion cars along almost every possible dimension — and even the few drawbacks that EVs still have, like slow charging, are being solved pretty easily by a series of more modest inventions. And they’ve made battery-powered drones the dominant weapon on the modern battlefield.
Increasingly, it’s electricity, not combustion, that rules the land and the sky.
Whichever country dominates electrical technology will therefore rule the land and the sky in the 21st century. AI is amazing, and yet without drones for it to command, it will not be able to win modern wars. And no matter how entertaining our online lives are, humans will still need to move themselves and their possessions around in physical space, so the auto industry will continue to be incredibly lucrative and important.
Currently, America is losing this race, in dramatic and catastrophic fashion. I wrote about this back in December:
While the U.S. lags behind Europe and the world overall in terms of the percent of its energy it gets from electricity, China has surged ahead of the field:
Meanwhile, China is now dominating the global auto industry because it’s embracing electric cars more than other countries are:
China also dominates the manufacturing of batteries, drones, and electric motors.
So while America is remaining competitive (for now) in software, and still has a lead in the old combustion technologies of the 20th century, China is absolutely trouncing us in the race for the new electrical technologies that will define physical power this century.
Given this dire situation, you’d think America would be racing to catch up. Instead, we’re intentionally forfeiting the race, destroying our nascent electrical capabilities as fast as we can:
President Donald Trump's 'Big, Beautiful Bill' spans 1,116 pages. Some of those pages include serious cuts to clean energy incentives in a number of sectors such as transportation…Section 112002 on page 30 of the One Big, Beautiful Bill document is titled "Termination of clean vehicle credit." The credit was originally set to expire December 31, 2032. A provision in the bill "accelerates the expiration to December 31, 2025." Americans would lose the ability to claim the EV tax credit in 2026…The bill also targets EV and hybrid vehicle owners…Electric vehicle owners could be charged $250 annually and hybrid owners could be charged $100 annually if the bill is passed…The President has been taking aim at EVs for years…The passage of the bill as is would mean electric vehicles are about to get much more expensive.
And:
House Republicans also propose to kill a loan program that supports the manufacture of certain advanced technology vehicles. It would rescind any unobligated funding and rescind corporate average fuel economy standards and greenhouse gas emission rules for 2027 and beyond. That portion will be taken up by the Energy and Commerce Committee.
Among outstanding loans finalized in President Joe Biden’s last weeks in office are $9.63 billion to a joint venture of Ford Motor and South Korean battery maker SK On for construction of three battery manufacturing plants in Tennessee and Kentucky; $7.54 billion to a joint venture of Chrysler-parent Stellantis and Samsung SDI for two EV lithium-ion battery plants in Indiana; and $6.57 billion to Rivian for a plant in Georgia to begin building smaller, less expensive EVs in 2028.
This is intentionally forfeiting the technological future to China. The auto industry is lucrative and important, but even that pales in comparison to the importance of drones. If you can’t build batteries and electric motors in the U.S., you can’t build drones here, either. And if you can’t build drones, you can’t win a modern war.
Defenders of Trump’s move to kill battery subsidies will ask: If batteries are the best technology, why do they need subsidies? The answer is so simple and so obvious that anyone who even asks the question should feel a deep sense of intellectual shame: Markets do not provide national defense. The battery and motor manufacturing capacity America needs in order to defend itself against China will simply not get built without subsidies, since China is heavily subsidizing its own manufacturing capabilities in this area. And as military ground vehicles go electric, the same logic that applies to drones will apply to them as well.
Would you want China to make all of the world’s plutonium? No? Would you want China to make all of the world’s jet engines and rockets? No? Then you’re a fool to want to let China make all of the world’s batteries and electric motors.
I don’t want to get too partisan here. After all, it was NIMBY progressives who blocked many of the Biden administration’s attempts to build solar power and transmission lines, and it was progressive contracting requirements that stymied Biden’s attempt to build a network of EV chargers. Some progressives even hate electric cars, because they hate cars in general. This is all very foolish.
Among Republicans, however, and especially among followers of Trump, hatred of electrical technology has been raised almost to the level of a cultish religion. This is partly downstream of a tactical error by progressives, who sold their own base on the electric future by painting it as a climate policy instead of as a policy for promoting economic growth and national security. But that shouldn’t let conservatives off the hook here. At the end of the day, Democrats tried (inefficiently) to give electric technology a boost, and Republicans are the ones killing that effort.
Try talking to a conservative about the importance of EVs, and you’ll typically be confronted with a confused eruption of outdated arguments about why the technology will never work. “EVs don’t have good range!” (False.) “EVs are slow to charge!” (False.) “Batteries can’t be recycled!” (False.) And so on, ad infinitum. It’s like hearing someone in 1910 tell you that cars will never replace horses, even as that replacement was well underway.
You simply cannot will a technological shift out of existence, or make the world go back to the way it used to be, no matter how big and powerful your country is. In 1793, a British mission to China (which was then ruled by the even more stagnant and isolationist Qing dynasty that succeeded the Ming) offered the emperor various technological marvels from the West. The emperor was unimpressed, and expressed utter disinterest in the wonders he was being shown, uttering the famous line: “As your Ambassador can see for himself, we possess all things. I set no value on objects strange or ingenious, and have no use for your country’s manufactures.”
Fast forward a couple of centuries, and the shoe is on the other foot. It’s America — the leading civilization of the West — that is turning up its nose at new inventions that are remaking the map of global power. We really are becoming a technophobic civilization, looking inward to our petty domestic conflicts and looking backward to our great historical achievements, even as a rival civilization embraces the future. My Ming dynasty analogy was more appropriate than I realized at the time.
In fact, the rule I’ve postulated — that people who grow up with faster growth are more optimistic — is far too simple. When growth slows down during a person’s lifetime, having grown up under rapid growth tends to make them structurally pessimistic, since it raises their expectations to unsustainable levels. Matsumoto et al. (2024) document this effect. This may happen with the current generation of young Chinese people.
This is actually why the Ming dynasty analogy is more rhetorical than real. In the industrial age, things like technology, economic growth rates, and demographics change much more quickly than they did back in the agricultural age. It took only two or three decades for Japanese people to go from optimism to pessimism about their country’s future, and China might end up being the same. Maybe as every country ages into senescence and catch-up growth runs its course all over the world, we’ll become a global Ming dynasty. Or perhaps AI will supercharge growth rates and we’ll figure out a solution to the low fertility problem, and optimism will be restored. Either way, the long slow decline of an agricultural society like the Ming seems less likely in the modern unstable age.
2025-05-18 07:40:12
A week ago I wrote a post arguing that globalization didn’t hollow out the American middle class (as many people believe):
After I wrote the post, John Lettieri of the Economic Innovation Group wrote a great thread that strongly supports my argument. He showed that the timing of America’s wage stagnation — roughly, 1973 through 1994 — just didn’t line up well with the era of globalization that began with NAFTA in 1994. In fact, American wages started growing again right after NAFTA was passed. Check it out!
In fact, wage growth since NAFTA has been almost as strong as in the decades after World War 2!
Now, I think this might be too simple of a story. Although there was a lot of noise and political hand-wringing over NAFTA, most Americans probably don’t think it was competition from Mexico that hollowed out the U.S. middle class — they think it was China. And while economists think NAFTA hurt some specific manufacturing industries in a few specific places, they generally conclude that it helped most Americans; it’s the China Shock, after China’s entry into the WTO in 2001, that many economists think was overall harmful to the working class.
And if you add the China Shock to Lettieri’s timeline, you see that by some measures — but not by others — there’s a second, shorter era of wage stagnation that lines up with it pretty well. I’ve modified Lettieri’s charts to show the China Shock:
You can see that median wages flatten out between 2003 and 2015, while average hourly earnings of production and nonsupervisory workers continue to rise. Obviously the Great Recession is the biggest factor after 2007 (and many economists believe the China Shock only lasted through 2007). But there’s a good argument that Chinese competition did hold American wages down for a few years in the 2000s.
And in case you were wondering, here’s the breakdown for men and women:
And Lettieri has more charts showing that the story looks the same for the working class as it does for the middle class.
So I think the story is more nuanced than Lettieri makes it out to be. The surge in middle-class and working-class wages in the late 1990s might have come in spite of some small headwinds from NAFTA, and the China Shock might have exerted a drag on American wages during the 2000s. But the much bigger story that these charts tell is that the biggest wage stagnation in modern American history came before the era of globalization — roughly from 1973 through 1994.
What was the cause of that epic stagnation? In macroeconomics, it’s very hard to isolate cause and effect, since there are so many things going on at the same time. The decades between 1973 and 1994 featured two oil shocks, major inflation, two big changes in the global monetary regime, multiple major recessions, changes in trade deficits and imports, and plenty more. So much was going on that it’s possible that the wage stagnation was just a series of negative shocks that lasted for a long time — “just one damn thing after another”, as the saying goes.
But as a first pass, we can look at some of the theories of why that stagnation happened, and see if they match up with the timeline.
Part of the stagnation in wages was due to rising inequality. If we look at average versus median hourly compensation (which includes benefits like health insurance and retirement matching contributions), we see that the average stagnated less than the median:
But you can still clearly see that from the early 1970s through the mid 1990s, the average value stagnated as well. This suggests that there was something systemic going on — it wasn’t just the middle class getting hit.
Part of that “something” was a productivity stagnation. If you look at average hourly compensation versus average labor productivity (output per hour worked), you see a modest divergence, but the productivity slowdown from the early 1970s until the mid 1990s is clearly visible, and it exactly lines up with the stagnation in wages:
Nobody knows exactly why productivity slowed down for two decades, but in my opinion the leading candidate explanation is that the oil shock of 1973 inaugurated an era of energy scarcity that forced industrial economies to shift away from energy-intensive growth.
Is it also possible that the same underlying shifts that made productivity slow down during those two decades also caused inequality to rise, and labor’s share of income to fall from 63% to 61% over the exact same period? It seems plausible, because the timing lines up so perfectly. But I don’t know of a good theory as to how a technological shift could cause all of these things at once.
One common theory is that in the 1970s and 1980s, American industrial policy — including trade policy — stopped favoring manufacturing and started favoring the financial sector. This is, for example, the thesis of Judith Stein’s Pivotal Decade: How the United States Traded Factories for Finance in the Seventies. But if you look at the growth of the finance industry as a share of the U.S. economy, it’s a more or less unbroken rise from the end of WW2 through the turn of the century:
And if you look at financial profits, these actually fell as a share of the total in the 1970s before surging in the 1980s and again in the late 90s and early 00s:
The timing here doesn’t really line up. There’s no clear measure of financialization that coincides specifically with the early 1970s through the mid 1990s. The explosion of finance profits in the 1980s might explain part of the wage stagnation, if it came via financiers putting pressure on companies to suppress wages. But that can’t explain the wage stagnation in the 1970s, nor the re-acceleration in the late 90s and early 00s (when financial profits exploded but wages did well).
A lot of research suggests that unions drive down economic inequality (though researchers disagree on exactly how big the effect is). Farber et al. (2021) write:
U.S. income inequality has varied inversely with union density over the past hundred years…We develop a new source of microdata on union membership dating back to 1936, survey data primarily from Gallup (N ≈ 980,000), to examine the long-run relationship between unions and inequality…Using distributional decompositions, time-series regressions, state-year regressions, as well as a new instrumental-variable strategy based on the 1935 legalization of unions and the World War II-era War Labor Board, we find consistent evidence that unions reduce inequality, explaining a significant share of the dramatic fall in inequality between the mid-1930s and late 1940s.
Here’s a picture of that relationship:
As we saw above, wage inequality — the divergence between average and median compensation — was responsible for part of the stagnation in middle-class wages, though not all of it.
But the timing doesn’t seem to fit here either. As you can see from that chart, unions have been in decline since the mid-1950s. The decline was a bit faster in the 1980s, which might slightly help explain wage stagnation in that decade. But overall it’s been pretty smooth. That doesn’t match up with the 20-year wage stagnation that started in the early 70s and ended in the mid 90s.
The chart of real wages for production and nonsupervisory workers shows a dramatic slowdown from around 1973-1994. But a chart of nominal wages for those same workers — i.e. the actual number of dollars they earned per hour — shows no such slowdown, except maybe a very gentle flattening in the 1980s:
The difference, of course, is inflation. From around 1973 to 1983, prices increased at rapid rates:
The smoothness of nominal wage growth raises the possibility that nominal wage growth is very sticky — that workers are able to negotiate about the same number of additional dollars from year to year, despite big changes in the purchasing power of a dollar.
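Here’s a toy illustration of that stickiness, with made-up numbers (a fixed nominal raise every year, and 1970s-style inflation rates that I’m assuming purely for illustration):

```python
# A worker negotiates the same nominal raise each year, while
# inflation erodes the purchasing power of each dollar.
wage = 4.00            # illustrative starting hourly wage
raise_per_year = 0.30  # the "same number of additional dollars"
prices = 1.0
for year, inflation in [(1974, 0.11), (1975, 0.09), (1976, 0.06)]:
    wage += raise_per_year
    prices *= 1 + inflation
    print(year, f"nominal ${wage:.2f}", f"real ${wage / prices:.2f}")
# Nominal wages climb smoothly ($4.30, $4.60, $4.90) while
# real wages stall or fall ($3.87, $3.80, $3.82).
```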
Again, the timing doesn’t line up with the full era of wage stagnation (inflation had subsided by 1983), but I suppose it might explain the first half of it.
Finally, we’re back to trade and globalization. Certainly, Americans worried a lot about competition from European and Japanese companies, especially in the early 1980s. The Japanese and European auto and machine tool industries really did put American companies under intense competitive pressure starting in the 1970s. But it’s very hard to see this effect in the aggregate statistics. Import penetration rose in the 1970s, but flatlined in the 1980s and early 1990s:
As for the trade deficit, that was zero in the 1970s and then saw a sizable but temporary surge in the 1980s:
Some people I talk to seem to think that wage stagnation began as a result of the abolition of the Bretton Woods currency system in 1971-73. But that change, which ended the dollar’s fixed convertibility to gold and the system of pegged exchange rates built around it, caused the U.S. dollar to depreciate, which made U.S. exports more competitive and actually discouraged imports. The dollar then surged again in the early 80s and collapsed in the late 80s after the Plaza Accord (an agreement to weaken the dollar):
And the Japanese yen strengthened more or less steadily against the dollar during the entire period of wage stagnation.
So trade with Europe and Japan just doesn’t line up with the wage stagnation in terms of timing, either. If you think overall import penetration is the key measure of globalization, then maybe trade had an effect in the 1970s; if you think trade deficits are a better measure, then maybe trade had an effect in the 1990s. But then the trade deficit and imports both surged in the late 1990s, which is when the wage stagnation ended.
In any case, we’re left with a bit of a mystery. The only macro trend that lines up very neatly with the great wage stagnation of 1973-1994 is the productivity slowdown, but there’s no good theory of how it could account for all of the wage stagnation, since productivity rose more than wages. Meanwhile, de-unionization, financialization, inflation, and trade with Europe and Japan can at best explain only some sub-periods of the wage stagnation — not the whole thing.
In fact, the great wage stagnation might have stemmed from a patchwork of causes — first inflation and a surge of imports in the 70s, then accelerated de-unionization and financialization and the collapse of exports in the 1980s, with the productivity stagnation playing a corrosive role the whole time. But we should always be suspicious of complex, multi-factorial explanations for trend breaks on a chart. That wage stagnation started and ended suddenly enough that it cries out for a simple story. We just don’t have one yet.
Update: Some people have been asking me if the wage stagnation of 1973-1994 might have been caused by the mass entry of women into the U.S. workforce. Here’s the employment rate (also called the “employment-population ratio”) for American women:
You can see that the first part of the timing doesn’t line up here. When the wage stagnation began, American women had already been entering the workforce at a steady clip for 25 years. (The labor force participation rate for women looks much the same). Also, empirical evidence suggests at most a small effect of female labor supply on male wages — and if you look at the breakdown for men and women, you see that the stagnation for men was worse than for women over 1973-1994.
And theoretically speaking, women’s mass entry into the workforce shouldn’t produce an overall decline in wages. Just like immigration or a baby boom, women’s entry into the workforce is both a positive labor supply shock and also a positive labor demand shock at the same time — when women earn more, they spend most of what they earn, on things that require labor to produce.1 So we shouldn’t expect the addition of women to the workforce to hold down wages.
Thus, this theory also doesn’t line up with the timing of the stagnation, and it’s not clear why we would expect it to be a major factor in the first place.
And they save the rest, which in normal times should drive down interest rates and make it cheaper for companies to invest.
2025-05-15 09:58:33
Back in 2022, the United States was showing surprising signs of strength, after a lot of people (myself included) had lamented its apparent decline. Against all odds, the Russian invasion of Ukraine had been partially halted and rolled back, with U.S. military aid providing a crucial and timely lifeline (U.S. intelligence also managed to correctly predict the invasion well in advance). The U.S. had come out of the pandemic looking a lot more competent than people had initially thought, with a generous and effective financial relief program and a world-beating vaccine development effort that the Chinese were unable to match. Inflation was high, but the U.S. labor market recovered quickly from the pandemic (and the following year, inflation began to fall as well). ISIS was a memory, defeated at low cost by U.S. intervention. Meanwhile, the transatlantic alliance was stronger than ever, and China’s economy was in the dumps after a real estate bust.
Once again, America seemed to have beaten the odds and resisted decline. Then two years later, in a spasm of anger over immigration and inflation and woke culture, Americans ignored this amazing run of success, threw out the Democrats, and brought back Donald J. Trump.
As of 2025, I’m feeling distinctly less optimistic than I was in 2022.
First, the good news. President Donald J. Trump has put a 90-day pause on most of his tariffs on China. Although targeted tariffs on China would have been helpful in securing America’s strategic industries, blanket tariffs just hurt U.S. manufacturing while threatening to cause a recession. So compared to the alternative, the pause is a good thing. Stock markets are certainly taking the news well:
A lot of investors seem to have been convinced — rightly or wrongly — that Trump’s tariffs are almost all talk and bluster, and that when push comes to shove he’ll back down before real harm occurs.
And Trump’s approval rating, while still a lot lower than when he took office, has rebounded a bit from its nadir:
OK, now for the bad news. Trump’s latest tariff pause is only a partial and temporary remedy for an economic problem he himself created out of thin air. It’s temporary because in 90 days we’ll be right back where we were before, and Trump will have to choose between pressing ahead with tariffs or doing another walkback. It’s partial because even after this walkback, U.S. tariffs are still higher than they’ve been in living memory:
But in fact there’s more bad news. Trump’s seemingly never-ending series of rapid-fire tariff policy changes — by one recent count, there have been over 50 now — is still creating uncertainty. Trade policy uncertainty has gone up and down, but is still far higher than at normal times:
But beyond simple uncertainty, Trump’s willingness to announce big tariffs and then concede on them without major concessions from trading partners makes both him and the United States as a whole seem deeply unserious — an object of international ridicule. Here’s the WSJ on China’s reaction:
On Chinese social media, opinion leaders portrayed the tariff truce as a resounding victory for Xi…“China fought a very beautiful ‘counterattack in self-defense,’” said Ren Yi, a commentator who goes by the pen name “Chairman Rabbit,” in an online post. Beijing showed the world that Trump is irrational and America is a paper tiger, while China offers stability and certainty, he wrote…Xi has made a show of defying U.S. pressure and leaned into his self-styled image as a staunch steward of Chinese sovereignty.
Here in the U.S., Trump’s faithful react with adulation and praise to every daily reversal and walkback. But internationally, Trump just looks erratic and weak. Obviously, it was better for Trump to back down than to stay the course. But the fact that he made threats he couldn’t back up in the first place leaves America looking like a deeply unserious country; Trump’s move is being widely mocked throughout China.
And in fact, this is far from the only arena in which America is looking less like a colossus and more like a clown show these days. Other examples include:
America’s continued military and industrial weakness
Trump’s $400 million gift from Qatar and other likely cases of corruption
The U.S. government’s turn against vaccines
Chaos at the FAA
The pointless destruction of U.S. scientific and technical capacity
The pointless degradation of the transatlantic alliance
Incompetence at the Defense Department
Unfortunately, this meme pretty much sums up how U.S. tariff policy looks to the rest of the world:
2025-05-14 18:29:05
In recent years, I’ve read a bunch of people talk about a stagnation in American pop culture. I doubt that this sort of complaint is particularly new. For decades in the mid-20th century, Dwight Macdonald railed against mass culture, which he viewed as polluting and absorbing high culture. In 1980, Pauline Kael wrote an essay in The New Yorker entitled “Why Are Movies So Bad? or, The Numbers”, where she argued that the capitalistic incentives of movie studios were causing them to turn out derivative slop.1
So if I try to answer the question “Why has American pop culture stagnated?”, there’s always the danger that I’ll be coming up with an explanation for a problem that doesn’t actually exist — that this is just one of those things that someone is always saying, much like “Kids these days don’t respect their parents anymore” and “Scientists have discovered everything there is to discover.” To make matters worse, there’s no objective definition of cultural stagnation in the first place; it’s a fun topic precisely because what feels new and interesting is purely a matter of opinion.
With that said, I do think there’s some evidence that many forms of U.S. pop culture — music, movies, video games, books — are stagnating, at least as far as mass consumption is concerned. For example, back in 2022, Adam Mastroianni had a good post where he showed that an increasing percent of what Americans consume comes from franchises, sequels, remakes, and established creators. Here’s his chart for movies:
And Ted Gioia, who is probably the most well-known proponent of the “cultural stagnation” thesis, has some more evidence:
I’ve written repeatedly about music fans choosing old songs instead of new ones. But this trend has gotten more extreme since I first covered it. According to the latest figures, only 27% of tracks streamed are new or recent…The $15 billion market for comic books is driven by the same brand franchises that were dominant in the 1960s and 1970s…The top grossing shows on Broadway in 2023 are also retreads from the last century. The Phantom of the Opera and The Lion King boast the highest weekly gross revenues this year…83% of Hollywood revenues now come from franchise films featuring familiar characters from the past.
For what it’s worth, most Americans share this sense of declinism when it comes to movies, music, and TV, telling pollsters that these things peaked somewhere between the 1970s and the 2000s. Maybe this is because most of the poll respondents are middle-aged people nostalgic for their youth. But as Gioia points out, young people are listening to music from their parents’ generation nowadays; that’s not simple nostalgia.
Personally speaking, I’ve felt this stagnation in a number of areas. Movies, for example, don’t feel nearly as interesting or as vital an art form as they did when I was younger, despite the fact that you can now shoot an indie film on your phone. A decent amount of good music is coming out, but a lot of the best stuff feels like a refinement of what came before.
To name just one small example, young people have once again become fans of shoegaze, a micro-genre of dreamy, layered rock music that I enjoyed back when I was young in the 2000s. I love this revival. Here are two examples of recent shoegaze songs that I thought were absolutely excellent:
To be honest, I like these songs just as much as my old favorites by My Bloody Valentine, Tokyo Shoegazer, Oeil, etc. But they’re recognizably the same thing. I guess maybe it’s natural for a middle-aged guy like me to be a fan of things that sound like the music he loved in his youth, but what’s really striking is that the kids are into this too.
And nostalgia can’t explain why TV seems to me like it’s been in a golden age over the past decade. I loved Star Trek: Deep Space Nine and Seinfeld when I was a kid, but there was just nothing like Game of Thrones, or Andor, or One Piece. The Karate Kid movies were great2, but they don’t compare to the TV show Cobra Kai. Even in middle age, I’m perfectly capable of recognizing novelty and improvement in pop culture. It’s just that most types of pop culture don’t seem like they’re innovating and pushing the boundaries like TV is.
So anyway, let’s assume for the moment that the pop culture stagnation is real, at least across many domains. Why would that be happening?
A lot of people who write about culture seem to see it as something autonomous — either a grassroots upwelling that just sort of comes about on its own, or something imposed top-down from the people in power. The implication of this tacit assumption is that if a bunch of bloggers and critics and tastemakers get together and yell enough, culture will simply change. Perhaps if we call modern American pop culture “stagnant” enough, artists will be shamed into making something new.
I just have trouble seeing the world like that. My instinct is always to trace changes in culture to changes in economics — and, ultimately, to changes in technology. Technology maps out the space of the possible — together with nature itself, technology sets the boundaries for what human beings can do. That possibility space is then filled by human initiative and institutions, until they bump up against the walls.
Despite his famous line that “The culture always changes first,” Ted Gioia basically believes that the root of cultural stagnation is technological. He thinks the advent of the smartphone and scrolling social media feeds has made it hard for young people to pay attention to anything except the next quick dopamine hit, thus destroying the audience for longer, more sophisticated works of art. He writes:
Twenty years ago, the culture was flat. Today it’s flattened…I still participate in many web platforms…But now they feel constraining…Instead of connecting with people all over the world, I now get “streaming content” 24/7…Facebook no longer wants me to stay in touch with friends overseas, or former classmates, or distant relatives. Instead it serves up memes and stupid short videos…And they are the exact same memes and videos playing non-stop on TikTok—and Instagram, Twitter, Threads, Bluesky, YouTube shorts, etc…“Are we all beginning to have the same taste?” asks critic Rebecca Nicholson—complaining about her inescapable sense of repetition and sameness pervading music, TV shows, films, and everything else.
Like most people, he blames specific bad guys — the social media companies that serve people their soulless algorithmic feeds. But those folks are just trying to make money, and doing what the market incentives tell them to do; if they didn’t, someone else would, and the end result would be the same.
Once it becomes possible for everyone to have an internet-connected supercomputer in their pocket, everyone will. Once it becomes possible to smoothly and seamlessly deliver infinite feeds of short-form video content to everyone’s phone, someone will do that too. And if people keep tapping and swiping on it, that’s what they’ll get served.3 In the absence of laws or other forms of government power to stop the market from working, the market will give people what they demand.
Artists and creators have choices about what to make only within the constrained space of that market demand — at least, if they want to make a living doing their art, or get their art in front of a large number of eyeballs. Sure, plenty of people make art as a hobby, out of pure passion. There are plenty of people out there still composing symphonies even as their peers hum the latest TikTok jingle. But the twin desires for money and popularity are strong for most artists, and that means that their output will be constrained by the market — which in turn will be determined by the intersection of preferences and technology.
But technology doesn’t only determine what consumers demand; it also determines what kinds of creations artists are able to supply.
When I was a kid, there was a popular genre of music called “alternative rock”. The core riff of this type of music would often consist of a sequence of distorted power chords. Examples include “Shine” by Collective Soul, “Freak” by Silverchair, “Machinehead” by Bush, “All Over You” by Live, and so on.
It’s a nice sound, but it’s pretty limited in terms of what you can do with it — there are only so many short sequences of power chords you can construct. And during the mid to late 1990s, when every suburban boy in the country thought he was going to make it big as an alternative rock star, there were probably hundreds of thousands or even millions of guitar-playing youths sitting in their garages or their bedrooms finding every possible sequence. It was a swarm intelligence doing a brute-force grid-search algorithm over a fairly low-dimensional space.
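To make “fairly low-dimensional” concrete, here’s a back-of-the-envelope sketch. All the parameters are my own simplifying assumptions: four-chord riffs, twelve possible root notes, and transpositions counted as the same riff:

```python
from itertools import product

SEMITONES = 12    # possible root notes for a power chord
RIFF_LENGTH = 4   # a typical short chord progression (assumption)

# Every possible 4-chord progression, identified by root notes.
all_riffs = list(product(range(SEMITONES), repeat=RIFF_LENGTH))
print(len(all_riffs))  # 12**4 = 20,736

# Shifting a riff up or down the neck isn't a new song, so pin the
# first root note to quotient out the 12 transpositions.
riffs_up_to_key = [r for r in all_riffs if r[0] == 0]
print(len(riffs_up_to_key))  # 12**3 = 1,728
```

A search space of a couple thousand riffs is small enough for a generation of garage bands to exhaust many times over.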
I don’t think the 1990s kids found every possible alt-rock song. There were still some great new ones left to be written. Here’s “Chokecherry” by PONY, released in 2021:
But by and large, the alt-rockers successfully mined out all the low-hanging fruit of their micro-genre. There just wasn’t much left to do in that vein, and so the canon of alt-rock is now just mostly complete.
The alt-rock example illustrates how a particular entertainment format has a finite amount of low-hanging fruit that eventually runs out. In principle, the same should be true of much broader categories of entertainment — like melodic music itself.
The number of melodies that it’s possible to write is very large, but finite. In 2020, two programmers algorithmically generated every possible MIDI tune and published them for free, hoping to head off IP lawsuits about melodic plagiarism. Of course, that set of melodies is so large that in practice, it’ll be impossible for humans to record and release songs based on all of them.
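The arithmetic behind that project is simple enough to redo. A quick sketch, assuming the parameterization I’ve seen reported (12 beats, 8 possible pitches per beat, rhythm and octave held fixed); treat the exact numbers as approximate:

```python
pitches_per_beat = 8   # assumed pitch alphabet
beats = 12             # assumed melody length
total = pitches_per_beat ** beats
print(f"{total:,}")    # 68,719,476,736 -- tens of billions of melodies

# Listening to one melody per second, around the clock:
print(total / (60 * 60 * 24 * 365))  # ~2,179 years
```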
But in practice, the number of melodies that tug at human heartstrings is probably far smaller. These days when I search for new rock songs on Spotify and YouTube, most of what I find sounds like tuneless slop; the sequences of notes technically constitute a melody by some simple mathematical definition, but they don’t sound like anything to me.
And the more of the good melodies we find, the more similar the new melodies will sound to something that already exists. The Flaming Lips’ “Turn It On” has a different melody from Hootie and the Blowfish’s “Let Her Cry”, but it’s similar enough that if you try to hum one, you might accidentally find yourself humming the other. Kurt Cobain thought “Smells Like Teen Spirit” sounded like the Pixies song “Gouge Away”. Long before we exhaust all the melodic possibilities, we will have found enough of them that the distance from any new melody to the nearest old melody starts to shrink, making novelty feel more and more incremental.
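That shrinking-distance claim is easy to simulate with a toy model of my own devising (melodies as random 12-step pitch sequences, similarity as Hamming distance); nothing here depends on the exact parameters:

```python
import random

random.seed(0)
BEATS, PITCHES = 12, 8

def melody():
    """A random melody: one of PITCHES pitches at each of BEATS steps."""
    return tuple(random.randrange(PITCHES) for _ in range(BEATS))

def distance(a, b):
    """Hamming distance: how many steps differ between two melodies."""
    return sum(x != y for x, y in zip(a, b))

known = [melody()]
for n in (10, 100, 1_000, 10_000):
    while len(known) < n:
        known.append(melody())
    new_song = melody()
    nearest = min(distance(new_song, old) for old in known)
    print(f"with {n:>6} known melodies, the nearest one is {nearest} steps away")
```

As the catalog of known melodies grows, each new melody’s nearest neighbor creeps closer, which is exactly the sense in which novelty becomes incremental.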
The more constraining an entertainment format is — i.e., the smaller the space that artists have to search for novelty — the quicker the innovations stop feeling novel. For alternative rock, the space was very small. For melodic music, it’s bigger. For music as a whole, it’s far bigger — there are many musical elements that you can add to melodies in order to make the whole arrangement sound new, and some songs have no melody at all.
But even for music as a whole, there are probably a finite number of things you can do. There are many ways you can make music that conveys a sense of dreamy, ethereal longing, but not an infinite number of ways — and so when you try to evoke those emotions, you’re reasonably likely to end up with something that sounds a bit like shoegaze.
I wonder if this is why movies have become more repetitive than television. As Adam Mastroianni’s graph shows, movies took only about 20 years to go from 25% remakes and sequels to over 80%. But television’s novelty has decreased at a much slower rate:
Movies are simply much shorter than TV series. This constrains them pretty severely in terms of plot and characterization. When we talk about why Americans have been moving from the big screen to the small screen, we usually talk about the increasing quality of TVs. But it’s also possible that movies themselves may have exhausted a large fraction of their available novelty, while longer-format TV series are still going strong.4 This also might be why TV seemed like it was in a golden age in the 2010s, even as films’ creativity flagged.
The solution to this, of course, is to explore new artistic forms. If movies are getting stale, see what you can do with TV. If rock music is getting stale, see what you can do with electronica.
Some optimists argue that American pop culture isn’t stagnating at all — it’s just shifting to new and different forms. In a post last year, Katherine Dee argued that old formats like books, music, and movies have simply become less important, as cultural output has shifted to new formats:
[T]here’s a new culture all around us…We just don’t register it as “culture.”…We’re witnessing the rise of new forms of cultural expression. If these new forms aren’t dismissed by critics, it’s because most of them don’t even register as relevant…The social media personality is one example of a new form….not quite performance art, but something like it…The same is true of TikTok…[T]here is a lot of innovation on TikTok — particularly with comedy…Creating mood boards on Pinterest or curating aesthetics on TikTok are evolving art forms, too. Constructing an atmosphere, or “vibe,” through images and sounds, is itself a form of storytelling, one that’s been woefully misunderstood…They’re a type of immersive art that we don’t yet have the language to fully describe.
And Spencer Kornhaber writes something similar:
The great media of the 20th century—the art-pop album, the feature-length film, the gallery show, the literary novel—may be fighting for their life, but that’s because of competition from new forms defined by a sense of immediacy: short-form video, chatty podcasts, video games, memes. Like the old media, these forms foster tons of mediocrity. But they also invite surprising excellence[.]
If you look at the history of pop culture, novelty has always been driven by changes in technology. Rock music could exist only after the amplifier and the electric guitar pickup were invented. Electronic dance music could exist only after synthesizers, mixers, and samplers were created. Movies and TV required cameras and a host of other technologies. Even books were very hard to make before the printing press.
And yet in the realm of technology too, we may eventually see stagnation. Innovation is getting more expensive, and the pool of potential researchers is set to shrink. Perhaps AI can save us and revive both technological progress and cultural novelty, but that remains to be seen.
Existing alongside the criticism that American pop culture has become repetitive, there’s also the subtly different allegation that it has become less artistic — that the avant-garde has disappeared, replaced by shallow consumerist masscult. One of the chief proponents of this idea is my friend David Marx. In a response to Katherine Dee’s post about new cultural forms, he wrote:
TikTok/Reels skits just feel like the glossy, expertly-edited versions of the inside jokes that kids make at summer camp talent shows…Most of the creators…are non-art minded amateurs working in templated formats…“[D]igital vaudeville” says it all — it's just not an "inventive creative practice that expands what is possible with art."
And in a response to Spencer Kornhaber’s long piece in The Atlantic, Marx wrote:
The only way to define "progress" in culture is to draw clear lines between entertainment and art, something that has become extremely unpopular…A new critical consensus [in the 2000s and 2010s] demanded that we stop thinking about creativity in a hierarchical way: there was no "high" culture and "low" culture — just culture…This ideology has become known as "poptimism"…
Poptimism…made mass culture the center of the cultural conversation, which was bound to disappoint us. And, second, poptimist criticism provided a false promise that "creativity" can happen anywhere, ignoring the fact that some creative endeavors and formats are much more conducive to the kind of cultural invention that provides lasting works of art…
The poptimist generation of critics in the 21st century rejected [the] separation of high and low art. They argued that there was no meaningful difference between Mariah Carey and Kurt Cobain…Not every creative endeavor provides the same degree of originality or formalistic mastery. A child's finger-painting is not equivalent to a Rothko. A work only verges towards art in challenging or playing with the existing conventions to create new aesthetic effects. Entertainment…just needs to provide enough stimulus to momentarily keep an audience's attention, and it can usually achieve this by tapping well-tested conventional formulas.
[A]udiences are not so easily fooled…They know when a song is just a jam and not a radical piece of transformative art.
Marx differentiates “art” and “entertainment” based not on their quality, but on the intent of their creators. “Art” is when creators try to push the boundaries of creative expression with new forms and new ideas; “Entertainment” is when creators just want to please the masses. When creators stop trying to make art and just make entertainment, you get a decrease in novelty, because people aren’t trying as hard to push the boundaries — there is no avant-garde.
That’s a reasonable definition, and a reasonable hypothesis. I don’t think things are nearly so cut-and-dried — plenty of entertainers work very hard and very creatively to find new lucrative ways to entertain the masses. I highly recommend the book The Making of Star Wars as a window into just how much effort and inventiveness went into the 1977 movie.5 But OK, I do accept that if you have fewer people intentionally trying to create novelty for novelty’s sake, you will probably end up with less novelty.
The question is why creators have moved from art to entertainment. Marx thinks of culture as something autonomous, so he blames the malign influence of “poptimism” for telling people that entertainment and art are the same. But I naturally tend to look for technological explanations.
When I think about the avant-garde, I think about artists making art for other artists. If you make a painting that’s just a bunch of colored squares or splashes of paint on a canvas, an average person might not be able to tell your art from the efforts of a small child. But other artists will know that you’re trying to subvert the paradigm they’ve been working in — to make a statement about what art is. That’s the kind of thing that only one’s artistic peers understand.
Some artists will always want to make things for other artists to see and react to and judge. But I think that in the old days, many did it out of technological necessity.
Discovering good artists in the old days was a very difficult endeavor. Production companies and publishers had to spend a lot of effort scouting around, and then make a guess as to how a creator’s work would perform in the commercial sphere. An easy way to separate the wheat from the chaff was to basically use a peer review system — to use a creator’s standing in the artistic community as a proxy for whether they would sell to a more general audience.
And so many artists tried to impress other artists because they had to — because other artists were always the first gatekeepers of their work. Standing within the artistic community was what got you discovered. When George Lucas tried to get the rights to make a Flash Gordon movie, the rights holders told him it had to be directed by Federico Fellini — a guy they knew as a top arthouse director. (This was impossible, of course, so Lucas made Star Wars instead.)
Fast-forward to the 2020s, and the artistic community has been largely disintermediated. If you want to be a successful commercial creator, the way to get started now is not first to struggle to prove yourself in the closed and cosseted artistic community — it’s to simply throw your work up online and see if it goes viral. If it does, you’re in.
This means that any creator whose goal is to sell out can do so without spending years making art that impresses artists. Of course, some creators still just intrinsically want to impress other artists. But if the money-motivated creators have left the community, there are just fewer people in that community left to impress. It becomes more and more niche and hipster. And there are fewer crossovers from the art world to mass culture, because the people left in the art world are the ones who don’t really care if they get famous and rich.
So if this hypothesis is true, and if you wanted to bring back the avant-garde, how would you do it? One idea would be to follow the university model — to create fairly closed-off spaces where artists live in material equality with lots of public goods. The high baseline standard of living would reduce artists’ need to get rich. And the close proximity of so many artists would make them try to produce novel art in order to impress each other — just like in academia, professors try to impress each other with their research.
I doubt any new institution like this is in the offing; the university model itself is in trouble, and I don’t think the government is going to be willing to fund art schools just so the professors can make cool art.
But that’s the basic principle — if you want more novelty, I think you’ve got to make the artists work for each other more. How you do that, in a world where technology has made artists irrelevant as gatekeepers, is not something I have a concrete answer for. We may simply be in for a long period of artistic stagnation in America.
To sum up, I sort of believe that cultural stagnation is real, but I also think the root of the problem is probably technological — and therefore very hard to expunge.
I had AI (ChatGPT o3) find these examples for me, in case you’re wondering how I’ve incorporated AI into my writing process. I did have to check them thoroughly to make sure I wasn’t mischaracterizing them, as the AI also listed a whole bunch of examples that didn’t say what it claimed they said.
Well, the first one, at least.
Note that when we talk about the toxic effects of social media, we usually talk about network effects that keep users trapped in an ecosystem because everyone else they want to interact with is trapped in there with them. TikTok and other algorithmic feeds just aren’t that. There’s very little interaction between the users. If I show you a video from Instagram Reels, you’ll be just as entertained as if it had come from TikTok; there’s really no switching cost involved. Algorithmic feeds are really push media — a form of television. They may be addictive, but they probably aren’t network traps.
TV also got a later start, since only recently did technology make it cheap to produce many hours of good-looking TV.
In fact, this is the only “making of” book that I’ve ever recommended to anyone.
2025-05-12 16:04:56
The theme of this week’s roundup is the fall of capitalism. With Trump going for price controls and tariffs, and anticorporate progressives savagely attacking Ezra Klein for wanting to build more houses, the political constituency for capitalism seems moribund for now. Meanwhile, Warren Buffett is retiring, hiring in the tech sector also seems dead, and the UK seems to have embraced degrowth. What’s a capitalist to do in times like these?
But first, an episode of Econ 102, about (what else?) tariffs:
Anyway, on to the list:
An investing legend has passed into history. Warren Buffett, the chairman and CEO of Berkshire Hathaway, announced that he will step down at the end of this year — at the age of 95. Over the decades, Berkshire — the vehicle through which Buffett made his investments — has beaten the stock market spectacularly:
If you invested a thousand dollars in the S&P 500 back in 1964, you’d now have a few hundred thousand dollars. If you invested it in Buffett’s company instead, you’d now have tens of millions of dollars.
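To get a feel for how a gap in annual returns compounds over six decades, here’s a back-of-the-envelope sketch in Python. The ~10.4% and ~19.9% compounded annual gains are my approximations of the figures reported in Berkshire’s shareholder letters, so treat the outputs as rough orders of magnitude:

```python
# Rough compounding comparison. The ~10.4% and ~19.9% compounded annual
# gains are approximations of the figures Berkshire's shareholder letters
# report for the S&P 500 (with dividends) and for Berkshire itself, 1965-2024.
def terminal_wealth(initial: float, annual_return: float, years: int) -> float:
    return initial * (1 + annual_return) ** years

YEARS = 60
print(f"S&P 500:   ${terminal_wealth(1_000, 0.104, YEARS):,.0f}")   # ~$380,000
print(f"Berkshire: ${terminal_wealth(1_000, 0.199, YEARS):,.0f}")   # ~$54,000,000
# A ~9.5-point gap in annual returns compounds into a ~140x gap in wealth.
```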
And in terms of his Sharpe ratio — the measure of his excess returns relative to the amount of risk he took — Buffett stands alone above all other mutual funds:
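For readers unfamiliar with the metric: the Sharpe ratio is just the mean excess return divided by the volatility of those returns, usually annualized. A minimal sketch (the return series here is randomly generated purely for illustration):

```python
import numpy as np

def annualized_sharpe(monthly_returns, monthly_rf=0.0):
    """Mean excess return divided by its volatility, annualized."""
    excess = np.asarray(monthly_returns) - monthly_rf
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Toy example: a fund averaging 1% a month with 4% monthly volatility.
rng = np.random.default_rng(0)
fund = rng.normal(0.01, 0.04, size=240)      # 20 years of made-up returns
print(round(annualized_sharpe(fund, monthly_rf=0.003), 2))
```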
Buffett thus did something that finance theory says no investor should be able to do: He beat the market, consistently, by huge amounts. What’s more, his market-beating returns compounded — unlike most top investment funds, which return some of your money every year rather than reinvesting it, Buffett simply kept turning all of Berkshire stockholders’ money into more money at stupendous rates.
Well…almost. Buffett beat the market by a lot more in his early decades:
The simple explanation is that when Berkshire got big enough, it became harder and harder to keep finding as many mouth-watering overlooked investment opportunities for that growing pile of cash. Buffett’s returns took a heck of a long time to attenuate, but attenuate they did.
Another big question is how much of Buffett’s performance could be attributed to systematic approaches that any investor can copy, and how much was due to some ineffable individual talent. Books have been written about the “Warren Buffett Way”, but can regular people really be their own Buffett?
Frazzini et al. (2018) say yes, they can. They find that Buffett’s entire outperformance against the broader market can be explained by two systematic factors that they call “betting against beta” (BAB) and “quality minus junk” (QMJ):
A loading on the BAB factor reflects a tendency to buy safe (i.e., low-beta) stocks while shying away from risky (i.e., high-beta) stocks. Similarly, a loading on the QMJ factor reflects a tendency to buy high-quality companies—that is, companies that are profitable, growing, and safe and have high payout…
Buffett likes to buy safe, high-quality stocks. Controlling for these factors drives the alpha of Berkshire’s public stock portfolio down to a statistically insignificant annualized 0.3%. That is, these factors almost completely explain the performance of Buffett’s public portfolio. Hence, a significant part of the secret behind Buffett’s success is the strategy of buying safe, high-quality, value stocks…Our statistical finding is consistent with Buffett’s own words from the Berkshire Hathaway 2008 Annual Report: “Whether we’re talking about socks or stocks, I like buying quality merchandise when it is marked down.”
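To make the exercise concrete, here’s the shape of the regression Frazzini et al. run: Berkshire’s excess returns regressed on factor series, with the leftover intercept being the “alpha.” This is a generic sketch with placeholder data, not the authors’ code; the real BAB and QMJ series are published in AQR’s data library.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder monthly data. In the real exercise, y would be Berkshire's
# excess returns and the columns of `factors` would be the market, BAB,
# and QMJ factor series (AQR publishes the latter two).
rng = np.random.default_rng(42)
n = 360
factors = rng.normal(0.005, 0.03, size=(n, 3))            # MKT, BAB, QMJ
berkshire = factors @ np.array([0.95, 0.3, 0.4]) + rng.normal(0, 0.02, n)

fit = sm.OLS(berkshire, sm.add_constant(factors)).fit()
alpha, t_alpha = fit.params[0], fit.tvalues[0]
# If the factors explain the returns, the leftover intercept (alpha) is
# statistically indistinguishable from zero -- the paper's headline result.
print(f"annualized alpha: {12 * alpha:.2%}  (t = {t_alpha:.2f})")
```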
But I’m not so sure the story ends there. In finance theory, factors that explain returns are supposed to be risk factors — in an efficient market, the excess expected returns in the long run are supposed to compensate you for the risk you take in the short run. “Betting against beta” doesn’t sound like a risk factor, because beta (a stock’s sensitivity to moves in the overall market) is itself a risk factor — instead, it sounds like a market inefficiency, where investors take on too much beta for too little gain. Similarly, it’s hard to imagine why “quality” stocks — companies that are profitable and growing — would be systematically riskier than other stocks.
A likelier explanation is that in the 1960s, 1970s, and 1980s, the U.S. stock market was just somewhat inefficient — investors were just making a bunch of dumb bets. And Buffett, with his trademark approach of cool rationality and hunting for bargains outside the major stock indices, was one of the first investors to realize just how many mistakes the average investor was making — and to bet a huge amount of money on that realization. Eventually, as more investors learned to be cool and rational like Buffett, his advantage over the market decreased.
If that’s the case, then as Jason Zweig writes, there will never be another Warren Buffett. His basic approaches are probably still sound, but they will never again produce such stellar outperformance. Buffett defined an age that is now over.
During the 2024 presidential campaign, Donald Trump and his surrogates and allies railed against Kamala Harris’ flirtation with price controls. Now, Trump is declaring that he’s going to set prices on pharmaceuticals:
Whether Trump has the legal ability to do this is an open question; it could just be a populist move that gets blocked by the courts in a month or two.
Even if the move does go through, it’s more likely to raise prices in poor countries than to reduce them in America. If drugmakers are forced to charge Americans the same price they charge in poor countries, they’ll probably respond by raising prices in those countries rather than lowering them here:
So even if it goes through, the policy is probably going to fail in its goal of making drugs cheaper for regular Americans.
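The logic is easy to see in a toy model. Suppose a monopolist drugmaker sells into a big, price-insensitive U.S. market and a small, price-sensitive poor-country market, and a most-favored-nation rule forces it to charge one worldwide price. A minimal sketch, with all demand parameters invented for illustration:

```python
import numpy as np

# Toy most-favored-nation pricing model: linear demand q = max(0, a - b*p)
# in each market, zero marginal cost. All parameters are invented.
def demand(a, b, p):
    return np.maximum(0.0, a - b * p)

a_us, b_us = 100.0, 0.5      # big, price-insensitive U.S. market
a_poor, b_poor = 10.0, 2.0   # small, price-sensitive poor-country market

# Separate monopoly prices: p* = a / (2b) for linear demand at zero cost.
print("separate prices:", a_us / (2 * b_us), a_poor / (2 * b_poor))  # 100.0, 2.5

# Under MFN, the firm picks one worldwide price to maximize total profit.
prices = np.linspace(0.01, 120.0, 12000)
profit = prices * (demand(a_us, b_us, prices) + demand(a_poor, b_poor, prices))
print("single MFN price:", round(float(prices[np.argmax(profit)]), 1))  # ~100.0
# The firm keeps the U.S. price and lets the poor market get priced out,
# rather than cutting the U.S. price down toward the poor-country level.
```

With a big enough gap between the two markets, the profit-maximizing single price is essentially the U.S. price, which is exactly the "poor countries lose access" outcome described above.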
Of course, if drugmakers were forced to lower their U.S. prices by a huge amount, it would certainly help a lot of people in the short term. In the long term, however, it would probably lower medical innovation, by reducing the incentive to do long, arduous, expensive pharma research. A large literature shows that the more revenue pharma companies make, the more they spend on research. Limiting these companies’ market size by limiting the prices they can charge will result in less research spending, and ultimately fewer new medicines and treatments.
Ji and Rogers (2025) find what looks like a recent example of this actually happening:
We investigate the effects of substantial Medicare price reductions in the medical device industry, which amounted to a 61% decrease over 10 years for certain device types. Analyzing over 20 years of administrative and proprietary data, we find these price cuts led to a 29% decline in new product introductions and an 80% decrease in patent filings, indicating significant reductions in innovation activity. Manufacturers reduced market entry and relied more heavily on outsourcing to other producers, which was associated with higher rates of product defects. Our calculations suggest the value of lost innovation may fully offset the direct cost savings from the price cuts. We propose that better-targeted pricing reforms could mitigate these negative effects. These findings underscore the need to balance cost containment with incentives for innovation and quality in policy design.
Here’s a thread Rogers wrote, summarizing the findings.
Some biotech entrepreneurs are already worried that this could hurt their businesses:
The biotech folks shouldn’t be too worried. As Abaluck shows, the likeliest result of Trump’s policy, even if it goes through (which seems unlikely), is for Americans to be mostly unaffected while people in poor countries suffer.
But the precedent of Trump calling for price controls — another idea he cribbed from the left — shows that we’re sliding ever farther away from free-market capitalism, and toward a sort of boneheaded Maoism where economic central planning is done by a few very foolish old men whose ideas about economics all came from watching CNN in 1992.
When Ezra Klein and Derek Thompson published Abundance, a number of progressives rushed to attack it. Initially, most of the critics on the left argued that the authors had given insufficient attention to corporate power and the need for a movement against monopolies.
In a recent discussion, Klein pressed two of his progressive critics — Zephyr Teachout and Saikat Chakrabarti — to tell him what concrete problems they think an antimonopoly movement would solve. Again and again, Teachout’s answer is “power”:
It’s a democracy vision…I think for 40 years…we basically stopped asking the power question…[B]oth Republicans and Democrats got on board with…this idea that we should just focus on outputs and not on power. So that’s part of the reason you hear some resistance from the antimonopolists to your [abundance] vision.
Here we see the fundamentally different goals of abundance liberalism and anticorporate progressivism. Abundance liberals care about what stuff people get, while anticorporate progressives care about who holds power in society. The progressives have trouble explaining exactly how changing the distribution of power would lead to better material outcomes for the masses, but that doesn’t faze them much; to them, reducing corporate power is an end in and of itself.
As I’ve noted before, this idea smacks of class resentment — a professional class that cares more about dunking on the entrepreneurial class than about helping the working class. That’s why critics of Abundance, like Aaron Regunberg, tend to focus so strongly on accusations that abundance liberalism is being secretly supported by class enemies:
Our concern is that corporate-aligned interests are using abundance to head off the Democratic Party’s long-delayed and desperately needed return to economic populism…[W]ho are the villains we should be naming? A growing number of Democrats are coalescing around a simple answer to that question: oligarchy…[M]any Democratic elites still oppose any attempts to identify billionaires and corporations as villains…[T]hey are terrified of the prospect of a populist takeover of a party…that has for decades served as a comfortable partner to oligarchy…
Groups like Third Way, which are largely funded by billionaires and corporations, have been major boosters of the abundance framework, as have other key pillars of US oligarchy, including crypto, Big Tech, and Big Oil. These interests have a clear vested interest in derailing the growing Democratic turn toward economic populism. And they have found in abundance advocates—like Abundance coauthor Derek Thompson, who recently argued that oligarchy “does a terrible job of describing today’s problems”—a valuable tool for redirecting the anti-establishment rage building within the Democratic base away from themselves[.]
This also probably explains why critics who claim to address the substance of Klein and Thompson’s arguments often seem not to have actually read Abundance at all. For example, the progressive economist Isabella Weber claims that Klein and Thompson call for deregulation but ignore the importance of state capacity in getting big things done. In fact, Klein and Thompson spend most of their book arguing in favor of increased state capacity, and bemoaning how progressives have focused their energies on shackling the power of big government. The most likely explanation for this mischaracterization is that Weber simply assumes that abundance liberals are allied with her class enemies, and thus must be simple small-government libertarians.
These critiques will find some purchase among progressives, including among the all-important Democratic staffer class (who I’ve now started to think of as the main audience for most of these factional food fights). But the critiques are fundamentally bankrupt. Companies, including large ones, have a crucial and indispensable role in providing the material living standards that make life in a developed country so good. There are no rich countries without big corporations. Of course if they capture the government it can cause big problems, but America’s progressives seem to think that the simple fact that corporations are large and profitable means that government has been captured. This is wrong, but class resentment makes it a compelling idea.
One of the most overlooked economic stories in America is the multi-year bust in tech hiring. Tech stocks crashed in 2022 and eventually recovered, but the wave of layoffs that accompanied the crash never reversed — or at least, it hasn’t yet. It’s a brutal job market for tech workers out there:
Last year, I argued that this was because the internet has been mostly completed, leaving much less work for software folks to do:
Of course, AI has replaced internet software as the Next Big Thing — the locus of hype, VC investment, and engineer excitement. But it’s not clear just how many humans are going to be needed to build the AI future:
This chart probably should have included OpenAI, Anthropic, and xAI. Their total employment is a bit less than 10,000, and most of those folks were hired in the last three years, so including them would moderate the picture a bit. Still, there’s no gigantic AI hiring boom to match the giant software hiring boom of the 2010s. No one talks about OpenAI as the kind of omnipresent, reliable, cushy gig that a smart person can always fall back on if their startup doesn’t work out — the way people talked about Google in 2018.
One possibility is that AI is already coming for software engineers’ jobs.1 AI basically does for software production what machine tools did for hardware production. If overall demand for software rises as a result of AI, then we’ll see an employment boom and a wage boom, but if demand remains constant and production gets automated, the industry will simply need fewer engineers.
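One way to make the machine-tools analogy precise is a constant-elasticity sketch: if AI multiplies each engineer’s output by some factor g, competition pushes software prices down, demand expands, and headcount ends up scaling like g^(elasticity − 1). Whether that’s a boom or a bust depends entirely on whether demand elasticity is above or below 1. This is my own stylized illustration, not an estimate of actual industry parameters:

```python
# Stylized model: AI multiplies each engineer's output by g. Under
# competition the price of software falls ~1/g; with constant demand
# elasticity eps, quantity demanded rises by g**eps; headcount is
# quantity / output-per-engineer, i.e. baseline * g**(eps - 1).
# Parameters are illustrative, not estimates.
def employment_multiplier(g: float, eps: float) -> float:
    return g ** (eps - 1)

for eps in (0.5, 1.0, 2.0):
    print(f"elasticity {eps}: headcount x{employment_multiplier(2.0, eps):.2f}")
# elasticity 0.5: x0.71  (fewer engineers)
# elasticity 1.0: x1.00  (unchanged)
# elasticity 2.0: x2.00  (hiring boom)
```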
In any case, I wonder if this protracted, quiet tech bust is a significant factor in the emergence of the Tech Right. With the boom times over, and AI threatening disruption without making it clear where the profits will be made, the software industry has gone from a gold rush to a defensive crouch. That might have scared some tech people into thinking that only Trumpian deregulation, crypto-pumping, and special favors can restore their wealth to its upward path.
Why is the British economy so stagnant? The Resolution Foundation’s Simon Pittaway has an interesting new report that tries to answer that question. (Here’s a thread where he explains the report’s main findings.)
Pittaway shows how the U.S. has led the pack in terms of productivity, while the UK is falling behind:
Pittaway finds a number of problems that are probably holding back UK productivity, but his three main culprits are health, energy, and information technology.
The UK’s health industry has been dragging the country’s productivity down like no other:
Of course, productivity in the health sector is notoriously hard to measure, because the market is so un-market-like — value-added depends on how much people pay for health care, rather than how much actual value they’re getting. But including alternative estimates of health care productivity changes the overall picture very little.
It may be that the UK’s National Health Service is becoming less efficient.
Anyway, Pittaway then notes that the U.S. has a lot more fossil fuel resources than the UK. This not only adds a big important industry to the U.S. roster, it lowers energy costs for American companies:
(As a side note, the fact that France’s energy costs have gone up as much as the UK’s, despite France mostly running on nuclear power, should serve as a wake-up call for people who think nuclear power can solve the UK’s problems.)
As for IT, Pittaway notes that U.S. companies purchase a lot more software than British companies do, for reasons that aren’t quite clear. To all this, add things like low business dynamism, disinvestment from Brexit, and so on. The problem of low British productivity is overdetermined — the country is just getting a lot of things wrong.
Although Pittaway is diplomatic in not pointing this out, a lot of these mistakes are probably politically driven. Brexit certainly was. The progressive antipathy toward fossil fuels, though rooted in very real concern over climate change, is probably overdone. And of course the NHS is probably shielded from reforms by politics. The UK, like the U.S., has fallen into a political equilibrium where popular ideologies push the country toward degrowth, and regular people just have to deal with it.
Correction: A previous version of the intro to this post stated that Warren Buffett is dead. He is not.
Derek Thompson argues that AI is already coming for the jobs of new college grads, noting that recent grads now have an unusually high unemployment rate:
I have to say I’m not convinced. AI wasn’t writing code in 2013-15, when the biggest rise in new grad unemployment occurred. A more likely story is that this chart reflects elite overproduction, though I suppose AI might have added a bit of fuel to the fire in the most recent year or two.
2025-05-10 18:15:36
The big news of the last couple of days is that India and Pakistan are at war. A terror attack by a (possibly) Pakistan-sponsored group drew Indian airstrikes in response. But unlike in 2019, when something very similar happened, the two countries didn’t immediately cool things down. Instead, there has been a mounting cycle of escalation — airstrikes, missile strikes, shelling, aerial dogfights, and so on. A lot of people are still calling it a “standoff”, but by now it’s pretty obvious that it’s a war; no one is actually standing off.
(Update: Shortly after I wrote this post, the Trump administration announced that it had brokered a ceasefire between India and Pakistan. India agreed that there was a ceasefire, but denied that the U.S. had brokered it. Shortly after that, there were reports that the ceasefire had been violated.)
This post isn’t about the blow-by-blow of the conflict — if you want, you can follow the list that I made on X.1 Instead, I want to make a more general observation: We are slipping further and further toward a world of war.
It’s a popular trope to say that war is always with us, but that’s true only in the vaguest, least informative sense. If you just count up all the countries where some kind of violence is occurring, that number has risen only modestly over the last decade. But most of those conflicts are minor skirmishes and simmering fights with gangs, terror groups, and little bands of ineffectual revolutionaries. When you look at estimates of the actual deaths in state-based conflicts, you can see that they took a big jump after the pandemic:
A chart of ten of the deadliest conflicts of the 21st century shows that deaths have been concentrated in recent years:
But to me, this data is less convincing than a single terrifying fact: Out of the world’s nine nuclear powers, four — India, Pakistan, Russia, and Israel — are now at war. (Update: Actually, if you count North Korea as being at war in Ukraine, it’s five!)
Even if India and Pakistan manage to climb down from the brink and avoid a protracted conflict, we should all still be unsettled. Russia and Israel are fighting non-nuclear enemies, which is at least consistent with the idea that nuclear weapons deter wars between nuclear-armed states. But the fact that India and Pakistan — both armed with nukes — have been willing to fight each other to this degree, instead of stepping back as they did in 2019, should worry us deeply.
This episode only reinforces what has been apparent for several years now — war is returning to our world. We’re slowly exiting a world of guerrillas and gangs and petty border skirmishes, and returning to the days when great powers clashed with each other regularly.
Ultimately, the reason for this is the end of Pax Americana. The power vacuum created by the decline of the global hegemon is prompting a scramble for power.
In Europe, three of the last four centuries featured some kind of big “crisis” — a massive outbreak of war that destroyed the old power equilibrium among nations and ushered in a new one. The Thirty Years’ War in the early 1600s featured all of the region’s powers, and devastated the population of Germany. In the late 1700s and early 1800s there were the Wars of the French Revolution and Napoleonic Wars. And then of course in the early 1900s there were the World Wars. Other regions of the globe didn’t have such regular contests among regional powers, but they did have crises of their own — the fall of the Ming Dynasty in the mid-1600s, the Taiping Rebellion in the mid-1800s, the American Civil War, the Pacific Theater of World War 2, and so on.
This isn’t some regular clockwork cycle. The point is that over time, the stability of power relations seems to decay, both between nations and within them. This decay eventually results in a military contest to see where the new power lies.