2026-03-19 17:37:35

I’ve been writing some pessimistic things about AI recently, so I thought I should try to balance those out with some optimistic takes. One way I think AI could really help our society is by injecting reasonableness and moderation into our public discourse.
I’m known as a pretty nice and reasonable blogger nowadays. But when I got started, as an angry graduate student in 2011 trying to distract himself from his dissertation, I was genuinely snarky. Going back and rereading some of my posts from that era makes me chuckle, but also wince a little bit. The genteel éminences grises who sat atop the hierarchy of the very hierarchical economics profession just had no idea how to deal with a snarky, internet-native Millennial who was willing to talk back.
That snarky bravado, though sincere, was how I (accidentally) forced myself into the influencer elite. Paul Krugman, Brad DeLong, and other established bloggers liked how I tweaked the tails of the stuffy New Classical macroeconomists who pooh-poohed fiscal stimulus. So they boosted me on their own blogs, and pretty soon almost everyone in the economics profession knew my name — deservedly or not. Then I got Twitter, and I started tweeting way too much, and the rest is history. Notably, it was my political tweets — anti-Trump stuff in 2015-2020 — that got me my biggest bump in social media followership, rather than my economic insights.
In the media world of 1991, this career path would have been a LOT harder to pull off. I could have been a newspaper columnist or perhaps even a TV show host, but it would have been a long hard slog, gatekept by a bunch of editors who embodied the conventional wisdom of an older generation. My best bet for breaking in as an irreverent, independent voice probably would have been talk radio. In the media world of 1971, forget about it — I would have had zero chance of breaking into a discourse dominated by broadcast TV and big newspapers.
We can wonder whether the world would have been better or worse had I never become a public intellectual (hopefully, because you read this blog, your answer is “better”). But in my personal opinion, it’s pretty clear that the phenomenon of outsiders breaking into the discourse with aggression and social media attention-seeking has gone too far. There is very clear evidence that social media — far more than the traditional media it replaced — has led to the elevation of divisive voices and bad actors.
For example, Bor and Petersen (2021) find that social media draws malignant, status-seeking people who use hostility to get attention and power:
Why are online discussions about politics more hostile than offline discussions?…Across eight studies, leveraging cross-national surveys and behavioral experiments (total N = 8,434), we [find that] hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline. Finally, we offer initial evidence that online discussions feel more hostile, in part, because the behavior of such individuals is more visible online than offline. [emphasis mine]
Basically, spreading hate and divisiveness on social media is a form of entrepreneurship. As Eugene Wei has written, social media is all about getting social status. 10,000 followers on X may not sound like a media empire to rival CBS News, but for most people it’s more attention than they would otherwise get in their entire life. For malignant individuals who crave status and attention and enjoy spreading fear and hate, social media is a natural platform for their dark dreams.
This is especially effective because the psychology of virality favors negative content over positive content. Here’s Knutson et al. (2024):
We analyzed the sentiment of ~30 million posts (on twitter.com) from 182 U.S. news sources that ranged from extreme left to right bias over the course of a decade (2011–2020). Biased news sources (on both left and right) produced more high arousal negative affective content than balanced sources. High arousal negative content also increased reposting for biased versus balanced sources…Over a decade, the virality of high arousal negative affective content also increased, particularly in…posts about politics. Together, these findings reveal that high arousal negative affective content may promote the spread of news from biased sources.
And Brady et al. (2021) find that social media outrage is a self-reinforcing process:
Moral outrage shapes fundamental aspects of social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two preregistered observational studies on Twitter (7331 users and 12.7 million total tweets) and two preregistered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning.
Together, these effects probably explain why negative content — especially about people’s political enemies — is so much more common than positive content on social media. Here’s Watson et al. (2024):
Prior research demonstrates that news-related social media posts using negative language are re-posted more, rewarding users who produce negative content…Data from four US and UK news sites (95,282 articles) and two social media platforms (579,182,075 posts on Facebook and Twitter, now X) show social media users are 1.91 times more likely to share links to negative news articles….[U]sers [show] a greater inclination to share negative articles referring to opposing political groups. Additionally, negativity amplifies news dissemination on social media to a greater extent when accounting for the re-sharing of user posts containing article links. These findings suggest a higher prevalence of negatively toned articles on Facebook and Twitter compared to online news sites.
And as if that wasn’t bad enough, social media platforms algorithmically amplify divisive content, probably as a business strategy! Here’s Milli et al. (2024):
In a pre-registered algorithmic audit, we found that, relative to a reverse-chronological baseline, Twitter's engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group.
And research also finds that algorithmic feeds tend to increase political polarization.
In other words, the rise of social media created a revolution in political discourse. The old-school monopoly of big newspapers and TV stations — already under strain from the Web and from increased entry and competition — was overthrown by a giant mob of wannabe influencers, using divisiveness, partisanship, ideology, tribalism and negative emotions to get attention and status.
I call these people the Shouting Class. The most successful among them include people like Nicholas Fuentes, a literal Hitler supporter who has called for women to be sent to “gulags”; Candace Owens, a conspiracy theorist and antisemite; and Hasan Piker, who has said that America deserved the 9/11 attacks. But the real damage is probably done by the vast legions of smaller-time shouters, all dreaming of becoming the next Fuentes or Owens or Piker. If you’re on X or Bluesky, you can probably name a few of them.
Regular people know, of course, that social media is ruled by monsters great and small. Here’s a poll from 2020 showing that Americans think social media has a negative effect on their society:

And here’s a recent poll showing that Americans trust social media less than just about any other institution:

Increasingly, Americans are getting off social media. But because the normal, moderate Americans are leaving first, this just cedes the field of influence to the extremists. This is from Törnberg (2025):
Overall platform use has declined, with the youngest and oldest Americans increasingly abstaining from social media altogether. Facebook, YouTube, and Twitter/X have lost ground, while TikTok and Reddit have grown modestly…Across platforms, political posting remains tightly linked to affective polarization, as the most partisan users are also the most active. As casual users disengage and polarized partisans remain vocal, the online public sphere grows smaller, sharper, and more ideologically extreme.
This is, of course, not the first time that new media technologies have opened up opportunities for divisive entrepreneurs to use hate and fear to boost their careers. Consider Charles Coughlin, a right-wing radio host in the 1930s, who called for an end to democracy and labeled Hitler a “hero”. Coughlin, whose ideas are recognizably similar to those of Fuentes or Tucker Carlson today, used a new media technology (radio) and a constant stream of negativity to break into the public consciousness and establish himself as an influencer.
Why did the Charles Coughlins give way to the staid, centrist Big Media of the mid-20th century? Monopoly power. Big newspapers gradually built local monopolies that made it hard for upstarts to break in using sensationalism (as they had done in earlier decades). Limited spectrum availability insulated broadcast TV stations and radio stations from competition.1
Those gatekeepers inevitably lost power as new technologies allowed new entrants to get inside the walls. Cable TV led to the rise of talk show hosts like Sean Hannity, Tucker Carlson, and Rachel Maddow. Talk radio led to Rush Limbaugh and Michael Savage. The Web led to blogs like the Drudge Report. All of these new entrants used divisiveness and negative emotion to break in. Social media just supercharged the process.
Arguably, American society hasn’t recovered from the blow that the rise of social media dealt it. Other societies seem to be a little bit more insulated from social media’s deleterious effects, due to their greater homogeneity and centralization — but only a bit. The problem is global.
The question now is what can save us from the tyranny of the Shouting Class. Who can be the next Walter Cronkite?
I used to think that this was a job for the owners of platforms themselves — that if they really wanted to, people like Elon Musk could tweak their algorithms and moderate their content to suppress the most divisive shouters and reward balance and reasonableness. I no longer think this will work. Watching the management of Bluesky try and fail to halt that platform’s descent into madness, and watching Elon’s algorithmic tweaks produce at best a slight conservative shift in opinion, I’m a lot more pessimistic about the ability of wise corporate management to suppress the Shouting Class. And given the fact that Elon has elevated some of that class’ worst members, I’m also more pessimistic about the desire of management to become CBS News.
Which leaves us with AI.
Anyone who has used X has noticed the “call Grok” feature. If you’re a premium subscriber, you can always just tag Elon’s favorite LLM and get it to answer questions and deliver relevant facts. Dan Williams writes that this type of LLM fact-checking will reintroduce expertise and technocratic fact-based analysis back into public discussions:
First, unlike human experts, [LLMs] can rapidly deploy encyclopaedic knowledge to answer people’s idiosyncratic questions. Their responses can be probed, scrutinised, and questioned without them ever getting tired or frustrated. They won’t just tell you that there is no persuasive evidence for a link between vaccines and autism. They can carefully walk you through the kinds of evidence we have and address your specific sources of scepticism. This partly explains why they can be highly persuasive, even in correcting conspiratorial beliefs that many assumed were beyond the reach of rational persuasion.
Second, LLMs typically share information politely and respectfully. This not only differs from the performative, gladiatorial character of much debate and discussion on social media platforms, but also improves on much communication by human experts. Being human, experts are often biased, partisan, and simply annoying, and when they seek to “educate” the public, it can be perceived—and is sometimes intended—as condescending and rude. In contrast, LLMs deliver expert opinion without such status threats.
In fact, there is evidence that this works. Despite widespread worry that AI will become a machine for confirmation bias — simply telling people what they want to hear — Renault et al. (2026) find that Grok is actually a decent fact-checker:
Using an exhaustive dataset of 1,671,841 English-language fact-checking requests made to Grok and Perplexity on X between February and September 2025, we provide the first large-scale empirical analysis of how LLM-based fact-checking operates in the wild…Across posts rated by both LLM bots, evaluations from Grok and Perplexity agree 52.6% of the time and strongly disagree (one party rates a claim as true and the other as false) 13.6% of the time. For a sample of 100 fact-checked posts, 54.5% of Grok bot ratings and 57.7% of Perplexity bot ratings agreed with ratings of human fact-checkers, which is significantly lower than the inter-fact-checker agreement rate of 64.0%; but API-access versions of Grok had higher agreement with fact-checkers, which did not significantly differ from inter-fact-checker agreement. Finally, in a preregistered survey experiment with 1,592 U.S. participants, exposure to LLM fact-checks meaningfully shifts belief accuracy, with effect sizes comparable to those observed in studies of professional fact-checking.
In fact, although Elon has tirelessly worked to make Grok less “woke”, Renault et al. find that the AI is more likely to correct Republican posts than Democratic ones. While that doesn’t necessarily mean that reality has a liberal bias, it does show that the people who create LLMs have difficulty imparting their political bias to their creations.
Costello et al. (2024) also find that talking to AI makes people believe less in conspiracy theories.
I’m hopeful that LLMs will become fact-checking machines and dispensers of expertise-on-demand. But I actually think there’s a far more important reason why they could recapture our political discourse from the Shouting Class. Because of the way they’re trained, LLMs will be a force for homogenization and moderation of opinion.
This idea has been rattling around in my head for a while now, but I just noticed that Dylan Matthews wrote about this a couple months ago:
Some communication technologies are epistemically diverging: their emergence and diffusion results in the affected population’s sense of reality polarizing. Typically this means that the technology has enabled the population to access more and more varied perspectives and factual narratives than it had access to before the technology emerged…The classic example is the printing press and its effect on religious polarization in 16th century Europe…The classic modern diverging technology is, of course, social media…
Other technologies are epistemically converging: they help homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members…Network TV news, from the 1950s through 1990s, might be the best example of this kind of convergence…My provisional theory is that LLMs, as a consumer product, will push people’s senses of reality closer together in a sort of mirror image of the way social media has fractured them…They are centralized systems that, until you prompt them or give them context, behave basically the same way for everyone.
Let’s unpack this a little. If I’m a Democrat, and I talk to other people about politics, it’s likely I’m talking to other Democrats. This is even more likely on social media than in real life — some of my neighbors and coworkers might be Republicans, but on X or Bluesky I can just seek out other Democrats. Those other Democrats also mostly talk to other Democrats, and so on. So an echo chamber builds, where people’s ideas get reinforced and polarized. If I do interact with a Republican online, it’s probably in an adversarial context — I’m shouting at them or being shouted at, which just tends to harden me in my Democratic views.
But when I talk to an AI, it’s a different story. The AI’s opinions and beliefs come from its training data,2 and that data comes from both Democrats and Republicans. Instead of getting the average of my social circle, I’m getting something closer to the average of the country. If AI has any persuasive power at all, it’ll end up pulling me towards the middle.
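To make this averaging logic concrete, here’s a deliberately minimal toy simulation — the model, numbers, and parameters are all my own invention, not drawn from any of the cited papers. One agent repeatedly updates its opinion toward its in-group’s average (the echo chamber case), while an identical agent updates toward the whole population’s average (a stand-in for an LLM trained on everyone’s output):

```python
import random

random.seed(0)

# Toy model (purely illustrative): opinions live on a left-right scale,
# with one cluster near -0.5 and another near +0.5.
population = [random.gauss(-0.5, 0.15) for _ in range(500)] + \
             [random.gauss(+0.5, 0.15) for _ in range(500)]
national_mean = sum(population) / len(population)
in_group_mean = sum(population[:500]) / 500  # the -0.5 cluster only

def update(opinion, influence, weight=0.1):
    """Pull an opinion a small step toward whatever source it listens to."""
    return opinion + weight * (influence - opinion)

# Same starting opinion, two different information diets.
echo, llm = -0.4, -0.4
for _ in range(50):
    echo = update(echo, in_group_mean)   # only hears the in-group
    llm = update(llm, national_mean)     # hears the whole distribution

print(f"echo chamber: {echo:+.2f}, talking to AI: {llm:+.2f}")
```

Under these (invented) assumptions, the echo-chamber opinion settles near the partisan cluster while the AI-mediated opinion drifts to roughly the national average — exactly the pull-toward-the-middle effect described above.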
And AI does have persuasive power. Chen et al. (2026) find that recent LLMs are more persuasive than campaign advertisements. Hackenburg et al. (2025) also find substantial persuasive capabilities.
So LLMs are a natural source of moderation — when people talk to AI, they are indirectly being persuaded by the opinions of a bunch of people who disagree with them. This also means that LLMs are censoring the tails of the idea distribution. AI is trained on the output of a much broader group of people than the extremist shouters who tend to grab attention on social media; it will naturally tend to side with the silent majority in most cases.
This process should end up pushing people’s opinions closer to some sort of consensus, whether or not the consensus is right.3 In fact, there’s some evidence that AI homogenizes people’s ideas. This is from Sourati et al. (2026):
We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts.
And this is from Jiang et al. (2025):
[W]e present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs.
Now at first blush, this might sound bad. I don’t want humanity to turn into a literal hive mind! And of course it’s worth remembering that although we now romanticize the 1950s, at the time people felt stifled by conformity. There should be a middle ground between anarchy and pod people.
But if you think social media has pushed society too far in the direction of anarchy, then you’ll welcome a bit of a push back in the direction of consensus. A country can’t get anything done if everyone is always at each other’s throats. Nor did fragmentation and polarization “democratize” our information space — they marginalized the silent majority of moderate normies, and handed control of our thoughts to some of the worst extremists in our society. In a way, by giving voice to the center of the distribution, AI may be a more truly democratizing force in our discourse than the internet itself ever was.
Perhaps the only thing that can save us from ten thousand Digital Charles Coughlins is a Digital Walter Cronkite.
In the U.S. there was also something called the Fairness Doctrine, which required broadcast media to be even-handed, whose legal justification was predicated on the broadcast spectrum monopoly.
And from synthetic data generated from that training data, and occasionally from reinforcement learning (but more for math and coding than for politics and debate).
Interestingly, Hackenburg et al. find that AIs persuade people by throwing a blizzard of information at them, and that this information is often wrong; it often decreases the factual accuracy of humans’ beliefs. This should serve as a reminder that homogenization of belief and moderation of belief are not the same thing as factualness or education; getting everyone to believe the same thing, and getting them to believe the correct thing, are different tasks.
2026-03-17 16:58:01

This roundup is in honor of Chris Sims, the extremely influential macroeconomist, who has just passed away. Item #4 even features some evidence for the Fiscal Theory of the Price Level, which he helped develop.
But first, podcasts. I went on the Members of Technical Staff Podcast with Jayden Clark to talk about the politics of the tech industry, and we ended up talking about a ton of fun stuff:
Anyway, on to the roundup. Before we get to the macro stuff, let’s talk about one of America’s worst public intellectuals…and a little about AI.
Paul Ehrlich, the author of The Population Bomb and a relentless advocate for population control, has died. One general rule of punditry is supposed to be that you don’t speak ill of the dead. But on the other hand, what if the dead had some really, really bad ideas?
We all know the story of why Ehrlich was wrong. He predicted that the world would run out of food, producing catastrophic famines in the 1970s. Based on those predictions, he called for things like cutting off emergency food aid to India, reasoning that if people were saved from starvation today, it would just mean more people to die of starvation later on. But new farming techniques known as the Green Revolution created enough calories to feed the whole world with plenty to spare. The Population Bomb came out in 1968; by then, famines were essentially already a thing of the past:
And fertility rates fell without the kind of draconian, dystopian population controls that Ehrlich constantly called for. The main country that listened to Ehrlich was China, and their One-Child Policy turned out to be quite unnecessary for reducing fertility rates — as well as being totalitarian, cruel, and dystopian.
What people don’t know about Ehrlich is how relentlessly he kept promoting his ideas and haughtily dismissing his critics, even after it had become clear that he had been completely wrong. A man who had endorsed nightmare policies in service to a broken theory simply never reckoned with this monumental failure, and continued to self-aggrandize and to evangelize for his old mistakes.
And in fact, Ehrlich’s bad ideas have survived and even thrived, in the form of the “degrowth” movement that’s popular in the UK and parts of Europe. Today’s degrowthers call for immiserating the developed-world middle class instead of starving India to death and throwing people in prison for having too many kids, which I suppose is an improvement. Still, the idea is fundamentally based on the same old fallacies that Ehrlich never stopped pushing — that humanity has overstepped its bounds and must be forcibly diminished.
One of the most interesting results in theoretical economics is the Grossman-Stiglitz Paradox.
Have you ever heard of the Efficient Market Hypothesis — the idea that financial market prices already incorporate all available information about the value of the underlying assets? Well, in 1980, Sanford Grossman and Joseph Stiglitz showed why the EMH can’t be quite right. The idea is pretty simple: It takes effort to find information. Who is going to go out and spend the effort to find out information about what stocks or bonds or houses are really worth, if they can’t make money trading on that information? And if no one spends the effort to find the information, how can it ever be incorporated into the price in the first place? Grossman and Stiglitz concluded that financial markets must be at least somewhat inefficient.
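The Grossman-Stiglitz logic can be sketched numerically. In this toy version (all numbers invented, and the linear profit function is my simplification, not theirs), the profit from becoming informed shrinks as more traders become informed, and traders enter until profit just covers the research cost — leaving prices slightly inefficient in equilibrium:

```python
# Hypothetical toy version of the Grossman-Stiglitz logic (numbers invented).
COST = 1.0          # effort cost of researching an asset's true value
MAX_PROFIT = 5.0    # trading profit if you're the only informed trader

def profit(informed_fraction):
    # Mispricing (and hence trading profit) shrinks as more traders
    # become informed and push prices toward fundamental value.
    return MAX_PROFIT * (1.0 - informed_fraction)

# Equilibrium: traders keep entering while information is profitable.
# Bisection search for the fraction where profit(f) == COST.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if profit(mid) > COST:
        lo = mid      # still profitable -> more traders become informed
    else:
        hi = mid
equilibrium = (lo + hi) / 2

print(f"equilibrium informed fraction: {equilibrium:.2f}")
# -> 0.80: only 80% of traders become informed, so prices never become
# fully efficient -- at full efficiency, research would be a pure loss.
```

The punchline matches the paradox: an informed fraction of 1.0 (a fully efficient market) can never be an equilibrium, because at that point nobody is paid for the effort of finding information.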
Now, Daron Acemoglu, Dingwen Kong and Asuman Ozdaglar have posited a similar problem for AI. I’m usually not a fan of Acemoglu’s papers on AI, but I think this one gets to an important and fundamental insight.
Acemoglu et al. write that if generative AI puts all the information of the world at people’s fingertips, then people will have no incentive to go out and learn new things, which will then prevent them from accidentally finding new knowledge to add to the world’s total knowledge base:
We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem…Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers…recommendations that substitute for human effort…[W]hile agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge…[T]he economy can tip into a knowledge-collapse steady state in which general knowledge vanishes ultimately, despite high-quality personalized advice.
Basically, Acemoglu et al. posit that humanity as a whole learns new things when individual humans try to reinvent the wheel — to discover things on their own instead of just looking them up. This wastes a lot of effort, but it also adds to the overall knowledge base.
The idea here is that AI makes everyone really lazy — instead of trying to write a piece of code from scratch, or prove a math theorem from scratch, or figure out some piece of knowledge for yourself, you just ask AI to do it all for you. Everyone ends up getting the right answers to questions whose answers are already known, and no one ends up adding anything new. It’s the Grossman-Stiglitz Paradox, but for everything.
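The feedback loop can be sketched as a toy dynamic — to be clear, this is my own invented caricature, not the actual Acemoglu-Kong-Ozdaglar model, and all the parameters are arbitrary. Human learning effort falls as AI quality rises, but AI quality depends on the accumulated stock of human-generated knowledge, which decays unless replenished:

```python
# Invented toy dynamic (not the Acemoglu-Kong-Ozdaglar model itself).
DECAY = 0.05        # fraction of the knowledge stock that goes stale each period
EFFORT_GAIN = 0.1   # knowledge added per unit of human learning effort

def human_effort(ai_quality):
    # The better the AI's answers, the less people learn for themselves.
    return max(0.0, 1.0 - ai_quality)

def simulate(ai_strength, periods=200):
    knowledge = 1.0
    for _ in range(periods):
        # AI quality is capped at 1 and depends on human-generated knowledge.
        ai_quality = min(1.0, ai_strength * knowledge)
        knowledge += EFFORT_GAIN * human_effort(ai_quality) - DECAY * knowledge
        knowledge = max(0.0, knowledge)
    return knowledge

weak_ai = simulate(ai_strength=0.3)    # people still learn for themselves
strong_ai = simulate(ai_strength=2.0)  # AI substitutes for human learning
print(f"weak AI: {weak_ai:.2f}, strong AI: {strong_ai:.2f}")
```

With these made-up numbers the strong-AI economy settles at a much lower knowledge stock than the weak-AI one (erosion rather than total collapse); in the paper’s richer model, the analogous feedback can tip the economy all the way into a state where general knowledge vanishes.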
In fact, you can sort of see hints of this happening already. Website traffic is collapsing, as people read AI instead of websites. Tech publications, for example, are rapidly losing their readership:

And using AI to code causes programmers’ skills to atrophy.
My first observation here is that this also applies not just to AI, but to the internet itself. Yes, people can ask an LLM to teach them about math or write some code for them. But they could also ask Math Stack Exchange and Stack Overflow, even before LLMs existed. And the same problem arises — if all of the world’s knowledge is there at your fingertips, there’s no reason to waste your time reinventing the wheel. But as Neal Stephenson wrote as far back as 2011, this can lead to a lack of novelty, as everyone just copies what’s been done before.
And this leads me to my second thought: What if AI can also produce new knowledge? AI, after all, is prone to hallucination — i.e., random errors. If agents are out there randomly trying the wrong thing, occasionally they’ll discover something new. If there’s a way for those accidental discoveries to get incorporated into the general body of AI knowledge, then perhaps AI can grow the total knowledge stock instead of shrinking it. All that’s needed is to stop forcing humans to be the sole long-term repository of knowledge. How to do that, of course, I don’t know.
The Iran War is making everyone afraid to go through the Strait of Hormuz — the key maritime choke point that a significant part of the world’s oil must pass through in order to reach the world market. Iranian strikes and mines have effectively closed the strait, and European countries are refusing to help America reopen it (which is perhaps only natural, given Trump’s threats to seize Greenland from Europe, and his withdrawal of aid from the Ukraine war). As a result, oil prices have skyrocketed:
What will be the economic result? Fortunately, this is one of the rare areas where macroeconomists are actually able to make some predictions. Closure of key shipping routes is a thing that occasionally happens, and when it happens we can look at the short-term results and get a pretty clean picture of the effect.
That’s what Diego Känzig and Ramya Raghavan did last year in a paper entitled “The Macroeconomic Effects of Supply Chain Shocks: Evidence from Global Shipping Disruptions”. Basically, they look at similar incidents in the past, and try to quantify the economy’s average response. Here’s the picture they come up with:

Basically, commodity prices (e.g. oil) go up, inflation goes up as a result, and U.S. industrial production suffers.
Can we expect the same thing to happen this time? Maybe. One big change from the past is that thanks to the shale oil boom, the U.S. is now a net oil exporter, rather than a net importer:

That means that U.S. oil companies will see a big windfall from the war. But the inflation bump resulting from higher input prices will probably still happen, and oil-consuming industries — chemicals, transportation, etc. — will still probably suffer.
Governments all over the world are running up enormous levels of debt, so it’s important to know what the risks of that are. You can always get your central bank to lower interest rates to make government debt easier to refinance, or even have it print money to buy government debt directly. The problem is that this can cause inflation to rise. A macroeconomic theory called the Fiscal Theory of the Price Level — which drew heavily on Chris Sims’ ideas — predicts a tight relationship between the two.
Progressive macroeconomics types typically pooh-pooh this danger, pointing to cases like the Great Recession, or Japan in the 1990s and 2000s, where soaring levels of government debt didn’t lead to inflation. But Covid may be a counterexample to this complacency. A number of macroeconomics papers have come out recently that establish what looks like a link between Covid borrowing and subsequent post-pandemic inflation.
For example, Barro and Bianchi (2024) find that government spending “has substantial explanatory power for recent inflation rates across 20 non-Euro-zone countries and an aggregate of 17 Euro-zone countries”. And Reis (2026) finds that “the unexpected worsening of fiscal surplus during the period during and after the pandemic is strongly correlated with the unexpected increases in inflation.”
Reis blames America’s borrowing binge — primarily Trump’s CARES Act and its follow-up bill, but also Biden’s American Rescue Plan — for America’s higher rate of inflation after the pandemic:
How much did public deficits contribute to the inflation surge of 2021-24?…A popular argument notes that inflation rose in the US by almost as much as in other OECD countries. Yet, the US had a large fiscal stimulus in 2021 that most other countries did not. Therefore, the US fiscal stimulus did not contribute to the inflation surge. Is that right? No, it is not.
To inspect this claim, you can use expectations data…[Here’s a] plot [that] compares the unexpected high deficits with the unexpected high inflation terms for OECD countries, using the common units of their impact on the public debt…For countries that ran higher unexpected fiscal deficits, inflation was also unexpectedly higher.
And here’s his chart:

That’s not the tightest relationship I’ve ever seen, or the steepest slope. But it’s not nothing, either. And it’s worth remembering that Olivier Blanchard managed to predict the surge in inflation in advance, just by looking at how much the U.S. government was borrowing back in 2021.
Progressive pundits and Democratic think-tankers who like to hand-wave away the dangers of deficits need to think again. America is up in arms about the cost of living, and if Democrats get in power and just borrow more and more and more, it could make the problem worse.
I wrote a book about the promise of foreign investment in Japan. When I was on the book tour last year, a bunch of people, both Japanese and otherwise, asked me: “What industries should foreigners invest in in Japan?” My first answer was always the same: Robotics.
In a world where software is increasingly ruled by AI, robotics is the next frontier. But it’s a lot trickier — you have to combine AI techniques with a lot of hardware know-how. A lot of people think that this know-how resides primarily in China, because they look at charts of robot adoption. China has a lot of factories, and it has a lot of cheap bank loans that factories can use to buy robots, and so China buys a lot of robots. It’s also becoming more self-sufficient in the industry — making more of the robots it installs.
But this doesn’t mean China has caught up in the robot industry, or come to dominate it the way it has dominated the electric vehicle industry. In fact, most of China’s robots are still low-end, mass-market stuff; producing high-end robots takes many years of careful practice and accumulated tacit know-how.
Japan has this know-how. And so as AI increasingly pushes into robotics, Japan will be an increasingly important partner for the U.S. James Riney of Coral Capital has an excellent post in which he explains why Japan’s robotics expertise is the perfect complement to America’s strength in AI:
If the US wants real, functional robots that can survive a 10,000-hour duty cycle in a factory rather than a 5-minute demo on X/Twitter, Japan is here to the rescue…
The body of a humanoid robot is an engineering nightmare of competing constraints. Strong but lightweight. Blinding speed but sub-millimeter precision. Massive heat dissipation without cooking its own battery. And it needs to do this millions of times without fatigue…This is where Japan excels…
The single biggest misconception in the humanoid hype cycle is the difference between a demo and a deployment…A robot that looks impressive dancing in a pre-programmed video is operating under “Short-Duration Peak Performance.” It pushes its motors and gears to the limit for a few minutes. But industrial customers don’t buy demos….A robot on [a production] line needs a Mean Time Between Failures of 5,000 to 10,000 hours…This is the Reliability Cliff. Most entrants from the software-first ecosystem, and many low-cost Chinese clones, fall off this cliff at around the 1,000-hour mark. Their gears develop backlash, their lubricants break down, and their positional accuracy drifts…
Japanese companies like Harmonic Drive Systems and Nabtesco have spent fifty years solving these problems. They have mastered the black art of tribology, metallurgy, and heat treatment…If you peel back the skin of almost any high-end robot today, whether it is building cars in Germany or sorting packages in an Amazon warehouse, you will find Japanese logos inside…According to Japan’s Ministry of Economy, Trade and Industry (METI), Japanese manufacturers hold an impressive 70% of the global market share for industrial robots…
The battle for robotics dominance is not a story of the US vs China. China would likely win that battle. It is a story of the US & Japan (and allies) vs China…For now, and for the foreseeable future, if you want a robot that works, you need to knock on Japan’s door.
Wise words. American startups, AI companies, and government agencies need to listen to James.
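To get a feel for what the "Reliability Cliff" numbers in Riney's post imply, here's a rough back-of-the-envelope calculation of my own (not from his post): under the standard assumption that failures arrive at a constant rate, the probability a machine survives t hours without a failure is exp(−t/MTBF).

```python
import math

def survival_prob(hours: float, mtbf: float) -> float:
    """P(no failure within `hours`), assuming exponentially distributed failures."""
    return math.exp(-hours / mtbf)

# One year of two-shift industrial duty is very roughly 2,000 hours.
# Compare a unit stuck at the ~1,000-hour cliff with a ~10,000-hour MTBF unit:
print(round(survival_prob(2_000, 1_000), 3))   # → 0.135
print(round(survival_prob(2_000, 10_000), 3))  # → 0.819
```

Under that (simplified) model, the cliff-dwelling robot has about a 14% chance of getting through a year of factory duty without a failure, versus roughly 82% for the industrial-grade unit. That order-of-magnitude gap is why demos don't translate into deployments.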
There has been a big political realignment in the U.S. — and in many other countries — in recent years. Center-left parties, like the Democrats in the U.S. and Labour in the UK, used to primarily be the parties of the working class. But in recent years, their voter bases have shifted — they have become the parties of educated high-earning professionals, while working-class voters have drifted to the right. Here’s Rogé Karma:
In 2008, the top fifth of earners favored Democrats by just a few percentage points; by 2020, they were the group most likely to vote for Democrats and did so by a nearly 15-point margin. (Democrats won the poorest fifth of voters by a similarly large margin.) Democrats now represent 24 of the 25 highest-income congressional districts and 43 of the top 50 counties by economic output. A similarly stark shift has occurred if you look at college education rather than income. Perhaps most dramatic of all has been the change among wealthy white people. Among white voters, in every presidential election from 1948 until 2012, the richest 5 percent were the group most likely to vote Republican, according to analysis by the political scientist Thomas Wood. In 2016 and 2020, this dynamic reversed itself: The top 5 percent became the group most likely to vote Democratic.
And here’s a chart:

For the most part, Democrats have kept their pro-working-class politics, even as they represent the working class less and less. They’ve supported unions even as unions have abandoned them at the polls. They’ve pushed for more welfare and health spending, even as the benefits have flowed more to red states than to blue ones. This is commendable.
However, this class altruism doesn’t extend to all types of policy. Progressives have fought hard for student debt cancellation, even though people who go to college are pretty obviously the main beneficiaries of that. And on taxes, Democrats have shifted from their old strategy of taxing the rich to a new strategy of taxing only the hyper-rich while cutting taxes for the merely-rich. Matt Yglesias reports:
Chris Van Hollen and Cory Booker both recently introduced proposals to raise taxes on the very rich in order to finance broad-based tax cuts for the rest of the country…[T]he existing progressive structure of the income tax code means that any broad-based income tax cut is going to be regressive. Check out this Yale Budget Lab estimate of Van Hollen’s plan — he makes sure to soak the rich, but he does more with the money for the comfortable than for the struggling. Booker’s plan is even worse in this regard…
[L]ooking at the distributional tables for the 1993 budget…that Bill Clinton signed…it’s almost shocking how broadly he raised taxes…[B]y Obama’s time, willingness to enact broad-based tax increases was waning…Obama vowed not to raise taxes on anyone earning less than $250,000 (roughly $360,000 in today’s dollars), which meant in practice being willing to extend a majority of the Bush tax cuts…Except vulnerable senate Democrats lost their nerve and pushed to extend tax cuts up to $450,000 — or nearly $650,000 adjusted for inflation today.
Basically, as Democrats have become the party of the somewhat-rich, they have begun to embrace tax cuts for the somewhat-rich.
But without broad-based taxes, America will never be able to rein in its deficit or increase the welfare state further. Billionaires have a ton of money individually, but collectively there just aren’t enough of them to support the fiscal needs of a country like the United States. If we want broadly shared benefits, we will need broadly shared sacrifice.
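To put very rough numbers on that claim (my own ballpark figures, not from any source quoted here): combined U.S. billionaire wealth is on the order of $6 trillion, while the federal deficit has recently been running around $1.8 trillion a year. Even a one-time 100% confiscation would cover only a few years of deficits:

```python
# Ballpark mid-2020s figures in USD -- my own rough assumptions:
billionaire_wealth = 6.0e12   # approx. combined net worth of all US billionaires
annual_deficit = 1.8e12       # approx. recent US federal budget deficit

years_covered = billionaire_wealth / annual_deficit
print(round(years_covered, 1))  # → 3.3 years, even seizing everything once
```

And a confiscation is a one-shot deal; a realistic annual wealth tax would raise a small fraction of that, which is why sustained spending requires a broad tax base.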
The Democrats, comfortable in their newfound identity as the party of millionaires-against-billionaires, are no longer calling for broadly shared sacrifice. Instead, the best populism they seem able to muster is an attack on one group of elites by another.
“Blow up your TV/ Throw away your paper/ Go to the country/ Build you a home/ Plant a little garden/ Eat a lot of peaches/ Try and find Jesus/ On your own” — John Prine
I’m generally a techno-optimist, but I make an exception for at least one technology: smartphone-enabled social media. In the long run, I expect us to be able to adapt in order to use this technology to our net benefit. But in the short run, I think it has devastated our politics, destroyed many of our social bonds, and made us less happy in general.
A research project called the Global Mind Project has tried to assess mental health across the globe, using a huge survey with millions of respondents. Their latest report zeroes in on the deleterious effects that smartphone usage has had on the well-being of Gen Z. Here’s Jonathan Haidt’s summary:
Young adults used to generally have good mental health, compared to older generations. But now, in ALL countries examined, they are doing badly compared to older generations in that country…The decline of young people's mental health is "most pronounced in the wealthier and more developed countries." They note that it is in such countries that smartphones are given earliest, junk food is most heavily consumed, spirituality is most diminished, and family ties are looser and often weaker…"A younger age of first smartphone ownership is associated with increased suicidal thoughts, aggression, and other problems in adulthood."
And this is from the report itself:
GenZ is the first generation to grow up with a smartphone. Among this group, the younger they acquired their first smartphone in childhood, the more likely they are to have struggles as adults. These struggles extend beyond sadness and anxiety to less discussed symptoms, such as a sense of being detached from reality, suicidal thoughts, and aggression towards others…Excessive time spent on smartphones also diminishes the development of social cognition that requires learned interpretation of facial expressions, body language, and group dynamics. The negative impacts are particularly sharp below age 13.
Fortunately, some young people seem to be realizing that the phones are bad for them. Here’s a recent story from CNBC:
Going chronically offline is the latest trend to grip young people, and ironically it's going viral on social media…I received nearly 100 responses from Gen Z and millennials sharing stories about social media detoxes and digital burnout…They talked about ditching their smartphones for flip phones, visiting record stores to buy vinyl, taking up analog hobbies like knitting, and most importantly, connecting with their friends in person.
A 2025 Deloitte consumer trends survey of more than 4,000 Brits found that nearly a quarter of all consumers had deleted a social media app in the previous 12 months, rising to nearly a third for Gen Zers…Meanwhile, social media use has steadily declined since time spent on the platforms peaked in 2022, according to an analysis of the online habits of 250,000 adults in more than 50 countries by the Financial Times and digital audience insights firm GWI…Globally, adults 16 and over spent an average of two hours and 20 minutes per day on social platforms by the end of 2024, down almost 10% since 2022, the report found. The decline was particularly pronounced among teens and 20-somethings…
Young people who are deleting their social media platforms cite the increasing pressures of being online as well as damage to their mental health as causes…Deloitte’s consumer survey showed that almost a quarter of respondents who deleted social apps reported these apps had negatively impacted their mental health and consumed too much of their time.
This is actually the kind of thing that makes me such a techno-optimist. In the short run, a new technology’s drawbacks can outweigh its benefits. But in the long run, humans learn and adapt. And in the case of smartphones, the right adaptation may simply be to get off social media.
2026-03-15 16:45:03

“Imagination/ That’s the way that it seems/ A man can only live in his dreams” — The Flaming Lips
“No future/ No future/ No future for you” — The Sex Pistols
If you have kids — or if you’re planning to have kids in the future — I want you to think about a question: How will you make sure your kids have a successful life?
Obviously, this isn’t a question that anyone can ever answer with certainty. But ten years ago, in 2016, you could have given a pretty good answer. You’d work hard and save money and invest wisely, so you would have enough family wealth to cushion against unexpected shocks. You’d teach your kid good values, make sure they went to a good school, and send them to a good college. You might even encourage them to enter a promising elite professional field, like software engineering, medicine, or law. If you did all of this, you could be reasonably confident that your child would grow up to be at least economically secure, and probably upwardly mobile as well.
What answer would you give now, in 2026? Do you have any confidence that colleges — even top colleges — will actually teach your kid the skills they need to make it in a job market defined by AI? What field of study could you recommend to your child, knowing that there’s a possibility it will be automated by the time they finish studying it? Will even family wealth be enough to protect your descendants, in a world where land and energy are being gobbled up for data centers?
The sudden rise of artificial intelligence has cast a great fog over our future. It may bring wonders beyond our comprehension — the end of aging and disease, material hyperabundance, digital worlds to suit our every desire, expansion into outer space. Or it might bring chaos and destruction, as rogue agents wreak havoc with bioweapons and drones. Or it might become a superintelligence that turns us all into house pets.
Your kids might be chronically unemployed, as the CEO of ServiceNow recently predicted. Or AI tools might turn them into highly paid super-workers, as the founder of Uber recently predicted. The truth is that they don’t know, I don’t know, and neither do you. Financial markets don’t know. The people actually building AI certainly don’t know. The future is a blank wall of fog, rushing toward us at top speed, and nobody knows what to do.
Plenty of people have predicted this. It’s called a Technological Singularity — a period of accelerated technological change so rapid that it’s impossible to predict what life or society will look like afterwards. You can argue that the Industrial Revolution was a kind of Singularity, moving humanity in today’s developed countries from the edge of starvation to material abundance. Who could have predicted, in 1890, what life in 1990 would look like? And the AI revolution is happening much faster, promising to compress a century’s worth of change into a couple of decades.
AI may be the biggest thing casting a fog of uncertainty over our future, but it’s not the only thing. The political chaos of the last decade, and especially the governing style of the second Trump administration, has swept away much of what we thought we knew about American society. The rise of China has raised the possibility that global power will now reside with totalitarian countries instead of democratic ones. The possibility of another world war looms.
Now here’s the crucial point — even back in 2016, this period of rapid change was on the way. Most people just didn’t see it coming. Everyone who thought their kids would be safe if they just followed the standard 2016 playbook — a good college, a professional career — was wrong. They just didn’t know they were wrong yet.
But because they didn’t see what was coming, they were optimistic. Back in 2016, 69% of Americans expected a good life in the future — a number that’s now down to only 59%:

Even during Covid and the Great Recession, American optimism about the future didn’t waver. We “knew” — or at least we thought we knew — that we would recover from those shocks, and be able to live a good life. We might have been wrong, but we thought we could see the future — and it was those extrapolations that comforted us, even as we endured one shock after another.
It occurs to me that this can also explain why Americans are so nostalgic for the 1990s and the early 2000s.
2026-03-13 03:41:06

The other day I did something I’ve never done before: I made a major political donation.[1] I gave $10,000 to GrowSF, a political advocacy organization that focuses on local elections in San Francisco. They’re going to use the money to support Alan Wong in the upcoming special election for District 4 supervisor.
Usually, I’m pretty pessimistic about the ability of political donations to affect the course of society. The influence of money in politics is exaggerated in general, and the amount that I’m personally able to contribute is pretty modest; in almost all cases, I think I’ll probably have a bigger impact just by writing blog posts. But in this particular case, I think I might actually be able to make a noticeable difference by donating a little bit of money — especially because it gives me a good excuse to write about the political situation in San Francisco.
Basically, for a number of years, San Francisco was the poster child for a style of progressive urban governance that has been failing in cities across the country. I wrote about this governance debacle shortly after Trump was elected in 2024:
In the 1990s and 2000s, America’s big cities had an urban revival. Pragmatic liberals like Michael Bloomberg in New York City and Ed Lee in San Francisco were some of the most important leaders of this revival. They recognized the value of business as the city’s tax base, and they recognized the importance of public order for maintaining a livable urban environment. They were not perfect; they failed to build sufficient housing, setting the stage for the urban housing crisis of the 2010s and 2020s, and they continued or accelerated the unfortunate trend of outsourcing city government functions to nonprofit organizations. But overall, they were successful in turning American cities into places that people actually wanted to live in again.
As people — especially people with money — moved back into America’s cities in the 1990s and 2000s, the housing crisis worsened, because cities didn’t meet the increase in demand with an increase in supply. But at the same time, America was sorting itself politically — the big cities leaned increasingly to the left.
That political shift enabled the rise of a new, radical kind of urban progressive ideology. If the old liberalism had been complacent about the need for housing supply, the new progressivism was downright hostile to it; drawing on the anti-gentrification movements of a previous generation, hardline progressives embraced the mistaken idea that allowing the construction of new apartment buildings raises rents.
In fact, an overwhelming amount of evidence supports the fact that allowing new housing reduces rents for everyone. But in refusing to hear that evidence, urban hardline progressives have essentially allied themselves to an old-money NIMBY gentry that wants to keep cities frozen in amber with development restrictions.
At the same time, the new urban progressive ideology became extremely tolerant of public disorder — property crime, low-level violent crime, public drug markets, and threatening street behavior. Cracking down on these social ills was viewed as unacceptably harmful to the perpetrators; in other words, hardline progressives came to view anarchy as a form of welfare policy.
Penalties for minor crimes were reduced, enforcement of public drug markets was curtailed, and citizens were even forbidden from defending their own businesses from criminals. “Tent cities” were tolerated despite being riddled with violent crime, police budgets were slashed, progressive prosecutors like San Francisco’s Chesa Boudin prosecuted fewer crimes, dangerous repeat offenders were regularly allowed back onto the streets, and so on. Inevitably, poor people were the ones most heavily impacted by the epidemic of crime and drug use that this anarchy enabled.
Together, high housing costs and rampant public disorder made America’s big blue cities no longer the envy of the world. Meanwhile, hardline progressives simply doubled down — responding to high housing costs with yet more restrictions on development, and responding to disorder with yet more tolerance of disorder, all while funneling increasing portions of the city budget to well-connected nonprofits that often turned out to be ineffectual and corrupt.
In San Francisco, this hardline progressivism did not come from the mayor’s office. Most policy decisions in SF are carried out by — or must be signed off on by — the powerful Board of Supervisors. The Board of Supervisors writes the laws, approves and amends the city budget, confirms mayoral appointments, and exercises veto power over almost any major reform effort.
For many years, San Francisco had a moderate liberal mayor but a hardline progressive majority on the Board of Supervisors. Mayors wanted to build more housing and crack down on disorder and crime, but the progressive supermajority on the Board would not allow them to do so. Mayors like London Breed often took the blame for the city’s descent into unaffordability and chaos, but the prime culprit was always the hyper-progressive Board.
Under the aegis of hyper-progressive city government, San Francisco had the highest property crime rate in the nation in the late 2010s, and became one of America’s least affordable cities. The pandemic only accelerated these trends — the city’s population crashed and failed to recover, the streets became open-air fentanyl markets, transit ridership plummeted and didn’t bounce back, and housing production crashed from low levels to almost nothing. Malls closed, businesses pulled out, and downtown felt like a post-apocalyptic wasteland long after most other cities had recovered their verve.
Then, in 2024, an election changed everything. The change everyone knows about is the election of Daniel Lurie as mayor.

Lurie made public order his #1 task. Within a year, crime had plummeted:
[O]verall crime in [San Francisco] went down by 25% in 2025, with the number of homicides reaching a level not seen in more than 70 years…Property crimes were down by 27%, while violent crimes were down by 18%…The mayor added that the city planned to keep on hiring new officers, following an executive directive he signed in May. In October, the department reported the largest surge of recruits in years…
The department also credited the Drug Market Agency Coordination Center in leading to more than 6,600 arrests in connection with drug-related activity. Officers said they had also seized more than 1,000 firearms and more than 56 pounds of fentanyl…Meanwhile, retail theft operations have led to key arrests, resulting in reductions in larcenies and retail thefts.
Other notable crime trends touted by city officials include a 16% decrease in shootings, robberies being down 24%, car break-ins down 43% and vehicle thefts being down 44%.
On the ground, the change is absolutely palpable. In 2023 I would see thieves ripping pieces out of car engines in broad daylight. Almost every day I walked past throngs of drug users (and probably dealers). Every woman I knew was harassed on the street or on the train. There were needles and human feces on the ground everywhere. Stores were boarded up, train cars ran almost empty, tent cities lined side streets and the spaces under overpasses. Now, most of that is gone — the streets aren’t clean, but they’re closer to NYC than to a developing-country slum.
Progress on housing has been slower, due to the dense thicket of existing regulations that must be hacked through, and the entrenched NIMBY interests that must be overcome, in order to actually get new housing built. Lurie passed a landmark upzoning plan, which doesn’t go nearly far enough but is a huge improvement on anything in recent decades. Now permitting is accelerating:
San Francisco’s infamously slow building permitting process may be getting faster…A city study published Thursday found that between January 2024 and August 2025, the timeline on permit approvals for new housing in San Francisco was cut by half — from an average of 605 days down to around 280 days…And permit applications that were filed within that 19-month window had even shorter turnaround times, at 114 days on average…
[A] state-commissioned report published in 2022 found that San Francisco was the slowest California jurisdiction to approve permit applications for housing projects…[But] Mayor Daniel Lurie has…focused on improving the city’s buildability, launching his landmark ‘PermitSF’ initiative to centralize the application process last year. In February, his office introduced an online portal that allows people to apply for certain types of permits.
It will take years for those permits to turn into actual homes. And the reforms that Lurie has managed to enact are only the tip of the iceberg in terms of what’s needed — much of which needs to be done at the state level.
But overall, things are looking up. Lurie’s approval rating reached 73% half a year into his mayorship (compared to 28% for his predecessor). In November it was still 71%. Everyone loves Daniel Lurie — and so do I. He’s not perfect, but no mayor has ever been perfect. His successful policies range far beyond what I’ve listed here — he’s added homeless shelter space, cut taxes on apartment buildings, removed anti-police activists from the Police Commission and appointed a better police chief, encouraged conversion of offices into homes, created free childcare policies and various early childhood programs, implemented policies to protect pedestrians and cyclists, cut various forms of red tape for housing and small business, streamlined business permitting, worked toward balancing the budget, and so on.
But here is the real point: Almost none of this would have been possible if the Board of Supervisors had still been controlled by hardline progressives.
The same election that brought Daniel Lurie into the mayor’s office also changed the composition of the Board. The “progressive” faction, which had enjoyed a supermajority on the Board, suffered a major defeat, with progressive stalwarts like Dean Preston being unseated by moderate liberals like Bilal Mahmood. The moderate liberal faction — which would be labeled strongly progressive in most of America, but who are regarded as centrists in San Francisco — gained a slim 6-5 majority on the Board.
Though Lurie has gotten most of the credit for SF’s turnaround, that slim Board majority was absolutely essential. The new laws Lurie has passed would not have been passed, nor would Lurie’s personnel appointments have been confirmed, had the Board been 6-5 in favor of the “progressives” instead of 6-5 in favor of the moderate liberals. A one-seat swing toward the hardline progressive faction would have meant a San Francisco that was still mired in all of the old urban dysfunction that progressive cities have been struggling with for a decade and a half.
And now that one-seat swing may actually happen, and San Francisco’s recovery might be derailed. District 4’s supervisor Joel Engardio, an important moderate liberal voice on the Board, was recalled last fall over his support for a highway closure. Lurie appointed Alan Wong to fill in the District 4 spot, but now Wong is facing a special election on June 2 to keep that seat. It’s a crowded field, and some of Wong’s rivals are very well-funded.
The other candidates in the race — Natalie Gee, David Lee, and Albert Chow — are all more opposed to Lurie’s pro-housing agenda than Wong is. If Wong loses, San Francisco’s reforms under Lurie so far probably won’t be repealed — at least not immediately. But the majority on many issues would flip back to the “progressives”, and further reforms would become much harder if not impossible. This would be especially harmful to the housing agenda, where upzoning efforts look promising but will require more years of sustained effort to reach fruition.
This is why I decided to give $10,000 to an organization supporting Alan Wong.[2] I don’t live in District 4, and I’m sure his opponents are very nice people, but this election is about more than just District 4 — the composition of the Board of Supervisors determines the destiny of the entire city of San Francisco. The Outer Sunset will benefit from a moderate liberal majority on the Board, but so will the rest of us.
My city’s chronic inability to build sufficient housing has hollowed it out. It has forced huge numbers of middle-class people, working-class people, and artists to move far away from the city, leaving SF to the rich and the rent-controlled. It has contributed to the homelessness epidemic, forcing people onto the streets and into the arms of the drug dealers. Under Daniel Lurie and the 6-5 moderate liberal majority on the Board of Supervisors, we were just now starting to address that gaping, decades-long deficiency. And now we could throw it all in the trash.
Over the past year, San Francisco has shown the nation a way out of the quagmire of hardline “progressive” governance that is hollowing out so many of our cities. But if this one supervisor race goes the wrong way, and Alan Wong loses, we could end up being a cautionary tale about how difficult it is for American cities to reject that self-destructive approach.
[1] I have made very small campaign donations in the past, on the order of $100.
[2] If you’d also like to donate to that organization, here’s a link where you can do that.
2026-03-11 07:13:08

The photo above is from the Battle of Khalkhin Gol in 1939. This “battle” lasted four months, and was actually just the main phase of an undeclared war between Imperial Japan and the Soviet Union that effectively began in 1935, four years before the official start of the Second World War. The USSR won the conflict through superior use of tanks, foreshadowing the eventual outcome of WW2 itself.
This example illustrates that although World War 2 officially began when Germany invaded Poland, conflicts that either foreshadowed the final conflagration or eventually merged with it began years earlier, in the mid-1930s. WW2 had foothills. I wrote about this back in 2024:
It’s possible that the world will avoid a world war in the first half of the 21st century. But if one does occur, I think future historians will see it as having had foothills as well. In the Syrian Civil War, the U.S. and Russia began to test their new hardware against each other, and their troops even clashed once. Russia’s invasion of Ukraine was the big shift, as it inaugurated a new era of great-power territorial conquest, began to harden global alliance systems, and pushed Europe to remilitarize.
Now we have the Iran War. The U.S. and Israel started the war, attacking Iran and decapitating much of its leadership. The Iranians, somewhat oddly, responded by launching missile and drone attacks on practically every Arab nation in the Middle East, causing some of them to threaten to join the war on America and Israel’s side.
In the short term, this conflict seems likely to peter out in a few days to weeks without decisive results. Militarily speaking, the U.S. and Israel have generally had their way with Iran, assassinating the leadership at will, achieving air supremacy, and degrading Iran’s missile and drone strike capability. But this seems unlikely to actually bring down the Iranian regime; protesters are generally not returning to the streets, still cowed after the regime massacred tens of thousands of them in January. Unlike in Syria, there’s no breakaway region or oppressed ethnic minority that can be armed from afar to bring down the regime; as long as Iran’s Revolutionary Guard and other security services remain unified and willing to keep shooting protesters in order to hang on to power, and there’s no ground invasion, it’s not clear who could actually topple the Islamic Republic in the next few weeks.
In the long term, of course, it’s a different story; the regime doesn’t look strong or stable. But Trump seems unlikely to be in for the long term; instead, he seems likely to quit the war soon, as he usually retreats from most of his initially bold moves. Trump recently called the war “very complete”, and his advisers are reportedly urging him to find a way out of the conflict.
One reason for this is that the Iran War has been fairly unpopular in America from the beginning:
About half of registered voters — 53% — oppose U.S. military action against Iran, according to a new Quinnipiac Poll conducted over the weekend. Only 4 in 10 support it, and about 1 in 10 are uncertain. A new Ipsos poll also found more disapprove than approve of the strikes…That’s similar to the results of text message snap polls from The Washington Post and CNN, both conducted shortly after the joint U.S.-Israel attacks began, which also indicated that more Americans rejected the military action than embraced it…A recent Fox News poll found opinions more evenly divided: Half of registered voters approved of the U.S. military action, while half disapproved.
Wars usually create a “rally round the flag” effect early on, and support only fades later; this war was unpopular from day one. Most Republicans seem to have conveniently forgotten that Trump ran as the candidate of peace, isolationism, and non-intervention. But Independents, who form the bulk of the American electorate now, have no partisan commitments that force them to conveniently forget. And they are rightfully wary of yet another American involvement in a Middle Eastern war — especially one that America started without being attacked first.
But there’s an even bigger reason Trump is looking for the exits — oil. Oil prices have been jumping wildly up and down, as everyone tries to figure out whether Iran will manage to disrupt oil production from the Persian Gulf (possibly by closing the Strait of Hormuz, possibly by destroying Gulf oil infrastructure with drones). But the general trend is up:

Higher oil prices mean higher gasoline prices, and higher inflation in general — both things that tend to make Americans very mad, and which they are already mad at Trump about. Gas prices are now shooting up:
So this war seems highly unlikely to result in Iraq War 2.0 — a massive U.S. ground invasion of Iran. Instead, it’ll probably end up like a bigger version of the Twelve-Day War last year — Iran’s defenses will be laid prostrate before the might of foreign air power, but the regime will survive.
(Again, in the long term, things look very bad for the Iranian regime. The economy is dysfunctional and crumbling, and high oil prices will provide only a temporary palliative. The regime’s popular legitimacy is gone after the January massacres. The entire Gulf has now turned against Iran, and Lebanon’s government has turned against Hezbollah. With Syria now shifting into the Israel/Gulf camp and Hamas basically a spent force, Iran has only one effective proxy left — the Houthis in Yemen. This is not a recipe for long-term success.)
But anyway, this is all a bit of a sidetrack from the point of this post, which is about World War 3. The Iran War will probably not be the start of WW3, but I think it does bring us closer to the brink, in several ways.
First, in the Western theater — Europe and the Middle East — the coalitional lines are becoming clearer. When Trump was elected, a lot of people thought that America had effectively “switched sides” — that Trump viewed Putin as an ally against global wokeness, and the Europeans and the Ukrainians as betrayers of Western Civilization. I myself entertained this notion — there really was (and still is) a lot of this sentiment on the American right, and ending the Transatlantic Alliance was consistent with classic American right-wing isolationism.
But the narrative that “America is a Russian ally now” has been looking a lot shakier in recent months. First, the U.S. toppled a Russian proxy in Venezuela, and seized a bunch of Russian “shadow fleet” oil tankers. Elon Musk then shut the Russians off from using Starlink, allowing the Ukrainians to seize the initiative in the war. Now, the U.S. is trying to topple a key Russian arms supplier — Iran is the source of the Shahed long-range strike drone, which Russia has been using to bombard Ukraine’s cities from afar.
Russia didn’t leap to Iran’s defense. It has its hands full with Ukraine, and with planning for a possible wider war against Europe, and the U.S. is too powerful for it to fight. But the Russians did lend a hand, helping Iran to target U.S. forces:
Russia is providing Iran with intelligence about the locations and movements of American troops, ships and aircraft, according to multiple people familiar with US intelligence reporting on the issue…Much of the intelligence Russia has shared with Iran has been imagery from Moscow’s sophisticated constellation of overhead satellites[.]
This is similar to what the U.S. does for Ukraine. Russian targeting intelligence may have helped Iran take out some U.S. missile defense radar installations — almost certainly Iran’s most significant success of the war.
Meanwhile, Ukraine has leapt to the defense of both the U.S. and the Gulf countries being targeted by Iran’s fleets of attack drones. Long years of playing defense against Russia’s Iranian-provided Shaheds have given Ukraine tons of expertise in shooting this sort of drone out of the sky; now, the U.S. badly needs that expertise. America had rejected Ukraine’s help on anti-drone technology before, but it turns out military necessity usually trumps ideological bias.
As for Europe, they’ve certainly had a lot of tensions with the Trump administration, but most of the European countries haven’t opposed America’s actions in Iran the way they opposed the Iraq War a generation ago. Britain and France made some disapproving noises at first, but eventually acquiesced; only Spain tried to stand up and oppose Trump.
So for now, the coalitions in the Western theater look clearer than they did before — America, Ukraine, Israel, and Europe on one side, Russia and Iran on the other side. Various factions in the U.S. and Europe may despise each other, or despise Israel, or despise Ukraine, but at the end of the day, Russia and Iran are the greater enemies.
In the Eastern theater, things are less certain. India traditionally tries to be friends with America, Russia, Israel, and Iran all at once — this requires it to be effectively neutral when it comes to conflicts like the Ukraine War and the Iran War. China is supposedly on Iran’s side, but it has mostly limited itself to criticism of America’s actions.
The big question, of course, is whether the Iran War makes a Chinese attack on Taiwan more likely. One school of thought says yes, because the war has forced America to consider shifting missile defense systems out of Asia. On the other hand, the almost unbelievable American and Israeli success at finding and killing Iran’s top leaders seems to have given Chinese military analysts pause. China can outmatch the U.S. in terms of defense production, but if America could assassinate Xi Jinping and the entire CCP Central Committee in the early days of a war over Taiwan, that could be an effective form of deterrence.
So in a way, what we’re looking at now feels a little like the situation in 1935 or 1937. The Western theater today is like the Pacific theater then — wars and invasions that feel localized, and which don’t involve the most capable players, but which destabilize the world and have the potential to merge into a wider global conflict. Meanwhile, the Eastern theater today is more like the European theater of WW2 — it has the most powerful economies and militaries, but the alliances are still uncertain. If and when China attacks Taiwan, that will probably be similar to Hitler invading Poland — an unambiguous signal that a wider war has begun. It might happen, or it might not.
Meanwhile, the Iran War feels like the lead-up to World War 3 in another way — it’s showcasing and developing the technologies that would be central to a wider war. The Ukraine War has demonstrated that drones — FPV drones at the front, and Shahed-style strike drones behind the lines — are the key weapon of modern warfare. Similarly, America and Israel’s decapitation strikes on Iran have shown the power of AI for modern precision warfare. Here’s the WSJ:
The U.S. and Israeli attacks on Iran have unfolded at unprecedented speed and precision thanks to…a cutting-edge weapon never before deployed on this scale: artificial intelligence…AI tools are helping gather intelligence, pick targets, plan bombing missions and assess battle damage at speeds not previously possible…The use of AI in the campaign against Iran follows years of work by the Pentagon and lessons learned from other militaries. Ukraine—with U.S. help—increasingly relies on AI in its war against Russia. Israel has tapped AI in conflicts at least since the October 2023 Hamas attacks.
And this is from an article in Rest of World (a very underrated news source):
The U.S. military is using the most advanced AI it has ever used in warfare, with Anthropic’s Claude AI reported to be assessing intelligence, identifying targets, and simulating battle scenarios…The biggest role that AI now has in U.S. military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said…[T]he use of chatbots such as Claude in decision-support systems is new…
China is prototyping AI capabilities that can pilot unmanned combat vehicles, detect and respond to cyberattacks, and identify and strike targets on land, at sea, and in space, researchers at Georgetown University’s Center for Security and Emerging Technology said.
This is a bit reminiscent of how aerial bombing was used at Guernica in the Spanish Civil War, or how the USSR used tanks to beat the Japanese at Khalkhin Gol. If we ever do see an all-out war between America, China, Russia, Japan, and Europe, AI is going to be incredibly central to performance on the battlefield. That’s why for all the bad blood between the Pentagon and Anthropic, the two organizations have a huge incentive to patch things up and learn to cooperate more closely. (Fortunately, Anthropic’s CEO, Dario Amodei, is extremely patriotic, which will probably help.)
Unfortunately, new military technologies won’t just define the wars of the future — they also help cause them. Why did the world fight two World Wars in the early 20th century? Ideologies and competing empires certainly played a role, but it’s also probably true that the rise of industrial technology disrupted the existing balance of power.
Artillery manufacturing, logistics, and railroads made Germany a great power capable of defeating France in the 1870s; that upset the continental balance of power and caused the proliferation of alliances that led to WW1. In the interwar period, air power made America, Germany, and Japan more powerful, while the rise of tanks empowered Germany and the USSR, all at the expense of Britain and France. The rapid progress of industrial weaponry made it unclear where power really lay in the world, which probably made the great powers of the day more willing to roll the dice and test their strength against each other.
Countries may be more cautious now than they were a century ago. Nuclear weapons still exist, and still provide some deterrent to great-power war — though there are a lot fewer of them now than there used to be, and AI and missile defense make it possible to stop more of them before they hit. Countries are richer now too, which makes a war even less appealing from an economic perspective than in 1914.
But still, the rise of AI and drones means that no one knows who’s really the most powerful country in the world — the U.S. or China. And regional balances of power — Russia versus Europe and Ukraine, Iran versus Israel and the Gulf — are similarly uncertain. Uncertain balances of power are scarier than known balances of power.
So while World War 3 doesn’t seem imminent, we may be inching closer in that direction. If it sneaks up and surprises us, we’ll probably conclude that the Iran War was part of the lead-up.
2026-03-08 07:04:47
There are three basic facts you need to know about the U.S. macroeconomy right now:
1. The economy overall (growth, employment, inflation) is doing pretty well.
2. Productivity growth is unusually high.
3. Job growth is terrible.
Let’s start with some numbers. The latest GDP growth number we have is for late 2025, and it looks pretty solid — around 2.5%, about where it was in the late 2010s.
And most people still have jobs. Prime-age employment rates — my favorite single indicator of the labor market — are still really high. Higher than any time in the 2010s, actually:
If you look at unemployment, you can see a slowly rising trend since mid-2023, even if you restrict it to the prime age group. But this is entirely due to more people saying they’re looking for work — prime-age labor force participation has been steadily rising. So that’s not very scary either: more of the people without jobs are looking for work, instead of sitting on the sidelines.
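The mechanics here are just an accounting identity: the unemployment rate equals one minus the ratio of the employment-population ratio to the participation rate, so rising participation with flat employment mechanically pushes measured unemployment up. A minimal sketch with made-up illustrative numbers (not actual BLS figures):

```python
# Labor-market identity: U = 1 - (EPOP / LFPR), where EPOP is the
# employment-population ratio and LFPR is the labor force participation rate.
# All numbers below are illustrative, not actual BLS data.

def unemployment_rate(epop: float, lfpr: float) -> float:
    """Unemployment rate implied by the employment and participation rates."""
    return 1 - epop / lfpr

# Snapshot one: 80.5% employed, 83.0% participating
u_before = unemployment_rate(0.805, 0.830)

# Later: employment rate unchanged, but participation up to 83.8%
u_after = unemployment_rate(0.805, 0.838)

# Unemployment rises even though the share of people with jobs didn't fall
print(f"before: {u_before:.1%}, after: {u_after:.1%}")
```

In this toy example, measured unemployment rises from about 3.0% to about 3.9% without anyone actually losing a job — which is why a rising unemployment rate alongside rising participation isn’t alarming.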
Meanwhile, inflation is still in the 2.5% range — a little higher than we would like, but not particularly fast.
So in terms of the headline numbers, everything is kind of just bumping along. From a bird’s-eye view, this economy looks pretty normal and healthy. Under normal circumstances, I’d be inclined to not even write a post about the macroeconomy this month.
But underneath the surface, two interesting things are happening. The first is that productivity growth has accelerated; the second is that job growth has stalled out. On its face, this sort of pattern might suggest that AI is finally starting to take Americans’ jobs — and lots of people are suggesting this conclusion. But when we look closely at the numbers, the story becomes more complicated.
Start with productivity. Output per hour — also called “labor productivity”, which is sort of a quick, rough-and-ready measure of productivity — is growing significantly faster than it was in the late 2010s. It’s been at around 2.5-3% since late 2023, compared to more like 1-2% during Trump’s first term:
In fact, productivity is well above where economists thought it would be six years ago:

That’s a major acceleration. 2.8% labor productivity growth is about equal to the best decades we’ve seen since World War 2. If that rate is sustained for a decade, or accelerates further, it’ll be pretty historic.
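As a reminder of the accounting behind this number: labor productivity growth is just output growth minus hours growth (in log terms), so when job and hours growth is flat, essentially all of GDP growth shows up as productivity growth. A toy sketch with illustrative numbers:

```python
import math

def labor_productivity_growth(output_growth: float, hours_growth: float) -> float:
    """Growth of output per hour, as the log-difference of the two growth rates."""
    return math.log1p(output_growth) - math.log1p(hours_growth)

# Illustrative numbers: output up 2.5%, hours worked flat (terrible job growth)
g = labor_productivity_growth(0.025, 0.0)
print(f"{g:.1%}")  # prints 2.5% — all of the output growth becomes productivity
```

This is part of why facts 2 and 3 above (high productivity growth, terrible job growth) are arithmetically linked: with hours flat, the same GDP growth implies faster output per hour.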
What’s driving the productivity boom? It’s tempting to conclude that AI is making white-collar workers more productive, but Ernie Tedeschi points out that the biggest swing has been in manufacturing productivity. For a long time, manufacturing productivity was basically flatlining in America; now it’s suddenly growing again.
Tedeschi argues that this is also probably AI-driven, but it’s not about people using ChatGPT and Claude Code at work — it’s about the fact that a ton of data centers are being built, and data centers are very valuable:
If you look at data centers’ contribution to growth itself, it looks pretty small, but this masks the value of the computers contained within the data centers. Together, data centers and the computing equipment inside them have been contributing about as much to GDP growth as they did during the dot-com boom:

A second thing that’s happening is that American capital is being utilized more intensively — machines are running for more hours of the day, buildings are keeping the lights on longer, and so on. The San Francisco Fed makes monthly estimates of total factor productivity (TFP) growth — productivity growth once you take the amounts of labor and capital into account — and finds that it’s been pretty fast since late 2023. But once you take utilization rates into account, it looks like there was a moderate burst of TFP growth in 2023-24 that faded in 2025:

This is also consistent with the story that the data center boom, not an AI use boom, is driving fast productivity growth in America.
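Schematically, the utilization adjustment is a Solow-residual calculation: raw TFP growth is output growth minus share-weighted input growth, and the utilization-adjusted series additionally subtracts an estimate of how much harder existing capital and labor are being worked. A rough sketch of the accounting — the weights and growth rates below are illustrative, not the SF Fed’s actual estimates:

```python
# Schematic Solow-residual accounting, in log growth rates.
# dY: output growth; dK: capital growth; dL: labor (hours) growth;
# alpha: capital's share of income; dU: estimated utilization growth.
# All numbers are illustrative, not actual SF Fed estimates.

def tfp_growth(dY: float, dK: float, dL: float, alpha: float = 0.33) -> float:
    """Raw TFP (Solow residual): output growth minus share-weighted input growth."""
    return dY - alpha * dK - (1 - alpha) * dL

def utilization_adjusted_tfp(dY: float, dK: float, dL: float, dU: float,
                             alpha: float = 0.33) -> float:
    """Subtract estimated utilization growth to isolate underlying efficiency gains."""
    return tfp_growth(dY, dK, dL, alpha) - dU

raw = tfp_growth(dY=0.025, dK=0.030, dL=0.000)
adj = utilization_adjusted_tfp(dY=0.025, dK=0.030, dL=0.000, dU=0.010)
print(f"raw TFP: {raw:.2%}, utilization-adjusted: {adj:.2%}")
```

In this toy example, a chunk of what looks like TFP growth is really just capital being run harder — which is the pattern the SF Fed’s adjusted series suggests for 2025.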