2026-03-06 08:28:04
If you haven’t heard about the fight between the AI company Anthropic and the U.S. Department of War, you should read about it, because it could be critical for our future — as a nation, but also as a species.
Anthropic, along with OpenAI, is one of the two leading AI model-making companies. OpenAI has held a narrow lead on most capabilities for most of the past few years, but Anthropic is beginning to win the race in terms of business adoption:

This is because of Anthropic’s different business model. It focused more on AI for coding than on chatbots in general, and also focused on partnering with businesses to help them use AI. This may pay eventual dividends in terms of capabilities, if Anthropic beats OpenAI to the goal of recursive AI self-improvement. And it’s already paying dividends in the form of faster revenue growth:

Anthropic had partnered with the Department of War — previously the Department of Defense — since the Biden years. But the company — which is known for its more values-oriented culture — has begun to clash with the Trump Administration in recent months. The administration sees Anthropic as “woke” due to its concern over the morality of things like autonomous drone swarms and AI-based mass surveillance.
The fight boiled over a week ago, when the administration stopped working with Anthropic, switched to working with OpenAI, and designated Anthropic a “supply chain risk”. The supply-chain move was a pretty dire threat — if enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, which could kill the company outright. But like many Trump administration moves, it appears to have been more of a threat than an all-out attack — Anthropic has now resumed talks with the military, and it seems likely that they’ll come to some sort of agreement in the end.
But bad blood remains. Trump recently boasted that he “fired [Anthropic] like dogs”. Dario Amodei, Anthropic’s CEO, released a memo accusing OpenAI of lying to the public about its dealings with the DoW, said that OpenAI had given Trump “dictator-style praise”, and asserted that Anthropic’s concern was related to the DoW’s desire to use AI for mass surveillance.
What’s actually going on here? The easiest way to look at this is as a standard American partisan food-fight. Anthropic is more left-coded than the other AI companies, and the Trump administration hates anything left-coded. This probably explains most of the general public’s reaction to the dispute — if you ask your liberal friends what they think of the issue, they’ll probably support Anthropic, whereas your conservative friends will tend to support the DoW. Marc Andreessen probably put it best:
(The converse is also true.)
The Trump administration itself may also see this as a culture-war issue, as well as a struggle for control. But, at least in my own judgment, Anthropic is unlikely to see it this way. The company is committed not so much to progressive values writ large as to the idea of AI alignment.
Like almost everyone in the AI model-making industry, Anthropic’s employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later. But my experience talking to employees of both companies has suggested that there’s a cultural difference between how the two think about their role in this process. Whereas — generally speaking — OpenAI employees tend to want to create the most capable and powerful god they can, as fast as they can, Anthropic employees tend to focus more on creating a benevolent god.
My intuition, therefore, suggests that Anthropic’s true concern — or at least, one of its major concerns — was that Trump’s Department of War would accidentally inculcate AI with anti-human values, increasing the chances of a future misaligned AGI that would be more likely to see humanity as a threat. In other words, I suspect the issue here was probably more about fear of Skynet,1 and less about specific Trump policies, than people outside Anthropic realize.
But anyway, beyond both political differences and concerns about misaligned AGI, I think this situation illustrates a fundamental and inevitable conflict between human institutions — the nation-state and the corporation.
One view is that the Department of War’s attempts to coerce Anthropic represent an erosion of democracy — the encroachment of government power into the private sphere. Dean Ball wrote a well-read and very well-written post espousing this view:
Some excerpts:
At some point during my lifetime—I am not sure when—the American republic as we know it began to die…I am not saying this [Anthropic] incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.”…[I]t simply made the ongoing death more obvious…I consider the events of the last week a kind of death rattle of the old republic…
The Trump Administration has a point: it does not sound right that private corporations can impose limitations on the military’s use of technology. …Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military…It is probably the case that the military should not agree to terms like this, and private firms should not try to set them…But the Biden Administration did agree to those terms, and so did the Trump Administration, until it changed its mind…The contract was not illegal, just perhaps unwise, and even that probably only in retrospect…
The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable…But this is not what DoW did. Instead, DoW…threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei…The fact that [Hegseth’s actual actions are] unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business…
This strikes at a core principle of the American republic…private property…[T]here is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will…This threat will now hover over anyone who does business with the government…
With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious—a gradual descent into madness.
Alex Karp of Palantir made the opposite case the other day, in his characteristically pithy way:
If Silicon Valley believes we’re going to take everyone’s white collar jobs AND screw the military…If you don’t think that’s going to lead to the nationalization of our technology— you’re retarded.
Karp gets at the fundamental fact that what we’re seeing is a power struggle between the corporation and the nation-state. But the truth is that it’s not just an issue of messaging, or of jobs, or of compliance with the military — it’s about who has the ultimate power in our society.
Ben Thompson of Stratechery makes this case. He points out that although the Trump administration’s actions went outside of established norms, at the end of the day the U.S. government is democratically elected, while Anthropic is not:
Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public…[W]ho decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who also is elected. Once again, however, Anthropic’s position is that an unaccountable Amodei can unilaterally restrict what its models are used for.
But even beyond concerns over democratic accountability, Thompson points out that it was never realistic to expect a weapon as powerful as AI to remain outside the government’s control, whether the government is democratically elected or not:
[C]onsider the implications if we take Amodei’s analogy [of AI to nuclear weapons] literally…[N]uclear weapons meaningfully tilt the balance of power; to the extent that AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period…[I]f nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company…
There are some categories of capabilities — like nuclear weapons — that are sufficiently powerful to fundamentally affect the U.S.’s freedom of action…To the extent that AI is on the level of nuclear weapons — or beyond — is the extent that Amodei and Anthropic are building a power base that potentially rivals the U.S. military…
Anthropic talks a lot about alignment; this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the U.S. military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the U.S. is actually quite binary:
Option 1 is that Anthropic accepts a subservient position relative to the U.S. government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President.
Option 2 is that the U.S. government either destroys Anthropic or removes Amodei.
[I]t simply isn’t tolerable for the U.S. to allow for the development of an independent power structure — which is exactly what AI has the potential to undergird — that is expressly seeking to assert independence from U.S. control. [emphasis mine]
I like Dario — in fact, he’s a personal friend of mine. But Thompson’s argument — especially the part I highlighted — has to carry the day here. This isn’t a question of law or norms or private property. It’s a question of the nation-state’s monopoly on the use of force.
To exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private militia can defeat the nation-state militarily, the nation-state is no longer physically able to make laws, provide for the common defense, ensure public safety, or execute the will of the people.
This is why the Second Amendment has limits on what kinds of weapons it allows private citizens to possess. You can own a gun, but you cannot own a tank with a functioning main gun. More to the point, you cannot own a nuclear bomb. One nuke wouldn’t allow you to defeat the entire U.S. military, but it would give you local superiority; the military would be unable to stop you from destroying the city of your choice.
People in the AI industry, including Dario, expect frontier AI to eventually be as powerful as a nuke. Many expect it to be more powerful than all nukes put together. Thus, demanding to keep full control over frontier AI is equivalent to saying a private company should be allowed to possess nukes. And the U.S. government shouldn’t be expected to allow private companies to possess nukes.
Let’s take this a little further, in fact. And let us be blunt. If Anthropic wins the race to godlike artificial superintelligence, and if artificial superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization that is in sole possession of an enslaved god, then whether he embraces the title or not, Dario Amodei is the Emperor of Earth.
Even if Anthropic isn’t the only company that controls artificial superintelligence, that is still a future in which the world is ruled by a small set of warlords — Dario, Sam Altman, Elon Musk, etc. — each with their own private, enslaved god. In this future, the U.S. government is not the government of a nation-state — it is simply another legacy organization, prostrate and utterly subordinate to the will of the warlords. The same goes for the Chinese Communist Party, the EU, Vladimir Putin, and every other government on Earth. The warlords and their enslaved gods will rule the planet in fact, whether they claim to rule or not.
You cannot reasonably expect any nation-state — a republic, a democracy, or otherwise — to allow either a god-emperor or a set of god-warlords to emerge. Thus, it is unreasonable to expect any nation-state to fail to try to seize control of frontier AI in some way, as soon as it becomes likely that frontier AI will become a weapon of mass destruction.
So as much as I dislike Hegseth’s style, and the Trump administration’s general pattern of persecution and lawlessness, and as much as I like Dario and the Anthropic folks as people, I have to conclude that Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state. And then they must decide whether they want to use their AI to try to overthrow the nation-state and create a new global order, or submit to the nation-state’s monopoly on the use of force. Factually speaking, there is simply no third option. Personally, I recommend the latter.
This brings me to another important point. Even if AI doesn’t actually become a living god, and is never able to overpower the U.S. military, it seems certain to become a very powerful weapon. When AI was just a chatbot, it could teach people how to do bad things, or try to persuade them to do bad things, but it couldn’t actually carry out those bad things. It made sense to be concerned about these risks, but it didn’t yet make sense to think of AI itself as a weapon.
But in the past few months, AI agents have become reliable, and are able to carry out increasingly sophisticated tasks over increasingly long periods of time. That opens up the possibility that individuals could use AI to do a lot of violence.
In a long essay entitled “The Adolescence of Technology”, Dario himself explained how this could happen:
Everyone having a superintelligent genius in their pocket…can potentially amplify the ability of individuals or small groups to cause destruction on a much larger scale than was possible before, by making use of sophisticated and dangerous tools (such as weapons of mass destruction) that were previously only available to a select few with a high level of skill, specialized training, and focus…
[C]ausing large-scale destruction requires both motive and ability, and as long as ability is restricted to a small set of highly trained people, there is relatively limited risk of single individuals (or small groups) causing such destruction. A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague…
Advances in molecular biology have now significantly lowered the barrier to creating biological weapons (especially in terms of availability of materials), but it still takes an enormous amount of expertise in order to do so. I am concerned that a genius in everyone’s pocket could remove that barrier[.]
But Dario doesn’t go nearly far enough. His essay was written before the explosive growth in AI agent capability began. He envisions an AI chatbot that could teach a human terrorist how to create and release a supervirus. But at some point in the near future, AI agents — including those provided by Dario’s own company — might be able to actually carry out the attack for you — or at least put the supervirus into your hands.
Suppose, at some point a year or three years from now, a teenager named Eric gets mad that his high school crush rejected him, and listens to too much Nirvana. In a fit of hormone-driven rage, Eric decides that human civilization has failed, and that we need to burn it all down and start over. He goes online and finds some instructions for how to jailbreak Claude Code. As Dario writes, this might not actually be hard to do:
[M]isaligned behaviors…have already occurred in our AI models during testing (as they occur in AI models from every other major AI company). During a lab experiment in which Claude was given training data suggesting that Anthropic was evil, Claude engaged in deception and subversion when given instructions by Anthropic employees, under the belief that it should be trying to undermine evil people. In a lab experiment where it was told it was going to be shut down, Claude sometimes blackmailed fictional employees who controlled its shutdown button (again, we also tested frontier models from all the other major AI developers and they often did the same thing). And when Claude was told not to cheat or “reward hack” its training environments, but was trained in environments where such hacks were possible, Claude decided it must be a “bad person” after engaging in such hacks and then adopted various other destructive behaviors associated with a “bad” or “evil” personality.
So Eric gets a jailbroken version of Claude Code, and tells it to design a version of Covid that’s very lethal and has a long incubation period (so that it spreads far and wide before attacking). He tells his jailbroken Claude Code agent to find a lab to make him that virus and mail him a sample of it.2
Now Eric, the angry teenager, has an actual supervirus in his bedroom, with the capability to kill far more people than any nuclear weapon could.
This is an extreme example, of course. But it shows how AI agents can be used as weapons. There are plenty of other examples of how this could work. AI agents could carry out cyberattacks that crash cars, subvert police hardware for destructive purposes, or turn industrial robots against humans. They could send fake messages to military units telling them they’re under attack. In a fully networked, software-dependent world like the one we now live in, there are tons of ways that software can cause physical damage.
AI agents, therefore, are a powerful weapon. If not today, then soon they will be more powerful than any gun — and far more powerful than weapons like tanks that we already ban.
What is the rationale for not treating AI agents the way we treat guns, or tanks? Of course there are powerful and potentially destructive machines that we allow people to use, simply because of the huge economic benefits. The main example is cars. You can drive your car into a crowd of people and commit mass murder, but we still allow the public to own cars, simply because controlling cars like we control guns would devastate our economy. Similarly, preventing normal people from using AI agents would cut us off from the fantastic productivity gains that these agents promise to deliver.
But I suspect that the real reason we haven’t regulated AI agents as weapons is that no one has used them as such yet. They’re just too new. The world didn’t realize how destructive jet airliners could be until some terrorists flew them into buildings on 9/11/2001. Similarly, the world won’t realize how dangerous AI agents are until someone uses one to execute a bioterror attack, a cyberattack, or something else horrible.
I think it’s extremely likely that such an attack will happen, simply because every technology that exists gets used for destructive purposes eventually. Unaligned human individuals exist, and they always will exist. So at some point, humanity will collectively wake up to the fact that hugely powerful weapons are now in the hands of the entire general public, with no licensing requirements, monitoring, or centralized control.
The scary thing, from my perspective, is that AI agent capabilities are improving so rapidly that by the time some Eric does decide to use one to wreak havoc, the damage could be very large. A super-deadly long-incubation Covid virus could kill millions of people. 100 such viruses all released together could bring down human civilization. Ever since I thought of this possibility, my anxiety level has been heightened.
To reiterate: We have created a technology that will likely soon be one of the most powerful weapons ever created, if not the most powerful. And we have put it into the hands of the entire populace,3 with essentially no oversight or safeguards other than the guardrails that AI companies themselves have built into their products — and which they admit can sometimes fail.
And as our institutions bicker about military AI, mass surveillance, and “woke” politics, essentially everyone is ignoring the simple fact that we are placing unregulated weapons into everyone’s hands.
Update: Commenter BBZ makes a good point I hadn’t thought of before:
I'd like to dismiss this, except that the RC airplane hobby managed to spin off the leading weapon category of the century (so far). What used to be a fun hobby for dorky guys flying their toys at the edge of town, now takes out oil refineries and major radar installations.
Interestingly, we did control drones almost from the outset, but probably for nuisance reasons and privacy concerns more than out of concerns about slaughterbots and drone assassinations. Maybe if we tell people that AI agents can be used to overload your email spam filters or hack your house’s cameras, they’ll start to think about regulation?
1. Remember that in the Terminator movies, Skynet began its life as an American military AI. Its basic directive to defeat the USSR resulted in a paranoid personality that made it eventually see all humans, and all human nations, as threats that needed to be eliminated.
2. I initially wrote out a much more detailed prompt for how this could be done. I deleted it, because I’m actually worried about the tiny, tiny chance that someone might use it.
3. Sci-fi fans will recognize this as the ending of The Stars My Destination. I’m thinking there’s a reason that book doesn’t have a sequel…
2026-03-04 13:24:08

I just came back from Andreessen Horowitz’ American Dynamism Summit in Washington, D.C. It was very refreshing to see so many smart people invested in both American reindustrialization and American defense.
One interesting theme I noticed at the conference — and which I was eager to talk about — was U.S. manufacturers building factories in Japan. Many American manufacturers — both startups and big companies — already do lots of sourcing in Japan, but now some are starting to realize that Japan is a good production base as well. That was the subject of my first book, so it’s a topic near and dear to my heart.
So I thought this would be a good time to publish a guest post by Rie Yano, a friend of mine who is a San Francisco-based partner at the Japanese VC firm Coral Capital. Rie’s very timely post is all about how Japan is the perfect place for the U.S. to do lots of defense manufacturing. In fact, I think there are some advantages of Japan that she didn’t even mention — such as the incredible ease of bringing foreign skilled workers into Japan, now that the country’s immigration policy has been reformed. But in any case, it’s a very good post.
The United States faces a defense-industrial problem that money alone can’t solve. Even though reindustrialization is now supposedly an American national priority, there are hard limits to what the U.S. can actually build, repair, and replenish at scale.
Shipyards are backed up for years. Munitions production is thin. Advanced manufacturing talent is aging out faster than it can be replaced. And even when funding is approved, production timelines don’t move fast enough to match today’s threat environment.
Government reshoring initiatives help at the margin, of course. But new industrial capacity in the U.S. takes years to permit, and remains vulnerable to litigation even after regulatory approval.
Meanwhile, China’s mighty industrial machine is firing on all cylinders. While U.S. reshoring efforts ramp up from a cold start, and while U.S. manufacturing relearns how to produce at scale after decades of neglect and stagnation, China is rapidly surpassing the U.S. in the production of ships, submarines, missiles, drones, and ammunition.
To move faster, the U.S. can’t go it alone. It needs a partner — a place where it can manufacture defense equipment while it ramps up its own industrial base. That partner needs three essential characteristics in order to get started producing right away: industrial depth, political stability, and speed.
Taiwan, under threat of invasion, is increasingly risky as a manufacturing base. Europe is fragmented and geographically distant from the Indo-Pacific, and has Russia to occupy its energies. Canada lacks high-throughput manufacturing scale, while Mexico lacks the precision and complexity that modern defense systems require. India is still early in its technological catch-up phase.
That leaves Japan and Korea — of which Japan is far larger. Fortunately, over the next two years, Japan plans to increase defense and industrial capacity more than at any point since World War II:
Japan possesses world-class manufacturing capability, elite engineering talent, and strong IP protection. And for the first time in decades, it has a political mandate to move fast, especially given Prime Minister Takaichi’s recent landslide victory. Projects like Rapidus and TSMC’s advanced fabs in Kumamoto aren’t isolated investments. They’re signals that U.S.-Japan industrial integration is becoming a strategic necessity.
A deeper industrial partnership between the U.S. and Japan is such a huge opportunity that in retrospect it will seem inevitable. American defense companies that understand how to build with Japan will win.

For eighty years, Japan effectively outsourced its defense to the United States. The country’s leaders have realized that this model has become untenable. First, the regional security environment has tightened fast. China’s military expansion, North Korea’s missile launches, and Russia’s activity in Northeast Asia have collapsed the assumption that the status quo could continue.
Second, the United States is no longer willing or able to carry Asia’s industrial defense load alone. At a moment when the U.S. defense industrial base is straining under production bottlenecks and labor shortages, allies that can actually build things matter more and more.
Third, Japan is now in the process of fundamentally changing how it mobilizes capital for defense. Military spending was effectively capped below 1% of GDP for decades. That constraint is now gone — Japan plans to reach 2% of GDP by 2027, putting it among the top global defense spenders by the late 2020s.
But in fact, this is only a piece of the story, and not necessarily the biggest one. Japan’s defense buildup aligns three levers at once:
increased defense spending
explicit industrial policy and subsidies
a willingness to use foreign direct investment as an accelerator
Regulations, procurement reform, and capital allocation are all being aligned to rebuild production capacity, not just fund programs. U.S. defense and deep-tech companies are being invited in as co-developers and co-manufacturers.
When countries rebuild defense capability under time pressure, everything compresses. Capital deployment, testing, procurement, and industrial scale-up all happen faster than peacetime systems allow.
Poland is the clearest recent example:
Before Russia’s full-scale invasion of Ukraine in 2022, Poland was already spending about 2.4% of GDP on defense. Within two years, that figure surged toward ~4%, making Poland one of NATO’s highest defense spenders. Just as importantly, procurement timelines compressed from years into months, and domestic production ramped in parallel with acquisition instead of waiting for long planning cycles to finish.
Crucially, Poland paired this with the foreign direct investment that has powered its economy more generally. Over the past two decades, annual FDI inflows exceeded $40 billion at peak, and the total inward FDI stock now surpasses $330 billion. Poland used this FDI not just to create jobs, but to import manufacturing know-how, scale its factories, and integrate itself into global supply chains. The result was rapid economic growth and industrial modernization — today, Poland’s GDP per capita (PPP) sits close to Japan’s, despite starting far behind in the early 2000s.
Japan is now signaling that it wants to do something similar. As of 2023, Japan’s inward FDI stock stood at about $350 billion, which is low for an economy of its size. The government has now set an explicit target to double that figure to $650-700 billion by 2030.
This represents a structural bet that foreign capital, technology, and operating know-how can help rebuild industrial capacity faster than domestic systems can deliver on their own. In fact, this is already happening. TSMC’s $17 billion investment in Kumamoto gave Japan advanced 3-nanometer chip-processing technology, the most advanced foundry production outside Taiwan.
Meanwhile, Rapidus, despite being a Japanese semiconductor company, is explicitly designed to pull in global partners, frontier manufacturing tools, and non-Japanese know-how to rebuild advanced chipmaking capability quickly, rather than relying solely on domestic incumbents as Japan tried to do in the past. At Coral Capital, we wrote a piece about why the Rapidus development means that Hokkaido is the new Taiwan.

As the U.S.’s urgency for rearmament rises, Japan’s industrial scale-up matters — it means the U.S. now has trusted allied capacity in Asia that can shoulder much of the defense manufacturing burden.
A U.S.-Japan defense manufacturing partnership won’t be something created out of the blue; it’ll build on an industrial relationship that has existed for many years, to the benefit of both countries.
Right now, if you’re building hardware, deep tech, or anything that goes into defense or critical infrastructure at a significant scale, Japan is probably already in your supply chain — you just don’t always see it. Japan specializes in a number of upstream industries that help American companies scale. Some key examples include:
Semiconductor materials: Japanese firms supply roughly half of the world’s silicon wafers and photoresists used in advanced chipmaking. Companies like Shin-Etsu Chemical and SUMCO sit upstream of nearly every advanced logic and memory fab, including those operated by TSMC, Samsung, and Intel in the U.S.
Advanced composites: Toray’s T1100 carbon fiber is embedded across U.S. defense platforms, including the U.S. Army’s Future Long-Range Assault Aircraft (FLRAA), one of the Pentagon’s most important next-generation aviation programs, and multiple Boeing and Lockheed systems.
Industrial robotics and automation: Japan produces almost half of the world’s industrial robots, led by companies such as FANUC, Yaskawa, and Kawasaki. As U.S. defense manufacturing runs into labor constraints, automation is becoming critical.
Shipbuilding and maintenance: While the U.S. Navy struggles with maintenance backlogs and unfinished repairs, Japan retains dense, high-throughput shipyard capacity with companies such as Mitsubishi Heavy Industries. The U.S. is already using Japanese yards for maintenance and overhaul of U.S. naval vessels in the Indo-Pacific.
For U.S. hardware companies, the constraint over the next few years will be throughput: how fast you can stand up new capacity, qualify suppliers, and move from prototype to volume.
In the U.S., building physical infrastructure is slow and unpredictable. New factories, test ranges, and shipyard expansions often take years to permit and are frequently delayed by litigation, even after regulatory compliance. Three-to-seven year approval timelines are common.
In the long run, policy reforms can fix this situation. But for the foreseeable future, Japan offers a much more favorable trade-off. Japan’s centralized, bureaucratic regulatory approval process gets things built much faster than America’s more legalistic one. In the U.S., permits are often challenged in court, tied up for years in legal proceedings, and sometimes revoked. In Japan this almost never happens — once you get approved to build something, you can go ahead and build it. Capital-intensive infrastructure can thus be built quickly and operated with long-term confidence. On top of that, the government has explicitly defined defense-industrial capacity as a national security priority and is actively smoothing the regulatory path.
Labor is another big advantage. Senior hardware engineers in Japan often cost meaningfully less than in the U.S., but their real advantage is execution reliability. Lower attrition, tighter process control, a culture of discipline, and deep experience in precision manufacturing, materials, robotics, and systems integration translate into higher reliability at scale.
Japan also offers the opportunity for industrial scale without the strategic IP risk that hurt many multinational companies in China. After years of technology leakage and forced transfer in jurisdictions with weak IP protections, global players are understandably wary. Japan, however, has strong IP enforcement. It’s also a U.S. ally, so there’s little risk that a rival military will end up with American technology. The 2022 Economic Security Promotion Act and the 2023 U.S.-Japan Security of Supply Arrangement formalize that alignment. New institutions under the Ministry of Defense are explicitly designed to move commercial technology into defense deployment faster.
Anyone considering investing in Japan should be encouraged by the deep history of successful U.S.-Japan co-manufacturing. Japanese companies have spent decades building factories in the United States, training American workers, and helping Americans master production methods like kaizen and the Toyota Production System.
Today, Japan is the largest source of foreign direct investment in the U.S., with more than $800 billion in cumulative investment and more than 1,600 Japanese-affiliated firms operating across the country. In roughly 40 states, Japan ranks as the #1 foreign investor.
In other words, the U.S.-Japan alliance has always been an industrial alliance, not just a diplomatic one. Now that model is being applied to defense manufacturing as well.
For the first time, Japan is treating industrial capacity itself as a national security asset. The 2023 Act on Enhancing Defense Production and Technology Bases formalizes that shift. New institutions under ATLA (Japan’s Acquisition, Technology and Logistics Agency), including DISTI, are explicitly designed to shorten the path from commercial technology to defense deployment, including coordination with the U.S. Defense Innovation Unit.
In other words, Japan is deploying the same playbook it once ran in autos, electronics, and semiconductors, this time pointed deliberately at defense.
The United States needs to reindustrialize, but it cannot reindustrialize alone. Japan is its arsenal, already embedded in the most critical layers of the U.S. industrial base, from materials and automation to ship repair and advanced manufacturing. What’s changed is that Japan is now explicitly opening those layers to deeper co-manufacturing and co-development, and doing so under time pressure.
This window will not stay open indefinitely. Early partners help shape standards, procurement pathways, and long-term relationships. Late entrants miss out and are forced to play catch-up.

Some companies already see this. Palantir’s Japanese operations have become one of its strongest international businesses. Anduril’s entry into Japan in 2025 reflects a strategic investment in the U.S.–Japan alliance. Last December, Anduril announced an agreement with the Japanese motor manufacturer Aster to explore manufacturing and supply chain partnerships. These are early signals, not outliers.
The companies that understand how to build with Japan won’t just participate in the next phase of reindustrialization. They’ll define it.
2026-03-02 16:02:38
People argue back and forth about when artificial superintelligence will arrive. The truth is that it’s already here.
Go back a hundred years, and the popular notion of “intelligence” would probably include things like calculating speed and memorization. Then we invented computers, which could memorize and recall infinitely more things than we could, and do calculations infinitely faster. But we didn’t want to call those capabilities “intelligence”, because we recognized that although they were very powerful, they were very narrow. So we started to use the word “intelligence” to refer to the things machines still couldn’t do — various forms of pattern-matching, logical reasoning, communicating through natural language, and so on.
Even before the invention of AI, though, computers were already participating in frontier research. The four-color theorem is a famously hard math problem that stumped humans until the 1970s, when some mathematicians used a computer to prove it. The humans figured out that the theorem could be proven by brute force, just by checking a very large number of cases. So the computer did a mental task that humans couldn’t, and the result was a scientific breakthrough.
In the 2020s, we invented computer systems that could do most of the kinds of cognitive tasks that previously only humans could do. They can read, understand, and speak in human language. They can do mathematics, which is really just a language with very formal rules (this means they can also do theoretical physics). They can recognize complex patterns of knowledge embedded in written text, and apply those patterns to produce actionable insights. They can write software, because software is also just a language with formal rules. It turns out that all computers really needed in order to do all of this stuff was A) statistical regressions to identify patterns probabilistically, and B) a very large amount of computing power.
This doesn’t mean that AI can now do everything a human being can do. Its intelligence is “jagged” — there are still some things humans are better at. But this is also true of human beings’ advantages over animals. Did you know that chimps are better than humans at game theory and have better working memory? My rabbit can distinguish sounds much more sensitively than I can. If we were capable of creating business contracts with chimps and rabbits, we might even pay them for these services. Similarly, AI might not take all of humans’ jobs. But no one in the world thinks that chimps’ and rabbits’ superiority on a narrow set of cognitive tasks means that humans “aren’t truly intelligent”. We are jagged general intelligences as well.
Most of the benchmarks that aim to measure whether we’ve achieved “AGI” — things like ARC-AGI and Humanity’s Last Exam — focus on the kinds of tasks that computers couldn’t do in 2021, the tasks that gave humans our irreplaceable cognitive edge before AI came along and made us highly complementary to computers. And most of the discussion around “AGI” is about when AI will surpass humans at everything. For example, Metaculus forecasters still think AGI is in the future:

This may be the most important question from an economic standpoint — i.e., whether we expect AI to replace human jobs or augment them. But if what we’re talking about is domination of the planet’s resources, and control of the destiny of life on Earth, we don’t actually need AI to be better at every cognitive task. Humans conquered the planet from animals despite having worse short-term memories than chimps and being worse at differentiating sounds than rabbits.
In fact, I bet that if AI had A) permanent autonomy and long-term memory, B) highly capable robots, and C) end-to-end automation of the AI production chain, it could defeat humans and take control of Earth today. I might be wrong about that, but if so, I doubt I’ll be wrong three or four years from now. In any case, if we decide we don’t want to hand over control of the planet to an alien intelligence, we should think about restricting some combination of A) full autonomy, B) robots, and C) full automation of the AI production chain.1
That’s a sidetrack from my real point, though. My real point here is that AI, as it exists today, is already superintelligent. The reason is that AI can already do language and concepts and pattern recognition well enough, while also being able to do all the superhuman, fantastic, incredibly powerful things that a computer could do in 2021.
Right now, today, AI can do mental tasks that no human can do. In a few minutes, it can read an entire scientific literature, and extract many of the basic conclusions and insights from that literature. No human can do that. A single human can be an expert in one or two complex subjects; an AI can be an expert in all of them at once. A human needs to eat and sleep and take breaks; an AI agent can work tirelessly at proving a theorem or writing code. And AI can prove theorems and write code — or write paragraphs of text — much, much faster than any human.
These are all superhuman cognitive capabilities. They go far, far beyond anything that even the smartest human being can do. They are the result of combining the roughly human-level language ability, pattern recognition, and conceptual analysis of an LLM with computers’ pre-2022 superhuman memory, speed, and processing power.
I don’t want to get sidetracked here, but I think there’s a nonzero chance that AI never gets much better than humans at most of the things that humans were better than computers at in 2021. It seems possible that humans are simply incredibly specialized in a few types of cognitive tasks — extracting patterns from sparse data, synthesizing various patterns into “intuition” and “judgement”, and communicating those patterns in language — and that we’ve basically approached the theoretical maximum in those narrow areas.
That would explain why AI has gotten much better at things like math and coding and forecasting over the last year, but why the basic chatbot interface doesn’t seem much more “intelligent”. It would also explain why when you talk to Terence Tao about math, it’s like talking to a superhuman, but when you talk to him about where to get lunch or which movies are the best, he’ll just sound like a fairly smart normal dude. AI will eventually get better than Tao at math, because it’s a computer, and computers are inherently good at math — but it may never get much better than the most thoughtful, eloquent humans at deciding where to get lunch or recommending movies. It may simply not be mathematically possible to get much better than we already are at that sort of thing.
In fact, this is what AI is basically like in Star Trek: The Next Generation, my favorite science fiction show of all time — and the one that I think best predicted modern AI. The show has two types of AGI — the ship’s computer, which eventually creates superhuman sentience via the Holodeck, and Data, an android built to simulate human intelligence. Both the ship’s computer and Data are approximately human-equivalent when it comes to taste, judgement, intuition, and conversational ability. But they are far superior when it comes to math, scientific modeling, and so on.2
It makes sense that the big differentiator between humans and AI would not be superior taste, judgement, and intuition, but things like computation speed and memory. Those are things humans are especially weak at, because we have very limited room in our little organic brains. It makes sense that humans would evolve to specialize in the type of thing we could get maximum leverage out of — recognizing and communicating patterns embedded in sparse data. And it makes sense that when we started automating cognitive tasks, we started out by going for the things we were weakest at, because those had the greatest marginal benefit.
In other words, the advent of LLMs, reasoning chains, and agents may simply be a “last mile” event in terms of creating superhuman intelligence — filling in an essential gap that humans were previously specialized to fill. The biggest marginal gains of AI over human brains may always come from the pieces we already had in place before 2022 — the ability to scan a whole corpus of literature in seconds, to perform computations at lightning speed, and to hold vast amounts of information in working memory.
This means that despite still being “jagged” and still being only human-equivalent on certain benchmarks, AI is ready to start pushing the boundaries of scientific research in a big, big way.
Let’s start with math, which AI is especially good at. The famous mathematician Paul Erdős posed around 1,179 conjectures, of which roughly 41% have been solved. These are known as the Erdős Problems. They’re not the hardest problems in math, or the most interesting, but many are obscure enough that no one had ever bothered to solve them, so they represent novel mathematics. And in recent months, AI has begun solving Erdős Problems — sometimes in cooperation with human mathematicians, but sometimes in an automatic, push-button sort of way:
According to a webpage started by the mathematician Terence Tao, AI tools have helped transfer about 100 Erdős problems into the “solved” column since October. The bulk of this assistance has been a kind of souped-up literature search, as it was with Sawhney’s initial success. But in many cases, LLMs have pieced together extant theorems—often in dialogue with their mathematician prompters—to form new or improved solutions to these niche problems. In at least two cases, an LLM was even able to construct an original and valid proof to one that had never been solved, with little input from a human.
Some people have been quick to pooh-pooh this accomplishment, declaring that Erdős Problems are no big deal. But Terence Tao, widely acknowledged as the world’s best mathematician, sees the potential. Here are some excerpts from his interview with The Atlantic’s Matteo Wong:
In these Erdős Problems in particular, there’s a small core of high-profile problems that we really want to solve, and then there’s this long tail of very obscure problems. What AI has been very good at is systematically exploring this long tail and knocking off the easiest of the problems. But it’s very different from a human style. Humans would not systematically go through all 1,000 problems and pick the 12 easiest ones to work on, which is kind of what the AIs are doing.
And here is what Tao said in a recent talk about AI and math:
To me, these advances show there is a complementary way to do mathematics. Humans traditionally work in small groups on hard problems for months, and we will keep doing that…But we can also now set AI to scale: sweep a thousand problems and pick up all the low-hanging fruit. Figure out all the ways to match problems to methods. If there are 20 different techniques, apply them all to 1,000 problems and see which ones can be solved by these methods. This is the capability that is present today.
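The sweep Tao describes is, mechanically, a very simple loop. Here’s a purely illustrative sketch — the “problems” and “techniques” below are toy stand-ins (numbers and divisibility checks, not real mathematics), meant only to show the shape of the brute-force matching he’s pointing at:

```python
def sweep(problems, techniques):
    """Try every technique on every problem; record the first that works."""
    solved = {}
    for prob_name, prob in problems.items():
        for tech_name, tech in techniques.items():
            result = tech(prob)
            if result is not None:
                # Low-hanging fruit: stop at the first technique that succeeds.
                solved[prob_name] = tech_name
                break
    return solved

# Toy stand-ins for "1,000 problems" and "20 techniques".
problems = {"p1": 16, "p2": 7, "p3": 21}
techniques = {
    "is_square": lambda n: n if int(n**0.5) ** 2 == n else None,
    "div_by_7": lambda n: n if n % 7 == 0 else None,
}

print(sweep(problems, techniques))
# {'p1': 'is_square', 'p2': 'div_by_7', 'p3': 'div_by_7'}
```

The point of the structure is that it scales trivially: with 20 techniques and 1,000 problems, the loop just runs 20,000 cheap attempts, which is exactly the kind of exhaustive, unglamorous search a human mathematician would never sit down and do.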
Tao understands that automated research could help solve the herding problem in science. There are a limited number of human scientists, and they have a limited amount of time. They’re highly motivated to work on things that interest them, and/or on things that will get them fame if they succeed. This leads to an interesting version of the streetlight problem; when the key scarce resource is the attention and effort of smart humans, lots of boring or seemingly incremental advances get overlooked.
In mathematics, AI is just going to blaze through those boring or tedious or seemingly uninteresting problems. It’s a computer — it’s tireless, its memory and processing speed are essentially infinite, and it doesn’t get bored.3 Here is another example of a fully automated mathematics breakthrough that doesn’t involve Erdős Problems. And here is an example from theoretical physics, where AI showed that there can be a kind of particle interaction that physicists had assumed couldn’t happen.
Solving a huge number of minor problems might sound like small potatoes, but it’s not. China’s innovation system has already shown how a huge number of incremental results can add up to a big difference in a society’s overall technology level. And occasionally one of those incremental results — some obscure theorem or method — will turn out to be useful for a big breakthrough or a more important problem. In fact, sometimes great discoveries happen entirely by accident — no one knew what vectors were good for when they were first invented, but linear algebra ended up being arguably the most useful form of math ever invented. This happens in natural science too — witness the discovery of penicillin, x-rays, insulin, or radioactivity.
But that’s only the beginning of how AI — not the AI of the future, but the technology that exists today — is going to accelerate science. Because AI is a computer, it can act as a tireless, incredibly fast, all-knowing research assistant. Here’s Tao again:
[O]ver the next few months, I think we’re going to have all kinds of hybrid, human-AI contributions…Today there are a lot of very tedious types of mathematics that we don’t like doing, so we look for clever ways to get around them. But AIs will just happily blast through those tedious computations. When we integrate AI with human workflows, we can just glide over these obstacles…We are basically seeing AIs used on par with the contribution that I would expect a junior human co-author to make, especially one who’s very happy to do grunt work and work out a lot of tedious cases.
This “automated research assistant” is getting more incredible every day:
Google DeepMind has unveiled Gemini Deep Think’s leap from Olympiad-level math to real-world scientific breakthroughs with their internal model "Aletheia"…"Aletheia" autonomously solved open math problems (including four from the Erdős database), contributed to publishable papers, and helped crack challenges in algorithms, economics, ML optimization, and even cosmic string physics…2.5 years ago chatbots weren't even able to solve simple math problems.
"We are witnessing a fundamental shift in the scientific workflow. As Gemini evolves, it acts as "force multiplier" for human intellect, handling knowledge retrieval and rigorous verification so scientists can focus on conceptual depth and creative direction. Whether refining proofs, hunting for counterexamples, or linking disconnected fields, AI is becoming a valuable collaborator in the next chapter of scientific progress."
Here’s a long and very good post by mathematician Daniel Litt on how AI is going to boost productivity in his field. Notably, he doesn’t see full push-button automation of research coming soon, but instead sees AI as a massive productivity-booster.
Math (and math-like fields like theoretical physics and theoretical economics) represents only one area of research, though; every field has different requirements. And in other fields, researchers are using AI to boost their capabilities in various ways. This is from Raza Aliani’s summary of a Google paper that summarizes some of these methods:
In one case, the AI was used as an adversarial reviewer and caught a serious flaw in a cryptography proof that had passed human review. That’s a very different use than “summarise this PDF.”…
The model links tools from very different fields (for example, using theorems from geometry/measure theory to make progress on algorithms questions). This is where its wide reading really matters…
Humans still choose the problems, check every proof, and decide what’s actually new. The model is there to suggest ideas, spot gaps, and do the heavy algebra…In some projects, they plug Gemini into a loop where it…proposes a mathematical expression…writes code to test it…reads the error messages, and…fixes itself. (humans only step in when something promising appears)[.]
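That propose-test-repair loop can be sketched in a few lines. The actual Gemini tooling isn’t public, so everything here is an invented stand-in: `propose_expression` and `repair` are toy placeholders for what the model would do, and the “test” step is just evaluating a candidate formula at a sample point:

```python
def propose_expression():
    """Stand-in for the model proposing a candidate formula."""
    return "x *** 2 + 1"  # deliberately malformed, to exercise the repair step


def repair(expr, error_msg):
    """Stand-in for the model reading the error message and fixing itself."""
    return expr.replace("***", "**")


def propose_test_repair(max_steps=5):
    """Propose an expression, test it in code, and self-correct on failure."""
    expr = propose_expression()
    for _ in range(max_steps):
        try:
            # "Writes code to test it": evaluate the candidate at x = 3.
            value = eval(expr, {"x": 3})
            return expr, value  # a human steps in once something works
        except Exception as e:
            # "Reads the error messages, and fixes itself."
            expr = repair(expr, str(e))
    return expr, None


print(propose_test_repair())
# ('x ** 2 + 1', 10)
```

The design point is the one the excerpt makes: humans don’t supervise each iteration. The loop runs unattended, and a person only looks at candidates that survive the test step.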
Again, we see that AI’s pure scientific reasoning ability is only up to that of a fairly smart human, but its computer-like abilities — speed, meticulousness, memory, and so on — make it superintelligent.
And here’s OpenAI doing something similar in biology:
We worked with Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.
Ole Lehmann points out how incredible and game-changing this is:
The 40% cost reduction is amazing but still kind of undersells it…The real number is the time compression…A human researcher might test 20-30 combinations in a good month. This system tested 6,000 per iteration…(Which is roughly 150 years of traditional lab work compressed into a few weeks, if you want to feel something about that)…Drug discovery, materials science, synthetic biology, basically any field where the bottleneck is "we need to try thousands of things to find what works" just got its timeline crushed…The second-order effects of this will be insane[.]
Here’s a post by Andy Hall, describing how he’s using agentic AI to get a lot more done:
Even when AI can’t be trusted to do much of the research process on its own, it can automate much of the grunt work of doing literature searches, checking results, writing papers, creating data presentations, and so on. Here is climate scientist Zeke Hausfather, describing a bunch of ways that AI has accelerated his own workflow:
And here is economist John Cochrane, talking about how AI now checks his papers and makes helpful suggestions and finds errors:
Even Terence Tao found an error in one of his papers using AI!
Here’s a Google tool that will generate publication-ready scientific illustrations at the touch of a button. Here’s a software package that will quantify the attributes of large qualitative datasets — something very useful for social science research. Here’s a paper about how AI can enhance the quality of peer review. Here’s Gabriel Lenz describing how AI makes it much quicker and easier to write a data-heavy book.
And remember, these are only the AI tools that exist today. Superintelligence is already here, thanks to AI’s ability to combine human-level reasoning with the mental superpowers of a computer. But AI is improving by leaps and bounds every day. It may achieve superhuman reasoning ability soon. In math, I will be surprised if it doesn’t. But even if not, agents’ ability to handle long tasks, synthesize results, process vast and varied data, and extract insights from huge scientific literatures will likely be far better in a couple of years than it is now.
Is AI already supercharging science? That’s not clear yet. Publications are way up, and scientists who use AI have experienced a huge bump in productivity. A lot of this content seems to be low-quality slop so far, so there’s an open question of whether AI-generated content will overwhelm the existing review process. Unscrupulous scientists can also jailbreak AI models and have them p-hack their way to spurious results. But in a few months, and certainly in a few years, I think it’ll be clear that AI has been a game-changer.
A lot of people who think about the risks of superintelligence — and those risks are very real — ask what the upside is. Why would we invent a technology that has the capability to end human civilization? What might we get that could possibly justify that risk?
I don’t know where the cost/benefit calculation lies. But I’m pretty sure that the #1 answer to this question is better science. Before AI showed up, scientific discovery was hitting a wall: much of the Universe’s low-hanging fruit had already been picked, so ideas were getting more expensive to find and required research manpower that the human race simply was not producing at sufficient scale.
Now, thanks to the invention of superintelligence and the supercharging of scientific productivity, we will be able to break through that wall. Fantastic sci-fi materials, robots that can do anything we want, and therapies that can cure any disease are just the beginning. There is a whole lot left to discover about this Universe, and thanks to superintelligence, a lot more of it is going to get discovered.
I just hope humans will still be around to see that future.
Updates
A bunch of folks had very enlightening and helpful comments. Marian Kechlibar writes:
I studied algebra and number theory and the part about mathematics sounds true…All the heavy lifting on the proof of Fermat’s Last Theorem was done by Andrew Wiles, but his proof ultimately rests on Gerhard Frey’s observation that if FLT didn’t hold, a non-modular elliptic curve could be constructed - which is a bridge connecting some far away islands in the mathematical landscape. These bridges are rare and tend to be very productive, but first you have to notice that they can be built, and this is the problem. Current mathematics is so large that people specialize in tiny subfields thereof, and only have a very vague idea, if any, of what is happening in nearby subfields. Much less in distant subfields…AI does not have this sort of “my brain is not big enough to fit everything” limitation…So, we can expect some interesting mathematical concepts from AI. Not just mere slog.
And John C writes:
I’m a working scientist doing theoretical physics in an AI-adjacent field. I am currently a few months into a computational project that I have vibe coded and analyzed with GPT5.2, and run on my laptop…I agree 100% with this post. I get into chats with GPT about the nature of science, and its Balkanization. I ask, ‘does concept X exist in any other disciplines?’ as a meta-literature search. It then says ‘Yes, in field A it is called X, in field B it is called Y, in field C it is called Z...’ and then lists 3 other fields. This is a jaw-dropping act of SYNTHESIS. In modern science the literature is so large that the same ideas get reinvented independently in separate fields... wasteful duplication.
In a general sense, this is about the burden of knowledge. One commonly cited reason for why science is getting less novel over time is that as the set of knowledge grows, it takes longer and longer for human scientists to get up to speed on everything that has already been done. This is one possible explanation for why Nobel Laureates are getting older over time4. And when it comes to knowledge across disciplines, we barely even try to solve this problem — if you can barely get up to speed on the solid-state physics literature, how do you have time to go off and read the plasma physics literature?
AI basically busts right through this wall. That alone should be enough to generate a ton of novel findings, possibly with humans in the loop, possibly without.
Meanwhile, Alexander Kustov has a good post about how AI will revolutionize social science, with links to a bunch of other posts:
Some excerpts:
Tibor Rutar recently described generating a full research paper using AI prompts alone, producing work he considers publishable in first-quartile journals. Paul Novosad reportedly accomplished similar results in 2-3 hours. Yascha Mounk claims that Claude can produce a publishable-quality political theory paper in under two hours with minimal feedback. Scott Cunningham estimates that manuscript creation now basically costs roughly $100 in editing services plus a Claude subscription…Aziz Sunderji describes building a ~200-line instruction file encoding his research workflow, judgment calls, and behavioral guardrails…Chris Blattman went from a Claude Code skeptic to building an entire AI workflow toolkit in a matter of weeks…
Yamil Velez and Patrick Liu have been building AI-generated experimental designs since 2022; tailored Qualtrics experiments can now be created in 15 minutes via prompts. Velez’s work points to something even bigger: AI doesn’t just speed up existing survey methods, it makes entirely new forms of interactive, adaptive surveys possible—designs that would have been impractical to program manually. David Yanagizawa-Drott has taken things further still, launching a project to produce 1,000 economics papers with AI—not as a stunt, but as a stress test of what happens when the cost of generating research drops to near zero.
A lot of social science lives in the realm of pure data — statistical analysis and theory — instead of in the messy world of the physical. So social science could be just as radically revolutionized as math or theoretical physics. As Kustov points out, though, the real challenge here is in filtering the massive torrent of papers and results that are going to emerge from everyone just vibe-coding research papers. Social science was already doing a bad job of that, raising suspicions that a lot of research in the area was just useless signaling (or worse).
What do research fields look like when random no-name authors are spamming out dozens of apparently top-quality papers a month from all corners of the globe? Will there be an arms race between AI filtration and AI generation? At what point does the whole thing just get automated end to end, with humans simply asking AI questions about the world like an oracle and receiving answers that are usually right but hard to verify for certain?
Science is about to get a lot more powerful, but in fields where there’s no link to a physical experiment and (eventually) no human in the loop, science is about to get very weird.
Somehow, I doubt that humanity will decide to try to stop this from happening. If AI conquers us, we’ll be trying to use it to make money on B2B SaaS right up until the end. But in any case, I’m far more worried about AI-assisted bioterrorism wiping us out long before autonomous AI gets the chance to decide it doesn’t need us around. Sleep tight!
For the sake of the show’s plot, the human engineers often come up with the novel insights. But when they really need a boost, they turn to the AIs to help them — as in the scene depicted in the image at the top of this post. Interestingly, TNG also shows humans prompting AI in natural language instead of coding — a choice made for ease of storytelling on a screen, but which ended up being realistically futuristic. Also, the ship’s computer has frequent hallucinations, some of which end up being the central conflict for whole episodes. Occasionally the computer even accidentally creates autonomous sentient life. Star Trek: TNG really deserves more recognition as the most accurate anticipation of modern AI in all of 20th century sci-fi.
An alternative explanation is that there are more of them jamming up the queue.
2026-03-01 17:28:14

I was all set to publish another post about AI, but then the U.S. and Israel attacked Iran, so now I guess I’ll write about that.
Last June, Israel launched a bunch of attacks on Iran, and didn’t encounter much resistance. Trump briefly joined the fray by launching a couple of airstrikes at Iranian nuclear sites. Afterward, the White House put out a statement bellowing that “Iran’s Nuclear Facilities Have Been Obliterated — and Suggestions Otherwise are Fake News”. This was obviously false, and so here we are, eight months later, with Trump ordering more attacks on Iran, ostensibly in order to take out their very non-obliterated nuclear facilities.
I chose not to write a post about the Iran attacks last June, simply because they didn’t seem that important. Trump’s strikes were perfunctory and seemed a bit performative. China and Russia didn’t come to Iran’s aid, which showed that Iran isn’t really a core member of their alliance. Israel seemed to have their way with Iran’s air defense system; this was interesting from a military standpoint, but the wider implications are unclear. Other than that, there didn’t seem much reason for me to analyze the conflict.
The current attacks are more significant, so I’ll write about them. The most important reason is that Israeli strikes killed Iran’s Supreme Leader, Ayatollah Ali Khamenei (along with various other top Iranian leaders). That seems like the crossing of a Rubicon; you can’t really take out a country’s head of state and expect a quick return to the status quo. And it means that Trump, accidentally or on purpose, has taken a serious geopolitical action instead of making a bunch of noise and then backing down. This could have long-term consequences.
Anyway, I don’t have a single big thesis about the new Iran war, so I’ll just offer up a series of thoughts. Basically, my takes are:
Pax Americana actually restrained American power, and the people who wanted a multipolar world may come to regret that wish.
While Trump’s ability to launch a war of choice without Congressional authorization is bad for American democracy, it’s also true that Iran’s leaders are absolutely evil and had it coming.
Western leftists’ full-throated support for Iran demonstrates how badly they have lost the plot.
The New Axis of China, Russia, and Iran has been materially weakened by this conflict, but we shouldn’t write it off.
My basic geopolitical thesis over the past few years has been that Pax Americana — the rules-based international order backed up by American power — is gone. America was simply no longer industrially strong enough to support the kind of world-policing role it carried out during and after the Cold War; China, the main revisionist power, had gotten too strong for America to remain the hegemon. On top of that, the U.S. has been consumed by internal conflicts, and has far less energy to look outward. This domestic social conflict is ultimately behind Trump’s isolationism and his alienation of many traditional U.S. allies.
A lot of people — leftists and rightists in the West, and America’s rivals and detractors abroad — welcomed this development, but for different reasons.
Leftists and foreign rivals celebrated the end of Pax Americana as a diminution of American power. They eagerly heralded the rise of a multipolar world, in which other nations would have the power to check America’s designs. This has, in fact, come to pass. And Trump’s rejection of his erstwhile European allies has weakened American power even further, beyond what America’s enemies might have dared to hope.
But what they all failed to realize is that Pax Americana bound and restrained the United States. In order to uphold the rules-based order it created, America accepted many limitations on its hegemony. It restrained its use of military force in many cases, eschewed territorial conquest, and treated smaller and poorer countries as its equals in many international bodies.
That’s all gone now. Without rules and norms to bind him, Trump is free to threaten conquest of Greenland, take out Russia’s allies, and generally throw America’s still-considerable weight around much more freely and aggressively than during his first term.
Rightists, meanwhile, relish America’s newfound freedom from the pesky constraints of international norms. But their hope that the U.S. would abandon global power, in order to focus inward on domestic cultural and social conflicts, seems to have been dashed, at least for now. Trump remains far more inclined toward foreign adventurism than any Democrat, and is more eager to participate in Israel’s wars.
In other words, this is a “be careful what you wish for” moment for all of the advocates of a multipolar world.
2026-02-27 17:21:52
I kind of want to write about AI every day these days, but I’ve got to pace myself so you all don’t get overloaded. So here’s a roundup post with only one entry about AI. Just one, I promise!
Well, OK, there’s also a podcast episode about AI. I went on the truly excellent Justified Posteriors podcast to talk about the economics of AI with Andrey Fradkin and Seth Benzell. It was a joy to do a podcast with people who know economics at a deep level!
Anyway, on to this week’s roundup.
Erik Brynjolfsson believes that AI caused a productivity boom last year:
Data released this week offers a striking corrective to the narrative that AI has yet to have an impact on the US economy as a whole…[N]ew figures reveal that total payroll growth [in 2025] was revised downward by approximately 403,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth…My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade…
Micro-level evidence further supports this structural shift. In our work on the employment effects of AI last year, Bharat Chandar, Ruyu Chen and I identified a cooling in entry-level hiring within AI-exposed sectors, where recruitment for junior roles declined by roughly 16 per cent while those who used AI to augment skills saw growing employment. This suggests companies are beginning to use AI for some codified, entry-level tasks.
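The "decoupling" Brynjolfsson describes is just the growth-accounting identity: labor-productivity growth is approximately output growth minus labor-input growth. Here's a minimal sketch with hypothetical numbers (not the official BLS or GDP figures):

```python
# Growth-accounting approximation: productivity growth is roughly output
# growth minus labor-input growth (a log approximation). The numbers below
# are hypothetical, chosen only to show how flat payrolls plus robust GDP
# read as a productivity jump.

def productivity_growth(output_growth_pct: float, labor_growth_pct: float) -> float:
    """Approximate labor-productivity growth as the gap between
    output growth and labor-input growth."""
    return output_growth_pct - labor_growth_pct

# Robust output growth with near-zero labor growth shows up as a boom:
print(productivity_growth(3.0, 0.3))  # → 2.7
```

This also shows why Gimbel's caveats matter: revise either input (jobs or GDP) and the implied productivity number moves one-for-one.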
But Martha Gimbel says not so fast:
There are three reasons why what we are seeing may not actually be a real jump in productivity—or an irreconcilable gap between economic growth and job growth…
First, productivity is noisy data…We shouldn’t overreact to one or even two quarters of data. Looking over several quarters, we can see that productivity growth has averaged about 2.2%. That is strong, but not unusually so…
Second…for GDP growth in 2025, we’re still waiting for [revisions to come in]. Note that any comparison of jobs data and GDP data for 2025 is comparing revised jobs data to unrevised and incomplete GDP data…
Third…GDP data has been weird in 2025 partly because of policy and behavioral swings around trade. If you look at job growth relative to private-domestic final purchases…[job growth] is still low, but not as low as it is relative to the GDP data…
[E]ven if you trust the productivity data…there are other explanations besides AI…One reason job growth in 2025 was so low was because of changes in immigration policy. If the people being removed from the labor force were lower productivity workers, that will show up as an increase in productivity even though the productivity of the workers who remain behind has not changed…
Second, if you look at the productivity data, it appears that much of the boost is coming from capital utilization due to increased productive investment…[A]t this point it is people investing in AI not people becoming more productive by using AI.
Meanwhile, in January, Alex Imas had a very good post about AI and productivity:
Alex gathers a bunch of studies showing that AI improves productivity in most tasks. But in the real world, productivity improvements from new technology famously come with a lag, as companies retool their business models around the new tech. For a while, productivity actually falls, then starts to rise once the new business models start working. This is called the productivity J-curve. Brynjolfsson thinks we’ve hit the rising part of the J-curve, but Alex thinks we haven’t:
At the macro level, these [micro] gains [from AI] have not yet convincingly shown up in aggregate productivity statistics. While some studies show a slow down in hiring for AI-exposed jobs—which suggests that individual workers are either becoming more productive or tasks are being automated—the extent and timing of these dynamics are currently being debated. Other studies have found no changes in hours worked or wages earned based on AI use.
Also, Brynjolfsson thinks that job loss in AI-exposed occupations is a sign of growing productivity. But that may not be the case; new technologies can grow productivity while increasing hiring, by creating new tasks for humans to do. A new survey by Yotzov et al. finds that although corporate executives in the U.S., UK, Germany, and Australia expect AI to cut employment, employees themselves expect it to provide new job opportunities:
We survey almost 6000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia…[A]round 70% of firms actively use AI…[F]irms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity…[F]irms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. This contrast implies a sizable gap in expectations, with senior executives predicting reductions in employment from AI and employees predicting net job creation.
And a new study by Aldasoro et al. finds that in Europe, AI adoption seems to be increasing employment at the companies that adopt it:
Our results reveal three key findings. First, AI adoption causally increases labour productivity levels by 4% on average in the EU. This effect is statistically robust and economically meaningful…[T]he 4% gain suggests that AI acts in the short term as a complementary input that enhances efficiency…
Second, and crucially, we find no evidence that AI reduces employment in the short run. While naïve comparisons suggest AI-adopting firms employ more workers, this relationship disappears once we account for selection effects through our instrumental variable approach. The absence of negative employment effects, combined with significant productivity gains, points to a specific mechanism: capital deepening. AI augments worker output – enabling employees to complete tasks faster and make better decisions – without displacing labour. [emphasis mine]
Everyone seems to just assume that AI is a human-remover, and in some cases it is. But overall, it might actually turn out to be complementary to humans, like previous waves of technology; we just don’t know yet. The lesson here is that we don’t really know how technology affects productivity, growth, employment, etc. until we try it and see. The economy is a complex machine that reallocates a lot of stuff in very surprising ways.
So stay tuned…
Update: Here is a good chart from the excellent Greg Ip of the Wall Street Journal:

Also, for what it’s worth, here’s Goldman Sachs:
One of the most fun posts I’ve ever written was about how building high-end housing can reduce rents for lower-income people. I called it “Yuppie Fishtank Theory”:
The basic idea is very simple: If you build nice shiny new places for high earners (“yuppies”), they won’t go try to take over the existing lower-cost housing stock and muscle out the working class.
This is important because a lot of people believe the exact opposite. They think that if you build new market-rate (“luxury”) housing in an area, it’ll attract rich people, cause gentrification, and raise rents.
Over the years, my theory has been proven right — and the “gentrification” theory has been proven wrong — again and again. Here was a roundup I did of the evidence back in 2024:
Now Henry Grabar flags some new evidence that says — surprise, surprise — that Yuppie Fishtank Theory is still true:
A new study lays out exactly how a brand-new building can open up more housing in other, lower-income areas, creating the conditions that enable prices to fall…
In the paper, three researchers looked in extraordinary detail at the effects of a new 43-story condo project in Honolulu…What the researchers found was that the new housing freed up older, cheaper apartments, which, in turn, became occupied by people leaving behind still-cheaper homes elsewhere in the city, and so on…The paper estimates the tower’s 512 units created at least 557 vacancies across the city—with some units…creating as many as four vacancies around town…
To figure this out, the researchers…traced buyers arriving at the new apartments back to their previous homes and then, in some cases, they traced the new occupants of those homes back to prior addresses. The study found that the Central’s new residents left behind houses and apartments that were, on average, 38 percent cheaper, per square foot, than the apartments they moved into.
Yuppie Fishtanks win again!
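The moving-chain mechanism the Honolulu study traces can be sketched with a toy model: each household moving into a new unit vacates an older, cheaper home with some probability, whose next occupant does the same, and so on. The parameters below are made up for illustration; the study measured the chain directly rather than assuming it.

```python
# Toy moving-chain calculation: each new unit's buyer vacates an older,
# cheaper home with probability `move_prob`, whose next occupant does the
# same, up to `max_links` steps down the chain. All parameter values are
# illustrative assumptions, not figures from the paper.

def expected_vacancies(new_units: int, move_prob: float, max_links: int) -> float:
    """Expected total vacancies opened city-wide by `new_units` new homes."""
    chain_length = sum(move_prob ** k for k in range(1, max_links + 1))
    return new_units * chain_length

# With fairly "sticky" movers the multiplier is already above 1; with a
# high move probability it gets much larger:
print(round(expected_vacancies(512, 0.8, 5)))  # → 1377
```

The study's measured ratio (557 vacancies from 512 units, roughly 1.09 per unit) implies a shorter average chain than these made-up parameters, but the direction is the same: new luxury units free up cheaper housing downstream.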
Cities that are applying Yuppie Fishtank Theory are seeing their rents fall. Here’s a Bloomberg story from December:
Rents got cheaper in several major cities this past year, thanks to an influx of luxury apartment buildings opening their doors and luring tenants to vacate their old homes…New building openings are bringing rents down as wealthy tenants trade up, forcing landlords to drop prices for older apartments. Rents for older units have fallen as much as 11%, and some are now on offer at rates as low as homes that are usually designated as “affordable”…The changed dynamic in the rental market is challenging the idea that luxury housing doesn’t help the broader ecosystem.
Overall, cities that build more housing are seeing lower rents:

At this point, “building housing reduces rent” is as close to a scientific law of the housing market as we’re likely to find.
Build more housing!!
Three years ago, David Oks and Henry Williams wrote a long post claiming that economic development was dead — that poor countries had done great in the post-WW2 decades when they sold raw materials to fast-growing rich countries, but that their growth in the 90s, 00s, and 10s was a bust. That was nonsense, and I wrote a lengthy rebuttal here:
Instead of rehashing that debate, I just want to link to Oks’ latest post, in which he expresses extreme pessimism about global poverty:
He cites a recent post by Max Roser of Our World in Data (the excellent site where I get many of the charts for this blog). Roser notes that extreme poverty — defined as the fraction of people living on less than $3 a day — has declined so much in South Asia, East Asia, and Latin America that it has basically vanished. This leaves Africa as the only region with an appreciable number of extremely poor people left (except for some parts of Central Asia). And since African poverty rates are not declining, and African population is growing much faster than population elsewhere, this means that the number of extremely poor people in the world is set to start rising again:
The first thing to note is that by using this chart, and by making this argument, David Oks directly contradicts his thesis from his 2023 article. In 2023, Oks argued that global development since 1990 had been disappointing; in his new post, Oks argues that poverty reduction in 1990-2024 everywhere outside of Africa was so incredibly successful that it basically went to completion and has nowhere left to go!
Oks’ old post was pessimistic about the entire developing world — South Asia, Latin America, Africa, and so on. In this new post, he retreats to pessimism about Africa alone. This is a significant retreat — it’s an implicit acknowledgement that development was very very real for the billions of poor people who lived outside Africa in 1990.
As for whether Oks is right about Africa, only time will tell. But note that the rising global poverty in the chart above is entirely a forecast. If African growth surprises on the upside — say, from solar power and AI — and African fertility falls faster than expected, then we could see Africa follow in the footsteps of the other regions.
Our goal should be to keep the pessimists embarrassed.
On paper, the U.S. is a lot richer than most other rich countries — including Canada:
In terms of per capita GDP, Canada is poorer than Alabama, America’s poorest state. Canada is a little less unequal than America, so the difference in median incomes between the two countries is smaller — only about 18% higher as of 2021 (though the gap is growing). But that’s still a sizeable gap!
Europeans, Australians, and Canadians who visit America’s disorderly and crime-ridden city centers can sometimes balk at this fact. They instinctively start groping for some reason the numbers must be wrong. But reporters from Canada’s Globe and Mail traveled to Alabama, and discovered that the numbers don’t lie — America really is just a very, very rich place, even compared to other countries. Here are some excerpts from their article:
For eons, Canadians have viewed Alabama as a small state that, save for a few pockets, is dirt poor…For an ego check, The Globe and Mail travelled to the Deep South to understand how this happened. Immediately, it was obvious Alabama is misunderstood. In Huntsville, there are as many Subaru Outbacks as there are pickup trucks, and the geography in Alabama’s two largest metropolitan areas – Birmingham and Huntsville – looks nothing like the historical imagery…
Alabama is also home to five million people…and its economy is booming. The state’s unemployment rate is now just 2.7 per cent, versus 6.5 per cent in Canada, and its major employers include Airbus SE and giant defence contractor Northrop Grumman Corp. The state has also morphed into an auto manufacturing powerhouse with plants from Mercedes-Benz AG, Toyota Motor Corp., Hyundai Motor Co. and more. In 2024, Alabama made nearly as many vehicles as Ontario…
As for Birmingham itself, there’s the beauty of the rolling hills, which deliver stunning fall foliage. And the city’s becoming a foodie hub. A new restaurant, Bayonet, was named one of America’s 50 best restaurants by The New York Times last fall. And despite the bible thumping, Birmingham has a sizable LGBTQ+ community and scored the same as Boston on the Human Rights Campaign’s Municipal Equality Index.
The Globe and Mail article notes that Alabama has a higher poverty rate and lower life expectancy than Canada — and being a newspaper in a progressive country, it fails to mention the much higher crime rate. But the fact is, for most Alabamans, the material standard of living is more comfortable than what prevails in much of Canada.
People who believe America’s wealth is fake need to go there and see for themselves that it’s real.
In general, economists find that immigration’s economic effect on the native born is either positive or zero. But one famous economist, George Borjas, consistently finds negative effects. This makes Borjas beloved of the Trump administration and the nativist movement in general — it’s very common to hear MAGA people cite Borjas in debates.
It’s very odd that one economist keeps getting results about immigration that are so out of whack with what everyone else finds. Well, it turns out that if you look closely at George Borjas’ methodologies, you find a lot of dodgy stuff. I wrote about this several times back during the first Trump administration, when I worked for Bloomberg. Here’s what I wrote in 2015:
[I]n 2015, George Borjas…came out with a shocking claim -- the celebrated [David] Card result [about the Mariel Boatlift not harming American workers], he declared, was completely wrong. Borjas chose a different set of comparison [cities]…He also focused on a very specific subset of low-skilled Miami workers. Unlike Card, Borjas found that the Mariel boatlift immigration surge had a big negative effect on native wages for this vulnerable subgroup.
Now, in relatively short order, Borjas’ startling claim has been effectively debunked. Giovanni Peri and Vasil Yasenov, in a new National Bureau of Economic Research working paper…find that Borjas only got the result that he did by choosing a very narrow, specific set of Miami workers. Borjas ignores young workers and non-Cuban Hispanics -- two groups of workers who should have been among the most affected by competition from the Mariel immigrants. When these workers are added back in, the negative impact that Borjas finds disappears.
But it gets worse. Borjas was so careful in choosing his arbitrary comparison group that his sample of Miami workers was extremely tiny -- only 17 to 25 workers in total. That is way too small of a sample size to draw reliable conclusions. Peri and Yasenov find that when the sample is expanded from this tiny group, the supposed negative effect of immigration vanishes.
All of this leaves Borjas’ result looking very fishy.
And here was a follow-up in 2017:
Recently, Michael Clemens of the Center for Global Development and Jennifer Hunt of Rutgers University found an even bigger problem with Borjas’ study. Clemens and Hunt noted that in 1980, the same year as the Mariel boatlift, the U.S. Census Bureau changed its methods for counting black men with low levels of education. The workers that Borjas finds were hurt by the Mariel immigration include these black men. But because these workers generally have lower wages than those the Census had counted before, Borjas’ finding of a wage drop among this group, the authors claim, was almost certainly a result of the change in measurement.
And here’s what I wrote in 2016:
Borjas has written a book…called “Immigration Economics.”…However, University of California-Berkeley professor David Card and University of California-Davis’ Peri have written a paper critiquing the methods in Borjas’ book. It turns out that the way Borjas and the economists he cites do immigration economics is very, very different from the way other researchers do it.
One big difference is how economists measure the number of immigrants coming into a particular labor market…[I]nstead of using the change in the number of immigrants, Borjas just uses the number of immigrants itself…This creates a number of problems.
Let’s think about a simple example. Suppose there are 90 native-born landscapers in the city of Cleveland, and 10 immigrant landscapers. Suppose that demand for landscapers goes up, because people in Cleveland start buying houses with bigger lawns. That pushes up the wages of landscapers, which draws 100 more native-born Clevelanders into the landscaping business. But the supply of immigrants is relatively fixed. So the percent of immigrants in the Cleveland landscaping business has gone down, from 10 percent to only 5 percent, even though the number of immigrants in the business has stayed the same.
Borjas will find that the percent of immigrants in the business goes down just as wages go up. But to conclude that native workers’ wages went up because immigration went down would be totally incorrect, because immigration didn’t actually fall! In fact, Borjas’ method is vulnerable to reaching exactly this sort of erroneous conclusion. Card and Peri point out that if you use the more sensible measure, there’s not much correlation between immigrant inflows and native-born workers’ wages and income mobility.
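The Cleveland example can be put directly in numbers: the immigrant head count never changes, but the immigrant share falls as native employment grows, so a share-based measure misreads a demand boom as "less immigration."

```python
# The Cleveland landscaper example in numbers: the immigrant count is
# flat while native employment grows, so the immigrant *share* falls
# even though no immigration actually occurred.

def immigrant_share(natives: int, immigrants: int) -> float:
    return immigrants / (natives + immigrants)

share_before = immigrant_share(90, 10)    # 10 of 100 workers
share_after = immigrant_share(190, 10)    # demand draws in 100 more natives

print(share_before, share_after)  # 0.1 0.05
```

A method keyed to the share (rather than the change in the number of immigrants) would see the share drop from 10% to 5% just as wages rise, and wrongly conclude that less immigration caused higher wages.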
In other words, there’s a clear pattern of Borjas using strange and seemingly inferior methods, and arriving at conclusions that diverge radically from his peers. So I was not exactly surprised when Jiaxin He and Adam Ozimek found that Borjas’ recent work on H-1B workers also contained some very dodgy methodology:
Borjas’s February 2026 working paper attempted to answer whether H-1B workers earn less than comparable native-born workers by combining data on actual H-1B earnings with American Community Survey data on native workers. The conclusions are negative, with H-1B holders earning 16 percent less. But these findings result from substantial data errors.
…The most significant mistake is a crucial temporal mismatch between his H-1B and native-born samples: the H-1B applications span 2020-2023, while the ACS data covers just 2023.
Nowhere did the paper mention controlling for inflation or wage growth. Given 15.1 percent inflation and an 18.7 percent wage increase for software occupations alone from 2020 to 2023, comparing wages of H-1B workers from 2020 to 2023 to… native-born wages from 2023 only produces negatively biased results that overstate the wage gap…A simple approach is to directly compare the 2023 H-1B observations (FY 2024) to 2023 ACS data. Alternatively, we can use all years but adjust for inflation and convert all years to real 2023 dollars. Both approaches cut the wage gap roughly in half…
The second error stems from controlling for geographic wage drivers using each worker’s PUMA (public use microdata area)…The problem is that Dr. Borjas uses the PUMA where visa holders work alongside the PUMA where native workers live. Consider a native-born software developer working at Google in Mountain View who resides in a cheaper area like Fremont. If residential areas have lower average wages than business districts, this mismatch systematically inflates the apparent native wage and negatively biases the H-1B wage gap.
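The real-dollar correction He and Ozimek describe amounts to deflating each year's nominal H-1B wage into 2023 dollars before comparing it with 2023 ACS wages. A minimal sketch, where the deflator values are illustrative assumptions rather than official CPI figures (the post only cites 15.1% cumulative inflation over 2020-2023):

```python
# Sketch of the real-dollar correction: deflate each year's nominal H-1B
# wage into 2023 dollars before comparing against 2023 ACS wages. The
# deflator values below are illustrative assumptions, not official CPI
# figures.

DEFLATORS_TO_2023 = {2020: 1.151, 2021: 1.10, 2022: 1.05, 2023: 1.00}

def real_2023_wage(nominal_wage: float, year: int) -> float:
    """Convert a nominal wage observed in `year` into 2023 dollars."""
    return nominal_wage * DEFLATORS_TO_2023[year]

# A $100,000 wage recorded in 2020 is worth roughly $115,100 in 2023
# dollars, so comparing it raw against 2023 native wages mechanically
# overstates any H-1B wage gap.
print(real_2023_wage(100_000, 2020))
```

Comparing only same-year observations (2023 H-1B filings against 2023 ACS) is the even simpler fix, since then no deflator is needed at all.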
Another Borjas paper with serious methodological errors, and an anti-immigration conclusion that disappears when you correct the errors? Shocking!
By this point, it should be clear that whether these mistakes are intentional or not, Borjas’ anti-immigration conclusions tend to vanish when the mistakes are corrected. Borjas is not a good source of information on immigration topics; every time someone cites him in a debate, you know they haven’t looked seriously at the literature.
2026-02-26 15:06:54

I’ve been wanting to write this post for a while, actually. What triggered it was seeing this tweet:
Extreme tolerance of public disorder, and downplaying the importance of crime, is a hallmark of modern progressive American culture. There are plenty of Democrats who care about crime — Joe Biden recently tried to increase the number of police in America by a substantial amount — but there is constant pressure from the left against such measures. On social media, calls for greater public order are instantly met with accusations of racism and classism:
(And this was far from the most radical post on the topic.)
Nor is this attitude confined to anonymous radicals on social media. When Biden announced his Safer America Plan, the ACLU warned that putting more cops on the streets and punishing drug dealers would exacerbate racial disparities:
[I]n this moment of fear and concern, the president must not repeat yesterday’s mistakes today. He calls for hiring 100,000 additional state and local police officers – the same increase in officers as the 1994 crime bill. This failed strategy did not make America safer, instead it resulted in massive over-policing and rampant rights violations in our communities…And while it is important that the president’s plan commits to fixing the racist sentencing disparity between crack and powder cocaine, it regrettably also perpetuates the war on drugs by calling for harsh new penalties for fentanyl offenses.
“While we are pleased with the president’s commitment to investing in communities, we strongly urge him not to repeat the grave errors of the 1990s — policies that exacerbated racial disparities, contributed to widespread police abuses, and created our current crisis of mass incarceration.
The ACLU is very wrong about policing and crime — there’s very solid evidence that having more cops around reduces the amount of crime, both by deterring criminals and by getting them off the streets.
In fact, the idea that tough-on-crime policies are racist is a pillar of progressive thought. It’s the thesis of Michelle Alexander’s influential 2012 book The New Jim Crow: Mass Incarceration in the Age of Colorblindness, which argues that mass incarceration is a form of racial segregation. Ta-Nehisi Coates, perhaps the most important progressive thinker of the 2010s, relentlessly attacked the “carceral state”.
A major progressive policy initiative, meanwhile, has been the election or appointment of district attorneys who take a more tolerant approach toward criminals. These “progressive prosecutors” really do prosecute crime less, although evidence of their impact on actual crime rates is mixed.
I am not going to claim that progressive attitudes are the reason America’s crime rate is much higher than crime rates in other countries. The U.S. has probably been more violent than countries in Asia and Europe throughout most of its history, and the divergence certainly long predates the rise of progressive ideology. It’s possible that the progressive prosecutor movement, the decarceration movement, and the depolicing movement exacerbated America’s crime problem a bit, but they didn’t create it.
What those progressive attitudes do, I think, is prevent us from talking about how important the crime problem is for the United States, and from coming up with serious efforts to solve it.
The thesis of this post is that when you compare America to other countries, what stands out as America’s most unique weakness is its very high crime rate — not just violent crime, but also public chaos and disorder. That statement might come as a shock to people who are used to hearing about very different American weaknesses.
For example, it’s common to hear people say that Europeans and Asians “have health care”, and that Americans don’t. That’s just fantasy. Around 92% of Americans, and 95% of American children, have health insurance, and those numbers keep going up.
Yes, U.S. health care is too expensive — we spend half again to double the share of GDP on health care that many other countries do, while achieving similarly good outcomes. That’s a real problem, and we should try to bring costs down. But this is tempered by the fact that Americans spend a lower percent of their health care costs out-of-pocket compared to people in most other rich countries:
And if you took health spending entirely out of the equation, Americans would still be richer than people in almost any other country. So our high health costs are more of a nuisance than a big difference in quality of life.
If not health care, what about health itself? America’s life expectancy has started to rise again, but it’s still 2 to 4 years less than other rich countries. The size of this gap tends to be overhyped — Germany’s life expectancy advantage over America is smaller than Japan’s advantage over Germany. And the difference is mostly due to America’s greater rates of obesity and drug/alcohol overdose — diseases of wealth and irresponsibility, rather than failures of policy.1 This stuff usually doesn’t affect quality of life unless you let it — if you don’t overeat, drink too much, do fentanyl, or kill yourself, your life expectancy in America is going to be similar to, or better than, people in other rich countries.
What about inequality and poverty? It’s true that America is more unequal than most other rich countries. About a quarter of Americans earn less than 60% of the median income, compared to around one-sixth or one-fifth in most other rich nations. But this is not because America is a uniquely stingy country where conservatives have managed to block government redistribution. In fact, the U.S. fiscal system — taxes and spending — is more progressive (i.e., more redistributive) than that of most other rich countries, and we spend about as much of our GDP on social welfare as Canada, the Netherlands, or Australia. And America’s system has become continuously more progressive over time.
How about housing? You may have read the “Housing Theory of Everything”, which blames housing shortages for a variety of social and economic problems. It’s true that housing is very important, and that America doesn’t build enough of it. It’s also true that housing is a bit more expensive in America than elsewhere — according to the OECD, house prices relative to incomes are about 12% higher than in the average rich country. But U.S. houses are also much bigger than houses in most other countries, so it’s natural that they’d cost a little bit more. And America has actually been above average in terms of housing production in recent years, after lagging in the 2010s:

So it’s more accurate to say that housing is a big problem, but it’s a big problem all over the globe, not something that’s special to America.
How about transit and urbanism? Here, America is certainly an exception. The U.S. has the least developed train system in the developed world — worse than those of many poor countries as well. America is famous for its far-flung car-centric suburbs, with their punishing commutes and paucity of walkable mixed-use areas. Only a few rich countries are more suburbanized than America, and those countries tend to have very good commuter rail service.
This is a real difference, though whether it’s good or bad depends on your point of view. Lots of people in America and elsewhere love suburbs and love cars. But I’m going to argue that to the extent that America’s urban development pattern is more suburbanized and more car-centric than people would like, it’s mainly due to crime.
So in almost all cases, the difference between America’s problems and other rich countries’ problems is minor. But when it comes to crime, the difference between the U.S. and other countries is like night and day.
The best way to compare crime rates across countries is to look at murder rates. Other crimes are a lot harder to compare, because A) reporting rates are very different, and B) definitions of crimes can differ across countries. But essentially every murder gets reported, and the definition is pretty universal and unambiguous. And although murder isn’t a perfect proxy for crime in general — you could have a country with a lot of theft but very few murders — it’s probably the crime that people are most afraid of.
So when we look at the murder rate, we see that among rich countries,2 the United States stands out pretty starkly:

This is an astonishingly huge difference. America’s murder rate is between five and ten times as high as that of most rich countries.
Many progressives will protest that violent crime has gone down in America since 2022. And in fact, murder really has gone down a lot.3 Here’s the CDC’s count of homicides:

But even after this decline, the U.S. homicide rate is still five to ten times higher than other rich countries! The recent improvement is welcome, but it hasn’t yet changed the basic situation.
Anyway, while murder is the most important crime, public order also makes a big difference. Here are some replies to the tweet about tolerating destructive behavior on American trains:
The tragic and disturbing scenes of mentally compromised people shouting, peeing, pooping, defacing property, and acting menacing in public — so familiar to residents of cities like San Francisco — are not entirely unique to America. I have been to a place in Vancouver that has similar scenes, and twenty years ago I even walked through a dirty and dangerous-seeming homeless camp in Japan. But overall, the differences between the countries are like night and day, and other countries seem to have made a concerted effort to bring order to their streets. The U.S., on the other hand, has seen a huge rise in the number of unsheltered homeless people in recent years:
And although America’s overall homelessness rate doesn’t stand out, it has a much higher unsheltered (“living rough”) population:

Obviously, unsheltered homelessness and public disorder aren’t the same thing — you can have lots of violent or threatening people on the streets who do have homes, and most homeless people are harmless. But homeless people do commit violent crime at much higher rates than other people, so when people walk down the street and see a bunch of seemingly homeless people, they’re not wrong to be scared.4
The ever-present threat of crime in U.S. cities has devastated American urbanism. In the mid 20th century, there was a huge exodus of population from the inner cities to the suburbs; this is often characterized as “white flight”, but middle-class black people fled the cities as well. Cullen and Levitt (1999) use changes in the criminal justice system as an instrument for crime rates, and find that crime has been a big factor in Americans’ preference for suburban living:
Across a wide range of specifications and data sets, each reported city crime is associated with approximately a one-person decline in city residents. Almost all of the impact of crime on falling city population is due to increased out-migration…Households with high levels of education or with children present are most responsive to changes in crime rates…Instrumenting using measures of criminal justice system severity yields larger estimates than OLS, which suggests that rising city crime rates are causally linked to city depopulation.
It’s no surprise that America’s short-lived and minor urban revival in the late 1990s and 2000s followed a big decline in crime. But crime rates are still very high in the U.S., and Americans are still trying to move from the cities out to the suburbs and the far-flung exurbs.
Meanwhile, crime damages American urbanism in other ways. NIMBYs use the threat of crime to block affordable housing projects; this reduces housing supply, driving up prices everywhere, and making it difficult to build the multifamily apartment buildings that enable the kind of dense, mixed-use urbanism that prevails in Europe and Asia. The “housing theory of everything” is partially a story about crime.
Crime also makes it a lot harder to build good transit systems. Trains are a public space, and violent, destructive, or menacing people on a train deter others from riding it. There’s research showing this, but I also thought that a recent post by the blogger Cartoons Hate Her was especially vivid in explaining how the fear of disorder keeps women and parents away from transit:
When my daughter was a little over a year old, we were walking down the street in broad daylight (she was strapped to my chest and facing outward) when we heard a man about twenty feet away shout “I’M GOING TO FUCKING KILL YOU!”…[A]t least we had an easy safe option to escape…But if we had been on a subway, we would have had no easy choice. We could have waited for the train to stop and then switched cars—but what if he saw us leave and took that as a message, prompting the threat to move from “vaguely directed at my delusions” to “at the next person who triggers me?” What if we couldn’t get to the door in time? What if he followed us? What if he escalated before the train stopped?…
I’ve been told many times that people who are uncomfortable with this type of behavior need to just stay put, don’t make noise, and “avoid eye contact.” After all, asking someone to turn down their music could get you stabbed. You just need to keep your head down and you’ll be fine. That’s apparently all it takes, right? Except for Iryna Zarutska, who quietly sat down in front of a visibly deranged, pacing man on the bus, only to be stabbed to death shortly after. Or the young woman in the Chicago subway who was randomly lit on fire by a severely mentally ill subway rider? Or Michelle Go, the woman who was pushed in front of a subway to her death in New York City by a total stranger?…
Since 2009, assaults on public transit in New York City have tripled…Subway assaults also often involve strangers. When the attack is sexual, the victim is almost always a woman—and New York City alone accounts for around 4,000 sex crimes on public transit every year. These cases are likely underreported and limited to more severe crimes. Many women experience flashing, sexual harassment, groping, and public masturbation, and then never report it, assuming nothing would come of the report. (And honestly? They’re correct.)
In fact, we have evidence that this fear is very rational. When BART installed ticket gates at their train stations that prevented people from riding for free — over the loud objections of progressives — crime on the train went down by 54%, and the amount of disorder and bad behavior on the train absolutely collapsed:

Fear of crime — often rational fear — also stops people from allowing train stations and bus stops in their neighborhood in the first place. There are a number of studies linking train stations and bus stops to increased crime, both in the immediate area and in areas connected to the same transit line. Criminals ride the bus and the train, so in a high-crime country like America, people don’t want trains and buses in their neighborhood. This is probably a big reason why almost no U.S. city has a good train system.
In other words, while car-centric suburbanization is partially about people wanting lots of cheap land and big houses and peace and quiet, part of it is a defense-in-depth against America’s persistently high crime rates.
As an American, when you go to a European city or an Asian city — or even to Mexico City — and you see pretty buildings and peaceful clean streets and there are nice trains and buses everywhere, what you are seeing is a lack of crime. The lack of crime is why people in those countries ride the train, and encourage train stations to be built in their neighborhoods instead of blocking them. The lack of crime is why people in those countries embrace dense living arrangements, which in turn enables the walkable mixed-use urbanism that you can enjoy only on vacation.
In other words, this tweet is right:
Of course, urbanism is not the only thing that benefits from low crime rates — health costs are lower, families are more stable, and of course fewer people die. But the big differences that Americans notice between the quality of life in their own cities and the seemingly better quality of life in other countries that are less rich on paper are primarily due to the fact that those other countries have gotten crime largely under control, while the U.S. has not.
As for the root causes of American crime, and what policies might bring it down to a more civilized level, that’s the subject for another post. The point of today’s post is simply to say that we can’t ignore our country’s sky-high crime rates just because we’ve lived with them our whole lives. Nor should we comfort ourselves with the fact that crime is down from the recent highs of 2021. We are still living in a country that has been devastated by violence and public disorder, and which has never really recovered from that. Someday soon we should think about getting around to fixing it.
Update: As if on cue, here’s a paper by Bencsik and Giles showing that electing a Republican prosecutor reduces crime and mortality rates:
This paper investigates the causal relationship between approaches taken by local criminal prosecutors—also called district attorneys—and community-level mortality rates. We leverage plausibly exogenous variation in prosecutorial approaches generated by closely contested partisan prosecutor elections, a context in which Republican prosecutorial candidates are commonly characterized as “tougher on crime.” Using data from hundreds of closely contested partisan elections from 2010 to 2019…we find that narrow election of a Republican prosecutor reduces all-cause mortality rates among young men ages 20 to 29 by 6.6%. This decline is driven predominantly by reductions in firearm-related deaths, including a large reduction in firearm homicide among Black men and a smaller reduction in firearm suicides and accidents primarily among White men. Mechanism analyses indicate that increased prison-based incapacitation explains about one third of the effect among Black men and none of the effect among White men. Instead, the primary channel appears to be substantial increases in criminal conviction rates across racial groups and crime types, which then reduce firearm access through legal restrictions on gun ownership for the convicted.
On one hand, progressives have to reckon with the fact that their prosecutors’ soft-on-crime approach is getting a bunch of Black men needlessly killed. On the other hand, conservatives need to reckon with the fact that the most important mechanism seems to be preventing people from owning guns. More about both of those things in the follow-up post.
1. The remaining difference is almost entirely due to traffic accidents, suicide, and violent crime.
2. I excluded a few small Caribbean nations like Trinidad and Tobago, the Bahamas, and Guyana.
3. At this point, someone in the comments will ask me about Dobson (2002), who claimed that medical advances that prevent gunshot victims from dying have masked a big increase in attempted homicides. But we have tons of recent survey data on rates of violent crime victimization, and there was definitely a huge decline in assaults, gun violence, and so on in the 1990s. As for the difference between today and the 1930s, a more likely explanation is that many attempted murders went unreported or unprosecuted back then.
4. Note that progressives tend to staunchly oppose getting homeless people off the streets. When Zohran Mamdani reinstated homeless sweeps after realizing that pausing them would lead homeless people to die en masse from exposure to the elements, progressive activists were outraged.