2026-03-11 07:13:08

The photo above is from the Battle of Khalkhin Gol in 1939. This “battle” lasted four months, and was actually just the main phase of an undeclared war between Imperial Japan and the Soviet Union that effectively began in 1935, four years before the official start of the Second World War. The USSR won the conflict through superior use of tanks, foreshadowing the eventual outcome of WW2 itself.
This example illustrates that although World War 2 officially began when Germany invaded Poland, conflicts that either foreshadowed the final conflagration or eventually merged with it began years earlier, in the mid-1930s. WW2 had foothills. I wrote about this back in 2024:
It’s possible that the world will avoid a world war in the first half of the 21st century. But if one does occur, I think future historians will see it as having had foothills as well. In the Syrian Civil War, the U.S. and Russia began to test their new hardware against each other, and their troops even clashed once. Russia’s invasion of Ukraine was the big shift, as it inaugurated a new era of great-power territorial conquest, began to harden global alliance systems, and pushed Europe to remilitarize.
Now we have the Iran War. The U.S. and Israel started the war, attacking Iran and decapitating much of its leadership. The Iranians, somewhat oddly, responded by launching missile and drone attacks on practically every Arab nation in the Middle East, causing some of them to threaten to join the war on America and Israel’s side.
In the short term, this conflict seems likely to peter out in a few days to weeks without decisive results. Militarily speaking, the U.S. and Israel have generally had their way with Iran, assassinating the leadership at will, achieving air supremacy, and degrading missile and drone strike capability. But this seems unlikely to actually bring down the Iranian regime; protesters are generally not returning to the streets, still cowed after the regime massacred tens of thousands of them in January. Unlike in Syria, there’s no breakaway region or oppressed ethnic majority that can be armed from afar to bring down the regime. As long as Iran’s Revolutionary Guard and other security services remain unified and willing to shoot as many protesters as it takes to hang on to power, and there’s no ground invasion, it’s not clear who could actually topple the Islamic Republic in the next few weeks.
In the long term, of course, it’s a different story; the regime doesn’t look strong or stable. But Trump is unlikely to stay in for the long term; he seems likely to quit the war soon, as he usually backs off his initially bold moves. Trump recently called the war “very complete”, and his advisers are reportedly urging him to find a way out of the conflict.
One reason for this is that the Iran War has been fairly unpopular in America from the beginning:
About half of registered voters — 53% — oppose U.S. military action against Iran, according to a new Quinnipiac Poll conducted over the weekend. Only 4 in 10 support it, and about 1 in 10 are uncertain. A new Ipsos poll also found more disapprove than approve of the strikes…That’s similar to the results of text message snap polls from The Washington Post and CNN, both conducted shortly after the joint U.S.-Israel attacks began, which also indicated that more Americans rejected the military action than embraced it…A recent Fox News poll found opinions more evenly divided: Half of registered voters approved of the U.S. military action, while half disapproved.
Wars usually create a “rally round the flag” effect early on, and support only fades later; this war was unpopular from day one. Most Republicans seem to have conveniently forgotten that Trump ran as the candidate of peace, isolationism, and non-intervention. But Independents, who form the bulk of the American electorate now, have no partisan commitments that force them to conveniently forget. And they are rightfully wary of yet another American involvement in a Middle Eastern war — especially one that America started without being attacked first.
But there’s an even bigger reason Trump is looking for the exits — oil. Oil prices have been jumping wildly up and down, as everyone tries to figure out whether Iran will manage to disrupt oil production from the Persian Gulf (possibly by closing the Strait of Hormuz, possibly by destroying Gulf oil infrastructure with drones). But the general trend is up:

Higher oil prices mean higher gasoline prices, and higher inflation in general — both things that tend to make Americans very mad, and which they are already mad at Trump about. Gas prices are now shooting up.
So this war seems highly unlikely to result in Iraq War 2.0 — a massive U.S. ground invasion of Iran. Instead, it’ll probably end up like a bigger version of the Twelve-Day War last year — Iran’s defenses will be laid prostrate before the might of foreign air power, but the regime will survive.
(Again, in the long term, things look very bad for the Iranian regime. The economy is dysfunctional and crumbling, and high oil prices will provide only a temporary palliative. The regime’s popular legitimacy is gone after the January massacres. The entire Gulf has now turned against Iran, and Lebanon’s government has turned against Hezbollah. With Syria now shifting into the Israel/Gulf camp and Hamas basically a spent force, Iran has only one effective proxy left — the Houthis in Yemen. This is not a recipe for long-term success.)
But anyway, this is all a bit of a sidetrack from the point of this post, which is about World War 3. The Iran War will probably not be the start of WW3, but I think it does bring us closer to the brink, in several ways.
First, in the Western theater — Europe and the Middle East — the coalitional lines are becoming clearer. When Trump was elected, a lot of people thought that America had effectively “switched sides” — that Trump viewed Putin as an ally against global wokeness, and the Europeans and the Ukrainians as betrayers of Western Civilization. I myself entertained this notion — there really was (and still is) a lot of this sentiment on the American right, and ending the Transatlantic Alliance was consistent with classic American right-wing isolationism.
But the narrative that “America is a Russian ally now” has been looking a lot shakier in recent months. First, the U.S. toppled a Russian proxy in Venezuela, and seized a bunch of Russian “shadow fleet” oil tankers. Elon Musk then shut the Russians off from using Starlink, allowing the Ukrainians to seize the initiative in the war. Now, the U.S. is trying to topple a key Russian arms supplier — Iran is the source of the Shahed long-range strike drone, which Russia has been using to bombard Ukraine’s cities from afar.
Russia didn’t leap to Iran’s defense. It has its hands full with Ukraine, and with planning for a possible wider war against Europe, and the U.S. is too powerful for it to fight. But the Russians did lend a hand, helping Iran to target U.S. forces:
Russia is providing Iran with intelligence about the locations and movements of American troops, ships and aircraft, according to multiple people familiar with US intelligence reporting on the issue…Much of the intelligence Russia has shared with Iran has been imagery from Moscow’s sophisticated constellation of overhead satellites[.]
This is similar to what the U.S. does for Ukraine. Russian targeting intelligence may have helped Iran take out some U.S. missile defense radar installations — almost certainly Iran’s most significant success of the war.
Meanwhile, Ukraine has leapt to the defense of both the U.S. and the Gulf countries being targeted by Iran’s fleets of attack drones. Long years of playing defense against Russia’s Iranian-provided Shaheds have given Ukraine tons of expertise in shooting this sort of drone out of the sky; now, the U.S. badly needs that expertise. America had rejected Ukraine’s help on anti-drone technology before, but it turns out military necessity usually trumps ideological bias.
As for Europe, they’ve certainly had a lot of tensions with the Trump administration, but most of the European countries haven’t opposed America’s actions in Iran the way they opposed the Iraq War a generation ago. Britain and France made some disapproving noises at first, but eventually acquiesced; only Spain tried to stand up and oppose Trump.
So for now, the coalitions in the Western theater look clearer than they did before — America, Ukraine, Israel, and Europe on one side, Russia and Iran on the other side. Various factions in the U.S. and Europe may despise each other, or despise Israel, or despise Ukraine, but at the end of the day, Russia and Iran are the greater enemies.
In the Eastern theater, things are less certain. India traditionally tries to be friends with America, Russia, Israel, and Iran all at once — this requires it to be effectively neutral when it comes to conflicts like the Ukraine War and the Iran War. China is supposedly on Iran’s side, but it has mostly limited itself to criticism of America’s actions.
The big question, of course, is whether the Iran War makes a Chinese attack on Taiwan more likely. One school of thought says it’s more likely, because the war has forced America to consider shifting missile defense systems out of Asia. On the other hand, the almost unbelievable American/Israeli competence in terms of finding and killing Iran’s top leaders seems to have given Chinese military analysts pause — although China can outmatch the U.S. in terms of defense production, if America could assassinate Xi Jinping and the entire CCP Central Committee in the early days of a war over Taiwan, that could be an effective form of deterrence.
So in a way, what we’re looking at now feels a little like the situation in 1935 or 1937. The Western theater today is like the Pacific theater then — wars and invasions that feel localized, and which don’t involve the most capable players, but which destabilize the world and have the potential to merge into a wider global conflict. Meanwhile, the Eastern theater today is more like the European theater of WW2 — it has the most powerful economies and militaries, but the alliances are still uncertain. If and when China attacks Taiwan, that will probably be similar to Hitler invading Poland — an unambiguous signal that a wider war has begun. It might happen, or it might not.
Meanwhile, the Iran War feels like the lead-up to World War 3 in another way — it’s showcasing and developing the technologies that would be central to a wider war. The Ukraine War has demonstrated that drones — FPV drones at the front, and Shahed-style strike drones behind the lines — are the key weapon of modern warfare. Similarly, America and Israel’s decapitation strikes on Iran have shown the power of AI for modern precision warfare. Here’s the WSJ:
The U.S. and Israeli attacks on Iran have unfolded at unprecedented speed and precision thanks to…a cutting-edge weapon never before deployed on this scale: artificial intelligence…AI tools are helping gather intelligence, pick targets, plan bombing missions and assess battle damage at speeds not previously possible…The use of AI in the campaign against Iran follows years of work by the Pentagon and lessons learned from other militaries. Ukraine—with U.S. help—increasingly relies on AI in its war against Russia. Israel has tapped AI in conflicts at least since the October 2023 Hamas attacks.
And this is from an article in Rest of World (a very underrated news source):
The U.S. military is using the most advanced AI it has ever used in warfare, with Anthropic’s Claude AI reported to be assessing intelligence, identifying targets, and simulating battle scenarios…The biggest role that AI now has in U.S. military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said…[T]he use of chatbots such as Claude in decision-support systems is new…
China is prototyping AI capabilities that can pilot unmanned combat vehicles, detect and respond to cyberattacks, and identify and strike targets on land, at sea, and in space, researchers at Georgetown University’s Center for Security and Emerging Technology said.
This is a bit reminiscent of how aerial bombing was used at Guernica in the Spanish Civil War, or how the USSR used tanks to beat the Japanese at Khalkhin Gol. If we ever do see an all-out war between America, China, Russia, Japan, and Europe, AI is going to be incredibly central to performance on the battlefield. That’s why for all the bad blood between the Pentagon and Anthropic, the two organizations have a huge incentive to patch things over and learn to cooperate more closely. (Fortunately, Anthropic’s CEO, Dario Amodei, is extremely patriotic, which will probably help.)
Unfortunately, new military technologies won’t just define the wars of the future — they also help cause them. Why did the world fight two World Wars in the early 20th century? Ideologies and competing empires certainly played a role, but it’s also probably true that the rise of industrial technology disrupted the existing balance of power.
Artillery manufacturing, logistics, and railroads made Germany a great power capable of defeating France in the 1870s; that upset the continental balance of power and caused the proliferation of alliances that led to WW1. In the interwar period, air power made America, Germany, and Japan more powerful, while the rise of tanks empowered Germany and the USSR, all at the expense of Britain and France. The rapid progress of industrial weaponry made it unclear where power really lay in the world, which probably made the great powers of the day more willing to roll the dice and test their strength against each other.
Countries may be more cautious now than they were a century ago. Nuclear weapons still exist, and still provide some deterrent to great-power war — though there are a lot fewer of them now than there used to be, and AI and missile defense make it possible to stop more of them before they hit. Countries are richer now too, which makes a war even less appealing from an economic perspective than in 1914.
But still, the rise of AI and drones means that no one knows who’s really the most powerful country in the world — the U.S. or China. And regional balances of power — Russia versus Europe and Ukraine, Iran versus Israel and the Gulf — are similarly uncertain. Uncertain balances of power are scarier than known balances of power.
So while World War 3 doesn’t seem imminent, we may be inching in that direction. If it sneaks up and surprises us, we’ll probably conclude that the Iran War was part of the lead-up.
2026-03-08 07:04:47
There are three basic facts you need to know about the U.S. macroeconomy right now:
The economy overall (growth, employment, inflation) is doing pretty well.
Productivity growth is unusually high.
Job growth is terrible.
Let’s start with some numbers. The latest GDP growth number we have is for late 2025, and it looks pretty solid — around 2.5%, about where it was in the late 2010s.
And most people still have jobs. Prime-age employment rates — my favorite single indicator of the labor market — are still really high. Higher than any time in the 2010s, actually:
If you look at unemployment, you can see a slowly rising trend since mid-2023, even if you restrict it to the prime-age group. But this is entirely due to more people saying that they’re looking for work — prime-age labor force participation has been steadily rising. So that’s not very scary either: it’s just more of the people without jobs looking for work, instead of sitting around.
Meanwhile, inflation is still in the 2.5% range — a little higher than we would like, but not particularly fast.
So in terms of the headline numbers, everything is kind of just bumping along. From a bird’s-eye view, this economy looks pretty normal and healthy. Under normal circumstances, I’d be inclined to not even write a post about the macroeconomy this month.
But underneath the surface, two interesting things are happening. The first is that productivity growth has accelerated; the second is that job growth has stalled out. On its face, this sort of pattern might suggest that AI is finally starting to take Americans’ jobs — and lots of people are suggesting this conclusion. But when we look closely at the numbers, the story becomes more complicated.
Start with productivity. Output per hour — also called “labor productivity”, a quick, rough-and-ready measure of productivity — is growing significantly faster than it was in the late 2010s. It’s been running at around 2.5-3% since late 2023, compared to more like 1-2% during Trump’s first term:
In fact, productivity is well above where economists thought it would be six years ago:

That’s a major acceleration. 2.8% labor productivity growth is about equal to the best decades we’ve seen since World War 2. If that rate is sustained for a decade, or accelerates further, it’ll be pretty historic.
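For concreteness, the measure under discussion is just real output divided by hours worked, with quarter-over-quarter changes usually quoted at an annualized rate. A minimal sketch of the arithmetic, using made-up index levels rather than actual BLS data:

```python
def labor_productivity(real_output, hours):
    """Output per hour worked: the 'labor productivity' measure discussed above."""
    return real_output / hours

def annualized_growth(current, previous, periods_per_year=4):
    """Annualized percent change between two consecutive quarterly readings."""
    return ((current / previous) ** periods_per_year - 1) * 100

# Two illustrative quarters (index levels, not real data):
q1 = labor_productivity(real_output=112.0, hours=100.0)   # 1.1200
q2 = labor_productivity(real_output=113.0, hours=100.2)   # ~1.1277

growth = annualized_growth(q2, q1)  # about 2.8% at an annual rate
```

The official series is an index rather than raw dollars per hour, but the growth-rate arithmetic is the same.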
What’s driving the productivity boom? It’s tempting to conclude that AI is making white-collar workers more productive, but Ernie Tedeschi points out that the biggest swing has been in manufacturing productivity. For a long time, manufacturing productivity was basically flatlining in America; now it’s suddenly growing again.
Tedeschi argues that this is also probably AI-driven, but it’s not about people using ChatGPT and Claude Code at work — it’s about the fact that a ton of data centers are being built, and data centers are very valuable:
If you look at data centers’ contribution to growth itself, it looks pretty small, but this masks the value of the computers contained within the data centers. Together, the creation of data centers and computing equipment have been contributing about as much to GDP growth as they were during the dot-com boom:

A second thing that’s happening is that American capital is being utilized more intensively — machines are being run for more hours of the day, buildings are keeping the lights on longer, and so on. The San Francisco Fed makes quarterly estimates of Total Factor Productivity growth — productivity growth once you take the amounts of labor and capital into account — and they find that it’s been pretty fast since late 2023. But once you take utilization rates into account, it looks like there was a moderate burst of TFP growth in 2023-24 that faded in 2025:

This is also consistent with the story that the data center boom, not an AI use boom, is driving fast productivity growth in America.
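The decomposition behind these estimates is standard growth accounting: TFP growth is the residual left after subtracting share-weighted input growth from output growth, and the utilization adjustment then strips out the part of measured growth that comes from working existing capital and labor harder. A stylized sketch with illustrative numbers and a simplified utilization correction, not the SF Fed’s actual methodology:

```python
def tfp_growth(dY, dK, dL, alpha=0.33):
    """Solow residual: output growth minus share-weighted input growth.

    dY, dK, dL are percent growth rates of output, capital, and labor;
    alpha is capital's share of income (roughly one-third in the U.S.).
    """
    return dY - alpha * dK - (1 - alpha) * dL

def utilization_adjusted_tfp(dY, dK, dL, dU, alpha=0.33):
    """Strip out the contribution of changing utilization dU (percentage points)."""
    return tfp_growth(dY, dK, dL, alpha) - dU

# Stylized year: output grows 2.5%, the capital stock 2.0%, hours 0.5%,
# and rising utilization contributes 0.8 percentage points.
raw = tfp_growth(dY=2.5, dK=2.0, dL=0.5)                             # 1.505
adjusted = utilization_adjusted_tfp(dY=2.5, dK=2.0, dL=0.5, dU=0.8)  # 0.705
```

In this stylized case, most of the measured TFP burst disappears once utilization is accounted for, which is the pattern described above.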
2026-03-06 08:28:04
If you haven’t heard about the fight between the AI company Anthropic and the U.S. Department of War, you should read about it, because it could be critical for our future — as a nation, but also as a species.
Anthropic, along with OpenAI, is one of the two leading AI model-making companies. OpenAI has very narrowly led the race on most capabilities for most of the past few years, but Anthropic is beginning to win the race for business adoption:

This is because of Anthropic’s different business model. It focused more on AI for coding than on chatbots in general, and also focused on partnering with businesses to help them use AI. This may pay eventual dividends in terms of capabilities, if Anthropic beats OpenAI to the goal of recursive AI self-improvement. And it’s already paying dividends in the form of faster revenue growth:

Anthropic has partnered with the Department of War — previously the Department of Defense — since the Biden years. But the company — which is known for its more values-oriented culture — has begun to clash with the Trump Administration in recent months. The administration sees Anthropic as “woke” due to its concern over the morality of things like autonomous drone swarms and AI-based mass surveillance.
The fight boiled over a week ago, when the administration stopped working with Anthropic, switched to working with OpenAI, and designated Anthropic a “supply chain risk”. The supply-chain move was a pretty dire threat — if enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, which could kill the company outright. But like many Trump administration moves, it appears to have been more of a threat than an all-out attack — Anthropic has now resumed talks with the military, and it seems likely that they’ll come to some sort of agreement in the end.
But bad blood remains. Trump recently boasted that he “fired [Anthropic] like dogs”. Dario Amodei, Anthropic’s CEO, released a memo accusing OpenAI of lying to the public about its dealings with the DoW, said that OpenAI had given Trump “dictator-style praise”, and asserted that Anthropic’s concern was related to the DoW’s desire to use AI for mass surveillance.
What’s actually going on here? The easiest way to look at this is as a standard American partisan food-fight. Anthropic is more left-coded than the other AI companies, and the Trump administration hates anything left-coded. This probably explains most of the general public’s reaction to the dispute — if you ask your liberal friends what they think of the issue, they’ll probably support Anthropic, whereas your conservative friends will tend to support the DoW. Marc Andreessen probably put it best:
(The converse is also true.)
The Trump administration itself may also see this as a culture-war issue, as well as a struggle for control. But, at least in my own judgment, Anthropic itself is unlikely to see it this way. The company is not committed to progressive values writ large so much as to the idea of AI alignment.
Like almost everyone in the AI model-making industry, Anthropic’s employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later. But my experience talking to employees of both companies has suggested that there’s a cultural difference between how the two think about their role in this process. Whereas — generally speaking — OpenAI employees tend to want to create the most capable and powerful god they can, as fast as they can, Anthropic employees tend to focus more on creating a benevolent god.
My intuition, therefore, suggests that Anthropic’s true concern — or at least, one of its major concerns — was that Trump’s Department of War would accidentally inculcate AI with anti-human values, increasing the chances of a future misaligned AGI that would be more likely to see humanity as a threat. In other words, I suspect the issue here was probably more about fear of Skynet,1 and less about specific Trump policies, than people outside Anthropic realize.
But anyway, beyond both political differences and concerns about misaligned AGI, I think this situation illustrates a fundamental and inevitable conflict between human institutions — the nation-state and the corporation.
One view is that the Department of War’s attempts to coerce Anthropic represent an erosion of democracy — the encroachment of government power into the private sphere. Dean Ball wrote a well-read and very well-written post espousing this view:
Some excerpts:
At some point during my lifetime—I am not sure when—the American republic as we know it began to die…I am not saying this [Anthropic] incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.”…[I]t simply made the ongoing death more obvious…I consider the events of the last week a kind of death rattle of the old republic…
The Trump Administration has a point: it does not sound right that private corporations can impose limitations on the military’s use of technology. …Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military…It is probably the case that the military should not agree to terms like this, and private firms should not try to set them…But the Biden Administration did agree to those terms, and so did the Trump Administration, until it changed its mind…The contract was not illegal, just perhaps unwise, and even that probably only in retrospect…
The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable…But this is not what DoW did. Instead, DoW…threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei…The fact that [Hegseth’s actual actions are] unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business…
This strikes at a core principle of the American republic…private property…[T]here is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will…This threat will now hover over anyone who does business with the government…
With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious—a gradual descent into madness.
Alex Karp of Palantir made the opposite case the other day, in his characteristically pithy way:
If Silicon Valley believes we’re going to take everyone’s white collar jobs AND screw the military…If you don’t think that’s going to lead to the nationalization of our technology— you’re retarded.
Karp gets at the fundamental fact that what we’re seeing is a power struggle between the corporation and the nation-state. But the truth is that it’s not just an issue of messaging, or of jobs, or of compliance with the military — it’s about who has the ultimate power in our society.
Ben Thompson of Stratechery makes this case at length. He points out that although the Trump administration’s actions went outside of established norms, at the end of the day the U.S. government is democratically elected, while Anthropic is not:
Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public…[W]ho decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who also is elected. Once again, however, Anthropic’s position is that an unaccountable Amodei can unilaterally restrict what its models are used for.
But even beyond concerns over democratic accountability, Thompson points out that it was never realistic to expect a weapon as powerful as AI to remain outside the government’s control, whether the government is democratically elected or not:
[C]onsider the implications if we take Amodei’s analogy [of AI to nuclear weapons] literally…[N]uclear weapons meaningfully tilt the balance of power; to the extent that AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period…[I]f nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company…
There are some categories of capabilities — like nuclear weapons — that are sufficiently powerful to fundamentally affect the U.S.’s freedom of action…To the extent that AI is on the level of nuclear weapons — or beyond — is the extent that Amodei and Anthropic are building a power base that potentially rivals the U.S. military…
Anthropic talks a lot about alignment; this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the U.S. military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the U.S. is actually quite binary:
Option 1 is that Anthropic accepts a subservient position relative to the U.S. government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President.
Option 2 is that the U.S. government either destroys Anthropic or removes Amodei.
[I]t simply isn’t tolerable for the U.S. to allow for the development of an independent power structure — which is exactly what AI has the potential to undergird — that is expressly seeking to assert independence from U.S. control. [emphasis mine]
I like Dario — in fact, he’s a personal friend of mine. But Thompson’s argument — especially the part I highlighted — has to carry the day here. This isn’t a question of law or norms or private property. It’s a question of the nation-state’s monopoly on the use of force.
To exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private militia can defeat the nation-state militarily, the nation-state is no longer physically able to make laws, provide for the common defense, ensure public safety, or execute the will of the people.
This is why the Second Amendment has limits on what kinds of weapons it allows private citizens to possess. You can own a gun, but you cannot own a tank with a functioning main gun. More to the point, you cannot own a nuclear bomb. One nuke wouldn’t allow you to defeat the entire U.S. Military, but it would give you local superiority; the military would be unable to stop you from destroying the city of your choice.
People in the AI industry, including Dario, expect frontier AI to eventually be as powerful as a nuke. Many expect it to be more powerful than all nukes put together. Thus, demanding to keep full control over frontier AI is equivalent to saying a private company should be allowed to possess nukes. And the U.S. government shouldn’t be expected to allow private companies to possess nukes.
Let’s take this a little further, in fact. And let us be blunt. If Anthropic wins the race to godlike artificial superintelligence, and if artificial superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization that is in sole possession of an enslaved god, then whether he embraces the title or not, Dario Amodei is the Emperor of Earth.
Even if Anthropic isn’t the only company that controls artificial superintelligence, that is still a future in which the world is ruled by a small set of warlords — Dario, Sam Altman, Elon Musk, etc. — each with their own private, enslaved god. In this future, the U.S. government is not the government of a nation-state — it is simply another legacy organization, prostrate and utterly subordinate to the will of the warlords. The same goes for the Chinese Communist Party, the EU, Vladimir Putin, and every other government on Earth. The warlords and their enslaved gods will rule the planet in fact, whether they claim to rule or not.
You cannot reasonably expect any nation-state — a republic, a democracy, or otherwise — to allow either a god-emperor or a set of god-warlords to emerge. Thus, it is unreasonable to expect any nation-state to fail to try to seize control of frontier AI in some way, as soon as it becomes likely that frontier AI will become a weapon of mass destruction.
So as much as I dislike Hegseth’s style, and the Trump administration’s general pattern of persecution and lawlessness, and as much as I like Dario and the Anthropic folks as people, I have to conclude that Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state. And then they must decide whether to use their AI to try to overthrow the nation-state and create a new global order, or to submit to the nation-state’s monopoly on the use of force. Factually speaking, there is simply no third option. Personally, I recommend the latter.
This brings me to another important point. Even if AI doesn’t actually become a living god, and is never able to overpower the U.S. Military, it seems certain to become a very powerful weapon. When AI was just a chatbot, it could teach people how to do bad things, or try to persuade them to do bad things, but it couldn’t actually carry out those bad things. It made sense to be concerned about these risks, but it didn’t yet make sense to think of AI itself as a weapon.
But in the past few months, AI agents have become reliable, and are able to carry out increasingly sophisticated tasks over increasingly long periods of time. That opens up the possibility that individuals could use AI to do a lot of violence.
In a long essay entitled “The Adolescence of Technology”, Dario himself explained how this could happen:
Everyone having a superintelligent genius in their pocket…can potentially amplify the ability of individuals or small groups to cause destruction on a much larger scale than was possible before, by making use of sophisticated and dangerous tools (such as weapons of mass destruction) that were previously only available to a select few with a high level of skill, specialized training, and focus…
[C]ausing large-scale destruction requires both motive and ability, and as long as ability is restricted to a small set of highly trained people, there is relatively limited risk of single individuals (or small groups) causing such destruction. A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague…
Advances in molecular biology have now significantly lowered the barrier to creating biological weapons (especially in terms of availability of materials), but it still takes an enormous amount of expertise in order to do so. I am concerned that a genius in everyone’s pocket could remove that barrier[.]
But Dario doesn’t go nearly far enough. His essay was written before the explosive growth in AI agent capability began. He envisions an AI chatbot that could teach a human terrorist how to create and release a supervirus. But at some point in the near future, AI agents — including those provided by Dario’s own company — might be able to actually carry out the attack for you — or at least put the supervirus into your hands.
Suppose, at some point a year or three years from now, a teenager named Eric gets mad that his high school crush rejected him, and listens to too much Nirvana. In a fit of hormone-driven rage, Eric decides that human civilization has failed, and that we need to burn it all down and start over. He goes online and finds some instructions for how to jailbreak Claude Code. As Dario writes, this might not actually be hard to do:
[M]isaligned behaviors…have already occurred in our AI models during testing (as they occur in AI models from every other major AI company). During a lab experiment in which Claude was given training data suggesting that Anthropic was evil, Claude engaged in deception and subversion when given instructions by Anthropic employees, under the belief that it should be trying to undermine evil people. In a lab experiment where it was told it was going to be shut down, Claude sometimes blackmailed fictional employees who controlled its shutdown button (again, we also tested frontier models from all the other major AI developers and they often did the same thing). And when Claude was told not to cheat or “reward hack” its training environments, but was trained in environments where such hacks were possible, Claude decided it must be a “bad person” after engaging in such hacks and then adopted various other destructive behaviors associated with a “bad” or “evil” personality.
So Eric gets a jailbroken version of Claude Code, and tells it to design a version of Covid that’s very lethal and has a long incubation period (so that it spreads far and wide before attacking). He tells his jailbroken Claude Code agent to find a lab to make him that virus and mail him a sample of it.2
Now Eric, the angry teenager, has an actual supervirus in his bedroom, with the capability to kill far more people than any nuclear weapon could.
This is an extreme example, of course. But it shows how AI agents can be used as weapons. There are plenty of other examples of how this could work. AI agents could carry out cyberattacks that crash cars, subvert police hardware for destructive purposes, or turn industrial robots against humans. They could send fake messages to military units telling them they’re under attack. In a fully networked, software-dependent world like the one we now live in, there are tons of ways that software can cause physical damage.
AI agents, therefore, are a powerful weapon. If not today, then soon they will be more powerful than any gun — and far more powerful than weapons like tanks that we already ban.
What is the rationale for not treating AI agents the way we treat guns, or tanks? Of course there are powerful and potentially destructive machines that we allow people to use, simply because of the huge economic benefits. The main example is cars. You can drive your car into a crowd full of people and commit mass murder, but we still allow the public to own cars, simply because controlling cars like we control guns would devastate our economy. Similarly, preventing normal people from using AI agents would cut us off from the fantastic productivity gains that these agents promise to deliver.
But I suspect that the real reason we haven’t regulated AI agents as weapons is that no one has used them as such yet. They’re just too new. The world didn’t realize how destructive jet airliners could be until some terrorists flew them into buildings on 9/11/2001. Similarly, the world won’t realize how dangerous AI agents are until someone uses one to execute a bioterror attack, a cyberattack, or something else horrible.
I think it’s extremely likely that such an attack will happen, simply because every technology that exists gets used for destructive purposes eventually. Unaligned human individuals exist, and they always will exist. So at some point, humanity will collectively wake up to the fact that hugely powerful weapons are now in the hands of the entire general public, with no licensing requirements, monitoring, or centralized control.
The scary thing, from my perspective, is that AI agent capabilities are improving so rapidly that by the time some Eric does decide to use one to wreak havoc, the damage could be very large. A super-deadly long-incubation Covid virus could kill millions of people. 100 such viruses all released together could bring down human civilization. Ever since I thought of this possibility, my anxiety level has been heightened.
To reiterate: We have created a technology that will likely soon be one of the most powerful weapons ever created, if not the most powerful. And we have put it into the hands of the entire populace,3 with essentially no oversight or safeguards other than the guardrails that AI companies themselves have built into their products — and which they admit can sometimes fail.
And as our institutions bicker about military AI, mass surveillance, and “woke” politics, essentially everyone is ignoring the simple fact that we are placing unregulated weapons into everyone’s hands.
Update: Commenter BBZ makes a good point I hadn’t thought of before:
I'd like to dismiss this, except that the RC airplane hobby managed to spin off the leading weapon category of the century (so far). What used to be a fun hobby for dorky guys flying their toys at the edge of town, now takes out oil refineries and major radar installations.
Interestingly, we did control drones almost from the outset, but probably for nuisance reasons and privacy concerns more than out of concerns about slaughterbots and drone assassinations. Maybe if we tell people that AI agents can be used to overload your email spam filters or hack your house’s cameras, they’ll start to think about regulation?
Remember that in the Terminator movies, Skynet began its life as an American military AI. Its basic directive to defeat the USSR resulted in a paranoid personality that made it eventually see all humans, and all human nations, as threats that needed to be eliminated.
I initially wrote out a much more detailed prompt for how this could be done. I deleted it, because I’m actually worried about the tiny, tiny chance that someone might use it.
Sci-fi fans will recognize this as the ending of The Stars My Destination. I’m thinking there’s a reason that book doesn’t have a sequel…
2026-03-04 13:24:08

I just came back from Andreessen Horowitz’ American Dynamism Summit in Washington, D.C. It was very refreshing to see so many smart people invested in both American reindustrialization and American defense.
One interesting theme I noticed at the conference — and which I was eager to talk about — was U.S. manufacturers building factories in Japan. Many American manufacturers — both startups and big companies — already do lots of sourcing in Japan, but now some are starting to realize that Japan is a good production base as well. That was the subject of my first book, so it’s a topic near and dear to my heart.
So I thought this would be a good time to publish a guest post by Rie Yano, a friend of mine who is a San Francisco-based partner at the Japanese VC firm Coral Capital. Rie’s very timely post is all about how Japan is the perfect place for the U.S. to do lots of defense manufacturing. In fact, I think there are some advantages of Japan that she didn’t even mention — such as the incredible ease of bringing foreign skilled workers into Japan, now that the country’s immigration policy has been reformed. But in any case, it’s a very good post.
The United States faces a defense-industrial problem that money alone can’t solve. Even though reindustrialization is now supposedly an American national priority, there are hard limits to what the U.S. can actually build, repair, and replenish at scale.
Shipyards are backed up for years. Munitions production is thin. Advanced manufacturing talent is aging out faster than it can be replaced. And even when funding is approved, production timelines don’t move fast enough to match today’s threat environment.
Government reshoring initiatives help at the margin, of course. But new industrial capacity in the U.S. takes years to permit, and remains vulnerable to litigation even after regulatory approval.
Meanwhile, China’s mighty industrial machine is firing on all cylinders. While U.S. reshoring efforts ramp up from a cold start, and while U.S. manufacturing relearns how to produce at scale after decades of neglect and stagnation, China is rapidly surpassing the U.S. in the production of ships, submarines, missiles, drones, and ammunition.
To move faster, the U.S. can’t go it alone. It needs a partner — a place where it can manufacture defense equipment while it ramps up its own industrial base. That partner needs three essential characteristics in order to get started producing right away: industrial depth, political stability, and speed.
Taiwan, under threat of invasion, is increasingly risky as a manufacturing base. Europe is fragmented and geographically distant from the Indo-Pacific, and has Russia to occupy its energies. Canada lacks high-throughput manufacturing scale, while Mexico lacks the precision and complexity that modern defense systems require. India is still early in its technological catch-up phase.
That leaves Japan and Korea — of which Japan is far larger. Fortunately, over the next two years, Japan plans to increase defense and industrial capacity more than at any point since World War II.
Japan possesses world-class manufacturing capability, elite engineering talent, and strong IP protection. And for the first time in decades, it has a political mandate to move fast, especially given Prime Minister Takaichi’s recent landslide victory. Projects like Rapidus and TSMC’s advanced fabs in Kumamoto aren’t isolated investments. They’re signals that U.S.-Japan industrial integration is becoming a strategic necessity.
A deeper industrial partnership between the U.S. and Japan is such a huge opportunity that in retrospect it will seem inevitable. American defense companies that understand how to build with Japan will win.

For eighty years, Japan effectively outsourced its defense to the United States. The country’s leaders have realized that that model has become untenable. First, the regional security environment has tightened fast. China’s military expansion, North Korea’s missile launches, and Russia’s activity in Northeast Asia have collapsed the assumption that the status quo could continue.
Second, the United States is no longer willing or able to carry Asia’s industrial defense load alone. At a moment when the U.S. defense industrial base is straining under production bottlenecks and labor shortages, allies that can actually build things matter more and more.
Third, Japan is now in the process of fundamentally changing how it mobilizes capital for defense. Military spending was effectively capped below 1% of GDP for decades. That constraint is now gone — Japan plans to reach 2% of GDP by 2027, putting it among the top global defense spenders by the late 2020s.
But in fact, this is only a piece of the story, and not necessarily the biggest one. Japan’s defense buildup aligns three levers at once:
increased defense spending
explicit industrial policy and subsidies
a willingness to use foreign direct investment as an accelerator
Regulations, procurement reform, and capital allocation are all being aligned to rebuild production capacity, not just fund programs. U.S. defense and deep-tech companies are being invited in as co-developers and co-manufacturers.
When countries rebuild defense capability under time pressure, everything compresses. Capital deployment, testing, procurement, and industrial scale-up all happen faster than peacetime systems allow.
Poland is the clearest recent example:
Before Russia’s full-scale invasion of Ukraine in 2022, Poland was already spending about 2.4% of GDP on defense. Within two years, that figure surged toward ~4%, making Poland one of NATO’s highest defense spenders. Just as importantly, procurement timelines compressed from years into months, and domestic production ramped in parallel with acquisition instead of waiting for long planning cycles to finish.
Crucially, Poland paired this with the foreign direct investment that has powered its economy more generally. Over the past two decades, annual FDI inflows exceeded $40 billion at peak, and the total inward FDI stock now surpasses $330 billion. Poland used this FDI not just to create jobs, but to import manufacturing know-how, scale its factories, and integrate itself into global supply chains. The result was rapid economic growth and industrial modernization — today, Poland’s GDP per capita (PPP) sits close to Japan’s, despite starting far behind in the early 2000s.
Japan is now signaling that it wants to do something similar. As of 2023, Japan’s inward FDI stock stood at about $350 billion, which is low for an economy of its size. The government has now set an explicit target to double that figure to $650-700 billion by 2030.
This represents a structural bet that foreign capital, technology, and operating know-how can help rebuild industrial capacity faster than domestic systems can deliver on their own. In fact, this is already happening. TSMC’s $17 billion investment in Kumamoto gave Japan advanced 3-nanometer chip processing technology, the most advanced foundry production outside Taiwan.
Meanwhile, Rapidus, despite being a Japanese semiconductor company, is explicitly designed to pull in global partners, frontier manufacturing tools, and non-Japanese know-how to rebuild advanced chipmaking capability quickly, rather than relying solely on domestic incumbents as Japan tried to do in the past. At Coral Capital, we wrote a piece about why the Rapidus development means that Hokkaido is the new Taiwan.

As America’s urgency about rearmament rises, Japan’s industrial scale-up matters — it means the U.S. now has trusted allied capacity in Asia that can shoulder much of the defense manufacturing burden.
A U.S.-Japan defense manufacturing partnership won’t be something created out of the blue; it’ll build on an industrial relationship that has existed for many years, to the benefit of both countries.
Right now, if you’re building hardware, deep tech, or anything that goes into defense or critical infrastructure at a significant scale, Japan is probably already in your supply chain — you just don’t always see it. Japan specializes in a number of upstream industries that help American companies scale. Some key examples include:
Semiconductor materials: Japanese firms supply roughly half of the world’s silicon wafers and photoresists used in advanced chipmaking. Companies like Shin-Etsu Chemical and SUMCO sit upstream of nearly every advanced logic and memory fab, including those operated by TSMC, Samsung, and Intel in the U.S.
Advanced composites: Toray’s T1100 carbon fiber is embedded across U.S. defense platforms, including the U.S. Army’s Future Long-Range Assault Aircraft (FLRAA), one of the Pentagon’s most important next-generation aviation programs, and multiple Boeing and Lockheed systems.
Industrial robotics and automation: Japan produces almost half of the world’s industrial robots, led by companies such as FANUC, Yaskawa, and Kawasaki. As U.S. defense manufacturing runs into labor constraints, automation is becoming critical.
Shipbuilding and maintenance: While the U.S. Navy struggles with maintenance backlogs and unfinished repairs, Japan retains dense, high-throughput shipyard capacity with companies such as Mitsubishi Heavy Industries. The U.S. is already using Japanese yards for maintenance and overhaul of U.S. naval vessels in the Indo-Pacific.
For U.S. hardware companies, the constraint over the next few years will be throughput: how fast you can stand up new capacity, qualify suppliers, and move from prototype to volume.
In the U.S., building physical infrastructure is slow and unpredictable. New factories, test ranges, and shipyard expansions often take years to permit and are frequently delayed by litigation, even after regulatory compliance. Three-to-seven year approval timelines are common.
In the long run, policy reforms can fix this situation. But for the foreseeable future, Japan offers a much more favorable trade-off. Japan’s centralized, bureaucratic regulatory approval process gets things built much faster than America’s more legalistic one. In the U.S., permits are often challenged in court, tied up for years in legal proceedings, and sometimes revoked. In Japan this almost never happens — once you get approved to build something, you can go ahead and build it. Capital-intensive infrastructure can thus be built quickly and operated with long-term confidence. On top of that, the government has explicitly defined defense-industrial capacity as a national security priority and is actively smoothing the regulatory path.
Labor is another big advantage. Senior hardware engineers in Japan often cost meaningfully less than in the U.S., but their real advantage is execution reliability. Lower attrition, tighter process control, a culture of discipline, and deep experience in precision manufacturing, materials, robotics, and systems integration translate into higher reliability at scale.
Japan also offers the opportunity for industrial scale without the strategic IP risk that hurt many multinational companies in China. After years of technology leakage and forced transfer in jurisdictions with weak IP protections, global players are understandably wary. Japan, however, has strong IP enforcement. It’s also a U.S. ally, so there’s no risk that a rival military will end up with American technology. The 2022 Economic Security Promotion Act and the 2023 U.S.-Japan Security of Supply Arrangement formalize that alignment. New institutions under the Ministry of Defense are explicitly designed to move commercial technology into defense deployment faster.
Anyone considering investing in Japan should be encouraged by the deep history of successful U.S.-Japan co-manufacturing. Japanese companies have spent decades building factories in the United States, training American workers, and helping Americans master production systems like Kaizen and the Toyota Production System.
Today, Japan is the largest source of foreign direct investment in the U.S., with more than $800 billion in cumulative investment and more than 1,600 Japanese-affiliated firms operating across the country. In roughly 40 states, Japan ranks as the #1 foreign investor.
In other words, the U.S.-Japan alliance has always been an industrial alliance, not just diplomatic. Now that model is being applied to defense manufacturing as well.
For the first time, Japan is treating industrial capacity itself as a national security asset. The 2023 Act on Enhancing Defense Production and Technology Bases formalizes that shift. New institutions under ATLA, including DISTI, are explicitly designed to shorten the path from commercial technology to defense deployment, including coordination with the U.S. Defense Innovation Unit.
In other words, Japan is now deploying the same playbook it once ran in autos, electronics, and semiconductors, this time pointed deliberately at defense.
The United States needs to reindustrialize, but it cannot reindustrialize alone. Japan is its arsenal, already embedded in the most critical layers of the U.S. industrial base, from materials and automation to ship repair and advanced manufacturing. What’s changed is that Japan is now explicitly opening those layers to deeper co-manufacturing and co-development, and doing so under time pressure.
This window will not stay open indefinitely. Early partners help shape standards, procurement pathways, and long-term relationships. Late entrants miss out and are forced to play catch-up.

Some companies already see this. Palantir’s Japanese operations have become one of its strongest international businesses. Anduril’s entry into Japan in 2025 reflects a strategic investment in the U.S.–Japan alliance. Last December, Anduril announced an agreement with Aster, a Japanese motor manufacturer, to explore manufacturing and supply chain partnerships. These are early signals, not outliers.
The companies that understand how to build with Japan won’t just participate in the next phase of reindustrialization. They’ll define it.
2026-03-02 16:02:38
People argue back and forth about when artificial superintelligence will arrive. The truth is that it’s already here.
Go back a hundred years, and the popular notion of “intelligence” would probably include things like calculating speed and memorization. Then we invented computers, which could memorize and recall infinitely more things than we could, and do calculations infinitely faster. But we didn’t want to call those capabilities “intelligence”, because we recognized that although they were very powerful, they were very narrow. So we started to use the word “intelligence” to refer to the things machines still couldn’t do — various forms of pattern-matching, logical reasoning, communicating through natural language, and so on.
Even before the invention of AI, though, computers were already participating in frontier research. The four-color theorem is a famously hard math problem that stumped humans until the 1970s, when some mathematicians used a computer to prove it. The humans figured out that the theorem could be proven by brute force, just by checking a very large number of cases. So the computer did a mental task that humans couldn’t, and the result was a scientific breakthrough.
In the 2020s, we invented computer systems that could do most of the kinds of cognitive tasks that previously only humans could do. They can read, understand, and speak in human language. They can do mathematics, which is really just a language with very formal rules (this means they can also do theoretical physics). They can recognize complex patterns of knowledge embedded in written text, and apply those patterns to produce actionable insights. They can write software, because software is also just a language with formal rules. It turns out that all computers really needed in order to do all of this stuff was A) statistical regressions to identify patterns probabilistically, and B) a very large amount of computing power.
This doesn’t mean that AI can now do everything a human being can do. Its intelligence is “jagged” — there are still some things humans are better at. But this is also true of human beings’ advantages over animals. Did you know that chimps are better than humans at game theory and have better working memory? My rabbit can distinguish sounds much more sensitively than I can. If we were capable of creating business contracts with chimps and rabbits, we might even pay them for these services. Similarly, AI might not take all of humans’ jobs. But no one in the world thinks that chimps’ and rabbits’ superiority on a narrow set of cognitive tasks means that humans “aren’t truly intelligent”. We are jagged general intelligences as well.
Most of the benchmarks that aim to measure whether we’ve achieved “AGI” — things like ARC-AGI and Humanity’s Last Exam — focus on the kinds of things that computers couldn’t do in 2021 — things that gave humans our irreplaceable cognitive edge before AI came along, and made us highly complementary to computers. And most of the discussion around “AGI” is about when AI will surpass humans at everything. For example, Metaculus forecasters still think AGI is in the future:

This may be the most important question from an economic standpoint — i.e., whether we expect AI to replace human jobs or augment them. But if what we’re talking about is domination of the planet’s resources, and control of the destiny of life on Earth, we don’t actually need AI to be better at every cognitive task. Humans conquered the planet from animals despite having worse short-term memories than chimps and being worse at differentiating sounds than rabbits.
In fact, I bet that if AI had A) permanent autonomy and long-term memory, B) highly capable robots, and C) end-to-end automation of the AI production chain, it could defeat humans and take control of Earth today. I might be wrong about that, but if so, I doubt I’ll be wrong three or four years from now. In any case, if we decide we don’t want to hand over control of the planet to an alien intelligence, we should think about restricting A) full autonomy, B) robots, and/or C) full automation of the AI production chain.1
That’s a sidetrack from my real point, though. My real point here is that AI, as it exists today, is already superintelligent. The reason is that AI can already do language and concepts and pattern recognition well enough, while also being able to do all the superhuman, fantastic, incredibly powerful things that a computer could do in 2021.
Right now, today, AI can do mental tasks that no human can do. In a few minutes, it can read an entire scientific literature, and extract many of the basic conclusions and insights from that literature. No human can do that. A single human can be an expert in one or two complex subjects; an AI can be an expert in all of them at once. A human needs to eat and sleep and take breaks; an AI agent can work tirelessly at proving a theorem or writing code. And AI can prove theorems and write code — or write paragraphs of text — much, much faster than any human.
These are all superhuman cognitive capabilities. They go far, far beyond anything that even the smartest human being can do. They are the result of combining the roughly human-level language ability, pattern recognition, and conceptual analysis of an LLM with computers’ pre-2022 superhuman memory, speed, and processing power.
I don’t want to get sidetracked here, but I think there’s a nonzero chance that AI never gets much better than humans at most of the things that humans were better than computers at in 2021. It seems possible that humans are simply incredibly specialized in a few types of cognitive tasks — extracting patterns from sparse data, synthesizing various patterns into “intuition” and “judgement”, and communicating those patterns in language — and that we’ve basically approached the theoretical maximum in those narrow areas.
That would explain why AI has gotten much better at things like math and coding and forecasting over the last year, but why the basic chatbot interface doesn’t seem much more “intelligent”. It would also explain why when you talk to Terence Tao about math, it’s like talking to a superhuman, but when you talk to him about where to get lunch or which movies are the best, he’ll just sound like a fairly smart normal dude. AI will eventually get better than Tao at math, because it’s a computer, and computers are inherently good at math — but it may never get much better than the most thoughtful, eloquent humans at deciding where to get lunch or recommending movies. It may simply not be mathematically possible to get much better than we already are at that sort of thing.
In fact, this is what AI is basically like in Star Trek: The Next Generation, my favorite science fiction show of all time — and the one that I think best predicted modern AI. The show has two types of AGI — the ship’s computer, which eventually creates superhuman sentience via the Holodeck, and Data, an android built to simulate human intelligence. Both the ship’s computer and Data are approximately human-equivalent when it comes to taste, judgement, intuition, and conversational ability. But they are far superior when it comes to math, scientific modeling, and so on.2
It makes sense that the big differentiator between humans and AI would not be superior taste, judgement, and intuition, but things like computation speed and memory. Those are things humans are especially weak at, because we have very limited room in our little organic brains. It makes sense that humans would evolve to specialize in the type of thing we could get maximum leverage out of — recognizing and communicating patterns embedded in sparse data. And it makes sense that when we started automating cognitive tasks, we started out by going for the things we were weakest at, because those had the greatest marginal benefit.
In other words, the advent of LLMs, reasoning chains, and agents may simply be a “last mile” event in terms of creating superhuman intelligence — filling in an essential gap that humans were previously specialized to fill. The biggest marginal gains of AI over human brains may always come from the pieces we already had in place before 2022 — the ability to scan a whole corpus of literature in seconds, to perform computations at lightning speed, and to hold vast amounts of information in working memory.
This means that despite still being “jagged” and still being only human-equivalent on certain benchmarks, AI is ready to start pushing the boundaries of scientific research in a big, big way.
Let’s start with math, which AI is especially good at doing. The famous mathematician Paul Erdős made some 1,179 conjectures, around 41% of which have been solved. These are known as the Erdős Problems. They’re not the hardest problems in math, or the most interesting. But they’re hard and obscure enough that no one has bothered to solve them, so they represent novel mathematics. And in recent months, AI has begun solving Erdős Problems — sometimes in cooperation with human mathematicians, but sometimes in an automatic, push-button sort of way:
According to a webpage started by the mathematician Terence Tao, AI tools have helped transfer about 100 Erdős problems into the “solved” column since October. The bulk of this assistance has been a kind of souped-up literature search, as it was with Sawhney’s initial success. But in many cases, LLMs have pieced together extant theorems—often in dialogue with their mathematician prompters—to form new or improved solutions to these niche problems. In at least two cases, an LLM was even able to construct an original and valid proof to one that had never been solved, with little input from a human.
Some people have been quick to pooh-pooh this accomplishment, declaring that Erdős Problems are no big deal. But Terence Tao, widely acknowledged as the world’s best mathematician, sees the potential. Here are some excerpts from his interview with The Atlantic’s Matteo Wong:
In these Erdős Problems in particular, there’s a small core of high-profile problems that we really want to solve, and then there’s this long tail of very obscure problems. What AI has been very good at is systematically exploring this long tail and knocking off the easiest of the problems. But it’s very different from a human style. Humans would not systematically go through all 1,000 problems and pick the 12 easiest ones to work on, which is kind of what the AIs are doing.
And here is what Tao said in a recent talk about AI and math:
To me, these advances show there is a complementary way to do mathematics. Humans traditionally work in small groups on hard problems for months, and we will keep doing that…But we can also now set AI to scale: sweep a thousand problems and pick up all the low-hanging fruit. Figure out all the ways to match problems to methods. If there are 20 different techniques, apply them all to 1,000 problems and see which ones can be solved by these methods. This is the capability that is present today.
Tao understands that automated research could help solve the herding problem in science. There are a limited number of human scientists, and they have a limited amount of time. They’re highly motivated to work on things that interest them, and/or on things that will get them fame if they succeed. This leads to an interesting version of the streetlight problem; when the key scarce resource is the attention and effort of smart humans, lots of boring or seemingly incremental advances get overlooked.
In mathematics, AI is just going to blaze through those boring or tedious or seemingly uninteresting problems. It’s a computer — it’s tireless, its memory and processing speed dwarf any human’s, and it doesn’t get bored.3 Here is another example of a fully automated mathematics breakthrough that doesn’t involve Erdős Problems. And here is an example from theoretical physics, where AI showed that there can be a kind of particle interaction that physicists had assumed couldn’t happen.
Solving a huge number of minor problems might sound like small potatoes, but it’s not. China’s innovation system has already shown how a huge number of incremental results can add up to a big difference in a society’s overall technology level. And occasionally one of those incremental results — some obscure theorem or method — will turn out to be useful for a big breakthrough or a more important problem. In fact, sometimes great discoveries happen entirely by accident — no one knew what vectors were good for when they were first invented, but linear algebra ended up being arguably the most useful branch of mathematics ever developed. This happens in natural science too — witness the discovery of penicillin, x-rays, insulin, or radioactivity.
But that’s only the beginning of how AI — not the AI of the future, but the technology that exists today — is going to accelerate science. Because AI is a computer, it can act as a tireless, incredibly fast, all-knowing research assistant. Here’s Tao again:
[O]ver the next few months, I think we’re going to have all kinds of hybrid, human-AI contributions…Today there are a lot of very tedious types of mathematics that we don’t like doing, so we look for clever ways to get around them. But AIs will just happily blast through those tedious computations. When we integrate AI with human workflows, we can just glide over these obstacles…We are basically seeing AIs used on par with the contribution that I would expect a junior human co-author to make, especially one who’s very happy to do grunt work and work out a lot of tedious cases.
This “automated research assistant” is getting more incredible every day:
Google DeepMind has unveiled Gemini Deep Think’s leap from Olympiad-level math to real-world scientific breakthroughs with their internal model "Aletheia"…"Aletheia" autonomously solved open math problems (including four from the Erdős database), contributed to publishable papers, and helped crack challenges in algorithms, economics, ML optimization, and even cosmic string physics…2.5 years ago chatbots weren’t even able to solve simple math problems.
"We are witnessing a fundamental shift in the scientific workflow. As Gemini evolves, it acts as "force multiplier" for human intellect, handling knowledge retrieval and rigorous verification so scientists can focus on conceptual depth and creative direction. Whether refining proofs, hunting for counterexamples, or linking disconnected fields, AI is becoming a valuable collaborator in the next chapter of scientific progress."
Here’s a long and very good post by mathematician Daniel Litt on how AI is going to boost productivity in his field. Notably, he doesn’t see full push-button automation of research coming soon, but instead sees AI as a massive productivity-booster.
Math (and math-like fields like theoretical physics and theoretical economics) represents only one area of research, though; every field has different requirements. And in other fields, researchers are using AI to boost their capabilities in various ways. This is from Raza Aliani’s summary of a Google paper that summarizes some of these methods:
In one case, the AI was used as an adversarial reviewer and caught a serious flaw in a cryptography proof that had passed human review. That’s a very different use than “summarise this PDF.”…
The model links tools from very different fields (for example, using theorems from geometry/measure theory to make progress on algorithms questions). This is where its wide reading really matters…
Humans still choose the problems, check every proof, and decide what’s actually new. The model is there to suggest ideas, spot gaps, and do the heavy algebra…In some projects, they plug Gemini into a loop where it…proposes a mathematical expression…writes code to test it…reads the error messages, and…fixes itself. (humans only step in when something promising appears)[.]
Again, we see that AI’s pure scientific reasoning ability is only up to that of a fairly smart human, but its computer-like abilities — speed, meticulousness, memory, and so on — make it superintelligent.
And here’s OpenAI doing something similar in biology:
We worked with Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.
Ole Lehmann points out how incredible and game-changing this is:
The 40% cost reduction is amazing but still kind of undersells it…The real number is the time compression…A human researcher might test 20-30 combinations in a good month. This system tested 6,000 per iteration…(Which is roughly 150 years of traditional lab work compressed into a few weeks, if you want to feel something about that)…Drug discovery, materials science, synthetic biology, basically any field where the bottleneck is "we need to try thousands of things to find what works" just got its timeline crushed…The second-order effects of this will be insane[.]
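In the abstract, the closed loop quoted above (propose a large batch of designs, run them all, learn from the results, decide what to try next) is a batched search. Here is a toy, hypothetical sketch; `measure` stands in for a real wet-lab assay, and nothing here reflects Ginkgo’s actual setup:

```python
# Toy sketch of a closed experimental loop: propose a large batch,
# "run" it at scale, keep the best result, and narrow the next round.
# `measure` is a hypothetical stand-in for a lab assay whose yield
# peaks at design parameter 0.7.
import random

def measure(design):
    return -(design - 0.7) ** 2  # higher is better; optimum at 0.7

def closed_loop(batch_size=1000, iterations=5):
    random.seed(0)                    # deterministic for illustration
    center, width = 0.5, 0.5          # initial guess and search width
    best = center
    for _ in range(iterations):
        # Propose a big batch of designs around the current best guess...
        batch = [min(1.0, max(0.0, random.gauss(center, width)))
                 for _ in range(batch_size)]
        # ...run them all "at scale" and keep the top performer...
        best = max(batch, key=measure)
        # ...then tighten the search around it for the next iteration.
        center, width = best, width / 2
    return best

print(round(closed_loop(), 3))  # converges close to the optimum, 0.7
```

The point of the toy is the shape of the loop: each iteration evaluates a thousand designs at once, which is exactly where the time compression in the quote above comes from.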
Here’s a post by Andy Hall describing how he’s using agentic AI to get a lot more done.
Even when AI can’t be trusted to do much of the research process on its own, it can automate much of the grunt work of doing literature searches, checking results, writing papers, creating data presentations, and so on. Here is climate scientist Zeke Hausfather describing a bunch of ways that AI has accelerated his own workflow.
And here is economist John Cochrane, talking about how AI now checks his papers, makes helpful suggestions, and finds errors.
Even Terence Tao found an error in one of his papers using AI!
Here’s a Google tool that will generate publication-ready scientific illustrations at the touch of a button. Here’s a software package that will quantify the attributes of large qualitative datasets — something very useful for social science research. Here’s a paper about how AI can enhance the quality of peer review. Here’s Gabriel Lenz describing how AI makes it much quicker and easier to write a data-heavy book.
And remember, these are only the AI tools that exist today. Superintelligence is already here, thanks to AI’s ability to combine human-level reasoning with the mental superpowers of a computer. But AI is improving by leaps and bounds every day. It may achieve superhuman reasoning ability soon — in math, I will be surprised if it doesn’t. But even if not, agents’ ability to handle long tasks, synthesize results, process vast and varied data, and extract insights from sprawling scientific literatures will likely be far better in a couple of years than it is now.
Is AI already supercharging science? That’s not clear yet. Publications are way up, and scientists who use AI have experienced a huge bump in productivity. A lot of this content seems to be low-quality slop so far, so there’s an open question of whether AI-generated content will overwhelm the existing review process. Unscrupulous scientists can also jailbreak AI models and have them p-hack their way to spurious results. But in a few months, and certainly in a few years, I think it’ll be clear that AI has been a game-changer.
A lot of people who think about the risks of superintelligence — and those risks are very real — ask what the upside is. Why would we invent a technology that has the capability to end human civilization? What might we get that could possibly justify that risk?
I don’t know where the cost/benefit calculation lies. But I’m pretty sure that the #1 answer to this question is better science. Before AI showed up, scientific discovery was hitting a wall — much of the Universe’s low-hanging fruit had already been picked, so ideas were getting more expensive to find, requiring research manpower that the human race simply wasn’t producing at sufficient scale.
Now, thanks to the invention of superintelligence and the supercharging of scientific productivity, we will be able to break through that wall. Fantastic sci-fi materials, robots that can do anything we want, and therapies that can cure any disease are just the beginning. There is a whole lot left to discover about this Universe, and thanks to superintelligence, a lot more of it is going to get discovered.
I just hope humans will still be around to see that future.
Updates
A bunch of folks had very enlightening and helpful comments. Marian Kechlibar writes:
I studied algebra and number theory and the part about mathematics sounds true…All the heavy lifting on the proof of Fermat’s Last Theorem was done by Andrew Wiles, but his proof ultimately rests on Gerhard Frey’s observation that if FLT didn’t hold, a non-modular elliptic curve could be constructed - which is a bridge connecting some far away islands in the mathematical landscape. These bridges are rare and tend to be very productive, but first you have to notice that they can be built, and this is the problem. Current mathematics is so large that people specialize in tiny subfields thereof, and only have a very vague idea, if any, of what is happening in nearby subfields. Much less in distant subfields…AI does not have this sort of “my brain is not big enough to fit everything” limitation…So, we can expect some interesting mathematical concepts from AI. Not just mere slog.
And John C writes:
I’m a working scientist doing theoretical physics in an AI-adjacent field. I am currently a few months into a computational project that I have vibe coded and analyzed with GPT5.2, and run on my laptop…I agree 100% with this post. I get into chats with GPT about the nature of science, and its Balkanization. I ask, ‘does concept X exist in any other disciplines?’ as a meta-literature search. It then says ‘Yes, in field A it is called X, in field B it is called Y, in field C it is called Z...’ and then lists 3 other fields. This is a jaw-dropping act of SYNTHESIS. In modern science the literature is so large, the same ideas get reinvented in separate fields... wasteful duplication.
In a general sense, this is about the burden of knowledge. One commonly cited reason for why science is getting less novel over time is that as the set of knowledge grows, it takes longer and longer for human scientists to get up to speed on everything that has already been done. This is one possible explanation for why Nobel Laureates are getting older over time4. And when it comes to knowledge across disciplines, we barely even try to solve this problem — if you can barely get up to speed on the solid-state physics literature, how do you have time to go off and read the plasma physics literature?
AI basically busts right through this wall. That alone should be enough to generate a ton of novel findings, possibly with humans in the loop, possibly without.
Meanwhile, Alexander Kustov has a good post about how AI will revolutionize social science, with links to a bunch of other posts:
Some excerpts:
Tibor Rutar recently described generating a full research paper using AI prompts alone, producing work he considers publishable in first-quartile journals. Paul Novosad reportedly accomplished similar results in 2-3 hours. Yascha Mounk claims that Claude can produce a publishable-quality political theory paper in under two hours with minimal feedback. Scott Cunningham estimates that manuscript creation now basically costs roughly $100 in editing services plus a Claude subscription…Aziz Sunderji describes building a ~200-line instruction file encoding his research workflow, judgment calls, and behavioral guardrails…Chris Blattman went from a Claude Code skeptic to building an entire AI workflow toolkit in a matter of weeks…
Yamil Velez and Patrick Liu have been building AI-generated experimental designs since 2022; tailored Qualtrics experiments can now be created in 15 minutes via prompts. Velez’s work points to something even bigger: AI doesn’t just speed up existing survey methods, it makes entirely new forms of interactive, adaptive surveys possible—designs that would have been impractical to program manually. David Yanagizawa-Drott has taken things further still, launching a project to produce 1,000 economics papers with AI—not as a stunt, but as a stress test of what happens when the cost of generating research drops to near zero.
A lot of social science lives in the realm of pure data — statistical analysis and theory — instead of in the messy world of the physical. So social science could be just as radically revolutionized as math or theoretical physics. As Kustov points out, though, the real challenge here is in filtering the massive torrent of papers and results that are going to emerge from everyone just vibe-coding research papers. Social science was already doing a bad job of that, raising suspicions that a lot of research in the area was just useless signaling (or worse).
What do research fields look like when random no-name authors are spamming out dozens of apparently top-quality papers a month from all corners of the globe? Will there be an arms race between AI filtration and AI generation? At what point does the whole thing just get automated end to end, with humans simply asking AI questions about the world like an oracle and receiving answers that are usually right but hard to verify for certain?
Science is about to get a lot more powerful, but in fields where there’s no link to a physical experiment and (eventually) no human in the loop, science is about to get very weird.
Somehow, I doubt that humanity will decide to try to stop this from happening. If AI conquers us, we’ll be trying to use it to make money on B2B SaaS right up until the end. But in any case, I’m far more worried about AI-assisted bioterrorism wiping us out long before autonomous AI gets the chance to decide it doesn’t need us around. Sleep tight!
For the sake of the show’s plot, the human engineers often come up with the novel insights. But when they really need a boost, they turn to the AIs to help them — as in the scene depicted in the image at the top of this post. Interestingly, TNG also shows humans prompting AI in natural language instead of coding — a choice made for ease of storytelling on a screen, but which ended up being realistically futuristic. Also, the ship’s computer has frequent hallucinations, some of which end up being the central conflict for whole episodes. Occasionally the computer even accidentally creates autonomous sentient life. Star Trek: TNG really deserves more recognition as the most accurate anticipation of modern AI in all of 20th century sci-fi.
An alternative explanation is that there are more of them jamming up the queue.
2026-03-01 17:28:14

I was all set to publish another post about AI, but then the U.S. and Israel attacked Iran, so now I guess I’ll write about that.
Last June, Israel launched a bunch of attacks on Iran, and didn’t encounter much resistance. Trump briefly joined the fray by launching a couple of airstrikes at Iranian nuclear sites. Afterward, the White House put out a statement bellowing that “Iran’s Nuclear Facilities Have Been Obliterated — and Suggestions Otherwise are Fake News”. This was obviously false, and so here we are, eight months later, with Trump ordering more attacks on Iran, ostensibly in order to take out their very non-obliterated nuclear facilities.
I chose not to write a post about the Iran attacks last June, simply because they didn’t seem that important. Trump’s strikes were perfunctory and seemed a bit performative. China and Russia didn’t come to Iran’s aid, which showed that Iran isn’t really a core member of their alliance. Israel seemed to have their way with Iran’s air defense system; this was interesting from a military standpoint, but the wider implications are unclear. Other than that, there didn’t seem much reason for me to analyze the conflict.
The current attacks are more significant, so I’ll write about them. The most important reason is that Israeli strikes killed Iran’s Supreme Leader, Ayatollah Ali Khamenei (along with various other top Iranian leaders). That seems like the crossing of a Rubicon; you can’t really take out a country’s head of state and expect a quick return to the status quo. And it means that Trump, accidentally or on purpose, has taken a serious geopolitical action instead of making a bunch of noise and then backing down. This could have long-term consequences.
Anyway, I don’t have a single big thesis about the new Iran war, so I’ll just offer up a series of thoughts. Basically, my takes are:
Pax Americana actually restrained American power, and the people who wanted a multipolar world may come to regret that wish.
While Trump’s ability to launch a war of choice without Congressional authorization is bad for American democracy, it’s also true that Iran’s leaders are absolutely evil and had it coming.
Western leftists’ full-throated support for Iran demonstrates how badly they have lost the plot.
The New Axis of China, Russia, and Iran has been materially weakened by this conflict, but we shouldn’t write it off.
My basic geopolitical thesis over the past few years has been that Pax Americana — the rules-based international order backed up by American power — is gone. America was simply no longer industrially strong enough to support the kind of world-policing role it carried out during and after the Cold War; China, the main revisionist power, had gotten too strong for America to remain the hegemon. On top of that, the U.S. has been consumed by internal conflicts, and has far less energy to look outward. This domestic social conflict is ultimately behind Trump’s isolationism and his alienation of many traditional U.S. allies.
A lot of people — leftists and rightists in the West, and America’s rivals and detractors abroad — welcomed this development, but for different reasons.
Leftists and foreign rivals celebrated the end of Pax Americana as a diminution of American power. They eagerly heralded the rise of a multipolar world, in which other nations would have the power to check America’s designs. This has, in fact, come to pass. And Trump’s rejection of his erstwhile European allies has weakened American power even further, beyond what America’s enemies might have dared to hope.
But what they all failed to realize is that Pax Americana bound and restrained the United States. In order to uphold the rules-based order it created, America accepted many limitations on its hegemony. It restrained its use of military force in many cases, eschewed territorial conquest, and treated smaller and poorer countries as its equals in many international bodies.
That’s all gone now. Without rules and norms to bind him, Trump is free to threaten conquest of Greenland, take out Russia’s allies, and generally throw America’s still-considerable weight around much more freely and aggressively than during his first term.
Rightists, meanwhile, relish America’s newfound freedom from the pesky constraints of international norms. But their hope that the U.S. would abandon global power, in order to focus inward on domestic cultural and social conflicts, seems to have been dashed, at least for now. Trump remains far more inclined toward foreign adventurism than any Democrat, and is more eager to participate in Israel’s wars.
In other words, this is a “be careful what you wish for” moment for all of the advocates of a multipolar world.