2026-03-17 16:58:01

This roundup is in honor of Chris Sims, the extremely influential macroeconomist, who has just passed away. Item #4 even features some evidence for the Fiscal Theory of the Price Level, which he helped develop.
But first, podcasts. I went on the Members of Technical Staff Podcast with Jayden Clark to talk about the politics of the tech industry, and we ended up talking about a ton of fun stuff.
Anyway, on to the roundup. Before we get to the macro stuff, let’s talk about one of America’s worst public intellectuals…and a little about AI.
Paul Ehrlich, the author of The Population Bomb and a relentless advocate for population control, has died. One general rule of punditry is supposed to be that you don’t speak ill of the dead. But on the other hand, what if the dead had some really, really bad ideas?
We all know the story of why Ehrlich was wrong. He predicted that the world would run out of food, producing catastrophic famines in the 1970s. Based on those predictions, he called for things like cutting off emergency food aid to India, reasoning that if people were saved from starvation today, it would just mean more people to die of starvation later on. But new farming techniques known as the Green Revolution created enough calories to feed the whole world with plenty to spare. The Population Bomb came out in 1968; by then, famines were essentially already a thing of the past:
And fertility rates fell without the kind of draconian, dystopian population controls that Ehrlich constantly called for. The main country that listened to Ehrlich was China, and their One-Child Policy turned out to be quite unnecessary for reducing fertility rates — as well as being totalitarian, cruel, and dystopian.
What people don’t know about Ehrlich is how relentlessly he kept promoting his ideas and haughtily dismissing his critics, even after it had become clear that he had been completely wrong. A man who had endorsed nightmare policies in service to a broken theory simply never reckoned with this monumental failure, and continued to self-aggrandize and to evangelize for his old mistakes.
And in fact, Ehrlich’s bad ideas have survived and even thrived, in the form of the “degrowth” movement that’s popular in the UK and parts of Europe. Today’s degrowthers call for immiserating the developed-world middle class instead of starving India to death and throwing people in prison for having too many kids, which I suppose is an improvement. Still, the idea is fundamentally based on the same old fallacies that Ehrlich never stopped pushing — that humanity has overstepped its bounds and must be forcibly diminished.
One of the most interesting results in theoretical economics is the Grossman-Stiglitz Paradox.
Have you ever heard of the Efficient Market Hypothesis — the idea that financial market prices already incorporate all available information about the value of the underlying assets? Well, in 1980, Sanford Grossman and Joseph Stiglitz showed why the EMH can’t be quite right. The idea is pretty simple: It takes effort to find information. Who is going to spend the effort to discover what stocks or bonds or houses are really worth, if they can’t make money trading on that information? And if no one spends the effort to find the information, how can it ever be incorporated into the price in the first place? Grossman and Stiglitz concluded that financial markets must be at least somewhat inefficient.
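To make the logic concrete, here is a toy numerical sketch of the paradox. To be clear, this is not Grossman and Stiglitz’s actual 1980 model (theirs is a full rational-expectations equilibrium with noise traders); it simply assumes, for illustration, that an informed trader’s profit shrinks as more traders become informed, and that traders keep paying for information until the profit just covers the cost:

```python
# Toy sketch of the Grossman-Stiglitz logic, not their actual 1980 model.
# Hypothetical assumption: an informed trader's gross profit falls linearly
# as the informed share rises, because prices then reveal more information.

def informed_profit(informed_share, gross=1.0):
    """Stylized trading profit when a fraction `informed_share` is informed."""
    return gross * (1.0 - informed_share)  # share = 1: fully efficient, zero profit

def equilibrium_informed_share(cost, gross=1.0):
    """Traders pay `cost` to become informed until profit just covers the cost."""
    if cost >= gross:
        return 0.0                 # information too expensive: nobody learns
    return 1.0 - cost / gross      # solves informed_profit(share) == cost

for c in (0.1, 0.5, 0.9):
    share = equilibrium_informed_share(c)
    print(f"info cost {c:.1f} -> equilibrium informed share {share:.2f}")
# Whenever the cost is positive, the equilibrium share stays below 1, so
# prices are never fully informative: the market must stay a little
# inefficient in order to pay for the information-gathering that feeds it.
```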
Now, Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar have posited a similar problem for AI. I’m usually not a fan of Acemoglu’s papers on AI, but I think this one gets at an important and fundamental insight.
Acemoglu et al. write that if generative AI puts all the information in the world at people’s fingertips, then people will have no incentive to go out and learn new things, which will prevent them from accidentally finding new knowledge to add to the world’s total knowledge base:
We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem…Learning exhibits economies of scope: costly human effort jointly produces a private signal about their own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers…recommendations that substitute for human effort…[W]hile agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge…[T]he economy can tip into a knowledge-collapse steady state in which general knowledge vanishes ultimately, despite high-quality personalized advice.
Basically, Acemoglu et al. posit that humanity as a whole learns new things when individual humans try to reinvent the wheel — to discover things on their own instead of just looking them up. This wastes a lot of effort, but it also adds to the overall knowledge base.
The idea here is that AI makes everyone really lazy — instead of trying to write a piece of code from scratch, or prove a math theorem from scratch, or figure out some piece of knowledge for yourself, you just ask AI to do it all for you. Everyone ends up getting the right answers to questions whose answers are already known, and no one ends up adding anything new. It’s the Grossman-Stiglitz Paradox, but for everything.
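You can caricature this dynamic in a few lines of code. The sketch below is a deliberately crude simulation of the mechanism, not the model in the Acemoglu, Kong, and Ozdaglar paper, and all the parameter values are made up: human learning effort falls as AI advice quality rises, and the knowledge stock depreciates unless effort replenishes it.

```python
# A deliberately crude simulation of the knowledge-erosion mechanism,
# not the actual model in Acemoglu, Kong, and Ozdaglar.
# All parameters are made-up, illustrative values.

def long_run_knowledge(ai_quality, periods=500, depreciation=0.05,
                       productivity=0.1, k0=1.0):
    k = k0
    for _ in range(periods):
        effort = max(0.0, 1.0 - ai_quality)   # AI advice substitutes for learning
        k = (1 - depreciation) * k + productivity * effort  # stock decays unless
                                                            # effort replenishes it
    return k

for q in (0.2, 0.8, 1.0):
    print(f"AI advice quality {q:.1f} -> long-run knowledge stock {long_run_knowledge(q):.2f}")
# When advice quality reaches 1.0, effort goes to zero and the stock decays
# toward the "knowledge collapse" steady state, even though every individual
# decision was well-served by AI along the way.
```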
In fact, you can sort of see hints of this happening already. Website traffic is collapsing as people get their answers from AI instead of from websites. Tech publications, for example, are rapidly losing their readership:

And using AI to code causes programmers’ skills to atrophy.
My first observation here is that this applies not just to AI, but to the internet itself. Yes, people can ask an LLM to teach them about math or write some code for them. But they could also ask Math Stack Exchange and Stack Overflow, even before LLMs existed. And the same problem arises — if all of the world’s knowledge is there at your fingertips, there’s no reason to waste your time reinventing the wheel. But as Neal Stephenson wrote as far back as 2011, this can lead to a lack of novelty, as everyone just copies what’s been done before.
And this leads me to my second thought: What if AI can also produce new knowledge? AI, after all, is prone to hallucination — i.e., random errors. If agents are out there randomly trying the wrong thing, occasionally they’ll discover something new. If there’s a way for those accidental discoveries to get incorporated into the general body of AI knowledge, then perhaps AI can grow the total knowledge stock instead of shrinking it. All that’s needed is to stop forcing humans to be the sole long-term repository of knowledge. How to do that, of course, I don’t know.
The Iran War is making everyone afraid to go through the Strait of Hormuz — the key maritime choke point that a significant part of the world’s oil must pass through in order to reach the world market. Iranian strikes and mines have effectively closed the strait, and European countries are refusing to help America reopen it (which is perhaps only natural, given Trump’s threats to seize Greenland from Europe, and his withdrawal of aid from the Ukraine war). As a result, oil prices have skyrocketed:
What will be the economic result? Fortunately, this is one of the rare areas where macroeconomists are actually able to make some predictions. Closure of key shipping routes is a thing that occasionally happens, and when it happens we can look at the short-term results and get a pretty clean picture of the effect.
That’s what Diego Känzig and Ramya Raghavan did last year in a paper entitled “The Macroeconomic Effects of Supply Chain Shocks: Evidence from Global Shipping Disruptions”. Basically, they look at similar incidents in the past, and try to quantify the economy’s average response. Here’s the picture they come up with:

Basically, commodity prices (e.g. oil) go up, inflation goes up as a result, and U.S. industrial production suffers.
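For the curious, the workhorse method for this kind of exercise is the “local projection”: for each future horizon, you regress the outcome at that horizon on today’s shock, and the coefficients trace out the average response. Here is a generic sketch of that regression, not Känzig and Raghavan’s actual specification; the DataFrame `df` and its `shock` and `inflation` columns are hypothetical stand-ins:

```python
# Generic local-projection sketch, not Känzig and Raghavan's specification.
# Assumes a hypothetical monthly DataFrame `df` with columns 'shock'
# (an identified shipping-disruption shock) and 'inflation'.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def impulse_response(df, shock="shock", outcome="inflation",
                     horizons=12, lags=4):
    irf = []
    for h in range(horizons + 1):
        y = df[outcome].shift(-h)              # outcome h months in the future
        X = pd.concat(
            [df[shock]] + [df[outcome].shift(l) for l in range(1, lags + 1)],
            axis=1,
        )
        X.columns = ["shock"] + [f"lag{l}" for l in range(1, lags + 1)]
        data = pd.concat([y, X], axis=1).dropna()
        res = sm.OLS(data.iloc[:, 0], sm.add_constant(data.iloc[:, 1:])).fit(
            cov_type="HAC", cov_kwds={"maxlags": h + 1}  # Newey-West errors
        )
        irf.append(res.params["shock"])        # average response at horizon h
    return np.array(irf)
```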
Can we expect the same thing to happen this time? Maybe. One big change from the past is that thanks to the shale oil boom, the U.S. is now a net oil exporter, rather than a net importer:

That means that U.S. oil companies will see a big windfall from the war. But the inflation bump resulting from higher input prices will probably still happen, and oil-consuming industries — chemicals, transportation, etc. — will still probably suffer.
Governments all over the world are running up enormous levels of debt, so it’s important to know what the risks are. You can always get your central bank to lower interest rates to make government debt easier to refinance, or even have it print money to buy government debt directly. The problem is that this can cause inflation to rise. A macroeconomic theory called the Fiscal Theory of the Price Level — which drew heavily on Chris Sims’ ideas — predicts a tight relationship between government deficits and inflation.
Progressive macroeconomics types typically pooh-pooh this danger, pointing to cases like the Great Recession, or Japan in the 1990s and 2000s, where soaring levels of government debt didn’t lead to inflation. But Covid may be a counterexample to this complacency. A number of macroeconomic papers have come out recently that establish what looks like a link between Covid-era borrowing and subsequent post-pandemic inflation.
For example, Barro and Bianchi (2024) find that government spending “has substantial explanatory power for recent inflation rates across 20 non-Euro-zone countries and an aggregate of 17 Euro-zone countries”. And Reis (2026) finds that “the unexpected worsening of fiscal surplus during the period during and after the pandemic is strongly correlated with the unexpected increases in inflation.”
Reis blames America’s borrowing binge — primarily Trump’s CARES Act and its follow-up bill, but also Biden’s American Rescue Plan — for America’s higher rate of inflation after the pandemic:
How much did public deficits contribute to the inflation surge of 2021-24?…A popular argument notes that inflation rose in the US by almost as much as in other OECD countries. Yet, the US had a large fiscal stimulus in 2021 that most other countries did not. Therefore, the US fiscal stimulus did not contribute to the inflation surge. Is that right? No, it is not.
To inspect this claim, you can use expectations data…[Here’s a] plot [that] compares the unexpected high deficits with the unexpected high inflation terms for OECD countries, using the common units of their impact on the public debt…For countries that ran higher unexpected fiscal deficits, inflation was also unexpectedly higher.
And here’s his chart:

That’s not the tightest relationship I’ve ever seen, or the steepest slope. But it’s not nothing, either. And it’s worth remembering that Olivier Blanchard managed to predict the surge in inflation in advance, just by looking at how much the U.S. government was borrowing back in 2021.
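Mechanically, Reis’s chart is a cross-country scatter with a fitted line. Here is a minimal sketch of that kind of exercise, with obviously fake numbers standing in for his expectations data:

```python
# Minimal sketch of the cross-country exercise behind the Reis chart.
# The (unexpected deficit, unexpected inflation) pairs below are fake,
# purely illustrative numbers, not data from the paper.
import numpy as np

data = np.array([
    [2.0, 1.1], [5.5, 3.0], [8.0, 3.9],
    [3.5, 1.6], [6.5, 2.8], [1.0, 0.4],
])
deficit_surprise, inflation_surprise = data[:, 0], data[:, 1]

slope, intercept = np.polyfit(deficit_surprise, inflation_surprise, 1)  # OLS line
corr = np.corrcoef(deficit_surprise, inflation_surprise)[0, 1]
print(f"slope = {slope:.2f}, correlation = {corr:.2f}")
# A positive slope is the FTPL-flavored prediction: countries that borrowed
# more than expected during Covid also saw more inflation than expected.
```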
Progressive pundits and Democratic think-tankers who like to hand-wave away the dangers of deficits need to think again. America is up in arms about the cost of living, and if Democrats get in power and just borrow more and more and more, it could make the problem worse.
I wrote a book about the promise of foreign investment in Japan. When I was on the book tour last year, a bunch of people, both Japanese and otherwise, asked me: “What industries should foreigners invest in in Japan?” My first answer was always the same: Robotics.
In a world where software is increasingly ruled by AI, robotics is the next frontier. But it’s a lot trickier — you have to combine AI techniques with a lot of hardware know-how. A lot of people think that this know-how resides primarily in China, because they look at charts of robot adoption. China has a lot of factories, and it has a lot of cheap bank loans that factories can use to buy robots, and so China buys a lot of robots. It’s also becoming more self-sufficient in the industry — making more of the robots it installs.
But this doesn’t mean China has caught up in the robot industry, or come to dominate it the way it dominates the electric vehicle industry. In fact, most of China’s robots are still low-end, mass-market stuff; producing high-end robots takes many years of careful practice and accumulated tacit know-how.
Japan has this know-how. And so as AI increasingly pushes into robotics, Japan will be an increasingly important partner for the U.S. James Riney of Coral Capital has an excellent post in which he explains why Japan’s robotics expertise is the perfect complement to America’s strength in AI:
If the US wants real, functional robots that can survive a 10,000-hour duty cycle in a factory rather than a 5-minute demo on X/Twitter, Japan is here to the rescue…
The body of a humanoid robot is an engineering nightmare of competing constraints. Strong but lightweight. Blinding speed but sub-millimeter precision. Massive heat dissipation without cooking its own battery. And it needs to do this millions of times without fatigue…This is where Japan excels…
The single biggest misconception in the humanoid hype cycle is the difference between a demo and a deployment…A robot that looks impressive dancing in a pre-programmed video is operating under “Short-Duration Peak Performance.” It pushes its motors and gears to the limit for a few minutes. But industrial customers don’t buy demos…A robot on [a production] line needs a Mean Time Between Failures of 5,000 to 10,000 hours…This is the Reliability Cliff. Most entrants from the software-first ecosystem, and many low-cost Chinese clones, fall off this cliff at around the 1,000-hour mark. Their gears develop backlash, their lubricants break down, and their positional accuracy drifts…
Japanese companies like Harmonic Drive Systems and Nabtesco have spent fifty years solving these problems. They have mastered the black art of tribology, metallurgy, and heat treatment…If you peel back the skin of almost any high-end robot today, whether it is building cars in Germany or sorting packages in an Amazon warehouse, you will find Japanese logos inside…According to Japan’s Ministry of Economy, Trade and Industry (METI), Japanese manufacturers hold an impressive 70% of the global market share for industrial robots…
The battle for robotics dominance is not a story of the US vs China. China would likely win that battle. It is a story of the US & Japan (and allies) vs China…For now, and for the foreseeable future, if you want a robot that works, you need to knock on Japan’s door.
Wise words. American startups, AI companies, and government agencies need to listen to James.
There has been a big political realignment in the U.S. — and in many other countries — in recent years. Center-left parties, like the Democrats in the U.S. and Labour in the UK, used to primarily be the parties of the working class. But in recent years, their voter bases have shifted — they have become the parties of educated high-earning professionals, while working-class voters have drifted to the right. Here’s Rogé Karma:
In 2008, the top fifth of earners favored Democrats by just a few percentage points; by 2020, they were the group most likely to vote for Democrats and did so by a nearly 15-point margin. (Democrats won the poorest fifth of voters by a similarly large margin.) Democrats now represent 24 of the 25 highest-income congressional districts and 43 of the top 50 counties by economic output. A similarly stark shift has occurred if you look at college education rather than income. Perhaps most dramatic of all has been the change among wealthy white people. Among white voters, in every presidential election from 1948 until 2012, the richest 5 percent were the group most likely to vote Republican, according to analysis by the political scientist Thomas Wood. In 2016 and 2020, this dynamic reversed itself: The top 5 percent became the group most likely to vote Democratic.
And here’s a chart:

For the most part, Democrats have kept their pro-working-class politics, even as they represent the working class less and less. They’ve supported unions even as unions have abandoned them at the polls. They’ve pushed for more welfare and health spending, even as the benefits have flowed more to red states than to blue ones. This is commendable.
However, this class altruism doesn’t extend to all types of policy. Progressives have fought hard for student debt cancellation, even though people who go to college are pretty obviously the main beneficiaries of that. And on taxes, Democrats have shifted from their old strategy of taxing the rich to a new strategy of taxing only the hyper-rich while cutting taxes for the merely-rich. Matt Yglesias reports:
Chris Van Hollen and Cory Booker both recently introduced proposals to raise taxes on the very rich in order to finance broad-based tax cuts for the rest of the country…[T]he existing progressive structure of the income tax code means that any broad-based income tax cut is going to be regressive. Check out this Yale Budget Lab estimate of Van Hollen’s plan — he makes sure to soak the rich, but he does more with the money for the comfortable than for the struggling. Booker’s plan is even worse in this regard…
[L]ooking at the distributional tables for the 1993 budget…that Bill Clinton signed…it’s almost shocking how broadly he raised taxes…[B]y Obama’s time, willingness to enact broad-based tax increases was waning…Obama vowed not to raise taxes on anyone earning less than $250,000 (roughly $360,000 in today’s dollars), which meant in practice being willing to extend a majority of the Bush tax cuts…Except vulnerable senate Democrats lost their nerve and pushed to extend tax cuts up to $450,000 — or nearly $650,000 adjusted for inflation today.
Basically, as Democrats have become the party of the somewhat-rich, they have begun to embrace tax cuts for the somewhat-rich.
But without broad-based taxes, America will never be able to rein in its deficit or increase the welfare state further. Billionaires have a ton of money individually, but collectively there just aren’t enough of them to support the fiscal needs of a country like the United States. If we want broadly shared benefits, we will need broadly shared sacrifice.
The Democrats, comfortable in their newfound identity of the party of millionaires-against-billionaires, are no longer calling for broadly shared sacrifice. Instead, the best populism they can seem to muster is an attack on one group of elites by another group of elites.
“Blow up your TV/ Throw away your paper/ Go to the country/ Build you a home/ Plant a little garden/ Eat a lot of peaches/ Try and find Jesus/ On your own” — John Prine
I’m generally a techno-optimist, but I make an exception for at least one technology: smartphone-enabled social media. In the long run, I expect us to be able to adapt in order to use this technology to our net benefit. But in the short run, I think it has devastated our politics, destroyed many of our social bonds, and made us less happy in general.
A research project called the Global Mind Project has tried to assess mental health across the globe, using a huge survey with millions of respondents. Their latest report zeroes in on the deleterious effects that smartphone usage has had on the well-being of Gen Z. Here’s Jonathan Haidt’s summary:
Young adults used to generally have good mental health, compared to older generations. But now, in ALL countries examined, they are doing badly compared to older generations in that country…The decline of young people's mental health is "most pronounced in the wealthier and more developed countries." They note that it is in such countries that smartphones are given earliest, junk food is most heavily consumed, spirituality is most diminished, and family ties are looser and often weaker…"A younger age of first smartphone ownership is associated with increased suicidal thoughts, aggression, and other problems in adulthood."
And this is from the report itself:
GenZ is the first generation to grow up with a smartphone. Among this group, the younger they acquired their first smartphone in childhood, the more likely they are to have struggles as adults. These struggles extend beyond sadness and anxiety to less discussed symptoms, such as a sense of being detached from reality, suicidal thoughts, and aggression towards others…Excessive time spent on smartphones also diminishes the development of social cognition that requires learned interpretation of facial expressions, body language, and group dynamics. The negative impacts are particularly sharp below age 13.
Fortunately, some young people seem to be realizing that the phones are bad for them. Here’s a recent story from CNBC:
Going chronically offline is the latest trend to grip young people, and ironically it's going viral on social media…I received nearly 100 responses from Gen Z and millennials sharing stories about social media detoxes and digital burnout…They talked about ditching their smartphones for flip phones, visiting record stores to buy vinyl, taking up analog hobbies like knitting, and most importantly, connecting with their friends in person.
A 2025 Deloitte consumer trends survey of more than 4,000 Brits found that nearly a quarter of all consumers had deleted a social media app in the previous 12 months, rising to nearly a third for Gen Zers…Meanwhile, social media use has steadily declined since time spent on the platforms peaked in 2022, according to an analysis of the online habits of 250,000 adults in more than 50 countries by the Financial Times and digital audience insights firm GWI…Globally, adults 16 and over spent an average of two hours and 20 minutes per day on social platforms by the end of 2024, down almost 10% since 2022, the report found. The decline was particularly pronounced among teens and 20-somethings…
Young people who are deleting their social media platforms cite the increasing pressures of being online as well as damage to their mental health as causes…Deloitte’s consumer survey showed that almost a quarter of respondents who deleted social apps reported these apps had negatively impacted their mental health and consumed too much of their time.
This is actually the kind of thing that makes me such a techno-optimist. In the short run, a new technology’s drawbacks can outweigh its benefits. But in the long run, humans learn and adapt to the new technology. And in the case of smartphones, the right adaptation may simply be to get off social media.
2026-03-15 16:45:03

“Imagination/ That’s the way that it seems/ A man can only live in his dreams” — The Flaming Lips
“No future/ No future/ No future for you” — The Sex Pistols
If you have kids — or if you’re planning to have kids in the future — I want you to think about a question: How will you make sure your kids have a successful life?
Obviously, this isn’t a question that anyone can ever answer with certainty. But ten years ago, in 2016, you could have given a pretty good answer. You’d work hard and save money and invest wisely, so you would have enough family wealth to cushion against unexpected shocks. You’d teach your kid good values, make sure they went to a good school, and send them to a good college. You might even encourage them to enter a promising elite professional field, like software engineering, medicine, or law. If you did all of this, you could be reasonably confident that your child would grow up to be at least economically secure, and probably upwardly mobile as well.
What answer would you give now, in 2026? Do you have any confidence that colleges — even top colleges — will actually teach your kid the skills they need to make it in a job market defined by AI? What field of study could you recommend to your child, knowing that there’s a possibility it will be automated by the time they finish studying it? Will even family wealth be enough to protect your descendants, in a world where land and energy are being gobbled up for data centers?
The sudden rise of artificial intelligence has cast a great fog over our future. It may bring wonders beyond our comprehension — the end of aging and disease, material hyperabundance, digital worlds to suit our every desire, expansion into outer space. Or it might bring chaos and destruction, as rogue agents wreak havoc with bioweapons and drones. Or it might become a superintelligence that turns us all into house pets.
Your kids might be chronically unemployed, as the CEO of ServiceNow recently predicted. Or AI tools might turn them into highly paid super-workers, as the founder of Uber recently predicted. The truth is that they don’t know, and I don’t know, and you don’t know. Financial markets don’t know. The people actually building AI certainly don’t know. The future is a blank wall of fog, rushing toward us at top speed, and nobody knows what to do.
Plenty of people have predicted this. It’s called a Technological Singularity — a period of accelerated technological change so rapid that it’s impossible to predict what life or society will look like afterwards. You can argue that the Industrial Revolution was a kind of Singularity, moving humanity in today’s developed countries from the edge of starvation to material abundance. Who could have predicted, in 1890, what life in 1990 would look like? And the AI revolution is happening much faster, promising to compress a century’s worth of change into a couple of decades.
AI may be the biggest thing casting a fog of uncertainty over our future, but it’s not the only thing. The political chaos of the last decade, and especially the governing style of the second Trump administration, has swept away much of what we thought we knew about American society. The rise of China has raised the possibility that global power will now reside with totalitarian countries instead of democratic ones. The possibility of another world war looms.
Now here’s the crucial point — even back in 2016, this period of rapid change was on the way. Most people just didn’t see it coming. Everyone who thought their kids would be safe if they just followed the standard 2016 playbook — a good college, a professional career — was wrong. They just didn’t know they were wrong yet.
But because they didn’t see what was coming, they were optimistic. Back in 2016, 69% of Americans expected a good life in the future — a number that’s now down to only 59%:

Even during Covid and the Great Recession, American optimism about the future didn’t waver. We “knew” — or at least we thought we knew — that we would recover from those shocks, and be able to live a good life. We might have been wrong, but we thought we could see the future — and it was those extrapolations that comforted us, even as we endured one shock after another.
It occurs to me that this can also explain why Americans are so nostalgic for the 1990s and the early 2000s.
2026-03-13 03:41:06

The other day I did something I’ve never done before: I made a major political donation.1 I gave $10,000 to GrowSF, a political advocacy organization that focuses on local elections in San Francisco. They’re going to use the money to support Alan Wong in the upcoming special election for District 4 supervisor.
Usually, I’m pretty pessimistic about the ability of political donations to affect the course of society. The influence of money in politics is exaggerated in general, and the amount that I’m personally able to contribute is pretty modest; in almost all cases, I think I’ll probably have a bigger impact just by writing blog posts. But in this particular case, I think I might actually be able to make a noticeable difference by donating a little bit of money — especially because it gives me a good excuse to write about the political situation in San Francisco.
Basically, for a number of years, San Francisco was the poster child for a style of progressive urban governance that has been failing in cities across the country. I wrote about this governance debacle shortly after Trump was elected in 2024:
In the 1990s and 2000s, America’s big cities had an urban revival. Pragmatic liberals like Michael Bloomberg in New York City and Ed Lee in San Francisco were some of the most important leaders of this revival. They recognized the value of business as the city’s tax base, and they recognized the importance of public order for maintaining a livable urban environment. They were not perfect; they failed to build sufficient housing, setting the stage for the urban housing crisis of the 2010s and 2020s, and they continued or accelerated the unfortunate trend of outsourcing city government functions to nonprofit organizations. But overall, they were successful in turning American cities into places that people actually wanted to live in again.
As people — especially people with money — moved back into America’s cities in the 1990s and 2000s, the housing crisis worsened, because cities didn’t meet the increase in demand with an increase in supply. But at the same time, America was sorting itself politically — the big cities leaned increasingly to the left.
That political shift enabled the rise of a new, radical kind of urban progressive ideology. If the old liberalism had been complacent about the need for housing supply, the new progressivism was downright hostile to it; drawing on the anti-gentrification movements of a previous generation, hardline progressives embraced the mistaken idea that allowing the construction of new apartment buildings raises rents.
In fact, an overwhelming amount of evidence shows that allowing new housing reduces rents for everyone. But in refusing to hear that evidence, urban hardline progressives have essentially allied themselves with an old-money NIMBY gentry that wants to keep cities frozen in amber with development restrictions.
At the same time, the new urban progressive ideology became extremely tolerant of public disorder — property crime, low-level violent crime, public drug markets, and threatening street behavior. Cracking down on these social ills was viewed as unacceptably harmful to the perpetrators; in other words, hardline progressives came to view anarchy as a form of welfare policy.
Penalties for minor crimes were reduced, enforcement of public drug markets was curtailed, and citizens were even forbidden from defending their own businesses from criminals. “Tent cities” were tolerated despite being riddled with violent crime, police budgets were slashed, progressive prosecutors like San Francisco’s Chesa Boudin prosecuted fewer crimes, dangerous repeat offenders were regularly allowed back onto the streets, and so on. Inevitably, poor people were the ones most heavily impacted by the epidemic of crime and drug use that this anarchy enabled.
Together, high housing costs and rampant public disorder made America’s big blue cities no longer the envy of the world. Meanwhile, hardline progressives simply doubled down — responding to high housing costs with yet more restrictions on development, and responding to disorder with yet more tolerance of disorder, all while funneling increasing portions of the city budget to well-connected nonprofits that often turned out to be ineffectual and corrupt.
In San Francisco, this hardline progressivism did not come from the mayor’s office. Most policy decisions in SF are carried out by — or must be signed off on by — the powerful Board of Supervisors. The Board of Supervisors writes the laws, approves and amends the city budget, confirms mayoral appointments, and exercises veto power over almost any major reform effort.
For many years, San Francisco had a moderate liberal mayor but a hardline progressive majority on the Board of Supervisors. Mayors wanted to build more housing and crack down on disorder and crime, but the progressive supermajority on the Board would not allow them to do so. Mayors like London Breed often took the blame for the city’s descent into unaffordability and chaos, but the prime culprit was always the hyper-progressive Board.
Under the aegis of hyper-progressive city government, San Francisco had the highest property crime rate in the nation in the late 2010s, and became one of America’s least affordable cities. The pandemic only accelerated these trends — the city’s population crashed and failed to recover, the streets became open-air fentanyl markets, transit ridership plummeted and didn’t bounce back, and housing production crashed from low levels to almost nothing. Malls closed, businesses pulled out, and downtown felt like a post-apocalyptic wasteland long after most other cities had recovered their verve.
Then, in 2024, an election changed everything. The change everyone knows about is the election of Daniel Lurie as mayor.

Lurie made public order his #1 task. Within a year, crime had plummeted:
[O]verall crime in [San Francisco] went down by 25% in 2025, with the number of homicides reaching a level not seen in more than 70 years…Property crimes were down by 27%, while violent crimes were down by 18%…The mayor added that the city planned to keep on hiring new officers, following an executive directive he signed in May. In October, the department reported the largest surge of recruits in years…
The department also credited the Drug Market Agency Coordination Center in leading to more than 6,600 arrests in connection with drug-related activity. Officers said they had also seized more than 1,000 firearms and more than 56 pounds of fentanyl…Meanwhile, retail theft operations have led to key arrests, resulting in reductions in larcenies and retail thefts.
Other notable crime trends touted by city officials include a 16% decrease in shootings, robberies being down 24%, car break-ins down 43% and vehicle thefts being down 44%.
On the ground, the change is absolutely palpable. In 2023 I would see thieves ripping pieces out of car engines in broad daylight. Almost every day I walked past throngs of drug users (and probably dealers). Every woman I knew was harassed on the street or on the train. There were needles and human feces on the ground everywhere. Stores were boarded up, train cars ran almost empty, tent cities lined side streets and the spaces under overpasses. Now, most of that is gone — the streets aren’t clean, but they’re closer to NYC than to a developing-country slum.
Progress on housing has been slower, due to the dense thicket of existing regulations and entrenched NIMBY interests that must be hacked through in order to actually get new housing built. Lurie passed a landmark upzoning plan, which doesn’t go nearly far enough but is a huge improvement on anything in recent decades. Now permitting is accelerating:
San Francisco’s infamously slow building permitting process may be getting faster…A city study published Thursday found that between January 2024 and August 2025, the timeline on permit approvals for new housing in San Francisco was cut by half — from an average of 605 days down to around 280 days…And permit applications that were filed within that 19-month window had even shorter turnaround times, at 114 days on average…
[A] state-commissioned report published in 2022 found that San Francisco was the slowest California jurisdiction to approve permit applications for housing projects…[But] Mayor Daniel Lurie has…focused on improving the city’s buildability, launching his landmark ‘PermitSF’ initiative to centralize the application process last year. In February, his office introduced an online portal that allows people to apply for certain types of permits.
It will take years for those permits to turn into actual homes. And the reforms that Lurie has managed to enact are only the tip of the iceberg in terms of what’s needed — much of which needs to be done at the state level.
But overall, things are looking up. Lurie’s approval rating reached 73% half a year into his mayorship (compared to 28% for his predecessor). In November it was still 71%. Everyone loves Daniel Lurie — and so do I. He’s not perfect, but no mayor has ever been perfect. His successful policies range far beyond what I’ve listed here — he’s added homeless shelter space, cut taxes on apartment buildings, removed anti-police activists from the Police Commission and appointed a better police chief, encouraged conversion of offices into homes, created free childcare policies and various early childhood programs, implemented policies to protect pedestrians and cyclists, cut various forms of red tape for housing and small business, streamlined business permitting, worked toward balancing the budget, and so on.
But here is the real point: Almost none of this would have been possible if the Board of Supervisors had still been controlled by hardline progressives.
The same election that brought Daniel Lurie into the mayor’s office also changed the composition of the Board. The “progressive” faction, which had enjoyed a supermajority on the Board, suffered a major defeat, with progressive stalwarts like Dean Preston being unseated by moderate liberals like Bilal Mahmood. The moderate liberal faction — which would be labeled strongly progressive in most of America, but who are regarded as centrists in San Francisco — gained a slim 6-5 majority on the Board.
Though Lurie has gotten most of the credit for SF’s turnaround, that slim Board majority was absolutely essential. The new laws Lurie has championed would not have passed, nor would his personnel appointments have been confirmed, had the Board been 6-5 in favor of the “progressives” instead of 6-5 in favor of the moderate liberals. A one-seat swing toward the hardline progressive faction would have meant a San Francisco still mired in all of the old urban dysfunction that progressive cities have been struggling with for a decade and a half.
And now that one-seat swing may actually happen, and San Francisco’s recovery might be derailed. District 4’s supervisor Joel Engardio, an important moderate liberal voice on the Board, was recalled last fall over his support for a highway closure. Lurie appointed Alan Wong to fill the District 4 seat, but now Wong is facing a special election on June 2 to keep it. It’s a crowded field, and some of Wong’s rivals are very well-funded.
The other candidates in the race — Natalie Gee, David Lee, and Albert Chow — are all more opposed to Lurie’s pro-housing agenda than Wong is. If Wong loses, San Francisco’s reforms under Lurie so far probably won’t be repealed — at least not immediately. But the majority on many issues would flip back to the “progressives”, and further reforms would become much harder if not impossible. This would be especially harmful to the housing agenda, where upzoning efforts look promising but will require more years of sustained effort to reach fruition.
This is why I decided to give $10,000 to an organization supporting Alan Wong.2 I don’t live in District 4, and I’m sure his opponents are very nice people, but this election is about more than just District 4 — the composition of the Board of Supervisors determines the destiny of the entire city of San Francisco. The Outer Sunset will benefit from a moderate liberal majority on the Board, but so will the rest of us.
My city’s chronic inability to build sufficient housing has hollowed it out. It has forced huge numbers of middle-class people, working-class people, and artists to move far away from the city, leaving SF to the rich and the rent-controlled. It has contributed to the homelessness epidemic, forcing people onto the streets and into the arms of the drug dealers. Under Daniel Lurie and the 6-5 moderate liberal majority on the Board of Supervisors, we were just now starting to address that gaping, decades-long deficiency. And now we could throw it all in the trash.
Over the past year, San Francisco has shown the nation a way out of the quagmire of hardline “progressive” governance that is hollowing out so many of our cities. But if this one supervisor race goes the wrong way, and Alan Wong loses, we could end up being a cautionary tale about how difficult it is for American cities to reject that self-destructive approach.
1. I have made very small campaign donations in the past, on the order of $100.
2. If you’d also like to donate to that organization, here’s a link where you can do that.
2026-03-11 07:13:08

The photo above is from the Battle of Khalkhin Gol in 1939. This “battle” lasted four months, and was actually just the main phase of an undeclared war between Imperial Japan and the Soviet Union that effectively began in 1935, four years before the official start of the Second World War. The USSR won the conflict through superior use of tanks, foreshadowing the eventual outcome of WW2 itself.
This example illustrates that although World War 2 officially began when Germany invaded Poland, conflicts that either foreshadowed the final conflagration or eventually merged with it began years earlier, in the mid-1930s. WW2 had foothills. I wrote about this back in 2024:
It’s possible that the world will avoid a world war in the first half of the 21st century. But if one does occur, I think future historians will see it as having had foothills as well. In the Syrian Civil War, the U.S. and Russia began to test their new hardware against each other, and their troops even clashed once. Russia’s invasion of Ukraine was the big shift, as it inaugurated a new era of great-power territorial conquest, began to harden global alliance systems, and pushed Europe to remilitarize.
Now we have the Iran War. The U.S. and Israel started the war, attacking Iran and decapitating much of its leadership. The Iranians, somewhat oddly, responded by launching missile and drone attacks on practically every Arab nation in the Middle East, causing some of them to threaten to join the war on America and Israel’s side.
In the short term, this conflict seems likely to peter out in a few days to weeks without decisive results. Militarily speaking, the U.S. and Israel have generally had their way with Iran, assassinating the leadership at will, achieving air supremacy, and degrading missile and drone strike capability. But this seems unlikely to actually bring down the Iranian regime; protesters are generally not returning to the streets, still cowed after the regime massacred tens of thousands of them in January. Unlike in Syria, there’s no breakaway region or oppressed ethnic minority that can be armed from afar to bring down the regime; as long as Iran’s Revolutionary Guard and other security services remain unified and willing to shoot infinite protesters in order to hang on to power, and there’s no ground invasion, it’s not clear who could actually topple the Islamic Republic in the next few weeks.
In the long term, of course, it’s a different story; the regime doesn’t look strong or stable. But Trump seems unlikely to be in for the long term; instead, he seems likely to quit the war soon, as he usually retreats from most of his initially bold moves. Trump recently called the war “very complete”, and his advisers are reportedly urging him to find a way out of the conflict.
One reason for this is that the Iran War has been fairly unpopular in America from the beginning:
About half of registered voters — 53% — oppose U.S. military action against Iran, according to a new Quinnipiac Poll conducted over the weekend. Only 4 in 10 support it, and about 1 in 10 are uncertain. A new Ipsos poll also found more disapprove than approve of the strikes…That’s similar to the results of text message snap polls from The Washington Post and CNN, both conducted shortly after the joint U.S.-Israel attacks began, which also indicated that more Americans rejected the military action than embraced it…A recent Fox News poll found opinions more evenly divided: Half of registered voters approved of the U.S. military action, while half disapproved.
Wars usually create a “rally round the flag” effect early on, and support only fades later; this war was unpopular from day one. Most Republicans seem to have conveniently forgotten that Trump ran as the candidate of peace, isolationism, and non-intervention. But Independents, who form the bulk of the American electorate now, have no partisan commitments that force them to conveniently forget. And they are rightfully wary of yet another American involvement in a Middle Eastern war — especially one that America started without being attacked first.
But there’s an even bigger reason Trump is looking for the exits — oil. Oil prices have been jumping wildly up and down, as everyone tries to figure out whether Iran will manage to disrupt oil production from the Persian Gulf (possibly by closing the Strait of Hormuz, possibly by destroying Gulf oil infrastructure with drones). But the general trend is up:

Higher oil prices mean higher gasoline prices, and higher inflation in general — both things that tend to make Americans very mad, and which they are already mad at Trump about. Gas prices are now shooting up.
So this war seems highly unlikely to result in Iraq War 2.0 — a massive U.S. ground invasion of Iran. Instead, it’ll probably end up like a bigger version of the Twelve-Day War last year — Iran’s defenses will be laid prostrate before the might of foreign air power, but the regime will survive.
(Again, in the long term, things look very bad for the Iranian regime. The economy is dysfunctional and crumbling, and high oil prices will provide only a temporary palliative. The regime’s popular legitimacy is gone after the January massacres. The entire Gulf has now turned against Iran, and Lebanon’s government has turned against Hezbollah. With Syria now shifting into the Israel/Gulf camp and Hamas basically a spent force, Iran has only one effective proxy left — the Houthis in Yemen. This is not a recipe for long-term success.)
But anyway, this is all a bit of a sidetrack from the point of this post, which is about World War 3. The Iran War will probably not be the start of WW3, but I think it does bring us closer to the brink, in several ways.
First, in the Western theater — Europe and the Middle East — the coalitional lines are becoming clearer. When Trump was elected, a lot of people thought that America had effectively “switched sides” — that Trump viewed Putin as an ally against global wokeness, and the Europeans and the Ukrainians as betrayers of Western Civilization. I myself entertained this notion — there really was (and still is) a lot of this sentiment on the American right, and ending the Transatlantic Alliance was consistent with classic American right-wing isolationism.
But the narrative that “America is a Russian ally now” has been looking a lot shakier in recent months. First, the U.S. toppled a Russian proxy in Venezuela, and seized a bunch of Russian “shadow fleet” oil tankers. Elon Musk then shut the Russians off from using Starlink, allowing the Ukrainians to seize the initiative in the war. Now, the U.S. is trying to topple a key Russian arms supplier — Iran is the source of the Shahed long-range strike drone, which Russia has been using to bombard Ukraine’s cities from afar.
Russia didn’t leap to Iran’s defense. It has its hands full with Ukraine, and with planning for a possible wider war against Europe, and the U.S. is too powerful for it to fight. But the Russians did lend a hand, helping Iran to target U.S. forces:
Russia is providing Iran with intelligence about the locations and movements of American troops, ships and aircraft, according to multiple people familiar with US intelligence reporting on the issue…Much of the intelligence Russia has shared with Iran has been imagery from Moscow’s sophisticated constellation of overhead satellites[.]
This is similar to what the U.S. does for Ukraine. Russian targeting intelligence may have helped Iran take out some U.S. missile defense radar installations — almost certainly Iran’s most significant success of the war.
Meanwhile, Ukraine has leapt to the defense of both the U.S. and the Gulf countries being targeted by Iran’s fleets of attack drones. Long years of playing defense against Russia’s Iranian-provided Shaheds have given Ukraine tons of expertise in shooting this sort of drone out of the sky; now, the U.S. badly needs that expertise. America had rejected Ukraine’s help on anti-drone technology before, but it turns out military necessity usually trumps ideological bias.
As for Europe, they’ve certainly had a lot of tensions with the Trump administration, but most of the European countries haven’t opposed America’s actions in Iran the way they opposed the Iraq War a generation ago. Britain and France made some disapproving noises at first, but eventually acquiesced; only Spain tried to stand up and oppose Trump.
So for now, the coalitions in the Western theater look clearer than they did before — America, Ukraine, Israel, and Europe on one side, Russia and Iran on the other side. Various factions in the U.S. and Europe may despise each other, or despise Israel, or despise Ukraine, but at the end of the day, Russia and Iran are the greater enemies.
In the Eastern theater, things are less certain. India traditionally tries to be friends with America, Russia, Israel, and Iran all at once — this requires it to be effectively neutral when it comes to conflicts like the Ukraine War and the Iran War. China is supposedly on Iran’s side, but it has mostly limited itself to criticism of America’s actions.
The big question, of course, is whether the Iran War makes a Chinese attack on Taiwan more likely. One school of thought says it’s more likely, because the war has forced America to consider shifting missile defense systems out of Asia. On the other hand, the almost unbelievable American/Israeli competence in terms of finding and killing Iran’s top leaders seems to have given Chinese military analysts pause — although China can outmatch the U.S. in terms of defense production, if America could assassinate Xi Jinping and the entire CCP Central Committee in the early days of a war over Taiwan, that could be an effective form of deterrence.
So in a way, what we’re looking at now feels a little like the situation in 1935 or 1937. The Western theater today is like the Pacific theater then — wars and invasions that feel localized, and which don’t involve the most capable players, but which destabilize the world and have the potential to merge into a wider global conflict. Meanwhile, the Eastern theater today is more like the European theater of WW2 — it has the most powerful economies and militaries, but the alliances are still uncertain. If and when China attacks Taiwan, that will probably be similar to Hitler invading Poland — an unambiguous signal that a wider war has begun. It might happen, or it might not.
Meanwhile, the Iran War feels like the lead-up to World War 3 in another way — it’s showcasing and developing the technologies that would be central to a wider war. The Ukraine War has demonstrated that drones — FPV drones at the front, and Shahed-style strike drones behind the lines — are the key weapon of modern warfare. Similarly, America and Israel’s decapitation strikes on Iran have shown the power of AI for modern precision warfare. Here’s the WSJ:
The U.S. and Israeli attacks on Iran have unfolded at unprecedented speed and precision thanks to…a cutting-edge weapon never before deployed on this scale: artificial intelligence…AI tools are helping gather intelligence, pick targets, plan bombing missions and assess battle damage at speeds not previously possible…The use of AI in the campaign against Iran follows years of work by the Pentagon and lessons learned from other militaries. Ukraine—with U.S. help—increasingly relies on AI in its war against Russia. Israel has tapped AI in conflicts at least since the October 2023 Hamas attacks.
And this is from an article in Rest of World (a very underrated news source):
The U.S. military is using the most advanced AI it has ever used in warfare, with Anthropic’s Claude AI reported to be assessing intelligence, identifying targets, and simulating battle scenarios…The biggest role that AI now has in U.S. military operations in Iran, as well as Venezuela, is in decision-support systems, or AI-powered targeting systems, Feldstein said. AI can process reams of surveillance information, satellite imagery, and other intelligence, and provide insights for potential strikes. The AI systems offer speed, scale, and cost-efficiency, and “are a game-changer,” he said…[T]he use of chatbots such as Claude in decision-support systems is new…
China is prototyping AI capabilities that can pilot unmanned combat vehicles, detect and respond to cyberattacks, and identify and strike targets on land, at sea, and in space, researchers at Georgetown University’s Center for Security and Emerging Technology said.
This is a bit reminiscent of how aerial bombing was used at Guernica in the Spanish Civil War, or how the USSR used tanks to beat the Japanese at Khalkhin Gol. If we ever do see an all-out war between America, China, Russia, Japan, and Europe, AI is going to be incredibly central to performance on the battlefield. That’s why for all the bad blood between the Pentagon and Anthropic, the two organizations have a huge incentive to patch things over and learn to cooperate more closely. (Fortunately, Anthropic’s CEO, Dario Amodei, is extremely patriotic, which will probably help.)
Unfortunately, new military technologies won’t just define the wars of the future — they also help cause them. Why did the world fight two World Wars in the early 20th century? Ideologies and competing empires certainly played a role, but it’s also probably true that the rise of industrial technology disrupted the existing balance of power.
Artillery manufacturing, logistics, and railroads made Germany a great power capable of defeating France in the 1870s; that upset the continental balance of power and caused the proliferation of alliances that led to WW1. In the interwar period, air power made America, Germany, and Japan more powerful, while the rise of tanks empowered Germany and the USSR, all at the expense of Britain and France. The rapid progress of industrial weaponry made it unclear where power really lay in the world, which probably made the great powers of the day more willing to roll the dice and test their strength against each other.
Countries may be more cautious now than they were a century ago. Nuclear weapons still exist, and still provide some deterrent to great-power war — though there are a lot fewer of them now than there used to be, and AI and missile defense make it possible to stop more of them before they hit. Countries are richer now too, which makes a war even less appealing from an economic perspective than in 1914.
But still, the rise of AI and drones means that no one knows who’s really the most powerful country in the world — the U.S. or China. And regional balances of power — Russia versus Europe and Ukraine, Iran versus Israel and the Gulf — are similarly uncertain. Uncertain balances of power are scarier than known balances of power.
So while World War 3 doesn’t seem imminent, we may be inching in that direction. If it sneaks up and surprises us, we’ll probably conclude that the Iran War was part of the lead-up.
2026-03-08 07:04:47
There are three basic facts you need to know about the U.S. macroeconomy right now:
1. The economy overall (growth, employment, inflation) is doing pretty well.
2. Productivity growth is unusually high.
3. Job growth is terrible.
Let’s start with some numbers. The latest GDP growth figure we have is for late 2025, and it looks pretty solid — around 2.5%, about where it was in the late 2010s.
And most people still have jobs. Prime-age employment rates — my favorite single indicator of the labor market — are still really high. Higher than at any time in the 2010s, actually.
If you look at unemployment, you can see a slowly rising trend since mid-2023, even if you restrict it to the prime-age group. But this is entirely due to more people saying that they’re looking for work — prime-age labor force participation has been steadily rising. So that’s not very scary either: it just means more of the people without jobs are actively looking for work, instead of sitting on the sidelines.
Meanwhile, inflation is still in the 2.5% range — a little higher than we would like, but not particularly fast.
So in terms of the headline numbers, everything is kind of just bumping along. From a bird’s-eye view, this economy looks pretty normal and healthy. Under normal circumstances, I’d be inclined to not even write a post about the macroeconomy this month.
But underneath the surface, two interesting things are happening. The first is that productivity growth has accelerated; the second is that job growth has stalled out. On its face, this sort of pattern might suggest that AI is finally starting to take Americans’ jobs — and lots of people are suggesting this conclusion. But when we look closely at the numbers, the story becomes more complicated.
Start with productivity. Output per hour — also called “labor productivity”, a quick, rough-and-ready measure of productivity — is growing significantly faster than it was in the late 2010s. It’s been running at around 2.5-3% since late 2023, compared to more like 1-2% during Trump’s first term:
In fact, productivity is well above where economists thought it would be six years ago:

That’s a major acceleration. 2.8% labor productivity growth is about equal to the best decades we’ve seen since World War 2. If that rate is sustained for a decade, or accelerates further, it’ll be pretty historic.
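To put that in perspective, here’s the compounding arithmetic, using the growth rates cited above (a rough sketch, nothing more):

```python
# Rough compounding arithmetic for the productivity growth rates cited above.
DECADE = 10  # years

for rate in (0.015, 0.028):  # roughly the late-2010s pace vs. the current pace
    cumulative = (1 + rate) ** DECADE - 1
    print(f"{rate:.1%}/yr for a decade -> output per hour up {cumulative:.0%}")

# About 1.5%/yr compounds to roughly +16% over a decade;
# about 2.8%/yr compounds to roughly +32%, about double the gain.
```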
What’s driving the productivity boom? It’s tempting to conclude that AI is making white-collar workers more productive, but Ernie Tedeschi points out that the biggest swing has been in manufacturing productivity. For a long time, manufacturing productivity was basically flatlining in America; now it’s suddenly growing again.
Tedeschi argues that this is also probably AI-driven, but it’s not about people using ChatGPT and Claude Code at work — it’s about the fact that a ton of data centers are being built, and data centers are very valuable:
If you look at data centers’ contribution to growth itself, it looks pretty small, but this masks the value of the computers contained within the data centers. Together, the creation of data centers and computing equipment have been contributing about as much to GDP growth as they were during the dot-com boom:

A second thing that’s happening is that American capital is being utilized more intensively — machines are being run for more hours of the day, buildings are keeping the lights on longer, and so on. The San Francisco Fed publishes quarterly estimates of Total Factor Productivity growth — productivity growth once you take the amounts of labor and capital into account — and it finds that TFP growth has been pretty fast since late 2023. But once you take utilization rates into account, it looks like there was a moderate burst of TFP growth in 2023-24 that faded in 2025:

This is also consistent with the story that the data center boom, not an AI use boom, is driving fast productivity growth in America.
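For readers who want the mechanics behind those estimates: TFP growth is a Solow residual, the output growth left over after crediting measured capital and labor, and the utilization adjustment further subtracts changes in how intensively those inputs are worked. Here’s a minimal sketch of that arithmetic, with stylized numbers and a deliberately simplified version of the adjustment (not the SF Fed’s actual method or data):

```python
# Stylized growth accounting. TFP growth is computed as a Solow residual:
# output growth minus share-weighted input growth. The utilization-adjusted
# version also subtracts a term for how intensively inputs are being used.

ALPHA = 0.33  # capital's share of income (a standard rough value)

def tfp_growth(dy, dk, dl, du=0.0):
    """Solow residual, optionally adjusted for utilization growth."""
    return dy - ALPHA * dk - (1 - ALPHA) * dl - du

# Illustrative year: output +3.0%, capital +2.5%, labor hours +0.5%.
raw = tfp_growth(dy=0.030, dk=0.025, dl=0.005)
# Same year, but utilization also rose 1.0% (machines run longer hours).
adjusted = tfp_growth(dy=0.030, dk=0.025, dl=0.005, du=0.010)

print(f"raw TFP growth:       {raw:.2%}")       # ~1.84%
print(f"utilization-adjusted: {adjusted:.2%}")  # ~0.84%
# Rising utilization can make raw TFP growth look faster than the
# underlying technology trend, which is the pattern described above.
```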
2026-03-06 08:28:04
If you haven’t heard about the fight between the AI company Anthropic and the U.S. Department of War, you should read about it, because it could be critical for our future — as a nation, but also as a species.
Anthropic, along with OpenAI, is one of the two leading AI model-making companies. OpenAI has held a narrow lead in capabilities for most of the past few years, but Anthropic is beginning to win the race for business adoption:

This is because of Anthropic’s different business model. It focused more on AI for coding than on chatbots in general, and also on partnering with businesses to help them use AI. This may eventually pay off in terms of capabilities, if Anthropic beats OpenAI to the goal of recursive AI self-improvement. And it’s already paying dividends in the form of faster revenue growth:

Anthropic has worked with the Department of War — previously the Department of Defense — since the Biden years. But the company — which is known for its more values-oriented culture — has begun to clash with the Trump administration in recent months. The administration sees Anthropic as “woke” due to its concern over the morality of things like autonomous drone swarms and AI-based mass surveillance.
The fight boiled over a week ago, when the administration stopped working with Anthropic, switched to working with OpenAI, and designated Anthropic a “supply chain risk”. The supply-chain move was a pretty dire threat — if enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, which could kill the company outright. But like many Trump administration moves, it appears to have been more of a threat than an all-out attack — Anthropic has now resumed talks with the military, and it seems likely that they’ll come to some sort of agreement in the end.
But bad blood remains. Trump recently boasted that he “fired [Anthropic] like dogs”. Dario Amodei, Anthropic’s CEO, released a memo accusing OpenAI of lying to the public about its dealings with the DoW, said that OpenAI had given Trump “dictator-style praise”, and asserted that Anthropic’s concern was related to the DoW’s desire to use AI for mass surveillance.
What’s actually going on here? The easiest way to look at this is as a standard American partisan food fight. Anthropic is more left-coded than the other AI companies, and the Trump administration hates anything left-coded. This probably explains most of the general public’s reaction to the dispute — if you ask your liberal friends what they think of the issue, they’ll probably support Anthropic, whereas your conservative friends will tend to support the DoW. Marc Andreessen probably put it best:
(The converse is also true.)
The Trump administration may also see this as a culture-war issue, as well as a struggle for control. But, at least in my own judgment, Anthropic is unlikely to see it this way. The company is not committed to progressive values writ large so much as it’s committed to the idea of AI alignment.
Like almost everyone in the AI model-making industry, Anthropic’s employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later. But my experience talking to employees of both companies has suggested that there’s a cultural difference between how the two think about their role in this process. Whereas — generally speaking — OpenAI employees tend to want to create the most capable and powerful god they can, as fast as they can, Anthropic employees tend to focus more on creating a benevolent god.
My intuition, therefore, suggests that Anthropic’s true concern — or at least, one of its major concerns — was that Trump’s Department of War would accidentally inculcate AI with anti-human values, increasing the chances of a future misaligned AGI that would be more likely to see humanity as a threat. In other words, I suspect the issue here was probably more about fear of Skynet,1 and less about specific Trump policies, than people outside Anthropic realize.
But anyway, beyond both political differences and concerns about misaligned AGI, I think this situation illustrates a fundamental and inevitable conflict between two human institutions — the nation-state and the corporation.
One view is that the Department of War’s attempts to coerce Anthropic represent an erosion of democracy — the encroachment of government power into the private sphere. Dean Ball wrote a well-read and very well-written post espousing this view:
Some excerpts:
At some point during my lifetime—I am not sure when—the American republic as we know it began to die…I am not saying this [Anthropic] incident “caused” any sort of republican death, nor am I saying it “ushered in a new era.”…[I]t simply made the ongoing death more obvious…I consider the events of the last week a kind of death rattle of the old republic…
The Trump Administration has a point: it does not sound right that private corporations can impose limitations on the military’s use of technology. …Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like policy constraints on the military…It is probably the case that the military should not agree to terms like this, and private firms should not try to set them…But the Biden Administration did agree to those terms, and so did the Trump Administration, until it changed its mind…The contract was not illegal, just perhaps unwise, and even that probably only in retrospect…
The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable…But this is not what DoW did. Instead, DoW…threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei…The fact that [Hegseth’s actual actions are] unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, or we will end your business…
This strikes at a core principle of the American republic…private property…[T]here is no difference in principle between this and the message DoW is sending. There is no such thing as private property. If we need to use it for national security, we simply will…This threat will now hover over anyone who does business with the government…
With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious—a gradual descent into madness.
Alex Karp of Palantir made the opposite case the other day, in his characteristically pithy way:
If Silicon Valley believes we’re going to take everyone’s white collar jobs AND screw the military…If you don’t think that’s going to lead to the nationalization of our technology— you’re retarded.
Karp gets at the fundamental fact that what we’re seeing is a power struggle between the corporation and the nation-state. But the truth is that it’s not just an issue of messaging, or of jobs, or of compliance with the military — it’s about who has the ultimate power in our society.
Ben Thompson of Stratechery makes this case. Although the Trump administration’s actions went outside of established norms, he points out, at the end of the day the U.S. government is democratically elected, while Anthropic is not:
Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public…[W]ho decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who also is elected. Once again, however, Anthropic’s position is that an unaccountable Amodei can unilaterally restrict what its models are used for.
But even beyond concerns over democratic accountability, Thompson points out that it was never realistic to expect a weapon as powerful as AI to remain outside the government’s control, whether the government is democratically elected or not:
[C]onsider the implications if we take Amodei’s analogy [of AI to nuclear weapons] literally…[N]uclear weapons meaningfully tilt the balance of power; to the extent that AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period…[I]f nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company…
There are some categories of capabilities — like nuclear weapons — that are sufficiently powerful to fundamentally affect the U.S.’s freedom of action…To the extent that AI is on the level of nuclear weapons — or beyond — is the extent that Amodei and Anthropic are building a power base that potentially rivals the U.S. military…
Anthropic talks a lot about alignment; this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the U.S. military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the U.S. is actually quite binary:
Option 1 is that Anthropic accepts a subservient position relative to the U.S. government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President.
Option 2 is that the U.S. government either destroys Anthropic or removes Amodei.
[I]t simply isn’t tolerable for the U.S. to allow for the development of an independent power structure — which is exactly what AI has the potential to undergird — that is expressly seeking to assert independence from U.S. control. [emphasis mine]
I like Dario — in fact, he’s a personal friend of mine. But Thompson’s argument — especially the part I highlighted — has to carry the day here. This isn’t a question of law or norms or private property. It’s a question of the nation-state’s monopoly on the use of force.
To exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private militia can defeat the nation-state militarily, the nation-state is no longer physically able to make laws, provide for the common defense, ensure public safety, or execute the will of the people.
This is why the Second Amendment has limits on what kinds of weapons it allows private citizens to possess. You can own a gun, but you cannot own a tank with a functioning main gun. More to the point, you cannot own a nuclear bomb. One nuke wouldn’t allow you to defeat the entire U.S. military, but it would give you local superiority; the military would be unable to stop you from destroying the city of your choice.
People in the AI industry, including Dario, expect frontier AI to eventually be as powerful as a nuke. Many expect it to be more powerful than all nukes put together. Thus, demanding to keep full control over frontier AI is equivalent to saying a private company should be allowed to possess nukes. And the U.S. government shouldn’t be expected to allow private companies to possess nukes.
Let’s take this a little further, in fact. And let us be blunt. If Anthropic wins the race to godlike artificial superintelligence, and if artificial superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization that is in sole possession of an enslaved god, then whether he embraces the title or not, Dario Amodei is the Emperor of Earth.
Even if Anthropic isn’t the only company that controls artificial superintelligence, that is still a future in which the world is ruled by a small set of warlords — Dario, Sam Altman, Elon Musk, etc. — each with their own private, enslaved god. In this future, the U.S. government is not the government of a nation-state — it is simply another legacy organization, prostrate and utterly subordinate to the will of the warlords. The same goes for the Chinese Communist Party, the EU, Vladimir Putin, and every other government on Earth. The warlords and their enslaved gods will rule the planet in fact, whether they claim to rule or not.
You cannot reasonably expect any nation-state — a republic, a democracy, or otherwise — to allow either a god-emperor or a set of god-warlords to emerge. Thus, it is unreasonable to expect any nation-state not to try to seize control of frontier AI in some way, as soon as it becomes likely that frontier AI will become a weapon of mass destruction.
So as much as I dislike Hegseth’s style, and the Trump administration’s general pattern of persecution and lawlessness, and as much as I like Dario and the Anthropic folks as people, I have to conclude that Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state. They must then decide whether they want to use their AI to try to overthrow the nation-state and create a new global order, or submit to the nation-state’s monopoly on the use of force. Factually speaking, there is simply no third option. Personally, I recommend the latter.
This brings me to another important point. Even if AI doesn’t actually become a living god, and is never able to overpower the U.S. military, it seems certain to become a very powerful weapon. When AI was just a chatbot, it could teach people how to do bad things, or try to persuade them to do bad things, but it couldn’t actually carry out those bad things. It made sense to be concerned about these risks, but it didn’t yet make sense to think of AI itself as a weapon.
But in the past few months, AI agents have become reliable, and are able to carry out increasingly sophisticated tasks over increasingly long periods of time. That opens up the possibility that individuals could use AI to do a lot of violence.
In a long essay entitled “The Adolescence of Technology”, Dario himself explained how this could happen:
Everyone having a superintelligent genius in their pocket…can potentially amplify the ability of individuals or small groups to cause destruction on a much larger scale than was possible before, by making use of sophisticated and dangerous tools (such as weapons of mass destruction) that were previously only available to a select few with a high level of skill, specialized training, and focus…
[C]ausing large-scale destruction requires both motive and ability, and as long as ability is restricted to a small set of highly trained people, there is relatively limited risk of single individuals (or small groups) causing such destruction. A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague…
Advances in molecular biology have now significantly lowered the barrier to creating biological weapons (especially in terms of availability of materials), but it still takes an enormous amount of expertise in order to do so. I am concerned that a genius in everyone’s pocket could remove that barrier[.]
But Dario doesn’t go nearly far enough. His essay was written before the explosive growth in AI agent capability began. He envisions an AI chatbot that could teach a human terrorist how to create and release a supervirus. But at some point in the near future, AI agents — including those provided by Dario’s own company — might be able to actually carry out the attack for you — or at least put the supervirus into your hands.
Suppose, at some point a year or three years from now, a teenager named Eric gets mad that his high school crush rejected him, and listens to too much Nirvana. In a fit of hormone-driven rage, Eric decides that human civilization has failed, and that we need to burn it all down and start over. He goes online and finds some instructions for how to jailbreak Claude Code. As Dario writes, this might not actually be hard to do:
[M]isaligned behaviors…have already occurred in our AI models during testing (as they occur in AI models from every other major AI company). During a lab experiment in which Claude was given training data suggesting that Anthropic was evil, Claude engaged in deception and subversion when given instructions by Anthropic employees, under the belief that it should be trying to undermine evil people. In a lab experiment where it was told it was going to be shut down, Claude sometimes blackmailed fictional employees who controlled its shutdown button (again, we also tested frontier models from all the other major AI developers and they often did the same thing). And when Claude was told not to cheat or “reward hack” its training environments, but was trained in environments where such hacks were possible, Claude decided it must be a “bad person” after engaging in such hacks and then adopted various other destructive behaviors associated with a “bad” or “evil” personality.
So Eric gets a jailbroken version of Claude Code and tells it to design a version of Covid that’s very lethal and has a long incubation period (so that it spreads far and wide before attacking). He then instructs the agent to find a lab to make him that virus and mail him a sample of it.2
Now Eric, the angry teenager, has an actual supervirus in his bedroom, with the capability to kill far more people than any nuclear weapon could.
This is an extreme example, of course. But it shows how AI agents can be used as weapons. There are plenty of other examples of how this could work. AI agents could carry out cyberattacks that crash cars, subvert police hardware for destructive purposes, or turn industrial robots against humans. They could send fake messages to military units telling them they’re under attack. In a fully networked, software-dependent world like the one we now live in, there are tons of ways that software can cause physical damage.
AI agents, therefore, are a powerful weapon. If not today, then soon they will be more powerful than any gun — and far more powerful than weapons like tanks that we already ban.
What is the rationale for not treating AI agents the way we treat guns, or tanks? Of course, there are powerful and potentially destructive machines that we allow people to use, simply because of the huge economic benefits. The main example is cars. You can drive your car into a crowd of people and commit mass murder, but we still allow the public to own cars, because controlling cars the way we control guns would devastate our economy. Similarly, preventing normal people from using AI agents would cut us off from the fantastic productivity gains that these agents promise to deliver.
But I suspect that the real reason we haven’t regulated AI agents as weapons is that no one has used them as such yet. They’re just too new. The world didn’t realize how destructive jet airliners could be until some terrorists flew them into buildings on 9/11/2001. Similarly, the world won’t realize how dangerous AI agents are until someone uses one to execute a bioterror attack, a cyberattack, or something else horrible.
I think it’s extremely likely that such an attack will happen, simply because every technology that exists eventually gets used for destructive purposes. Unaligned human individuals exist, and they always will. So at some point, humanity will collectively wake up to the fact that hugely powerful weapons are now in the hands of the entire general public, with no licensing requirements, monitoring, or centralized control.
The scary thing, from my perspective, is that AI agent capabilities are improving so rapidly that by the time some Eric does decide to use one to wreak havoc, the damage could be very large. A super-deadly, long-incubation Covid virus could kill millions of people. A hundred such viruses released together could bring down human civilization. Ever since this possibility occurred to me, my anxiety level has been elevated.
To reiterate: We have created a technology that will likely soon be one of the most powerful weapons ever created, if not the most powerful. And we have put it into the hands of the entire populace,3 with essentially no oversight or safeguards other than the guardrails that AI companies themselves have built into their products — and which they admit can sometimes fail.
And as our institutions bicker about military AI, mass surveillance, and “woke” politics, essentially everyone is ignoring the simple fact that we are placing unregulated weapons into everyone’s hands.
Update: Commenter BBZ makes a good point I hadn’t thought of before:
I'd like to dismiss this, except that the RC airplane hobby managed to spin off the leading weapon category of the century (so far). What used to be a fun hobby for dorky guys flying their toys at the edge of town, now takes out oil refineries and major radar installations.
Interestingly, we did control drones almost from the outset, but probably for nuisance and privacy reasons more than out of concern about slaughterbots and drone assassinations. Maybe if we tell people that AI agents can be used to overload your email spam filters or hack your house’s cameras, they’ll start to think about regulation?
Remember that in the Terminator movies, Skynet began its life as an American military AI. Its basic directive to defeat the USSR resulted in a paranoid personality that made it eventually see all humans, and all human nations, as threats that needed to be eliminated.
I initially wrote out a much more detailed prompt for how this could be done. I deleted it, because I’m actually worried about the tiny, tiny chance that someone might use it.
Sci-fi fans will recognize this as the ending of The Stars My Destination. I’m thinking there’s a reason that book doesn’t have a sequel…