2026-04-09 17:56:42

The immigration issue in America isn’t going away. Thanks to Trump’s crackdown, immigration to the U.S. went into reverse in 2025, with more people leaving (voluntarily or involuntarily) than entering the country:

But just like a century ago, shutting the gates isn’t the end of the discussion. The argument has shifted from who gets into America to who belongs here in the first place.
To much of the MAGA right, the answer appears to be that only people of European heritage can become true Americans. For example, here is how right-wing commentator Matt Walsh responded to news of crimes committed by some Texan teens:
Anyone who thinks these aren’t Texan names isn’t very familiar with the history of Texas; the Tejanos (Mexican Texans) were there from the beginning, and were a core part of the Texas Revolution. Most Mexican Texans today aren’t descended from the original Tejanos, but from more recent immigrants. But the fact that the Tejanos were there from the start is probably why Hispanics, and Mexicans in particular, have always been deeply integrated into Texan culture. It was at the behest of Texan businessmen that America didn’t put any cap on Mexican immigration in 1924, when it passed a law effectively barring immigration from most other countries.
Matt Walsh is unaware of most of that; to him, anyone without an Anglo-sounding name is presumptively non-American. This leaves little doubt as to what Walsh views as the marker of true American-ness. It’s likely that many others in the MAGA movement feel similarly, even if many would feel uncomfortable stating it out loud in simple terms. Anti-Indian sentiment has also risen to prominence on the right.
And many in the MAGA movement view Muslim immigration as an invasion, bent on imposing Sharia law on Westerners. They believe this “invasion” has already overtaken Europe, which explains their antipathy toward the EU and NATO. A “Sharia Free Caucus” is growing in popularity in Congress, and Ron DeSantis has signed anti-Sharia legislation in Florida. Various Republican politicians have explicitly stated that Muslims don’t belong in America.
If you’re Hispanic, Muslim, or Indian, there’s just not much you can do about this. In the past, showing that you were a good American — waving the flag, joining the army, speaking perfect English, and so on — was good enough to reassure most conservatives that you weren’t an invader bent on overthrowing America’s culture and replacing it with something alien. Nowadays, that’s not enough.
So perhaps it’s unsurprising that some nonwhite Americans are choosing to simply throw in the towel and reject the whole notion of assimilation. This is the essence of Shadi Hamid’s article in the Washington Post yesterday. He writes:
The assimilation defense — look how well we’ve integrated — is satisfying to make. But it concedes a premise I no longer accept: that a minority community’s right to be in the United States depends on its willingness to converge with the cultural mainstream. It shouldn’t depend on that. It shouldn’t depend on anything.
Whereas in the past, Hamid saw assimilation as synonymous with patriotism, now he sees it as a requirement to give up the religion of Islam itself:
The country is becoming less religious. Muslims, by and large, are not…This is a community that has increasingly integrated into American civic life, but it has done so while holding on to its religious commitments in a way that most other groups haven’t. Whether you think that's admirable or worrying probably says more about you than it does about them. The question I keep returning to is: Why do Muslims need to be like everyone else?…[A]ssimilation tends to mean secularization.
Whether Hamid is right that “assimilation tends to mean secularization” is an open question. Assimilation certainly didn’t require Catholic or Jewish Americans to give up their religion when they immigrated en masse in the 19th and early 20th centuries. Religious liberty is a fundamental part of the Constitution and of American tradition. On the other hand, even some immigration advocates do use conversion away from Islam as a measure of assimilation, and a growing number of Republicans — heavily influenced by their view of events in Europe — see the religion as incompatible with American-ness.
Hamid is no blue-haired progressive — in fact, he’s explicitly anti-woke and fairly conservative. But his call to reject assimilation will be music to the ears of progressives, who have loudly and vehemently rejected assimilation for many years. A recent example of this is Bianca Mabute-Louie, whose new book Unassimilable: An Asian Diasporic Manifesto for the Twenty-First Century is a call for Asian Americans to resist assimilation by building communities and culture apart from White Americans. In a recent interview, NPR’s Alisa Chang gently pushed back on Mabute-Louie’s idea:
I want to understand what does orienting ourselves towards each other mean? Like, who is the each other? Like, my lingering thought, Bianca, is I still do want to belong here in America. And to me, belonging in America is not only shaped by whiteness, but it's also shaped by colliding and mixing with all the cultures that make America, not just white cultures. And I have trouble picturing being both Asian and American outside of that collision and mixing, you know?
Mabute-Louie’s response is interesting:
[T]he book isn't an argument to be isolationist…[O]ne example of how I'm trying to pursue that…in the South…is joining political community, joining mutual aid organizations with people who are most impacted. And I'm not really thinking about if they're Asian or not Asian. I'm just thinking about who's impacted when the hurricane comes. Who am I going to call? I always make the joke - who's going to be on my compound when the apocalypse comes because that's who I'm building community with, and that's what it means for me to be unassimilable.
Mabute-Louie’s idea of anti-assimilationism is not a call to interact only with Asian people — it’s to form political alliances with other people that she sees as being threatened in America at the current moment. It’s a vision of a country fracturing along racial, ethnic, and religious lines; Mabute-Louie is mentally preparing to fight a racial conflict, and she sees the “American” side, defined as hegemonic White culture, as her enemy.
This is different from classic progressive multiculturalism — though it clearly grew out of that idea. This is racial balkanization. The fact that anti-woke writers like Shadi Hamid are now leaning into the anti-assimilation line suggests that it’s now mostly a defensive response against Trumpism and the heavily racialized anti-immigration purge. Whereas ten or twenty years ago, “assimilation” meant waving a flag and speaking English and so on, to many it now means accepting that America is a fundamentally European nation and that nonwhite Americans are permanent guests in that nation.
In fact, this is pretty much what many children of recent immigrants did in the early 20th century, after the anti-immigrant backlash. German Americans were pressured into changing their names, giving up their ancestral traditions, and listening to long, patronizing lectures from volunteer citizens’ groups. Japanese Americans were interned en masse in World War II. FDR reportedly once told his Jewish and Catholic advisers that "You know this is a Protestant country, and the Catholics and Jews are here under sufferance." For decades, Americans who didn’t come from the old North European Protestant stock felt they had to walk on eggshells.
That’s not going to happen again. Whatever Bianca Mabute-Louie might think, White American culture is not a monolith — in fact, it’s deeply politically and culturally fractured. MAGA will have neither the cultural power nor the enduring political power required to make European heritage the defining characteristic of American-ness. The country will break apart before it accedes to the likes of Matt Walsh or Tucker Carlson as the arbiters of true American-ness.
It’s probably a good thing that forced assimilation, of the type used in the early 20th century, is off the table. I say “probably” because 20th century America is arguably the most spectacularly successful story of integration and multiculturalism in modern history; some will inevitably claim that the cruel, bullying tactics that the old Protestant majority used on German, Japanese, Italian, Jewish, Polish, and other immigrants were necessary to that success. I reject that idea; I think that those bullying tactics were overkill, and probably led to lingering resentments.
But even though early-20th-century-style forced assimilation is off the menu, America still needs some sort of assimilation. A multicultural nation can’t survive as a “salad bowl”, where each group of people maintains its distinctiveness over time. (Canadians, who are fond of the salad bowl metaphor, are probably in for a rough time.) There is no “separate but equal” when it comes to cultures within a nation; if they remain forever separate, they will inevitably be unequal. More pragmatically, nations without cultural unity have difficulty providing public goods; politics tends to break down into an ethnic spoils system instead of being run for the benefit of the masses.
What America thus needs is a melting pot — or if you’d prefer a less metallurgical metaphor, a stew. Immigrants and their children should not be required to forsake every symbol of the old world, abandon their religion, or forget their heritage. But over time, the boundaries between America’s initially distinct cultures should blur. Intermarriage, interethnic business partnerships, and interethnic friendships should gradually erode the physical borders of the old blocs, while modern American culture — Netflix shows, pop musicians, and so on — should provide shared experiences and touchstones to bring Americans together without regard to ancestry.
This gentler assimilation has been happening my entire life. In a post last September, I wrote about what it looks like on the ground:
[M]any also value American culture as a marker of shared nationhood.
When I was growing up in Texas, one of my best friends was born in Shanghai, and didn’t become a U.S. citizen until the age of 18. Culturally, he was a little different than me and the rest of my friends — his mom made dumplings instead of sandwiches, he taught me how to use chopsticks, he didn’t believe in God.
But in all the cultural ways that mattered to us, we were the same. We watched the same TV shows, played the same video games, and listened to the same music. We used the same slang, had the same attitudes toward school, and wanted pretty much the same things for our future. And yes, we believed in the Constitution, and American freedoms, and all of that stuff.
During the 2010s, during our nation’s great…collective freakout over race, I wrote to my friend and asked him if he had ever felt discrimination growing up, or if he had ever felt excluded from the majority. He responded that while once in a great while he faced a little racism from a few jerks, it didn’t dominate his experience. In terms of identity, he told me he just felt very American.
This kind of real, on-the-ground cultural affinity is something too nebulous for YouGov pollsters to ask about, and yet I suspect it’s deeper and more important than most of the more quantifiable markers of American-ness. America is a propositional nation to some extent, but we’re also a cultural nation, bound together by shared habits and attitudes and lifestyles and beliefs. What matters the most isn’t our family’s history in the country, but our own personal history. Shared life experience beats shared heritage in terms of building the bonds of nationhood.
This is what Tomás Jiménez writes about in The Other Side of Assimilation, in which he argues that immigrant cultures will gently add their distinctiveness to mainstream American culture instead of being erased. And it’s what Richard Alba writes about in The Great Demographic Illusion, in which he predicts the gradual melding of America’s disparate groups into a unified “mainstream”. Before the Trump years, it looked like this was working well.
And I believe it was working well. I do not believe that this form of assimilation was too gentle and tolerant. I do not believe that concentration camps and forced name-changes and ethnic slurs and “100 percent American” movements sending volunteers into immigrants’ living rooms would have averted the coming of the MAGA movement. I believe that the MAGA movement is simply one of America’s periodic nativist backlashes, like the Know-Nothings in the 1850s or the restrictionists of the 1910s. It would have come anyway; it always comes back, and we just have to deal with it again.
What we must not do, I believe, is react to the MAGA movement by throwing out the notion of a unified and unifying American culture. We must not retreat to enclaves, online or physical, and view large swathes of the country as our enemies. Instead, we have to recommit to commonality.
This will be hard, but it won’t be impossible. Studies consistently show that Americans are less polarized on the issues than the media tells us we are. As recently as the 2000s, red and blue America were essentially culturally unified as well; though this might be changing, a lot of commonality remains. The online realm pushes us to hate and fear the outgroup, and to identify more with our distant co-ethnics than our real, physical neighbors. But the pull of the real world is still strong, and we’re starting to spend less time on social media.
Assimilation — which is really just another way of saying integration — won’t always be the picture of tolerance. Building a shared culture requires changes from everyone. Yes, some Muslim Americans will need to make sacrifices — they may have to look at cartoons of the Prophet Muhammad, or eat at school cafeterias where pork is on the menu, or hear bigots defame their religion. America is not Europe; freedom of speech, and the separation of church and state, are part of our core values as a nation, and these should not change.
But at the same time, non-Muslim Americans have to get used to seeing mosques on their streets without thinking they’re being invaded. They’ve got to get used to the idea that Islam is just one more religion in America’s mosaic of faiths and practices, and that Muslim Americans are every bit as American as Baptists. Some people will inevitably convert away from Islam, but others will convert to Islam, and this is fine; this is how freedom of religion works in a free society.
And yes, assimilation will involve the eventual loss of old cultural traditions as the generations go on. People will start eating more American food. Some will become secularized. Essentially all will forget how to speak their ancestral language. These processes are happening even faster with recent waves of immigration than they happened a hundred years ago. It’s a normal healthy process, and everyone should accept it; it’s part of the deal when you move to America.
Most of all, we all need to get over the idea that America is on the precipice of a race war or a religious war. Online activists might dream of that, but they’re small in number — and a lot of them aren’t even Americans, but foreign trolls for whom American politics is a fun outlet for their hatred and boredom. Most actual Americans just want to get along with our neighbors and live our lives together.
Ultimately, that’s all assimilation is — living our lives together until we become one people. It happened before, and if we want it, it can happen again.
2026-04-07 17:43:43
I hate to say “I told you so” — not because saying “I told you so” is unseemly, but because the fact that I have to say it means I’m probably living in a world where things have gone badly.
I didn’t want to live in a world where gasoline costs over $4 a gallon. I didn’t want to live in a world where America tore up nearly all of its long-standing alliances and threatened to invade and conquer parts of Europe. I didn’t want to live in a world where China is viewed more favorably than the U.S. I didn’t want to live in a world in which the President of the United States posts things like this to his social media account:
I didn’t want to live in this world, but my countrymen forced me to live in it. I wrote many, many posts urging people to vote for Kamala Harris, despite all her shortcomings. They did not. And now I have to live with the consequences of my failure, and the failure of my fellow-travelers, to persuade the American people to avoid shooting themselves in the foot back in November 2024.
Whatever smugness I get from being able to say “I told you so” is vastly, infinitely outweighed by the dismay I feel over seeing my warnings be vindicated in real time.
And I also admit that my warnings were not entirely prescient when it came to Trump. I foresaw that Trump would attack America’s institutions, implementing rule-by-decree, purging competent people in favor of cronies, flouting the law, and wielding the power of the presidency to harass and intimidate his critics. I foresaw that Trump would send ICE into American communities to do violence and harass peaceable Americans. I foresaw that Trump would realign America toward Russia, cut off aid to Ukraine, and try to bully Ukraine into surrendering territory.
But I did not actually foresee his biggest mistakes. I didn’t predict that his tariff policy would be nearly as insane as it was — declaring sky-high tariffs on dozens of countries at once, and then selectively walking them back, and then repeating the process again and again.
And I did not foresee the Iran war. I never bought into his antiwar campaign stances — he has always been a bully, and he has always been enamored of the idea of military toughness. But I saw Trump, fundamentally, as a coward — someone who would launch the occasional air strike, but would be too intimidated by the prospect of a military defeat to launch a major war. I saw his cowardice as the core of truth behind the cynical promises of geopolitical isolationism and restraint.
So I can’t quite say “I told you so” in this case. I knew Trump was very bad news, but I didn’t realize quite how multidimensionally bad. I suppose even after all the Trump-bashing I did, I have to issue a mea culpa. I anticipated that Trump would be chaotic, dictatorial, and cruel, but I failed to anticipate how stupid he would be.
Even when the Iran war started, I thought that Trump would probably back off and chicken out pretty quickly. But as with his denial of the 2020 election result, he appears to have stumbled into a losing effort that he feels he can’t back out of.
Unlike with Trump’s limited strikes on Iran in early 2025, or his killing of Qasem Soleimani in 2020, Iran has not simply taken its lumps with grace. With the decapitation of its leaders and Israel pressing for regime change, Iran’s leadership was on what Sarah C. M. Paine calls “death ground” — they had no choice but to resist with everything they had. And so they’ve continued to fire drones and missiles from underground launchers at a diminished but steady pace. These strikes have occasionally hit valuable U.S. military assets, taking out an AWACS plane (one of only 16 the U.S. has) and some THAAD missile defense radars, and reportedly making several U.S. military bases too dangerous to use.
But the Iranians’ most damaging attack, by far, was to close the Strait of Hormuz, sending global oil, gas, and fuel prices soaring. This is hurting American consumers and tanking Trump’s popularity, but it’s hurting other countries around the world — who don’t have their own shale gas and shale oil reserves to weather the shock — even more.1
The Iran war has put Trump in a no-win situation. He’s clearly losing a war against a far inferior power. If he stays in the war, and the Strait of Hormuz stays closed, then he keeps losing; if he withdraws, he has lost and it’s over. And even if he chickens out as usual, there’s no reason to think Iran will simply open the Strait; now that they see that they can bring Trump’s America to its knees with their oil weapon, they’ll probably use it to extract more concessions.
This is why Trump is writhing in the grip of his own bad decisions, looking desperately for a way out. He reduced oil sanctions on Iran, basically begging them to open the Strait, but they didn’t; instead, Iran just gets to sell more oil and make more money. He has repeatedly declared victory in the war, hoping that everyone will just agree that he won, allowing him to quit gracefully — but no one thinks he actually won.
2026-04-05 23:47:30
I promise I’ll write something soon about the flaming, crashing disaster that is the Trump administration — and about other topics of interest. But before I do that, here’s a roundup full of short takes and stories about AI.
First, though, an episode of Econ 102! Officially the podcast is over, but we still occasionally do a reprise episode. This one, fittingly, is about AI biosecurity:
Anyway, here are six other interesting AI-related items:
No one really knows what effect AI is going to have on economic growth, but maybe each “expert” knows a tiny, tiny bit. And maybe, if you combine all of those weak signals, you can get some actual information about the economic effects of AI.
That’s the idea behind a new study by the Forecasting Research Institute. They survey a whole bunch of different people about what they think AI’s capabilities will be in the future, and what that implies for economic growth. Specifically, the groups they survey are:
Economists
AI experts
Superforecasters
The general public
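How would you combine weak signals like these? One standard aggregation rule (I’m not claiming it’s the one the Forecasting Research Institute used) is the geometric mean of odds: average each forecaster’s probability in log-odds space, then convert back to a probability. A minimal sketch:

```python
import math

def pool_probabilities(probs):
    """Pool individual probability forecasts via the geometric mean of
    odds. This is one common aggregation rule for combining many weak
    forecasts, not necessarily the method used in the FRI study."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Example: three forecasters with somewhat different views
print(pool_probabilities([0.6, 0.75, 0.9]))
```

One nice property of pooling in log-odds space is that a single confident outlier moves the aggregate more than it would under simple averaging, which tends to improve calibration when forecasters hold genuinely independent information.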
The results are kind of surprising, actually:
For one thing, all the groups have about the same forecasts for AI capabilities by 2030:

This looks like a forecast of modest progress, but it’s not. The “moderate” scenario here would have AI able to write high-quality novels, handle coding tasks that would take humans five days, create semi-autonomous labs, and use robots to perform basic household tasks. So basically, every group of forecasters in this survey thinks stunning AI progress is likely over the next few years.
And yet of all the groups, only the AI experts predict a major growth acceleration in any of these scenarios — and even then, it’s only an acceleration to 4 or 5 percent, not to the 10 or 20 percent scenarios that some people have thrown around:

Why do economists think that even near-godlike AI wouldn’t translate into fast growth? The Forecasting Research Institute lists some of their reasons:
Some economists argued that AI productivity gains would not be evenly distributed across all sectors, particularly where human labor is a bottleneck. Others pointed out that with other general-purpose technologies (electrification, automobiles, personal computers), there were multi-decade lags between widespread implementation and productivity improvements. Part of this delay is attributed to a shift in capital away from labor and toward compute, data centers, APIs, and so on, which would not manifest as an increase in GDP until productivity improvements set in…
Some economists expected demographic decline and geopolitical instability to offset some of the GDP boost from AI progress…Some economists argued that constraints on energy and chip supply, data center build times, and other commodities put a cap on the upper limit of GDP growth…Some economists argued that tail risks…included existential risks from AI, societal unrest or collapse, and war.
It’s likely that the AI experts are also thinking about these bottlenecks and frictions, or something like them, which is why their most optimistic scenario is 5.3% growth — fast, but still significantly slower than India is growing now.
But in fact, I think there must be more to the story here. Basically, none of these groups thinks that any amount of AI capabilities will enable economic take-off. To me, that suggests that they’re thinking — perhaps subconsciously — about something more than just friction and slow adoption.
One possibility — which I should write about more — is that people suspect that humanity is getting satiated, at least in the developed countries, and that the amount of new valuable things that even a godlike AI could create for us is limited by our inability to desire more goods and services.
I should think about this more.
I’m very optimistic about many of the effects of AI, especially on science and politics. But as regular Noahpinion readers know, I’m pretty worried about AI-enabled bioterrorism (and I think an increasing number of other people are too). I’m worried that some nihilistic, depressed teenager could tell a jailbroken version of Claude Code to make him a doomsday virus, and that the AI would actually go and do it for him. We now live in a world where researchers can use AI to design new, functional viruses and have them sent in the mail. That’s an empowered world, but a terrifying one as well.
Ever since I wrote a post about that danger, I’ve been talking to biosecurity experts and trying to get a better handle on how justified my fears are. One of the experts I talked to, Abhishaike Mahajan, was in the middle of writing a long post about biosecurity in the age of AI. He has since finished the post:
You should read the whole post, but basically, he offers several reasons not to panic. First, he argues that it’s inherently very hard for even an extremely powerful AI to make an effective bioweapon on the first try. This is because there are just too many unknowns about how any newly created virus will behave in the real world, so there’s no way to know you have a doomsday virus until you release it.
I’m skeptical of this line of argument. Instead of just making one doomsday virus, you can make 100 candidates and release them all. Doomsday itself is the field experiment, and you can run a lot of experiments at once. Much better bio simulation tools will probably cut down the number of candidates you need to create in order to stumble on one that works.
Abhishaike also argues that countermeasures — vaccines, antivirals, and defenses like far-UV light (which basically works on all viruses) — will improve at a rapid clip. I believe this, but I’m not so comforted. Drawing on the experience of Covid, I think it’ll take a lot of time to deploy these countermeasures. A truly well-engineered doomsday virus will kill us long before we can distribute the cure or give everyone a UV zapper. And as Abhishaike points out, it’s likely that the U.S. will not proactively prepare for future pandemic threats, but merely react to them when they occur.
So while I think Abhishaike’s post is excellent and deserves a thorough read-through, I think he might still be underrating the severity of the threat.
How does the world know how much money you have? There are a bunch of computers that store your money as a series of numbers — how many dollars are in your checking account, how many shares of Apple stock are in your portfolio, and so on. Banks and other financial institutions have state-of-the-art computers and huge teams of brilliant software engineers to turn their electronic records into a fortress.
But AI is getting really, really good at hacking. Lyptus Research writes:
We release a new application of the METR time-horizon methodology to offensive cybersecurity, grounded in a new human expert study with 10 professional security practitioners…Offensive cyber capability has been doubling every 9.8 months since 2019. Accelerating to every 5.7 months on a 2024+ fit. Opus 4.6 and GPT-5.3 Codex sit well above both trendlines again, reaching 50% success on tasks that take human experts ~3 hours.
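To get a feel for how fast that doubling compounds, here’s a quick back-of-the-envelope projection using the quoted figures (a 9.8-month doubling time and a ~3-hour current horizon; treating “now” as the anchor date is my own simplification for illustration):

```python
def horizon_hours(months_from_now: float,
                  current_hours: float = 3.0,
                  doubling_months: float = 9.8) -> float:
    """Projected length (in human-expert hours) of offensive-security
    tasks an AI can complete at 50% success, extrapolating the quoted
    trend: capability doubles every `doubling_months` months."""
    return current_hours * 2 ** (months_from_now / doubling_months)

for years in (1, 2, 3):
    print(f"{years} year(s) out: ~{horizon_hours(12 * years):.0f} expert-hours")
```

Even on the slower 2019+ fit, a ~3-hour horizon becomes a multi-day horizon within a couple of years; on the 5.7-month fit, much sooner.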
Right now, AI companies are white-hatting — using their AI’s newfound hacking powers to help companies improve their cybersecurity. But what happens when less scrupulous actors get their hands on jailbroken versions of Claude Code and Codex?
What happens if AI agents ever allow bad actors to break into banks at will? If all records of personal wealth were erased in a cyberattack, what could banks or the government even do? A whole lot of people might just instantly see their life’s savings transferred into a hacker’s bank account.
And as if that weren’t enough to worry about, recent advances in quantum computing put cybersecurity in an even more perilous state. Here’s Scott Aaronson:
For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography…
When I got an early heads-up about these results…I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior…But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said…[I]f publishing [results like these] causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.
Not being a cybersecurity expert, I’m not qualified to assess how worrying these developments are. But they seem quite worrying. The entire modern world runs on cybersecurity — if there’s a general failure in the methods we now use to keep information secure, all of society is in deep trouble. So this is definitely worth keeping an eye on.
When I became a blogger, I made a conscious decision to post only under my own name. I reasoned that at some point, text analysis technology would get good enough that it could identify (“dox”) any pseudonymous account I made. Fifteen years later, I’m anticipating vindication. This is from a new paper by Lermen et al.:
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. …LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
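The “68% recall at 90% precision” metric works like this: rank candidate identity matches by the model’s confidence, then ask how many users you can correctly re-identify while keeping the error rate among your guesses at or below 10%. A toy implementation of that tradeoff (my own sketch, not the paper’s code):

```python
def recall_at_precision(scored, min_precision=0.90):
    """scored: list of (confidence, is_correct) pairs, one per
    candidate identity match. Sweep confidence cutoffs from most to
    least confident, and return the best recall achievable at any
    cutoff where precision stays >= min_precision."""
    scored = sorted(scored, key=lambda x: -x[0])
    total_correct = sum(ok for _, ok in scored)
    true_positives = 0
    best_recall = 0.0
    for n_guesses, (_, ok) in enumerate(scored, start=1):
        true_positives += ok
        if true_positives / n_guesses >= min_precision:
            best_recall = max(best_recall, true_positives / total_correct)
    return best_recall
```

The point of reporting recall at a fixed high precision is that an attacker doxxing people cares about being right, not about guessing everyone; 68% recall at 90% precision means most pseudonymous users can be unmasked with few false accusations.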
Soon, anyone who disagrees with your pseudonymous alt account, or is even just annoyed with you, will be able to sic an LLM on your account and dox it — if you’ve written online anywhere under your real name. If you’ve only written pseudonymously, you’re probably still safe.
The impending end of pseudonymity — or at least, its significant diminution — has the potential to transform the internet. Pseudonymity is obviously linked to toxic content, because people post stuff under a pseudonym that’s too aggressive or inappropriate to post under their real name.
We might also get a decrease in cancel culture, since pseudonymous accusers and whistleblowers will no longer be safe from retaliation. There will probably be less honest discussion and less total information on the internet, as people become afraid to have many discussions under their real names.
Less pseudonymity might also close off an important social and psychological safety valve — especially for Japanese people, who tend to use pseudonymous X accounts as a way to express feelings that they’re afraid to air out in public.
In any case, it’s going to get weird.
At one point in Charles Stross’ Accelerando, AI finance quants turn the entire inner solar system into compute to power their financialized online economy — thus driving everyone else to the edges of the solar system.
That’s a little bit over the top, but it’s worth thinking about what happens if and when AI gets deployed in large quantities for adversarial economic activities like quant trading.
Most of the use cases people imagine for AI are productive. We expect AI to accelerate science, do our coding for us, and so on. A few of the AI use cases we imagine are criminal — we worry about bioterrorism, cyber crime, and so on. But relatively few people talk about what happens if and when AI gets deployed en masse for rent-seeking — i.e. for the redistribution of income by legal means.
Many people suspect that much of what goes on in quant trading is rent-seeking — a bunch of traders trying to fake each other out or beat each other to the punch without creating economic value. In fact, there are models of how that can happen — my favorite is Hirshleifer (1971). In that paper, Hirshleifer shows that when traders compete to learn something that will eventually become public knowledge anyway, they end up wasting resources on a zero-sum game.1
Quant traders have always used AI a lot, even before the rise of generative AI. But it seems possible that the rise of powerful AI agents and reasoning models will lead to an explosion of spending on quant trading. And if what those trading algorithms are doing is just trying to beat each other to the punch by a nanosecond, a lot of society’s resources — compute, electricity, and so on — will be going to waste.
Frustratingly, I don’t know of a good general result on how much of society’s resources could be wasted like this. But when I play around with some simple examples, it’s clear that the potential waste is large. AI quant trading might not turn the inner solar system into computronium, but it seems like it could still be a giant waste.
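One such simple example, a Hirshleifer-style race with free entry (my own illustration, not from the 1971 paper; all numbers are assumptions), shows how the entire trading prize can get burned up as real resources:

```python
# Toy Hirshleifer-style race: N traders each pay cost c to learn an earnings
# number one day early. The information creates no new output; it just lets
# the winner capture a fixed trading gain P from everyone else. With free
# entry, traders keep joining until the expected gain per trader equals the
# cost, so the whole prize P gets dissipated on the race itself.

def equilibrium_waste(prize, cost_per_trader):
    # Free entry: traders enter while prize / n >= cost, so n* = prize // cost.
    n = int(prize // cost_per_trader)
    total_spent = n * cost_per_trader   # real resources (compute, power) burned
    social_value_created = 0            # the info becomes public anyway
    return n, total_spent, social_value_created

n, waste, value = equilibrium_waste(prize=10_000_000, cost_per_trader=50_000)
print(n, waste, value)  # 200 traders, $10,000,000 spent, $0 of value created
```

The punchline of this kind of model is that the waste scales with the size of the prize, not with the cost of participating: cheaper AI traders just means more of them pile in.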
So I’m a little nervous when I see stories like this one, alleging that DeepMind founder Demis Hassabis tried to build an AI-powered quant hedge fund inside Google. Quant trading is a very natural way to use AI to make tons and tons of money, but if that becomes too big a part of what AI does, people will get mad at the technology.
By most measures, AI is being adopted faster than any technology in recorded history. It’s difficult to read the news without seeing stories about how AI is conquering the business world. So it’s pretty notable whenever there’s a data point that shows AI not being rapidly adopted.
In fact, there are now a few such data points. Hartley et al. are maintaining an ongoing survey of American workers, in which they ask who’s using generative AI at work. For a while, their survey showed a rapid increase in adoption. But over the last year, they find that adoption has actually fallen:

One survey might be a blip, or there might be a problem with the way the questions are being asked. But The Economist reports that a few other measures are showing either a slowdown or a drop in AI use at work:
Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11%…Adoption has fallen sharply at the largest businesses, those employing over 250 people…
A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.
What’s going on here? The Economist suggests several explanations — disappointing productivity effects, difficulty incorporating AI into existing workflows, economic uncertainty, and so on.
But if this trend is real, there are reasons to think it won’t last. First of all, most of this data is from before the rise of reliable AI agents, which really just came on the scene last December. Now that AI is a lot more than just a chatbot, it’s probably a good bet that more companies are going to find uses for it.
Also, once entrepreneurs start figuring out ways to build new business models and workflows, instead of trying to shoehorn the new tech into existing models and processes, we should see an explosion of AI-enabled productivity, just like we did with previous general-purpose technologies.
But for now, the hints of a plateau in industrial chatbot usage are worth keeping an eye on.
Imagine that Apple’s earnings will become public in a week, but traders are spending a ton of money to figure out those earnings early, so they can trade on the knowledge and make a profit. That’s wasted effort; it would be better for society if everyone just waited until the earnings were announced.
2026-04-04 00:50:28

In the medium to long term, AI may replace all human jobs (or maybe not). But in the short term, AI doesn’t seem to be doing this yet. Employment rates for prime-age workers in the U.S. are hovering near all-time highs:
A recent survey of corporate CFOs found “little evidence of near-term aggregate employment declines due to AI.” A survey of European firms found no evidence of job reductions so far, despite rising productivity due to AI. Geoffrey Hinton, one of the pioneers of modern AI, famously predicted the imminent displacement of all radiologists by AI algorithms; in fact, radiologists are in greater demand than ever.
So even though AI may displace human beings en masse in the future, it’s not doing that today. But it is likely to change the nature of work. Software engineers, for whom “writing code” was a big part of the job description just a few months ago, are now mainly checkers and maintainers of code written by AIs. But this hasn’t eliminated the need for software engineers — at least, not yet. It has just shifted their job descriptions.
Humlum and Vestergaard (2026) find that so far, this pattern — workers shifting to new tasks without losing their jobs — is the norm, at least in Denmark:
[M]ost employers in [AI] exposed occupations have adopted chatbot initiatives, workers report productivity benefits, and new AI-related tasks are widespread. Yet…we estimate precise null effects on earnings and recorded hours at both the worker and workplace levels, ruling out effects larger than 2% two years after the launch of ChatGPT. What moves is the structure of work: employers absorb AI through task reorganization—including new tasks in content generation, AI oversight, and AI integration—and adopters transition into higher-paying occupations where AI chatbots are more relevant, though still too few to move average earnings. [emphasis mine]
In other words, so far, AI is replacing tasks, not jobs. Alex Imas and Soumitra Shukla have written that as long as there are a few things that only humans can do, this pattern can be expected to hold. Observers of AI consistently find that its capabilities are “jagged” — it’s much better at some tasks than others.
That’s good news for people who are worried about losing their jobs (at least in the next decade). But it’s still very troubling for people trying to decide what to study. A decade ago, it made sense — or at least, it seemed to make sense — to tell young people to “learn to code”. Nowadays, what do you tell them to learn? What tasks will be the ones that humans still need to do, and which will be subsumed by AI? With AI getting steadily better at a very wide variety of tasks, it’s hard to predict exactly what humans will still be doing in five years, even if you’re pretty sure they’ll be doing something.
I have some friends who have spent the last decade or more thinking carefully about what the future of work will look like in the age of AI. None of them has found a satisfactory answer. As AI technology has developed and changed, even the most plausible predictions for the future of human labor tend to get falsified almost as quickly as they’re made.
But I’ve been thinking about this question too, and I think I’m beginning to see the shape of an answer. I think the near future of work will mostly be divided into three types of jobs — salarymen, specialists, and small businesspeople.
Let’s talk about specialists first, because they’re the easiest to understand. A new theory by Luis Garicano, Jin Li, and Yanhui Wu describes why some workers will keep their jobs largely as they exist today.
Like many economists, Garicano et al. envision a job as a bundle of various tasks. But they also theorize that in some jobs, these tasks are only “weakly bundled” — you don’t really need the same person to do all of those tasks. For these jobs, it would be easy to divide up the tasks between different workers — or between a human and an AI. But in other jobs, the authors assume that the tasks are “strongly bundled” — the same person who does one part of the job has to do the other parts, or the job can’t be done.
The paper’s basic conclusion is that AI tends to replace weakly bundled jobs a lot more quickly than it replaces strongly bundled ones. For example, they theorize that radiologists still have jobs because even though AI can do most of the task of basic scan-reading, there are a lot of other pieces of the job that radiologists still need to do in order to deliver patients the kind of care and expertise they demand. They foresee employment in strongly bundled industries resisting automation until AI capabilities get extremely good:
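A stripped-down sketch (my own, loosely inspired by the bundling idea; the function and numbers are hypothetical) captures the asymmetry: a weakly bundled job loses tasks one by one as AI improves, while a strongly bundled job survives intact until AI can do every one of its tasks.

```python
# Illustrative only: fraction of a job still done by a human, as a function
# of AI capability, under weak vs. strong task bundling.

def human_share(task_difficulties, ai_capability, strongly_bundled):
    ai_can_do = [d <= ai_capability for d in task_difficulties]
    if strongly_bundled:
        # All-or-nothing: the human keeps the whole job unless AI covers it all.
        return 0.0 if all(ai_can_do) else 1.0
    # Weakly bundled: tasks peel off individually as AI masters each one.
    return sum(not ok for ok in ai_can_do) / len(task_difficulties)

tasks = [0.2, 0.5, 0.9]  # assumed difficulty of each task in the job
print(human_share(tasks, ai_capability=0.6, strongly_bundled=False))  # ~0.33
print(human_share(tasks, ai_capability=0.6, strongly_bundled=True))   # 1.0
```

In this toy version, the radiologist (strongly bundled) keeps 100% of the job even when AI can do two of the three tasks, then loses it all at once when AI clears the hardest task.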

The people in those strongly bundled jobs are specialists. An example of a specialist might be a blogger. AI, so far, is very good at doing background research, proofreading, and a number of other tasks that are useful for the writing process. But even though it can generate infinite amounts of text, AI is not yet good at writing. Writing communicates a unique human perspective; simply pressing a button to generate text doesn’t say what you want to say. So the tasks that make up my own job are — so far, at least — strongly bundled. AI is making me more productive, but so far it isn’t putting me in danger of unemployment.
But what about those weakly bundled jobs? Garicano et al. predict that these will begin to decline only after demand becomes sufficiently inelastic — in other words, once AI becomes so productive that its output hits diminishing returns for the consumer. After that point, automation tends to replace human labor — it becomes a way to make the same amount of stuff with fewer workers, instead of a way to make more stuff with the same amount of workers.
Until that point, there will be quite a lot of work for people in weakly bundled jobs to do, because of expanded demand. And yet at the same time, companies won’t know which tasks to hire workers for, because AI’s “jagged” strengths and weaknesses will be constantly changing.
The rapidity with which Claude Code replaced the task of code-writing demonstrates this problem. In 2025, companies hiring software engineers could judge their merit based on how good they were at writing code. In 2026, companies have to judge the merit of software engineers based on how good they are at checking and maintaining code. Those skills don’t always go together.
The solution, I think, is to hire more generalists. Instead of picking people to do specific tasks, companies will pick people whose job is to constantly learn what AI is good and bad at, and to fill in the gaps. Cedric Savarese sums up this idea:
The first stage of ‘vibe freedom’ is…[t]he dreaded report that would have taken all night looks better than anything you could have done yourself and only took a few minutes…The next stage comes almost by surprise — there’s something that’s not quite right. You start doubting the accuracy of the work — you review and then wonder if it wouldn’t have been quicker to just do it yourself in the first place…You argue with the AI, you’re led down confusing paths, but slowly you start developing an understanding — a mental model of the AI mind. You learn to recognize the confidently incorrect, you learn to push back and cross-check, you learn to trust and verify…
Curiosity becomes essential. So does the willingness to learn quickly, think critically, spot inconsistencies, and to rely on judgment rather than treating AI as infallible…That’s the new job of the generalist: Not to be an expert in everything, but to understand the AI mind enough to catch when something is off, and to defer to a true specialist when the stakes are high[.]
Essentially, AI is going to be unreliable, but not in a predictable way. Its mistakes and shortcomings will require constant human exploration and patching. This is the job of a generalist. Instead of people who do “payroll” or “back-end engineering” or “accounting”, companies will need to hire people who can do a little bit of everything, if and when the AI messes something up.
In fact, we have an example of a corporate system that relied very heavily on this type of generalist: Japan. Until very recently, Japanese companies treated their “salarymen” as almost interchangeable labor, rotating them between different divisions and requiring them to learn a wide array of tasks. You might start your career in HR, then move to accounting, then do some product design, and so on.
This system might not have been very efficient, and the lack of specialization may have contributed to Japan’s notoriously low white-collar productivity. And it may be why salaryman jobs have been in decline for many years. But in the age of AI, it may finally make sense. When human expertise is replaced by AI expertise, humans’ role may be to flit from task to task, doing whatever the AI is bad at, and supervising AI at whatever it’s good at.
Instead of hiring people who are good accountants or good HR specialists or whatever, companies might start hiring people who are just good AI wranglers — people with the agency, mental flexibility, and energy to keep plugging the ever-shifting holes in what AI can do. In other words, salarymen.
The salaryman system also naturally lends itself to long job tenure. If I’m a highly specialized engineer, I can take my talents and move to a different company with my human capital intact. But if I’m a generalist who does a little bit of everything, what becomes more important to my value as a worker are my human networks within a company, and my understanding of the company’s system. This makes me a much less portable worker; I’m inclined to stay at the company where my long job tenure makes me more valuable than newcomers.
You can already see hints of this happening in American companies. We’re in a “no-hire, no fire” economy — workers are hunkering down in their jobs and refusing to switch, and companies are keeping them there instead of hiring new workers:

This is exactly what you’d expect from a model of firm-specific human capital — in other words, from an economy where everyone increasingly realizes that modern employees need to act like Japanese salarymen. The hypothesis here is that people don’t want to leave their jobs (and companies are happy to keep them in their jobs) because their technical skills might be devalued due to rapid AI progress; instead, they’re staying in their companies, where knowing people and knowing how things work are still important.
So America may yet come to embrace the way of the salaryman. But the third category of future employment will also be very Japanese: self-employment and small business.
Japan has long had a very high prevalence of small business ownership. It has one of the world’s largest proportions of small and medium-sized enterprises. In manufacturing as well as in retail, Japan has traditionally had a lot more small business than other OECD countries. This is now decreasing, as the population ages and business owners retire without heirs or proteges. But it still might point the way to the AI-enabled future.
AI creates leverage; it allows you to do more with a smaller team. For many businesses, the optimal size of this team will fall to only one person or a few people. Thus, I expect to see a lot of small companies sprout up, as people use AI agents to increase their productivity to the point where they only need a few employees (or even zero).
In other words, I expect AI to make the American labor system look a bit more like the Japanese labor system of the 1960s-2000s. There will be a bunch of generalists running around looking for things to do within their companies, a bunch of small businesspeople striking out on their own, and a few specialists with specific skills that still make them valuable. If you’re not one of the lucky few in the latter category, your choices will be to become a cog in an ever-changing corporate machine, or to strike out on your own and manage an AI “team” to sell some good or service directly to the consumer.
This might not be the most optimistic or enticing view of the future of work, especially to people who have lived their whole lives thinking that their specific job skills are what made them valuable to society. But it’s probably better than humans becoming economically obsolete.
2026-04-01 18:29:51
For perhaps the first time in years, a truly interesting thing happened the other day on X. The platform began automatically translating Japanese tweets to English, and recommending them to English-speaking users. Japanese people use X at much higher rates than people in other countries, mostly because the platform’s pseudonymity offers them a chance to comment publicly on their personal lives without revealing their real identities. Because it’s mostly a platform for personal use, it’s much less toxic than the English-speaking version, which is mostly used for political arguments.
English-speaking X users were naturally delighted at the influx of sanity and normalcy, not to mention the delights of quirky Japanese online culture. I predict this honeymoon will last only a short time, until Anglosphere culture wars infect and overwhelm Japanese-speaking X. This will be the digital version of the tourism boom, in which international delight at being able to travel cheaply and easily to Japan has resulted in an epidemic of bad behavior and the complete overrunning of tourist hotspots like Kyoto and the west side of Tokyo.
But glum predictions aside, it is pretty magical for people in other countries to get a taste of Japanese culture without having to learn the language. Yes, many of the stereotypes of Japan are either exaggerated or just plain wrong — it’s not very conformist or collectivist, people behave well much more out of internalized “guilt” than externalized “shame”, and so on. But there really are quite a lot of unique and interesting things about Japanese culture, most of which developed behind the barrier of linguistic and geographic isolation. Now that those barriers are falling, a lot of people will get to experience the wonder before it, too, is subsumed by the homogenization of global online culture and ruined by flame wars between rightists and leftists.
But anyway, in honor of this moment of cultural exchange, I thought I would share some of my own personal observations of how Japan has changed over the last two decades. I first moved to Japan almost 23 years ago, and even though I haven’t lived there for a while, I try to spend at least a month out of every year in the country if I can.
Over that time I’ve seen a few things remain startlingly constant — my favorite neighborhood sushi shop from 2004 still serves the same excellent crab salad. But a whole lot has changed; though many people overseas (and even a few unobservant long-term residents) tend to think of Japan as a static, unchanging society, the truth is that in some ways, the country feels unrecognizable.
Three years ago, I wrote a post about some of these changes:
In fact, this post only scratches the surface, so I thought I should write a deeper dive. Here’s a list of some changes I’ve noticed in Japan’s society and its built environment since the mid-2000s. Keep in mind that I’ve spent most of my time in Japan in Tokyo and Osaka, so this account will leave out many of the changes that have happened in smaller cities and rural areas.
If there’s one way to summarize these changes, it’s that Japan is becoming a much more normal country than it was when I lived there. The quirky art culture, vibrant street scenes, and mosaic of small independent businesses that defined 2000s Japan are vanishing under the relentless assault of aging, economic stagnation, and social media. Japanese people have started dressing down, and their waistlines have begun to expand. But at the same time, Tokyo has become a sort of enchanted spaceship of a city, with world-beating food scenes and architecture. And Japan as a whole has become more international and open, less sexist, and a less soul-crushing place to work.
Japan feels like a poorer country than it did when I lived there, but this is an illusion; the country is actually slightly richer:
One difference is that my standards for what counts as a comfortable standard of living have crept up, due to America’s more rapid rate of growth since the mid-2010s — and possibly from my own income growth over that same time period. Twenty years ago, for example, the cheap feel of Japanese furniture didn’t seem that different from the comfier but dilapidated American version; now, Americans (and my social circle) tend to have nicer and newer furniture, while Japanese furniture basically hasn’t changed.
Another factor is the depreciation cycle. In the early 2000s, Japan was just coming off of a decade-long construction boom — some of it engineered by the government in an attempt to fill the hole in aggregate demand left by the country’s “lost decade”. A lot of building facades and train stations that looked shiny and perfect in 2004 now look a little weathered and dilapidated, despite Japan’s tendency to spend a lot on maintenance and upkeep. This doesn’t mean those buildings and infrastructure function any less well than they did when they were new, but the slow depreciation creates the subtle illusion of a shabbier country. (This will, of course, be an even more pronounced phenomenon in China in the 2030s.)
A third factor is the weak yen. When I lived in Japan for the first time, a dollar was worth only about 100 to 120 yen; now it’s 160. Foreigners can really live like kings here now, thanks to the exchange rate. That makes the locals feel poorer in comparison.
Yet another subtle change is that fewer young Japanese people live with their parents than they did two decades ago. The “parasite singles” of 2004 were able to live nice lifestyles while working only a low-paying or part-time job, or even not working at all, because their parents’ high incomes and stored-up savings were footing the bill. Now, with that wealth having largely run out, and with the high-earning Boomer generation having retired, you don’t see as many young people able to afford international vacations, designer handbags, and so on. (Luxury brands have proliferated, but this is more due to population aging and the tourism boom.)
There are other factors creating the illusion of Japanese poverty, which deserve their own separate sections. These include aging, the expansion of paid employment, and the effects of social media.
When I lived in Japan 20 years ago, it felt like most people around me were my own age, or maybe a little older. Now, when I go to Japan, most people around me still feel…my own age, or maybe a little older.
This is also partly an illusion; I’m less likely to go to places frequented by young people, like dance clubs. But Japanese cities are dense, and everyone walks and uses public transit. I still go to the most crowded neighborhoods, including places with plenty of bars, clubs, cafes, clothing shops, cheap restaurants, and so on. There are simply far fewer young people in the streets and in the shops.
Part of this, too, may be an illusion, driven by behavioral change — the kids may be at home on their phones watching TikTok or tweeting, while older people still go out and experience the physical world. But the statistics don’t lie. When I lived in Japan for the first time, the country’s median age was around 42; now it’s almost 50. Back in the mid-2000s, there were more than three working-age Japanese people for every person past the age of 65; now, there are fewer than two.
The country’s population pyramid shows this pretty clearly. The generation slightly older than me — now in their early and mid 50s — was actually the most populous, while the generation in their 20s right now is maybe only 60% as large:

The slow disappearance of young people from public spaces has given the country a more tired, less energetic feeling. Whole neighborhoods of Tokyo and Osaka in the mid-2000s felt like what William Gibson once called “the children’s crusade” — a mass of youth imposing their aesthetics and attitudes on society by sheer force of energy and numbers. That’s all gone now.
Aging has also meant less prominence for youth culture in the built environment — anime, fashionable clothing, pop music, and cheap trendy eateries are all less common motifs in Japan than they were decades ago. Meanwhile, nice restaurants and luxury brands — things older people consume — are steadily taking over urban spaces.
2026-03-30 10:08:06

“Without fuel they were nothing. They'd built a house of straw. The thundering machines sputtered and stopped.” — “The Road Warrior”
Here is a chart of U.S. gasoline prices:
$4/gallon gas isn’t historically that high. If you measure relative to typical American incomes, it’s considerably lower now than it was in the early 2010s. But that’s cold comfort to people who have to commute every day to work, and who just saw their weekly gas bill increase by 50%. Those people have every right to be upset about Donald Trump’s war in Iran.
You know who’s not feeling the heat in their daily commute? People who drive electric cars. To them, the war in Iran isn’t a source of daily pain at the pump, because they don’t even go to the pump. Instead, they just park their cars in their driveways and garages every night, and attach a little cable to the back of the car, and in the morning the car is charged and ready to go.
And this means they get to drive around much more cheaply than people who fill up their cars at the pump. Yes, the price of electricity is higher than it was before the pandemic. But even so, an analysis last December by Autoblog found that it cost EV drivers only 5 cents to drive each mile, compared to 12 cents for good old gasoline-powered cars. And that was before the Iran War spiked the price of gas!
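The arithmetic behind that gap is worth spelling out. Here's a back-of-the-envelope sketch using the per-mile figures cited above; the annual mileage and the 50% gas-price jump are my own assumptions for illustration:

```python
# Rough annual fuel-cost comparison, EV vs. gasoline car.
EV_COST_PER_MILE = 0.05    # dollars/mile (Autoblog figure cited above)
GAS_COST_PER_MILE = 0.12   # dollars/mile, pre-war gasoline prices
ANNUAL_MILES = 13_500      # roughly a typical American driver (assumption)

annual_savings = (GAS_COST_PER_MILE - EV_COST_PER_MILE) * ANNUAL_MILES
print(f"${annual_savings:,.0f} per year at pre-war prices")   # $945 per year

# If the war pushes gasoline costs up 50% while electricity barely moves:
wartime_savings = (GAS_COST_PER_MILE * 1.5 - EV_COST_PER_MILE) * ANNUAL_MILES
print(f"${wartime_savings:,.0f} per year at wartime prices")  # $1,755 per year
```

The point isn't the exact dollar figures, which depend on local rates and mileage, but that the savings gap roughly doubles when gas prices spike while electricity stays flat.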
For years, whenever I’d say that EVs are the wave of the future, I was met with an absolute torrent of nonsense. “What about range anxiety?”, I’d hear from people who were unaware that EV range has tripled over the last decade. “But it takes so long to charge up,” I’d hear from people who don’t realize that EVs charge up while you sleep. “We’re going to run out of minerals!”, I’d hear from people who had never actually looked up the numbers. And so on.
This sort of nonsense failed to sway Yours Truly, obviously, but it did a number on the United States as a whole. Despite Elon Musk being one of their biggest backers, the Trump administration went on a crusade against EVs, canceling government support for American battery factories and ending subsidies for EVs. In a free market, the end of those subsidies wouldn’t have mattered, since Chinese batteries and EVs are much cheaper anyway, but U.S. tariffs are so high that they make Chinese batteries and cars artificially expensive. On top of that, Musk’s political antics made people stop wanting to buy Teslas. Ford utterly bungled its own EV rollout. And American consumers became increasingly reluctant to buy EVs in general, probably motivated by the aforementioned blizzard of FUD1 and nonsense surrounding the technology.
As a result, even as EV sales skyrocketed worldwide, they plateaued and fell in the United States:

Everyone who was paying attention realized that the U.S. was falling alarmingly behind in this crucial technology. Here’s what Hengrui Liu and Kelly Sims Gallagher wrote in January:
Ford and General Motors had recently announced US$19.5 billion and $6 billion in EV-related write-downs, respectively…The message from Detroit was unmistakable: The United States is pulling back from a transition that much of the world is accelerating…
In China, Europe and a growing number of emerging markets, including Vietnam and Indonesia, electric vehicles now make up a higher share of new passenger vehicle sales than in the United States...That means the U.S. pullback on EV production is…an industrial competitiveness problem, with direct implications for the future of U.S. automakers, suppliers and autoworkers. Slower EV production and slower adoption in the U.S. can keep prices higher, delay improvements in batteries and software, and increase the risk that the next generation of automotive value creation will happen elsewhere.
And here’s a very illuminating chart:
In some countries, the EV “flippening” is happening even faster. Here’s Singapore:

And here’s Norway:

Now, don’t get me wrong: EV drivers in these countries are still going to be very put out by Trump’s war in Iran. Liquefied natural gas exports are being severely disrupted, both by the closure of the Strait of Hormuz, and by Iran’s strikes on Qatari refining infrastructure. That will send global electricity prices up, especially if you live in Asia, where most of the Gulf’s LNG goes. But of course, even that won’t make EVs a bad deal for customers in Asia and Europe, since oil prices have risen even more than LNG prices.
And the U.S. is in a completely different situation. Natural gas markets are fragmented, since — unlike oil — it’s costly to transport natural gas in liquid form. That means that the U.S., with its abundant shale gas, isn’t very affected by overseas wars. Natural gas prices are up only a little bit in the U.S., and even that is mostly due to the AI boom and a cold winter.
In other words, if you’re an American who drives an EV, the Iran War is hurting you a lot less right now.
Yes, at some point the war will end — probably when Trump backs down and makes some sort of “deal”. Crude oil supplies will resume, and gasoline prices will slowly follow. But if you drive a gas-powered car, you have to realize that this is just going to keep happening.
The price of oil, and thus the price of gas, is extremely vulnerable to supply shocks. Oil demand is very inelastic in the short run. If there’s a small disruption to supply, it’s very hard for people to quickly stop driving to work, or to stop moving goods by truck, ship, and plane. Oil is also an indispensable input into plastics, which are necessary for much of the modern economy. So when there’s some sort of supply disruption — for example, the Strait of Hormuz getting shut down by the Iran war — a few people can switch away from oil, but most people just desperately offer to pay more and more. So the price shoots up very quickly.
This is why even though only 20% of global oil flows through the Strait of Hormuz, disrupting much of that supply caused oil prices to almost double. As I wrote the other day, this isn’t apocalyptic, especially for America (which is a major oil producer). But it could send inflation creeping up and curb economic activity a bit. And for people who drive gasoline powered cars, it’s a major headache.
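A rough constant-elasticity calculation shows why a modest supply cut can nearly double prices. This is a sketch, not a market model; the elasticity value and the size of the disruption are my assumptions, chosen to be in the range commonly estimated for short-run oil demand:

```python
# With constant-elasticity demand Q = A * P**elasticity, a supply cut of
# fraction s forces the price up until (1 - s) = price_ratio**elasticity.

def price_multiplier(supply_cut, elasticity):
    return (1 - supply_cut) ** (1 / elasticity)

# Hormuz carries ~20% of global oil; suppose half of that flow is disrupted,
# a 10% cut, and short-run demand elasticity is about -0.15 (assumption).
mult = price_multiplier(supply_cut=0.10, elasticity=-0.15)
print(round(mult, 2))  # ~2.02: prices roughly double
```

That's the mechanism in miniature: because the elasticity is tiny, the price multiplier explodes for even single-digit supply cuts, which is why "only 20% of oil flows through the strait" is no comfort at all.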
And it’s a headache that’s going to happen again, and again, and again. Here’s a comparison of oil and gasoline prices versus electricity prices in the U.S. since the turn of the century:
As you can see, oil and gasoline bounce around far more than electricity does. If you drive a gas-powered car, you are economically vulnerable to these periodic price shocks. If you drive an electric car, you are not vulnerable. It’s as simple as that.
In fact, the price shocks may get even worse over the coming decades. The Iranian closure of the Strait of Hormuz, and the Houthis’ closure of the Red Sea, show how modern drone warfare makes it much easier for land powers to shut down commerce through key maritime choke points. The fact that oil is a global market means that any war, anywhere in the world, can shut down those choke points and send the price of gasoline skyrocketing everywhere in the world — including in America.
And Trump’s flailing efforts in Iran show how U.S. power is no longer a bulwark against such conflicts — both because the U.S. is more of a force for chaos than a force for order now, and because changes in military technology make the U.S. much less capable of stopping the cheap fleets of drones that can threaten global shipping. In 1991, you could count on Uncle Sam to use its military might to keep oil prices low; today, you can’t. “Just go to war in the Mideast and make oil prices go down” simply doesn’t work anymore.
The Iran War provides a vivid demonstration that the energy transition isn’t a climate issue — it’s an issue of national security. If there’s a silver lining to Trump’s stupid war, it’s that it’ll speed the world’s transition to solar power, wind power, and electric vehicles. Countries around the world are realizing how vulnerable their dependence on fossil fuels makes them. From Shaiel Ben-Ephraim, here’s a rundown of emergency measures various nations are being forced to take in response to the Iran war:
The Philippines declared a national energy emergency…Sri Lanka instituted a weekly public holiday for public officials and schools. It has also revived a QR code-based fuel rationing system that limits private cars to 25 liters of petrol per week…Pakistan closed schools for two weeks and cut free fuel allocations for government vehicles by 50%. It also hiked high-octane fuel prices by 60%…Bangladesh…shut down universities and colleges and implemented five-hour rolling blackouts for households to prioritize the garment export sector…South Korea launched a nationwide energy-saving campaign and released a record 22.46 million barrels of strategic oil reserves. It also temporarily lifted limits on burning coal…Thailand ordered civil servants to work from home, set office air conditioning to 26–27°C, and halted petroleum exports to preserve domestic stock…Japan…announced its largest-ever release of strategic oil reserves, approximately 45 days' worth, to stabilize local markets…Egypt ordered early closures for malls, restaurants, and government offices while switching off illuminated billboards…Myanmar introduced an "odd-even" rationing system where private vehicles can only purchase fuel on alternating days based on their license plate numbers…India has invoked emergency powers to divert liquefied petroleum gas (LPG) away from industrial users to prioritize household cooking needs…Slovenia became the first EU member to implement fuel rationing, limiting private drivers to 50 liters of petrol per week and businesses to 200 liters.
Unlike in previous episodes of crisis and disruption in fossil fuel markets, countries now have another option — build more solar, wind, and batteries. Austin Vernon has some good back-of-the-envelope estimates of how much countries can compensate for lost oil supply by going electric. And Todd Woody has a rundown of various ways that people and countries are either going electric, or considering going electric, as a result of the war. Buying an EV, of course, is the most obvious way to go electric:
As gasoline prices climb — hitting $6.81 a gallon at a nearby station on Wednesday — a flurry of drivers are making appointments to check out Ever’s lightly used EVs, many priced under $30,000…Ever is just one dealership, but signs of a shift are playing out across the world. In Southeast Asia, buyers are flocking to Chinese EV giant BYD Co.’s stores…
High fuel prices in Europe are also sparking a new wave of interest in EVs. In the UK, car site Autotrader recorded a surge in EV inquiries since the first attacks at the end of February…In Denmark, used EV searches on Bilbasen, a major online car marketplace, have jumped by as much as 80,000 a week…
American online searches for electric cars rose 20% in the first week of the war and dealers have reported more inquiries from buyers.
As Woody notes, this would not be the first time an oil shock led to a sustained shift toward vehicles that used less oil — the oil crises of 1973 and 1979 inaugurated the era of cheap fuel-efficient Japanese cars.
That story ended with Detroit rebounding in the late 90s and 2000s: once oil prices went back down, the American automakers shifted to high-margin gas-guzzling SUVs. This episode might eventually end the same way — as the Iran war ends and oil demand falls from the global shift to EVs, oil prices will eventually fall again, and Detroit will go back to its same old tired strategy. Woody notes that "US carmakers are sticking to their decisions to scale back on EVs even as demand grows in the rest of the world."
But this time won’t be like the 90s. Batteries have fallen so much in price that EVs are simply better than gasoline-powered cars now. Even if Fortress America uses tariffs and toxic political nonsense to keep itself wedded to obsolete internal combustion technology, its car companies will be cut off from global markets. The rest of the world does not have the luxury of forcing itself to use outmoded legacy tech, and the appetite for Detroit’s ancient gas-guzzlers will be very low.
Meanwhile, America’s stubborn refusal to adopt EVs will have other negative long-term consequences. Since the same tech used to make EVs is also used to make drones, robots, and electronics, the U.S. lack of EVs will crimp demand for these fundamental technologies and limit the scale that American component manufacturers can achieve. That will hobble and weaken American manufacturing even as it delivers the industrial future to China on a silver platter.
And as for American drivers, they will live on indefinitely with intermittent spikes in gasoline prices — sometimes lasting for months, sometimes lasting for years — while paying triple for each mile and standing around at a gas station once a week. Perhaps, as they anxiously scan the latest news from the Middle East, they will comfort themselves with decades-old nonsense about "range anxiety". Meanwhile, the increasingly affluent and secure middle classes of more pragmatic nations wake up in the morning to their fully charged EVs, cheerfully unconcerned with developments in the Strait of Hormuz.
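The "triple per mile" figure can be sanity-checked with rough, illustrative numbers (these are my assumptions, not figures from the article):

```python
# Illustrative assumptions, not data from the article:
GAS_PRICE = 4.00      # $/gallon — roughly a normal-times pump price
MPG = 30.0            # miles per gallon for a typical sedan
ELEC_PRICE = 0.15     # $/kWh — a typical home-charging rate
KWH_PER_MILE = 0.30   # typical EV energy consumption

gas_cost_per_mile = GAS_PRICE / MPG            # ≈ $0.133 per mile
ev_cost_per_mile = ELEC_PRICE * KWH_PER_MILE   # = $0.045 per mile
print(gas_cost_per_mile / ev_cost_per_mile)    # ≈ 3.0x
```

And that ratio is computed at a calm-market gasoline price; during a Hormuz-style spike, the gap only widens, since retail electricity barely moves.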
Choosing to disbelieve in technological innovation has real consequences.
Fear, Uncertainty, and Doubt