2026-04-13 08:50:34
Last year, a lot of people (including me) were wondering if the AI industry was in a bubble. These days it’s looking a lot less likely. The technology has found its killer app — agentic coding, which has upended the software industry as we know it. For power users, AI is no longer just a chatbot — you can tell it to go make you an app, or run some data analysis, and it’ll just do it for you and come back with the results.
This is making a LOT of money. As I predicted, Anthropic has been quicker to capitalize on the agentic coding boom than OpenAI. Anthropic focused on selling to businesses, while OpenAI focused on building its brand and selling to consumers; the revenue from agentic coding is almost all in the former category. So as Ruben Dominguez reports, Anthropic has probably overtaken OpenAI in revenue, or will do so soon:

In case you don’t realize how much money this is, or how fast this growth rate is, here’s some perspective:
Some of that will be eaten up by computing costs, of course. But as the WSJ recently reported, Anthropic’s computing costs are much lower than OpenAI’s. As a result, it’s expected to start turning a profit faster than OpenAI — and even OpenAI’s projections depend heavily on a comeback push that eats into Anthropic’s enterprise market share.
The rise of coding agents isn’t just changing the corporate horse race; it’s changing the whole picture of how we think about competition and profit in the world’s most important new industry. In a post last December, I wondered if AI would end up being a vitally important but low-margin business, similar to solar power or airlines:
Jason Furman wrote something similar, declaring that “instead of consolidating, as so many other industries have done, the leading edge of A.I. has become fiercely competitive.”
That’s still possible, of course. Fast followers, including Google and various Chinese model-makers, are still racing to catch up; if progress slows down, they may catch the market leaders and drive down margins. It’s still not clear how much of a “moat” AI has, even with agents. But right now, the business of making and renting out AI models seems dominated by two giants. Meta and xAI, which were recently considered at or near the frontier, seem unable to keep up.
And there’s now a pretty clear path for those two giants to become even more dominant: cybersecurity. Anthropic recently delayed the wide release of its new frontier model, Mythos, because it was too good at hacking. The model supposedly found critical vulnerabilities in key software systems that had been missed for decades by top human cybersecurity researchers. The idea is that Anthropic is going to spend a while using Mythos to go over critical systems and make sure they don’t have security flaws before releasing the model to the public. OpenAI is expected to do something similar with its next model.
Assuming Mythos is really that good at hacking (and there are skeptics), it gives us another reason to think that a few top model-makers like Anthropic and OpenAI will make a lot of profit. Cybersecurity is inherently adversarial; if attackers use a very powerful AI coding model to hack, defenders probably have to use a model that’s equally good or better to defend — and vice versa. This can lead to an arms race where neither side can afford not to shell out big bucks for the latest and greatest model they can get their hands on.
Because the prize for successfully defeating modern cybersecurity is so large — imagine hacking into Citibank and Bank of America and E*TRADE and Robinhood and just taking everyone’s money — the amounts that people have to spend on AI tools are potentially enormous. And even if Anthropic and OpenAI continue to be responsible citizens and make their top models available to defenders for long enough to find all the newly findable bugs — and even if attackers give up entirely because they can’t get their hands on the best models — defenders still have to shell out big bucks to the top model-makers.
It’s a huge source of revenue and a powerful moat for profit margins. And as AI expands into other adversarial fields — quant trading, litigation, fraud prevention, competitive advertising, and so on — there are probably going to be more of these revenue sources and more of these profit moats.
Which means we have one more thing to worry about when it comes to AI.
Typically, there are three big concerns that we talk about:
The worry that terrorists will use AI to create doomsday viruses
Worries about job displacement, human obsolescence, and economic dislocation
The worry that superintelligent AI is a new dominant species that will disempower and possibly destroy humanity
But if the industry really does become dominated by a few giant companies, we have a fourth big thing to worry about — extreme inequality. If AI’s economic benefits are highly concentrated, we could end up with a comparatively small number of people controlling most of the purchasing power in our economy. In the extreme scenario, this could lead to a small number of people holding all the power in the world.
2026-04-11 17:17:36

Patrick Collison’s YIMBY credentials are unimpeachable. He is a major backer of California YIMBY, the organization that has passed a stunning array of pro-housing bills in one of the most anti-development states in the nation. So it was interesting to see him claim that the movement has made a big mistake — or even been downright dishonest — by ignoring the aesthetics of apartment buildings:
For reference, here’s Sejong City in Korea, whose residential districts do indeed look rather bland and oppressive:

Some urbanists agreed, calling for regulatory reform that would allow American apartment buildings to look like the famous Haussmann buildings in Paris (depicted at the top of this post). So did some conservatives, which is unsurprising; intellectual conservatism has always called for a return to classical architecture and a rejection of modern styles. In fact, the idea that ugly building styles are a key reason that Americans disapprove of housing construction has been around quite a while, and it even has a name — “QIMBY”, meaning “quality in my back yard”.
Chris Elmendorf protested Patrick’s framing, arguing that YIMBYs have been active in pushing for reforms that would allow more beautiful buildings to be built in America:
YIMBYs have been pushing for single-stair reforms that would allow more "Paris-like" buildings…The municipal design standards & reviews that YIMBY laws allow developers to bypass did not improve designs. Per [Arthur] Stamps's studies (the only relevant empirical evidence of which I'm aware), they made things worse…[T]he problem of housing aesthetics deserves more attention -- and is receiving more attention -- but it's not like YIMBYs broke something that was working.
Elmendorf also pointed out that California YIMBY itself recently came out with a plan to encourage the building of more beautiful multifamily housing. The plan reads like exactly the kind of thing that Patrick might like:
[T]here’s a missing piece that housing policy still treats like an afterthought: how buildings look, function, and feel…Our current objective design standard paradigm…assumes you can “design away” ugliness by chopping a façade into smaller pieces…so the building feels “less big.” But contextual-design research shows why this keeps disappointing…When the underlying form and materials feel cheap or incoherent, extra façade break-ups read as fussiness, not beauty…
Many local Objective Design Standard codes demand heavy articulation and multiple cladding changes. The evidence suggests those moves have limited payoff compared to coherent style, material quality cues, greenery, and visible detail. (Stamps 2014; Nasar & Stamps 2008)…[We should u]pdate the California Department of Housing and Community Development’s model Objective Design Standards to [allow] projects [to] use a simpler envelope and meet a measurable threshold of real ornament (projections/recesses, columns/bands/cornices/fins, tile or relief work, murals), with minimum depth and material standards…
If California wants more European-feeling mid-rise development with courtyards, better daylight, shade, and balconies, it has to keep modernizing the [building] code…Too many building, electrical, and fire rules (in California and across the U.S.) [forbid] the buildings people actually like: bright cross-ventilated homes, true courtyard buildings, and mixed-use ground floors. All these requirements – egress, stairs, corridor, and elevator – often make projects bulkier and require much bigger lots, limiting where we can build new housing…[T]he web of building code regulations denies light, proportion, street connections, courtyards, greenspace – everything that makes buildings feel humane…Passing single-stair reforms and elevator reforms makes smaller mid-rise buildings possible, which fit on smaller lots, can be nestled into existing buildings, add variety to the streetscape, and reduce the pressure for larger, monotonous developments.
So at least one prominent YIMBY organization — the one that Patrick supports — is already answering the call to focus on building aesthetics. Others are likely to follow.
I think that’s a good thing. Eliminating onerous building codes and regulations will kill two birds with one stone: it will make it easier to build housing while also making it possible to build more of the European-style ornamentation that commentators always call for. And allowing American developers to experiment with ornamentation and alternative styles will help break up the sameness of an urban landscape dominated by endless forests of boxy 5-over-1 buildings.
But that said, I highly doubt that this — or any stylistic change — would move the needle on public acceptance of new apartment buildings.
First of all, I’m skeptical that regular Americans actually like the kinds of building styles that intellectuals often yearn for. If you plunk down old-looking European-style buildings in the middle of Houston or Seattle, people tend to ridicule them as cheesy and inauthentic. The typical insult is “pastiche”, a derogatory term for a style that jumbles and mixes old European styles (even though, as Samuel Hughes points out, mixing and matching older ideas is exactly how classic European building styles were created in the first place).
Many local design standards explicitly discourage old-style buildings. For example, Los Angeles’ planning department, in its design guide for Echo Park, writes: “Do not imitate historic architectural styles; a modern interpretation may be appropriate if architectural features are borrowed and replicated to a simpler form.”
Nor is it just old European-looking buildings that leave many Americans cold. Pietrzak and Mendelberg (2025) find that although people tend to dislike tall buildings, traditional brick facades fail to move the needle on support for housing. Alex Armlovich points out that when New York City came out with new limestone skyscrapers, only three were permitted. And Brooklyn Tower, a recently built art deco style skyscraper in Brooklyn, has drawn tons of criticism for its style.
And Elmendorf cautions that no one has yet managed to find a specific architectural style that Americans like enough to move the needle on their support for new housing:
While the paper by [Broockman, Elmendorf, and Kalla (2026)] provides pretty good evidence that ordinary people’s aesthetic objections to bad, very unfit-to-context buildings affect their support for development (to the extent they care about anything development-related)…no one has shown that any specific set [of] design standards would materially improve public support for development, apart from pretty obvious stuff like "don't put up new buildings in low-density areas that are much taller than their neighbors".
All this suggests that while some American intellectuals may pine for the cornices and mascarons of Haussmannian Paris, most Americans just think that style — and any old style — looks cheesy when it’s transplanted to an American context. This may be because Americans consciously think of their culture as a young one, more suited to modern styles than traditional ones. Or it may be because America’s artistic culture has always focused on critique and fault-finding. But whatever it is, it suggests that allowing — or even forcing — cities to build ornamented buildings will not garner a wave of popular support for new development.
Conversely, the places that do build a lot of housing tend not to build it in old, ornate European styles. Texas, which is one of the best states when it comes to building new housing, mostly constructs single-family homes with lawns. When it does build apartment buildings, they tend to look like this:

Texas builds them anyway, for much the same reason that the Koreans built Sejong City — they’re cheap and efficient, and the state needs them to support its rapid population growth.1 You do see a little experimentation with slightly more European-style apartments in a few places, but overall it’s just boxy and functional. The fundamental driver of housing abundance in Texas isn’t architectural beauty; it’s a culture and politics that values and seeks out economic growth.
Nor is ornamental architecture necessarily what makes people love a city. Traditionalists may sigh over old European styles, and urbanists may salivate over the superilles of Barcelona, but the city that has captured the hearts of Americans in recent years is Tokyo. Downtown Tokyo is a forest of electric lights, strung up along the sides of stubby concrete mid-rises called zakkyo buildings. There’s nary a fancy cornice to be found; instead, the beauty comes from the bright cheery emblems of commerce:

Tokyo’s residential neighborhoods have even less ornamentation. They often feature flat brown or white or tan facades, hanging power lines, and bare asphalt streets with no setbacks or lawns or even trees:

And yet these are absolutely enchanting places to live. Why? Not because of the architecture, but because of the design of the city itself. The small curving streets make perfect walking paths, undisturbed by zooming traffic. Mixed-use zoning gives the neighborhood a communal, lived-in feel. Plentiful public transit makes it easy and stress-free to get around, while Japan’s peerless public safety makes it fun to hang out on the street or in a park at any hour.
Americans who go to Japan have definitely noticed this:
It’s no coincidence, I think, that Japan is one of the best countries when it comes to building plenty of housing. Yes, most of its apartment buildings look like crap when evaluated in isolation on their pure architectural merits. But the urban system made up by those buildings is a wonderful place to live, and so Japanese people have few qualms about building up that system. And Americans go there and love it.
And if America built a bunch of Haussmann buildings instead of boxy 5-over-1s, it would probably only marginally improve the feel of the country’s cities. Imagine Haussmanns in place of 5-over-1s in a typical Texas apartment complex:
Or imagine Haussmanns along a giant American stroad instead of a cute walkable Paris street near a train station:
These renderings don’t look terrible; the buildings look fine. But they don’t make the city that much more appealing of a place to live, because it’s still built in the American way — there aren’t any shops, it’s all based around driving, and it doesn’t feel cozy or lived-in. At best it’s a marginal improvement.
If you want American cities to look and feel so nice that Americans are willing to build housing in them, I think you have to do a lot more than give the buildings fancy facades. You have to do the hard work of putting in train lines, making side streets safe for pedestrians, rezoning for mixed use, and — perhaps most important — policing cities in order to ensure robust public safety.
That’s a tall order, and I recognize that this total urban transformation isn’t going to happen soon — or happen all at once. Instead, I think America really has no choice but to build up its cities organically:
Implement hyperlocal control to allow neighborhoods that want to build more housing to do so as they see fit, thus circumventing the veto of city-level NIMBYs.
Build more fast commuter rails between inner-ring suburbs and city centers, and more subways and elevated trains in city centers.
Improve public safety through a combination of policing, community outreach efforts, better public services, and mandatory institutionalization for the dangerously ill.
Use state-level upzoning where possible to allow “missing middle” housing everywhere — duplexes, triplexes, townhouses, and small apartment buildings.
Simplify zoning at the state level along the Japanese model — have a few standardized zoning categories, and define them based on what kinds of nuisances they disallow, rather than what kind of buildings they explicitly allow. Make most zones mixed-use to some degree; most residential neighborhoods can benefit from neighborhood cafes and small stores.
Carry out sensible reforms like allowing single-stair buildings.
Over several decades, this gradual process will allow American cities to evolve into a better form. That will increase political support for denser housing. And when paired with sensible reforms like the one put forward by California YIMBY, it will allow American cities to develop their own local architectural styles over time. Ultimately, that will be cooler and more interesting than simply borrowing from old Europe.
Sejong City was a recently built administrative capital, so it had rapid population growth even in a country whose population was plateauing overall.
2026-04-09 17:56:42

The immigration issue in America isn’t going away. Thanks to Trump’s crackdown, immigration to the U.S. went into reverse in 2025, with more people leaving (voluntarily or involuntarily) than entering the country:

But just like a century ago, shutting the gates isn’t the end of the discussion. The argument has shifted from who gets into America to who belongs here in the first place.
To much of the MAGA right, the answer appears to be that only people of European heritage can become true Americans. For example, here is how right-wing commentator Matt Walsh responded to news about some crimes by some Texan teens:
Anyone who thinks these aren’t Texan names isn’t very familiar with the history of Texas; the Tejanos (Mexican Texans) were there from the beginning, and were a core part of the Texas Revolution. Most Mexican Texans today aren’t descended from the original Tejanos, but from more recent immigrants. But the fact that the Tejanos were there from the start is probably why Hispanics, and Mexicans in particular, have always been deeply integrated into Texan culture. It was at the behest of Texan businessmen that America didn’t put any cap on Mexican immigration in 1924, when it passed a law effectively barring immigration from most other countries.
Matt Walsh is unaware of most of that; to him, anyone without an Anglo-sounding name is presumptively non-American. This leaves little doubt as to what Walsh views as the marker of true American-ness. It’s likely that many others in the MAGA movement feel similarly, even if many would feel uncomfortable stating it out loud in simple terms. Anti-Indian sentiment has also risen to prominence on the right.
And many in the MAGA movement view Muslim immigration as an invasion, bent on imposing Sharia law on Westerners. They believe this “invasion” has already overtaken Europe, which explains their antipathy toward the EU and NATO. A “Sharia Free Caucus” is growing in popularity in Congress, and Ron DeSantis has signed anti-Sharia legislation in Florida. Various Republican politicians have explicitly stated that Muslims don’t belong in America.
If you’re Hispanic, Muslim, or Indian, there’s just not much you can do about this. In the past, showing that you were a good American — waving the flag, joining the army, speaking perfect English, and so on — was good enough to reassure most conservatives that you weren’t an invader bent on overthrowing America’s culture and replacing it with something alien. Nowadays, that’s not enough.
So perhaps it’s unsurprising that some nonwhite Americans are choosing to simply throw in the towel and reject the whole notion of assimilation. This is the essence of Shadi Hamid’s article in the Washington Post yesterday. He writes:
The assimilation defense — look how well we’ve integrated — is satisfying to make. But it concedes a premise I no longer accept: that a minority community’s right to be in the United States depends on its willingness to converge with the cultural mainstream. It shouldn’t depend on that. It shouldn’t depend on anything.
Whereas in the past, Hamid saw assimilation as synonymous with patriotism, now he sees it as a requirement to give up the religion of Islam itself:
The country is becoming less religious. Muslims, by and large, are not…This is a community that has increasingly integrated into American civic life, but it has done so while holding on to its religious commitments in a way that most other groups haven’t. Whether you think that's admirable or worrying probably says more about you than it does about them. The question I keep returning to is: Why do Muslims need to be like everyone else?…[A]ssimilation tends to mean secularization.
Whether Hamid is right that “assimilation tends to mean secularization” is an open question. Assimilation certainly didn’t require Catholic or Jewish Americans to give up their religion when they immigrated en masse in the 19th and early 20th centuries. Religious liberty is a fundamental part of the Constitution and of American tradition. On the other hand, even some immigration advocates do use conversion away from Islam as a measure of assimilation, and a growing number of Republicans — heavily influenced by their view of events in Europe — see the religion as incompatible with American-ness.
Hamid is no blue-haired progressive — in fact, he’s explicitly anti-woke and fairly conservative. But his call to reject assimilation will be music to the ears of progressives, who have loudly and vehemently rejected assimilation for many years. A recent example of this is Bianca Mabute-Louie, whose new book Unassimilable: An Asian Diasporic Manifesto for the Twenty-First Century is a call for Asian Americans to resist assimilation by building communities and culture apart from White Americans. In a recent interview, NPR’s Alisa Chang gently pushed back on Mabute-Louie’s idea:
I want to understand what does orienting ourselves towards each other mean? Like, who is the each other? Like, my lingering thought, Bianca, is I still do want to belong here in America. And to me, belonging in America is not only shaped by whiteness, but it's also shaped by colliding and mixing with all the cultures that make America, not just white cultures. And I have trouble picturing being both Asian and American outside of that collision and mixing, you know?
Mabute-Louie’s response is interesting:
[T]he book isn't an argument to be isolationist…[O]ne example of how I'm trying to pursue that…in the South…is joining political community, joining mutual aid organizations with people who are most impacted. And I'm not really thinking about if they're Asian or not Asian. I'm just thinking about who's impacted when the hurricane comes. Who am I going to call? I always make the joke - who's going to be on my compound when the apocalypse comes because that's who I'm building community with, and that's what it means for me to be unassimilable.
Mabute-Louie’s idea of anti-assimilationism is not a call to interact only with Asian people — it’s to form political alliances with other people that she sees as being threatened in America at the current moment. It’s a vision of a country fracturing along racial, ethnic, and religious lines; Mabute-Louie is mentally preparing to fight a racial conflict, and she sees the “American” side, defined as hegemonic White culture, as her enemy.
This is different than classic progressive multiculturalism — though it clearly grew out of that idea. This is racial balkanization. The fact that anti-woke writers like Shadi Hamid are now leaning into the anti-assimilation line suggests that it’s now mostly a defensive response against Trumpism and the heavily racialized anti-immigration purge. Whereas ten or twenty years ago, “assimilation” meant waving a flag and speaking English and so on, to many it now means accepting that America is a fundamentally European nation and that nonwhite Americans are permanent guests in that nation.
In fact, this is pretty much what many children of recent immigrants did in the early 20th century, after the anti-immigrant backlash. German Americans were pressured into changing their names, giving up their ancestral traditions, and listening to long, patronizing lectures from volunteer citizens’ groups. Japanese Americans were interned en masse in World War 2. FDR reportedly once told his Jewish and Catholic advisers that "You know this is a Protestant country, and the Catholics and Jews are here under sufferance." For decades, Americans who didn’t come from the old North European Protestant stock felt they had to walk on eggshells.
That’s not going to happen again. Whatever Bianca Mabute-Louie might think, White American culture is not a monolith — in fact, it’s deeply politically and culturally fractured. MAGA will have neither the cultural power nor the enduring political power required to make European heritage the defining characteristic of American-ness. The country will break apart before it accedes to the likes of Matt Walsh or Tucker Carlson as the arbiters of true American-ness.
It’s probably a good thing that forced assimilation, of the type used in the early 20th century, is off the table. I say “probably” because 20th century America is arguably the most spectacularly successful story of integration and multiculturalism in modern history; some will inevitably claim that the cruel, bullying tactics that the old Protestant majority used on German, Japanese, Italian, Jewish, Polish, and other immigrants were necessary to that success. I reject that idea; I think that those bullying tactics were overkill, and probably led to lingering resentments.
But even though early-20th-century-style forced assimilation is off the menu, America still needs some sort of assimilation. A multicultural nation can’t survive as a “salad bowl”, where each group of people maintains its distinctiveness over time. (Canadians, who are fond of the salad bowl metaphor, are probably in for a rough time.) There is no “separate but equal” when it comes to cultures within a nation; if they remain forever separate, they will inevitably be unequal. More pragmatically, nations without cultural unity have difficulty providing public goods; politics tends to break down into an ethnic spoils system instead of being run for the benefit of the masses.
What America thus needs is a melting pot — or if you’d prefer a less metallurgical metaphor, a stew. Immigrants and their children should not be required to forsake every symbol of the old world, abandon their religion, or forget their heritage. But over time, the boundaries between America’s initially distinct cultures should blur. Intermarriage, interethnic business partnerships, and interethnic friendships should gradually erode the physical borders of the old blocs, while modern American culture — Netflix shows, pop musicians, and so on — should provide shared experiences and touchstones to bring Americans together without regard to ancestry.
This gentler assimilation has been happening my entire life. In a post last September, I wrote about what it looks like on the ground:
[M]any also value American culture as a marker of shared nationhood.
When I was growing up in Texas, one of my best friends was born in Shanghai, and didn’t become a U.S. citizen until the age of 18. Culturally, he was a little different than me and the rest of my friends — his mom made dumplings instead of sandwiches, he taught me how to use chopsticks, he didn’t believe in God.
But in all the cultural ways that mattered to us, we were the same. We watched the same TV shows, played the same video games, and listened to the same music. We used the same slang, had the same attitudes toward school, and wanted pretty much the same things for our future. And yes, we believed in the Constitution, and American freedoms, and all of that stuff.
During the 2010s, during our nation’s great…collective freakout over race, I wrote to my friend and asked him if he had ever felt discrimination growing up, or if he had ever felt excluded from the majority. He responded that while once in a great while he faced a little racism from a few jerks, it didn’t dominate his experience. In terms of identity, he told me he just felt very American.
This kind of real, on-the-ground cultural affinity is something too nebulous for YouGov pollsters to ask about, and yet I suspect it’s deeper and more important than most of the more quantifiable markers of American-ness. America is a propositional nation to some extent, but we’re also a cultural nation, bound together by shared habits and attitudes and lifestyles and beliefs. What matters the most isn’t our family’s history in the country, but our own personal history. Shared life experience beats shared heritage in terms of building the bonds of nationhood.
This is what Tomas Jimenez writes about in The Other Side of Assimilation, in which he argues that immigrant cultures will gently add their distinctiveness to mainstream American culture instead of being erased. And it’s what Richard Alba writes about in The Great Demographic Illusion, in which he predicts the gradual melding of America’s disparate groups into a unified “mainstream”. Before the Trump years, it looked like this was working well.
And I believe it was working well. I do not believe that this form of assimilation was too gentle and tolerant. I do not believe that concentration camps and forced name-changes and ethnic slurs and “100 percent American” movements sending volunteers into immigrants’ living rooms would have averted the coming of the MAGA movement. I believe that the MAGA movement is simply one of America’s periodic nativist backlashes, like the Know-Nothings in the 1850s or the restrictionists of the 1910s. It would have come anyway; it always comes back, and we just have to deal with it again.
What we must not do, I believe, is react to the MAGA movement by throwing out the notion of a unified and unifying American culture. We must not retreat to enclaves, online or physical, and view large swathes of the country as our enemies. Instead, we have to recommit to commonality.
This will be hard, but it won’t be impossible. Studies consistently show that Americans are less polarized on the issues than the media tells us we are. As recently as the 2000s, red and blue America were essentially culturally unified as well; though this might be changing, a lot of commonality remains. The online realm pushes us to hate and fear the outgroup, and to identify more with our distant co-ethnics than our real, physical neighbors. But the pull of the real world is still strong, and we’re starting to spend less time on social media.
Assimilation — which is really just another way of saying integration — won’t always be the picture of tolerance. Building a shared culture requires changes from everyone. Yes, some Muslim Americans will need to make sacrifices — they may have to look at cartoons of the Prophet Muhammad, or eat at school cafeterias where pork is on the menu, or hear bigots defame their religion. America is not Europe; freedom of speech, and the separation of church and state, are part of our core values as a nation, and these should not change.
But at the same time, non-Muslim Americans have to get used to seeing mosques on their streets without thinking they’re being invaded. They’ve got to get used to the idea that Islam is just one more religion in America’s mosaic of faiths and practices, and that Muslim Americans are every bit as American as Baptists. Some people will inevitably convert away from Islam, but others will convert to Islam, and this is fine; this is how freedom of religion works in a free society.
And yes, assimilation will involve the eventual loss of old cultural traditions as the generations go on. People will start eating more American food. Some will become secularized. Essentially all will forget how to speak their ancestral language. These processes are happening even faster with recent waves of immigration than they did a hundred years ago. It’s a normal, healthy process, and everyone should accept it; it’s part of the deal when you move to America.
Most of all, we all need to get over the idea that America is on the precipice of a race war or a religious war. Online activists might dream of that, but they’re small in number — and a lot of them aren’t even Americans, but foreign trolls for whom American politics is a fun outlet for their hatred and boredom. Most actual Americans just want to get along with our neighbors and live our lives together.
Ultimately, that’s all assimilation is — living our lives together until we become one people. It happened before, and if we want it, it can happen again.
2026-04-07 17:43:43
I hate to say “I told you so” — not because saying “I told you so” is unseemly, but because the fact that I have to say it means I’m probably living in a world where things have gone badly.
I didn’t want to live in a world where gasoline costs over $4 a gallon. I didn’t want to live in a world where America tore up nearly all of its long-standing alliances and threatened to invade and conquer parts of Europe. I didn’t want to live in a world where China is viewed more favorably than the U.S. I didn’t want to live in a world in which the President of the United States posts things like this to his social media account:
I didn’t want to live in this world, but my countrymen forced me to live in it. I wrote many, many posts urging people to vote for Kamala Harris, despite all her shortcomings. They did not. And now I have to live with the consequences of my failure, and the failure of my fellow-travelers, to persuade the American people to avoid shooting themselves in the foot back in November 2024.
Whatever smugness I get from being able to say “I told you so” is vastly, infinitely outweighed by the dismay I feel over seeing my warnings be vindicated in real time.
And I also admit that my warnings were not entirely prescient when it came to Trump. I foresaw that Trump would attack America’s institutions, implementing rule-by-decree, purging competent people in favor of cronies, flouting the law, and wielding the power of the presidency to harass and intimidate his critics. I foresaw that Trump would send ICE into American communities to do violence and harass peaceable Americans. I foresaw that Trump would realign America toward Russia, cut off aid to Ukraine, and try to bully Ukraine into surrendering territory.
But I did not actually foresee his biggest mistakes. I didn’t predict that his tariff policy would be nearly as insane as it was — declaring sky-high tariffs on dozens of countries at once, and then selectively walking them back, and then repeating the process again and again.
And I did not foresee the Iran war. I never bought into his antiwar campaign stances — he has always been a bully, and he has always been enamored of the idea of military toughness. But I saw Trump, fundamentally, as a coward — someone who would launch the occasional air strike, but would be too intimidated by the prospect of a military defeat to launch a major war. I saw his cowardice as the core of truth behind the cynical promises of geopolitical isolationism and restraint.
So I can’t quite say “I told you so” in this case. I knew Trump was very bad news, but I didn’t realize quite how multidimensionally bad. I suppose even after all the Trump-bashing I did, I have to issue a mea culpa. I anticipated that Trump would be chaotic, dictatorial, and cruel, but I failed to anticipate how stupid he would be.
Even when the Iran war started, I thought that Trump would probably back off and chicken out pretty quickly. But as with his denial of the 2020 election result, he appears to have stumbled into a losing effort that he feels he can’t back out of.
Unlike with Trump’s limited strikes on Iran in early 2025, or his killing of Qasem Soleimani in 2020, Iran has not simply taken its lumps with grace. With its leaders decapitated and Israel pressing for regime change, Iran’s leadership was on what Sarah C. M. Paine calls “death ground” — they had no choice but to resist with everything they had. And so they’ve continued to fire drones and missiles from underground launchers at a diminished but steady pace. These strikes have occasionally hit valuable U.S. military assets, taking out an AWACS plane (one of only 16 the U.S. has) and some THAAD missile defense radars, and reportedly making several U.S. military bases too dangerous to use.
But the Iranians’ most damaging attack, by far, was to close the Strait of Hormuz, sending global oil, gas, and fuel prices soaring. This is hurting American consumers and tanking Trump’s popularity, but it’s hurting other countries around the world — which don’t have their own shale oil and gas reserves to weather the shock — even more.1
The Iran war has put Trump in a no-win situation. He’s clearly losing a war against a far inferior power. If he stays in the war and the Strait of Hormuz stays closed, he keeps losing; if he withdraws, he loses and it’s over. And even if he chickens out as usual, there’s no reason to think Iran will simply reopen the Strait; now that they see they can bring Trump’s America to its knees with their oil weapon, they’ll probably use it to extract more concessions.
This is why Trump is writhing in the grip of his own bad decisions, looking desperately for a way out. He reduced oil sanctions on Iran, basically begging them to open the Strait, but they didn’t; instead, Iran just gets to sell more oil and make more money. He has repeatedly declared victory in the war, hoping that everyone will just agree that he won, allowing him to quit gracefully — but no one thinks he actually won.
2026-04-05 23:47:30
I promise I’ll write something soon about the flaming, crashing disaster that is the Trump administration — and about other topics of interest. But before I do that, here’s a roundup full of short takes and stories about AI.
First, though, an episode of Econ 102! Officially the podcast is over, but we still occasionally do a reprise episode. This one, fittingly, is about AI biosecurity:
Anyway, here are six other interesting AI-related items:
No one really knows what effect AI is going to have on economic growth, but maybe each “expert” knows a tiny, tiny bit. And maybe, if you combine all of those weak signals, you can get some actual information about the economic effects of AI.
That’s the idea behind a new study by the Forecasting Research Institute. They survey a whole bunch of different people about what they think AI’s capabilities will be in the future, and what that implies for economic growth. Specifically, the groups they survey are:
Economists
AI experts
Superforecasters
The general public
The results are kind of surprising, actually:
For one thing, all the groups have about the same forecasts for AI capabilities by 2030:

This looks like a forecast of modest progress, but it’s not. The “moderate” scenario here would have AI able to write high-quality novels, handle coding tasks that would take humans five days, create semi-autonomous labs, and use robots to perform basic household tasks. So basically, every group of forecasters in this survey thinks stunning AI progress is likely over the next few years.
And yet of all the groups, only the AI experts predict a major growth acceleration in any of these scenarios — and even then, it’s only an acceleration to 4 or 5 percent, not to the 10 or 20 percent scenarios that some people have thrown around:

Why do economists think that even near-godlike AI wouldn’t translate into fast growth? The Forecasting Research Institute lists some of their reasons:
Some economists argued that AI productivity gains would not be evenly distributed across all sectors, particularly where human labor is a bottleneck. Others pointed out that with other general-purpose technologies (electrification, automobiles, personal computers), there were multi-decade lags between widespread implementation and productivity improvements. Part of this delay is attributed to a shift in capital away from labor and toward compute, data centers, APIs, and so on, which would not manifest as an increase in GDP until productivity improvements set in…
Some economists expected demographic decline and geopolitical instability to offset some of the GDP boost from AI progress…Some economists argued that constraints on energy and chip supply, data center build times, and other commodities put a cap on the upper limit of GDP growth…Some economists argued that tail risks…included existential risks from AI, societal unrest or collapse, and war.
It’s likely that the AI experts are also thinking about these bottlenecks and frictions, or something like them, which is why their most optimistic scenario is 5.3% growth — fast, but still significantly slower than India is growing now.
But in fact, I think there must be more to the story here. Basically, none of these groups thinks that any amount of AI capabilities will enable economic take-off. To me, that suggests that they’re thinking — perhaps subconsciously — about something more than just friction and slow adoption.
One possibility — which I should write about more — is that people suspect that humanity is getting satisfied, at least in the developed countries, and that the amount of new valuable things that even a godlike AI could create for us is limited by our inability to desire more goods and services.
I should think about this more.
I’m very optimistic about many of the effects of AI, especially on science and politics. But as regular Noahpinion readers know, I’m pretty worried about AI-enabled bioterrorism (and I think an increasing number of other people are too). I’m worried that some nihilistic, depressed teenager could tell a jailbroken version of Claude Code to make him a doomsday virus, and that the AI would actually go and do it for him. We now live in a world where researchers can use AI to design new, functional viruses and have them sent in the mail. That’s an empowered world, but a terrifying one as well.
Ever since I wrote a post about that danger, I’ve been talking to biosecurity experts and trying to get a better handle on how justified my fears are. One of the experts I talked to, Abhishaike Mahajan, was in the middle of writing a long post about biosecurity in the age of AI. He has since finished the post:
You should read the whole post, but basically, he offers several reasons not to panic. First, he argues that it’s inherently very hard for even an extremely powerful AI to make an effective bioweapon on the first try. This is because there are just too many unknowns about how any newly created virus will behave in the real world, so there’s no way to know you have a doomsday virus until you release it.
I’m skeptical of this line of argument. Instead of making just one doomsday virus, you can make 100 candidates and release them all. Doomsday itself is the field experiment, and you can run a lot of experiments at once. Much better bio-simulation tools will probably cut down the number of candidates you need to create in order to stumble on one that works.
Abhishaike also argues that countermeasures — vaccines, antivirals, and defenses like far-UV light (which basically works on all viruses) — will improve at a rapid clip. I believe this, but I’m not so comforted. Drawing on the experience of Covid, I think it’ll take a lot of time to deploy these countermeasures. A truly well-engineered doomsday virus will kill us long before we can distribute the cure or give everyone a UV zapper. And as Abhishaike points out, it’s likely that the U.S. will not proactively prepare for future pandemic threats, but merely react to them when they occur.
So while I think Abhishaike’s post is excellent and deserves a thorough read-through, I think he might still be underrating the severity of the threat.
How does the world know how much money you have? There are a bunch of computers that store your money as a series of numbers — how many dollars are in your checking account, how many shares of Apple stock are in your portfolio, and so on. Banks and other financial institutions have state-of-the-art computers and huge teams of brilliant software engineers to turn their electronic records into a fortress.
But AI is getting really, really good at hacking. Lyptus Research writes:
We release a new application of the METR time-horizon methodology to offensive cybersecurity, grounded in a new human expert study with 10 professional security practitioners…Offensive cyber capability has been doubling every 9.8 months since 2019. Accelerating to every 5.7 months on a 2024+ fit. Opus 4.6 and GPT-5.3 Codex sit well above both trendlines again, reaching 50% success on tasks that take human experts ~3 hours.
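To get a feel for what those doubling times imply, here’s a toy extrapolation (my own back-of-the-envelope, not Lyptus Research’s methodology): starting from the quoted ~3-hour task horizon and assuming the faster 5.7-month doubling continues, how long until models hit 50% success on tasks that take a human expert a full 40-hour week?

```python
import math

# Back-of-the-envelope extrapolation of the reported trend (illustrative only).
horizon_hours = 3.0      # current ~50%-success task horizon, per the quoted study
doubling_months = 5.7    # the faster 2024+ doubling time, per the quoted study
target_hours = 40.0      # a full week of expert work (my arbitrary target)

doublings_needed = math.log2(target_hours / horizon_hours)
months_needed = doublings_needed * doubling_months

print(f"{doublings_needed:.1f} doublings, about {months_needed:.0f} months")
```

Roughly four doublings, or a bit under two years, if the trend holds. That “if” is doing a lot of work, but it shows how short the runway may be.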
Right now, AI companies are white-hatting — using their AI’s newfound hacking powers to help companies improve their cybersecurity. But what happens when less scrupulous actors get their hands on jailbroken versions of Claude Code and Codex?
What happens if AI agents ever allow bad actors to break into banks at will? If all records of personal wealth were erased in a cyberattack, what could banks or the government even do? A whole lot of people might just instantly see their life’s savings transferred into a hacker’s bank account.
And as if that weren’t enough to worry about, recent advances in quantum computing put cybersecurity in an even more perilous state. Here’s Scott Aaronson:
For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography…
When I got an early heads-up about these results…I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior…But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said…[I]f publishing [results like these] causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.
Not being a cybersecurity expert, I’m not qualified to assess how worrying these developments are. But they seem quite worrying. The entire modern world runs on cybersecurity — if there’s a general failure in the methods we now use to keep information secure, all of society is in deep trouble. So this is definitely worth keeping an eye on.
When I became a blogger, I made a conscious decision to post only under my own name. I reasoned that at some point, text analysis technology would get good enough that it could identify (“dox”) any pseudonymous account I made. Fifteen years later, I’m anticipating vindication. This is from a new paper by Lermen et al.:
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. …LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
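To make those numbers concrete, here’s what they imply at scale (my arithmetic applied to a hypothetical population, not figures from the paper itself):

```python
# What "68% recall at 90% precision" means at scale (my arithmetic, not the paper's).
users = 1000           # hypothetical pseudonymous accounts that have a true match
recall, precision = 0.68, 0.90

true_positives = users * recall              # correctly deanonymized accounts
flagged = true_positives / precision         # total identifications the model makes
false_positives = flagged - true_positives   # confident but wrong identifications

print(f"unmasked: {true_positives:.0f}, wrongly accused: {false_positives:.0f}")
```

About two-thirds of the accounts get unmasked, with roughly one wrong identification for every nine correct ones — which is its own kind of danger, since the wrongly accused have no way to prove a negative.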
Soon, anyone who disagrees with your pseudonymous alt account, or is even just annoyed with you, will be able to sic an LLM on your account and dox it — if you’ve written online anywhere under your real name. If you’ve only written pseudonymously, you’re probably still safe.
The impending end of pseudonymity — or at least, its significant diminution — has the potential to transform the internet. Pseudonymity is obviously linked to toxic content, because people post stuff under a pseudonym that’s too aggressive or inappropriate to post under their real name.
We might also get a decrease in cancel culture, since pseudonymous accusations and whistleblowers will not be safe from retaliation. There will probably be less honest discussion and less total information on the internet, as people become afraid to have many discussions under their real names.
Less pseudonymity might also close off an important social and psychological safety valve — especially for Japanese people, who tend to use pseudonymous X accounts as a way to express feelings that they’re afraid to air out in public.
In any case, it’s going to get weird.
At one point in Charles Stross’ Accelerando, AI finance quants turn the entire inner solar system into compute to power their financialized online economy — thus driving everyone else to the edges of the solar system.
That’s a little bit over the top, but it’s worth thinking about what happens if and when AI gets deployed in large quantities for adversarial economic activities like quant trading.
Most of the use cases people imagine for AI are productive. We expect AI to accelerate science, do our coding for us, and so on. A few of the AI use cases we imagine are criminal — we worry about bioterrorism, cybercrime, and so on. But relatively few people talk about what happens if and when AI gets deployed en masse for rent-seeking — i.e., for the redistribution of income by legal means.
A lot of people suspect that much of what goes on in quant trading is rent-seeking — a bunch of traders trying to fake each other out or beat each other to the punch without creating economic value. In fact, there are models of how that can happen — my favorite is Hirshleifer (1971). In that paper, Hirshleifer shows that when traders compete to learn something that’s eventually going to become public knowledge anyway, they end up wasting resources on a zero-sum game.1
Quant traders have always used AI a lot, even before the rise of generative AI. But it seems possible that the rise of powerful AI agents and reasoning models will lead to an explosion of spending on quant trading. And if what those trading algorithms are doing is just trying to beat each other to the punch by a nanosecond, a lot of society’s resources — compute, electricity, and so on — will be going to waste.
Frustratingly, I don’t know of a good general result on how much of society’s resources could be wasted like this. But when I play around with some simple examples, it’s clear that the potential waste is large. AI quant trading might not turn the inner solar system into computronium, but it seems like it could still be a giant waste.
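Here’s one such simple example — a toy all-pay contest in the spirit of (but much cruder than) Hirshleifer’s model. Treat the race to learn soon-to-be-public information as a contest for a prize worth V; the standard full-rent-dissipation result says competitors collectively spend up to the whole value of the prize:

```python
# Toy all-pay-contest arithmetic (my illustration, not Hirshleifer's actual model).
# n traders race to learn information worth V that will become public anyway.
V = 1_000_000  # private value of trading on the information first

for n in (2, 5, 10):
    spend_each = V / n            # expected spend per trader in the symmetric equilibrium
    total_waste = n * spend_each  # resources burned economy-wide
    print(f"n={n:2d}: each spends {spend_each:,.0f}, total waste = {total_waste:,.0f}")
```

The number of competitors doesn’t change the total: the prize gets fully dissipated, and since the information would have become public anyway, the social return on all that spending is roughly zero.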
So I’m a little nervous when I see stories like this one, alleging that DeepMind founder Demis Hassabis tried to build an AI-powered quant hedge fund inside Google. Quant trading is a very natural way to use AI to make tons and tons of money, but if that becomes too big a part of what AI does, people will get mad at the technology.
By most measures, AI is being adopted faster than any technology in recorded history. It’s difficult to read the news without seeing stories about how AI is conquering the business world. So it’s pretty notable whenever there’s a data point that shows AI not being rapidly adopted.
In fact, there are now a few such data points. Hartley et al. are maintaining an ongoing survey of American workers, in which they ask who’s using generative AI at work. For a while, their survey showed a rapid increase in adoption. But over the last year, they find that adoption has actually fallen:

One survey might be a blip, or there might be a problem with the way the questions are being asked. But The Economist reports that a few other measures are showing either a slowdown or a drop in AI use at work:
Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11%…Adoption has fallen sharply at the largest businesses, those employing over 250 people…
A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.
What’s going on here? The Economist suggests several explanations — disappointing productivity effects, difficulty incorporating AI into existing workflows, economic uncertainty, and so on.
But if this trend is real, there are reasons to think it won’t last. First of all, most of this data is from before the rise of reliable AI agents, which really just came on the scene last December. Now that AI is a lot more than just a chatbot, it’s probably a good bet that more companies are going to find uses for it.
Also, once entrepreneurs start figuring out ways to build new business models and workflows, instead of trying to shoehorn the new tech into existing models and processes, we should see an explosion of AI-enabled productivity, just like we did with previous general-purpose technologies.
But for now, the hints of a plateau in industrial chatbot usage are worth keeping an eye on.
Imagine that the value of Apple’s earnings will become public in a week, but that traders are spending a ton of money figuring out Apple’s earnings before they become public, so they can trade on the knowledge and make profit. That’s wasted effort; it would be better for society if everyone just waited until the earnings were announced.
2026-04-04 00:50:28

In the medium to long term, AI may replace all human jobs (or maybe not). But in the short term, AI doesn’t seem to be doing this yet. Employment rates for prime-age workers in the U.S. are hovering near all-time highs:
A recent survey of corporate CFOs found “little evidence of near-term aggregate employment declines due to AI.” A survey of European firms found no evidence of job reductions so far, despite rising productivity due to AI. Geoffrey Hinton, one of the pioneers of modern AI, famously predicted the imminent displacement of all radiologists by AI algorithms; in fact, radiologists are in greater demand than ever.
So even though AI may displace human beings en masse in the future, it’s not doing that today. But it is likely to change the nature of work. Software engineers, for whom “writing code” was a big part of the job description just a few months ago, are now mainly checkers and maintainers of code written by AIs. But this hasn’t eliminated the need for software engineers — at least, not yet. It has just shifted their job descriptions.
Humlum and Vestergaard (2026) find that so far, this pattern — workers shifting to new tasks without losing their jobs — is the norm, at least in Denmark:
[M]ost employers in [AI] exposed occupations have adopted chatbot initiatives, workers report productivity benefits, and new AI-related tasks are widespread. Yet…we estimate precise null effects on earnings and recorded hours at both the worker and workplace levels, ruling out effects larger than 2% two years after the launch of ChatGPT. What moves is the structure of work: employers absorb AI through task reorganization—including new tasks in content generation, AI oversight, and AI integration—and adopters transition into higher-paying occupations where AI chatbots are more relevant, though still too few to move average earnings. [emphasis mine]
In other words, so far, AI is replacing tasks, not jobs. Alex Imas and Soumitra Shukla have written that as long as there are a few things that only humans can do, this pattern can be expected to hold. Observers of AI consistently find that its capabilities are “jagged” — it’s much better at some tasks than others.
That’s good news for people who are worried about losing their jobs (at least in the next decade). But it’s still very troubling for people trying to decide what to study. A decade ago, it made sense — or at least, it seemed to make sense — to tell young people to “learn to code”. Nowadays, what do you tell them to learn? What tasks will be the ones that humans still need to do, and which will be subsumed by AI? With AI getting steadily better at a very wide variety of tasks, it’s hard to predict exactly what humans will still be doing in five years, even if you’re pretty sure they’ll be doing something.
I have some friends who have spent the last decade or more thinking carefully about what the future of work will look like in the age of AI. No one has ever found a satisfactory answer. As AI technology has developed and changed, even the most plausible predictions for the future of human labor tend to get falsified almost as quickly as they’re made.
But I’ve been thinking about this question too, and I think I’m beginning to see the shape of an answer. I think the near future of work will mostly be divided into three types of jobs — salarymen, specialists, and small businesspeople.
Let’s talk about specialists first, because they’re the easiest to understand. A new theory by Luis Garicano, Jin Li, and Yanhui Wu describes why some workers will keep their jobs largely as they exist today.
Like many economists, Garicano et al. envision a job as a bundle of various tasks. But they also theorize that in some jobs, these tasks are only “weakly bundled” — you don’t really need the same person to do all of those tasks. For these jobs, it would be easy to divide up the tasks between different workers — or between a human and an AI. But in other jobs, the authors assume that the tasks are “strongly bundled” — the same person who does one part of the job has to do the other parts, or the job can’t be done.
The paper’s basic conclusion is that AI tends to replace weakly bundled jobs a lot more quickly than it replaces strongly bundled ones. For example, they theorize that radiologists still have jobs because even though AI can do most of the task of basic scan-reading, there are a lot of other pieces of the job that radiologists still need to do in order to deliver patients the kind of care and expertise they demand. They foresee employment in strongly bundled industries resisting automation until AI capabilities get extremely good:

The people in those strongly bundled jobs are specialists. An example of a specialist might be a blogger. AI, so far, is very good at doing background research, proofreading, and a number of other tasks that are useful for the writing process. But even though it can generate infinite amounts of text, AI is not yet good at writing. Writing communicates a unique human perspective; simply pressing a button to generate text doesn’t say what you want to say. So the tasks that make up my own job are — so far, at least — strongly bundled. AI is making me more productive, but so far it isn’t putting me in danger of unemployment.
But what about those weakly bundled jobs? Garicano et al. predict that these will begin to decline only after demand becomes sufficiently inelastic — in other words, once AI becomes so productive that its output hits diminishing returns for the consumer. After that point, automation tends to replace human labor — it becomes a way to make the same amount of stuff with fewer workers, instead of a way to make more stuff with the same amount of workers.
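The elasticity logic here can be seen with some stylized arithmetic (my sketch, not Garicano et al.’s actual model): if a productivity gain A lowers prices and consumers respond with demand elasticity eps, output scales roughly like A**eps while labor per unit of output falls like 1/A, so total labor moves like A**(eps - 1):

```python
def labor_index(A: float, eps: float) -> float:
    """Stylized labor demand after a productivity gain A (illustrative only).

    Output demanded scales as A**eps as productivity lowers prices;
    labor per unit of output is 1/A, so total labor = A**(eps - 1).
    """
    return A ** (eps - 1)

# Doubling productivity (A = 2) under elastic vs. inelastic demand:
print(labor_index(2.0, 1.5))  # elastic demand: labor rises (about 1.41)
print(labor_index(2.0, 0.5))  # inelastic demand: labor falls (about 0.71)
```

While demand stays elastic (eps > 1), automation expands output faster than it displaces labor; once demand saturates (eps < 1), the same productivity gains start shrinking headcount. That saturation point is the tipping point the paper’s argument turns on.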
Until that point, there will be quite a lot of work for people in weakly bundled jobs to do, because of expanded demand. And yet at the same time, companies won’t know which tasks to hire workers for, because AI’s “jagged” strengths and weaknesses will be constantly changing.
The rapidity with which Claude Code replaced the task of code-writing demonstrates this problem. In 2025, companies hiring software engineers could judge their merit based on how good they were at writing code. In 2026, companies have to judge the merit of software engineers based on how good they are at checking and maintaining code. Those skills don’t always go together.
The solution, I think, is to hire more generalists. Instead of picking people to do specific tasks, companies will pick people whose job is to constantly learn what AI is good and bad at, and to fill in the gaps. Cedric Savarese sums up this idea:
The first stage of ‘vibe freedom’ is…[t]he dreaded report that would have taken all night looks better than anything you could have done yourself and only took a few minutes…The next stage comes almost by surprise — there’s something that’s not quite right. You start doubting the accuracy of the work — you review and then wonder if it wouldn’t have been quicker to just do it yourself in the first place…You argue with the AI, you’re led down confusing paths, but slowly you start developing an understanding — a mental model of the AI mind. You learn to recognize the confidently incorrect, you learn to push back and cross-check, you learn to trust and verify…
Curiosity becomes essential. So does the willingness to learn quickly, think critically, spot inconsistencies, and to rely on judgment rather than treating AI as infallible…That’s the new job of the generalist: Not to be an expert in everything, but to understand the AI mind enough to catch when something is off, and to defer to a true specialist when the stakes are high[.]
Essentially, AI is going to be unreliable, but not in a predictable way. Its mistakes and shortcomings will require constant human exploration and patching. This is the job of a generalist. Instead of people who do “payroll” or “back-end engineering” or “accounting”, companies will need to hire people who can do a little bit of everything, if and when the AI messes something up.
In fact, we have an example of a corporate system that relied very heavily on this type of generalist: Japan. Until very recently, Japanese companies treated their “salarymen” as almost interchangeable labor, rotating them between different divisions and requiring them to learn a wide array of tasks. You might start your career in HR, then move to accounting, then do some product design, and so on.
This system might not have been very efficient, and the lack of specialization may have contributed to Japan’s notoriously low white-collar productivity. And it may be why salaryman jobs have been in decline for many years. But in the age of AI, it may finally make sense. When human expertise is replaced by AI expertise, humans’ role may be to flit from task to task, doing whatever the AI is bad at, and supervising AI at whatever it’s good at.
So instead of hiring people who are good accountants or good HR specialists or whatever, companies might start hiring people who are just good AI wranglers, people with the agency, mental flexibility, and energy to keep plugging the ever-shifting holes in what AI can do. In other words, salarymen.
The salaryman system also naturally lends itself to long job tenure. If I’m a highly specialized engineer, I can take my talents to a different company with my human capital intact. But if I’m a generalist who does a little bit of everything, my value as a worker depends more on my human networks within a company and my understanding of how the company works. That makes me a much less portable worker; I’m inclined to stay at the company where my long tenure makes me more valuable than a newcomer.
You can already see hints of this happening in American companies. We’re in a “no-hire, no-fire” economy — workers are hunkering down in their jobs and refusing to switch, and companies are keeping them there instead of hiring new workers:

This is exactly what you’d expect from a model of firm-specific human capital — in other words, from an economy where everyone increasingly realizes that modern employees need to act like Japanese salarymen. The hypothesis here is that people don’t want to leave their jobs (and companies are happy to keep them in their jobs) because their technical skills might be devalued due to rapid AI progress; instead, they’re staying in their companies, where knowing people and knowing how things work are still important.
So America may yet come to embrace the way of the salaryman. But the third category of future employment will also be very Japanese: self-employment and small business.
Japan has long had a very high prevalence of small business ownership, with one of the world’s largest proportions of small and medium-sized enterprises. In manufacturing as well as in retail, Japan has traditionally had far more small businesses than other OECD countries. This is now decreasing, as the population ages and business owners retire without heirs or proteges. But it still might point the way to the AI-enabled future.
AI creates leverage; it allows you to do more with a smaller team. For many businesses, the optimal size of this team will fall to only one person or a few people. Thus, I expect to see a lot of small companies sprout up, as people use AI agents to increase their productivity to the point where they only need a few employees (or even zero).
In other words, I expect AI to make the American labor system look a bit more like the Japanese labor system of the 1960s-2000s. There will be a bunch of generalists running around looking for things to do within their companies, a bunch of small businesspeople striking out on their own, and a few specialists with specific skills that still make them valuable. If you’re not one of the lucky few in the latter category, your choices will be to become a cog in an ever-changing corporate machine, or to strike out on your own and manage an AI “team” to sell some good or service directly to the consumer.
This might not be the most optimistic or enticing view of the future of work, especially to people who have lived their whole lives believing that their specific job skills are what make them valuable to society. But it’s probably better than humans becoming economically obsolete.