2025-12-05 18:14:44

In the 2010s, a bunch of right-wing types suddenly became big fans of Martin Luther King Jr.’s views on race. If you saw someone on Twitter quote MLK’s dictum that people should “not be judged by the color of their skin but by the content of their character”, it was almost certainly someone on the right — quite a change from the type of person who probably would have cited King’s words half a century earlier. This is from an Associated Press story back in 2013:
King’s quote has become a staple of conservative belief that “judged by the color of their skin” includes things such as unique appeals to certain voter groups, reserving government contracts for Hispanic-owned businesses, seeking more non-white corporate executives, or admitting black students to college with lower test scores.
Many progressives railed against the idea of a colorblind society, arguing that statistical disparities between racial groups — income gaps, wealth gaps, incarceration gaps, and so on — couldn’t be remedied without writing race into official policy and becoming much more race-conscious in our daily lives.
In the policy space, this idea manifested as DEI, which implemented racially discriminatory hiring policies across a broad swath of American business, government, academia, and nonprofits. In the media space, it manifested as a torrent of op-eds collectively criticizing white people as a group — “White men must be stopped: The very future of mankind depends on it”, “It’s Time for White People to Understand Their Whiteness”, “What is Wrong With America is Us White People”, and so on. Reputable institutions brought in speakers who made claims like “Whites are psychopaths.” Making nasty jokes about white people carried few if any professional consequences.
In that kind of environment, it’s understandable that lots of people on the right would turn to individualist principles like the ones espoused by MLK in his famous speech. Asking to be judged by the content of your character is a reasonable defense against people who are trying to judge you based on your membership in a racial group.
Fast-forward a few years, however, and the shoe is on the other foot. The Wall Street Journal released an editorial urging us not to blame Afghan immigrants as a group for the Afghan man who shot two National Guardsmen in Washington, D.C. a week ago:
[I]t would be a shame if this single act of betrayal became the excuse for deporting all Afghan refugees in the U.S…Tens of thousands are building new lives here in peace and are contributing to their communities. They shouldn’t be blamed for the violent act of one man.
Stephen Miller, Trump’s powerful Homeland Security Advisor, responded with a dismissal of individualism and an indictment of Afghans as a group:
This is the great lie of mass migration. You are not just importing individuals. You are importing societies. No magic transformation occurs when failed states cross borders. At scale, migrants and their descendants recreate the conditions, and terrors, of their broken homelands.
And that same week, it was revealed that some Somalis in Minnesota had committed a massive welfare fraud:
Federal prosecutors charged dozens of people with felonies, accusing them of stealing hundreds of millions of dollars from a government program meant to keep children fed during the Covid-19 pandemic…At first, many in the state saw the case as a one-off abuse…But…Over the last five years, law enforcement officials say, fraud took root in pockets of Minnesota’s Somali diaspora as scores of individuals made small fortunes by setting up companies that billed state agencies for millions of dollars’ worth of social services that were never provided…Federal prosecutors say…more than $1 billion in taxpayers’ money has been stolen[.]
In the wake of those revelations, Trump condemned Somalis as a group. Here are his exact words:
I don’t want em in our country. Their country’s no good for a reason. Their country stinks…I could say that about other countries too…We don’t want em…We have to rebuild our country. You know, our country’s at a tipping point. We could go bad…We’re going to go the wrong way if we keep taking garbage into our country. Ilhan Omar is garbage. Her friends are garbage…And from where they came from, they got nothing…When they come from Hell, and they complain, and do nothing but bitch, we don’t want em in our country. Let em go back to where they came from, and fix it.
Here you see the very same idea that Stephen Miller expressed. Trump and Miller both judge people by their ethnic group, and they judge those ethnic groups by the condition of their ancestral country. Somalia is a bad place, therefore Somalis are bad, therefore if you’re a Somali you’re bad and you shouldn’t be allowed into America. Afghanistan is a bad place, therefore Afghans are bad, therefore if you’re an Afghan you’re bad and you shouldn’t be allowed into America.
In fact, this idea was very popular a century ago, when America enacted harsh restrictions on immigration. Restrictionists argued that immigrants from Southern and Eastern Europe were undesirable, because Southern and Eastern Europe were relatively underdeveloped places. For example, here’s what Francis Walker, the president of MIT and a staunch opponent of immigration, wrote in The Atlantic in 1896:
Only a short time ago, the immigrants from southern Italy, Hungary, Austria, and Russia together made up hardly more than one per cent of our immigration. To-day the proportion has risen to something like forty per cent…The entrance into our political, social, and industrial life of such vast masses of peasantry, degraded below our utmost conceptions, is a matter which no intelligent patriot can look upon without the gravest apprehension and alarm. These people have no history behind them which is of a nature to give encouragement…They are beaten men from beaten races; representing the worst failures in the struggle for existence…They have none of the ideas and aptitudes which fit men to take up readily and easily the problem of self-care and self-government. [emphasis mine]
This is a form of racial collectivism. It’s judging people by ethnic, racial, and national groups instead of as individuals. In his landmark 1955 book Strangers in the Land, which chronicles the anti-immigration movement of the late 19th and early 20th centuries, historian John Higham labeled this attitude “racism”. Today, of course, we can’t use that word, since it has been repurposed to mean so many other things. But the word feels like a perfect fit — it’s an ideology (an “ism”) that holds that people are to be judged according to the collective accomplishments of their race.
When I see people on the right spouting this sort of rhetoric, I think: What happened to MLK? What happened to judging people based on the content of their character? What happened to the colorblind society? What happened between 2018 and now that makes collective judgment of racial groups suddenly ok?
The answer, of course, is “The right got the upper hand in American politics.” It turns out that individualism is a bit like free speech — a principle that lots of people tend to support when their tribe is losing, only to abandon it as soon as they’re back on top. A lot of people really do believe in individualism, of course, especially in America. But a lot of others just use it as a cynical shield when they’re on the defensive. And we’re finding out that most of the MAGA movement was always the latter type.
MAGA’s overriding goal is immigration restriction. They care about this much more than any other policy issue — more than inflation, more than trade, more than crime, more than anything. And the reason they want immigration restriction, I believe, is that they think Somalis and Afghans and Haitians and so on are going to make America more like those countries. When Trump and Miller talk about this, I think they’re being completely honest. And after Trump is gone, I think this idea will be at the core of the new right-wing ideology that sustains the MAGA movement. Racial collectivism is absolutely central to their worldview.
But MAGA has a big problem: While that worldview has some appeal to Americans, overall they aren’t on board. Every poll we have shows pro-immigration sentiment on the rise again, after a dip during the Biden years:


A lot of Americans are also in favor of individualism — that is, of treating people based on their individual traits rather than what group they belong to. Americans of most races supported the recent Supreme Court decision banning racial preferences in university admissions; even black Americans were about evenly split. And while Americans disagree about lots of racial issues, they tend to overwhelmingly say they support things like equal opportunity regardless of race.
And although there are differences in American attitudes toward immigrants from different regions of the world, the differences aren’t huge, and they don’t perfectly line up with how developed the regions are. For example, here’s a 2015 poll by Pew, finding that immigration from Africa is viewed more favorably than immigration from the much more developed regions of Latin America and the Middle East:

Here’s a 2021 poll from Cato that finds the same pattern:

So although some Americans are probably evaluating immigrants based on their racial group and on the condition of their source country, like Trump and Miller are, Americans in general probably don’t think this way. They get mad at illegal immigration, and at the disorderly quasi-legal immigration that Biden tolerated — but illegal entry is an individual action, not a group trait.
Which makes sense. The U.S. immigration system is highly selective; Lazear (2017) shows that selectivity explains a very large fraction of the variation in educational attainment across immigrant groups in America.
As Matt Yglesias points out, nowhere is this more evident than with Indian immigrants. The country of India is still poor; despite solid recent growth, its GDP per capita is lower than that of El Salvador or Guatemala. Infrastructure has improved a lot but is still subpar, and the country has pockets of startling poverty. By the racial-collectivist logic of Miller and Trump, or of the restrictionists of a century ago, Indian immigrants should be turning America into a third-world country.
And yet the exact opposite is happening. Indian Americans are arguably the most successful group in the United States. They have the highest median household income of any national ancestry group, and the highest average level of education. Even Indians who are poor when they arrive in America end up making well above the median — a level of mobility rivaled only by Chinese Americans. And more of America’s immigrant billionaires come from India than from any other country.
Nor has Indian immigration turned anywhere in America into a version of India. Fremont, California is probably the U.S. city of over 100,000 people with the greatest percentage of Indian Americans — about 29%. And yet Fremont is one of the cleanest, nicest, richest, safest towns in the whole country, with a murder rate so low that many European countries would envy it, and arguably the best public schools in the country. A recent survey identified Fremont as the happiest city in America.
Almost all of the MAGA people screaming about Indian immigration on the internet live in places less nice than Fremont.
A big part of this, of course, is that immigration from India is so selective. India is the world’s most populous country; it’s not too hard to grab a few million smart people from a country that big. But this isn’t the only reason. American institutions are also important.
As another example, take El Paso. The overwhelming majority of people in El Paso are of Mexican descent. Mexican immigration is among the least selective, because Mexico is so close to America and there was so much illegal immigration in the past. And yet despite being filled with ethnic Mexicans, El Paso looks absolutely nothing like Juarez, the Mexican city that sits right next to it on the opposite side of the border. El Paso’s murder rate is 3.8 per 100,000 — very low for an American city — while Juarez is one of the most violent, chaotic cities on planet Earth.
Mexicans didn’t turn El Paso into Mexico, and the reason is American institutions. America’s economy offers El Paso’s residents the chance to get ahead without joining drug gangs. American culture is a more positive-sum, less violent culture than Mexico’s. And the U.S. military has a big presence in El Paso, because Fort Bliss is there. Even without selectivity, institutions matter a lot.
So Stephen Miller is just flat-out wrong. Immigrants do not recreate the conditions of their homelands in America. Yes, there is some amount of carryover, including some negative influences like the old Sicilian mafia, or modern gangs like MS-13. But the differences between American immigrant populations and their source countries far outweigh the similarities.
In order for MAGA to win, they need to convince America otherwise — they need to persuade you, the American citizen, that the fiction that undergirds their ideology is actually true. To this end, they need to get you to judge people in terms of their group, rather than as individuals. So they keep looking around for a group they think they can convince you to fear, to disdain, and ultimately to hate.
Remember last year, during the campaign season, when Trump and JD Vance declared that Haitian immigrants were eating people’s pets in Springfield, Ohio?
It was all B.S., of course. News crews descended on Springfield, but not even the most right-wing reporters could find a credible report of a single pet being eaten. JD Vance awkwardly begged the internet to “keep the cat memes flowing”, and never apologized for smearing a whole group of people, but at some point everyone realized it was a hoax.
That’s why you didn’t hear anything about cat-eating Haitian-Ohioans before the campaign season of 2024. And that’s why you haven’t heard anything about it since then. It wasn’t real; you were being played.
Now they’re trying again, with the Somalis of Minnesota. This time, they probably have a better shot at success. For one thing, Somalis in America are much poorer than their Haitian-American counterparts. Haitians in the U.S. have slightly below-average incomes and roughly average education levels; they commit few crimes, and they’re not prominent in politics. They’re basically just quiet middle-class people living pretty normal American lives.
Somalis, on the other hand, are an extremely poor group, with very high poverty rates and much lower incomes than Haitians, or than immigrants in general; this is largely because most of them are refugees or descendants of refugees, who are the least selected type of immigrants. Somalis are Muslim, unlike Haitians, which makes them both visually distinct (because of the hijab) and mentally associated with civilizational conflict. They’re not known for violence, but now they’re associated with Minnesota’s massive organized welfare fraud.
And unlike the Haitians of Ohio, the Somalis of Minnesota are prominent and powerful in local politics. They managed a sort of takeover of the Minneapolis Democratic Party, nominating one of their own, Omar Fateh, as the Democratic candidate over incumbent mayor Jacob Frey. Frey managed to beat Fateh in the general election, but only by courting a rival Somali clan and making flamboyant appeals to the Somali community.
This is hardly unprecedented in American politics — Irish immigrants built political machines that dominated the politics of many American cities in the 19th century. Given many decades, it’s likely that Somalis will assimilate, the same way the Irish did, and turn the organizational skills that allowed them to swindle the state of Minnesota and take over the Minneapolis Democrats to some more constructive use, like building drone factories (or whatever humans are doing 80 years from now).
But “many decades” is a very long time for Americans to wait in order not to worry about culture clash. And Americans aren’t used to urban ethnic machine politics these days,1 and the notion of an iconic American city being at the mercy of clan rivalries from one of the world’s poorest and most violent nations will naturally lend force to Trump’s argument that Somalis are trying to make Minnesota into another Somalia.
If Trump and MAGA succeed in getting a critical mass of regular Americans to reject Somalis categorically, as a racial group, then they win a crucial victory — not over the Somalis, who pose them no actual threat, but in changing the terms of the discourse around race and immigration in America.
Once MAGA can convince you that “Are the Somalis bad?” is a legitimate question to ask, they then pretty much automatically get to ask the same question about every other group in America. They get to ask “Are Afghans bad?”, and “Are Haitians bad?”. They’ll get to ask “Are Jews bad?”, “Are Indians bad?”, and “Are Chinese people bad?”. Eventually they might even get around to asking “Are Italians bad?”, and so on. They will push as far as they can.
Even if those questions get answered in the negative — even if Italians and Indians and Haitians can all successfully defend their right to be in America by appealing to the court of MAGA opinion — the mere fact that they had to defend themselves as racial groups, instead of as individuals, will redefine what America is all about. It will move America toward being an estate society — a society where rights and privileges are accorded to groups instead of individuals.
In the 20th century, American liberals successfully overcame all of the people who wanted to make the country a racial estate society — Jim Crow was outlawed, immigration laws were made (more or less) race-neutral, and so on. Liberals accomplished this by appealing to Americans’ deep-seated value of individualism — of the idea that people shouldn’t be judged by the group they were born into. That idea, captured most eloquently in MLK’s famous speech but repeated ad infinitum by leaders, writers, and activists, ultimately carried the day and made America the liberal nation I grew up in.
What I fear is that by embracing identity politics in the 2010s, progressives have thrown away liberals’ ultimate weapon. Appeals to individualism carry much less moral force when the people making those appeals just spent the last decade decrying colorblindness as a tool of systemic racism (or embracing people who made that claim).
This is not to say that rightists’ push to turn America into a balkanized racial hierarchy is progressives’ fault — it isn’t. Rightists are always trying to do this sort of thing; it’s not a reaction to anything progressives did. But there’s a reason this sort of racial collectivism was defeated and suppressed for a hundred years, and there’s a reason it’s breaking through now when it couldn’t before.
To be honest, they weren’t very relaxed about it in the 19th century either; anti-Irish sentiment resulted in vicious pogroms, gang wars, and whole newspapers devoted to spreading anti-Irish rumors.
2025-12-04 17:20:03

How did the screen you’re looking at right now get invented? There was a whole pipeline of innovation that started in the early 20th century. First, about a hundred years ago, a few weird European geniuses invented quantum mechanics, which lets us understand semiconductors. Then in the mid 20th century some Americans at Bell Labs invented the transistor, the first practical semiconductor device. Some Japanese and American scientists at various corporate labs learned how to turn semiconductors into LEDs, LCDs, and thin-film transistors, which we use to make screens. Meanwhile, American chemists at Corning invented Gorilla Glass, a strong and flexible form of glass. Software engineers, mostly in America, created software that allowed screens to respond to touch in a predictable way. A host of other engineers and scientists — mostly in Japan, Taiwan, Korea, and the U.S. — made a bunch of incremental hardware improvements that made those screens brighter, higher-resolution, stronger, more responsive to touch, and so on. And voilà — we get the screen you’re reading this post on.
This story is very simplified and condensed, but it illustrates how innovation is a pipeline. We have names for pieces of this pipeline — “basic research”, “applied research”, “invention”, “innovation”, “commercialization”, and so on — but these are approximate, and it’s often hard to tell where one of these ends and another begins. What we do know about this pipeline is:
It tends to go from general ideas (quantum mechanics) to specific products (a modern phone or laptop screen).
The initial ideas can rarely if ever be sold for money, but at some point in the chain you start being able to sell things.
That switch from non-monetizable to monetizable typically means that the early parts of the chain are handled by inventors, universities, government labs, and occasionally a very big corporate lab, while the later parts of the chain are handled mostly by corporate labs and other corporate engineers.
Very rarely does a whole chain of innovation happen within a single country; usually there are multiple handoffs from country to country as the innovation goes from initial ideas to final products.
Here’s what I think is a pretty good diagram from Barry Naughton, which separates the pipeline into three parts:

Over the years, the pipeline has changed a lot. In the old days, a lot of the middle stages — the part where theory gets turned into some basic prototype invention — were done by lone inventors like Thomas Edison or Nikola Tesla. Later, corporate labs took over this function, bringing together a bunch of different scientists and lots of research funding. More recently, corporate labs have been doing less basic research (though they’re still very important in some areas like AI and pharma), and venture-funded startups have moved in to fill some of that gap.
The early parts of the pipeline changed too — university labs scaled up and became better funded, government labs got added, and a few very big corporate labs like Bell Labs even did some basic science of their own. The key innovation here was Big Science — in World War 2, America began using government to fund the early stages of the innovation pipeline with truly massive amounts of money. Everyone knows about the NIH and the NSF, but the really huge player here is the Department of Defense:

Japan, meanwhile, worked on improving the later parts of the chain. I recommend the book We Were Burning for a good intro to the ways that Japanese corporate labs utilized their companies’ engineering-intensive manufacturing divisions to make a continuous stream of small improvements to final products, as well as to find ways to scale up and reduce costs (kaizen).
And finally, the links between the pieces of the pipeline — the way that technology gets handed off from one institution to another at different stages of the chain — changed as well. America passed the Bayh-Dole Act in 1980, making it a lot easier for university labs to commercialize their work — which thus made it easy and often lucrative for corporations to fund research at universities. (This had its roots in earlier practices by U.S. and German universities.)
Meanwhile, in parallel, the U.S. pioneered a couple of other models. There was the DARPA model, where an independent program manager funded by the government coordinates researchers from across government, companies, and universities in order to produce a specific technology that then gets handed off to both companies and the military. And there are occasional “Manhattan projects”, where the government coordinates a bunch of actors to create a specific technological breakthrough, like building nuclear weapons, landing on the moon, or sequencing the human genome.
So we’ve seen a number of big changes in the innovation pipeline over the years. And different countries have done innovation differently, adding crucial pieces and making key changes as their innovation ecosystems developed. The UK pioneered the patent-protected “lone inventor” model (with some forerunners of modern venture capital). Germany created corporate labs and the research university. America invented Big Science, modern VC, and DARPA, while also scaling up modern university-private collaboration and undertaking a few Manhattan-type projects. And Japan added continuous improvement and continuous innovation at the end of the chain.
That story more or less brings us up from the 1700s to the late 2010s. That’s when China enters the innovation story in a big way.
Up through the mid-2010s, China had a pretty typical innovation system — the government would fund basic research, companies would have labs that would create products, and so on. China wasn’t really at the technological frontier yet, though, so this system didn’t really matter that much for Chinese technology — most of the advances came from overseas, via licensing, joint ventures, reverse engineering, or espionage. If you’ve ever heard people talk about how China “steals” all its tech, they’re talking about this era — and “steal” means a whole bunch of different things.
In the 2010s, China’s growth slowed down. There were a lot of reasons for that, but one reason was that they were approaching the limits of how much technology they could transfer from overseas. They had to start inventing things on their own. So they did.
You’ve probably read a lot about Chinese innovation in the last few years. Most things you read will fall into one or more of three basic categories:
“Look how much money China is spending on research”
“Look how many academic papers China is publishing”
“Look which high-tech industries China is dominating”
Here is a good recent Financial Times article that combines the first and the third of these, here is an Economist article from last year about the second, and here is a recent Economist story about the third.
All of these are certainly worth looking at. For example, China really is spending a whole lot more money on research:

And since salaries and materials and equipment are all cheaper in China, in PPP terms they’re actually spending a bit more on research than America now. And the gap is set to widen, with or without planned U.S. budget cuts:

As for scientific output, despite inflating its citation counts a lot with citation rings and other tricks, China now leads the world in high-quality STEM papers, especially in materials science, chemistry, engineering, and computer science.
And as for high-tech manufacturing, China is dominating there as well, except in a few narrow sectors where U.S. export controls have managed to keep key pieces of technology out of Chinese hands.
One other piece of evidence that China’s innovation is producing real results comes from the royalties that the world pays to Chinese companies to license their technologies. This amount has skyrocketed since China rolled out its new innovation system in the late 2010s, showing that China is producing lots of technology that the world is willing to pay for:

But although you’ll read a lot in the news about how much China is innovating, you almost never read a good explanation of how they’re doing it. Most people don’t seem to think about how research actually functions; people talk as if it’s just a black box where money goes in and cutting-edge high-tech products come out the other side. But it’s not a black box; the way that a country translates money into products is very important. It affects how productively the money will get used, who spends the money, how much can be deployed, what kinds of products and technologies the system will create, and who will benefit from those products.
In fact, we know a lot about China’s innovation system — enough to know that in the last decade, they’ve created something new and powerful and interesting. If you want some readings, I strongly recommend:
MERICS’s 2023 report, “Controlling the Innovation Chain” and its 2024 follow-up
Barry Naughton’s condensed writeup of the MERICS report, describing why the shift happened and what it means
IGCC’s 2023 report, “Reorganization of China’s Science and Technology System”
Jamestown’s brief summary of the key actors in the system
If you want a deeper dive, CSET has some good reports on the Chinese Academy of Sciences and the “State Key Lab” funding ecosystem.
Anyway, reading all this, it’s clear that like all the industrial nations before it, China has made big changes to the way innovation gets done. I’ll talk about what these changes are, and what they imply for the future of technology (and the economy), but first I think it’s useful to think a bit about the purpose of China’s innovation system.
2025-12-03 17:24:54
I’ve been pretty critical of Zohran Mamdani’s ideas for New York City. His plan to make buses free would degrade the quality of public transit and make it both less useful and less popular. His idea to open government-run grocery stores would just fail outright. His rent control plan would at least partially undermine his housing plans, while his proposed tax increases would probably accelerate the exodus of New York’s crucial finance industry.
But at least one of Zohran’s ideas is excellent: His support for small business. In a recent video, he promised to make it “faster, easier, and cheaper” for small retail businesses to open in NYC, to cut fines and fees for these businesses by 50%, to accelerate permits and applications, to slash regulations, to have government workers who help small businesses navigate government requirements, and to increase funding for small business support programs by 500%. Many of these ideas are listed on Mamdani’s website.
Mamdani’s push to support small business is part of a larger overall theme within America’s revitalized socialist movement, and within modern progressivism in general — a deep suspicion of big corporations and an instinctive support for mom & pops. It’s not just movement progressives, either. Daniel Lurie, San Francisco’s mayor, is widely regarded as a centrist, and yet he has made support for small retail businesses a keystone of his approach to urban revitalization:
Mayor Daniel Lurie today signed five ordinances from his PermitSF legislative package, driving the city’s economic recovery by making major structural changes that will help small business owners and property owners secure the permits they need more easily and efficiently. Reforms include common-sense measures to support small businesses through the permitting process, boost the city’s nightlife businesses, help families maintain their homes, and increase flexibility to support businesses downtown.
And here’s what he tweeted back in May:
No more permits for sidewalk tables and chairs—putting $2,500 back in the pockets of small businesses and saving them valuable time…No more permits and fees to put your business name in your store window or paint it on your storefront…No more trips to the Permit Center to have candles on your restaurant’s table…No more rigid rules about what your security gate must look like so businesses have more options to secure their storefronts…No more long waits or costly reviews for straightforward improvements to your home, like replacing a back deck…And we’re getting rid of outdated rules to give downtown businesses more flexibility with how to use their ground-floor spaces—because if adding childcare centers and gyms will help bring companies and employees back downtown, we should support it…In addition, every city department involved in permitting will track timelines and publish them online…Learn more about the initiative at https://sf.gov/permitsf
I’m extremely happy about this trend. To be frank, there’s not much to like in the model of progressive local governance that has emerged over the last two decades. Cumbersome regulations that slow construction and raise costs, public money funneled to useless or corrupt nonprofits, permissive policies toward crime and disorder, and weakening public education in the name of “equity” have all sadly become part of the standard progressive package, with predictably terrible results. (Lurie is known as a “centrist” because he has tried to rectify at least some of these problems.)
But the emerging support for small business is a very important bright spot! First of all, it’s an example of progressives supporting productive enterprise, rather than treating every type of human activity as an opportunity for ad-hoc redistribution. Progressives talk endlessly about “resources”, but the pool of resources in a city is not fixed. If you make buses scary to use by allowing disorderly people onto them, or if you limit new housing construction with regulation, or if you outsource city services to less competent nonprofits, the total amount of your city’s resources goes down, and there is simply less to go around.
But small businesses increase a city’s total resources, because they are productive enterprises. Every restaurant means a greater variety of food to eat, every boutique means a greater selection of clothes to wear. They’re an incredibly important component of capitalism, providing productive employment for almost half of all private-sector workers.
At this point, some hard-headed conservative is going to pop up to inform me that small business is inherently less efficient at production than big business. And this is generally true. Economies of scale are a real thing — when you can leverage the distribution networks and high volumes of Wal-Mart, you can afford to charge consumers lower prices than a corner bodega that doesn’t have those advantages.
That’s why big chain stores tend to drive mom-and-pop shops out of business when they come to town. When chains drive out small businesses, productivity goes up significantly — in fact, Foster, Haltiwanger, and Krizan (2006) estimated that this was the main source of productivity growth for the U.S. retail industry in the late 20th century.1
And yet when small business dies, something important is lost. For one thing, an important path to the middle class is closed off. Small business provides a lot of employment, but the people whose lives are transformed the most are the business owners themselves. This is from a 2014 report by the Urban Institute:
Family-business ownership is associated with faster upward mobility than observed in paid work once selection is addressed….[We find] a positive and significant [causal effect] on family-business ownership, where the outcome is upward income mobility from 1980 to 1999…[Our results] suggest that family-business ownership led to a higher level of economic advancement relative to working for someone else in the 1980s and 1990s. Owning or having a management stake in a small business had an unambiguously positive effect on upward income mobility during the 1980s and 1990s after controlling for resources in the 1970s.
This is the reason for Japan’s legendarily staunch support of small retail businesses. The country offers small businesspeople a dizzying array of cheap loans, tax incentives, subsidies for technological upgrading, free training and education, expedited permitting and regulatory approval, startup subsidies, various place-based policies, protection from competition by large chains, and so on.
Small business is considered a key pillar of the Japanese middle class, and also an escape hatch for independent-minded Japanese people to escape the often stifling corporate system. Altogether, small businesses of all types are responsible for 70% of Japanese employment, which is significantly higher than in the U.S. This preponderance of small business has probably held back Japan’s productivity to some degree, but it’s a sacrifice the country has been willing to make.
In the United States, small business is especially important as a ladder of upward mobility for immigrants, as anyone whose immigrant ancestors owned a convenience store, a furniture store, or a gas station can attest. Immigrants own a disproportionately high percentage of the country’s mom-and-pop shops, especially in the restaurant industry.
At this point, the aforementioned hard-headed conservatives may accuse me of caring about distribution more than production. Why settle for somewhat-productive mom-and-pop shops when you could get ultra-productive chains like CVS and Walmart? If we’re going for productivity, why not go all the way?
The answer, I think, is political-economic in nature. Socialists may be leading the charge for small business, but small business owners are perhaps the key constituency for capitalism itself. Being a business owner means that you are, by definition, a capitalist; you depend for your livelihood not just on the right to own capital and hire workers, but also on the entire network of trade and markets that supports modern business. Any major shock to that underlying free-market system — or even government policies that increase your costs by a moderate amount — is a threat to your way of life.
And on top of that, your day-to-day experience of life — hiring, firing, buying, selling, and so on — will familiarize you with the basic principles of markets. As a small businessperson, you “eat what you kill” — your survival depends on your own ability and hard work, and there’s no one looking out for you.
Compare that to the experience of being an employee of a large company, where your destiny is controlled by a distant gigantic organization, and your individual initiative may or may not be recognized and rewarded by your boss. In that sort of environment, socialism may start to seem appealing — it’s just replacing one big domineering organization with another, except at least you can vote for the people in the government.
No wonder small businesspeople tend to support pro-business parties. In the U.S., they’re traditionally a key Republican constituency, reflecting the fact that the GOP used to be known as the party of business. In fact, it’s a consistent pattern across countries. Malhotra, Margalit, and Shi (2025) crunch a large number of data sources, and emerge with one consistent finding:
We show that this sizable constituency of [small business owners], which is responsible for a substantial share of economic growth and overall employment, systematically leans to the right. This is most notable among business owners that employ other workers. Our findings indicate that this political affiliation is not merely a result of background characteristics that lead people to open or run a business.
Rather, the evidence suggests that experiences associated with running a business — particularly the heightened need to deal with the regulatory state — underlie the greater appeal of parties on the right.
Allowing hyper-efficient chain businesses like Wal-Mart to annihilate independent retail might bring a bit of economic efficiency and some higher profits in the short run, but in the long run, dispossessing the masses of significant capital ownership and disconnecting them from the reality of running a business is probably how you get trends like this one:

Ironically, this means socialists might be hurting their cause in the long term by supporting small business. But in the short term, they might manage to narrow the support gap with the GOP, while devolving capital ownership to a broader base of owners. That might be a compromise worth making, especially in cities like NYC and San Francisco where Republicans are probably not going to be a competitive threat anytime soon.
In the realm of urbanism, too, my bet is that small business owners have a positive effect. Data is more sparse here, but lots of the things that make cities livable — low crime, cheap dense housing, high-quality public transit — also happen to be the very things that bring lots of customers to small businesses’ doors. Small businesses do much better when they have lots of people living nearby, who can easily reach their doors on foot, and who can go outside without being worried about crime. The more that small businesses get strengthened in American cities, I predict, the faster sanity can be restored to urban policy after the missteps of the last few years.
On top of all the political-economic and distributional benefits, having a lot of small independent retail outlets just makes a city really, really nice. I wrote about this back in May; I’ll just quote myself a little bit:
Although American urbanists usually think in terms of housing density — which is understandable, given the country’s failure to build enough housing — I’ve come to realize the importance of commercial density. Basically, great cities have a lot of shops everywhere…The beauty of Brooklyn’s brownstones, or Paris’ Haussmann apartments, comes in large part from the fact that they’re located near to shops…
When we lament the isolation of the suburbs, we’re not really lamenting low residential density; we’re lamenting the isolation of houses from third spaces where people might meet and mingle. Those third spaces are shops…If you expect citizens to give up the comfort of huge suburban houses and leafy green lawns and move to the city center, they have to be compensated in some way. Having a huge variety of stores and restaurants and bars and cafes within easy walking distance is that compensation.
I’ve also written about how Japan’s strong support for small business is one of the biggest reasons why its cities are such amazing places to live and to visit. You just can’t beat the experience of walking around all those cool little independent restaurants and stores.

So anyway, I’m not worried about the economic inefficiency of small shops. They bring balance to the political system, they improve the quality of our cities, and they support the great American middle class. No wonder they’re more popular than any other institution in the country:

Mamdani, Lurie, and the other mayors supporting small retail business are doing exactly the right thing. It’s very refreshing to see a sensible urban policy after so many years of destructive nonsense.
Whether small business is good for economic growth overall is a slightly different question. In fact, different studies tend to contradict each other on this point.
2025-12-01 19:12:37
New technologies almost always create lots of problems and challenges for our society. The invention of farming caused local overpopulation. Industrial technology caused pollution. Nuclear technology enabled superweapons capable of destroying civilization. New media technologies arguably cause social unrest and turmoil whenever they’re introduced.
And yet how many of these technologies can you honestly say you wish were never invented? Some people romanticize hunter-gatherers and medieval peasants, but I don’t see many of them rushing to go live those lifestyles. I myself buy into the argument that smartphone-enabled social media is largely responsible for a variety of modern social ills, but I’ve always maintained that eventually, our social institutions will evolve in ways that minimize the harms and enhance the benefits. In general, when we look at the past, we understand that technology has almost always made things better for humanity, especially over the long haul.
But when we think about the technologies now being invented, we often forget this lesson — or at least, many of us do. In the U.S., there have recently been movements against mRNA vaccines, electric cars, self-driving cars, smartphones, social media, nuclear power, and solar and wind power, with varying degrees of success.
The difference between our views of old and new technologies isn’t necessarily irrational. Old technologies present less risk — we basically know what effect they’ll have on society as a whole, and on our own personal economic opportunities. New technologies are disruptive in ways we can’t predict, and it makes sense to worry about the risk that we might personally end up on the losing end of the upcoming social and economic changes.
But that still doesn’t explain changes in our attitudes toward technology over time. Americans largely embraced the internet, the computer, the TV, air travel, the automobile, and industrial automation. And risk doesn’t explain all of the differences in attitudes among countries.
In the U.S., few technologies have been on the receiving end of as much popular fear and hatred as generative AI. Although policymakers have remained staunchly in favor of the technology — probably because it’s supporting the stock market and the economy — regular Americans of both parties tend to say they’re more concerned than excited, with an especially rapid increase in negative sentiment among progressives.
There is plenty of trepidation about AI around the world, but America stands out. A 2024 Ipsos poll found that no country surveyed was both more nervous and less excited about AI than the United States:

America’s fear of AI stands in stark contrast to countries in Asia, from developing countries like India and Indonesia to rich countries like South Korea and Singapore. Even Europe, traditionally not thought of as a place that embraces the new, is significantly less terrified than the U.S. Other polls find similar results:

If Koreans, Indians, Israelis, and Chinese people aren’t terrified of AI, why should Americans be so scared — especially when we usually embraced previous technologies wholeheartedly? Do we know something they don’t? Or are we just biased by some combination of political unrest, social division, wealthy entitlement, and disconnection from physical industry?
It’s especially dismaying because I’ve spent most of my life dreaming of having something like modern AI. And now that it’s here, I (mostly) love it.
Media has prepared me all my life for AI. Some of the portrayals were negative, of course — Skynet, the computer in the Terminator series, tries to wipe out humanity, and HAL 9000 in 2001: A Space Odyssey turns on its crew. But most of the AIs depicted in sci-fi were friendly — if often imperfect — robots and computers.
C-3PO and R2-D2 from Star Wars are Luke’s loyal companions, and save the Rebellion on numerous occasions — even if C-3PO is often wrong about things. The ship’s computer in Star Trek is a helpful, reassuring presence, even if it occasionally messes up its holographic creations.1 Commander Data from Star Trek: The Next Generation is a heroic figure, probably based on a character from Isaac Asimov’s Robot series — and is just one of hundreds of sympathetic portrayals of androids. Friendly little rolling robots like WALL-E and Johnny 5 from Short Circuit are practically stock characters, and helpful sentient computers are important protagonists in The Moon is a Harsh Mistress, the Culture novels, the TV show Person of Interest, and so on. The novel The Diamond Age features an AI tutor that helps kids out of poverty, while the Murderbot series is about a security robot who just wants to live in peace.
In these portrayals, intelligent robots and computers are consistently portrayed as helpful assistants, allies, and even friends. Their helpfulness makes sense, since they’re created to be our tools. But some deep empathetic instinct in our human nature makes it difficult to objectify something so intelligent-seeming as a simple tool. And so it’s natural for us to portray AIs as friends.
Fast forward a few decades, and I actually have that little robot friend I always dreamed of. It’s not exactly like any of the AI portrayals from sci-fi, but it’s recognizably similar. As I go through my daily life, GPT (or Gemini, or Claude) is always there to help me. If my water filter needs to be replaced, I can ask my robot friend how to do it. If I forget which sociologist claimed that economic growth creates the institutional momentum for further growth,2 I can ask my robot friend who that was. If I want to know some iconic Paris selfie spots, it can tell me. If I can’t remember the article I read about China’s innovation ecosystem last year, my robot buddy can find it for me.
It can proofread my blog posts, be my search engine, help me decorate my room, translate other languages for me, teach me math, explain tax documents, and so on. This is just the beginning of what AI can do, of course. It’s possibly the most general-purpose technology ever invented, since its basic function is to memorize the entire corpus of human knowledge and then spit any piece of it back to you on command. And because it’s programmed to do everything with a smile, it’s always friendly and cheerful — just like a little robot friend ought to be.
No, AI doesn’t always get everything right. It makes mistakes fairly regularly. But I never expected engineers to be able to create some kind of infallible god-oracle that knows every truth in the Universe. C-3PO gets stuff confidently wrong all the time, as does the computer on Star Trek. For that matter, so does my dad. So does every human being I’ve ever met, and every news website I’ve ever read, and every social media account I’ve ever followed. Just like with every other source of information and assistance you’ve ever encountered in your life, AI needs to be cross-checked before you can believe 100% in whatever it tells you. Infallible omniscience is still beyond the reach of modern engineering.
Who cares? This is an amazingly useful technology, and I love using it. It has opened my informational horizons by almost as much as the internet itself, and made my life far more convenient. Even without the expected impacts on productivity, innovation, and so on, just having this little robot friend would be enough for me to say that AI has improved my life enormously.
This instinctive, automatic reaction to such a magical new tool seems utterly natural to me. And yet when I say this on social media, people pop out of the woodwork to denounce AI and ridicule anyone who likes it.
Normally I would just dismiss these outbursts as non-representative. But in this case, there’s pretty robust survey data showing that the American public is overwhelmingly negative on AI. These social media malcontents may be unusually vituperative, but their opinions probably aren’t out of the mainstream.
What’s going on here? Why doesn’t everyone else love having a little robot friend who can answer lots of their questions and perform lots of their menial tasks?
I guess it makes sense that for a lot of people, the potential negative externalities — deepfakes, the decline of critical thinking, ubiquitous slop, or the risk that bad actors will be able to use AI to do major violence — loom large. Other people, like artists or translators, may fear for their careers. I think it’s likely that in the long run, our society will learn to deal with all those challenges, but as Keynes said, “in the long run we are all dead.”
And yet the instinctive negativity with which AI is being met by a large segment of the American public feels like an unreasonable reaction to me. Although externalities and distributional disruptions certainly exist, the specific concerns that many of AI’s most strident critics cite are often nonsensical.
One of the most common talking points you hear about AI is that data centers use a ton of water, potentially causing water shortages. For example, Rolling Stone recently put out an article by Sean Patrick Cooper, entitled “The Precedent Is Flint: How Oregon’s Data Center Boom Is Supercharging a Water Crisis”. Here’s what it claimed:
[D]ata centers pose a variety of climate and environmental problems, including their impact on the water supply. The volume of water needed to cool the servers in data centers — most of which need to be kept at 70 to 80 degrees to run effectively — has become a nationwide water resource issue particularly in areas facing water scarcity across the West. This year, a Bloomberg News analysis found that roughly “two-thirds of new data centers built or in development since 2022 are in places already gripped by high levels of water stress.” Droughts have plagued Morrow County, occurring annually since 2020. But even areas with ample water reserves are vulnerable to the outsized demand from data centers. Earlier this year, the International Energy Agency reported that data centers could consume 1,200 billion liters by 2030 worldwide, nearly double the 560 billion liters of water they use currently.
The idea that AI data centers are water-guzzlers has become standard canon in many areas of the internet — especially among progressives. And yet it’s just not a real issue. Andy Masley finally got fed up and dug into the data, writing an epic blog post that debunked every single version of the “AI uses lots of water” narrative.
Masley notes that A) almost all of the water AI uses is actually used by power plants generating electricity to power AI, and B) most of the water that gets “used” by AI is actually just run through the system and returned to the original source, instead of being used up. He writes:
So of the ways AI uses water…The vast majority (maybe 90%) is withdrawn, freshwater (not potable) that is indirectly (offsite) used non-consumptively in power plants (it’s returned to the source unaffected)…Less (maybe 7%) is withdrawn freshwater (not potable) that is consumed (evaporated) indirectly (offsite) in the power plants to generate the electricity AI uses…And less (maybe 3%) is withdrawn freshwater that’s then treated to become potable, used directly (onsite) in physical data centers themselves, and consumed after (not returned to the source, evaporated).
He then goes on to pull a bunch of numbers that show that AI’s actual water consumption isn’t a problem (yet):
All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily…So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023…However, the water that was actually used onsite in data centers was only 50 million gallons per day…Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.
AI uses approximately 20% of the electricity in data centers…Water use roughly correlates with electricity…So AI consumes approximately 0.04% of America’s freshwater if you include onsite and offsite use, and only 0.008% if you include just the water in data centers. So AI…is using 0.008% of America’s total freshwater…
So the water all American data centers will consume onsite in 2030 is equivalent to:
8% of the water currently consumed by the U.S. golf industry.
The water usage of 260 square miles of irrigated corn farms, equivalent to 1% of America’s total irrigated corn.
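Masley’s percentages are easy to reproduce. Here’s a minimal Python sketch of the arithmetic, using only the figures quoted above — taking the midpoint of the 200–250 million gallon range is my assumption, while the 20% AI share (with water scaling in proportion to electricity) is his:

```python
# Back-of-envelope check of Masley's water figures, as quoted above.
dc_total_mgd = 225           # all U.S. data centers, million gallons/day (midpoint of 200-250)
dc_onsite_mgd = 50           # water consumed onsite, inside the data centers themselves
us_freshwater_mgd = 132_000  # total U.S. freshwater consumption: ~132 billion gallons/day
ai_share = 0.20              # AI's rough share of data-center electricity (and thus water)

print(f"All data centers, incl. offsite: {dc_total_mgd / us_freshwater_mgd:.2%}")   # ~0.2%
print(f"All data centers, onsite only:   {dc_onsite_mgd / us_freshwater_mgd:.2%}")  # ~0.04%
print(f"AI, incl. offsite: {ai_share * dc_total_mgd / us_freshwater_mgd:.2%}")      # ~0.03-0.04%
print(f"AI, onsite only:   {ai_share * dc_onsite_mgd / us_freshwater_mgd:.3%}")     # ~0.008%
```

The whole dispute, in other words, comes down to four numbers and some division.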
And he includes a chart for comparison to other water uses:

He goes on to debunk other aspects of the “AI water use” story, showing that data centers haven’t hurt water availability in local areas, and that data centers aren’t a major source of pollution. After that, he goes through some articles claiming a link between AI and water use, and finds that either their data is obviously wrong, or they don’t even cite data.
Masley is especially annoyed with a book by Karen Hao called Empire of AI, which made huge math errors about AI water use:
Within 20 pages, Hao manages to:
Claim that a data center is using 1000x as much water as a city of 88,000 people, where it’s actually using about 0.22x as much water as the city, and only 3% of the municipal water system the city relies on. She’s off by a factor of 4500.
Imply that AI data centers will consume 1.7 trillion gallons of drinkable water by 2027, while the study she’s pulling from says that only 3% of that will be drinkable water, and 90% will not be consumed, and instead returned to the source unaffected.
Paint a picture of AI data centers harming water access in America, where they don’t seem to have caused any harm at all.
Hao later admitted some serious data errors in her book.
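For what it’s worth, the “factor of 4500” is just the ratio of Hao’s claimed multiple to the actual one — a one-line check using the numbers in Masley’s list above:

```python
claimed = 1000   # Hao's claim: the data center uses 1000x the city's water
actual = 0.22    # Masley's figure: it actually uses about 0.22x the city's water
print(claimed / actual)  # ~4545 -- off by a factor of roughly 4500
```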
Anyway, Masley’s whole post is very long and involved, and it links to a bunch of other long and involved posts on the various subtopics. If you’re interested in the “AI water use” issue at all, this is a must-read. I’m not sure I’ve ever seen so thorough a debunking of a popular belief. Some AI critics like Timnit Gebru attacked Masley, but were not able to muster any substantive rebuttal; instead, they simply suggested that Masley speak to activists.
There are two things that are especially frustrating about the “AI and water use” argument. First, there’s the powerlessness of facts in the face of viral misinformation. If a bogus claim gets repeated by enough people, it becomes its own sort of echo chamber, where observers believe the false claim because they think they’ve heard it from multiple sources. Politically motivated reasoning only adds to that effect — a lot of Americans fear AI and want to find a reason why it’s bad, so they latch onto the “water use” story without checking to see if it’s real.
The second frustrating thing is that there’s a much better argument sitting right there for the taking. AI doesn’t use up much water, and probably won’t for the foreseeable future. But it does use a hell of a lot of electricity, and this could pose a problem if the tech keeps scaling up. That electricity use can strain local grids and raise carbon emissions. It’s a real challenge! And yet AI critics tend to ignore this grounded and reasonable worry in favor of the “water use” myth.
Nor is this the only such example. Aaron Regunberg, writing in The New Republic, trots out a vast litany of accusations against AI, most of which don’t hold up under scrutiny. For example, he claims that an AI crash would wipe out normal Americans while preserving the wealth of AI’s creators:
First, there is the likelihood that the AI industry is building up a bubble that, when it bursts, will take down the global economy…When this thing pops, it won’t be the filthy rich scammers behind the bubble who will lose out. It will be regular people: One former International Monetary Fund chief economist estimates that a crash could wipe out $20 trillion in wealth held by American households.
But that’s absurd. Gita Gopinath’s analysis, which Regunberg cites, is all about stock wealth. Most of the wealth of the people who create AI is in their own company’s stocks, so they would absolutely get wiped out in a crash. Regular Americans would get hurt a bit, but most of their wealth is in their houses. Almost all American stocks are owned by the rich. In fact, there was a tech stock crash in 2022, and wealth inequality went down as a result.
Anyway, Regunberg continues:
Beyond the profoundly compelling politics of a market crash and bailout, there’s the equally potent issue of AI-driven job losses. These hits have already begun. Last summer, IBM replaced hundreds of employees with AI chatbots, and UPS, JPMorgan Chase, and Wendy’s have all begun following suit. The CEO of Anthropic has warned that AI could lead to the loss of half of all entry-level white-collar jobs; McKinsey estimated that AI could automate 60 to 70 percent of employees’ work activities; and a recent report from Senator Bernie Sanders and the Health, Education, Labor and Pensions Committee found that AI could replace 89 percent of fast-food and counter workers, 64 percent of accountants, and 47 percent of truck drivers. The anger that such dislocations will generate against “clankers”—yes, anti-AI frustration has already inspired the explosion of a Gen Z meme-slur—is hard to overstate, potentially exceeding the resentment against the North American Free Trade Agreement that Trump rode to the White House in 2016.
The studies Regunberg cites, predicting how many jobs will be “replaced”, actually say no such thing, as I explained in this post back in 2023:
They are merely engineers’ guesses about which jobs will be affected by AI.
In fact, one recent study found that industries that are predicted to use AI more are seeing no slowdown in wages, and have experienced robust employment growth for workers in their 30s, 40s, and 50s.
The study did find a slowdown in hiring for younger workers. But other studies find no effect of AI on jobs at all so far. Obviously, many workers are afraid of losing their jobs to AI, but those losses don’t seem to have materialized yet, so it’s a bit silly when AI critics like Regunberg present job losses as established fact.
How can folks like Regunberg, Gebru, Hao, and Cooper get away with such incredible sloppiness? Unfortunately, the answer is “motivated reasoning”. So many Americans feel so negatively about AI that if some writer or intellectual comes up and makes some claim that AI is a water-guzzler or is causing mass unemployment, some people will believe it. Fear and anger have a way of finding their own justifications.
So I don’t think there’s much I can do about the unreasonable negative emotions being directed at AI in the United States. I’ve simply got to reconcile myself to the fact that my delight in this new technology is a rare minority position. I do think there are probably some things that AI companies can do to improve their image with the American public. But for now, all I can do is lament that a country that once boldly embraced the future now quails at it.
In fact, I’d argue that the Star Trek computer is the sci-fi portrayal of AI that most accurately predicts modern generative AI. Whole plots of Star Trek: The Next Generation revolve around holodeck prompts that generate hallucinations.
A.O. Hirschman
2025-11-29 17:11:52

Every year or so, there’s a new crop of articles about how you need to make $350k a year to live in New York City, or how $150k is lower middle class, or how you need $300k to be middle class, or why people making $400k are barely scraping by. These articles are always roundly ridiculed on social media, and a few days later someone writes a post going through the numbers and debunking the whole thing. And then everyone posts the famous tweet.
There’s just something very annoying about publications that cater to upper-class audiences trying to reassure those audiences that they’re actually struggling.
In a recent post on his Substack — followed by a shorter version in The Free Press — asset manager Mike Green made a similar claim, but got much more positive attention for it. Instead of claiming that a family that makes $400,000 is middle class, he claimed that a family making less than $140,000 is poor. This is from the Free Press version:
I realized that [the U.S. official poverty line]—created more than 60 years ago, with good intentions—was a lie…“The U.S. poverty line is calculated as three times the cost of a minimum food diet in 1963, adjusted for inflation.”…[W]hen you understand that number, you will understand the rage of Americans who have been told that their lives have been getting better when they are barely able to stay afloat…
[E]verything changed between 1963 and 2024. Housing costs exploded. Healthcare became the largest household expense for many families. Employer coverage shrank while deductibles grew. Childcare became a market, and that market became ruinously expensive. College went from affordable to crippling…A second income became mandatory…But a second income meant childcare became mandatory…two cars became mandatory…In 2024, food-at-home is no longer 33 percent of household spending. For most families, it’s 5 to 7 percent. Housing now consumes 35 to 45 percent. Healthcare takes 15 to 25 percent. Childcare, for families with young children, can eat 20 to 40 percent.
If you keep [the original] logic [of the poverty threshold]—if you maintain [the] principle that poverty could be defined by the inverse of food’s budget share—but update the food share to reflect today’s reality, the multiplier is no longer three.
It becomes 16. Which means…the threshold for a family of four—the official poverty line in 2024—wouldn’t be $31,200. If the crisis threshold—the floor below which families cannot function—is honestly updated to current spending patterns, it lands at close to $140,000.
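The arithmetic behind that update is simple: the multiplier is just the reciprocal of food’s share of household spending. Here’s a minimal sketch of the logic as I read it (the 6% food share below is a stand-in for Green’s “5 to 7 percent” range, not his exact input):

```python
# Green's updated-multiplier logic, reconstructed: the poverty-line
# multiplier is the reciprocal of food's share of household spending.
multiplier_1963 = 1 / (1 / 3)  # food was ~1/3 of spending -> 3x
multiplier_2024 = 1 / 0.06     # food is ~5-7% of spending -> ~16.7x
print(round(multiplier_1963), round(multiplier_2024, 1))  # 3 16.7
```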
And just to double-check this number, Green does a quick calculation of what a family of four would need in order to afford the necessities of modern life, and comes up with a similar number:
I wanted to see what would happen if I ignored the official stats and simply calculated the cost of existing. I built a basic needs budget for a family of four (two earners, two kids). No vacations, no Netflix, no luxury. Just the “participation tickets” required to hold a job and raise kids in 2024. Using conservative data for a family in New Jersey:
Childcare: $32,773
Housing: $23,267
Food: $14,717
Transportation: $14,828
Healthcare: $10,567
Other essentials: $21,857
Required net income: $118,009
Add federal, state, and FICA taxes of roughly $18,500, and you arrive at a required gross income of $136,500.
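For what it’s worth, Green’s line items do add up as quoted; here’s a quick check:

```python
# Sanity check: Green's New Jersey budget items, as quoted above.
budget = {
    "Childcare": 32_773,
    "Housing": 23_267,
    "Food": 14_717,
    "Transportation": 14_828,
    "Healthcare": 10_567,
    "Other essentials": 21_857,
}
net = sum(budget.values())  # required net income
gross = net + 18_500        # plus rough federal, state, and FICA taxes
print(f"Net: ${net:,}, Gross: ${gross:,}")  # Net: $118,009, Gross: $136,509
```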
Wow! Two different methodologies, but the same conclusion: the cost of merely participating in the modern economy, for a family of four, is around $140,000. If your family makes less than that, you must be poor.
This post spread like wildfire, and was generally well-received. It turns out that there’s a much bigger market for the idea that $140,000 is poor than there is for the idea that $400,000 is middle-class.
But despite its popularity, Green’s claim is wrong. Not just slightly wrong or technically wrong, but just totally off-base and out of touch with reality. In fact, it’s so wrong that I’m willing to call it “very silly”. I know Mike Green, and I count him as a friend, but we all write silly things once in a while,1 and when we do, we deserve to be called out on it.
Why is the $140,000 poverty line silly? Well, there are two main reasons. First, Mike actually just gets a lot of his numbers wrong when he makes his calculations. And second, the way Mike is defining “poverty” doesn’t make any sense.
I’ll go through both of those points, but first, let’s step back a second and talk about why the claim that the real poverty line is $140,000 should immediately strike you as highly suspicious.
In economic policy debates, it’s important to be able to sniff out claims that aren’t going to hold up. I’m not saying that “extraordinary claims require extraordinary evidence”, as Carl Sagan said. I’m saying that when you see a claim that sounds way off, there often isn’t any good evidence for it at all.
If Mike is right, most American families are poor. Median income for a family of 4 in the United States is $125,700, which means that more than half of 4-person families make less than $140k a year. So if Mike Green is right, more than half of American families are poor — or at least, more than half of 4-person families.
When Mike says “poor”, he means that people can’t afford what he calls a “participation ticket” — a basket of basic necessities. He names the things in that basket: food, rent, health insurance, transportation (including two cars), “other necessities”, and child care. If he’s right, then more than half of American families lack one or more of the basic necessities of life.
Is that true? Well, we can check these necessities one by one. Let’s start with food. For caloric consumption, we can’t really find a median (most people don’t count calories, so you can’t do surveys), but we can find an average. And average caloric intake per person has gone way up over time.
The distribution of how much food people eat probably isn’t very skewed (there isn’t one guy eating a billion calories…I hope!), so this means the typical American is eating a lot of food. In fact, of all the countries on the planet, only Irish people eat more calories than Americans. As for food insecurity, America has a lower share than practically any other country on Earth, including Scandinavia.
That statistic is for severe food insecurity; about 10% of married-couple households report some level of food insecurity. So almost all American parents are putting food on the table for their families.
How about shelter? About 14% of American children have living situations with more than one person per room, which is how we define “overcrowded”.
We can also look at floor space. For 4-person households, total floor space per capita was 524 square feet in 2020 (it’s much higher for smaller households). In 1960, average floor space per person in newly built homes was only 435 square feet. The 1960 average for 4-person families across all homes would have been a lot smaller than that, since the housing stock was full of older, much smaller homes from previous years.
So we’ve definitely seen a very big increase in how much space American families have to live in, over time. Also, for what it’s worth, Americans have more living space than people in almost any other country. In sum, most Americans are doing fine in terms of shelter and living space.
How about health insurance? Here we have some very good news: The total percent of uninsured Americans has fallen to only 8%.
[Chart: share of Americans without health insurance over time]
And in fact, the news gets better! According to the CDC, only 5.1% of American children were uninsured as of 2023. So at this point, almost all Americans in 4-person families are going to have health insurance.
(Thanks, Obama!)
What about transportation? Here’s a chart I found for the number of vehicles for 4-person households in 2022:
[Chart: number of vehicles owned by 4-person households, 2022]
According to this data, more than 80% of America’s 4-person households have 2 or more cars. Presumably some fraction of the remaining households have single breadwinners, and thus — by Mike Green’s reckoning — don’t need two cars, while a few live in places with good public transit. So most Americans do have adequate transportation.
As for child care, I don’t have good numbers on how many Americans have it (since it comes in many forms). But as Mike says, child care is something you get so that both parents can go to work; ultimately, it’s not something you need in and of itself in order to live a good life. So we can omit it from this list.
The basic point here is that:
Most Americans have plenty of food to eat
Most Americans have a comfortable amount of living space
Most Americans have health insurance
Most Americans have sufficient transportation
By themselves, those facts don’t prove that most Americans have all of these things. It’s possible that each family has to choose one or two of these to give up, and that they make different choices. So maybe you take the 10% of 4-person families who are food insecure at some point, add the 8% who lack health insurance, then add the 16% who have fewer than two cars, and add the 14% who have overcrowded living situations, and you get…48%! So it’s possible half of Americans lack at least one of the basic necessities of life…right?
Well, no, because there’s going to be lots of overlap between those groups. A lot of the people who don’t have enough to eat are going to be the same people who have only one car, a family member without health insurance, and a cramped living space. In reality, you can’t just add those percentages up — the percent of Americans who lack at least one of the basic necessities Mike Green lists is going to be a lot less than half. It’s probably going to be closer to the 25.5% of Americans who live in “relative poverty” (below 60% of median income). That’s higher than the official poverty rate, but not even close to Mike Green’s number.
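To see how much the overlap matters, here’s a toy calculation using the four percentages above. The “independent” and “fully nested” cases are illustrative bounds, not estimates; since these deprivations are positively correlated in the real world, the true share should land somewhere between them:

```python
# Toy illustration of why the four percentages can't just be added up.
p = {
    "food insecure": 0.10,
    "uninsured": 0.08,
    "fewer than 2 cars": 0.16,
    "overcrowded": 0.14,
}
naive_sum = sum(p.values())  # assumes zero overlap between the groups

# If the four deprivations struck families independently:
have_none = 1.0
for share in p.values():
    have_none *= 1 - share
independent_union = 1 - have_none

# If the smaller groups were entirely nested inside the largest one:
nested_union = max(p.values())

print(f"Naive sum:       {naive_sum:.0%}")          # 48%
print(f"If independent:  {independent_union:.0%}")  # ~40%
print(f"If fully nested: {nested_union:.0%}")       # 16%
```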
In other words, the whole idea that more than half of Americans are poor doesn’t fit with anything we know about the lifestyles that typical Americans actually live. That’s why our intuition should be sounding the alarm like crazy when we read a line like “the real poverty line is $140,000”. We’re not talking about aliens from Mars here. Most of us either are middle class, or know people who are, and they don’t lack the basic necessities of life. They aren’t missing their “participation tickets”.
So how did Mike get this so wrong? It turns out there are two reasons. First, he used some bad numbers to make his calculations — so even on his own terms, the $140,000 number is bad. But also, the way he goes about calculating the price of a “participation ticket” in the American economy just doesn’t make sense.
2025-11-28 18:06:37
“This is Bach, and it rocks/ It’s a rock block of Bach/ That he learned in the school/ Called the school of hard knocks” — Tenacious D
Has culture stagnated, at least in the United States? There are a number of prominent writers who argue that it has. For example, Adam Mastroianni blames cultural stagnation on risk aversion resulting from longer lives and lower background risk.
Ted Gioia, meanwhile, blames risk-averse entertainment companies for locking up content with IP and using dopamine-hacking algorithms to monopolize consumers’ attention.
This being the 2020s, both writers bring plenty of data to support their arguments. I won’t recap it here, but basically, they look at various domains of cultural production like books, movies, music, TV, and games, and they show that:
Old media products (including sequels, remakes, and adaptations) have taken over from new products.
Popularity is now more concentrated among a small number of products.
I find that evidence to be fairly convincing. The counterargument, delivered by folks like Katherine Dee and Spencer Kornhaber, is that creative effort has shifted to new formats like memes, short-form videos, and podcasts. I think that’s definitely true, but I can’t help thinking that this explanation is insufficient. Regardless of what’s happening on TikTok, the fact that the cost of making movies has declined by so much should mean that there are more good new movies being made; instead, we’re just getting flooded with sequels and remakes. Something else is going on, and maybe Mastroianni and/or Gioia are on to something.
But anyway, there’s another thinker that I particularly like to read on cultural issues, and that’s David Marx. Marx, in my opinion, is a woefully underrated thinker on culture. His first book, Ametora — about the history of postwar Japanese men’s fashion — is an absolute classic. His second book, Status and Culture, is a much heavier and more complex tome that wrestles with the question of why people make art; it is also worth a read, although I think there are lots of things it overlooks.
Back in the spring of 2023, I met David in a park in Tokyo. We walked around, and he asked me what book I thought he should write next. I asked him to tell us where internet culture — and by extension, all of culture — should go from here. He replied that if he were going to write a book like that, he would first have to write a cultural history of the 21st century; if we’re going to know where we ought to go, we need to understand where we’ve been.
Blank Space: A Cultural History of the Twenty-First Century is that book.
Most of Blank Space is just a narration of all the important things that happened in American pop culture since the year 2000. You can read all about the New York hipster scene, the startling influence of Pharrell Williams and the Neptunes, the debauchery of Terry Richardson, the savvy self-marketing of Paris Hilton and Kim Kardashian, and so on. You can learn a bit about “poptimism” and 4chan memes. You can relive the excitement of the early Obama years and the disillusionment that followed the rise of Trump. And so on.
It’s the kind of retrospective that TIME magazine used to do, but higher-quality and book-length — a good book to have on your shelf.
In most authors’ hands, just as in old issues of TIME, this would come off as a jumbled laundry list — just one damn cultural factoid after another. But David Marx’s talent as a writer is such that he can make it feel like a coherent story. In his telling, 21st century culture has been all about the internet, and the overall effect of the internet has been a trend toward bland uniformity and crass commercialism.
In fact, Marx’s skill at narrative history sometimes gets in the way of his attempts at grand theorizing. He’s so good at distilling the look and sound of the 2000s decade — hip-hop inspired streetwear, Neptunes tracks, and so on — that he ends up bringing the decade to life in vivid color. This ends up making it very difficult to think of the 2000s as a forgettable and bland period.
Other times, Marx’s own personal tastes lead to gaps in the narrative. He doesn’t deal much with film as a medium, and ends up missing the fact that the 2000s were a golden decade for indie film. Culture is not entirely defined by music and fashion.
Marx also doesn’t deal much with the explosion of Japanese cultural imports to the U.S. in the 2000s and 2010s — which is ironic, since that was the topic of his first book. Even if you’re only telling the story of American culture, foreign imports are important, since they can crowd out domestic products — kids can go read manga instead of comic books, watch anime instead of American TV, and so on. Globalization isn’t the same as stagnation; even if the center of production moves offshore, something is still being produced and consumed.
These are nitpicks, but the book’s narrative methodology has a more serious weakness: the long tail interferes with any attempt to tell a coherent story of culture. If everyone listens to a few mainstream bands, you can name those bands and identify the sound of a decade; if everyone is instead listening to their own tiny indie band that only they and 100 other people follow on Soundcloud, the task of describing the totality of those bands is hopeless.
Sometimes I feel that as a card-carrying Gen X hipster, Marx over-indexes on the Nirvana Moment — that day in early 1992 when a flannel-wearing post-punk band from Seattle dethroned Michael Jackson on the charts. That was a cool moment, to be sure, as is any time that an indie upstart forces its way into the mainstream. But we can’t expect that to be the norm. Of the early 2020s, and the inability of TikTok creators to get rich, Marx writes:
Grassroots cultural activity posed little threat to the celebrity aristocracy of movies, TV, and supermodels. Only the mainstream could satisfy the eternal need for a shared culture…At best the monoculture could undergo slight cosmetic changes: a rotating cast of “royal houses” in the pop aristocracy rather than a true revolution.
But it seems to me that this is how things usually go. The reason it was so impressive and noteworthy that Nirvana dethroned Michael Jackson in 1992 is that that kind of thing almost never happens. Usually, the mainstream stays mainstream, and indie stays indie, and you never get to see your indie heroes overturn the firmament. You just sit there enjoying them because they’re yours — your little piece of the long tail. Hipsters don’t have to be revolutionaries — you can just sit there being smug about how only you and your five friends know about the world’s greatest band, instead of fuming about how they never make it onto the Billboard Hot 100.
What of the evidence mustered by Mastroianni and Gioia, that popularity is becoming concentrated among a smaller and smaller set of cultural aristocrats? This doesn’t disprove the idea of the long tail. It may be that the distribution of taste is becoming more leptokurtic — more concentrated at the center, but more widely distributed at the fringes.
In the worlds of online video, graphic novels, TV, and fashion, this is almost certainly what is happening. The best fashion styles in the world are not being shown on the runway at Paris Fashion Week; they are created by some 21-year-old Japanese fashion student who woke up in an odd mood. The best YouTube videos have only 10k or maybe 100k views, and the best TikTok videos probably have a lot less than that. And in my opinion, there are more awesome niche graphic novels being made now than ever before, even though each one has a relatively small audience. (Update: As a commenter pointed out, webcomics are another medium experiencing an explosion of creativity right now.)
As for television, in the 1990s everyone watched Seinfeld and Frasier and Friends, and a few people had heard of The State or Mad TV, but in the 2010s there was an explosion of mid-sized comedy shows that catered to more niche senses of humor — Party Down, Key & Peele, Kim’s Convenience, Letterkenny, Parks & Recreation, and so on.
But in some other domains, like books, traditional film, and music, this is almost certainly not what’s happening. Unlike with YouTube videos, I haven’t discovered a few cool indie films in the 2020s that no one else appreciates — I have discovered zero. The same goes for science fiction books (my genre of choice). That’s a strong indicator that there really just aren’t many out there; word of mouth is powerful, and lots of people share my general tastes, and word gets around. The same is true of musical artists, to a lesser degree; I discover a few awesome new ones here and there, but in general there were a lot more cool niche indie artists in earlier decades.1 If more existed, I would find them.
I’m thus playing a bit of devil’s advocate here. The stagnation that Marx, Mastroianni, and Gioia perceive is real, even if it’s not evenly distributed. The bland omnivorous taste that Marx complains about is a very real thing in the age of social media. Taylor Swift may be our modern Michael Jackson, but there’s no Nirvana to challenge her. Meanwhile, Dee and Kornhaber’s argument that memes are the true art of the modern age rings a bit hollow — I’ve seen more memes than I can count, and while a few of them are clever, almost none are brilliant, and the overwhelming majority are just boring political shouting.
So yes, Marx is right, even though he’s not completely right. The utter dominance of boring unoriginal pablum in music, film, and literature cries out for an explanation. Michael Jackson was the King of Pop, but his sound was still original. Indiana Jones was a blockbuster hit, but it was wildly creative and incredibly fun. What movie can we say that about today? Slowly, one cultural medium after another is having the life squeezed out of it by some nefarious force, even if others are still going strong.
What is that force? Unlike David, I’m highly skeptical of the idea that culture moves autonomously — I don’t think that as a society, we just suddenly decide to do things differently. Marx’s story works great as a narrative, but less well as a causal theory — you can’t just call on people to be less “poptimist” and expect any real results.
Mastroianni’s hypothesis about risk aversion is somewhat plausible — who wants to be a starving artist when you can design characters for Reddit and make six figures? But the gradual upward creep of risk aversion can’t explain why some cultural fields have flowered with creativity in recent decades. As for Gioia’s hypotheses about monopoly power and predatory algorithms, this might explain why the mainstream is more stagnant, but it can’t explain why there are so few great indie bands in the age of Soundcloud.
My own personal guess is that at least part of this force must be technological in nature. I wrote about this idea back in May.
I won’t reiterate my whole argument — this post is supposed to be a review of David Marx’s book, so I don’t want to make it all about my own ideas. But the basic thesis is that novel cultural production comes from novel technology — that when we invent the electric pickup, we predictably get several decades of electric guitar music, as people play around and discover what things are possible with electric guitars. But eventually, the space of cultural possibilities opened up by a new technology gets “mined out”, progress falters, and a canon gets canonized.
This idea explains why string orchestras are cover bands — the basic technology of violins and flutes and oboes was mostly perfected centuries ago, so classical music progresses only glacially. It can also potentially explain the unevenness of cultural creativity in recent decades. Obviously, short form video became great when camera phones became ubiquitous. Books, on the other hand, are only a little easier to write than before (thanks to word processors replacing typewriters), so it makes sense that creativity in literature might be flagging a bit.
This technological explanation is fairly pessimistic. It ties artistic output to technological progress, which we don’t really know how to accelerate. And worse, it implies that every burst of cultural creativity is inherently temporary.
But I doubt this is the only way that technology affects artistic output. In the chapter of Blank Space on how to restore cultural creativity, Marx calls for a more fragmented internet culture that allows subcultures to flourish before their innovations get harvested by the mainstream. Writers like Steven Viney2 and Yomi Adegoke have long complained that subcultural distinctiveness is impossible in the age of the internet. If artists can only make art while standing in the middle of the town square, you’re going to get more boring art.
I couldn’t agree more — and that’s why I think the ongoing fragmentation of the internet away from mass social media and into small private group chats is going to be healthy for cultural output.
Most of Marx’s other recommendations for restoring cultural innovation revolve around the idea of restoring taste, gatekeeping, and criticism to pop culture. While I can imagine that this might help, the idea is only vaguely sketched out in the final pages. Blank Space works well as a history, but doesn’t have much time for prescriptions. For that, we’ll have to wait for a future David Marx book — the one I requested back in the park in 2023.
I’m sure that when that one comes out, it will be great. In the meantime, Blank Space is a fun read, and you should buy it.
Perhaps this is because I’ve gotten older and my tastes have ossified. But if so, why do I feel like this is such a golden age for TV and graphic novels and short-form video? Why would being an old fogey only ossify one’s taste in music and film, but not in other media?
Viney’s contribution here is dripping with irony, since he wrote his article for Vice, a magazine whose whole reason for existence was to find obscure subcultures and use the internet to expose those subcultures to the mainstream.