
Malaysia: Truly Asia

2025-10-09 00:13:14

After I spoke at the Network State conference in Singapore this week, I took a three-day side trip to Malaysia. As I always say, there’s a very limited amount you can learn about a country by going there and wandering around the capital city for a couple of days, so I’m not going to pretend that this little vacation gave me deep insight into the country. But people do seem to like these little travelogue posts — here are the ones I did for the Netherlands, Taiwan, Ireland, and Singapore — so I figured I’d do one for Malaysia too.

Very few people outside Malaysia seem to care about the country, or even know much about it, and that’s not surprising. It’s not very big — just 35 million people or thereabouts. It’s at the lower end of the income scale for a developed country — about $43,000 in per capita GDP (PPP), similar to Turkey or Greece. It doesn’t have many cultural exports — you won’t see M-pop bands, or M-dramas, or Malaysian movies winning international awards. Malaysian food isn’t even very distinct; it’s a blend of dishes from nearby countries, maybe with a bit more spice and oil. When my Singaporean friends tell me they go to Malaysia “to eat”, what they primarily mean is that they go to eat various types of southern Chinese cuisine but with a lot more spice and oil than Singaporean restaurants typically serve.

I think this gets at a big reason Malaysia doesn’t occupy much mindshare, which is also a reason it’s an interesting country — it’s basically an international crossroads. Geographically, it’s located close to China, India, Indonesia, and Thailand, and because it’s a majority-Muslim country, it has connections to the Arab world as well. There was sort of a small Malaysian Empire around 1400 to 1000 years ago, and an even smaller one around 500 years ago, but Malaysia has spent most of its history ruled by outside powers (most recently the British) who brought in outside cultures and people to mix in the area.

More than perhaps any other country, Malaysia is — as the famous tourism slogan says — truly Asia.

A little over half of the people in Malaysia are part of an ethnic group called “Malays” (hence the name of the country). If you think of a “race” as a bunch of people who all have similar appearances, then “Malay” isn’t really a race. It’s more like five races in a trenchcoat — Javanese, Thai, Bornean, Indian, Chinese, and some other smaller ones mixed in. You immediately notice this walking around Kuala Lumpur — other than hijabs for women and thin moustaches for men, there isn’t much to indicate that you’re looking at one singular “people”. Nor is “Malay” even really an ethnicity, if an “ethnicity” is a bunch of people who speak the same language; the “Malay” language is broken up into a bunch of different local dialects.

In the years since independence, the Malaysian government has worked hard to create unity from this jumble of diversity. Religion has been used as a force to unify the Malay people — in order to be classified as a Malay in Malaysia, you have to be Muslim, and the government tries to enforce a single nationwide form of the religion.

The Malaysian government has also tried to create uniformity through race. They created a new racial concept called “Bumiputera” (or “Bumi” for short), meaning “sons of the soil”, which includes the Malays and some minorities. If you ever doubted that race is a social construct, check out the Bumiputeras. Essentially, it’s a political coalition — the Bumiputeras stand in contrast to the Chinese, who make up almost a quarter of the country’s population.

The Malaysian Chinese are an economic elite, owning many of the country’s businesses, and the Bumiputeras were basically created as a coalition to make sure everyone else got a cut of the money and power. Racial discrimination is enshrined in Malaysian law, with part of the Malaysian constitution ensuring that Bumiputeras get government jobs and other benefits, and a long-running economic policy ensuring that Bumiputeras own a significant amount of the country’s stocks and get various other economic support. Malaysians — or at least, Malaysian Bumiputeras — generally prefer to think of these policies not as racial discrimination, but as “affirmative action”.1

So anyway, Malaysia is an incredibly diverse place that’s in the process of trying to force itself to think of itself as homogeneous. It’s an interesting case study for Americans who believe the now-fashionable right-wing dogma that diversity breeds conflict. On one hand, Malaysia has a history of conflict between Malays and ethnic Chinese — often including race riots — which is why Singapore separated from Malaysia in 1965. But as of today, Malaysia has a wildly diverse cobbled-together “majority” and a very substantial Chinese minority, and yet it’s one of the safest countries on the planet. By the most recent measurement, Malaysia has a murder rate of 0.74 per 100,000 — lower than Sweden or Denmark, and barely higher than China. The numbers are probably a little bit understated, but not by much.

I don’t want to over-interpret my impressions from just walking around, but you can sort of feel this safety on the street. No one seems nervous, everyone seems friendly and relaxed. There are some security guards, but they’re not armed, and there’s not much police presence at all. Nor do Malaysians seem to feel a need to sequester themselves in far-flung suburbs or compounds for physical safety — driving outside of the city center, you see endless forests of apartment towers, similar to what you see in Taiwan, Hong Kong, or Singapore.

Malaysia is a developed country, but Kuala Lumpur feels like it’s still developing. The city is an absolute jumble of different urban layouts, land uses, and architectural styles.

Massive arterial roads exist right alongside little decaying side streets and glittering, immaculate shopping areas. There are gas stations next to apartment towers, overgrown lots next to scenic tourist sites. Everyone jaywalks, even across major roads. Cars are everywhere, but there are a ton of people just standing around on the street.

It’s hot on the street. Malaysia is nearly smack dab on the equator, and it’s pretty much the same weather year-round — hot and muggy. As in Singapore, air conditioning is life here; without it, it’s hard to think straight, and as Lee Kuan Yew pointed out, you can only get work done at dawn or dusk. It’s not pleasant to think about the half of the Malaysian population who still have to go without it, trapped in sweltering dark rooms in front of buzzing dusty fans. In Malaysia, economic success means owning an air conditioner.

This is why you have to go to the Global South to really understand why economic development is so important. Heating a home through a cold winter doesn’t require advanced technology — farmsteads have burned wood and dung for heat since time immemorial. But dumping heat out of a building is a lot harder; you need an AC. And as the world gets hotter, large swathes of humanity are going to need AC to survive, not just to feel comfortable. Every pious upper-class German and sanctimonious Brit who dismisses the importance of air conditioning, or complains about the carbon emissions, should take a trip to Malaysia and try going without it for a few days.

Malaysia’s heat also made me a bit more pessimistic about the future of walkable urbanism. People walk down the street to shop in Malaysia, but they don’t like it; shop-lined streets tend to be more shabby and downmarket, while all the upscale retail is packed into glittering, air-conditioned malls.

Japan is discovering this now. Climate change has made Japanese summers almost unbearably hot — hotter than Malaysian summers, in fact. This is rapidly turning Japan’s famous, wonderful, walkable urbanism into a liability for months out of the year, and it may be one reason why so much of Japan’s new retail development is concentrated not in traditional zakkyo buildings, or shotengai (covered outdoor arcades), but in large indoor malls of the type that you see all over Southeast Asia.2

Southeast Asian malls are the future, and that future is boring. Shop-lined streets develop slowly and organically over time, with independent businesses trickling in and out, sometimes getting a good deal on rents. The selection of shops in a mall tends to be centrally managed, and the managers tend to pick big chain stores and famous brands rather than quirky indie businesses. Malls also squeeze tenants for every drop of rent they can, meaning that indie stores with smaller profit margins have trouble competing for space. More and more interesting retail is going to move online.

The nights in Kuala Lumpur are still hot and muggy, but at least the sun isn’t beating down. And that’s when the streets come alive. Tons of people just stand around on the street in the central districts, or stroll around with their dates, or drink in open-air bars, or shop at food stands. Rock bands set up and play on street corners, and people just gather in crowds and dance. Malaysia’s street scene isn’t quite as lively as you might find in, say, Latin America or Southern Europe, but seeing a bunch of women rocking out in hijabs makes it clear that “Muslim country” can mean a lot of different things.

Anyway, Kuala Lumpur feels like a city in progress. New malls and apartment buildings are going up everywhere, often in a chaotic, jumbled fashion. Money is obviously pouring in.

Where is it pouring in from? Malaysia is a very export-oriented economy — exports are more than two-thirds of GDP, compared to only about one-fifth in China or Japan. Malaysia has always had a thriving commodity export business — that’s why the British wanted it. It still sells a significant amount of oil, gas, palm oil, and the like. But in recent decades, the country has built up a world-class electronics industry. Electronics — especially semiconductors — now make up the biggest chunk of Malaysia’s exports:

Much of this industry was built by foreign companies that set up in Malaysia. But it’s the Malaysians themselves who have learned how to do the hard technical work. The country is now the international hub of crucial parts of the semiconductor supply chain, like packaging and testing. And it’s slowly climbing the value chain from low-end legacy chips to the high-tech, high-value new stuff.

In fact, tiny Malaysia exports almost twice the total amount of semiconductors that the United States of America does — $74 billion to $43 billion. In per capita terms, that’s almost 16 times as much in chip exports as the U.S. Not bad for a post-colonial Muslim country in an underdeveloped region, eh?
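
A quick back-of-the-envelope check of that comparison (the export totals are from the post; the population figures, roughly 35 million and 335 million, are my own assumptions):

```python
# Back-of-the-envelope check of the chip-export comparison.
# Export totals are from the post; the population figures are rough assumptions.
malaysia_exports = 74e9   # USD
us_exports = 43e9         # USD
malaysia_pop = 35e6       # ~35 million (assumed)
us_pop = 335e6            # ~335 million (assumed)

print(f"total ratio: {malaysia_exports / us_exports:.2f}x")  # ~1.72x, "almost twice"
per_capita_ratio = (malaysia_exports / malaysia_pop) / (us_exports / us_pop)
print(f"per capita ratio: {per_capita_ratio:.1f}x")          # ~16.5x
```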

Malaysia is touted as an industrial policy failure in the book How Asia Works, because it never managed to build a domestic auto industry. But simply letting in foreign electronics companies and targeting crucial pieces of the semiconductor supply chain proved to be a huge industrial policy success. That’s the reason all those new buildings are going up in Kuala Lumpur, and that’s the reason more and more Malaysians can experience the blessing of air conditioning.

So there are actually a few reasons why people around the world should pay more attention to Malaysia. Its experiment in forging a majority identity group out of wildly disparate parts has a lot to teach the world about the social construction of race and nationhood. And Malaysia’s success in the electronics industry has made it the second developed country in Southeast Asia, after Singapore (not counting the petrostate of Brunei). It provides a model that larger nearby countries like Vietnam, the Philippines, Indonesia, and Thailand might be able to follow.



1

This whole arrangement, and the Bumiputera concept itself, made me think of the “POC” concept in modern America. The idea of treating Nigerian Americans, Korean Americans, and Bangladeshi Americans as a single racial group was always a little far-fetched, but it makes sense when you understand that racial identities often start as attempts to forge durable political coalitions. In America’s case, that meant forming a nonwhite grouping to balance out the seemingly unified power and redistribute the disproportionate wealth of the White group. “POC” hasn’t really solidified as an identity, and I doubt it will, but if it did, it would probably look a bit like Malaysia’s Bumiputera.

2

Fortunately, some Japanese developers are fighting this shift by installing AC and other cooling devices outside buildings, and by building semi-outdoor quasi-mall spaces where indie businesses can still thrive.

The U.S. political situation

2025-10-08 01:06:50

Photo by Gage Skidmore via Wikimedia Commons

I don’t particularly like to write about the U.S. political situation, but I do it anyway. American politics is never particularly tranquil, but there are certainly smoother periods, and it’s in those times when econ policy wonks like myself really flourish. When we know who’s in charge, and what they want, and everyone basically accepts their legitimacy, you can write things like “I think we should replace the corporate tax with a Destination-Based Cash Flow Tax”, and you can imagine that people might even listen to you and there might even be some policy momentum in the direction of your idea. But at times of political conflict, wonkish concerns about optimal policy get drowned out by the imperatives of raw power. No one wants to hear about the DBCFT during a civil war.

America is not in a civil war right now. This is a simple, plain, obvious fact. There is no large-scale violence between organized groups. In fact, there’s some evidence that political violence has been going down for a few years now. Here’s CSIS’s data on terrorist attacks:

Source: CSIS

Alternative data sets largely paint the same picture.

Left-wing terrorism has increased a bit this year, including the murder of Charlie Kirk, but it’s still at a pretty low level. Right-wing terrorism has collapsed this year, possibly because rightists are more satisfied with Trump in power (though this didn’t seem to deter them during Trump’s first term). And the wave of jihadist attacks in the 2010s has faded. There are certainly some minor clashes between ICE and protesters in American cities, but nothing really big, and to my knowledge no one has been killed yet.1

So not only is the U.S. obviously not in a civil war, we also aren’t experiencing anything like Italy’s Years of Lead or Ireland’s Troubles.

But this has not stopped the Trump administration and many of its supporters from talking as if the United States is in a civil war. Donald Trump says this regularly. For example, here’s what he said to a gathering of generals at Quantico a week ago:

America is under invasion from within. We’re under invasion from within, no different than a foreign enemy, but more difficult in many ways because they don’t wear uniforms…These people don’t have uniforms. But we are under invasion from within. We’re stopping it very quickly. After spending trillions of dollars defending the borders of foreign countries, with your help, we’re defending the borders of our country from now on…It’s a war from within. Controlling the physical territory of our border is essential to national security. We can’t let these people live.

Who does Trump think America is at war with? The answer seems clear. It’s A) immigrants, especially (but not exclusively) illegal immigrants, and B) any Americans who try to oppose Trump’s deportation efforts. Just today, Trump threatened to invoke the Insurrection Act against anti-ICE protesters in Portland.

Stephen Miller, Trump’s powerful advisor, has broadened his definition of the enemy to include judges who rule against Trump’s policies:

And of course in the aftermath of the Charlie Kirk assassination, a very large array of right-wing people declared that America was in a “war”:

Democrats have been much more reluctant to use this sort of violent language. But there are a few cases, such as Virginia politician Jay Jones:

Jones…imagined a hypothetical situation in which he had two bullets and needed to decide whether to shoot [GOP state legislator Todd] Gilbert, Adolf Hitler or Pol Pot. “Gilbert gets two bullets to the head,” Jones wrote. “Spoiler: put Gilbert in the crew with the two worst people you know and he receives both bullets every time.”…“Yes, I’ve told you this before,” Jones [said]. “Only when people feel pain personally do they move on policy.”

And when you poll people about whether violence should be used to create social change, you get a startling number of progressive youngsters saying “yes”:

It’s reasonable to see all this and be afraid that America is about to have an actual, real civil war. Yes, the numbers probably overstate the percent of people who would be willing to actually countenance violence. But any normal person would see polls like this and be worried.

So what’s going to happen next? And how did we get to this juncture? Unfortunately, we’ve reached the point where I’m forced to think about this if I want my ideas and opinions to stay relevant. So here are my assessments of the two broad “sides” in this so-far-mostly-rhetorical conflict, and where things might be headed.

My assessment of the American right


The Third Magic

2025-10-06 00:38:59

Art by Midjourney

I’m traveling again, so today we’ll have another repost. I’m reposting all of my New Year’s essays from the past few years, so here’s the one from 2023.

Back in 2023, ChatGPT was very new, and everyone was incredibly excited about it. I’m not an AI engineer myself, but I could tell that this was a type of technology unlike any other ever created. It seemed to me like actual magic — not Arthur C. Clarke’s metaphorical kind, in which a technology is so advanced that people don’t understand how it works, but the actual kind of spells that wizards cast in storybooks.

Unlike older technologies, AI isn’t interpretable; we can use it somewhat reliably to do amazing stuff, but there’s a ton of mystery meat in terms of how it actually did it. Unlike a car or a spaceship or a computer chip or a traditional piece of software, you can’t open up the hood of an AI model and see how it works in the mechanistic sense. This is not just because AI models are complex; we don’t even have a theoretical account of many of their internal mechanisms. Add to that the fact that generative AI is incredibly stochastic — you can give it the same prompt twice and get totally different outputs — and it really does start to look like the way magicians cast spells in fantasy novels.
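
To make the stochasticity concrete, here is a minimal sketch of temperature sampling, the standard mechanism by which generative models pick each next token. Everything in it (the tokens, the scores) is invented for illustration; it is not how any particular model works inside:

```python
import numpy as np

# Toy illustration of why generative AI is stochastic: a model outputs a
# probability distribution over possible next tokens and *samples* from it,
# so the same prompt can produce a different continuation on every run.
tokens = ["Long", "Ears", "Rule", "Rabbit", "Moon"]
logits = np.array([2.0, 1.5, 1.2, 0.8, 0.3])    # invented model scores per token

def sample_next_token(temperature=0.9):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                         # softmax turns scores into probabilities
    return tokens[np.random.default_rng().choice(len(tokens), p=probs)]

for run in range(3):
    print(f"run {run}: {sample_next_token()}")   # typically differs from run to run
```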

Here, for example, is a cute video I made with the new Sora 2 AI video generation app from OpenAI:

This is incredible. The AI came up with the phrase “Long Ears. Long Rule.” all by itself, without me prompting it! But no one really knows how the AI model did this. No one could have predicted in advance what this video would look like. And if you typed in this exact same prompt over and over again, you’d get a very different video each time. And no human or group of humans on planet Earth could predict how the next output would be different.

Anyway, this is thrilling, but at the same time it’s slightly worrying as well. This technology is so powerful that we’re going to have no choice but to rely on it for much of our daily lives. Even Terence Tao, generally regarded as the world’s smartest man, has said that he now asks AI models to do some small pieces of his research. Every scientist must now be, to some degree, a spellcaster.

But — assuming we don’t make a huge breakthrough in interpreting the models — this power will come at a great cost. Technology will be more powerful, but less reliable — it’ll be like the holodeck computer in Star Trek: The Next Generation, whose responses to the crew’s prompts are often so disastrous that they become the plotlines of whole episodes.

On top of that, I worry that humanity will become infantilized by this new magic we’ve created. The Industrial Age was an age of rationality for good reason — if people wanted to understand the marvels that were transforming their lives, they only had to take the machines apart and look at the mechanisms. Everything in the world seemed mechanistic and comprehensible.

That age of reason is already under assault in the U.S. from wacky politics, but I worry that in an age of AI magic, it could break down even more. We could find ourselves wandering, confused, in a world of ineffable mysteries and capricious gods.

That thought made me want to repost my New Year’s essay from 2023. In that essay, I argued that humanity had basically found two great tricks for gaining power over the world — history, which records the past, and science, which derives simple “laws” from controlled experiments. AI, I speculated, could be a third thing entirely — a tool that harnesses regularities too complex and slippery to succumb to science, but powerful and important enough to give humans added mastery over the world around us.

Anyway, here’s that post again. And at the end, I’ll add some reactions by a much more advanced version of ChatGPT than the one I queried back in early 2023.


The Third Magic

This post is essentially a rewrite of a big and half-formed idea that I wrote on my old blog eight years ago. I was deeply dissatisfied with that post, but I thought it contained a few interesting seeds. So now I’m trying again, and will undoubtedly fail again. But hopefully something else interesting will come out of the attempt.

Humanity’s living standards are vastly greater than those of the other animals. Many people attribute this difference to our greater intelligence or our greater linguistic communication ability. But without minimizing the importance of those underlying advantages, I’d like to offer the idea that our material success is due, in large part, to two great innovations. Usually we think of innovations as specific technologies — agriculture, writing, the wheel, the steam engine, the computer. The most important of these are the things we call “general purpose technologies”. But I think that at a deeper level, there are more profound and fundamental meta-innovations that underlie even those things, and these are ways of learning about the world.

The first magic

Humans’ first big meta-innovation, roughly speaking — the first thing that lifted us above an animal existence — was history. By this, I don’t just mean the chronicling of political events and social trends that we now call “history”, but basically any knowledge that’s recorded in language — instructions on how to farm, family genealogies, techniques for building a house or making bronze, etc. Originally these were recorded in oral traditions, but these are a very lossy medium; eventually, we started writing knowledge down, and then we got agricultural manuals, almanacs, math books, and so on. That’s when we really got going.

Animals make tools, but they don’t collectively remember how to make those tools. History, especially written history, is what allows tinkering to stick — it means that when one human finds an ingenious new way of doing something, there’s a good chance that many other humans, and eventually all humans, will know how to do it. And of course those techniques can then build on each other over time. In the modern day we think of history as primarily a social science, but fundamentally it’s the foundation of technology as well; it’s the thing that lifted us from an almost animal existence into the agricultural age.

The second magic

Then — I won’t say exactly when, because it wasn’t a discrete process and the argument about exactly when it occurred is kind of boring — humanity discovered our second magic trick, our second great meta-innovation for gaining control over our world. This was science.

“History”, as I think of it, is about chronicling the past, passing on and accumulating information. “Science”, by contrast, is about figuring out generally applicable principles about how the world works. Chronicling the motions of the planets is one thing; being able to predict the motion of planets you’ve never discovered is quite another. Tinkering with steam engines and writing down your findings can lead to progress; understanding the principles of thermodynamics and being able to use those to design a better engine, without having to tinker for decades or centuries, is far more effective.

Science is often done in a lab, but it doesn’t have to be. Ecologists can derive laws for predator-prey relationships simply by observing nature. Kepler didn’t need to do experiments with gravity in order to write down laws of planetary motion that would hold generally true. Nor do you need math to do science; many of the principles that govern the world can be expressed purely in words and pictures. But just as writing supercharged the process of recording events, controlled experimentation and mathematics supercharged the process of discovering the laws of the Universe.

Controlled experiments are powerful because they let you go from the small to the large — you can roll balls down a ramp in your house, and use that to figure out the laws of motion that control the motions of the stars. You can grow peas in your yard and use these to figure out laws of heredity. And mathematics is powerful because it lets you express those laws in ways that are easy to use across a dizzying breadth of applications. If you’re an artilleryman, field manuals embodying decades or centuries of cumulated experience can tell you how to calibrate the range and accuracy of your cannon; physics, derived from experiments with little wooden balls and expressed in mathematics, can tell you how to hit a target with far less trial and error.
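
As one concrete instance of that compression, the simplest textbook ballistics result (a sketch that ignores air resistance, which real gunnery cannot) collapses whole field manuals of trial and error into a single line:

$$ R = \frac{v_0^2 \sin(2\theta)}{g} $$

where $R$ is the range, $v_0$ the muzzle velocity, $\theta$ the elevation angle, and $g$ the gravitational acceleration.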

If you think about it, it’s pretty incredible that the world actually works that way. If you went up to someone in the year 1500 and told them that one kooky hobbyist rolling little balls down ramps could be right about how the physical world works, when the cumulated experience of millions of human beings around him was wrong, and that his insights could be written down by an even kookier kook in an arcane language that few could speak, and that this language would allow its speakers to apply the laws to do wondrous feats that others couldn’t possibly do, they would have thought you were crazy. They did think that was crazy. And yet, it worked. It worked more spectacularly than anything had ever worked before, and the economic result looked like this:

In 1960, the physicist Eugene Wigner wrote an essay called “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. It’s really not about math so much as it’s about the fact that such a thing as simple, generalizable “laws of the universe” exist in the first place. Why should principles like physical laws or the basics of genetics be so simple, and yet so generalizable and consistent across time and space? There’s no obvious reason this should be true, and yet it is.

Well, for some things it is. Charles Marcus, my freshman physics teacher, told me that “physics is great, but it will never tell you how to make a tree.” That wasn’t entirely true. But it is true that a whole lot of complex phenomena have so far defied the approach that gave us the laws of physics and genetics. Language, cognition, society, economics, complex ecologies — these things so far don’t have any equivalent of Newton’s Laws, and it’s not clear they ever will.

This problem has been recognized for a very long time, and thinkers tried several approaches to get around it. Some hoped that all complex phenomena would be governed by emergent properties — that simplicity would emerge at higher levels of complexity, allowing us to discover simple laws for things like psychology and economics even without connecting those laws to the underlying physics. Indeed, this idea is implicit (or, occasionally, explicit) in the way economists try to write down simple mathematical laws of collective human behavior. People make fun of this approach as “physics envy”, but sometimes it really works; auction theory isn’t derived from physics, but it has been able to make very effective predictions about how much people will pay for Google ads or spectrum rights. Ditto for “gravity models” of trade, migration, retail shopping, etc. Sometimes emergence works.
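
For reference, the generic gravity form (a standard textbook specification, not taken from any particular paper) predicts the flow $X_{ij}$ between places $i$ and $j$ from their economic sizes and the distance between them:

$$ X_{ij} = G \, \frac{M_i^{\alpha} M_j^{\beta}}{D_{ij}^{\gamma}} $$

where $M_i$ and $M_j$ are measures of economic mass (e.g., GDPs), $D_{ij}$ is the distance between them, and $G$, $\alpha$, $\beta$, $\gamma$ are fitted constants. Nobody derived it from individual behavior, but it predicts remarkably well.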

Sometimes, though, it doesn’t — or at least, it doesn’t yet. But in psychology, in macroeconomics, in natural language processing, and many other domains, the search for laws of nature has been mostly stymied so far, and it’s not clear when real progress might ever be made. Wigner goes so far as to postulate that some domains of human knowledge might never be described by such simple, generalizable principles.

Other approaches for getting around the problem of complexity — chaos theory, complexity theory — yielded interesting insights, but ultimately didn’t succeed in giving us substantially more mastery of the phenomena they dealt with. In the late 20th century, the problem of complexity was like a looming wall up ahead — as scientists found more and more of the laws that could be found, a larger and larger percentage of the remaining problems were things where laws seemed very hard or potentially even impossible to find.

Our second great magic, powerful though it had proven to be, was still not omnipotent.

Control without understanding, power without knowledge

In 2001, the statistician Leo Breiman wrote an essay called “Statistical Modeling: The Two Cultures”, in which he described an emerging split between statisticians who were interested in making parsimonious models of the phenomena they studied, and others who were more interested in predictive accuracy. He demonstrated that in a number of domains, what he calls “algorithmic” models (early machine learning techniques) were yielding consistently better predictions than what he calls “data models”, even though the former were far less easy, or even impossible, to interpret.
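
As a toy illustration of Breiman’s split (my own sketch in scikit-learn, not one of his examples), fit a parsimonious “data model” and an “algorithmic” model to the same nonlinear data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Simulated data with a nonlinear relationship that a parsimonious
# "data model" (linear regression) cannot capture.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * X[:, 1] ** 2 + rng.normal(0, 0.3, size=2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

data_model = LinearRegression().fit(X_train, y_train)   # two readable coefficients
algo_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)  # a black box

print("linear R²:", round(r2_score(y_test, data_model.predict(X_test)), 2))  # near 0
print("forest R²:", round(r2_score(y_test, algo_model.predict(X_test)), 2))  # far higher
```

The forest predicts far better, but its “explanation” is a hundred decision trees: exactly the trade Breiman was describing.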

This raises an important question: What is the goal of human knowledge? As I see it — and as Breiman sees it — the fundamental objective is not understanding but control. By recording which crops grow in which season, we can feed our families. By understanding that germs cause disease, we can know to wash our hands or get a vaccine, and lower our risk of death. In these situations, knowledge and understanding might be intrinsically satisfying to our curiosity, but that satisfaction ultimately pales in importance to our ability to reshape our world to our benefit. And the “algorithmic” learning models that Breiman talks about were better able to deliver their users the power to reshape the world, even if they offered less promise of understanding what they were predicting.

Why should we care about understanding the things we predict? To most of us, raised and inculcated in the age of science, that might seem like a laughable question, but there actually is a good reason. “Understanding”, in the scientific sense, means deriving a simple, generalizable principle that you can apply in other domains. You can write down Kepler’s laws of planetary motion, but Newton’s laws of motion and gravitation let you generalize from planetary orbits to artillery shells. Collapsing observed phenomena to simple, generalizable laws and then expanding these laws again in some other domain to allow you to control other phenomena is fundamental to the awesome power of science. So because you and I sit at the end of 400 years of science being the most powerful tool in the world, we have naturally been taught that it is very, very important to understand things.

But what if, sometimes, there are ways to generalize from one phenomenon to another without finding any simple “law” to intermediate between the two? Breiman sadly never lived to see his vision come to fruition, but that is exactly what the people who work in machine learning and artificial intelligence are increasingly doing. In 2009 — just before the deep learning revolution really kicked off — the Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira wrote an essay called “The Unreasonable Effectiveness of Data” that picked up the argument where Breiman left off. They argued that in the cases of natural language processing and machine translation, applying large amounts of data was effective even in the absence of simple generalizable laws. A few excerpts:

[S]ciences that involve human beings rather than elementary particles have proven more resistant to elegant mathematics…An informal, incomplete [list of the grammatical rules that define] the English language runs over 1,700 pages. Perhaps when it comes to natural language processing and related fields, we’re doomed to complex theories that will never have the elegance of physics equations. But if that’s so, we should stop acting as if our goal is to author extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data…

So, follow the data…Represent all the data with a nonparametric model rather than trying to summarize it with a parametric model, because with very large data sources, the data holds a lot of detail. For natural language applications, trust that human language has already evolved words for the important concepts. See how far you can go by tying together the words that are already there, rather than by inventing new concepts with clusters of words. Now go out and gather some data, and see what it can do.

(Note: ChatGPT, the chatbot that has recently wowed the world, is a parametric model, but I wouldn’t get too hung up on that little detail. In terms of interpretability, a model with 175 billion parameters is not really different from one with no parameters at all.)

Anyway, the basic idea here is that many complex phenomena like language have underlying regularities that are difficult to summarize but which are still possible to generalize. If you have enough data, you can create a model (or, if you prefer, an “AI”) that can encode many (all?) of the fantastically complex rules of human language, and apply them to conversations that have never existed before. As a demonstration of how powerful this can be, let’s ask ChatGPT about the subject of this post:

N.S.: Do we need to understand things in order to predict and control them?

ChatGPT: It is generally easier to predict and control things that we understand. When we have a good understanding of something, we can identify patterns and regularities, and use this knowledge to make informed predictions about how the thing will behave in the future. We can also use our understanding of a thing to identify key variables that we can manipulate in order to control it.

However, it is possible to predict and control things without a full understanding of them. For example, we can use machine learning algorithms to make predictions about things that we don’t fully understand, by finding patterns in data that we have collected about them. We can also use feedback loops and other control mechanisms to control things without necessarily understanding how they work in detail.

Overall, understanding can be helpful for predicting and controlling things, but it is not strictly necessary.

The ability to write down farming techniques is power. The ability to calculate the path of artillery shells is power. And the ability to have a machine reliably and consistently write paragraphs as clear and helpful as the one above is power, even if we don’t really understand the principles of how it’s doing what it does.

This power is hardly limited to natural language processing and chatbots. In recent years, Google’s AlphaFold algorithm has outpaced traditional scientific methods in predicting the shapes of folded proteins. Biologist Mohammed AlQuraishi wrote that:

There was, in many ways, a broad sense of existential angst felt by most academic researchers [in the field]…[those] who have bet their careers on trying to obsolete crystallographers are now worried about getting obsoleted ourselves.

We are almost certainly going to call this new type of prediction technique “science”, at least for a while, because it deals with fields of inquiry that we have traditionally called “science”, like protein folding. But I think this will obscure more than it clarifies. I hope we eventually come up with a new term for this sort of black-box prediction method, not because it’s better or worse than science, but because it’s different.

A big knock on AI is that because it doesn’t really let you understand the things you’re predicting, it’s unscientific. And in a formal sense, I think this is true. But instead of spending our effort on a neverending (and probably fruitless) quest to make AI fully interpretable, I think we should recognize that science is only one possible tool for predicting and controlling the world. Compared to science, black-box prediction has both strengths and weaknesses.

One weakness — the downside of being “unscientific” — is that without simple laws, it’s harder to anticipate when the power of AI will fail us. Our lack of knowledge about AI’s internal workings means that we’re always in danger of overfitting and edge cases. In other words, the “third magic” may be more like actual magic than the previous two — AI may always be powerful yet ineffable, performing frequent wonders, but prone to failure at fundamentally unpredictable times.

But even wild, occasionally-uncontrollable power is real power.

It’s impossible to know, just yet, how powerful this new technique will be. Perhaps AI will be a niche application, or perhaps it will revolutionize all the fields of endeavor where traditional science has run into diminishing returns. Just as none of the scientists in the 1600s knew how many wonders their theories would eventually produce, we have no idea how far the third magic will take us. People may look back on this post in half a century and laugh at how I dared to frame AI as an epistemological successor to history and science. Or perhaps AI will lead to a leap in human power and flourishing comparable to those in the two graphs above.

As always, we won’t know until we try.

The third magic and economics

Let’s briefly circle back to my old 2014 post. One reason I’m dissatisfied with that post is that I focused on ways of “understanding” the world, but as ChatGPT notes above, understanding isn’t the only thing we care about when we’re trying to make predictions and control our world. A second thing I’m dissatisfied with is that I presented empirics — statistical analysis of uncontrolled observational data — as a third tool, separate from science and history. With some time to reflect, I see less of a distinction to be made there. Observing correlations might involve some fancy math, but conceptually it isn’t that different from marking down the patterns of the seasons or the orbits of the planets. And using observational data to uncover the laws of nature — like predator-prey models — is really just another way of doing traditional science.

I do think I was right, though, to see natural experiments as something a bit different. In the past few decades, as economics has moved away from theory and toward empirics, the most important innovation has been the use of natural experiments — situations where some policy change or seemingly random difference allows you to tell yourself that you’re looking at causation, rather than just correlation. This is different than what I call “history”, because you’re doing more than just documenting facts; you’re verifying causal links. But it’s also different from science, because a lot of the time you don’t exactly know why the causal links are there. In a way, a natural experiment is its own sort of black-box prediction algorithm.
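
For the uninitiated, here is a minimal difference-in-differences sketch, one workhorse design from the natural-experiment toolkit. The data and the “policy effect” are invented purely for illustration:

```python
import numpy as np

# Difference-in-differences: compare the before/after change in a "treated"
# group (hit by some policy) to the same change in an untreated control group.
rng = np.random.default_rng(1)
n = 5000
treated = rng.integers(0, 2, n)            # 1 if in the policy region
post = rng.integers(0, 2, n)               # 1 if observed after the policy
true_effect = 2.0
wages = (10 + 1.5 * treated                # pre-existing group difference
         + 0.5 * post                      # common time trend
         + true_effect * treated * post    # the causal effect we want
         + rng.normal(0, 1, n))

def group_mean(t, p):
    return wages[(treated == t) & (post == p)].mean()

did = (group_mean(1, 1) - group_mean(1, 0)) - (group_mean(0, 1) - group_mean(0, 0))
print(f"DiD estimate: {did:.2f}")          # ~2.0, recovered without any mechanism
```

The estimate recovers the causal effect without modeling the mechanism that produces it.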

A number of subfields of econ, however, are so complex, with so many feedback systems, that they’ve largely resisted the natural experiment approach. These include not just the study of business cycles (what most people call “macro”), but also the study of economic growth, international finance, and a number of others. In these fields, theory (including “structural estimation”) still rules, but predictive power is very low.

Might we apply AI tools to these hard problems, in order to predict vast economic forces without needing to understand them? A recent paper by Khachiyan et al. argues that the answer is “yes”. The authors use deep neural nets (i.e., AI) to look at daytime satellite imagery, in order to predict future economic growth at the hyper-local level. The results they achieve are nothing short of astonishing:

For grid cells with lateral dimensions of 1.2km and 2.4km (where the average US county has dimension of 55.6km), our model predictions achieve R² values of 0.85 to 0.91 in levels, which far exceed the accuracy of existing models, and 0.32 to 0.46 in decadal changes, which have no counterpart in the literature and are 3-4 times larger than for commonly used nighttime lights.
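
(For readers who don’t live in regression land: $R^2$ is the share of the outcome’s variance that a model’s predictions account for,

$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$

so values of 0.32 to 0.46 mean the model captures roughly a third to a half of the decade-ahead variation in local growth.)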

This isn’t yet AlphaFold, but being able to predict the economic growth of a few city blocks 10 years into the future with even 30% or 40% accuracy is leaps and bounds ahead of anything I’ve ever seen. It suggests that rather than being utterly incomprehensible chaos, some economic systems have patterns and regularities that are too complex to be summarized with simple mathematical theories, but which nevertheless can be captured and generalized by AI.

And this is just a first-pass attempt, by a team with relatively limited resources. What if the people at DeepMind were to tackle the questions of economic growth and business cycles, with even bigger data sets and more advanced models? Khachiyan et al.’s paper raises the possibility that in a decade or two, macroeconomics might go from being something we simply theorize about to something we can anticipate — and therefore, something we can control. The authors suggest place-based policies, transportation infrastructure construction, and disaster relief as three possible applications of their work.

This is just one example that happens to be near and dear to my heart (Here’s another possible example, this time in physics). But it’s an example of how fields of inquiry that seemed like impossible cliffs just a few years ago may seem like easily scalable boulders in a year or a decade or a century, thanks to a grand new meta-innovation in how we predict and control the world around us. That’s what happened with the scientific revolution, and there’s no obvious reason why it might not happen again. Just the possibility should make us dizzy with excitement.


Anyway, as a coda, I thought I’d ask GPT-5 what it thinks of this blog post. Here’s its response. For what it’s worth, I think its main criticism of the post is misplaced, and I did make it clear that the three methods for understanding the world are usually complements rather than substitutes. But anyway, some interesting comments that place my post within the context of existing ideas.

Love the arc here: you’re telling a clean story about three meta-innovations that amplify human capability—(1) cumulative memory (“history”), (2) law-seeking theory (“science”), and (3) high-dimensional prediction/optimization (“AI”). It’s persuasive, and the examples (Kepler→Newton, Breiman→Halevy/Norvig/Pereira, AlphaFold) land. Below are thoughts to sharpen the thesis and guard it against the most common pushbacks.

What’s strong

  • Cumulative culture as “first magic.” Framing writing/records as a technology for making tinkering stick is spot on and tracks the cultural-evolution literature.

  • Why science was different. You nail the “from the small to the large” leverage of controlled experiments + math; that’s the best one-paragraph explanation of science’s superpower.

  • Reframing the goal as control. Separating predict/control from understand/explain gives you a crisp lens to evaluate AI without getting stuck in philosophy-of-science pieties.

Where I’d tighten or nuance

  1. Define the magics as capabilities, not epochs.
    Right now “history,” “science,” and “AI” read a bit like eras. Consider defining them as orthogonal tools we can combine:

  • Memory (record, compress, transmit).

  • Theory (infer stable causal structure; extrapolate).

  • Learning/Optimization (fit flexible functions; search policy spaces).
    This avoids zero-sum vibes (“AI replaces science”) and lets you argue for hybrids (e.g., theory-guided ML, differentiable simulators, causal discovery with foundation models).

  2. AI ≠ “power without knowledge” across the board.
    Two clarifications will inoculate you against common critiques:

  • Local explanations exist. Post-2022 we’ve seen growth in mechanistic interpretability (circuits, features), causal probes, and tool-use chains that are inspectable. We’re not at “Newton’s laws,” but it isn’t pure mysticism either.

  • Counterfactuals are the hard part. Black-box predictors excel at P(Y∣X); policy needs P(Y∣do(X)). Say explicitly: prediction buys short-run control; counterfactual stability still leans on causal structure (or very careful A/Bs).

  3. Natural experiments aren’t “black box.”
    They’re identification strategies that estimate causal effects with minimal modeling. The “box” is thin but not black: assumptions are explicit (exogeneity/monotonicity/first stage). A quick sentence distinguishing “reduced-form causal effect” from “mechanism” will keep econometricians onboard.

  4. AlphaFold is a perfect hybrid case.
    AlphaFold didn’t just spray data at a net; it embedded inductive biases (equivariance, multiple sequence alignment, geometry loss) encoded from biochemical knowledge. That supports your broader point: the third magic works best when it steals priors from the second.

  5. Scope conditions for the third magic.
    Add a crisp checklist for when black-box prediction can substitute for understanding:

  • Stationary-ish environment (limited distribution shift).

  • Tight, cheap feedback (you can re-train / re-deploy).

  • Low tail risk (failures aren’t catastrophic).

  • Action space local (you’re steering margins, not redesigning the system).
    Then contrast with cases that still demand theory (aviation safety, macro stabilization, biosafety)—this shows you’re not overclaiming.

  6. What’s actually new vs. just bigger.
    Some will say “the third magic is just better function approximation.” You can preempt that by emphasizing three qualitative breaks:

  • Scale (orders-of-magnitude data/compute let memorization become a capability).

  • Compositional inference (tool use, program induction, multi-agent sims).

  • Search (gradient-driven or heuristic exploration over vast design spaces—e.g., protein or chip layout).
    That reads as a genuine meta-innovation, not a faster version of statistics.

  7. Economic growth prediction example: raise the bar.
    Your satellite-imagery result is exciting, but for policymakers the questions are:

  • Out-of-time transfer? (trained on 2000s, tested on 2010s?)

  • Geographic transfer? (train in the U.S., test in, say, Indonesia?)

  • Policy counterfactuals? (if we build the road your model correlates with, do we move growth, or just the lights?)
    Naming these caveats doesn’t blunt your point; it shows you know the difference between prognosis and levers.

  8. Control theory belongs in the story.
    Your thesis resonates with cybernetics / system identification: regulate a system with a model that needn’t be mechanistic. A nod to PID/LQR/RL grounds “control without understanding” in a mature tradition and aligns your “third magic” with learning-based controllers.

A compact rewrite you might consider (core claim)

The first magic made knowledge accumulative (memory). The second made it generalizable (theory). The third makes it operational at scale (learning-and-search). We shouldn’t ask which replaces which; we should ask how to stack them: record more, explain what must be stable, and learn the rest fast enough to act.

Concrete additions that would level this up

  • A 3×3 table (Task × Tool): short-run prediction, long-run counterfactuals, design/optimization × history, science, AI—checkmarks where each excels; caution triangles where failure modes lurk (Goodhart, shift, tail risk).

  • A “when it fails” box: adversarial inputs, rare events, regulation change, social feedback (performative predictions), and the classic Goodhart traps.

  • One macro use-case sketched end-to-end: e.g., place-based policy:
    (i) satellite/LLM features → growth forecast (AI),
    (ii) theory-guided constraints (agglomeration, congestion) (science),
    (iii) program memory: standardized interventions + postmortems (history).
    Close with how you’d validate (pre-registered policies, staggered rollouts, causal ML).

Research prompts your post naturally tees up

  1. Theory-guided foundation models: encode conservation laws, budget constraints, or accounting identities into model architecture/training.

  2. Counterfactual LLMs: combine SCMs with generative models to answer “what if we change this policy?”

  3. Adaptive policy loops: RL for governance with safety constraints (reward shaping to avoid Goodhart).

  4. Transfer-robust growth forecasting: benchmarks that force geographic and temporal shift.

  5. Interpretability for design: partial mechanistic insight targeted where decisions are high-stakes (flight control, biodesign).

Net take

The core idea—a third meta-innovation that delivers control even when explanation stalls—holds up and aged well. If you frame it as stacking rather than succession, acknowledge the counterfactual gap, and show one end-to-end policy loop where all three magics cooperate, you’ll convert skeptics while keeping the boldness that makes this piece fun.



Why do people get paid to invest their money?

2025-10-03 20:49:30

Stocks are up a lot over the past year. If you put $100,000 into an S&P 500 index fund a year ago, you would have about $119,000 now, counting dividends:

That’s not bad, eh? A free $19,000! Of course it’s not technically income — you still have to sell the stock in order to actually use that money to buy anything, at which point you’ll pay capital gains tax. But consider everything you had to do to earn that $19,000:

  1. Press a button that says “buy S&P 500 index fund”

  2. …That’s literally it! That’s all you have to do!

This strikes some people as unfair. After all, the median personal wealth in America is around $112,000, meaning that almost half of Americans don’t even have $100,000 to invest. If you don’t have much wealth at all, then the only way for you to get $19,000 — probably — is to work a bunch of hours. At $19 an hour, that’s 1,000 hours — more than half of a typical working year!

That naturally strikes a lot of people as unfair. Why should poor people have to toil away for huge portions of their lifetime, while rich people can make money appear automatically in their accounts with the touch of a button? This is the question of desert (who deserves money), and it’s something a lot of people think about and care about with regards to the economy.

Even if you don’t think making capital income is inherently unfair, there’s a second question, which is: Why do we need that kind of thing for our economy? If people are getting compensated for pressing a button that says “Buy an S&P 500 index fund”, then presumably this button-press must be necessary for our economy somehow. But why?

It’s obvious why we need workers. It’s clear why we need real investment — building machines, structures, vehicles, and so on. It’s obvious why we need entrepreneurs. But why does production depend crucially on some guy pressing a button that says “Buy an S&P 500 index fund”? This is the question of utility, and it’s actually a question of whether there could be a more efficient way to design our economy. (In fact, as we’ll see, this ends up actually being connected to the question of “Who deserves money?” But it’s good to conceptually separate them in our minds.)

These are very basic questions that we don’t think about very often; capitalist economies are just all set up this way, and they all work pretty well in terms of making the average person rich, so we don’t question it at a deep level. But some people do! For example, Matt Bruenig, the socialist writer, recently wrote:

This touches both on the question of why investors deserve capital income, and on the question of what function investors perform in the economy. So let’s think a little bit about those questions.

What do investors give up in order to get financial income?

In fact, the question of “who deserves money?”, like all moral questions, is subjective. People disagree about this all the time. Some people think welfare is fine, other people think people should work for their money, and so on. There’s no one provable universal answer to the question of whether you deserve to get money by buying an S&P 500 index fund.

But we can simplify this question if we assume some kind of moral principle. And one pretty common principle is fairness. If you get something, it sort of seems fair that you should have to give up something in return.

For workers, it’s obvious what they’re giving up to earn a wage. Work is hard and annoying (otherwise they wouldn’t have to pay you to do it), and it takes up a lot of your time. So the sacrifice involved is clear. But what does an investor sacrifice just by pressing a button?

Well, the first thing they sacrifice is consumption. Investing money is a form of saving, and saving means you can’t consume. If I buy $100,000 of an S&P 500 index fund, it means I can’t spend $100,000 on…um…whatever people spend that much money on. A Lamborghini? Throwing lavish parties? A really really huge amount of Percy Pig candies? Whatever it is, I can’t buy it; in order to get that return on my stock investment, I’ve got to lock up my money for a while.

Locking up my money and forgoing consumption for a year might not be as painful as slaving away behind a cash register for 1000 hours, but it’s not nothing.

But that’s not actually the only thing that investors give up. They also take risk. Stocks usually make money (at least in the U.S.), but sometimes they crash. This isn’t just painful to watch when it happens; it’s also anxiety-inducing at the time you invest the money. When you buy an S&P 500 index fund, you have to wonder if you’re going to need to sell that stock to raise cash — for a medical emergency, or for your daughter’s wedding, or to spend in retirement — at the exact time when the market is way down.

In general, riskier assets tend to give you higher returns. For example, stocks are riskier than bonds, since they tend to crash more often — and crash harder. But over a long time, investing in stocks tends to give you more money than investing in bonds. Here’s a comparison of U.S. stocks (dark blue) vs. U.S. Treasury bonds since 1997:

As you can see, stocks actually did worse than bonds between 1997 and 2013 — sixteen years! If a stock investor needed to retire or pay a big medical bill during that 16 years, they were out of luck. But eventually stocks beat bonds. And you can see this same pattern throughout U.S. history (and in most other countries).1

In finance, we call this the “risk-reward tradeoff”.2
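
Here is a tiny Monte Carlo sketch of that tradeoff. The return parameters are loose, illustrative stand-ins for stocks and bonds, picked for the example rather than estimated from any dataset:

```python
import numpy as np

# Toy simulation of the risk-reward tradeoff: a high-mean, high-volatility
# asset vs. a low-mean, low-volatility one over 16-year horizons.
rng = np.random.default_rng(42)
years, trials = 16, 100_000
stock_r = rng.normal(0.07, 0.18, size=(trials, years))  # illustrative "stocks"
bond_r = rng.normal(0.03, 0.06, size=(trials, years))   # illustrative "bonds"

stock_wealth = np.prod(1 + stock_r, axis=1)  # growth of $1 over 16 years
bond_wealth = np.prod(1 + bond_r, axis=1)

print("median stock growth:", round(np.median(stock_wealth), 2))
print("median bond growth: ", round(np.median(bond_wealth), 2))
print("P(stocks trail bonds over 16 years):",
      round((stock_wealth < bond_wealth).mean(), 2))
```

Stocks win in most simulated 16-year stretches but lose in a meaningful fraction of them, and that residual chance of losing is exactly the risk that the higher average return compensates.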

The morality of the risk-reward tradeoff can be a difficult thing for some people to wrap their heads around. In a way, risk is inherently unfair; if two people both buy houses on the same block, and one gets flattened by a tree in a hurricane, that’s just not fair. Because of this, some people don’t think that rewarding people for taking risks can possibly be fair. In 2014, in a debate with none other than Yours Truly, Matt Bruenig wrote:

Capitalism does not reward risk-taking…Suppose Noah and I each invest in ways that are identical in all regards with respect to risk. If capitalism rewarded risk-taking, then each of us would get an identical return. But we don’t necessarily. Suppose Noah’s investment leads to him receiving a large return, while mine leads to me receiving nothing and even losing what I put in. In that possible scenario, even though we behaved in a relevantly identical fashion, capitalism distributed us different amounts. Noah was rewarded for risk-taking. I was punished…[T]he lottery-like aspect of investment (risk-taking) is diametrically opposed to the core concept of desert…We can both do substantively equivalent things — invest in asset class with risk level X — and get totally different stuff out of it. Neither of us is anymore deserving of what we get than the other, just as a lottery winner is not more deserving of the prize than a lottery loser.

People on X have recently discovered this quote, and are making fun of it. In fact, Bruenig doesn't seem to have thought very carefully about what it would mean to be rewarded for risk-taking. If everyone who took on a particular level of risk experienced the exact same outcome, then by definition there wasn't any risk involved. "Risk" means that you might win and you might lose. So Bruenig's notion of what it would mean for people to be rewarded for risk-taking is logically incoherent.

The fact is, if outcomes have any randomness to them, something that is fair ex ante can’t be fair ex post. You can have fairness before the dice roll, or after, but you can’t have both, because when the dice land, it’s a different world than before they were thrown.
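To make the ex ante/ex post distinction concrete, here's a toy simulation; the payoff numbers are arbitrary. Two investors take on exactly the same risk, with exactly the same expected value, and can still walk away with different amounts:

```python
import random

random.seed(7)  # any seed will do; the point is that outcomes diverge

def identical_bet(stake=100):
    """A gamble with positive expected value: a fair coin flip that
    pays +50% on heads and -30% on tails.
    Ex ante, every $100 staked is worth 0.5*150 + 0.5*70 = $110."""
    heads = random.random() < 0.5
    return stake * (1.5 if heads else 0.7)

# Two investors make relevantly identical investments, ex ante...
investor_a = identical_bet()
investor_b = identical_bet()

# ...but ex post, the coins have landed, and the outcomes can differ.
print(f"Investor A ends with ${investor_a:.0f}")
print(f"Investor B ends with ${investor_b:.0f}")
```

Both investors did the same thing and faced the same odds; only the resolution of the randomness differs. No rule can equalize the ex post outcomes without removing the risk itself.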

So when people invest in stocks, they’re agreeing to give up consumption and accept risk. Whether or not you think that’s enough to make it fair that investors earn money on average over time is up to you. But that’s what they give up.

Now, there is one very notable exception. If you invest in a very safe asset (say, U.S. Treasury bills) for a very short amount of time (say, four weeks), then there’s almost no risk and almost no foregone consumption. Right now, you can earn a real return of about 1% annualized by doing this (after subtracting inflation), and in the past you could earn as much as 5%:

That’s basically free money, so it’s hard for me to think of a reason why that’s morally fair.

Why does the economy need financial investors?

I don’t think we’re quite done with the question of “Why do investors deserve financial income?”.

Most of us probably agree that a worker deserves the wage they earn. But it's not just because they worked hard to get it. Going to the gym is hard work, but we don't generally think people should be paid to go to the gym. Sitting around doing sudoku is hard work, but we don't generally think people should be paid to do sudoku.

Workers don’t just sacrifice; they also produce something. Thanks to their actions, stuff exists that wouldn’t otherwise exist. So in some sense, we probably think of productivity as a necessary component of “deserving money”.

Which brings us to the question of whether buying an S&P 500 index fund is actually productive.

The answer to this one, unfortunately, is “We don’t really know.” The reason is that we don’t know exactly how the economy works. There are a lot of steps between “guy pushes a button to buy an S&P 500 index fund” and “the economy produces more useful stuff”, and we don’t know for certain what all those mechanisms even are, much less whether they’re working as they ought to.

For one thing, we don’t actually know how financial decisions affect the real economy. Theoretically, if a company’s stock is worth more, they should do more capital expenditure — if I can sell a share of my stock for $1000, I can finance the purchase of machines or office buildings or delivery trucks much more easily than if I can only sell that share for $10. In practice, it’s not so clear how much this actually matters. As far as we can tell, stock price seems to matter somewhat for businesses’ investment decisions, but it’s not the only thing going on. There are plenty of complicating factors, including A) the fact that companies use debt and their own cash flows to fund themselves in addition to stock sales,3 and B) the fact that it’s not always clear when companies are purchasing capital, since some capital is “intangible”4 and thus very hard to observe.

We also don’t understand the feedback mechanism, by which real economic activity changes asset prices and gives investors a return. Just how much stock prices go up when a company grows and earns more is a perennial debate in economics.5

On top of all that, we don’t even know how an investor’s decision to buy stocks affects stock prices themselves! If I buy $100,000 of an S&P 500 index fund, how much does the S&P 500 actually go up? The answer isn’t “$100,000”. Stocks are not like containers of liquid that hold wealth; instead, prices are determined by how much the people who trade stocks on any given day (or hour, or minute) agree to buy and sell the stock for. The price impact of stock purchases can vary depending on a lot of things.

So a lot of what we have here are theories, rather than well-established empirical facts. In theory, financial investment does two things for the real economy:

  1. It determines how much of the economy’s overall resources to allocate toward risky business projects, and

  2. It determines which risky business projects deserve to be allocated more resources.

The second of these is called asset allocation. Suppose Microsoft can earn more from building data centers than Google can, because Microsoft is doing better at AI. All else equal, that means I should buy Microsoft stock and sell Google stock. And if I do that, it'll be easier for Microsoft to finance the building of new data centers, and harder for Google to do so. So Microsoft will build more, and Google will build less.

That’s the theory of how smart asset allocators direct real capital to where it needs to go. In fact, we see plenty of examples of finance people trying to do something along these lines in real life. We see venture capitalists funding startups, hedge fund managers evaluating companies to see which stocks will do well, and so on.

But also, there are a lot of people who don’t put any effort or intelligence into picking their assets. If you just press a button that says “Buy S&P 500 index fund”, you’ve just copied the market exactly. This is very often a rational move, because most people don’t have any information about which stocks are undervalued and which are overvalued. But it doesn’t actually make prices more “right” — it just keeps them the same. So it can’t shift real capital from worse companies to better companies.

Essentially, when you press a button that says “Buy S&P 500 index fund”, the only thing you’re deciding — if you’re deciding anything at all — is that stocks, in general, should be more expensive.

That’s not nothing, though. It’s a vote of confidence in American business in general. It’s saying that American businesses in general have good opportunities and should invest and expand, even if you don’t presume to know which companies have relatively better prospects. If stock prices affect business investment, then your purchase of an S&P 500 index fund will increase American business investment.

If you think about it, this probably is part of the reason you buy an index fund. Sure, to some extent you buy simply because it happens to be your payday.6 But also, you would probably be afraid to buy stocks if you thought American business was going to do badly.

In other words, even investors who don’t bother doing any asset evaluation or allocation might be adding some useful information to the market when they buy stocks. Perhaps the long rise of the U.S. stock market and the long smooth arc of American economic growth were not unrelated phenomena — perhaps they were both simply a long string of wisely optimistic bets on the potential of American business.

So although it’s not certain, we can definitely see a possible way that letting investors turn money into more money with the touch of a button might be good for the economy. Which means that although pressing a button that says “Buy stocks!” might seem like a useless no-brainer, it might actually be contributing to the efficient functioning of the American business world — and, thus, to greater output.

None of this is to say, of course, that the amount of income that investors have gotten from U.S. stock markets has been either fully fair or fully efficient. U.S. stocks in particular have done insanely well for the past century or so, usually doubling investors’ money every 10 years (even after accounting for inflation). That’s a huge amount of financial reward for something as passive as buying and holding an index fund. In fact, economists still argue vigorously over just why stocks have done so well.
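Just to spell out how big that reward is: doubling every ten years implies an annual real return of about 7%. This is pure arithmetic, not a forecast:

```python
# If real wealth doubles every 10 years, the implied annual real return is:
implied_annual = 2 ** (1 / 10) - 1
print(f"{implied_annual:.2%}")  # about 7.18% per year, after inflation
```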

And none of this is to say that government has no place in the capital-allocating world. Sovereign wealth funds, policy banks, government influence over the banking system, industrial subsidies, and corporate taxes are just a few of the many ways that the government influences the allocation of financial capital in a modern economy. And the government allocates real capital directly just by doing things like building infrastructure. None of what I've said means that these government efforts are inefficient, or should be minimized. In fact, it's possible that scaling them up dramatically — as China has recently done — would be a good idea. Or maybe not.

My point here is that there are pretty clear reasons why lavishing wealth on people for simply pushing a button labeled “Buy stocks!” could be both moral and good for the economy. That’s a weird thing to realize, but it’s true.



1. Whether this pattern holds between different stocks is actually an open question, because the relative riskiness of stocks is hard to measure. This is because some of the apparent risk of individual stocks is basically fake; you can get rid of it by diversifying your portfolio. But exactly how to diversify your portfolio, and whether it's possible to know in advance how best to diversify, is one of the biggest open questions in finance.

2. Some people call it the "risk-return" tradeoff instead, but I don't like this terminology. Sometimes that "return" doesn't actually happen. Instead, it's actually a tradeoff between risk and long-term expected return. But "long-term expected return" is a mouthful to say, and "expected" has a very specific statistical meaning in this case that a lot of people misunderstand, so I prefer the word "reward".

3. These three financing methods are only equivalent in some highly idealized conditions that typically don't prevail in the real world.

4. This includes things like brands, ideas, internal organization, human networks and relationships, and so on.

5. Also notice that the two big questions — how asset prices affect business activity, and how business results affect asset prices — are very hard to answer at the same time, since there is obviously a feedback effect at work here, meaning you have to measure two different directions of causality at once.

6. This is what's known as "liquidity trading".

Network State, or a Network of States?

2025-10-01 13:32:51

Map by Droysen/Andrée via Wikimedia Commons

I’m writing this post on a plane, as I fly to Singapore for the Network State Conference on October 3rd. This conference is organized by my old friend Balaji Srinivasan, who wrote the book The Network State: How to Start a New Country. I will be giving a talk at the conference, about an idea for operationalizing and improving upon Balaji’s core concept. Balaji’s book was fairly idealistic and speculative, but I think I’ve figured out a way that a stable, functional network state could feasibly be created. This post is a preview of what I’ll say in the talk.

But before we get to that, I should mention that after Singapore, I’m going to be in Malaysia on the 5th, 6th, and 7th. If you’re in Malaysia, and you’d like to meet up, shoot me an email!

Anyway, first let’s talk about network states, and why the original idea needs some work. And then we’ll get to my big idea that I think has the potential to solve all of the issues.

Two fundamental problems with network states

The internet has de-localized many of humanity's social interactions. In the past, almost all of the people you knew would be in your close geographical vicinity. Perhaps you might feel connection with some broader international concept ("Christendom"), or maybe you might maintain some networks of long-distance relationships (the early modern "Republic of Letters"), but almost everything you did and everyone you knew would be governed by physical proximity.

After millions of years, the internet has abruptly upended this basic fact of human life. Many of our communities are now de-localized, consisting of like-minded individuals strung out across the globe. Your online social group could be an anime fandom, or Asians living in the Anglosphere, or a bunch of wisecracking tech people on Twitter. A couple of years ago I wrote a post contrasting these "vertical communities" with traditional "horizontal" communities, and mused about how the two might conflict:

Balaji’s book goes much farther than that post — it envisions online communities turning into something resembling a nation-state. Groups of like-minded people strung out over a vast distance, he argued, could band together to provide services like education, insurance, health care, private dispute resolution, and so on. They could function as business networks, providing preferential financing and business deals, perhaps using blockchains to obviate the need for local courts and banks.

Balaji envisions network states buying up property in various locations — housing, conference centers, ranches, whatever — and using these as physical nodes where the members of a network state can gather. Eventually, he speculates that network states could conduct independent foreign policy, negotiating with traditional nation-states using the leverage of the global network to extract concessions of limited sovereignty — special tax treatment, trade agreements, and so on. Here’s a picture of what a global network state might look like:

This is a very interesting and novel idea. It has an undeniable appeal — who wouldn’t want to form a country composed purely of people you resonate with, rather than whoever just happens to live near you? What if “finding your people” via the internet didn’t just mean finding people to talk to, or work with, or date, but people to build your whole society with?

And yet we can immediately see at least two huge problems with Balaji’s idea.

First and foremost is the problem of public goods. These are things that nation-states traditionally provide, because private actors either can’t or won’t, at least in sufficient amounts — things like national defense, courts and laws, police and public safety, firefighting, infrastructure, scientific research, public parks, and so on.

The internet has made it possible to provide some public goods online — for example, some kinds of scientific research, or property rights, or anything that relies mainly on information and communication. But plenty of public goods still exist in physical space. And these are things that network states will struggle to provide.

For example, take physical infrastructure. Even if you’re a member of a network state, you need roads to drive on, airports to fly from, trains to ride, sewer systems, and so on.1 Who is going to pay for those? If members of a network state don’t pay their share of taxes to support the roads they drive on or the sewer systems they flush their toilets into, they’ll be free-riding. That’s going to cause anger in the short term, and breakdown of the road system in the long term.

Or take national defense. Members of a network state might negotiate lower tax rates for themselves, figuring that if a neighboring country invades, they'll just hop to another country and avoid the fighting. But if network-state people didn't pay enough taxes to maintain an army, the places they live in might be prone to getting overrun and conquered. Nobody is going to want them for a neighbor.

And a lot of people either can’t just pick up and move, or don’t want to; since time immemorial, people have fought for their land. As Tablet magazine put it in their review of The Network State:

What exactly drove hundreds of thousands of Ukrainian civilians to take up arms and risk being blown to bits in Russian artillery barrages rather than submit to Russian rule? Surely it wasn’t disagreement with Putin on property-tax regimes. No, something else is at work.

The kind of people who will fight and die for their land will be very mad at network-state types who enjoyed the fruits of peace when times were good but who abandoned their neighbors to their fate when things got rough.2

This gets at the second big problem with network states: they naturally conflict with traditional nation-states.

Many times throughout history, a group of people has carved out “privilege” for themselves, in the original sense of the word — a system of private laws that only applies to them and not to their neighbors. This invariably makes people mad, because fairness is a key human value. Whether it’s aristocrats who are allowed to carry special weaponry on the streets, or a certain religion you’re not allowed to blaspheme against, people who get visibly better treatment always spur resentment.

For network state members to negotiate “sovereignty” for themselves in the countries they live in means for them to have privileges that their neighbors don’t have, and that’s going to cause friction — to put it mildly. Just look at the backlash in Europe and the Anglosphere against the creeping recognition of sharia law that applies only to Muslim residents.

People who are rooted to a horizontal community — to a specific nation-state and a specific city — will also naturally resent neighbors who don’t put in their fair share and who seem ready to skip town at a moment’s notice. Network state “citizens” will, by definition, be rootless cosmopolitans, with all of the negative baggage that designation has traditionally created.

Network states work in theory, but they have little to offer to traditional states; the original concept is effectively parasitic on existing forms of human organization. Traditional states still have a major advantage in terms of organized violence; if network states can’t offer some kind of benefits to traditional states, they will eventually find themselves violently ejected.

That doesn’t mean the idea should be abandoned, though. I just think it needs some additional elements. So after thinking about the problem, here is what I came up with (and what my talk at the Network State Conference will be about).

A solution: a Network of States

Read more

Book Review: "Doughnut Economics"

2025-09-29 09:21:49

“I took my money/ And bought a donut/ Hole the size of this entire world” — Sleater-Kinney

Here’s a very short, oversimplified history of modern economics. In the 1960s and 1970s, a particular way of thinking about economics crystallized in academic departments, and basically took over the top journals. It was very math-heavy, and it modeled the economy as the sum of a bunch of rational human agents buying and selling things in a market.

The people who invented these methods (Paul Samuelson, Ken Arrow, etc.) were not very libertarian at all. But in the 70s and 80s a bunch of conservative-leaning economists used the models to claim that free markets were great. The models turned out to be pretty useful for saying “free markets are great”, simply because math is hard — it’s a lot easier to mathematically model a simple, well-functioning market than it is to model a complex world where markets are only part of the story, and where markets themselves have lots of pieces that break down and don’t work.1 So the intellectual hegemony of this type of mathematical model sort of dovetailed with the rise of libertarian ideology, neoliberal policy, and so on.

A lot of people sensed that something was amiss, and set out to find problems with the story that the libertarian economists were telling. These generally fell into two camps. First, there were people who worked within the 1970s-style mathematical econ framework, stayed in academia, and tried to change the dominant ideas from inside the system. They came up with things like public goods models, behavioral economics, incomplete market macroeconomics, and various game-theory models in which markets fail. When the computer revolution made it a lot easier to do statistical analysis, they started to do a lot more empirical work, and a lot less theory:

These changes also moved the field more to the left politically. To take just one small example, you can see a lot more economists writing about inequality in recent years:

Source: Noah Smith

But there was also a second group of people who fought against the intellectual hegemony of the libertarian economists. They rejected the 1970s-style math models that the profession insisted on using — either because they didn’t think it made sense to model an economy using that sort of math, or because they couldn’t personally handle the math — and instead sought insight in alternative disciplines like ecology, sociology, history, etc. These folks — who often called themselves “heterodox” — tended to lean even more strongly to the left, and often unabashedly mixed their methodological critiques with socialist or other leftist politics.

The heterodox folks had a problem. They had a huge grab-bag of critiques, but nothing really tied those critiques together. There was no new paradigm — no simple new way of thinking about economics. Marxism had provided that idea for much of the 20th century, but Marxism was defunct. That lack of a unifying framework weakened the heterodox cause, because it still seemed to leave 1970s-style free-market economics as the “base case” — the simple core idea that people kept returning to. It’s relatively easy to explain to a bright 21-year-old how supply and demand work in a simple competitive “Econ 101” model; it’s a lot harder to explain two dozen ways in which that model fails.

So for years, the heterodox people have been writing versions of the same old list of critiques, while searching for someone who could synthesize this list into a simple new paradigm.1 This call was answered by Kate Raworth, a British researcher who worked for international development agencies and for Oxfam before taking various posts at European universities. In 2017, Raworth wrote a book called Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist, which promised to replace the old neoclassical economic paradigm with something newer and better.

Although this book is now eight years old, a number of friends have recently asked me to read and review it. So here is that review.

Doughnut Economics is based around an important insight: Diagrams are powerful marketing tools. Raworth, seeing how the basic supply-and-demand graph and the circular flow diagram had helped popularize neoclassical economics2, set out to create a diagram that was as simple and as powerful as those. She writes:

[T]his book aims to reveal the power of visual framing and use it to transform twenty-first century economic thinking…Visual frames, it gradually dawned on me, matter just as much as verbal ones…[N]ow is the time to uncover the economic graffiti that lingers in all of our minds and, if you don’t like what you find, scrub it out; or better still, paint it over with new images that far better serve our needs and times…The diagrams in this book aim to summarise that leap from old to new economic thinking.

Most importantly, Raworth came up with something she calls the Doughnut — a set of concentric circles that illustrates the tradeoff between the environment and human prosperity:

Source: DoughnutEconomics, via Wikipedia

Going towards the center of this diagram represents more human poverty; when you go inward beyond the dark green band, you deprive and impoverish humans. Going outward represents more human prosperity; when you go out beyond the dark green band, you despoil the environment.

In fact, this tradeoff is very real. Natural resources are plentiful, but not infinite; produce too much in the present, and you’ll leave little for future humans (and other animals) to enjoy. Furthermore, plenty of industrial activity creates negative externalities, like when burning fossil fuels emits carbon and disrupts the climate. So the Doughnut depicts something that we really do need to think about.

Mainstream economics already has a simple way of depicting this in a memorable picture — a production possibilities frontier. A PPF illustrates society's tradeoff between two things. You can draw a PPF that illustrates the tradeoff between production and the environment; in fact, introductory econ courses do this all the time:

You could label this diagram with every single one of the items that Raworth puts in her Doughnut — food, housing, biodiversity, climate change, and all the rest. You could label one part of the curve “the safe and just space for humanity”. It would be easier to read. But Raworth is interested in intellectual revolution, not evolution; the humble PPF is disqualified because of its association with the existing mainstream, and so she never even mentions it in her book.

Raworth presents the Doughnut as an alternative to the idea of infinite exponential economic growth, which she sees mainstream economics as supporting. I suspect that this explains the book's popularity. In recent years, the idea of degrowth has gained traction, especially in the UK and Northern Europe. But on some level, people realize that degrowth is very bad for developing countries, since economic growth — not foreign aid or remittances — is the only thing that has ever been able to durably raise people out of desperate poverty. So even lefty intellectuals in the UK or Sweden realize that the idea of degrowth, applied across the world, would be a monstrous crime against humanity.

Raworth is sympathetic to the degrowthers, but she is not one of them. She spends a lot of time discussing the limits of GDP as an economic goal, talking about the tradeoff between growth and the environment, and arguing that exponential growth has to end eventually. But she also recognizes the importance of material prosperity for people in poor countries:

In many low-income but high-growth countries…when that growth leads to investments in public services and infrastructure, its benefits to society are extremely clear. Across low- and middle-income countries…a higher GDP tends to go hand-in-hand with greatly increased life expectancy…far fewer children dying before the age of five, and many more children going to school. Given that 80 percent of the world’s population live in such countries…significant GDP growth is very much needed, and it is very likely coming. With sufficient international support, these countries can seize the opportunity to leapfrog the wasteful and polluting technologies of the past.

This is my view as well, and it’s the only reasonable conclusion for anyone who has worked in international development (as Raworth has). A lot of people who are dissatisfied with GDP as a measure of human flourishing have spent a lot of time creating alternative measures, like the Human Development Index. These always end up being very strongly correlated with GDP. For example, here’s HDI:

It looks like the correlation flattens out a bit at the top, but that could just be because the index can’t go higher than 100; if you let it go higher, the correlation might persist even at high levels of income.

They’re all like this. Googling around, I found something called the Social Progress Imperative, which makes some index of “social progress”. I’ve never seen this index before, and I didn’t even need to look3 to know at the country level it’s going to be strongly correlated with GDP:

As with HDI, the apparent flattening at the top of the curve might just be due to the fact that the index is capped at 100, and rich countries tend to be placed near 100.

Now, these are just correlations. There’s no guarantee that making a country richer will solve all its social problems. Americans are way richer than Europeans, but this hasn’t yet solved our crime problem, our mental health problems, our drug problems, and so on. We need more than GDP growth to make a good society. Yet at the same time, GDP continues to be a great rough-and-ready benchmark for how well a society is doing. And as Raworth wisely points out, for poor countries, material deprivation is the main problem, and that does get solved by economic growth.

So Raworth strikes a balance between the trendy lefty idea of degrowth and the obvious need for poor-country development. She recognizes that growth is important for poor countries, but that it isn’t the be-all and end-all of a good society. And she recognizes that there’s a fundamental tradeoff between economic growth and the environment. So she recommends that rich societies focus less on growth and more on fixing their other problems.

So far, so good. That is a reasonable thing to argue. There are reasons to argue against it, including:

  1. Rich-country growth drives technological progress more than poor-country growth does, and technological progress is often necessary both to save the environment and to improve society.

  2. Rich-country growth creates demand for poor-country exports, and leads to investment in poor countries, thus helping them grow.

  3. If rich democracies don’t grow economically, they may be conquered by countries like Russia that don’t care about the environment or about social progress.

Raworth doesn’t consider these benefits of rich-country growth. But if she had stuck to the core message of environmental tradeoffs and the inadequacy of GDP growth, it would have made for a very good, tight, focused book.

But Doughnut Economics is not a tight, focused book. Instead of delivering a simple, powerful message, Raworth tries to intellectually overthrow the entire edifice of modern economics. And in this task, she fails.

First of all, Raworth doesn’t seem to fully understand the value of the things she criticizes. Of course she’s right that simple pictures like supply-and-demand graphs make great marketing devices, but they’re also much more than that.

For example, you can also use those pictures to do thought experiments. If you model the market for oranges with a supply-and-demand graph, you can think about the effects of a hurricane in Florida as a negative supply shock — you can shift the supply curve to the left, and the graph will tell you that A) fewer oranges get sold, and B) the price of oranges goes up.
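Here's that thought experiment as a few lines of code, with made-up linear demand and supply curves for oranges. The hurricane shifts the supply curve to the left, and the new equilibrium shows exactly the two predictions above:

```python
# A toy linear supply-and-demand model of the orange market.
# All parameter values are invented purely for illustration.
def equilibrium(a, b, c, d):
    """Demand: Q = a - b*P.  Supply: Q = c + d*P.
    The market clears where a - b*P = c + d*P."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

normal = equilibrium(a=100, b=2, c=10, d=3)      # an ordinary harvest
hurricane = equilibrium(a=100, b=2, c=-20, d=3)  # supply curve shifted left

print(f"Normal year:    price {normal[0]:.0f}, quantity {normal[1]:.0f}")
print(f"Hurricane year: price {hurricane[0]:.0f}, quantity {hurricane[1]:.0f}")
# Quantity falls (64 -> 52) and price rises (18 -> 24), just as
# the shifted supply-and-demand graph predicts.
```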

In principle, you could do this kind of thought experiment with Raworth's Doughnut, too. For example, consider an advance in green technology — suppose someone invents a way to make cheap zero-carbon cement. You could model this with the environmental PPF that I illustrated above. This technology loosens the tradeoff between growth and the environment — it shifts the PPF outward, allowing us to have more human wealth for the same amount of environmental degradation:

You could just as easily draw this as an expansion of Raworth’s donut. The outward ring of the donut would expand, and the “safe and just space” for humanity would become more capacious, illustrating the benefits of green technology.

But although she could, Raworth never actually uses her Doughnut like this. Instead of treating it as a predictive analytical tool, she simply uses it as a laundry list of every possible good thing that she would like humanity to have. It's less of a doughnut than an Everything Bagel.

Indeed, Raworth generally rejects the notion of economics as a predictive, analytical tool at all. Like many heterodox types, she argues that economics should be not a positive but a normative discipline — a branch of political philosophy focused on telling us what goals we ought to have for our society, rather than an analytical tool for predicting how economies work. She writes:

In the twentieth century, economics lost the desire to articulate its goals…[John Stuart] Mill began a trend that others would further: turning attention away from naming the economy’s goals and towards discovering its apparent laws…[T]he discussion of the economy’s goals simply disappeared from view. Some influential economists, led by Milton Friedman and the Chicago School, claimed this was an important step forwards, a demonstration that economics had become a value-free zone, shaking off any normative claims of what ought to be and emerging at last as a ‘positive’ science focused on describing simply what is. But this has created a vacuum of goals and values[.]

But in her desire to turn economics back into a philosophy of values, Raworth essentially abandons its use as a tool of prediction and analysis. For prediction and analysis, she turns to other disciplines — climate science, ecology, and “complexity science”. But there are many questions that these disciplines are not set up to answer — and which modern economics can answer.

For example, Raworth would probably agree that well-functioning public transit is an important thing for government and society to provide. If the government is considering building a new train extension, it's important to be able to predict how many people will use it; otherwise, you may end up wasting Earth's scarce resources. Ecology may give you metaphors to think about humans riding trains, and complexity science may simply tell you that a system of transit is complex. That's not very helpful.

But with economics, you can actually estimate a demand curve for transit ridership, and use that estimate to make accurate predictions about how many people will ride the train, as Dan McFadden won a Nobel prize for successfully doing in the 1970s. You can’t do that with a Doughnut!
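For flavor, here's a toy version of the discrete-choice logic. This is not McFadden's actual model, and every coefficient below is a made-up assumption. Each commuter weighs fare cost and travel time, and a logit formula turns those utilities into a predicted ridership share:

```python
import math

# Hypothetical disutility per dollar of fare and per minute of travel time.
beta_cost, beta_time = -0.10, -0.05

def utility(cost_dollars, time_minutes):
    return beta_cost * cost_dollars + beta_time * time_minutes

v_train = utility(3.0, 45)   # cheap but slower
v_car = utility(10.0, 30)    # expensive but faster

# Binary logit: the probability that a commuter chooses the train
p_train = math.exp(v_train) / (math.exp(v_train) + math.exp(v_car))
print(f"Predicted train share: {p_train:.1%}")  # about 49% with these numbers
print(f"Forecast riders per 100,000 commuters: {100_000 * p_train:,.0f}")
```

Estimate the betas from survey or ridership data instead of making them up, and you have the bones of a tool that can forecast demand for a train line that doesn't exist yet.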

In fact, there are plenty of other “positive” questions that modern economics is set up to answer. The electromagnetic frequency spectrum is a commons that must be allocated in order for people to hear each other over a cell phone network; economics can predict the results of various spectrum auction mechanisms. Allocating kidney transplants is important in order to save lives; economic theory can successfully tell you how to get the most kidneys to the most recipients, thus saving the most lives. There are many other examples.

Raworth doesn’t seem very aware of these predictive successes of modern economic theory. Nor does she seem to have a firm idea of what would replace them if the theories of supply and demand, equilibrium, etc. were thrown out the window.

Part of the problem is that Raworth’s understanding of the mainstream economic paradigm she wants to overthrow seems limited. For example, in Chapter 5, she argues for keeping interest rates low by making loans with state-owned banks:

State-owned banks could…use money from the central bank to channel substantial low- or zero-interest loans into investments for long-term transformation, such as affordable and carbon-neutral housing and public transport. It…would shift power away from what Keynes called ‘the rentier…the functionless investor.’

But in the very next paragraph, she denounces quantitative easing, which was a central bank program to keep interest rates low during the Great Recession:

[C]ommercial banks used that money [from QE] to rebuild their balance sheets instead, buying speculative financial assets such as commodities and shares. As a result, the price of commodities such as grain and metals rose, along with the price of fixed assets such as land and housing, but new investment in productive businesses didn’t.

First of all, this misunderstands what it means for a bank to rebuild its balance sheet. Rebuilding a balance sheet doesn't mean buying stocks and houses and commodities; it means holding cash. Banks took the money from QE and essentially stuck it in a vault, holding what's known as "excess reserves":

Raworth also seems to misunderstand how asset prices work. When interest rates go down, the time value of money goes down, which raises the price of assets like stocks and houses. When Keynes talked about the “euthanasia of the rentier”, he meant that investors wouldn’t be able to earn a bunch of free money from keeping their money in bank accounts or government bonds; he didn’t mean that the rate of return on risky assets like stocks and houses would fall. In fact, rate cuts give a permanent boost to the price of those assets, creating a windfall for the rich people who own them.
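You can see the mechanics in a one-function sketch using the standard dividend-discount (Gordon growth) formula; the numbers are illustrative assumptions, not estimates:

```python
# A minimal Gordon-growth sketch of why lower rates raise asset prices.
def price(dividend, discount_rate, growth):
    """Present value of a dividend stream growing at 'growth' forever."""
    return dividend / (discount_rate - growth)

print(price(dividend=5, discount_rate=0.07, growth=0.02))  # 100.0
print(price(dividend=5, discount_rate=0.05, growth=0.02))  # ~166.7
```

Cut the discount rate from 7% to 5% and the same dividend stream is suddenly worth two-thirds more. That's the windfall to asset owners.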

Raworth doesn’t appear to understand this at all. She also advocates for central banks to print money and give it directly to people — a proposal she calls “People’s QE”, though the more common term is “helicopter money”. But beyond whatever inflation issues this would cause, it would also raise asset prices, because people would use some of that money to buy houses and stocks. That would give a windfall to asset owners, just like regular QE did.

Later, in Chapter 7, Raworth argues that the environment would benefit if cash lost value over time — in other words, if the economy had negative nominal interest rates. She writes:

What kind of currency, then, could be aligned with the living world so that it promoted regenerative investments rather than pursuing endless accumulation? One possibility is a…small fee for holding money, so that it tends to lose value the longer it is held…Today the…effect could be achieved…by electronic currency that incurred a charge for being held over time, so curtailing use of money as a store of ever-accumulating value…It would transform the landscape of financial expectations: in essence, the search for gain would be replaced by the search to maintain value.

This gets the effect of depreciating currency exactly wrong. If your cash is losing value, what do you do? You go out and spend it as quickly as you can!4 In fact, my PhD advisor, Miles Kimball, has long advocated for depreciating electronic money as a way to stimulate consumers to spend more! Raworth is proposing something that she thinks would limit consumption, but which would actually supercharge it.

The whole book is littered with these misconceptions. This is sadly typical of heterodox econ critics, who are so sure that they want to throw out all of modern economics that they often can’t be bothered to understand what they’re trying to overthrow. If you’ve decided that a bag contains nothing but trash, why look inside it before you toss it in the dumpster? And yet for all its flaws and limitations, modern economics has succeeded in uncovering some real insights about how the economy works, and a heterodox takeover would throw these out.

Another big problem with Doughnut Economics is its epistemology. When facts seem to support Raworth’s desired conclusions, she doesn’t question them; when they seem to run counter to her ideas, she often dismisses them. For example, in Chapter 6, Raworth discusses the idea that past a certain point, as countries get richer, they pollute less.5 She cautions us not to believe that this correlation represents real causation:

Grossman and Krueger…pointed out that an observed correlation between economic growth and falling pollution didn't demonstrate that growth itself caused the clean-up.

But then on the very next page, she discusses the correlation between economic equality and environmental quality, and interprets it as causation:

Mariano Torras…and James K. Boyce…found that environmental quality is higher where income is more equitably distributed, where more people are literate, and where civil and political rights are better respected. It’s people power, not economic growth per se, that protects local air and water quality.

And yet although Torras and Boyce (1998) propose a causal mechanism from political power to environmental quality, they don’t have a way of testing that linkage; it remains a hypothesis.

And earlier, in Chapter 5, Raworth writes:

Richard Wilkinson and Kate Pickett…discovered that it is national inequality, not national wealth, that most influences nations’ social welfare. More unequal countries, they found, tend to have more teenage pregnancy, mental illness, drug use, obesity, prisoners, school dropouts and community breakdown, along with lower life expectancy, lower status for women and lower levels of trust…More equal societies, be they rich or poor, turn out to be healthier and happier.

These are all just observed correlations, but Raworth interprets them all as causation without questioning whether other factors might be at work.

The issue clearly isn’t that Raworth fails to understand the difference between correlation and causation; she obviously does, because she uses it to question results she doesn’t like, like the finding that higher incomes are correlated with less pollution of certain types.6 But she fails to apply this stringent, critical intellectual standard to correlations between economic equality and various positive outcomes. Causation for me, mere correlation for thee.

Other times, Raworth simply seems uninformed. For example, in Chapter 7 she says that Canada has “failed to achieve any absolute decoupling” between emissions and growth. That’s just wrong; in 2017, when Doughnut Economics went to press, Canada’s emissions were down from their 2007 peak, while GDP was up by 17%. The absolute decoupling was even stronger if measured in consumption-based terms (i.e., adjusting for offshoring of emissions), and in per capita terms.

Later in the chapter, Raworth cites the Easterlin Paradox as evidence that higher incomes don’t make countries happier. But while income certainly has diminishing returns in terms of happiness, the Easterlin Paradox itself was debunked with better data long before Doughnut Economics was published; as far as we can tell, the correlation between income and happiness never breaks down as countries get richer.

Raworth is generally so eager to overthrow modern economics that she will turn to any and every ally that seems to offer an alternative way of doing things. She gushes over blockchain technology, envisioning that Ethereum will help communities go green. She cites Steve Keen’s “Minsky” project as an alternative way to do macroeconomics, despite the fact that this project has produced no useful results whatsoever, and Keen is generally not seen as a credible figure among heterodox economists.

And at times, Raworth seems to cite no evidence beyond her own authority. In Chapter 6 she writes that artificial intelligence “is displacing people with near zero-humans-required production”. Even Daron Acemoglu wouldn’t make so bold a claim. Raworth provides no citation.

This is not to say that everything in Raworth’s book is wrong; far from it. In fact, many of the critiques she levels at mainstream economics are perfectly valid and reasonable — even some of the ones where her supporting evidence is weak. But the book is so eager to launch every possible missile at the discipline of economics that it comes off more as a polemic than a scholarly analysis.

And in doing so, Doughnut Economics fails at the central task it sets out to complete. It does not offer a coherent new paradigm for doing economics. Instead, it offers a hurricane of unrelated critiques and a laundry list of policy goals. It vividly demonstrates that the “heterodox” economics project has not escaped its fundamental dilemma — its relevance is still entirely dependent on the continued intellectual hegemony of the paradigm it was created to challenge.



1. A big problem for the heterodox movement was that most of the really brilliant thinkers who tried to create alternative models — people like Joe Stiglitz, Thomas Piketty, Esther Duflo, Richard Thaler, Dani Rodrik, Paul Romer, and so on — had been co-opted into mainstream academia. That left the heterodox movement mostly headed by people with very strong leftist politics but with only a weak grasp of the ideas they were critiquing.

2. Both of these diagrams are actually much older than the Paul Samuelson-style mathematical economics that came to prominence in the 1960s/70s, and against which the heterodoxers typically rail.

3. In fact, their index has 57 different indicators!! It's incredibly comprehensive and complex, and yet at the end of the day it's highly correlated with good old GDP.

4. Or you invest it in risky, appreciating assets like stocks, which gives a windfall to the rich people who own those assets.

5. This is known as the Environmental Kuznets curve. Despite its limited domain of applicability, I'm especially fond of it, because a long time ago, my family changed their name from Kuznets, which means "Smith" in Russian.

6. Actually, there is now some causal evidence on this. For example, Colmer et al. (2025) find that rising incomes do result in less air pollution at the local level.