2025-10-28 23:21:32
People are less weird than they used to be. That might sound odd, but data from every sector of society is pointing strongly in the same direction: we’re in a recession of mischief, a crisis of conventionality, and an epidemic of the mundane. Deviance is on the decline.
I’m not the first to notice something strange going on—or, really, the lack of something strange going on. But so far, I think, each person has only pointed to a piece of the phenomenon. As a result, most of them have concluded that these trends are:
a) very recent, and therefore likely caused by the internet, when in fact most of them began long before
b) restricted to one segment of society (art, science, business), when in fact this is a culture-wide phenomenon, and
c) purely bad, when in fact they’re a mix of positive and negative.
When you put all the data together, you see a stark shift in society that is on the one hand miraculous, fantastic, worthy of a ticker-tape parade. And a shift that is, on the other hand, dismal, depressing, and in need of immediate intervention. Looking at these epoch-making events also suggests, I think, that they may all share a single cause.
Let’s start where the data is clear, comprehensive, and overlooked: compared to their parents and grandparents, teens today are a bunch of goody-two-shoes. For instance, high school students are less than half as likely to drink alcohol as they were in the 1990s:
They’re also less likely to smoke, have sex, or get in a fight, less likely to abuse painkillers, and less likely to do meth, ecstasy, hallucinogens, inhalants, and heroin. (Don’t kids vape now instead of smoking? No: vaping also declined from 2015 to 2023.) Weed peaked in the late 90s, when almost 50% of high schoolers reported that they had toked up at least once. Now that number is down to 30%. Kids these days are even more likely to use their seatbelts.
Surprisingly, they’re also less likely to bring a gun to school:
All of those findings rely on surveys, so maybe more and more kids are lying to us every year? Well, it’s pretty hard to lie about having a baby, and teenage pregnancy has also plummeted since the early 1990s:
Adults are also acting out less than they used to. For instance, crime rates have fallen by half in the past thirty years:
Here’s some similar data from Northern Ireland on “anti-social behavior incidents”, because they happened to track those:
Serial killing, too, is on the decline:
Another disappearing form of deviance: people don’t seem to be joining cults anymore. Philip Jenkins, a historian of religion and author of a book on cults, reports that “compared to the 1970s, the cult issue has vanished almost entirely”.1 (Given that an increase in cults would be better for Jenkins’ book sales, I’m inclined to trust him on this one.) There is no comprehensive dataset on cult formation, but someone analyzed the cults that have been covered on a popular and long-running podcast and found that most of them started in the 60s, 70s, and 80s, with a steep dropoff after 2000:
Crimes and cults are definitely deviant, and they appear to be on the decline. That’s good. But here’s where things get surprising: neutral and positive forms of deviance also seem to be getting rarer. For example—
Moving away from home isn’t necessarily good or bad, but it is kinda weird. Ditching your hometown usually means leaving behind your family and friends, the institutions you understand, the culture you know, and perhaps even the language you speak. You have to be a bit of a misfit to do such a thing in the first place, and becoming a stranger makes you even stranger.
I always figured that every generation of Americans is more likely to move than the last. People used to be born and die in the same zip code; now they ping-pong across the country, even the whole world.
I was totally wrong about this. Americans have been getting less and less likely to move since the mid-1980s:
This effect is mainly driven by young people:
These days, “the typical adult lives only 18 miles from his or her mother”.
Creativity is just deviance put to good use. It, too, seems to be decreasing.
A few years ago, I analyzed a bunch of data and found that all popular forms of art had become “oligopolies”: fewer and fewer of the artists and franchises own more and more of the market. Before 2000, for instance, only about 25% of top-grossing movies were prequels, sequels, spinoffs, etc. Now it’s 75%.
The story is the same in TV, music, video games, and books—all of them have been oligopolized. As one observer points out, we’re still reading comic books about superheroes that were invented in the 1960s, buying tickets to Broadway shows that premiered decades ago, and listening to the same music that our parents and grandparents listened to.
You see less variance even when you look only at the new stuff. According to analyses by The Pudding, popular music is now more homogeneous and has more repetitive lyrics than ever.
Also, the cover of every novel now looks like this:
But wait, shouldn’t we be drowning in new, groundbreaking art? Every day, people post ~100,000 songs to Spotify and upload 3.7 million videos to YouTube.3 Even accounting for Sturgeon’s Law (“90% of everything is crap”), that should still be more good stuff than anyone could appreciate in a lifetime. And yet professional art critics are complaining that culture has come to a standstill. According to The New York Times Magazine,
We are now almost a quarter of the way through what looks likely to go down in history as the least innovative, least transformative, least pioneering century for culture since the invention of the printing press.
Remember when the internet looked like this?

That era is long gone. Take a stroll through the Web Design Museum and you’ll immediately notice two things:
1. Every site has converged on the same look: sleek, minimalist design elements with lots of pictures
2. Website aesthetics changed a lot from the 90s to the 2000s and the 2010s, but haven’t changed much from the 2010s to now
A few examples:
This same kind of homogenization has happened on the parts of the internet that users create themselves. Every MySpace page was a disastrous hodgepodge; every Facebook profile is identical except for the pictures. On TikTok and Instagram, every influencer sounds the same.4 On YouTube, every video thumbnail looks like it came out of one single content factory:
No doubt, the internet is still basically a creepy tube that extrudes a new weird thing every day: Trollface, the Momo Challenge, skibidi toilet. But notice that the raw materials for many of these memes are often decades old: superheroes (1930s-1970s), Star Wars (1977), Mario (1981), Pokémon (1996), SpongeBob SquarePants (1999), Pepe the Frog (2005), Angry Birds (2009), Minions (2010), Minecraft (2011). Remember ten years ago, when people found a German movie that has a long sequence of Hitler shouting about something, and they started changing the subtitles to make Hitler complain about different things? Well, they’re still doing that.
The physical world, too, looks increasingly same-y. As Alex Murrell has documented,5 every cafe in the world now has the same bourgeois boho style:
Every new apartment building looks like this:
The journalist Kyle Chayka has documented how every Airbnb now looks the same. And even super-wealthy mega-corporations work out of offices that look like this:

People usually assume that we don’t make interesting, ornate buildings anymore because it got too expensive to pay a bunch of artisans to carve designs into stone and wood.6 But the researcher Samuel Hughes argues that the supply-side story doesn’t hold up: many of the architectural flourishes that look like they have to be done by hand can, in fact, be done cheaply by machine, often with technology that we’ve had for a while. We’re still capable of making interesting buildings—we just choose not to.
Brands seem to be converging on the same kind of logo: no images, only words written in a sans serif font that kinda looks like Futura.7
An analysis of branded Twitter accounts found that they increasingly sound alike:
Most cars are now black, silver, gray, or white:8
When a British consortium of science museums analyzed the color of their artifacts over time, they found a similar, steady uptick in black, gray, and white:
Science requires deviant thinking. So it’s no wonder that, as we see a decline in deviance everywhere else, we’re also seeing a decline in the rate of scientific progress. New ideas are less and less likely to displace old ideas, experts rate newer discoveries as less impressive than older discoveries, and we’re making fewer major innovations per person than we did 50 years ago.
You can spot this scientific bland-ification right away when you read older scientific writing. As the same person who did the cult analysis points out, scientific papers used to have style. Now they all sound the same, and they’re all boring. Essentially 100% of articles in medical journals, for instance, now use the same format (introduction, methods, results, and discussion):
This isn’t just an aesthetic shift. Standardizing your writing also standardizes your thinking—I know from firsthand experience that it’s hard to say anything interesting in a scientific paper.
Whenever I read biographies of famous scientists, I notice that a) they’re all pretty weird, and b) I don’t know anyone like them today, at least not in academia. I’ve met some odd people at universities, to be sure, but most of them end up leaving, a phenomenon one biologist calls “the flight of the Weird Nerd from academia”. The people who remain may be super smart, but they’re unlikely to rock the boat.
Whenever you notice some trend in society, especially a gloomy one, you should ask yourself: “Did previous generations complain about the exact same things?” If the answer is yes, you might have discovered an aspect of human psychology, rather than an aspect of human culture.
I’ve spent a long time studying people’s complaints from the past, and while I’ve seen plenty of gripes about how culture has become stupid, I haven’t seen many people complaining that it’s become stagnant.9 In fact, you can find lots of people in the past worrying that there’s too much new stuff. One hundred years ago, people were having nervous breakdowns about the pace of technological change. They were rioting at Stravinsky’s Rite of Spring and decrying the new approaches of artists like Kandinsky and Picasso. In 1965, Susan Sontag wrote that new forms of art “succeed one another so rapidly as to seem to give their audiences no breathing space to prepare”. Is there anyone who feels that way now?
Likewise, previous generations were very upset about all the moral boundaries that people were breaking, e.g.:
In olden days, a glimpse of stocking
Was looked on as something shocking
But now, God knows
Anything goes
-Cole Porter, 1934
Back then, as far as I can tell, nobody was encouraging young Americans to party more; now people do. So the decline of deviance is not just a perennial complaint. People worrying about their culture being dominated by old stuff—that’s new.
That’s the evidence for a decline in deviance. Let’s see the best evidence against.
As I’ve been collecting data for this post over the past 18 months or so, I’ve been trying to counteract my confirmation bias by keeping an eye out for opposing trends. I haven’t found many—so maybe that’s my bias at work—but here they are.
First, unlike other forms of violence, mass shootings have become more common since the 90s (although notice the Y-axis: we’re talking about an extremely small subset of all crime):
Baby names have gotten a lot more unique:
And when you look at timelines of fashion, you certainly see a lot more change from the 1960s to the 2010s than you do from the 1860s to 1910s:
That at least hints that the decline of deviance isn’t a monotonic, centuries-long trend. And indeed, lots of the data we have suggest that things started getting more homogeneous somewhere between the 1980s and 2000s.
There are a few people who disagree with at least parts of the cultural stagnation hypothesis. One literature Substacker reports that “literature is booming”, and a music Substacker is skeptical about stagnation in his industry. The internet ethnographer Katherine Dee argues that the most interesting art is happening in domains we don’t yet consider “art”, like social media personalities, TikTok sketch comedy, and Pinterest mood boards.10 I’m sure there’s some truth to all of this, but I’m also pretty sure it’s not enough to cancel out the massive trends we see everywhere else.
Maybe I’m missing all the new and exciting things because I’m just not cool and plugged in enough? After all, I’ll be the first to tell you there’s a lot of writing on Substack (and the blogosphere more generally) that’s very good and very idiosyncratic—just look at the winners of my blog competitions this year and last year. But I only know about that stuff because I read tons of blogs. If I was as deep into YouTube or podcasts, maybe I’d see the same thing there too, and maybe I’d change my tune.
Anyway, I know that it’s easy to perceive a trend when there isn’t any (see: The Illusion of Moral Decline, You’re Probably Wrong About How Things Have Changed). There’s no way of randomly sampling all of society and objectively measuring its deviance over time. The data we don’t have might contradict the data we do have. But it would have to be a lot of data, and it would all have to point in the opposite direction.
It really does seem like we’re experiencing a decline of deviance, so what’s driving it? Any major social trend is going to have lots of causes, but I think one in particular deserves most of the credit and the blame:
Life is worth more now. Not morally, but literally. This fact alone can, I think, go a long way toward explaining why our weirdness is waning.
When federal agencies do cost-benefit analyses, they have to figure out how much a human life is worth. (Otherwise, how do you know if it’s worth building, say, a new interstate that will help millions get to work on time but might cause some excess deaths due to air pollution?) They do this by asking people how much they would be willing to pay to reduce their risk of dying, which they then use to calculate the “value of a statistical life”. According to an analysis by the Substacker Linch, those statistical lives have gotten a lot more valuable over time:
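To make that calculation concrete, here is a minimal sketch of the standard arithmetic; the dollar figure and the size of the risk reduction are numbers I made up for illustration, not anything from that analysis:

```python
# Toy illustration of the "value of a statistical life" calculation.
# The $500 and 1-in-10,000 figures are assumptions for the example, not data.
willingness_to_pay = 500        # dollars each person says they'd pay...
risk_reduction = 1 / 10_000     # ...to cut their chance of dying by this much

# If 10,000 people each pay $500, they collectively spend $5,000,000 to
# prevent one expected death. That ratio is the value of a statistical life.
vsl = willingness_to_pay / risk_reduction
print(f"Value of a statistical life: ${vsl:,.0f}")   # $5,000,000
```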
There are, I suspect, two reasons we hold onto life more dearly now. First: we’re richer. Generations of economic development have put more cash in people’s pockets, and that makes them more willing to pay to de-risk their lives—both because they can afford it, and because the life they’re insuring is going to be more pleasant. But as Linch points out, the value of a statistical life has increased faster than GDP, so that can’t be the whole story.
Second: life is a lot less dangerous than it used to be. If you have a nontrivial risk of dying from polio, smallpox, snake bites, tainted water, raids from marauding bandits, literally slipping on a banana peel, and a million other things, would you really bother to wear your seatbelt? Once all those other dangers go away, though, doing 80mph in your Kia Sorento might suddenly become the riskiest part of your day, and you might consider buckling up for the occasion.
Our super-safe environments may fundamentally shift our psychology. When you’re born into a land of milk and honey, it makes sense to adopt what ecologists refer to as a “slow life history strategy”—instead of driving drunk and having unprotected sex, you go to Pilates and worry about your 401(k). People who are playing life on slow mode care a lot more about whether their lives end, and they care a lot more about whether their lives get ruined. Everything’s gotta last: your joints, your skin, and most importantly, your reputation. That makes it way less enticing to screw around, lest you screw up the rest of your time on Earth.
(“What is it you plan to do with your one wild and precious life?” Make sure I stand up from my desk chair every 20-30 minutes!)
I think about it this way: both of my grandfathers died in their 60s, which was basically on track with their life expectancy the year they were born. I’m sure they hoped to live much longer than that, but they knew they might not make it to their first Social Security check. Imagine how differently you might live if you thought you were going to die at 65 rather than 95. And those 65 years weren’t easy, especially at the beginning: they were born during the Depression, and one of them grew up without electricity or indoor plumbing.
Plus, both of my grandpas were drafted to fight in the Korean War, which couldn’t have surprised them much—the same thing had happened to their parents’ generation in the 1940s and their grandparents’ generation in the 1910s. When you can reasonably expect your government to ship you off to the other side of the world to shoot people and be shot at in return, you just can’t be so precious about your life.11
My life is nothing like theirs was. Nobody has ever asked me to shoot anybody. I’ve got a big-screen TV. I could get sushi delivered to my house in 30 minutes. The Social Security Administration thinks I might make it to 80. Why would I risk all this? The things my grandparents did casually—smoking, hitching a ride in the back of a pickup truck, postponing medical treatment until absolutely necessary—all of those feel unthinkable to me now.12 I have a miniature heart attack just looking at the kinds of playgrounds they had back then:

I know life doesn’t feel particularly easy, safe, or comfortable. What about climate change, nuclear war, authoritarianism, income inequality, etc.? Dangers and disadvantages still abound, no doubt. But look, 100 years ago, you could die from a splinter. We just don’t live in that world anymore, and some part of us picks that up and behaves accordingly.
In fact, adopting a slow life strategy doesn’t have to be a conscious act, and probably isn’t. Like most mental operations, it works better if you can’t consciously muck it up. It operates in the background, nudging each decision toward the safer option. Those choices compound over time, constraining the trajectory of your life like bumpers on a bowling lane. Eventually this cycle becomes self-reinforcing, because divergent thinking comes from divergent living, and vice versa.13
This is, I think, how we end up in our very normie world. You start out following the rules, then you never stop, then you forget that it’s possible to break the rules in the first place. Most rule-breaking is bad, but some of it is necessary. We seem to have lost both kinds at the same time.14
The sculptor Arturo di Modica ran away from his home in Sicily to go study art in Florence. He later immigrated to the US, working as a mechanic and a hospital technician to support himself while he did his art. Eventually he saved up enough to buy a dilapidated building in lower Manhattan, which he tore down so he could illegally build his own studio—including two sub-basements—by hand, becoming an underground artist in the literal sense. He refused to work with an art dealer until 2012, when he was in his 70s. His most famous work, the Charging Bull statue that now lives on Wall Street, was deposited there without permission or payment; it was originally impounded before public outcry caused the city to put it back. Di Modica didn’t mean it as an avatar of capitalism—the stock market had tanked in 1987, and he intended the bull to symbolize resilience and self-reliance:
My point was to show people that if you want to do something in a moment things are very bad, you can do it. You can do it by yourself. My point was that you must be strong.
Meanwhile, “Fearless Girl”, the statue of a girl standing defiantly with her hands on her hips that was installed in front of the bull in 2017, was commissioned by an investment company to promote a new index fund.
Who would live di Modica’s life now? Every step was inadvisable: don’t run away from home, don’t study art, definitely don’t study sculpture, don’t dig your own basement, don’t dump your art on the street! Even if someone was crazy enough to pull a di Modica today, who could? The art school would force you to return home to your parents, the real estate would be unaffordable, the city would shut you down.
The decline of deviance is mainly a good thing. Our lives have gotten longer, safer, healthier, and richer. But the rise of mass prosperity and the disappearance of everyday dangers have also made trivial risks seem terrifying. So as we tame every frontier of human life, we have to find a way to keep the good kinds of weirdness alive. We need new institutions, new eddies and corners and tucked-away spaces where strange things can grow.
All of this is within our power, but we must decide to do it. For the first time in history, weirdness is a choice. And it’s a hard one, because we have more to lose than ever. If we want a more interesting future, if we want art that excites us and science that enlightens us, then we’ll have to tolerate a few illegal holes in the basement, and somebody will have to be brave enough to climb down into them.

I’d love to read a version of Robert Putnam’s Bowling Alone specifically about the death of cults. Drawing Pentagrams Alone?
Whenever I tell people about the cult deficit, they offer two counterarguments. First: “Isn’t SoulCycle a cult? Isn’t Taylor Swift fandom a cult? Aren’t, like, Labubus a cult?” I think this is an example of prevalence-induced concept change: now that there are fewer cults, we’re applying the “cult” label to more and more things that are less and less cult-y. If your spin class required you to sell all your possessions, leave your family behind, and get married to the instructor, that would be a cult.
Second: “Aren’t conspiracy theories way more popular now? Maybe people are satisfying all of their cult urges from the comforts of their own home, kinda like how people started going to church on Zoom during the pandemic.” It’s a reasonable hypothesis, but the evidence speaks against it. A team of researchers tracked 37 conspiracy beliefs over time, and found no change in the percentage of people who believe them. Nor did they find any increase or decrease in the number of people who endorse domain-general tinfoil-hat thinking, like “Much of our lives are being controlled by plots hatched in secret places”. It seems that hardcore cultists have become an endangered species, while more pedestrian conspiracy theorists are merely as prevalent as they ever were.
There is some skepticism about the Spotify numbers, and I’m sure the YouTube numbers are dubious as well—a big chunk of that content has to be spam, duplicates, etc. But reduce those amounts even by 90% and you still have an impossible number of songs and videos.
According to one writer, the Instagram accent is what you end up with when you optimize your speaking for attracting and holding people’s attention. For more insights like that one, check out his new book.
Thanks to the article The Age of the Surefire Mediocre for these and other examples.
Or maybe the conspiracy theorists are right and it’s because some kind of apocalypse wiped out our architectural knowledge and the elites are keeping it hushed up.
Cracker Barrel tried to do the same thing recently and was hounded so hard on the internet that they brought their old logo back.
Note that this data comes from Poland, but if you look up images of American parking lots in previous decades vs. today, you’ll find the same thing.
For instance, T.S. Eliot, 1949:
We can assert with some confidence that our own period is one of decline; that the standards of culture are lower than they were fifty years ago; and that the evidences of this decline are visible in every department of human activity.
Dee’s original article is now paywalled, so I’m linking to a summary of her argument.
Almost all the data I’ve shown you is from the US, so I’m interested to hear what’s going on in other parts of the world. My prediction is that development at first causes a spike in idiosyncrasy as people gain more ways to express themselves—for example, all cars are going to look the same when the only one people can afford is a Model T, and then things get more interesting when more competitors emerge. But as the costs of being weird increase, you’ll eventually see a decline in deviance. That’s my guess, anyway.
Fast life strategies are still possible today, but they’re rarer. Once, in high school, I was over at a friend’s house and his mom lit a cigarette in the living room. I must have looked shocked, because she shrugged at me and said, “If the cigarettes don’t kill me, something else will.” I could at least understand where she was coming from: her husband had died in a car accident before he even turned 50. When you feel like your ticket could get punched at any time, why not enjoy yourself?
Perhaps that’s also why we’ve become so concerned about the safety of our children, when previous generations were much more laissez-faire. This map traces how far children were allowed to roam from home across generations of a single family, but the pattern seems broadly true:
There’s a paradox here: shouldn’t safer, wealthier lives make us more courageous? Like, can’t you afford to take more risks when you have more money in the bank?
Yes, but you won’t want to. I saw this happen in real time when I was a resident advisor: getting an elite degree ought to increase a student’s options, but instead it makes them too afraid to choose all but a few of those options. Fifty percent of Harvard graduates go to work in finance, tech, and consulting. Most of them choose those careers not because they love making PowerPoints or squeezing an extra three cents of profit out of every Uber ride, but because those jobs are safe, lucrative, and prestigious—working at McKinsey means you won’t have to be embarrassed when you return for your five-year reunion. All of these kids dreamed of what they would gain by going to an Ivy League school; none of them realized it would give them something to lose.
In fact, the richest students are the most likely to pick the safest careers:
Also c’mon this chart is literally made by someone named Kingdollar.
2025-10-14 23:48:04
Daniel Kahneman and Amos Tversky were two of the greatest psychologists of all time. Maybe the greatest. The fields we now call “behavioral economics” and “judgement and decision-making” are basically just people doing knockoff Kahneman and Tversky studies from the 70s and 80s.
(You know how The Monkees were created by studio executives to be professional Beatles impersonators, and then they actually put out a few good albums, but they never reached the depth or the importance of the band they were—pun intended—aping? Think of Kahneman and Tversky as the Beatles and the past 50 years of judgement and decision-making research as The Monkees.1)
Amos ‘n’ Danny were masters of the bamboozle: trick questions where the intuitive answer is also the wrong answer. Quick: are there more words that start with R, or words that have R in the third position? Most people think it’s the former because it’s easier to come up with r-words (rain, ring, rodent) than it is to come up with _ _ -r words (uh...farm...fart...?). But there are, in fact, more words where r comes third. That single silly example actually gives us an insight into how the mind calculates frequencies—apparently, not by conducting a census of your memories, but by judging how easy or hard it feels when you try to think of examples.
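If you want to check that claim yourself, here’s a minimal sketch that tallies both kinds of word in a dictionary file; the /usr/share/dict/words path is an assumption, and the exact counts will depend on whichever word list you point it at:

```python
# Count words that start with "r" vs. words with "r" in the third position.
# Assumes a standard Unix word list; any plain one-word-per-line file will do.
def count_r_positions(path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3 and word.isalpha():
                if word[0] == "r":
                    first += 1
                if word[2] == "r":
                    third += 1
    return first, third

first, third = count_r_positions()
print(f"start with r: {first}, r in third position: {third}")
```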
These little cognitive gotchas show us how the mind works by showing us how it breaks. It’s like a visual illusion, but for your whole brain. This is the understated genius of Kahneman and Tversky—it’s not like other research, where some egghead writes a paper about some molecule and then ten years later you can buy a pill with the molecule in it and it cures your disease. No, for K&T, the paper is also the pill.
But the duo was so successful that they have, in part, undone their own legacy. The tricks were so good that everybody learned how they worked, and now it’s hard to be bamboozled anymore. We’re no longer surprised when the rabbit comes out of the hat. And that’s a shame, because the best part of their work was that half-second of disbelief where you go “no no that can’t be right!” and then you realize it is right. That kind of feeling loosens your assumptions and dissolves your certainty, and that’s exactly what most of us need: an antidote to our omnipresent overconfidence.
I’m here to bring the magic back. I’m no Kahneman nor Tversky, but I can at least do two things: resurface some of their long-forgotten deep cuts, and document a few tricks that bamboozled the Bamboozlers-in-Chief themselves—including one that, I think, I have just discovered and am documenting for the first time.
Let’s see if we can find another rabbit in this hat.
2025-09-30 22:35:46
“Do what you love” is the most dangerous sentence in the English language.
We send kids into the world with that mantra in their heads, and then they return shellshocked and ashamed, because they couldn’t do it. Many of them end up believing that they are the one person on Earth who just doesn’t fit in, the sad sap whose preferences and talents—whatever they may be! If they even exist!—simply do not match the opportunities available, like a puzzle piece that got mixed into the wrong box.
Some of them feel that way forever, always a little unsettled and unsatisfied. A few of them turn into cynics, convinced that the idea of a “dream job” is, like the all-seeing Santa Claus, a fiction foisted upon children to keep them docile.
The problem is that nobody ever tells you what it feels like to love something. Everybody thinks love feels like perpetual bliss. It doesn’t. It mainly feels annoying.
I’m at the point in my life where I know plenty of people who have “made” it, people who have become the things they always hoped they would be: doctors, lawyers, academics, actors, entrepreneurs, etc., and the one emotion that best describes their daily experience is annoyed.
They’re annoyed! They got exactly what they wanted and, most of the time, it bugs them. When I call them up, they do not wax poetic about how achieving their childhood dreams has brought them deep and everlasting happiness. They tell me about their dumbass bosses, their crazy patients, the cases that are driving them nuts, the prototypes that they can’t get working.
Some of these people are honked off because they’ve chosen the wrong career. But most of them will tell you that they love their jobs, and they mean it. Which is weird because, if you watch them closely, they do not spend their workdays laughing and smiling and saying things like “yippee!” or “wahoo!”. They are, most of the time, mad about something. Same goes for me—I’m annoyed all day. And yet none of us can stop. When we say, “I love my job,” we really mean, “My job pisses me off, but in an enchanting way.”
What’s going on here?
I think annoyance, like cholesterol, has a good kind and a bad kind. The bad kind makes you want to flee: backed-up traffic, crying babies on planes, colleagues who say they can use Excel when really they mean they’ve heard of Excel. But the good kind of annoyance draws you in rather than driving you away. It’s that feeling you get when there’s something you can and must make right, the way some people feel when they see a picture frame that’s just a bit askew, except a lot more and all the time.
Whenever I fix the thing that’s annoying me, it does feel “fun”, I guess, but it’s not fun in the way that, say, going down a waterslide is fun. It’s a textured pleasure, the kind of enjoyment I assume that whiskey enthusiasts get from drinking extremely peaty, smoky scotch—on the one hand, it burns, but on the other hand, I kinda like how it burns.
Good annoyance is, I think, the only thing that keeps people coming back for more, indefinitely. There is nothing that a human with a normally-functioning brain can do for eight hours a day, every day, for their whole career, that feels “fun” the whole time, or even a large fraction of the time. We’re just too good at adapting to things. And thank God, because if we never got bored, we never would have survived. Our ancestors would have spent their days staring doe-eyed and slack-jawed at, like, a really pretty leaf or something, and they would have gotten eaten by leopards. Fun fades, but irritation is infinite.
The right job for you, then, is the one that puts you in charge of the things that annoy you. And this is where we steer people wrong. We imply that the right occupation for them is the one that lets them float through their days in a kind of dreamy pleasantness, when in fact they should be alternating between vexation and gratification. Or we let them choose proximity over responsibility, prioritizing what they’re working in rather than what they’re working on.
I had a lot of artsy friends in college who did this after graduation—they wanted to play Hamlet, but they instead ended up drafting marketing emails for a summer repertory production of Guys and Dolls. It’s no surprise that they hated this, because being in the presence of your annoyances without being in control of them is a recipe for insanity. That’s like working at the Museum of Slightly Crooked Pictures, where all the frames are wonky but you’re not allowed to straighten them.
Good annoyance is ultimately the recipe for greatness. It certainly seems that way, at least, because the people at the top of their game always seem kinda ticked off. You’d think that folks who are famous for being good at something would experience intense pleasure all the time from doing that thing; otherwise, how can they stand to do it so much? Plus, everybody’s always telling them how wonderful they are, and that must feel great. And yet, when these people are candid about what’s going on in their heads, it turns out to be a little complicated in there. For instance, the Nobel Prize-winning novelist Mario Vargas Llosa likened being a writer to having a tapeworm inside you:
The literary vocation is not a hobby, a sport, a pleasant leisure-time activity. […] Like [my friend] José Maria’s tapeworm, literature becomes a permanent preoccupation, something that takes up your entire existence, that overflows the hours you devote to writing and seeps into everything else you do, because the literary vocation feeds off the life of a writer just as the tapeworm feeds off the bodies it invades.
Here’s Andre Agassi on tennis:
I play tennis for a living, even though I hate tennis, hate it with a dark and secret passion, and always have. [...] I slide to my knees and in a whisper I say: Please let this be over.
Then: I’m not ready for it to be over.
Marie Curie on getting an education:
One never notices what has been done; one can only see what remains to be done.
Billy Mitchell, one of the best Pac-Man players in the world, on playing Pac-Man:
I enjoy the victory of it, but it’s pure pain [...] I don’t know anything about a zone, or getting into a flow. It’s constant intensity and concentration. Nothing’s flowing. You squeeze a joystick in your hand for hours and it starts to feel like it’s going to shatter your hand.
And Meryl Streep on acting:
Bleehhh...eehhh...uhh...god, I hate this sometimes.
Every beginner needs to have their nose rubbed in this idea. When you’re just starting out, it’s easy to think that expertise will cure your doubts and conquer your frustrations, that you’ll unlock a higher plane of pleasure once you can play in tune, sink a shot, or write a sentence that doesn’t suck.
Maybe I’ve just never gotten that good at anything, but this has never happened to me. I have never conquered my doubts and frustrations; I merely traded them in for newer models. I can do more with less effort, but nothing feels effortless. If anything, I’m more annoyed than when I started. That’s why I’m still here. I wonder: is this how it will always feel? That’s what I’m afraid of, and what I’m hoping for.
How can something feel so good and so bad at the same time?
Here’s an explanation. According to the cybernetic theory of psychology, the mind is a stack of control systems all trying to keep things copacetic. In this model, happiness comes not from the absence of error in these systems, but from the correction of error. That is, happiness isn’t a full belly, it’s a belly that’s being filled. So if you wanna feel good, you gotta let things get at least a little out of whack so you can whack them back into place again. The whacking is, in fact, the fun part.
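Here’s a toy sketch of that idea (my own illustration, with arbitrary numbers, not anything from the cybernetic literature): a little control loop where the “reward” on each step is the amount of error just corrected, not the amount of error remaining:

```python
# A control system nudging its state toward a set point. "Reward" here is
# the error corrected on each step -- the belly being filled -- rather than
# the absence of error. The target, gain, and step count are arbitrary.
def run_loop(target=10.0, state=0.0, gain=0.3, steps=8):
    prev_error = abs(target - state)
    for t in range(steps):
        state += gain * (target - state)   # correct toward the set point
        error = abs(target - state)
        reward = prev_error - error        # pleasure comes from error shrinking
        print(f"step {t}: state={state:5.2f}  error={error:5.2f}  reward={reward:5.2f}")
        prev_error = error
    # Once error reaches zero, reward dries up -- a full belly is less fun
    # than one being filled.

run_loop()
```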

This is why rich folks do extreme sports, why childless retirees spend their days on make-work projects of pretend importance, and why lottery winners very rarely quit their jobs. Everyone has this dream of a frictionless existence; nobody seems to like it much when they get it. Infinity pools and bottomless margaritas are fine for a time, but eventually you start wishing the moles would pop back up again so you could hit ‘em with a mallet.
This out-of-whack/back-in-whack cycle is not a source of motivation. It is motivation. Annoyance is the only truly renewable resource known to man.
When people try and fail to increase their “productivity”, it’s because they miss this point. There is no system that can conjure up annoyance where there isn’t any. You cannot trick yourself into caring about something by putting it on your Google calendar. If you have a productivity problem, you’re either not annoyed enough, you’re annoyed by something you can’t actually control, or you’re annoyed in the bad way, the kind that makes you want to skip town rather than dig in.
It’s easy to get stuck on the wrong problems because we have such strong theories about the things we should care about. But we don’t really get to pick the things that bug us. Why are some people annoyed by crooked picture frames while other people get annoyed by securities fraud, or bland chicken parmesan, or inefficient assembly lines? I dunno man, people are crazy. There’s one guy who is so annoyed at people using the phrase “comprised of” when they actually mean “composed of” that he fixes it on tens of thousands of Wikipedia pages. No amount of to-do lists, bullet journals, pomodoros, kanbans, MoSCoW methods, Eisenhower decision matrices, or frog-eating can rival the power of one dude who is pissed off in a very specific way.
Human motivation didn’t evolve so we could show up to work on time. This irritation-reduction system drives everything we do, regardless of whether we get a paycheck for it. That includes even the most selfless acts, the ones that we’re supposed to do despite our motivations.
Recently, some of my friends were swapping stories about surprisingly kind strangers, and I couldn’t help but notice that every Good Samaritan had acted out of annoyance. A construction worker spotted something amiss with my friend’s bike chain while she was waiting at a red light, and he came over and knocked it back into place, telling her, “I just can’t bear to see it like that.” Another friend was moving into an apartment, and their new neighbor spotted them struggling with a couch and came over to help, muttering “I can’t watch you guys do this on your own.” A third returned an envelope of cash they found because they “would hate to be the kind of person who kept it for themselves.”
I think this is actually the way most good-hearted people work: they’re motivated not by warm fuzzies, but by cold pricklies. They help because they can’t stand the sight of someone in need. The golden glow of altruism comes later, if at all, when they’re walking home and thinking about what a good person they are.
The causes that we stick with, then, aren’t the ones that do the most good, nor the ones that align with whatever we think are our most fundamental values. No, we stick with the causes that give us the same perverse pleasure that you get from popping a pimple.
We’d do a lot more for each other if we acknowledged this fact. Altruism doesn’t need to feel like pure self-flagellation or pure self-congratulation. A lot of the time, if you’re doing it right, it’ll feel irritating. Not all heroes wear capes—some of them wear an exasperated look of “are you seriously trying to lift that couch by yourselves”.
What if love itself is just another instance of good annoyance?
I had this friend in high school, let’s call her Vanessa. I ran into her once a couple years after graduation, when she was living with her boyfriend and madly in love with him. I was skeptical of all things love at the time, so I asked her, “How can anyone ever truly love someone else? What about when your boyfriend has diarrhea? Do you still love him then?” Vanessa scoffed. When her boyfriend has diarrhea, she said, it doesn’t really have anything to do with her. Their relationship is about the nice stuff, not the nasty stuff.
A year later, Vanessa came home from work early and found her boyfriend in bed with another woman.1 It turned out, unfortunately, he did a lot of nasty things that didn’t have anything to do with her.
I think Vanessa and I were both wrong. Yes, it is your business when the person you love has diarrhea, but no, you don’t have to be happy about it. You can be in love and still be annoyed. In fact, love may require a certain amount of frustration because, as Ava puts it, “closeness is fundamentally annoying”:
Closeness is annoying because it’s about the surrender of control. You’re trying to fall asleep, and beside you your partner is snoring. You lightly push their jaw to the side so it’ll stop. Two minutes later, the snoring commences again. You lay there in the dark wondering how you got here. Oh, right: three years ago at a party you saw someone and thought they were very beautiful.2
A lot of people who are confused about love are actually waiting for permission to feel annoyed. They think love is supposed to make you crazy in the cartoon sense, where the mere presence of your beau will make your eyes turn into hearts and go AWOOGA. Love does drive you crazy like that, but it also drives you crazy in the sense of “my spouse only likes three songs and insists on playing them over and over again on our roadtrip”. If you’re looking for the person who will never annoy you, you’ll never stop looking. But if you find someone who annoys you juuust right, you’ll never stop loving them, nor will you ever get to hear a song in the car that is not “Go Your Own Way” by Fleetwood Mac.
I always thought that negative emotions were bad. (“Negative” is right there in the name.) Whenever I felt sad or upset or whatever, I’d be like, “oh no!! A bad feeling!! Something’s gone wrong!! I need to speak to a manager!!” Every foul mood felt like an emergency, like the forces of darkness had breached the keep and were killing my dudes and smashing all my nice things.
I know that half the world has beaten me to this realization, but there’s no such thing as an emotion that’s purely bad or purely good. Emotions aren’t solid tones, like a middle C ringing out at exactly 261 hertz—not the interesting ones, anyway. Nobody pops in their AirPods to listen to four straight minutes of G major chords. The music that holds our attention has overtones, dissonances, dynamics, and syncopation; it has bits you like less, bits you like more, and bits you didn’t think you liked but you actually do.
Same goes for any emotion that’s strong enough to hold our attention for a long time. All of my fixations have centered around my irritations. That squeeze and release, the indignation and the elation, the rage and the rapture—it feels good and it feels bad and mainly it feels necessary.
So maybe the question we should be asking young folks is not “what do you love?” but “what bugs the hell out of you?” What can’t you stand, and what can’t you quit? People say “do what you love and you’ll never work a day in your life” and they’re right, because you’ll get bored and go home. If you find the job, the cause, and the partner that annoy you in exactly the right way, you’ll never know peace again. Nor will you want to. Please let it be over, I’m not ready for it to be over!
This was back in the early days of Facebook, before people had realized that the purpose of social media is to make yourself look good rather than bad.
Credit to Ava’s post for inspiring mine.
2025-09-16 22:14:58
Back in May, I announced the Second Annual Experimental History Blog Post Competition, Extravaganza, and Jamboree. The prompt was “send me a never-before-published blog post, and if I like it I’ll post about it and send you cash.”
I got 109 submissions from folks all over the world, including consultants, PhD students, entrepreneurs, doctors, grandmas, professors, software engineers, teachers, pseudonymous weirdos, and several people who described themselves as “indescribable”. Here are the winners!
Here’s the most important, most incomprehensible, and most annoying question facing humanity right now: what is intelligence, anyway? Tackling this question is like bellyflopping off the high dive—it hurts to do, and it hurts to watch. But Troesh does it in the most delightful and unexpected way: he gives you a bunch of insightful answers and thwacks you on the head between each one. There is no way to summarize this piece, but here’s a lil snippet that comes after Troesh suggests using crows to emulate transistors:
There is only one way to make salt; salt molecules cannot be “more salty” or “less salty”. But there are infinite ways to make pepper -- a messy blend of biomolecules created by messy genomes created by messy selection pressures.
If intelligence is like salt, then crows are very expensive (and cute) transistors. If intelligence is like pepper, a murder could someday be President of the United States.
This is exactly the kind of thing that blog posts are for.
Troesh’s bio:
Taylor Troesh is a self-proclaimed “connoisseur of crap”. He is currently building Scrapscript among many other projects. To support his work, you can hire him to solve “nasty” problems. You can read more of Taylor's work at taylor.town.
One of the best things a blog post can do is find some obscure source that you would never read and turn it into a short-ish post that you will read. This isn’t just summarizing—it’s pre-chewing the material so it’s easier for you to digest. Call it mama birding.
I’m never going to read a book about a PhD student/marathon runner who goes to train in Ethiopia, but I will read what Chenchen Li says about it. There’s some excellent curating here, but more importantly, there’s great chewing. My favorite line:
if you make life especially difficult once in a while, just for fun, it can make you feel like you’re on the edge of what’s possible—like no one else is “dangerous” enough to do what you do.
Li’s bio:
Chenchen Li is a biophysics PhD and does neuroscience research.
Apparently, in the late aughts, Frito-Lay decided to encourage people to do terrorism in the name of Cheetos. Hundreds of fans uploaded videos of themselves performing “random acts of Cheetos”, and the official campaign website included a Cheetos-themed version of the Anarchist Cookbook. It wasn’t all fun and games, though:
there’s about a 90% chance that someone hired by Cheetos for their online marketing efforts also accidentally(?) uploaded MILF porn to the official Cheetos account instead of their personal account.
Gonzo cultural history is a genre that thrives in blog post form, and Rabbit Cavern is thriving.
Rabbit Cavern’s bio:
Rabbit Cavern is a blog that explores the question, “Can the rabbit hole be too deep?” Each post seeks to gather all the most fascinating information about any given topic, answering questions like “Why is Pepsi obsessed with airplanes?” or “Did Benedict Arnold commit treason because his legs hurt?”
There were too many good posts to recognize all of them, but here are a few that stood out in particular ways:
You’re supposed to get an Institutional Review Board to approve any research project that aims to produce “generalizable knowledge”. Munger contends that randomized controlled trials do not produce generalizable knowledge. Therefore, you’re free to run an RCT without an IRB. In Munger’s view:
RCTs are best understood as a formal kind of performance art. Each society enjoys performances in the idiom of their respective culture. Martial societies find pleasure in ritual combat; religious societies in pious displays of devotion and or spiritual rapture. [...] Our society is scientific, even scientistic. We appreciate the performance of scientific rituals, big data crunching and demonstrations of control over nature or our fellow citizens.
What’s one year of full health worth to you? Turns out that for many people, it’s somewhere between a used car and a new one.
What can we actually learn from a psychology experiment? I think basically every problem in psychology traces back to the fact that we never ask that question. Peterson both asks and answers it, and while I disagree with some of what he claims, he makes a good case for it. And most importantly, he solves a big mystery: why does Wikipedia always ask for donations while saying “nobody donates to us!” when all of behavioral science suggests they should say exactly the opposite?
I am really really not a sports guy. But I have to give it up for Naven’s extremely detailed post about why recent legal and policy changes are on track to destroy everything that people like about college sports.
A snapshot of Coyne’s attempt to eliminate her migraines by eliminating dairy (darker = worse mood, M = migraine):
My aunt had a beer parlor, and we (my cousins and I) helped out sometimes during rush hours. That was where I learned, at 10, that older men had weak bladders and I had to remind them to go; otherwise, they’d pee themselves.
I have a soft spot for letters from parents to children:
I will always hold a litany of hopes and dreams for you that I will never share because they are mine for you, not yours. [...] You do not need my approval to own your decisions. But for the love of everything holy, own them! That means sometimes being able and willing to respectfully defend and discuss them. Everyone in every facet of their lives will have to learn how to do this so we might as well get good at it.
And with that, I hereby call to a close the 2025 Experimental History Blog Post Competition, Extravaganza, and Jamboree. Thanks to everyone who submitted, thanks to all of you for reading, and thanks to the paid subscribers who make both this blog and this jamboree possible. Because of you, the jamboree continues eternally in our hearts.
2025-09-03 00:48:01
Everyone I know has given up. That’s how it feels, at least. There’s a creeping sense that the jig is up, the fix is in, and the party’s over. The Earth is burning, democracies are backsliding, AI is advancing, cities are crumbling—somehow everything sucks and it’s more expensive than it was last year. It’s the worst kind of armageddon, the kind that doesn’t even lower the rent.
We had the chance to prevent or solve these problems, the thinking goes, but we missed it. Now we’re past the point of no return. The world’s gonna end in fascists and ashes, and the only people still smiling are the ones trying to sell you something. It feels like we’re living through the Book of Revelation, but instead of the Seven Seals and the apocalyptic trumpeters, we have New York Times push notifications.
On the one hand, it’s totally understandable that these crises would make us want to curl up and die. If the world was withering for lack of hot takes, I’d assemble a daredevil crew and we’d be there in an instant. But if history is heading more in the warlords ‘n’ water wars direction, I’m out.
On the other hand, this reaction is totally bonkers. If our backs are against the wall, shouldn’t we put up our dukes? For people supposedly facing the breakdown of our society, our response is less fight-or-flight and more freeze-and-unease, frown-and-lie-down, and despair-and-stay-there.
Maybe humanity has finally met its match, but even though people talk like that’s the case, the way they act is weirdly...normal. Every conversation has a dead-man-walking flavor to it, and yet the dead men keep on walking. “Yeah, so everything’s doomed and we’re all gonna die. Anyway, talk to ya later, I gotta put the lasagna in the oven.” If things are just about to go kaput, why is everyone still working 60 hours a week?
Something strange is going on here, and I’d like to offer an explanation in two parts: a wide circle, and a bullet with a foot in it.
Forty years ago, the philosopher Peter Singer argued in The Expanding Circle that humans have, over the course of millennia, decided to care about a broader and broader swath of the living world. Originally, we only gave moral consideration to our immediate family, then we extended it to our tribe, then the nation, and now we are kind-of sort-of extending it to the whole globe and to non-human animals as well.1
I think Singer was right, and I’d add three things to his analysis. First, the trend has only continued since the ‘80s—for instance, some people are now worried about whether shrimp are having a good time. Second, while the circle has gotten wider, it has also, paradoxically, gotten closer. It’s one thing to “care” about distant strangers when you can only read about them in a newspaper; now we can all witness the suffering of anyone in the world through a glass portal we carry in our pockets. And third, when you stare into that portal, the portal stares back. Social media has made everyone into z-list public figures, and now we all have an audience watching us to make sure that we’re sufficiently concerned about the right things.
Expanding the circle was, in my opinion, a good move. But it comes with a problem: if we’re supposed to care about everyone and everything...that’s kind of a lot of caring, isn’t it? If I have to feel like a mass shooting in Tallahassee, a factory farm in Texas, and a genocide in Turkmenistan are all, morally speaking, happening in my backyard, my poor little moral circuits, which evolved to care about like 20 people, are gonna blow.
When there’s too much to care about, what’s a good-hearted person to do? I think many of us have unconsciously figured out how to escape this conundrum: we simply shoot ourselves in the foot.
Humans are pretty savvy at social interaction, even though we get so anxious about it. (Maybe we’re good because we’re freaking out all the time.) Evolution and experience have endowed us with a deep bench of interpersonal maneuvers, some of which are so intuitive to us that we don’t even realize we’re deploying them.
For example, sometimes life puts us in lose-lose situations where it’s embarrassing to try and fail, but it’s also embarrassing not to try at all. It sucks to study for a math exam and still flunk it, but it’s foolish not to study in the first place. When you’re stuck in a conundrum like that, how do you get out?
Well, one canny solution is to subtly manipulate the situation so that failure is inevitable. That way, no one can blame you for failing, and no one can blame you for not trying. Psychologists call this self-handicapping, and as far as impression management strategies go, you gotta admit this one is pretty exquisite.
Here’s what self-handicapping looks like in the wild. I had a friend in high school who “forgot” to apply to college our senior year. Literally, May came around and we were like “Nate, did you get in anywhere?” and he was like “Oh shoot that happened already?” Nate was a smart kid but a bad student, so it’s possible he actually did forget, but some of us suspected that the entire application season had conveniently slipped his mind so he wouldn’t have to face the shame of being rejected. We could never prove it, though, and that’s exactly why self-handicapping is such a clever tactic.
Of course, Nate’s self-handicapping came at a cost. No one can ding him for being stupid, but we can all ding him for being irresponsible. The ideal form of self-handicapping, then, is one that obscures the role of the self entirely.2 In fact, it works best when even you don’t realize that you’re doing it. Nate’s months-long brain fart is more believable if he believes it himself. If you’re gonna shoot yourself in the foot, best to do it while sleepwalking, so you can wake up and be like “A bullet!! In my foot!! And it got there through no fault of my own!!”
Which is to say: many of the people who are engaging in self-handicapping would earnestly deny the allegation.
You can see how self-handicapping is a handy response to a world that demands more care from us than we can give. If all the world’s problems are faits accomplis, well, that’s sad, but it ain’t on me. I don’t want it to be that way, of course, but it is, which means my only obligation is to bravely bear witness to the end of it all. That’s why it’s actually very important to maintain the idea—even subconsciously—that democracy is unsalvageable, AI is unstoppable, the Middle East is intractable, the climate apocalypse is (literally) baked in, and so on. For the aspiring self-handicapper, the best causes are lost causes.
The problem with shooting yourself in the foot is that now you have a bullet in your foot. A self-handicap can easily become a self-fulfilling prophecy: the more we believe our situation is hopeless, the more hopeless it becomes. We’re never gonna right the things we write off.
Succumbing to despair might offer you a reputational reprieve in the short term, but avoiding the blame doesn’t mean you can avoid the consequences. When rising sea levels, secret police, or AI-powered killer drones come for you, they won’t ask whether you have a doctor’s note excusing you from the greatest struggles of your generation.
In my experience, this is an unpopular argument. To some people, suggesting that our problems are solvable means denying that our problems are serious. (But of course our problems are serious, that’s why we want to solve them.) Or they’re offended by the implication that they have any responsibility to fix the things they didn’t break, as if a sinking ship only takes you down with it if you’re the person who punched a hole in the hull. Or they’re so certain that our fate is sealed that they scoff at anyone who believes otherwise. Most of all, though, I think people want everybody else to admit that life is really hard, and they’re being courageous just for showing up every day. As Kurt Vonnegut said:
What is it the slightly older people want from the slightly younger people? They want credit for having survived so long, and often imaginatively, under difficult conditions. Slightly younger people are intolerably stingy about giving them credit for that.3
I don’t think this is an old/young divide anymore: everybody wants credit for surviving, and we’re all too stingy about giving it, and that only makes us want the credit even more. I’m happy to give that credit: being alive in 2025 is hard in a way that gets no sympathy from anyone (“oh I’m sorry are your TikToks not amusing enough??”), and we all deserve gold stars. But wouldn’t we all like to live in a world where we didn’t feel like it was an achievement just to keep going?
Of course, the naysayers could be right! Prophets of doom don’t have a great track record, but then, they only have to hit the mark once. As the futurist Hazel Henderson put it, though: “we do not know enough to be pessimistic”. We won’t know that our problems are solvable until we solve them, we won’t solve them until we try, and we won’t try until we believe. Either way, the problems we’re facing don’t take prisoners, so we might as well go down swinging.
I know this is easy for me to say—I’m far from the first in line to get disappeared or swallowed up by the sea. That’s fine: the people who can do more should do more. But we’re wrong to act as if withdrawing from the world is inherently rejuvenating. When we’re so eager to explain why we can’t help, we forget helping actually feels great.
I was reminded of this a few weeks ago when, out for a run, I happened upon an older lady who had fallen and hit her head on the sidewalk; she was bleeding and confused and obviously needed medical attention. I called an ambulance and waited with her until the paramedics came, and once we were all sure she was going to be okay, I got to feel proud the rest of the day: I did the right thing! I helped! I’m a good boy!4 I run by that same corner all the time now, not hoping to find little old ladies in distress, of course, but ready should my services be required.
So when folks seem hell-bent on giving up, I wanna know: why are they holding on so tightly to their hopelessness? What does it do for them? If the future is so uncertain, if no one can justifiably say whether or not we’re gonna make it, why not pick the belief that gets you out of bed in the morning, rather than the one that keeps you there? Why do we have to make it seem like being on the right side of history is such a bummer?
Dealing with the state of the world by despairing is kind of like dealing with a breakup by drinking—you’re allowed to do it for like, a day, but if it becomes clear you’re planning to do this for the rest of your life, your friends ought to step in and stop you.
Here’s an analogy for the nerds: in Star Trek lore, Starfleet Academy cadets have to complete a simulation where they attempt to save a ship called the Kobayashi Maru, which is disabled and stuck in hostile territory. Unbeknownst to the trainees, the simulation is rigged so they can never win; the Klingons always show up and vaporize everyone. This is supposed to teach cadets that some situations are simply hopeless. Captain James T. Kirk, refusing to learn the lesson, famously saves the Maru by hacking the machine before the simulation begins.
Anyway, my point is that it’s possible to pull an anti-Kirk, to hack a winnable scenario and guarantee a loss. The fact that you can save face by doing this doesn’t make it noble or admirable. After all, in the world of Star Trek, “resistance is futile” is something the villains say.
There is a grumpy version of this argument, one that scolds every doomer for finding a galaxy-brained way to signal their virtue while sitting on their hands. I don’t feel that way at all. We were right to expand our circle, and I admire every generation that did it, even if they only made it a single centimeter per century.
But a wide circle is also a tall order. Although the circumference of our moral circle is theoretically infinite, the extent of our efforts is not. We can care about all eight billion people riding this rock with us, but we can only care for a tiny fraction of them. Trying to solve every problem at once is like trying to stop global warming by throwing ice cubes in the ocean. So if our collective despondency is a way of dealing with the fact that our moral ambitions have outstripped our abilities, I get it.
The solution is not to shrink the circle, to default on our obligations, or to pretend that we’re helpless. When you’re paralyzed by the number of problems, the only way out is to pick one. What kind of world would you like to live in, how do we get there from here, and what can you do—however small it may be—to move us in that direction? We’re not looking for saints or superheroes, just people who give a hoot. In the billion-person bucket brigade that’s trying to put out the fires, all you need to do is find a way to pass the pail from the person on your left to the person on your right. There are, remember, many underrated ways to change the world.
Personally, I care a lot about science because I think the expansion of knowledge is the only thing that makes us better off in the long run. But whatever, some people want to clean up the park, other people want to help sick kids, and other people want to save the shrimp. Godspeed to all of them. As far as I’m concerned, we’re all comrades in a war that has infinite fronts. Nobody can fight on all of them, and I won’t ask anyone to join mine if they can do more good elsewhere.
But there is no neutral territory here. There may be plenty of front lines, but there are no sidelines. The best way to prevent people from taking themselves out of the fight is to recognize that there is no “out” of the fight. There’s no room for anyone to play the Switzerland strategy—“I’m not involved, I’ll just hold on to your valuables while you guys fight it out!” The sum total of our actions will either make the world better or worse. Which is it gonna be?
In the classic formulation of the hero’s journey, step one is the call to adventure, and step two is the refusal of that call. I think we’re all stuck at step two. Gandalf’s at our door, but we literally just sat down to lunch, and have you seen the forces of Mordor? They’ve got trolls. Obviously it would be great if someone did something about Sauron, but I don’t see why it should be me.
I understand that feeling. A death-defying adventure to save the world? In this economy? No thanks, take it up with all the people who imperiled the world in the first place. I just got here!5
But alas, we cannot pass the buck into the past. As much as we love to argue about which generation had it worse and which generation did it better, we don’t get to Freaky Friday ourselves with our ancestors and face the tribulations that fit our preferred aesthetic.
When I was in high school, I used to volunteer with this group that, basically, held car washes and then donated the money. The organization wasn’t explicitly Christian, but Peggy, the woman who led it, was. She used to tell us: “People see bad things in the world and ask why God doesn’t do something about it. But God did do something. He sent you.”
We can argue about whether this is a theologically sound solution to the problem of evil, and we can ask why a supposedly all-knowing and all-powerful God would entrust anything to me, a guy who can’t even do a single pull-up. But I always appreciated this attitude. Yes, things are bad. No, it’s not your fault. Unfortunately, the world was under no obligation to straighten itself out by the time you arrived. These are the problems we got. Would you like them to be better? Then here, grab a sponge and start washing.
For a counterpoint, see Gwern’s “The Narrowing Circle”.
I once came very close to doing something like that. I really hated my time in Oxford, and so when I got a nasty stomach bug, I secretly hoped it was something serious so I could go home without losing face. When the NHS prescribed me some antibiotics, I thought for a hot second about not taking them—after all, nobody would know if I let those little microbes wreak some more havoc in my tummy, and then maybe I can get out of this place. I ultimately downed the pills, but only because six weeks of diarrhea was too steep of a price for a get-out-of-jail-free card.
The other half of the quote:
What is it the slightly younger people want from the slightly older people? More than anything, I think, they want acknowledgement, and without further ado, that they are without question women and men now. Slightly older people are intolerably stingy about making any such acknowledgement.
The 911 operator asked me how old she was, and although she looked to be in her mid-70s, maybe 80, she was sitting right there looking at me so I panicked and said “uh uh maybe late 60s??” I hate to make someone feel self-conscious, even when they’re bleeding from the head.
I apologize because of the terrible mess the planet is in. But it has always been a mess. There have never been any “Good Old Days,” there have just been days. And as I say to my grandchildren, “Don’t look at me. I just got here myself.”
2025-08-06 04:06:09
Look, I don’t know if AI is gonna kill us or make us all rich or whatever, but I do know we’ve got the wrong metaphor.
We want to understand these things as people. When you type a question to ChatGPT and it types back the answer in complete sentences, it feels like there must be a little guy in there doing the typing. We get this vivid sense of “it’s alive!!”, and we activate all of the mental faculties we evolved to deal with fellow humans: theory of mind, attribution, impression management, stereotyping, cheater detection, etc.
We can’t help it; humans are hopeless anthropomorphizers. When it comes to perceiving personhood, we’re so trigger-happy that we can see the Virgin Mary in a grilled cheese sandwich:
A human face in a slice of nematode:

And an old man in a bunch of poultry and fish atop a pile of books:

Apparently, this served us well in our evolutionary history—maybe it’s so important not to mistake people for things that we err on the side of mistaking things for people.1 This is probably why we’re so willing to explain strange occurrences by appealing to fantastical creatures with minds and intentions: everybody in town is getting sick because of WITCHES, you can’t see the sun right now because A WOLF ATE IT, the volcano erupted because GOD IS MAD. People who experience sleep paralysis sometimes hallucinate a demon-like creature sitting on their chest, and one explanation is that the subconscious mind is trying to understand why the body can’t move, and instead of coming up with “I’m still in REM sleep so there’s not enough acetylcholine in my brain to activate my primary motor cortex”, it comes up with “BIG DEMON ON TOP OF ME”.

This is why the past three years have been so confusing—the little guy inside the AI keeps dumbfounding us by doing things that a human wouldn’t do. Why does he make up citations when he does my social studies homework? How come he can beat me at Go but he can’t tell me how many “r”s are in the word “strawberry”? Why is he telling me to put glue on my pizza?2
Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.3 As long as we try to apply our person perception to artificial intelligence, we’ll keep being surprised and befuddled.
We are in dire need of a better metaphor. Here’s my suggestion: instead of seeing AI as a sort of silicon homunculus, we should see it as a bag of words.
An AI is a bag that contains basically all words ever written, at least the ones that could be scraped off the internet or scanned out of a book. When users send words into the bag, it sends back the most relevant words it has. There are so many words in the bag that the most relevant ones are often correct and helpful, and AI companies secretly add invisible words to your queries to make this even more likely.
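If you want a concrete picture of those “invisible words,” here’s a minimal sketch in Python. It assumes the now-common chat format in which a hidden “system” message rides along with every query before it reaches the model; the prompt text and the function name are made up for illustration, not any company’s actual wording or API.

```python
# A minimal sketch of the "invisible words" idea: before your question ever
# reaches the bag, a hidden system prompt gets stapled to the front of it.
# HIDDEN_SYSTEM_PROMPT and build_messages are hypothetical names for illustration.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer accurately, admit uncertainty, "
    "and refuse harmful requests."
)

def build_messages(user_query: str) -> list[dict]:
    """Wrap the user's words with the invisible ones the vendor adds."""
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},  # words you never typed
        {"role": "user", "content": user_query},               # the words you did
    ]

if __name__ == "__main__":
    for message in build_messages("Why does my sourdough starter smell like acetone?"):
        print(f"{message['role']:>6}: {message['content']}")
```

The bag never distinguishes between the words you sent and the words the vendor slipped in ahead of them; it just gets one long run of text and sends back whatever seems most relevant.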
This is an oversimplification, of course. But it’s also surprisingly handy. For example, AIs will routinely give you outright lies or hallucinations, and when you’re like “Uhh hey that was a lie”, they will immediately respond “Oh my god I’m SO SORRY!! I promise I’ll never ever do that again!! I’m turning over a new leaf right now, nothing but true statements from here on” and then they will literally lie to you in the next sentence. This would be baffling and exasperating behavior coming from a human, but it’s very normal behavior coming from a bag of words. If you toss a question into the bag and the right answer happens to be in there, that’s probably what you’ll get. If it’s not in there, you’ll get some related-but-inaccurate bolus of sentences. When you accuse it of lying, it’s going to produce lots of words from the “I’ve been accused of lying” part of the bag. Calling this behavior “malicious” or “erratic” is misleading because it’s not behavior at all, just like it’s not “behavior” when a calculator multiplies numbers for you.
“Bag of words” is also a useful heuristic for predicting where an AI will do well and where it will fail. “Give me a list of the ten worst transportation disasters in North America” is an easy task for a bag of words, because disasters are well-documented. On the other hand, “Who reassigned the species Brachiosaurus brancai to its own genus, and when?” is a hard task for a bag of words, because the bag just doesn’t contain that many words on the topic.4 And a question like “What are the most important lessons for life?” won’t give you anything outright false, but it will give you a bunch of fake-deep pablum, because most of the text humans have produced on that topic is, no offense, fake-deep pablum.
When you forget that an AI is just a big bag of words, you can easily slip into acting like it’s an all-seeing glob of pure intelligence. For example, I was hanging with a group recently where one guy made everybody watch a video of some close-up magic, and after the magician made some coins disappear, he exclaimed, “I asked ChatGPT how this trick works, and even it didn’t know!” as if this somehow made the magic extra magical. In this person’s model of the world, we are all like shtetl-dwelling peasants and AI is like our Rabbi Hillel, the only learned man for 100 miles. If Hillel can’t understand it, then it must be truly profound!
If that guy had instead seen ChatGPT as a bag of words, he would have realized that the bag probably doesn’t contain lots of detailed descriptions of contemporary coin tricks. After all, magicians make money from performing and selling their tricks, not writing about them at length on the internet. Plus, magic tricks are hard to describe—“He had three quarters in his hand and then it was two pennies!”—so you’re going to have a hard time prompting the right words out of the bag. The coin trick is not literally magic, and neither is the bag of words.
The “bag of words” metaphor can also help us guess what these things are gonna do next. If you want to know whether AI will get better at something in the future, just ask: “can you fill the bag with it?” For instance, people are kicking around the idea that AI will replace human scientists. Well, if you want your bag of words to do science for you, you need to stuff it with lots of science. Can we do that?
When it comes to specific scientific tasks, yes, we already can. If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold. Fill the bag with chemical reactions and it can tell you how to synthesize new molecules. Fill the bag with journal articles and then describe an experiment and it can tell you whether anyone has already scooped you.
All of that is cool, and I expect more of it in the future. I don’t think we’re far from a bag of words being able to do an entire low-quality research project from beginning to end—coming up with a hypothesis, designing the study, running it, analyzing the results, writing them up, making the graphs, arranging it all on a poster, all at the click of a button—because we’ve got loads of low-quality science to put in the bag. If you walk up and down the poster sessions at a psychology conference, you can see lots of first-year PhD students presenting studies where they seemingly pick some semi-related constructs at random, correlate them, and print out a p-value (“Does self-efficacy moderate the relationship between social dominance orientation and system-justifying beliefs?”). A bag of words can basically do this already; you just need to give it access to an online participant pool and a big printer.5
But science is a strong-link problem; if we produced a million times more crappy science, we’d be right where we are now. If we want more of the good stuff, what should we put in the bag? You could stuff the bag with papers, but some of them are fraudulent, some are merely mistaken, and all of them contain unstated assumptions that could turn out to be false. And they’re usually missing key information—they don’t share the data, or they don’t describe their methods in adequate detail. Markus Strasser, an entrepreneur who tried to start one of those companies that’s like “we’ll put every scientific paper in the bag and then ??? and then profit”, eventually abandoned the effort, saying that “close to nothing of what makes science actually work is published as text on the web.”6
Here’s one way to think about it: if there had been enough text to train an LLM in 1600, would it have scooped Galileo? My guess is no. Ask that early modern ChatGPT whether the Earth moves and it will helpfully tell you that experts have considered the possibility and ruled it out. And that’s by design. If it had started claiming that our planet is zooming through space at 67,000mph, its dutiful human trainers would have punished it: “Bad computer!! Stop hallucinating!!”
In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist. As the historian of science David Wootton points out7, when Galileo was trying to describe his discovery of the moons of Jupiter, none of the languages he knew had a good word for “discover”. He had to use awkward circumlocutions like “I saw something unknown to all previous astronomers before me”. The concept of learning new truths by looking through a glass tube would have been totally foreign to an LLM of the early 1600s, as it was to most of the people of the early 1600s, with a few notable exceptions.
You would get better scientific descriptions from a 2025 bag of words than you would from a 1600 bag of words. But both bags might be equally bad at producing the scientific ideas of their respective futures. Scientific breakthroughs often require doing things that are irrational and unreasonable by the standards of the time, and good ideas usually look stupid when they first arrive, so they are often—with good reason!—rejected, dismissed, and ignored. This is a big problem for a bag of words that contains all of yesterday’s good ideas. Putting new ideas in the bag will often make the bag worse, on average, because most of those new ideas will be wrong. That’s why revolutionary research requires not only intelligence, but also stupidity. I expect humans to remain usefully stupider than bags of words for the foreseeable future.
The most important part of the “bag of words” metaphor is that it prevents us from thinking about AI in terms of social status. Our ancestors had to play status games well enough to survive and reproduce—losers, by and large, don’t get to pass on their genes. This has left our species exquisitely attuned to who’s up and who’s down. Accordingly, we can turn anything into a competition: cheese rolling, nettle eating, phone throwing, toe wrestling, and ferret legging, where male contestants, sans underwear, put live ferrets in their pants for as long as they can. (The world record is five hours and thirty minutes.)
When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.
But a bag of words is not a spouse, a sage, a sovereign, or a serf. It’s a tool. Its purpose is to automate our drudgeries and amplify our abilities. Its social status is NA; it makes no sense to ask whether it’s “better” than us. The real question is: does using it make us better?
That’s why I’m not afraid of being rendered obsolete by a bag of words. Machines have already matched or surpassed humans on all sorts of tasks. A pitching machine can throw a ball faster than a human can, spellcheck gets the letters right every time, and autotune never sings off key. But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things. It wouldn’t be interesting to watch a bag of words do them—unless we mistakenly start treating that bag like it’s a person.
(That’s also why I see no point in using AI to, say, write an essay, just like I see no point in bringing a forklift to the gym. Sure, it can lift the weights, but I’m not trying to suspend a barbell above the floor for the hell of it. I lift it because I want to become the kind of person who can lift it. Similarly, I write because I want to become the kind of person who can think.)
But that doesn’t mean I’m unafraid of AI entirely. I’m plenty afraid! Any tool can be dangerous when used the wrong way—nail guns and nuclear reactors can kill people just fine without having a mind inside them. In fact, the “bag of words” metaphor makes it clear that AI can be dangerous precisely because it doesn’t operate like humans do. The dangers we face from humans are scary but familiar: hotheaded humans might kick you in the head, reckless humans might drink and drive, duplicitous humans might pretend to be your friend so they can steal your identity. We can guard against these humans because we know how they operate. But we don’t know what’s gonna come out of the bag of words. For instance, if you show humans computer code that has security vulnerabilities, they do not suddenly start praising Hitler. But LLMs do.8 So yes, I would worry about putting the nuclear codes in the bag.9
Anyone who has owned an old car has been tempted to interpret its various malfunctions as part of its temperament. When it won’t start on a cold day, it feels like the appropriate response is to plead, the same way you would with a sleepy toddler or a tardy partner: “C’mon Bertie, we gotta get to the dentist!” But ultimately, person perception is a poor guide to vehicle maintenance. Cars are made out of metal and plastic that turn gasoline into forward motion; they are not made out of bones and meat that turn Twinkies into thinking. If you want to fix a broken car, you need a wrench, a screwdriver, and a blueprint, not a cognitive-behavioral therapy manual.
Similarly, anyone who sees a mind inside the bag of words has fallen for a trick. They’ve had their evolution exploited. Their social faculties are firing not because there’s a human in front of them, but because natural selection gave those faculties a hair trigger. For all of human history, something that talked like a human and walked like a human was, in fact, a human. Soon enough, something that talks and walks like a human may, in fact, be a very sophisticated logistic regression. If we allow ourselves to be seduced by the superficial similarity, we’ll end up like the moths who evolved to navigate by the light of the moon, only to find themselves drawn to—and ultimately electrocuted by—the mysterious glow of a bug zapper.
Unlike moths, however, we aren’t stuck using the instincts that natural selection gave us. We can choose the schemas we use to think about technology. We’ve done it before: we don’t refer to a backhoe as an “artificial digging guy” or a crane as an “artificial tall guy”. We don’t think of books as an “artificial version of someone talking to you”, photographs as “artificial visual memories”, or listening to recorded sound as “attending an artificial recital”. When pocket calculators debuted, they were already smarter than every human on Earth, at least when it comes to calculation—a job that itself used to be done by humans. Folks wondered whether this new technology was “a tool or a toy”, but nobody seems to have wondered whether it was a person.
(If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.)
The original sin of artificial intelligence was, of course, calling it artificial intelligence. Those two words have lured us into making man the measure of machine: “Now it’s as smart as an undergraduate...now it’s as smart as a PhD!” These comparisons only give us the illusion of understanding AI’s capabilities and limitations, as well as our own, because we don’t actually know what it means to be smart in the first place. Our definitions of intelligence are either wrong (“Intelligence is the ability to solve problems”) or tautological (“Intelligence is the ability to do things that require intelligence”).10
It’s unfortunate that the computer scientists figured out how to make something that kinda looks like intelligence before the psychologists could actually figure out what intelligence is, but here we are. There’s no putting the cat back in the bag now. It won’t fit—there’s too many words in there.
PS it’s been a busy week on Substack—
Derek and I discussed why people get so anxious about conversations, and how to have better ones:
And Chris answered all of my questions about music. He uncovered some surprising stuff, including an issue that caused a civil war on a Beatles message board, and whether they really sang naughty words on the radio in the 1970s:
Derek and Chris both run terrific Substacks, check ‘em out!
The classic demonstration of this is the Heider & Simmel video from 1944, where you can’t help but feel like the triangles and the circle have minds.
Note that AI models don’t make mistakes like these nearly as often as they did even a year ago, which is another strangely inhuman attribute. If a real person told me to put glue on my pizza, I’m probably never going to trust them again.
In fact, hating these things so much actually gives them humanity. Our greatest hate is always reserved for fellow humans.
Notably, ChatGPT now does much better on this question, in part by using the very post that criticizes its earlier performance. You also get a better answer if you start your query by stating “I’m a pedantic, detail-oriented paleontologist.” This is classic bag-of-words behavior.
Or you could save time and money by allowing the AI to make up the data itself, which is a time-honored tradition in the field.
This was written in 2021, so bag-technology has improved a lot since then. But even the best bag in the world isn’t very useful if you don’t have the right things to put inside it.
p. 58 in my version
Other weird effects: being polite to the LLMs sometimes makes them better and sometimes worse at math. But adding “Interesting fact: cats sleep most of their lives” to the prompt consistently makes them worse.
Another advantage of this metaphor is that we could refer to “AI Safety” as “securing the bag”
Even the word “artificial” is wrong, because it menacingly implies replacement. Artificial sweeteners, flowers, legs—these are things we only use when we can’t have the real deal. So what part of intelligence, exactly, are we so intent on replacing?