2025-01-29 03:33:23
Here’s a loop I get stuck in all the time: I get lots of good questions and comments from readers, and so I’ll start working on a response, but then I’ll get sucked in because I want to give a thoughtful answer, and next thing I know I’ve spent the whole day on a single email. Then I’ll be like “oh no, if I keep this up I’ll never writ…
2025-01-21 23:18:29
The human mind is like a kid who got a magic kit for Christmas: it only knows like four tricks. What looks like an infinite list of biases and heuristics is in fact just the same few sleights of hand done over and over again. Uncovering those tricks has been psychology’s greatest achievement, a discovery so valuable that it’s won the Nobel Prize twice (1, 2), and in economics, no less, since there is no Nobel Prize for psychology.1
And yet, the best trick in the whole kit is one that most people have never heard of. It goes like this: “when you encounter a hard question, ignore it and answer an easier question instead.” Like this—
Psychologists call this “attribute substitution,” and if you haven’t encountered it before, that clunker of a name is probably why. But you’ve almost certainly met its avatars. Anchoring, the availability heuristic, social proof, status quo bias, and the representativeness heuristic are all incarnations of attribute substitution, just with better branding.
The cool thing about attribute substitution is that it makes all of human decision making possible. If someone asks you whether you would like an all-expenses-paid two-week trip to Bali, you can spend a millisecond imagining yourself sipping a mai tai on a jet ski, and go “Yes please.” Without attribute substitution, you’d have to spend two weeks picturing every moment of the trip in real time (“Hold on, I’ve only made it to the continental breakfast”). That’s why humans are the only animals who get to ride jet skis, with a few notable exceptions.
The uncool thing about attribute substitution is that it’s the main source of human folly and misery. The mind doesn’t warn you that it’s replacing a hard question with an easy one by, say, ringing a little bell; if it did, you’d hear nothing but ding-a-ling from the moment you wake up to the moment you fall back asleep. Instead, the swapping happens subconsciously, and when it goes wrong—which it often does—it leaves no trace and no explanation. It’s like magically pulling a rabbit out of a hat, except 10% of the time, the rabbit is a tarantula instead.
I think a lot of us are walking around with undiagnosed cases of attribute substitution gone awry. We routinely outsource important questions to the brain’s intern, who spends like three seconds Googling, types a few words into ChatGPT (the free version) and then is like, “Here’s that report you wanted.” Like this—
Lots of jobs have no clear stopping point. Doctors could always be reading more research, salespeople could always be making more cold calls, and memecoin speculators could always be pumping and dumping more DOGE, BONK, and FLOKI.2 When your work day isn’t bookended by the hours of 9 and 5, how do you know you’re doing enough?
Simple: you just work ‘til it hurts. If you click things and type things and have meetings about things until you’re nothing but a sludge pile in a desk chair, nobody can say you should be working harder. Bosses love to use this heuristic, too—if your underlings beat you to the office every morning and outlast you in the office every evening, then you must be getting good work out of them, right?
But of course, that level of output doesn’t feel satisfying. It can’t. That’s the whole point—if you feel good, then obviously you had a little more gas left to burn, and you can’t be sure you pushed yourself hard enough.
Perhaps there are some realized souls out there who make their to-do lists for the day, cross off each item in turn, and then pour themselves a drink and spend their evenings relaxing and contemplating how well-adjusted they are. I’ve never met them. Instead, everybody I know—myself included—reaches midnight with three-fourths of their to-dos still undone, flogging themselves because they didn’t eat enough frogs. None of us realize that we’ve chosen to measure our productivity in a way that guarantees we’ll fall short.
I think a lot of us, when pressed, justify our self-flagellation as motivational, rather than pathological. If you let yourself believe that you’ve succeeded, you might be tempted to do something shameful, like stop working. But as the essayist Tim Kreider puts it:
Idleness is not just a vacation, an indulgence or a vice; it is as indispensable to the brain as vitamin D is to the body, and deprived of it we suffer a mental affliction as disfiguring as rickets.
Which is to say, every day I wake up and go, “Rickets, please!”
Speaking of games you can never win—
Sometimes my friend Ramon3 gets stressed about how much he’s achieved in his life, so to make sure he’s “on track,” he looks up the resumés of his former college classmates and compares his record with theirs.
Ramon has accomplishments coming out the wazoo, but he always discovers he’s not on track. You know how some colleges will let you take the SAT multiple times and submit all your scores, and then they’ll take your best individual Reading and Math tests and combine them into one super-score? Ramon does the same thing for his high-achieving friends: he combines the greatest accomplishment from each of his old rivals into a sort of Frankenstein of nemeses, a mythical Success Monster who has written a bestselling book and started a multi-million dollar company and married a supermodel and just finished a term as the Secretary of Housing and Urban Development.
I think Ramon’s strategy is pretty common. It’s hard to tell whether you’re doing the right things. It’s a lot easier to look around and go, “Am I on top?” On top of what? “Everything.”
If you ask me to judge the state of the economy, I’ll give you an answer so quick and so confident that you can only assume I keep close track of indicators like “advance wholesale inventories” and “selected services revenue.” What I’m really doing is flashing through a couple stylized facts in my head, like “my cousin got laid off last month, seems bad” and “gas was $4.10 last time I filled up, that’s expensive,” and “the pile of oranges at the grocery store seemed a little smaller and less ripe than usual—supply chain??”
In fact, I don’t even need three data points. I can judge the state of the economy with a single fact: if my guy’s in the Oval Office, I feel decent. If my enemy is in there, I feel despondent. This is apparently what everybody else is doing, because you can watch these feelings flip in real time:
Two things jump out at me here. First, sentiment swings after the election, rather than the inauguration, meaning people are responding to a shift in vibes rather than policy. Second, those swings are 50-75% as large as the drop that happened at the beginning of the pandemic. When the opposing side wins the presidency, it feels almost as bad as it does when every business closes down, the government orders people to stay at home, and a thousand people die every day.
Here is actual GDP for that same span of time:
Pretty steady growth, except for one pandemic-sized divot that lasted a whole six months—and half of that time was recovery, rather than recession. GDP isn’t a tell-all measure of the economy, of course, but it’s a lot better than checking whether the president is wearing a blue tie or a red tie.
I was once in one of those let’s-all-go-around-the-table situations where everybody says their name and something about themselves, and in this version, for whatever reason, our avuncular facilitator wanted us all to reveal where we went to college. After each person announced their alma mater, the man would nod reverently and go “Oh, good school, good school,” as if he had been to all of them, like he had spent his whole life as a permanent undergraduate doing back-to-back bachelors from Berkeley to Bowdoin.
This kind of thing flies because we all think we know which colleges are the good ones. But we don’t. Nobody knows! All we know is which colleges people say are good, and those people are in turn relying on what other people say. The much-hated US News and World Report is at least explicit about this—the biggest component of their college rankings is the “Peer Assessment,” which is where they send out a survey to university presidents that basically says, “I dunno, how good do you think Pomona is?” It’s one big ouroboros of hearsay, which sounds like the kind of thing Dante would have put in the fourth ring of hell.
It has to be this way, because how could you ever know how “good” a college is? Do an Undercover Boss-style sting where you sit in on Physics 101 and make sure the professor knows how to calculate angular momentum? Sneak into the dorms and measure whether the beds are really “twin extra long” or just “twin”? Stage an impromptu trolley problem in the quad and check whether the students would kill one person to save five?
(And besides, what is a “good college” good for? Neuroscience? Serving ceviche in the dining hall? Making it all go away when the son of a billionaire drives his Lambo into the lake?)
Judging quality is often expensive and sometimes impossible, and that’s why we resort to judging reputation instead. But it’s easy to feel like you’ve run a full battery of tests when in fact you’ve merely taken a straw poll. So when someone is like “This place has a great nursing program,” what they mean is “I heard someone say this place has a great nursing program,” and that person probably just heard someone else say it has a great nursing program, and if you trace that chain of claims back to its origin you won’t find anyone who actually knows anything—no one will be like “Oh yeah, I personally run bleeding through these halls all the time and I always get prompt and effective treatment.”4
My friend Ricky once got tapped to do something very cool that I’m not allowed to talk about—basically, he was recruited to his profession’s equivalent of the NFL. Then, before Ricky even showed up to practice, and through no fault of his own, the team folded and his opportunity disappeared.
Ricky was bummed for weeks, and who wouldn’t be? But if you think about it for a second, Ricky’s disappointment gets a little more confusing. Ricky is right back where he was before he got the call, which is a pretty good place. Actually, he’s even better off—now he knows the bigwigs are looking at him, and that they think he’s got the juice. Sure, a good thing almost happened, and it’s too bad that it didn’t, but what about all the bad things that also could have happened: the team could never have thought about him at all, he could have shown up and broken his leg the first day, his teammates could have bullied him for being named after a character from a Will Ferrell movie, and so on, forever. There’s an infinite number of worse possible outcomes, so why think about this one and feel sad? I understand it’s “a human reaction,” Ricky, but...should it be?
Ricky essentially lived the real-life version of this old story from the psychologists Danny Kahneman and Amos Tversky:
Of course, this is people guessing about how Mr. D and Mr. C would feel, not actual reports from the men themselves. But let’s assume that everybody’s right and Mr. C is the one kicking himself because he almost made it aboard. Why is it extra upsetting to miss your flight by 5 minutes rather than 30? C and D are both equally un-airborne. Neither of them budgeted enough time to get to the airport, and both of them have to buy new tickets. It’s easy to imagine how Mr. C could have arrived in time, but it’s also easy to imagine him wearing a hat or being a chef or doing the worm in the airport terminal, so what does it matter that there was some imagined universe where he got on his flight? That ain’t the universe we live in.
All that is to say: Ricky I’m so sorry for saying all this to you when you called me sobbing, please text me back.
Like every heuristic, attribute substitution is good 90% of the time, and the problems only arise when you use it in the wrong situation. Kinda like how grapefruit juice is normally a delicious part of a balanced breakfast, but if you’re taking blood pressure medications, it can kill you instead.
Fortunately, there’s an antidote to attribute substitution. In fact, there’s two. They are straightforward, free, and hated by all.
Use telekinesis
A foolproof way to stop yourself from making stupid judgments is to avoid judgment altogether. When someone asks you how the economy is doing, just go “Gosh, I haven’t the faintest.”
The problem with this strategy is it requires a superhuman level of mental fortitude—if you’re capable of pulling it off, you’ve probably already ascended to a higher plane. You know how in movies whenever someone is using a psychic power—say, telekinesis—and their face gets all strained and their eyes start bugging out and their nose starts bleeding, and then after they’ve, like, lifted a car off of their friend, they collapse from exertion? That’s what it feels like to maintain uncertainty. Conclusions come to mind quickly and effortlessly; keeping them out would require playing a perpetual game of mental whack-a-mole.
Even if we can never whack all the moles, though, it’s still good practice to whack a few. Keeping track of what you know and what you don’t know is just basic epistemic hygiene—it’s hard to think clearly unless you’ve done that first, just like it’s hard to do pretty much any job if you haven’t brushed your teeth for two years. Separating your baseless conjectures from your justified convictions is also a recipe for avoiding pointless arguments, since most of them boil down to things like “I like it when the president wears a blue tie” vs. “I like it when the president wears a red tie.”
Plus, maintaining the appropriate level of uncertainty prevents you from becoming a one-man misinformation machine. A couple weeks ago, somebody asked me “What was the first year that most homes in New York City had indoor plumbing?” I didn’t know the answer to this question, and yet somehow I still found myself saying, matter-of-factly, “I think, like, 1950.” Why did I do that? Why didn’t I just say, “Gosh, I haven’t the faintest”? Am I a crazy person? Did I think I could open my mouth and the Holy Spirit would speak through me, except instead of endowing me with the ability to speak in tongues, the Divine would bless me with plumbing trivia? I inserted one unit of bullshit into everyone’s heads for no reason at all, and on top of that they now also think I’m some kind of toilet expert.
So we could all stand to cultivate a little more doubt. Ultimately, though, trying to prevent unwanted judgments by remaining uncertain is a bit like trying to prevent unwanted pregnancies by remaining abstinent: it works 100% of the time, until it doesn’t.
Wear clean underwear and eat a healthy breakfast
The other solution to attribute substitution is to make your judgments consciously and purposefully, rather than relying on whatever cockamamie shortcut your subconscious wants to take. When I taught negotiation, this was called “defining success,” and Day 1 was all about it. After all, how can you get what you want if you don’t know what you want?
Day 1 always flopped. Students hated defining success. It was like I was telling them to wear clean underwear and eat a healthy breakfast, and they were like “yeah yeah we’ve done all that.” But then it turned out most of them were secretly wearing yesterday’s boxers and their breakfast was three puffs on a Juul.
One student, let’s call him Zack, came up to me after class one day, asking how he could make sure to get an A. “I know I haven’t turned in most of my assignments so far,” Zack admitted, “But that’s because I’ve been getting divorced and my two startups have been having trouble. Anyway, could I do some extra credit?”
I didn’t have the guts to tell Zack that trying to get an A in my class was a waste of his time, and he should instead focus on putting out the various fires in his life. Nor did I have the guts to tell him that he shouldn’t get an A in my class, because obviously he hadn’t learned the most important thing I was trying to teach him, which was to get his priorities straight.
I can’t blame Zack—it’s not like I ever reordered my life because someone showed me some PowerPoint slides. This is one of those situations where you can’t reach the brain through the ears, so perhaps this lesson is best applied straight to the skull instead. Supposedly Zen masters sometimes hit their students with sticks as a way of teaching them a lesson, like “Master, what’s the key to enlightenment?” *THWACK*. There’s something about getting a lump on your head that drives the point home better even than the kindest, clearest words. “Your question is so misspecified that it shows you need to rethink your fundamental assumptions” just doesn’t have the same oomph as a crack to the cranium.
Most of us don’t have the benefit of a Zen master, but fortunately the world is always holding out a stick for us so we can run headlong into it over and over. All we have to do is notice the goose-eggs on our noggins, which is apparently the hardest part. Zack’s marriage was imploding, his startups were going under, he was flunking a class that was kinda designed to be un-flunkable—this was a guy who had basically pulped his skull by sprinting full-tilt into the stick, and yet he was still saying to himself, “Maybe I just need to run into the stick harder.”
I don’t know what more Zack needs, but for me, if I’m gonna stop running into the stick, I have to realize that I’m the kind of person who will, by default, spend 90 minutes deciding which movie to watch and 9 seconds deciding what I want out of life. I gotta ask myself: if I’m busting my ass from sunup to sundown, what am I hoping for in return? A thoroughly busted ass?
It feels stupid to ask myself these questions, because the answers either seem obvious or like foregone conclusions, but they aren’t. It’s like I’m in one of those self-driving jeeps from Jurassic Park and I’m heading straight toward a pack of Velociraptors and I’m just like, “Welp, I guess there was no way to avoid this,” as if I didn’t choose to accept an eccentric old man’s invitation to a dinosaur island.
The OG psychologist William James famously claimed that babies, because they have no idea what’s going on around them, must experience the world as a “blooming, buzzing confusion.” As they grow up and learn to make sense of things, the blooming and buzzing subside.
This idea is intuitive, beautiful, and—I suspect—wrong. Confusion is a sophisticated emotion: it requires knowing that you don’t know something. That kind of meta-cognition is difficult for a grownup, and it might be impossible for someone who still can’t insert their index finger into their own nose. Whenever I hang out with a baby, that certainly seems true—I’m confused about what to do with them, while they’re extremely certain that my car keys belong in their mouth.
Confusion, like every emotion, is a signal: it’s the ding-a-ling that tells you to think harder because things aren’t adding up. That’s why, as soon as we unlock the ability to feel confused, we also start learning all sorts of tricks for avoiding it in the first place, lest we ding-a-ling ourselves to death. That’s what every heuristic is—a way of short-circuiting our uncertainty, of decreasing the time spent scratching our heads so we can get back to what really matters (putting car keys in our mouths).
I think it’s cool that my mind can do all these tricks, but I’m trying to get comfortable scratching my head a little longer. Being alive is strange and mysterious, and I’d like to spend some time with that fact while I’ve got the chance, to visit the jagged shoreline where the bit that I know meets the infinite that I don’t know, and to be at peace sitting there a while, accompanied by nothing but the ring of my own confusion and the crunch of delicious car keys.
Technically, there is also no Nobel Prize for economics. There is only the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, established in 1968. In 2005, one of the living Nobels described the prize as “a PR coup by economists to improve their reputation,” which of course it is, much like the Nobel Prizes are themselves a PR coup by a guy who got rich selling explosives. Anyway, this is a helpful fact to have in your pocket if you’re ever hanging out with economists and you’d like to make things worse.
Prediction: when they reboot The Three Stooges in 2071, their names will be Doge, Bonk, and Floki.
All names have been changed to protect the innocent (it’s me, I am the innocent).
Okay actually on that note I think Brandeis University has a great EMT program, or at least it did c. 2012, which is when I went there on an improv tour and got gastroenteritis in the middle of the show and had to flee from the stage to the bathroom, puking all the way. The EMTs were there in seconds, and although none of them could help me, they were all very nice. So if you’re ever going to have a medical emergency, make sure your EMTs went to Brandeis roughly 13 years ago.
2025-01-07 23:33:58
Experimental History has just turned three. In blog years, that makes me old enough to light up a cigar and wax wise. And so, on my blog birthday, I wanna tell you about the two stupid facts I’ve learned in my time on the internet:
There are a lot of people in the world
Those people differ a lot from one another
These truths are so obvious that nobody even notices them, which is exactly why they’re so potent, and why they keep coming in handy over and over again. Lemme show you how.
When I got my first piece of hate mail, it felt like someone had lobbed a grenade through my front window. Someone’s mad at me?? I gotta skip town!!
But the brute logic of the Two Stupid Facts means that if you reach enough people, eventually you’re gonna bump into someone who doesn’t like what you’re doing. Haters are inevitable and therefore non-diagnostic—being heckled on the internet is like running a blood test and discovering that there’s blood inside of you.
I was once at a standup show where everybody was laughing except for one guy in the front row who, for some reason, had a major stick up his butt. After sitting stone-faced for half an hour, the guy eventually just got up and stormed out. The comic didn’t miss a beat. “Well, it’s not for everybody,” he said, and kept right on going.
That’s the beauty of the Stupid Facts: they set you free from the illusion that you can please everybody. What a relief to let that obligation go! To hear someone say, “I don’t like you!” and to be able to respond, “Well, that was guaranteed to happen.” You need that kind of serenity to do anything interesting, especially on the internet, where there’s an infinite supply of detractors standing by to shout you down.
I hate looking at my stats because it’s a recipe for getting Goodharted, but I’m gonna do it for a sec because I want to show you something. Here’s the graph of Experimental History’s readers over the past 90 days:
You see those periodic little bumps where I shed ~50 people? Those are days I posted. What’s happening there is a bunch of people got an email from me and decided to never receive emails from me again.
Most people wince when they see bumps like that. Thanks to the Two Stupid Facts, I laugh instead. Hey man, it ain’t for everybody!
You gotta be careful with these Facts, though, because they can turn you extra Stupid.
Once someone lobs a grenade into your house and the smoke clears, you realize something surprising: you now have a bigger house. That’s because, on the internet, attention is almost always good. Every complaint about you is also a commercial for you—after all, nobody bothers to yell at a nobody.
We all know this, but the first time it happened to me, it felt a little freaky, like I had just sat down to a business meeting with Satan and he was making some really good points. “When you make people mad, you succeed,” he says. “All fame is good fame. I mean, come on, Dior is still running ads with Johnny Depp!”
If you browse Substack—something the platform wisely makes difficult to do—you quickly see that lots of people have taken the devil’s bargain. Every outrage is on offer; if it ain’t fit to print, someone’s posting it instead.
This has made a lot of people very upset, but it shouldn’t surprise us, because it’s just the Two Stupid Facts again. In a big, diverse world, there’s a market for every opinion, and for its opposite. There will always be people who hate something because other people like it, and people who like something because other people hate it, so all that hate is ultimately just free advertising. Trying to squash the thing you despise is like squashing a stink bug; it just attracts more stink bugs who are like “yum do I smell stink in here??” That’s why being crucified is often a good business move—one guy who did it 2,000 years ago still has several billion followers.
As tempting as the devil’s deal may be, it is—surprise!—a bad one. Yes, you can enlarge your house by encouraging people to lob grenades at it. But the fact that you end up with a mansion doesn’t mean you’ve done anything interesting or useful. You’ve merely taken advantage of some Stupid Facts.
Mastering the Facts is helpful for handling haters and for not becoming one yourself, but their real power is that they can bust open your sense of what’s possible.
The Facts mean that even if you only appeal to a minuscule cadre of dorks, say, 0.1% of people, that’s still 8 million people. That’s roughly the population of Ireland and Uruguay combined. I would be honored to be read by every Irelander and Uruguayan, even if it meant being read by literally nobody else.
Once that really sunk in for me, I realized how silly it is to think there’s a single path to success. There’s no one thing that’ll please every 0.1%. Anyone peddling advice about how to do that is, at best, just telling you how to please their 0.1%.
Here’s an example from my world. Many successful people on Substack pump out a pretty good article every day or two, and they’ll tell you that the secret to succeeding on Substack is...pumping out a pretty good article every day or two. Substack’s official data-driven advice agrees: if you’re serious about getting ahead on this platform, you better ping your readers at least once a week, ideally more. Keep that content spigot open! That’s the sensible thing to do.
But when you’re only going for 0.1% of people, you don’t have to do the sensible thing. In fact, you shouldn’t do the sensible thing. You should do the thing that appeals to that small set of weirdos you’re trying to reach.1
That’s why I don’t do the 3-7x/week schedule. I’m trying to do something different, and I don’t care if it doesn’t appeal to 99.9% of people. To me, writing is like climbing a mountain and then telling people what you saw up there, except the mountain is in your head. Climbing 10% of the mountain is pretty easy and lots of people do it; climbing all the way to the top is hard and almost no one does it. That’s why climbing 10% of the mountain ten times is not as useful as climbing to the top once.
I wanna see that summit, even if I die halfway up the mountain. That’s why I’ll trash a whole post if I don’t surprise myself while writing it—if I already knew what I was going to say, so did the 0.1% that I’m writing for. And it’s why the default mode of my brain is permanently set to “blog”. I’m stringing sentences together in my head from the moment I open my eyes to the moment I close them again, even when I should be doing other things. When I one day absent-mindedly cross the street and get splattered across the windshield of a Kia Sorento, my last thoughts before I lose consciousness will be “This would make a good post.”
It’s a joy to live this way because I feel like I’m being useful. When you’re trying to write for everybody, you can’t actually care about your readers; they’re too numerous, too varied, too vague. But when you’re writing for that beautiful, tiny fraction, you can care a lot. I want to give those folks something good. I want to write them the post they bring up on their second date, the post they forward to their grandpa, the post they listen to on a road trip. I don’t have the upper body strength to pull people out of burning buildings, the steady hands to remove brain tumors, or the patience to teach first-graders right from wrong. But I can write words that a few people find useful, and damn it, I’m going to bust my butt to do my bit.
I think that’s exactly how it should feel to serve your slice of humanity. It shouldn’t be easy, like stealing a 2022 Kia Sorento. It should be hard in an interesting way, like stealing a 2024 Kia Sorento.
All of this writing about writing on the internet probably sounds foolish, what with the coming AI apocalypse and all. Surely, every blog is about to be automated, right? It kinda feels like spending my life savings to buy a house and then this guy moves in next door:
The Two Stupid Facts are the reasons why I haven’t quit yet. As long as there are humans, there will be human-shaped niches where the robots can’t fit, because the way you grow humans is inherently different from the way you grow robots.
A critical step in training large language models is “reinforcement learning from human feedback,” which is where you make the computer say something and then you either pat it on the head and go “good computer!!” or you hit it with a stick and go “bad computer!!” This is how you make an AI helpful and prevent it from going full Nazi.2
Humans also undergo reinforcement learning from human feedback—we get yelled at, praised, flunked, kissed, laughed at (in a good way), laughed at (in a bad way), etc. This is how you make a human helpful and prevent it from going full Nazi, although clearly the procedure isn’t foolproof. But there are four important differences between the process that produces us and the one that makes machines:
No two humans get the same set of training data; our inputs are one-of-a-kind and non-replicable.
Rather than getting trained by a semi-random sample of humans who all have an equal hand in shaping us, we get trained very deeply by a few people—mainly parents, peers, and partners—and very shallowly by everybody else.
Because we’re born with different presets, even identical feedback can produce different people; some humans like getting hit by a stick.
We can choose to ignore our inputs, which you can confirm by having a single conversation with a toddler or a teenager.
This is a recipe for creating 8 billion different eccentrics with peculiar preferences and proclivities, the kind of people who are, at best, loved by a handful and tolerated by the rest.
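If you want to see that recipe in miniature, here’s a toy sketch. It is emphatically not the real RLHF pipeline (no reward model, no gradient descent, and every trainer, taste, and number below is made up); it’s just the pat-on-the-head / hit-with-a-stick loop, run twice with different inputs and different presets:

```python
import random

STYLES = ["earnest", "sarcastic", "flowery", "terse"]

def raise_an_agent(trainer_tastes, preset, rounds=500, lr=0.1, seed=None):
    """Cartoon feedback loop: try a style, get a pat (+1) or a stick (-1)
    from one of a *few* trainers, nudge your preferences accordingly.
    (Toy illustration only; every name and number here is invented.)"""
    rng = random.Random(seed)
    prefs = dict(preset)                              # born with different presets
    for _ in range(rounds):
        style = rng.choice(STYLES)                    # try something
        trainer = rng.choice(list(trainer_tastes))    # trained deeply by a few people
        reward = 1 if style in trainer_tastes[trainer] else -1
        prefs[style] += lr * reward                   # pat or stick
    return max(prefs, key=prefs.get)

# Same procedure, different inputs and presets -> almost certainly different people.
alice = raise_an_agent({"mom": {"earnest", "sarcastic"}, "best_friend": {"sarcastic"}},
                       preset={"earnest": 0.5, "sarcastic": 0, "flowery": 0, "terse": 0},
                       seed=1)
bob = raise_an_agent({"dad": {"terse"}, "coach": {"terse", "earnest"}},
                     preset={"flowery": 0.5, "earnest": 0, "sarcastic": 0, "terse": 0},
                     seed=2)
print(alice, bob)   # most likely: sarcastic terse
```

Point four (the ability to ignore your inputs entirely) isn’t in there, because nobody has figured out how to code a toddler.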
I know most predictions about the future of AI are proven wrong in like ten minutes3, but I expect these four non-stupid facts to remain true because they’re baked into the business model. Tech companies want the big bucks, and that comes from mildly pleasing a billion people, not delighting a handful of them. That’s why those companies routinely make their products worse in the hopes of attracting the lowest common denominator. Even serving all of Ireland and Uruguay won’t earn you enough to run the cooling fans in your data centers, let alone get you the $7 trillion you need to build your chip factories. I, on the other hand, require only a single cooling fan, and my chip factory is the snack aisle at Costco.
I think that’s why the blogosphere hasn’t yet fallen to the bots. Ever since ChatGPT came out two years ago, anybody can press a button and get a passable essay for free. And yet, when a company called GPTZero checked a bunch of big Substacks for evidence of AI-generated text, 90% of them came out “Certified Human”4. I’m happy to report that Experimental History passed the bot check with flying colors:
You gotta take this with a grain of salt, of course. Maybe those Substacks are disguising their computer-generated text so well that the Robot Detector can’t detect them, and maybe aspiring slop-mongers just haven’t yet perfected their technique. The LLMs are only gonna get better, but they’re already pretty good, and eventually they’ll reach a point where pleasing one human more would mean pleasing another human less. That piddling little percentage point of people you piss off when you try to make your product appeal to everybody—those folks are my whole world.
So until the Terminators show up, I’m gonna be out here doing my thing. And if I’m wrong about all this, well, remember I was relying on Two Stupid Facts.
That’s what I learned. Here’s what I did!
The top posts from this year were:
My three favorite MYSTERY POSTS from this year (and coincidentally, the most-viewed) were:
I also did a few special projects and events:
I ran a seven-week Science House prototype.
I hosted a Blog Competition and Jamboree and found some great new (and old) writers.
I convened a meetup for Experimental History readers in NYC. Everybody was cool and nice and it made me proud to bring them together—look out for more of these in the future (like this one).
There’s one more project I’ve been working on in secret: I’m writing a book. There are some things I want to do that can’t be done with pixels on screens; they can only be done with paper and ink. You know how most nonfiction books should have been blog posts? I got obsessed with the idea of a book that can only be a book, the kind of thoughts you can only transmit when you print them out and bind them together. So it’s gonna be a weird kind of book, but weird in a good way, like Frankenstein in a bowler cap.
It’s about 25% written so the release date is still up in the air, but with any luck I’ll get it done before the Terminators arrive. Practically, this means I’ll be spending some blog time on the book instead. So if you don’t hear from me for longer than usual, I’m either neck-deep in a chapter, or I had a fateful encounter with a Kia. Paid subscribers will still get regular MYSTERY POSTS.
Getting to work on this book and this blog has been a dream come true, and it’s all thanks to you guys. When I started writing Experimental History, I was like “oh wouldn’t it be cool if 500 people read it.” Instead, it’s changed everything about my life, and for the better. Thank you to everyone who reads, and thank you to the paid subscribers who keep the blog afloat. I promise you: there are many more Stupid Facts to come.
Microsoft killed their chatbot Tay in less than a day, but they briefly revived it so it could say this:
My first research paper, published in 2021, included this line that remained true for about a year: “Conversation is common, but it is not simple, which is why modern computers can pilot aircraft and perform surgery but still cannot carry on anything but a parody of a conversation.”
The Substacks that failed the human test mainly offer investment advice, which, no offense, is a genre where there’s already high tolerance for slop.
2024-12-19 00:25:15
In fourteen hundred and ninety-two, Columbus sailed the ocean blue…because he thought the apocalypse was coming.
Like many of his contemporaries, Columbus believed that the Earth was supposed to last for 7,000 years total, with only ~150 years remaining. More importantly, God had left us a list of things he wanted us to get done before he returned, which included “convert everybody to Christianity,” and “rebuild the temple in Jerusalem.” Columbus saw himself as critical to achieving both goals: he would discover new sea routes to speed up the evangelization of the world, and his success in that mission would prove he was also the man to tackle the temple job.1
So the most pivotal voyages in history happened because one guy wanted to tick some things off the divine to-do list before Jesus returned to vaporize the sinners and valorize the saints. But that’s not the weird part.
What’s really weird is that, in the big scheme of things, Columbus’ ideas are totally normal. Apocalyptic expectations are so common across time and place that they seem like the cosmic background radiation of the mind. Many people, while going about their business, are thinking to themselves, “Well, this will all be over soon.”
That remains true today. Nearly half of Christians in America believe that Jesus will “definitely” or “probably” return by 2050.2 A similar proportion of Muslims living in the Middle East and North Africa believe they will live to see the end times. In 2022, 39% of Americans said we’re living in the end times right now.
If you ask a psychologist why so many people expect the world to end, they’ll probably invoke terror management theory—maybe believing that armageddon is a-comin’ somehow helps us cope with the inevitable fact of our own deaths. Rather than getting your ticket punched at some random time by a stroke, a tumor, or a drunk driver, wouldn’t you rather believe you’ll go out in a blaze of godly glory that could be predicted in advance?
That might make sense for someone like Columbus, who gave himself a leading role in the end of the world. But for the rest of us stuck in the chorus line, doesn’t the impending doom seem kinda...stressful? Even if the end times ultimately lead to the Kingdom of God, most prophecies are pretty clear that there will be lots of wailing and gnashing of teeth beforehand: earthquakes and famines, Antichrists everywhere, not to mention the deafening trumpet blasts from a sort of celestial, apocalyptic ska band. Plus, most of the folks who think the end is nigh will admit they don’t know exactly how nigh it is, so it’s not like they can take solace in planning their outfits for the big day.
I’ve got a different theory: people are predisposed to believe the end is coming not because it feels good, but because it seems reasonable.
In The Illusion of Moral Decline, my coauthor Dan and I showed how two biases could lead people to believe that humans are getting nastier over time, even when they’re not.
Humans pay more attention to bad things vs. good things in the world. And they’re more likely to transmit info about bad things—the news is about planes that crashed, not planes that landed, etc. We call this part biased attention.
In memory, the negativity of bad stuff fades faster than the positivity of good stuff. There’s a good reason for this: when bad things happen, we try to rationalize them, reframe, distance, explain them away, etc., all things that sap the badness. (Much of this might be automatic and unconscious.) But we don’t do that when good things happen, and so good things keep their shine longer than bad things keep their sting. We call this part biased memory.3
Here’s what it looks like when you combine those two tendencies. Imagine you’ve got two cups in your head: a Bad Cup that fills up when you see bad things, and a Good Cup that fills up when you see good things. Every day you look out on the world, and thanks to biased attention, the Bad Cup gets fuller than the Good Cup.
But thanks to biased memory, stuff in the Bad Cup evaporates faster than stuff in the Good Cup:
When you remember the past, then, the Good Cup has lost some good stuff, but the Bad Cup has lost even more bad stuff:
So when you compare the past to the present, it seems like there was a more positive ratio of Good Cup to Bad Cup back then:
That can explain why things always seem bad and why things always seem like they’re getting worse. Which is exactly what we see in the data: every year, people say that humans just aren’t as kind as they used to be, and every year they rate human kindness exactly the same as they did last year.
(Of course, depending on the rates of evaporation and how far back you go, you could eventually get to a point where the Good Cup is actually fuller than the Bad Cup.4)
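If you want to watch the cups do their thing, here’s a tiny simulation (all of the probabilities and evaporation rates below are invented; only the asymmetries matter). The simulated world never changes, and yet the remembered past quickly starts looking rosier than the present and keeps getting rosier:

```python
import random

def good_cup_bad_cup(years=30, events_per_year=100,
                     p_notice_good=0.5, p_notice_bad=0.8,
                     good_decay=0.10, bad_decay=0.30, seed=0):
    """Toy model: a world that serves up the same 50/50 mix of good and bad
    events every year, filtered through biased attention (bad stuff gets
    noticed more) and biased memory (bad stuff evaporates faster)."""
    random.seed(seed)
    good_cup, bad_cup = 0.0, 0.0   # what's left in memory from past years

    for year in range(years):
        # Biased attention: this year's intake.
        good_now = sum(random.random() < p_notice_good for _ in range(events_per_year // 2))
        bad_now = sum(random.random() < p_notice_bad for _ in range(events_per_year // 2))

        if year > 0:
            print(f"year {year:2d}: today looks {good_now / bad_now:.2f} good-per-bad, "
                  f"the remembered past looks {good_cup / bad_cup:.2f}")

        # Pour this year into memory, then let it evaporate (biased memory).
        good_cup = (good_cup + good_now) * (1 - good_decay)
        bad_cup = (bad_cup + bad_now) * (1 - bad_decay)

good_cup_bad_cup()
```

With these made-up numbers, the present hovers around 0.6 good things per bad thing while the remembered past climbs toward roughly 2.4, which is the illusion of decline falling straight out of the two biases.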
If we’re right that the perception of decline is all about what people experience vs. what they remember, then people should perceive less decline or no decline at all in the years before they were born—after all, they don’t have memories from back then. And indeed, people tell us that the decline in human goodness only began after they arrived on Earth5:
These results can help explain why people find apocalypticism so appealing: to them, it fits the data. If you think that the troubles only started after you exited the birth canal, then “the end is nigh” seems like a reasonable extrapolation of the trend you’ve been observing for your whole life.
We only studied the supposed decline of kindness, but people seem to think that most thing are cratering. For example, in 2017, 59% of Americans said that the lowest point in the nation’s history they could remember is “right now.” People of all ages gave basically the same answer, meaning they thought the disasters of 2017 were worse than World War II, Vietnam, 9/11, Afghanistan, Iraq, etc. Little did they know that the worst was yet to come: in 2024, 67% of Americans said the lowest point in history is, in fact, this very moment. When you feel like you’re constantly hitting new lows, the end of the world isn’t some kind of cold comfort—it’s just the next point on the regression line.
If I’m right, people’s colorful theories of the End Times come second. What comes first is the conviction that the world’s problems are brand-spanking-new. And that conviction is stunningly consistent across time.
“Happiness is all gone,” says the Prophecy of Neferty, an Egyptian papyrus from roughly 4000 years ago. “Kindness has vanished and rudeness has descended upon everyone,” agrees Dialogue of a Man with His Spirit, written at around the same time. “It is not like last year […] There is no person free from wrong, and everyone alike is doing it,” says the appropriately-named Complaints of Khakheperraseneb from several hundred years later. And some unknown amount of time after that, the Admonitions of Ipuwer reports that actually things just started going to hell. “All is ruin! Indeed, laughter is perished and no longer made.” Worst of all: “Everyone’s hair has fallen out.”
I could keep going (and I do, in this footnote6), but the point is that when you take a stroll through history, you don’t encounter many people saying things like “the forces of evil and the flaws of human nature have always been among us.” Instead, you meet a lot of people saying things like “the forces of evil and the flaws of human nature have JUST APPEARED what do we do now??”
That’s still true today. We seem to assume that all the problems in the world arrived only recently, and we do this by default and without realizing it. Notice how people reflexively refer to institutions as “broken” or “rotten,” as if those institutions were once functional and fresh. Regardless of whether crime is going up or going down, people say it’s going up. It’s standard procedure to declare an epidemic of something—loneliness7, misinformation, fighting at schools—without demonstrating that there’s more of it than there used to be. We talk about “late capitalism” as if it just passed its expiration date, when in fact that term is 100 years old.8
“Unlike every previous American generation, we face impossible choices,” wrote David Herbert Donald in a New York Times op-ed. “The age of abundance has ended [...] Consequently, the ‘lessons’ taught by the American past are today not merely irrelevant but dangerous.” Hilariously, Donald published that piece in 1977 when he was...a tenured professor of history at Harvard.
Much more recently, a post went viral on Substack describing how the author came to believe in “the hazy idea of collapse,” which she describes as the “nagging sense that has hung over modern life since 2020, or 2016, or 2008, or 2001 — pick your start date — that things are not working anymore.” Another piece says the quiet part out loud: “We are living through a period of societal collapse. This isn’t a factual statement, but an emotional one.”9
“Hazy” and “emotional” perfectly describe the idea that everything started falling apart sometime in our recent past. We begin with a suspicion of decline and then reason backward from there, cherrypicking data as we go. We all lived through the replication crisis, and so we all know what happens when people have infinite researcher degrees of freedom: they discover that their preexisting biases were right all along.10
It’s easier to see what this looks like when we’ve got some distance, so here’s an example from long ago. The New England preacher William Miller and his followers were very sure that Jesus was going to return in 1843. This wasn’t some fly-by-night doomsaying; Miller double-checked his calculations for 12 years before he went public, and plenty of educated people saw his reasoning and said, “By jove, you’re right!” They even printed posters showing their work:
If you actually check any of the Bible references behind those numbers, though, they look hella sketchy. God says he will “afflict you for your sins seven times over,” (Lev 26:24) so that means you should...multiply some other number by seven? There’s a passage in the Book of Daniel where a goat’s horn grows really big and takes away “the daily sacrifice from the Lord”; the Millerites assumed this referred to the kidnapping of Pope Pius VII in 1798. No offense to our Heavenly Father, but this is the kind of random-ass nonsensical number puzzle I would expect to see in a third-tier escape room.
And yet people found it convincing! The Millerite newspapers were always publishing stories like “Rev. Dimblestick came to our meeting intending to debunk our theories, and instead we out-debated him and he joined our cause.” That’s gotta be because “the world will end in 1843” ultimately sounded pretty plausible. If you encountered the Millerites in the late 1830s or early 1840s, you’d likely agree with them that the world had gone bananas. There’s a rebellion in Rhode Island, riots in Paris, war in Afghanistan, and earthquakes in the Holy Land. Someone just tried to kill a sitting president for the first time ever, and the most recent president dropped dead after only a month in office. The economy is in a panic, states are threatening to secede, thousands of people are dying of cholera, and for goodness’ sake, they’re setting nuns on fire. How could this go on much longer?
So of course Miller’s Biblical research led him to believe that Judgment Day was coming soon. If his number-crunching had spat out the year 2024 instead, he probably would have tweaked his assumptions, because the result would have sounded so ridiculous. No way the world is gonna survive that long!
(The Miller debacle is one of the wildest episodes in American history, and it’s largely been forgotten. For more of the story, see my post from last week: The Day the World Didn’t End.)
That’s why this “hazy” idea of decline is so dangerous—it lowers the standard of evidence for believing miserable things, and raises the standard of evidence for everything else. If you fertilize that pernicious suspicion with a bit of confirmation bias, it can eventually grow into the full-fledged denial that anything has meaningfully changed for the better, that it ever can, and that it ever will.
There’s a lot of people walking around with that conviction calcified in their minds, and if you prod them with any evidence to the contrary—say, the proportion of people living in extreme poverty has shrunk by almost two-thirds in the last 20 years—they’ll doubt the data or explain it away. “Well, maybe those people are a little richer now, but that just means there will be more people burning fossil fuels, and ultimately more casualties of climate change.”
And maybe they’re right! Nobody knows the future, especially not me. But if you’re willing to shrug at a billion people rising out of poverty, if you’ve decided that every bad thing is bad and every good thing is secretly also bad, well, good news! If you read the Bible very closely, it’s clear that God is gonna start raining down hellfire any minute now.
I love humans because, God bless us, if you find one of us believing something, you can find someone else who believes the exact opposite of that thing, and with equal fierceness.
So while most folks think we’re heading to hell in a handbasket, there’s a vocal minority who think we’re heading to heaven on an escalator. (Usually, these people are trying to sell you the escalator.) They’re always ready to pop up and tell you things like, “People were worried about coffee and they were wrong, therefore it’s wrong to worry about anything.”
I have less ire to fire at these folks because there are fewer of them, and because doomsaying dominates the discourse—people who tell you to worry seem wise, while people who tell you to relax seem naive. But “Everything is working, turn it all up!” is just as foolish as “Nothing’s working, roll it all back!” because both sides look at the hardest question in the universe and say, “Oh yeah, this one’s easy.”11
Our Problem #1, our Final Boss, our Prime Directive is to multiply the good things and minimize the bad things. Every job, every policy, every idea is grappling with some corollary of that quandary. Right now, our collective efforts produce some mix of making things better and mucking things up. But which ones? How much? Since when? And by what mechanism?
Answering those questions is a huge pain, and I know that because my two lengthiest research projects (1, 2) have tried to answer a tiny subset of them, and both times it led to a yearslong misadventure of untangling extremely annoying issues. How can you pick your sources without biasing your results? Can you trust that the data means what it says? If there’s a kink in the trendline, is that because something happened, or is it because they changed how they were measuring things?
That’s why I pop a cranial artery whenever people assume they already have the answers. It’s like being in a room where something stinks, and everyone is like “Man what stinks” and we’re looking around trying to find the source of the stench, and then someone enters the room and announces “Have you guys noticed it STINKS in here?” as if we were happily stewing in the stink, waiting for some noble and well-nosed soul to wake us from our slumber.
So when you open your eyes for the first time and see all the depravity and inanity, all the malice and avarice, all the abominations and calamities, and you go “Have you guys notic—” yes! We’ve noticed! If you have a vague feeling that your Bad Cup didn’t used to be so full, and then you conclude we’re slip-sliding toward catastrophe, you haven’t discovered anything. You’ve just taken your biases for a walk.
This story might be half-apocryphal, but apparently on May 19th, 1780, the sky went dark over Connecticut. We don’t know what blotted out the sun—probably some forest fires burning nearby—but the deeply Christian Connecticuters figured it was a sign the End Times had come. At the State House in Hartford, several senators suggested that everybody should return home and prepare to meet their Maker. Amidst the commotion, Senator Abraham Davenport of Stamford stood up and said:
I am against adjournment. The day of judgment is either approaching, or it is not. If it is not, there is no cause for an adjournment; if it is, I choose to be found doing my duty. I wish therefore that candles may be brought.
Candles were brought, and the work continued.
We know this because Columbus was compiling a Book of Prophecies during his travels, which was basically a scrapbook of apocalyptic prophecies plus an unfinished letter to Ferdinand and Isabella of Spain. In the letter, he says:
I spent six years here at your royal court, disputing the case with so many people of great authority, learned in all the arts. And finally they concluded that it all was in vain, and they lost interest. In spite of that it [the voyage to the Indies] later came to pass as Jesus Christ our Saviour had predicted and as he had previously announced through the mouths of His holy prophets. Therefore, it is to be believed that the same will hold true for this other matter [the voyage to the Holy Sepulchre].
In a more recent survey with more response options, 14% of US Christians said Jesus “definitely” or “probably” will return in their lifetimes, 37% weren’t sure, 25% said that Jesus “definitely” or “probably” will NOT return in their lifetimes, and 22% said he’s not coming back at all, or that they don’t believe in him. I take this to mean that most Christians aren’t sure whether Jesus will return before they die, but if you make them choose (as the other survey did), they lean “yes”.
This is true on average, but it’s not true for every single person or every single memory. Sometimes bad experiences can get even worse over time, and sometimes good things can seem bad in retrospect. But the opposite happens far more often, which is why life is bearable for most people.
If you start with a full Good Cup and an empty-ish Bad Cup, the differential evaporation will lead you to believe that the ratio of good to bad was even better in the past. We think this is exactly what happens when people consider their loved ones instead of the entire world: you mainly perceive good things from your friends and family members, so your Good Cup is usually fuller today than your Bad Cup. Over time, differential evaporation will make that ratio even more positive, leading people to conclude that their loved ones have improved. And indeed, that’s exactly what we found.
By the way, this finding was replicated earlier this year.
Here’s the Talmud, from ~200AD:
If the early generations are characterized as sons of angels, we are the sons of men. And if the early generations are characterized as the sons of men, we are akin to donkeys. And I do not mean that we are akin to either the donkey of Rabbi Ḥanina ben Dosa or the donkey of Rabbi Pinḥas ben Yair, who were both extraordinarily intelligent donkeys; rather, we are akin to other typical donkeys.
Christians agree:
[Martin] Luther once remarked that a whole book could be filled with the signs that happened in his day and that pointed to the approaching end of the world. He thought that the “worst sign” was that human beings had never been so earthly minded “as right now.” Hardly anyone cared about eternal salvation.
In 1627, the English clergyman George Hakewill was so tired of people complaining about the world getting worse over time—an opinion “so generally received, not only among the Vulgar, but of the Learned”—that he wrote a whole book attempting to refute it. He failed, of course. Here’s Nietzsche in 1874:
Never was the world more worldly, never poorer in goodness and love. Men of learning are no longer beacons or sanctuaries in the midst of this turmoil of worldliness; they themselves are daily becoming more restless, thoughtless, loveless. Everything bows before the coming barbarism, art and science included.
I got this example from a recent piece called “The Myth of the Loneliness Epidemic”
Werner Sombart, the guy who coined the term, was thrilled toward the end of his life because he thought capitalism was finally being replaced by a better system called “Nazism.”
If you can’t get enough collapse-related content, there’s an apparently successful Substack called Last Week in Collapse that offers you a “weekly summary of the ongoing collapse of civilization”
If I can play the quote game one last time, these posts sound eerily similar to a New York Times Magazine piece from 1978:
Europeans have a sense of being at the beginning of a downhill slide. [...] There is a pervading sense of crisis, although it has no clear face, as it did in the days of postwar reconstruction. People are disillusioned and preoccupied. The notion of progress, once so stirring, rings hollow now. Nobody can say exactly what he fears, but neither is anyone sanguine about the future. [...] there is a sense that governments no longer have the wisdom or power to cope
This was exactly what it was like to study people’s perception of moral decline. I was basically asking people, “Hey, I’ve spent the past three years working on this question, but could you answer it in .4 seconds?” And their answer was “Sure can.”
2024-12-11 01:16:18
It was October 22nd, 1844, and the world was about to end.
Somewhere between 50,000 and 200,000 people were looking up at the sky that day, waiting for Jesus to burst through the clouds. Some of them had quit their jobs, left their crops to rot in the fields, sold all their possessions, paid the debts they owed, forgave the debts owed t…
2024-12-03 22:54:12
Even a few years ago, if you had tried to talk to me about the “philosophy of science,” I would have skedaddled. Philosophy? As in, people just...saying stuff? No thanks dude, I’m good. I’m a Science Guy, I talk Data.
But then I realized something: I had no idea what I was doing. Like, nobody had ever told me what science was or how to do it. My advisor never sat me down and gave me the scientific Birds and the Bees talk (“When a hypothesis and an experiment love each other very much...”). I just showed up to my PhD and started running experiments. Wasn’t I supposed to be using the Scientific Method, or something? Come to think of it, what is the Scientific Method?
As far as I could tell, everybody else was as clueless as me. There are no classes called “Coming Up with Ideas 101” or “How to Pick a Project When There Are Literally Infinite Projects You Could Pick.” We all learned through osmosis—you read some papers, you watch some talks, and then you just kinda wing it.
We’re devoting whole lifetimes to this project—not to mention billions of taxpayer dollars—so shouldn’t we, you know, have some idea of what we’re doing? Fortunately, in the 20th century, three philosophers were like, “Damn, this science thing is getting pretty crazy, we should figure out what it is and what it’s all about.”1
Karl Popper said: science happens by proposing hypotheses and then falsifying them.
Thomas Kuhn said: science happens by people trying to solve puzzles in the prevailing paradigm and then shifting the paradigm entirely when things don’t add up.
And Paul Feyerabend said: *BIG WET FART NOISE*
Guess whose book I’m gonna tell you about?
Here’s how Feyerabend put it in Against Method (1975, emphasis his):
The thesis is: the events, procedures and results that constitute the sciences have no common structure; there are no elements that occur in every scientific investigation but are missing elsewhere. [...] Successful research does not obey general standards; it relies now on one trick, now on another [...] This liberal practice, I repeat, is not just a fact of the history of science. It is both reasonable and absolutely necessary for the growth of knowledge.
Which is to say: there is no such thing as the scientific method. Feyerabend’s famous dictum is “anything goes,” which he explained is “the terrified exclamation of the rationalist who takes a closer look at history.”2 Whenever you try to lay down some rule like “science works like this,” Feyerabend pops up and says “Aha! Here’s a time when someone did exactly the opposite of that, and it worked!”
Here’s an example. Say some guy named Galileo comes up to you and says something crazy like, “The Earth is constantly moving.” Well, Rule #1 of science is supposed to be “theories should fit the facts,” so you, a dutiful scientist, consult them:
When you’re on something that’s moving (like a horse or a cart), you notice it for all sorts of reasons. You feel the wind, you see the scenery changing, you might get a little sick, and so on. If the Earth was moving, we’d know it.
When you jump straight up, you land on exactly the point where you started. If the Earth was moving, that wouldn’t happen—it would change position while you’re in the air, and your landing spot would be different from your launching spot.
The Earth does actually move sometimes; it’s called an earthquake. And when that happens, buildings topple over, you get mudslides and avalanches, etc. So the Earth can’t be moving all the time, or else those things would be happening all the time, too.
Those facts seem pretty irrefutable, so you conclude the Earth does not move. Unfortunately, it does move, and you’re not going to figure that out by being a scientific goody-two-shoes. Instead, you’ll have to entertain the possibility that this Galileo guy might be right. In Feyerabend’s words:
Turning the argument around, we first assert the motion of the earth and then inquire what changes will remove the contradiction. Such an inquiry may take considerable time, and there is a good sense in which it is not finished even today. The contradiction may stay with us for decades or even centuries. Still, it must be upheld until we have finished our examination or else the examination, the attempt to discover the antediluvian components of our knowledge, cannot even start. This, we have seen, is one of the reasons one can give for retaining, and, perhaps, even for inventing, theories which are inconsistent with the facts.
To Feyerabend, our minds are like deep lakes, our assumptions are like the fish you can’t see from the surface, and a wacko theory is like a stick of dynamite that you drop into the water so it blows up and all the dead assumptions float to the surface. If you take Galileo’s dumb-sounding theory seriously, you start wondering whether your facts are as irrefutable as they seem: Would you really be able to tell if the Earth and everything on it was moving? For instance, if you were below deck on a ship, would you be able to tell whether the ship was docked or sailing smoothly? Have you ever even checked?
Feyerabend goes so far as to claim that, in the whole Galileo debacle, the Catholic Church was the side “trusting the science.” The Inquisition didn’t just condemn Galileo for contradicting the Bible. They also said he was wrong on the facts. That part of their judgment “was made without reference to the faith, or to Church doctrine, but was based exclusively on the scientific situation of the time. It was shared by many outstanding scientists — and it was correct when based on the facts, the theories and the standards of the time.” As in: the Inquisition was right.
Feyerabend described himself as an “epistemological anarchist,” and other people have called him that too, but they meant it as an insult. This whole “anything goes” thing causes a lot of pearl-clutching and handwringing about misinformation and pseudoscience. There’s supposed to be some kind of Special Secret Science Sauce that makes “real” science good, and if it turns out that you can squirt any kind of condiment on there and it tastes fine, then why are we spending so much on the brand-name stuff?
I think the pearl-clutchers are onto something here, and I’ll come back to that in a second. But first, even if the strong version of Feyerabend’s thesis makes people faint, those same people probably agree with him when it comes to revolutionary science. A breakthrough, almost by definition, has to be at least a little ridiculous—otherwise, someone would have made it already. For example, when you look back at the three most important breakthroughs in biology in the last forty years, every one of them required at least one step, and sometimes many steps, that sounded stupid to a lot of people:
We have mRNA vaccines because one woman was so sure she could make the technology work that she kept going for decades, even when all of her grants were denied, her university tried to kick her out, and her advisor tried to deport her.
We have CRISPR in part because some scientists at a dairy company were trying to make yogurt.
We have polymerase chain reaction because one guy wanted to “find out what kind of weird critters might be living in boiling water in Yellowstone” and another guy refused to listen to his friends when they told him he was wasting his time: “...not one of my friends or colleagues would get excited over the potential for such a process. [...] most everyone who could take a moment to talk about it with me, felt compelled to come up with some reason why it wouldn’t work.”3
(That’s why it’s always a con whenever people dig up silly-sounding studies to prove that the government is wasting money on science. They’ll be like “Can you believe they’re PAYING PEOPLE to SCOOP OUT PART OF A CAT’S BRAIN and then SEE IF IT CAN STILL WALK ON A TREADMILL???” And then it turns out the research is about how to help people walk again after they have a spinal cord injury. A lot of research is bad, but the goofiness of its one-sentence summary is not a good indication of its quality.4)
Not only do we have useful research that breaks the rules; we also have useless research that follows the rules. You can develop theories, run experiments, gather data, analyze your results, and reject your null hypotheses, all by the book, without a lick of fraud or fakery, and still not produce any useful knowledge. In psychology, we do this all the time.
Everybody seems to agree with these facts in the abstract, but we clearly don’t believe them in our hearts, because we’ve built a scientific system that denies them entirely. We hire people, accept papers, and dole out money based on the assumption that science progresses by a series of sensible steps that can all be approved by committee. That’s why everyone tries to pretend this is happening even when it isn’t.
For example, the National Institutes of Health don’t like funding anything risky, so a good way to get money from them is to show them some “promising” and “preliminary” results from a project that, secretly, you’ve already completed. When they give you a grant, you can publish the rest of the results and go “wow look it all turned out so well!” when actually you’ve been using the money to try other stuff, hopefully generating “promising” and “preliminary” results for the next grant application. Which is to say, a big part of our scientific progress depends on defrauding the government.
In fact, whenever we find ourselves stuck on some scientific problem for a long time, it’s often from an excess of being reasonable. For instance, we don’t have any good treatments for Alzheimer’s in large part because a “cabal” of researchers was so sure they knew which direction Alzheimer’s science should take that they scuttled anybody trying to follow other leads. Their favored hypothesis—that a buildup of amyloid proteins gums up the brain—enjoyed broad consensus at the time, and anybody who harbored doubts looked like a science-denier. Decades and billions of dollars later, the amyloid hypothesis is pretty much kaput, and we’re not much closer to a cure, or even an effective treatment. So when Grandma starts getting forgetful and there’s nothing you can do about it, you should blame the people who enforced the supposed rules of science, not the people who tried to break them.
Likewise, even if we believe that science requires occasional irrationality, we hide this fact from our children. Walk into an elementary classroom and you’ll probably see a poster like this:
Most of the things my teachers hung on the walls turned out to be whoppers: the food pyramid is not a reasonable dieting guide, Greenland is actually about one-eighth the size of South America (despite how it looks on the map), and Maya Angelou never said that thing about people remembering how you made them feel. But this hokum about the “scientific method” is the tallest tale of them all.
Yes, scientists do all of these things sometimes, but laying out these steps as “the” method implies that science is a solved problem, that everybody knows what’s going on, that we all just show up and paint by numbers, and that you, too, can follow the recipe and discoveries will pop out the other end. (“Hey pal, I’m stuck—what comes after research again?”) This “method” is about as useful as my friend Steve’s one-step recipe for poundcake, which goes: “Step 1: add poundcake.”5
I read Against Method because people kept recommending it to me, and now I see why. Ol’ Paul and I have a similar vibe: we both love italics and hate hierarchy. But I have two bones to pick with him.
First, Feyerabend has a bit of a Nazi problem—namely, that he was a Nazi. He mentions this in a bizarre footnote toward the end of the book:
Like many people of my generation I was involved in the Second World War. This event had little influence on my thinking. For me the war was a nuisance, not a moral problem.
When he was drafted, his main reaction was annoyance that he couldn’t continue studying astronomy, acting, and singing:
How inconvenient, I thought. Why the hell should I participate in the war games of a bunch of idiots? How do I get out of it? Various attempts misfired and I became a soldier.
He did a bit more than that. According to his entry in the Stanford Encyclopedia of Philosophy, Feyerabend earned the Iron Cross for gallantry in battle, and was promoted to lieutenant by the end of the war. He later quipped that he “relished the role of army officer no more than he later did that of university professor,” which is, uh, not the most reassuring thing one can say about either of those jobs. (“Don’t worry, students! I was indeed an officer in the Wehrmacht, but I hated it just as much as I hate teaching you.”)6
Look, I don’t think you should judge people’s ideas by judging their character.7 I expect everyone I read to have a thick file of foibles, and most of the folks who have something to teach me probably don’t share my values. But there are “foibles” and then there’s “carrying a gun for Hitler and then being extremely nonchalant about it.” That’s especially weird for a philosopher, since thinking critically about stuff is his job.
So me and Herr Feyerabend may not see eye-to-eye vis-a-vis WWII, but my biggest philosophical gripe with him is that he doesn’t seem to care about the Hockey Stick:
The Hockey Stick is the greatest mystery in human history. Something big happened in the past ~400 years, something that never happened before in the ~300,000 years of our species’ existence. People have all sorts of theories about what caused the Hockey Stick, but everyone agrees that science played a part. We started investigating the mysteries of the universe in a new way, and our discoveries piled up and spilled over into technology much faster than they ever had before.
So, fine, “anything goes,” but some things go better than others. That’s why I don’t buy Feyerabend’s claim that the scientific method doesn’t exist. I think it doesn’t exist yet. That is, we’ve somehow succeeded in the practice of science without understanding it in principle. Although we haven’t solved the Mystery of the Hockey Stick, there are too many bodies to deny the mystery exists (the bodies in this mystery are alive—that’s the whole point).
Even though Feyerabend denies that mystery, perhaps he can help us solve it. That first upward tick of the Hockey Stick, that almost imperceptible liftoff sometime after the year 1600—that was a burst of Feyerabendian irrationality. The first scientists did something pretty stupid for the time: they ditched the books that had taught people for thousands of years (some of them supposedly written by God) and decided to do things themselves. They claimed you could discover objective, useful truths if you built air-pumps and peered through prisms, and they were...mostly wrong. It took ~200 years for this promise to really start paying off, which is why we only reached the crook of the Hockey Stick sometime after 1800.
So for two centuries, the progenitors of modern science mainly made fools of themselves. Early on, a popular play called The Virtuoso ridiculed the Royal Society (the organization that housed most of the important early scientists) by depicting some of their actual experiments on stage—a buffoonish natural philosopher tries to “swim on dry land by imitating a frog” and transfuses sheep’s blood into a man, causing a tail to grow out of the man’s butt. A few decades later, the politician Sir William Temple claimed that no material benefits had come from the “airy Speculations of those, who have passed for the great Advancers of Knowledge and Learning.” And fifty years after that, the writer Samuel Johnson looked upon the works of science and judged them to be “meh”:
When the Philosophers of the last age were first congregated into the Royal Society, great expectations were raised of the sudden progress of useful arts [...] The society met and parted without any visible diminution of the miseries of life.
Every new scientific investigation must trace this same path. You must first estrange yourself from the old ways of thinking, and then you must fall in love with new ways of thinking, and you must do both of these things before they are reasonable. Whatever the real scientific method is, these must be the first two steps. Incumbent theories are always going to be stronger than their younger challengers—at first. Only the truly foolish will be able to discover the evidence that ultimately overturns the old and establishes the new.
But this isn’t a general purpose, broad spectrum foolishness. It’s a laser-targeted, precise kind of foolishness. Falling in love with a fledgling idea is fine, but eventually you have to produce better experiments and more convincing explanations than the establishment can muster, or else your theory is going to go where 99% of them go, which is nowhere. And this is where the rules do matter. We remember Galileo because his arguments, as weird as they were at the time, ultimately held up.8 We would not remember him if he tried to claim that the Earth turns on its axis because there’s a sort of cosmic Kareem Abdul-Jabbar spinning it on his fingertip like a basketball.9
People call it “Nobel Disease” when scientist-laureates do nutty things like talk to imaginary fluorescent raccoons, as if this nuttiness is a tax on the scientists’ talent. But that’s backwards: that nuttiness is part of their talent. The craziness it takes to talk to the raccoons is the same craziness it takes to try creating a polymerase chain reaction when everybody tells you it won’t work. It’s just an extremely specific and rare kind of craziness. You have to grasp reality firmly enough to understand it, but loosely enough to let it speak. Which is to say, our posters of the “Scientific Method” should look like this:
Back in 2011, a psychologist named Daryl Bem published a bunch of studies claiming to show that ESP is real. This helped jumpstart the replication crisis in psychology, and some folks wonder whether that was Bem’s intention all along—maybe his wack-a-doo experiments were a false flag operation meant to expose the weakness of our methods.
I don’t think that’s true of Bem, but it might well be true of Feyerabend. Against Method is a medium-is-the-message kind of book: it’s meant to induce the same kind of madness that Feyerabend claims is necessary for scientific progress. That’s why he praises voodoo and astrology, that’s why he spends a whole chapter doing a non-sequitur close-reading of Homer10, and that’s even why he mentions his blasé attitude toward his Nazi days—he wants to upset you. He’s trying to pull a Galileo, to make an argument that’s obviously at odds with the facts, trying to trick you into looking closer at those facts so that you’ll see they’re shakier than you thought. He wants you to clutch your pearls because he knows they’ll crumble to dust in your hands.
This is the guy who once exclaimed in an interview, “I have no position! [...] I have opinions that I defend rather vigorously, and then I find out how silly they are, and I give them up!”11 He’s a stick of dynamite—you toss him into your mind so you can see what floats to the surface. And once the waves subside and the quiet returns, perhaps then you will hear the voice of the fluorescent raccoon, and perhaps you will listen.
Obviously there were more, but people mainly talk about these three. Sometimes they mention a fourth guy named Imre Lakatos, but he comes up less often, probably because he died tragically early. (In fact, he was supposed to write a book rebutting Feyerabend, which was going to be called For Method.) If you’re a philosopher, it behooves you to live a long time so people have plenty of opportunities to yell at you, thereby increasing your power. That’s why, personally, I don’t plan to die.
Here, “rationalist” refers to Feyerabend’s nemeses: Popper and his ilk, who thought that science required playing by the rules. It does not refer to the online movement of people trying to think correctly.
If you wanna get really sad, read the entirety of Kary Mullis’ Nobel lecture, which traces the development of PCR alongside the disintegration of his relationship. Here’s how it ends:
In Berkeley it drizzles in the winter. Avocados ripen at odd times and the tree in Fred’s front yard was wet and sagging from a load of fruit. I was sagging as I walked out to my little silver Honda Civic, which never failed to start. Neither Fred, empty Becks bottles, nor the sweet smell of the dawn of the age of PCR could replace Jenny. I was lonesome.
I got this example from this recent piece.
A couple weeks ago, I was on a conference panel with someone who predicted that AI will replace human scientists within 5-10 years. I disagreed, and this is exactly why. LLMs work because we can train them on a couple quintillion well-formed sentences. We’ve got way less well-formed science, and we have a hard time telling the well-formed from the malformed.
Feyerabend isn’t alone here—20th-century philosophers of science have an alarmingly high body count. Lakatos (see footnote above) allegedly forced a 19-year-old Jewish girl to commit suicide during WWII because he was too afraid to help her hide from the Nazis. I don’t think Popper or Kuhn ever killed anybody, although Kuhn once threw an ashtray at the filmmaker Errol Morris, who then wrote a book about it.
I do think serving in the Nazi army should disqualify you from becoming, say, the pope, but I understand that some people disagree about this.
Except for his cockamamie theory of the tides. Nobody wins ‘em all!
I sometimes get emails from folks who want to sell me on a theory of everything, assuming that I’ll be a sympathetic audience since I’m always going on about crazy ideas in science. And sure, I’ll lend an ear to a dotty hypothesis. But if you want me not just to listen, but to believe, then you’ll have to bring data. To be fair, though, I’ll give anybody the same benefit of the doubt that the original scientists got—that is, I’ll withhold judgment for two centuries.
This will be interesting to like, four people, but I’ve gotta let those four people know: one of the final chapters of Against Method makes a stunningly similar argument to Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind, which is that ancient humans weren’t conscious in the way modern humans are, but instead experienced consciousness as the voice of the gods. Here’s what Feyerabend says when discussing Homer:
Actions are initiated not by an “autonomous I” but by further actions, events, occurrences, including divine interference. And this is precisely how mental events are experienced. [...] Archaic man lacks “physical” unity, his “body” consists of a multitude of parts, limbs, surfaces, connections; and he lacks “mental” unity, his “mind” is composed of a variety of events, some of them not even “mental” in our sense, which either inhabit the body-puppet as additional constituents or are brought into it from outside.
This is super weird, because Against Method came out one year before Origin of Consciousness. Did Feyerabend and Jaynes know each other, or was this the zeitgeist speaking through both of them? I have no idea, and I haven’t seen anybody comment on the connection except for this one guy on Goodreads.
This interview is also notable for the fact that Feyerabend predicts the journalist’s divorce 17 years in advance.