2024-12-19 00:25:15
In fourteen hundred and ninety-two, Columbus sailed the ocean blue…because he thought the apocalypse was coming.
Like many of his contemporaries, Columbus believed that the Earth was supposed to last for 7,000 years total, with only ~150 years remaining. More importantly, God had left us a list of things he wanted us to get done before he returned, which included “convert everybody to Christianity,” and “rebuild the temple in Jerusalem.” Columbus saw himself as critical to achieving both goals: he would discover new sea routes to speed up the evangelization of the world, and his success in that mission would prove he was also the man to tackle the temple job.1
So the most pivotal voyages in history happened because one guy wanted to tick some things off the divine to-do list before Jesus returned to vaporize the sinners and valorize the saints. But that’s not the weird part.
What’s really weird is that, in the big scheme of things, Columbus’ ideas are totally normal. Apocalyptic expectations are so common across time and place that they seem like the cosmic background radiation of the mind. Many people, while going about their business, are thinking to themselves, “Well, this will all be over soon.”
That remains true today. Nearly half of Christians in America believe that Jesus will “definitely” or “probably” return by 2050.2 A similar proportion of Muslims living in the Middle East and North Africa believe they will live to see the end times. In 2022, 39% of Americans said we’re living in the end times right now.
If you ask a psychologist why so many people expect the world to end, they’ll probably invoke terror management theory—maybe believing that armageddon is a-comin’ somehow helps us cope with the inevitable fact of our own deaths. Rather than getting your ticket punched at some random time by a stroke, a tumor, or a drunk driver, wouldn’t you rather believe you’ll go out in a blaze of godly glory that could be predicted in advance?
That might make sense for someone like Columbus, who gave himself a leading role in the end of the world. But for the rest of us stuck in the chorus line, doesn’t the impending doom seem kinda...stressful? Even if the end times ultimately lead to the Kingdom of God, most prophecies are pretty clear that there will be lots of wailing and gnashing of teeth beforehand: earthquakes and famines, Antichrists everywhere, not to mention the deafening trumpet blasts from a sort of celestial, apocalyptic ska band. Plus, most of the folks who think the end is nigh will admit they don’t know exactly how nigh it is, so it’s not like they can take solace in planning their outfits for the big day.
I’ve got a different theory: people are predisposed to believe the end is coming not because it feels good, but because it seems reasonable.
In The Illusion of Moral Decline, my coauthor Dan and I showed how two biases could lead people to believe that humans are getting nastier over time, even when they’re not.
Humans pay more attention to bad things vs. good things in the world. And they’re more likely to transmit info about bad things—the news is about planes that crashed, not planes that landed, etc. We call this part biased attention.
In memory, the negativity of bad stuff fades faster than the positivity of good stuff. There’s a good reason for this: when bad things happen, we try to rationalize them, reframe, distance, explain them away, etc., all things that sap the badness. (Much of this might be automatic and unconscious.) But we don’t do that when good things happen, and so good things keep their shine longer than bad things keep their sting. We call this part biased memory.3
Here’s what it looks like when you combine those two tendencies. Imagine you’ve got two cups in your head: a Bad Cup that fills up when you see bad things, and a Good Cup that fills up when you see good things. Every day you look out on the world, and thanks to biased attention, the Bad Cup gets fuller than the Good Cup.
But thanks to biased memory, stuff in the Bad Cup evaporates faster than stuff in the Good Cup:
When you remember the past, then, the Good Cup has lost some good stuff, but the Bad Cup has lost even more bad stuff:
So when you compare the past to the present, it seems like there was a more positive ratio of Good Cup to Bad Cup back then:
That can explain why things always seem bad and why things always seem like they’re getting worse. Which is exactly what we see in the data: every year, people say that humans just aren’t as kind as they used to be, and every year they rate human kindness exactly the same as they did last year.
(Of course, depending on the rates of evaporation and how far back you go, you could eventually get to a point where the Good Cup is actually fuller than the Bad Cup.4)
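(If it helps to see the cups as code, here’s a minimal sketch of the model. I’ve made up all the numbers for illustration—these aren’t estimates from our paper—but the logic is just: bad stuff pours in faster, and bad stuff evaporates faster.)

```python
# Toy two-cup model. Every number here is invented for illustration;
# these are not parameters estimated in the actual paper.

GOOD_SEEN, BAD_SEEN = 1.0, 1.5       # biased attention: more bad stuff gets in
GOOD_DECAY, BAD_DECAY = 0.02, 0.08   # biased memory: badness fades faster (per year)

def remembered(amount, decay, years_ago):
    """How much of an experience still survives in memory after `years_ago` years."""
    return amount * (1 - decay) ** years_ago

for years_ago in [0, 10, 20, 40]:
    good = remembered(GOOD_SEEN, GOOD_DECAY, years_ago)
    bad = remembered(BAD_SEEN, BAD_DECAY, years_ago)
    print(f"{years_ago:>2} years ago: good-to-bad ratio in memory = {good / bad:.2f}")

# Prints 0.67, 1.25, 2.36, 8.34: every day delivered the exact same mix
# of experiences, yet the further back you look, the better things seem --
# and far enough back, the Good Cup really is fuller than the Bad Cup.
```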
If we’re right that the perception of decline is all about what people experience vs. what they remember, then people should perceive less decline or no decline at all in the years before they were born—after all, they don’t have memories from back then. And indeed, people tell us that the decline in human goodness only began after they arrived on Earth5:
These results can help explain why people find apocalypticism so appealing: to them, it fits the data. If you think that the troubles only started after you exited the birth canal, then “the end is nigh” seems like a reasonable extrapolation of the trend you’ve been observing for your whole life.
We only studied the supposed decline of kindness, but people seem to think that most things are cratering. For example, in 2017, 59% of Americans said that the lowest point in the nation’s history they could remember is “right now.” People of all ages gave basically the same answer, meaning they thought the disasters of 2017 were worse than World War II, Vietnam, 9/11, Afghanistan, Iraq, etc. Little did they know that the worst was yet to come: in 2024, 67% of Americans said the lowest point in history is, in fact, this very moment. When you feel like you’re constantly hitting new lows, the end of the world isn’t some kind of cold comfort—it’s just the next point on the regression line.
If I’m right, people’s colorful theories of the End Times come second. What comes first is the conviction that the world’s problems are brand-spanking-new. And that conviction is stunningly consistent across time.
“Happiness is all gone,” says the Prophecy of Neferty, an Egyptian papyrus from roughly 4000 years ago. “Kindness has vanished and rudeness has descended upon everyone,” agrees Dialogue of a Man with His Spirit, written at around the same time. “It is not like last year […] There is no person free from wrong, and everyone alike is doing it,” says the appropriately-named Complaints of Khakheperraseneb from several hundred years later. And some unknown amount of time after that, the Admonitions of Ipuwer reports that actually things just started going to hell. “All is ruin! Indeed, laughter is perished and no longer made.” Worst of all: “Everyone’s hair has fallen out.”
I could keep going (and I do, in this footnote6), but the point is that when you take a stroll through history, you don’t encounter many people saying things like “the forces of evil and the flaws of human nature have always been among us.” Instead, you meet a lot of people saying things like “the forces of evil and the flaws of human nature have JUST APPEARED what do we do now??”
That’s still true today. We seem to assume that all the problems in the world arrived only recently, and we do this by default and without realizing it. Notice how people reflexively refer to institutions as “broken” or “rotten,” as if those institutions were once functional and fresh. Regardless of whether crime is going up or going down, people say it’s going up. It’s standard procedure to declare an epidemic of something—loneliness7, misinformation, fighting at schools—without demonstrating that there’s more of it than there used to be. We talk about “late capitalism” as if it just passed its expiration date, when in fact that term is 100 years old.8
“Unlike every previous American generation, we face impossible choices,” wrote David Herbert Donald in a New York Times op-ed. “The age of abundance has ended [...] Consequently, the ‘lessons’ taught by the American past are today not merely irrelevant but dangerous.” Hilariously, Donald published that piece in 1977 when he was...a tenured professor of history at Harvard.
Much more recently, a post went viral on Substack describing how the author came to believe in “the hazy idea of collapse,” which she describes as the “nagging sense that has hung over modern life since 2020, or 2016, or 2008, or 2001 — pick your start date — that things are not working anymore.” Another piece says the quiet part out loud: “We are living through a period of societal collapse. This isn’t a factual statement, but an emotional one.”9
“Hazy” and “emotional” perfectly describe the idea that everything started falling apart sometime in our recent past. We begin with a suspicion of decline and then reason backward from there, cherrypicking data as we go. We all lived through the replication crisis, and so we all know what happens when people have infinite researcher degrees of freedom: they discover that their preexisting biases were right all along.10
It’s easier to see what this looks like when we’ve got some distance, so here’s an example from long ago. The New England preacher William Miller and his followers were very sure that Jesus was going to return in 1843. This wasn’t some fly-by-night doomsaying; Miller double-checked his calculations for 12 years before he went public, and plenty of educated people saw his reasoning and said, “By Jove, you’re right!” They even printed posters showing their work:
If you actually check any of the Bible references behind those numbers, though, they look hella sketchy. God says he will “afflict you for your sins seven times over” (Lev 26:24), so that means you should...multiply some other number by seven? There’s a passage in the Book of Daniel where a goat’s horn grows really big and takes away “the daily sacrifice from the Lord”; the Millerites assumed this referred to the kidnapping of Pope Pius VII in 1798. No offense to our Heavenly Father, but this is the kind of random-ass nonsensical number puzzle I would expect to see in a third-tier escape room.
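If you’re curious what the math actually looked like, here’s my best reconstruction of two of the calculations from the Millerite charts. I can’t swear I’ve captured their reasoning exactly, and the start dates—457 BC and 677 BC—come from their readings of still other verses:

```python
# Two of the calculations behind "1843," as I reconstruct them from the
# Millerite charts. The arithmetic is trivial; the start dates (drawn from
# their readings of other verses) are doing all the work.

# Daniel 8:14's "2300 days," read as years, counted from the decree to
# restore Jerusalem, which the Millerites dated to 457 BC:
print(2300 - 457)     # -> 1843

# Leviticus 26's "seven times," read as 7 prophetic years of 360 days each
# (days-as-years again), counted from 677 BC:
print(7 * 360 - 677)  # -> 1843
```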
And yet people found it convincing! The Millerite newspapers were always publishing stories like “Rev. Dimblestick came to our meeting intending to debunk our theories, and instead we out-debated him and he joined our cause.” That’s gotta be because “the world will end in 1843” ultimately sounded pretty plausible. If you encountered the Millerites in the late 1830s or early 1840s, you’d likely agree with them that the world had gone bananas. There’s a rebellion in Rhode Island, riots in Paris, war in Afghanistan, and earthquakes in the Holy Land. Someone just tried to kill a sitting president for the first time ever, and the most recent president dropped dead after only a month in office. The economy is in a panic, states are threatening to secede, thousands of people are dying of cholera, and for goodness’ sake, they’re setting nuns on fire. How could this go on much longer?
So of course Miller’s Biblical research led him to believe that Judgment Day was coming soon. If his number-crunching had spat out the year 2024 instead, he probably would have tweaked his assumptions, because the result would have sounded so ridiculous. No way the world is gonna survive that long!
(The Miller debacle is one of the wildest episodes in American history, and it’s largely been forgotten. For more of the story, see my post from last week: The Day the World Didn’t End.)
That’s why this “hazy” idea of decline is so dangerous—it lowers the standard of evidence for believing miserable things, and raises the standard of evidence for everything else. If you fertilize that pernicious suspicion with a bit of confirmation bias, it can eventually grow into the full-fledged denial that anything has meaningfully changed for the better, that it ever can, and that it ever will.
There’s a lot of people walking around with that conviction calcified in their minds, and if you prod them with any evidence to the contrary—say, the proportion of people living in extreme poverty has shrunk by almost two-thirds in the last 20 years—they’ll doubt the data or explain it away. “Well, maybe those people are a little richer now, but that just means there will be more people burning fossil fuels, and ultimately more casualties of climate change.”
And maybe they’re right! Nobody knows the future, especially not me. But when we’re willing to shrug at a billion people rising out of poverty, we’ve effectively decided that every bad thing is bad and every good thing is secretly also bad. In which case, good news! If you read the Bible very closely, it’s clear that God is gonna start raining down hellfire any minute now.
I love humans because, God bless us, if you find one of us believing something, you can find someone else who believes the exact opposite of that thing, and with equal fierceness.
So while most folks think we’re heading to hell in a handbasket, there’s a vocal minority who think we’re heading to heaven on an escalator. (Usually, these people are trying to sell you the escalator.) They’re always ready to pop up and tell you things like, “People were worried about coffee and they were wrong, therefore it’s wrong to worry about anything.”
I have less ire to fire at these folks because there are fewer of them, and because doomsaying dominates the discourse—people who tell you to worry seem wise, while people who tell you to relax seem naive. But “Everything is working, turn it all up!” is just as foolish as “Nothing’s working, roll it all back!” because both sides look at the hardest question in the universe and say, “Oh yeah, this one’s easy.”11
Our Problem #1, our Final Boss, our Prime Directive is to multiply the good things and minimize the bad things. Every job, every policy, every idea is grappling with some corollary of that quandary. Right now, our collective efforts produce some mix of making things better and mucking things up. But which ones? How much? Since when? And by what mechanism?
Answering those questions is a huge pain, and I know that because my two lengthiest research projects (1, 2) have tried to answer a tiny subset of them, and both times it led to a yearslong misadventure of untangling extremely annoying issues. How can you pick your sources without biasing your results? Can you trust that the data means what it says? If there’s a kink in the trendline, is that because something happened, or is it because they changed how they were measuring things?
That’s why I pop a cranial artery whenever people assume they already have the answers. It’s like being in a room where something stinks, and everyone is like “Man, what stinks?”, and we’re all looking around trying to find the source of the stench, when someone enters the room and announces “Have you guys noticed it STINKS in here?” as if we were happily stewing in the stink, waiting for some noble and well-nosed soul to wake us from our slumber.
So when you open your eyes for the first time and see all the depravity and inanity, all the malice and avarice, all the abominations and calamities, and you go “Have you guys notic—” yes! We’ve noticed! If you have a vague feeling that your Bad Cup didn’t used to be so full, and then you conclude we’re slip-sliding toward catastrophe, you haven’t discovered anything. You’ve just taken your biases for a walk.
This story might be half-apocryphal, but apparently on May 19th, 1780, the sky went dark over Connecticut. We don’t know what blotted out the sun—probably some forest fires burning nearby—but the deeply Christian Connecticuters figured it was a sign the End Times had come. At the State House in Hartford, several senators suggested that everybody should return home and prepare to meet their Maker. Amidst the commotion, Senator Abraham Davenport of Stamford stood up and said:
I am against adjournment. The day of judgment is either approaching, or it is not. If it is not, there is no cause for an adjournment; if it is, I choose to be found doing my duty. I wish therefore that candles may be brought.
Candles were brought, and the work continued.
We know this because Columbus was compiling a Book of Prophecies during his travels, which was basically a scrapbook of apocalyptic prophecies plus an unfinished letter to Ferdinand and Isabella of Spain. In the letter, he says:
I spent six years here at your royal court, disputing the case with so many people of great authority, learned in all the arts. And finally they concluded that it all was in vain, and they lost interest. In spite of that it [the voyage to the Indies] later came to pass as Jesus Christ our Saviour had predicted and as he had previously announced through the mouths of His holy prophets. Therefore, it is to be believed that the same will hold true for this other matter [the voyage to the Holy Sepulchre].
In a more recent survey with more response options, 14% of US Christians said Jesus “definitely” or “probably” will return in their lifetimes, 37% weren’t sure, 25% said that Jesus “definitely” or “probably” will NOT return in their lifetimes, and 22% said he’s not coming back at all, or that they don’t believe in him. I take this to mean that most Christians aren’t sure whether Jesus will return before they die, but if you make them choose (as the other survey did), they lean “yes”.
This is true on average, but it’s not true for every single person or every single memory. Sometimes bad experiences can get even worse over time, and sometimes good things can seem bad in retrospect. But the opposite happens far more often, which is why life is bearable for most people.
If you start with a full Good Cup and an empty-ish Bad Cup, the differential evaporation will lead you to believe that the ratio of good to bad was even better in the past. We think this is exactly what happens when people consider their loved ones instead of the entire world: you mainly perceive good things from your friends and family members, so your Good Cup is usually fuller today than your Bad Cup. Over time, differential evaporation will make that ratio even more positive, leading people to conclude that their loved ones have improved. And indeed, that’s exactly what we found.
By the way, this finding was replicated earlier this year.
Here’s the Talmud, from ~200 AD:
If the early generations are characterized as sons of angels, we are the sons of men. And if the early generations are characterized as the sons of men, we are akin to donkeys. And I do not mean that we are akin to either the donkey of Rabbi Ḥanina ben Dosa or the donkey of Rabbi Pinḥas ben Yair, who were both extraordinarily intelligent donkeys; rather, we are akin to other typical donkeys.
Christians agree:
[Martin] Luther once remarked that a whole book could be filled with the signs that happened in his day and that pointed to the approaching end of the world. He thought that the “worst sign” was that human beings had never been so earthly minded “as right now.” Hardly anyone cared about eternal salvation.
In 1627, the English clergyman George Hakewill was so tired of people complaining about the world getting worse over time—an opinion “so generally received, not only among the Vulgar, but of the Learned”—that he wrote a whole book attempting to refute it. He failed, of course. Here’s Nietzsche in 1874:
Never was the world more worldly, never poorer in goodness and love. Men of learning are no longer beacons or sanctuaries in the midst of this turmoil of worldliness; they themselves are daily becoming more restless, thoughtless, loveless. Everything bows before the coming barbarism, art and science included.
I got this example from a recent piece called “The Myth of the Loneliness Epidemic”.
Werner Sombart, the guy who coined the term, was thrilled toward the end of his life because he thought capitalism was finally being replaced by a better system called “Nazism.”
If you can’t get enough collapse-related content, there’s an apparently successful Substack called Last Week in Collapse that offers you a “weekly summary of the ongoing collapse of civilization”.
If I can play the quote game one last time, these posts sound eerily similar to a New York Times Magazine piece from 1978:
Europeans have a sense of being at the beginning of a downhill slide. [...] There is a pervading sense of crisis, although it has no clear face, as it did in the days of postwar reconstruction. People are disillusioned and preoccupied. The notion of progress, once so stirring, rings hollow now. Nobody can say exactly what he fears, but neither is anyone sanguine about the future. [...] there is a sense that governments no longer have the wisdom or power to cope
This was exactly what it was like to study people’s perception of moral decline. I was basically asking people, “Hey, I’ve spent the past three years working on this question, but could you answer it in .4 seconds?” And their answer was “Sure can.”
2024-12-11 01:16:18
It was October 22nd, 1844, and the world was about to end.
Somewhere between 50,000 and 200,000 people were looking up at the sky that day, waiting for Jesus to burst through the clouds. Some of them had quit their jobs, left their crops to rot in the fields, sold all their possessions, paid the debts they owed, forgave the debts owed t…
2024-12-03 22:54:12
Even a few years ago, if you had tried to talk to me about the “philosophy of science,” I would have skedaddled. Philosophy? As in, people just...saying stuff? No thanks dude, I’m good. I’m a Science Guy, I talk Data.
But then I realized something: I had no idea what I was doing. Like, nobody had ever told me what science was or how to do it. My advisor never sat me down and gave me the scientific Birds and the Bees talk (“When a hypothesis and an experiment love each other very much...”). I just showed up to my PhD and started running experiments. Wasn’t I supposed to be using the Scientific Method, or something? Come to think of it, what is the Scientific Method?
As far as I could tell, everybody else was as clueless as me. There are no classes called “Coming Up with Ideas 101” or “How to Pick a Project When There Are Literally Infinite Projects You Could Pick.” We all learned through osmosis—you read some papers, you watch some talks, and then you just kinda wing it.
We’re devoting whole lifetimes to this project—not to mention billions of taxpayer dollars—so shouldn’t we, you know, have some idea of what we’re doing? Fortunately, in the 20th century, three philosophers were like, “Damn, this science thing is getting pretty crazy, we should figure out what it is and what it’s all about.”1
Karl Popper said: science happens by proposing hypotheses and then falsifying them.
Thomas Kuhn said: science happens by people trying to solve puzzles in the prevailing paradigm and then shifting the paradigm entirely when things don’t add up.
And Paul Feyerabend said: *BIG WET FART NOISE*
Guess whose book I’m gonna tell you about?
Here’s how Feyerabend put it in Against Method (1975, emphasis his):
The thesis is: the events, procedures and results that constitute the sciences have no common structure; there are no elements that occur in every scientific investigation but are missing elsewhere. [...] Successful research does not obey general standards; it relies now on one trick, now on another [...] This liberal practice, I repeat, is not just a fact of the history of science. It is both reasonable and absolutely necessary for the growth of knowledge.
Which is to say: there is no such thing as the scientific method. Feyerabend’s famous dictum is “anything goes,” which he explained is “the terrified exclamation of the rationalist who takes a closer look at history.”2 Whenever you try to lay down some rule like “science works like this,” Feyerabend pops up and says “Aha! Here’s a time when someone did exactly the opposite of that, and it worked!”
Here’s an example. Say some guy named Galileo comes up to you and says something crazy like, “The Earth is constantly moving.” Well, Rule #1 of science is supposed to be “theories should fit the facts,” so you, a dutiful scientist, consult them:
When you’re on something that’s moving (like a horse or a cart), you notice it for all sorts of reasons. You feel the wind, you see the scenery changing, you might get a little sick, and so on. If the Earth was moving, we’d know it.
When you jump straight up, you land on exactly the point where you started. If the Earth was moving, that wouldn’t happen—it would change position while you’re in the air, and your landing spot would be different from your launching spot.
The Earth does actually move sometimes; it’s called an earthquake. And when that happens, buildings topple over, you get mudslides and avalanches, etc. So the Earth can’t be moving all the time, or else those things would be happening all the time, too.
Those facts seem pretty irrefutable, so you conclude the Earth does not move. Unfortunately, it does move, and you’re not going to figure that out by being a scientific goody-two-shoes. Instead, you’ll have to entertain the possibility that this Galileo guy might be right. In Feyerabend’s words:
Turning the argument around, we first assert the motion of the earth and then inquire what changes will remove the contradiction. Such an inquiry may take considerable time, and there is a good sense in which it is not finished even today. The contradiction may stay with us for decades or even centuries. Still, it must be upheld until we have finished our examination or else the examination, the attempt to discover the antediluvian components of our knowledge, cannot even start. This, we have seen, is one of the reasons one can give for retaining, and, perhaps, even for inventing, theories which are inconsistent with the facts.
To Feyerabend, our minds are like deep lakes, our assumptions are like the fish you can’t see from the surface, and a wacko theory is like a stick of dynamite that you drop into the water so it blows up and all the dead assumptions float to the surface. If you take Galileo’s dumb-sounding theory seriously, you start wondering whether your facts are as irrefutable as they seem: Would you really be able to tell if the Earth and everything on it was moving? For instance, if you were below deck on a ship, would you be able to tell whether the ship was docked or sailing smoothly? Have you ever even checked?
Feyerabend goes so far as to claim that, in the whole Galileo debacle, the Catholic Church was the side “trusting the science.” The Inquisition didn’t just condemn Galileo for contradicting the Bible. They also said he was wrong on the facts. That part of their judgment “was made without reference to the faith, or to Church doctrine, but was based exclusively on the scientific situation of the time. It was shared by many outstanding scientists — and it was correct when based on the facts, the theories and the standards of the time.” As in: the Inquisition was right.
Feyerabend described himself as an “epistemological anarchist,” and other people have called him that too, but they meant it as an insult. This whole “anything goes” thing causes a lot of pearl-clutching and handwringing about misinformation and pseudoscience. There’s supposed to be some kind of Special Secret Science Sauce that makes “real” science good, and if it turns out that you can squirt any kind of condiment on there and it tastes fine, then why are we spending so much on the brand-name stuff?
I think the pearl-clutchers are onto something here, and I’ll come back to that in a second. But first, even if the strong version of Feyerabend’s thesis makes people faint, those same people probably agree with him when it comes to revolutionary science. A breakthrough, almost by definition, has to be at least a little ridiculous—otherwise, someone would have made it already. For example, when you look back at the three most important breakthroughs in biology in the last forty years, every one of them required at least one step, and sometimes many steps, that sounded stupid to a lot of people:
We have mRNA vaccines because one woman was so sure she could make the technology work that she kept going for decades, even when all of her grants were denied, her university tried to kick her out, and her advisor tried to deport her.
We have CRISPR in part because some scientists at a dairy company were trying to make yogurt.
We have polymerase chain reaction because one guy wanted to “find out what kind of weird critters might be living in boiling water in Yellowstone” and another guy refused to listen to his friends when they told him he was wasting his time: “...not one of my friends or colleagues would get excited over the potential for such a process. [...] most everyone who could take a moment to talk about it with me, felt compelled to come up with some reason why it wouldn’t work.”3
(That’s why it’s always a con whenever people dig up silly-sounding studies to prove that the government is wasting money on science. They’ll be like “Can you believe they’re PAYING PEOPLE to SCOOP OUT PART OF A CAT’S BRAIN and then SEE IF IT CAN STILL WALK ON A TREADMILL???” And then it turns out the research is about how to help people walk again after they have a spinal cord injury. A lot of research is bad, but the goofiness of its one-sentence summary is not a good indication of its quality.4)
Not only do we have useful research that breaks the rules; we also have useless research that follows the rules. You can develop theories, run experiments, gather data, analyze your results, and reject your null hypotheses, all by the book, without a lick of fraud or fakery, and still not produce any useful knowledge. In psychology, we do this all the time.
Everybody seems to agree with these facts in the abstract, but we clearly don’t believe them in our hearts, because we’ve built a scientific system that denies them entirely. We hire people, accept papers, and dole out money based on the assumption that science progresses by a series of sensible steps that can all be approved by committee. That’s why everyone tries to pretend this is happening even when it isn’t.
For example, the National Institutes of Health don’t like funding anything risky, so a good way to get money from them is to show them some “promising” and “preliminary” results from a project that, secretly, you’ve already completed. When they give you a grant, you can publish the rest of the results and go “wow look it all turned out so well!” when actually you’ve been using the money to try other stuff, hopefully generating “promising” and “preliminary” results for the next grant application. Which is to say, a big part of our scientific progress depends on defrauding the government.
In fact, whenever we find ourselves stuck on some scientific problem for a long time, it’s often from an excess of being reasonable. For instance, we don’t have any good treatments for Alzheimer’s in large part because a “cabal” of researchers was so sure they knew which direction Alzheimer’s science should take that they scuttled anybody trying to follow other leads. Their favored hypothesis—that a buildup of amyloid proteins gums up the brain—enjoyed broad consensus at the time, and anybody who harbored doubts looked like a science-denier. Decades and billions of dollars later, the amyloid hypothesis is pretty much kaput, and we’re not much closer to a cure, or even an effective treatment. So when Grandma starts getting forgetful and there’s nothing you can do about it, you should blame the people who enforced the supposed rules of science, not the people who tried to break them.
Likewise, even if we believe that science requires occasional irrationality, we hide this fact from our children. Walk into an elementary classroom and you’ll probably see a poster like this:
Most of the things my teachers hung on the walls turned out to be whoppers: the food pyramid is not a reasonable dieting guide, Greenland is actually about one-eighth the size of South America (despite how it looks on the map), and Maya Angelou never said that thing about people remembering how you made them feel. But this hokum about the “scientific method” is the tallest tale of them all.
Yes, scientists do all of these things sometimes, but laying out these steps as “the” method implies that science is a solved problem, that everybody knows what’s going on, that we all just show up and paint by numbers, and that you, too, can follow the recipe and discoveries will pop out the other end. (“Hey pal, I’m stuck—what comes after research again?”) This “method” is about as useful as my friend Steve’s one-step recipe for poundcake, which goes: “Step 1: add poundcake.”5
I read Against Method because people kept recommending it to me, and now I see why. Ol’ Paul and I have a similar vibe: we both love italics and hate hierarchy. But I have two bones to pick with him.
First, Feyerabend has a bit of a Nazi problem—namely, that he was a Nazi. He mentions this in a bizarre footnote toward the end of the book:
Like many people of my generation I was involved in the Second World War. This event had little influence on my thinking. For me the war was a nuisance, not a moral problem.
When he was drafted, his main reaction was annoyance that he couldn’t continue studying astronomy, acting, and singing:
How inconvenient, I thought. Why the hell should I participate in the war games of a bunch of idiots? How do I get out of it? Various attempts misfired and I became a soldier.
He did a bit more than that. According to his entry in the Stanford Encyclopedia of Philosophy, Feyerabend earned the Iron Cross for gallantry in battle, and was promoted to lieutenant by the end of the war. He later quipped that he “relished the role of army officer no more than he later did that of university professor,” which is, uh, not the most reassuring thing one can say about either of those jobs. (“Don’t worry, students! I was indeed an officer in the Wehrmacht, but I hated it just as much as I hate teaching you.”)6
Look, I don’t think you should judge people’s ideas by judging their character.7 I expect everyone I read to have a thick file of foibles, and most of the folks who have something to teach me probably don’t share my values. But there are “foibles” and then there’s “carrying a gun for Hitler and then being extremely nonchalant about it.” That’s especially weird for a philosopher, since thinking critically about stuff is his job.
So me and Herr Feyerabend may not see eye-to-eye vis-a-vis WWII, but my biggest philosophical gripe with him is that he doesn’t seem to care about the Hockey Stick:
The Hockey Stick is the greatest mystery in human history. Something big happened in the past ~400 years, something that never happened before in the ~300,000 years of our species’ existence. People have all sorts of theories about what caused the Hockey Stick, but everyone agrees that science played a part. We started investigating the mysteries of the universe in a new way, and our discoveries piled up and spilled over into technology much faster than they ever had before.
So, fine, “anything goes,” but some things go better than others. That’s why I don’t buy Feyerabend’s claim that the scientific method doesn’t exist. I think it doesn’t exist yet. That is, we’ve somehow succeeded in the practice of science without understanding it in principle. Although we haven’t solved the Mystery of the Hockey Stick, there are too many bodies to deny the mystery exists (the bodies in this mystery are alive—that’s the whole point).
Even though Feyerabend denies that mystery, perhaps he can help us solve it. That first upward tick of the Hockey Stick, that almost imperceptible liftoff sometime after the year 1600—that was a burst of Feyerabendian irrationality. The first scientists did something pretty stupid for the time: they ditched the books that had taught people for thousands of years (some of them supposedly written by God) and decided to do things themselves. They claimed you could discover objective, useful truths if you built air-pumps and peered through prisms, and they were...mostly wrong. It took ~200 years for this promise to really start paying off, which is why we only reached the crook of the Hockey Stick sometime after 1800.
So for two centuries, the progenitors of modern science mainly made fools of themselves. Early on, a popular play called The Virtuoso ridiculed the Royal Society (the organization that housed most of the important early scientists) by depicting some of their actual experiments on stage—a buffoonish natural philosopher tries to “swim on dry land by imitating a frog” and transfuses sheep’s blood into a man, causing a tail to grow out of the man’s butt. A few decades later, the politician Sir William Temple claimed that no material benefits had come from the “airy Speculations of those, who have passed for the great Advancers of Knowledge and Learning.” And fifty years after that, the writer Samuel Johnson looked upon the works of science and judged them to be “meh”:
When the Philosophers of the last age were first congregated into the Royal Society, great expectations were raised of the sudden progress of useful arts [...] The society met and parted without any visible diminution of the miseries of life.
Every new scientific investigation must trace this same path. You must first estrange yourself from the old ways of thinking, and then you must fall in love with new ways of thinking, and you must do both of these things before they are reasonable. Whatever the real scientific method is, these must be the first two steps. Incumbent theories are always going to be stronger than their younger challengers—at first. Only the truly foolish will be able to discover the evidence that ultimately overturns the old and establishes the new.
But this isn’t a general purpose, broad spectrum foolishness. It’s a laser-targeted, precise kind of foolishness. Falling in love with a fledgling idea is fine, but eventually you have to produce better experiments and more convincing explanations than the establishment can muster, or else your theory is going to go where 99% of them go, which is nowhere. And this is where the rules do matter. We remember Galileo because his arguments, as weird as they were at the time, ultimately held up.8 We would not remember him if he tried to claim that the Earth turns on its axis because there’s a sort of cosmic Kareem Abdul-Jabbar spinning it on his fingertip like a basketball.9
People call it “Nobel Disease” when scientist-laureates do nutty things like talk to imaginary fluorescent raccoons, as if this nuttiness is a tax on the scientists’ talent. But that’s backwards: that nuttiness is part of their talent. The craziness it takes to talk to the raccoons is the same craziness it takes to try creating a polymerase chain reaction when everybody tells you it won’t work. It’s just an extremely specific and rare kind of craziness. You have to grasp reality firmly enough to understand it, but loosely enough to let it speak. Which is to say, our posters of the “Scientific Method” should look like this:
Back in 2011, a psychologist named Daryl Bem published a bunch of studies claiming to show that ESP is real. This helped jumpstart the replication crisis in psychology, and some folks wonder whether that was Bem’s intention all along—maybe his wack-a-doo experiments were a false flag operation meant to expose the weakness of our methods.
I don’t think that’s true of Bem, but it might well be true of Feyerabend. Against Method is a medium-is-the-message kind of book: it’s meant to induce the same kind of madness that Feyerabend claims is necessary for scientific progress. That’s why he praises voodoo and astrology, that’s why he spends a whole chapter doing a non-sequitur close-reading of Homer10, and that’s even why he mentions his blasé attitude toward his Nazi days—he wants to upset you. He’s trying to pull a Galileo, to make an argument that’s obviously at odds with the facts, trying to trick you into looking closer at those facts so that you’ll see they’re shakier than you thought. He wants you to clutch your pearls because he knows they’ll crumble to dust in your hands.
This is the guy who once exclaimed in an interview, “I have no position! [...] I have opinions that I defend rather vigorously, and then I find out how silly they are, and I give them up!”11 He’s a stick of dynamite—you toss him into your mind so you can see what floats to the surface. And once the waves subside and the quiet returns, perhaps then you will hear the voice of the fluorescent raccoon, and perhaps you will listen.
Obviously there were more, but people mainly talk about these three. Sometimes they mention a fourth guy named Imre Lakatos, but he comes up less often, probably because he died tragically early. (In fact, he was supposed to write a book rebutting Feyerabend, which was going to be called For Method.) If you’re a philosopher, it behooves you to live a long time so people have plenty of opportunities to yell at you, thereby increasing your power. That’s why, personally, I don’t plan to die.
Here, “rationalist” refers to Feyerabend’s nemeses: Popper and his ilk, who thought that science required playing by the rules. It does not refer to the online movement of people trying to think correctly.
If you wanna get really sad, read the entirety of Kary Mullis’ Nobel lecture, which traces the development of PCR alongside the disintegration of his relationship. Here’s how it ends:
In Berkeley it drizzles in the winter. Avocados ripen at odd times and the tree in Fred’s front yard was wet and sagging from a load of fruit. I was sagging as I walked out to my little silver Honda Civic, which never failed to start. Neither Fred, empty Becks bottles, nor the sweet smell of the dawn of the age of PCR could replace Jenny. I was lonesome.
I got this example from this recent piece.
A couple weeks ago, I was on a conference panel with someone who predicted that AI will replace human scientists within 5-10 years. I disagreed, and this is exactly why. LLMs work because we can train them on a couple quintillion well-formed sentences. We’ve got way less well-formed science, and we have a hard time telling the well-formed from the malformed.
Feyerabend isn’t alone here—20th-century philosophers of science have an alarmingly high body count. Lakatos (see footnote above) allegedly forced a 19-year-old Jewish girl to commit suicide during WWII because he was too afraid to help her hide from the Nazis. I don’t think Popper or Kuhn ever killed anybody, although Kuhn once threw an ashtray at the filmmaker Errol Morris, who then wrote a book about it.
I do think serving in the Nazi army should disqualify you from becoming, say, the pope, but I understand that some people disagree about this.
Except for his cockamamie theory of the tides. Nobody wins ‘em all!
I sometimes get emails from folks who want to sell me on a theory of everything, assuming that I’ll be a sympathetic audience since I’m always going on about crazy ideas in science. And sure, I’ll bend my ear for a dotty hypothesis. But if you want me not just to listen, but to believe, then you’ll have to bring data. To be fair, though, I’ll give anybody the same benefit of the doubt that the original scientists got—that is, I’ll withhold judgment for two centuries.
This will be interesting to like, four people, but I’ve gotta let those four people know: one of the final chapters of Against Method makes a stunningly similar argument to Julian Jaynes’ The Origin of Consciousness in the Breakdown of the Bicameral Mind, which is that ancient humans weren’t conscious in the way modern humans are, but instead experienced consciousness as the voice of the gods. Here’s what Feyerabend says when discussing Homer:
Actions are initiated not by an “autonomous I” but by further actions, events, occurrences, including divine interference. And this is precisely how mental events are experienced. [...] Archaic man lacks “physical” unity, his “body” consists of a multitude of parts, limbs, surfaces, connections; and he lacks “mental” unity, his “mind” is composed of a variety of events, some of them not even “mental” in our sense, which either inhabit the body-puppet as additional constituents or are brought into it from outside.
This is super weird, because Against Method came out one year before Origin of Consciousness. Did Feyerabend and Jaynes know each other, or was this the zeitgeist speaking through both of them? I have no idea, and I haven’t seen anybody comment on the connection except for this one guy on Goodreads.
This interview is also notable for the fact that Feyerabend predicts the journalist’s divorce 17 years in advance.
2024-11-20 00:18:27
A lot of people would like to make the world better, but they don’t know how. This is a great tragedy.
It’s tragic not only for the people who need help, but also for the people who can help, because good intentions start to rot if you don’t act them out. Well-meaning people who remain idle end up sick in the heart and the head, and they often develop exquisite ideologies to excuse their inaction—they start to believe that witnessing problems is as good as solving them, or that it’s impossible to make things better and therefore foolish to try, or that every sorrow in the world is someone else’s fault and therefore someone else’s responsibility.
We get stuck here because we assume that there are only two paths to improving the world. Option #1 is to go high-status: get rich so you can blast problems with your billions of bucks, or get into office so you can ban all the bad things and mandate all the good things. Only a fortunate few are powerful enough to do anything, of course, so most of the people attempting to improve the world through the high-status route will end up either begging our overlords to do the right thing, or trying to drum up the votes necessary to replace them.
Option #2 is to go high-sacrifice: sell everything you have and spend your life earning $7/hr to scrub the toilets in an orphanage. Only a virtuous few will have the saintliness necessary to live such a life, of course, so most of the people attempting to improve the world through the high-sacrifice path will end up writing checks to the martyrs on the front lines.
These paths aren’t wrong. They’re just too narrow. Money, power, and selflessness are all useful tools in the right hands, but the world is messed up in all sorts of ways that can’t be legislated against, bought off, or undone with a hunger strike. When we focus on just two avenues for making the world better, we exclude almost everybody, leaving most of us with a kind of constipated altruism—we’ve got the urge to do good, but nothing comes out.
I don’t know all the ways to get our good intentions unblocked. That’s why, whenever I spot someone changing the world via a righteous road less taken, I write it down on a little list. I glance at that list from time to time as a way of expanding my imagination, and now I’m sharing it in the hopes that it’ll do the same for you.
Between 2004 and 2016, a Kentucky lawyer extracted $550 million from the US government via bogus disability claims. He used shady doctors to fake the forms and a crooked judge to approve them. That lawyer’s name, in a perfect example of nominative determinism, was Eric C. Conn.
The fraud might have gone on forever if Jennifer Griffith, a paralegal working in the Social Security Administration, hadn’t noticed the brazen fakery. She did the brave thing and told someone about it.
But here’s the part of the story that really gets me: the person she told also did the brave thing—she listened. Griffith’s friend and coworker Sarah Carver was immediately like, “This is really bad, we need to do something about it.” Carver and Griffith attempted to expose Conn’s con for years, filing complaint after complaint, which were all ignored until a Wall Street Journal reporter happened upon the story. The duo eventually testified before Congress and in court. Conn went to jail, as did the judge and at least one of the shady docs.1
Whenever a scandal breaks—a CEO has been embezzling money, a Hollywood producer has been sexually assaulting people, a scientist has been faking data—people are always like “wow, crazy that no one spoke up about it.” But there’s always someone speaking up about it. They whisper it to a friend, they try to bring it up to their boss, they write an anonymous post on Reddit about how they’re working at a scam company and they don’t know what to do. Wrongdoing often goes unchecked not because we’re missing the bravest person, but because we’re missing the second-bravest person, the one who hears the whistleblower and starts blowing their own whistle too.
Nobody’s heard of Samuel Hartlib, Henry Oldenburg, or The Right Reverend John Wilkins, but modern science might not exist without them.
The ten-second history of science goes like this: for about 1200 years, people scribbled in the margins of Aristotle. Then one day Francis Bacon said “hey guys let’s do science” and people were like “sounds good.” But that only happened because a handful of folks made science into a scene.
Hartlib, Oldenburg, Wilkins and their friends established societies, wrote pamphlets, edited journals, and trained apprentices. Hartlib sent approximately one bajillion letters to budding scientists and inventors, hounding them to put their knowledge to practical use. Oldenburg convinced his friends to stop hiding their results—a common practice back then—and publish them instead. Wilkins was a friend to everybody and ensured that science didn’t become just an Anglican thing or a Catholic thing. If not for him, science might have ended up on one side of the English Civil War, and if that side lost, science could have stopped in its tracks for centuries.
Wilkins’ own Wikipedia page notes that he was “not one of the most important scientific innovators of the period”—ouch—but that he “was a lover of mankind, and had a delight in doing good,” aww.
I meet lots of idealistic folks who think that all they’re missing is money, or credentials, or access to the levers of power. More often, what they’re really missing is friends. Only a crazy person can toil alone for very long. But with a couple of buddies, you can toil pretty much forever, because it doesn’t feel like toil. That’s how you end up with what economic historians call “efflorescences” and Brian Eno called “scenius” (“scene” + “genius”): hotspots of cool stuff. And for that, we need not just a Francis Bacon, but also a whole gang of Right Reverend Wilkinses.
Mahzarin Banaji, one of the most famous psychologists alive today, originally planned to be a secretary. Then this happened:
I was traveling home from New Delhi to Hyderabad. At a major railway juncture, I stepped off the train to visit a bookstore on the platform where I bought a set of books that changed the course of my life. Five volumes of the Handbook of Social Psychology (1968) edited by Lindzey and Aronson, were being offered for the equivalent of a dollar a piece.
[...]
I bought the Handbooks out of mild interest in their content, but mostly because it seemed like a lot of book for the money. By the time I reached home twenty hours later, I had polished off a volume and knew with blunt clarity that this form of science was what I wanted to do.
I sometimes wonder: who put those books there? The Handbook is not exactly light reading, and that bookstore could not have been moving a lot of units. And yet a chain of people all decided it was important to put that knowledge on a shelf at a train station somewhere between New Delhi and Hyderabad—and they did this 40 years ago, when that was much more complicated.
I think of this as switchboarding, trying to get the right information to the right person. Someone’s got an empty seat/someone needs a ride. You’re getting into the history of plumbing/I know exactly the book you should read. Your cousin is moving to San Diego and doesn’t know anyone/my former rugby teammate lives there, maybe they can be friends. No two people have the same constellation of connections, nor the same trove of information, and so each of us is a switchboard unto ourselves, responsible for routing every kilobyte to its appropriate destination. Whoever put the Handbook in Banaji’s hands, they were damn good switchboards.
The internet makes it seem like switchboarding is obsolete, but it’s more important than ever. When you’ve got instant access to infinite information, you need someone to show you where to start. And the most important info is still locked up inside people’s heads. If we can unlock it and send it where it needs to go, we can turn it into friendships, marriages, businesses, and unlikely psychologists.
Here is an exemplary scientific paper:
During the author’s childhood, various renowned authorities (his mother, several aunts, and, later, his mother-in-law [personal communication]) informed him that cracking his knuckles would lead to arthritis of the fingers. To test the accuracy of this hypothesis, the following study was undertaken.
For 50 years, the author cracked the knuckles of his left hand at least twice a day, leaving those on the right as a control. Thus, the knuckles on the left were cracked at least 36,500 times, while those on the right cracked rarely and spontaneously. At the end of the 50 years, the hands were compared for the presence of arthritis.
There was no arthritis in either hand, and no apparent differences between the two hands.
People often think they can’t do research because they don’t have a giant magnet or a laser or a closet full of reagents. But they have something the professional scientists don’t have: freedom. The pros can’t do anything that’s too weird, takes too long, or would raise the suspicion of an Institutional Review Board. That kind of stuff has to happen in a basement or a backyard, which is why the “paper” above is, in fact, a letter to the editor written by a medical doctor on a lark.
Independent investigators can thus explore where others fear to tread. For instance:
Lots of people would like to lose weight, and lots of people have unwarranted certainty about how to do that. It takes an internet weirdo to run self-experiments on popular-but-unverified hypotheses like “maybe we should eat more coconut oil” or “maybe the problem is we’re all carrying an electrical charge all the time.”
Joy Milne, a Scottish nurse, realized she could smell when people had Parkinson’s Disease. She convinced a medical researcher to take her skills seriously, and now there’s a growing effort to detect Parkinson’s early via the stinky stuff that oozes out of humans.
This guy on Reddit claims to have cured his wife’s 20-year migraine by cleaning their HVAC system. If even one other person heals their headaches by dusting their ducts, this is way more impact than the average scientific paper.
(See also: An Invitation to a Secret Society)
Culture is everything. If our culture says it’s cool to chase a wheel of cheese down a hill, we will. If our culture says it’s important to dress up in colorful clothes and douse people with buckets of water, we’ll do that. If our culture says that we should use an obsidian blade to cut people’s hearts out of their chests and offer them to the god Huitzilopochtli, we’ll do that too. So it’s pretty important to get our culture right.
We act as if culture is a thing that happens to us, rather than a thing we all make together. That used to be true, of course. When only a few people could read and write, they got to make all the culture themselves. All that started to change as more people got literate, but it really changed once most people got an internet connection. For the first time in history, we all have some say in whether we’re more of a cheese wheel culture, an obsidian blade culture, or, you know, something else.
And yet, to borrow what Douglas Adams said about the creation of the universe, “this has made a lot of people very angry and has been widely regarded as a bad move.” Most people think that social media has made things worse:
If you don’t like how culture is going, that’s a huge opportunity, because culture is us. You can move the needle just by showing up to the places you like to be, posting and promoting the kind of stuff you’d like to see, ignoring the things you don’t like, and vacating the places you think are bad. I’ve personally been inspired and influenced by the people who do this well, like Visa Veerasamy, Alice Maz, Sasha Chapin, Aella, and Defender of Basic.
I think of them as the partygoers who are committed to having a good time, no matter how good the party itself is. Yes, sometimes the music is too loud, the food is bad, the beer is warm, and the whole thing is run by billionaires who are trying to turn your attention into money. You could stand in the corner complaining about how bad and stupid and unfair it all is. Or you could join the handful of people hanging out on the couch and cracking each other up. If enough people do that, eventually the whole party is that couch.
Cave-ins used to kill a few hundred miners every year. Now that number is usually zero, in large part thanks to an algorithm developed by Chris Mark, who started out as a union organizer, then became a coal miner, then trained as an engineer, then went to work for the Bureau of Mines and figured out how to prevent coal mines from collapsing.
It turns out there are lots of Chris Marks out there, relentlessly solving problems from nondescript government offices. You’ve got Arthur Allen, who developed a search system that locates people faster when they’re lost at sea. There’s Pius Bannis, who organized the evacuation and adoption of 1,100 orphaned children after the 2010 earthquake in Haiti. There’s Tony Mento, Camille Otto, and Hari Kalla, who helped get the I-95 bridge back up in 12 days after it collapsed last year. And there’s Darnita Trower, Wanda Brown, and Gerald Johnson, who updated the government’s IT so that nobody has to physically mail anything to the IRS anymore.
Among the high-achieving crowd, it’s only cool to work in government if you’re leading it. So it’s fine to be a Chief of Staff or the Secretary of Labor, but it’s kinda cringe to be an Associate Administrator for Infrastructure or a Principal Strata Control Specialist. And yet, if you drive on I-95, if you work in a mine or on a boat, if an earthquake hits, or if you just want to file your damn taxes, you depend on folks with unimpressive titles doing impressive things.
I know this might sound rich to Americans because our incoming president has vowed to shutter many of these government buildings. But if you look at someone like Chris Mark, he succeeded not just because he was good at math and mines, but also because he found a problem that everybody agreed was worth solving, so he could continue his work across the inevitable change of administrations. It’s hard to thread that needle, which is why these are underrated ways of changing the world, not easy ones.
I think about this tweet a lot:
Or as XKCD puts it:
Notice they both use the word “maintain”—not “invent,” not “lead,” but maintain. The power of a RUNK is that it works consistently. It was there counting numbers before it was cool, and it keeps counting numbers no matter how cool it gets. That’s why you don’t have to be a tech person to build a RUNK. If you do something even moderately useful and you do it no matter what, then people will realize you’re going to be around a while, and they can start building things on top of you.
Back in high school, I didn’t have a Ronald, but I did have a Peggy. She ran the Youth Grantmaking Council of Huron County, which sounds official, but it was really just Peggy convening a dozen high schoolers in a church basement, and telling us we should raise money by doing car washes and hosting junior high dances, and then we should give the money away. YGC has been going for over two decades, and now there’s a whole web of people in rural Ohio who depend on getting a check from those teens. And the whole thing runs on Peggy, who has many fine qualities that make her a good YGC mentor, but the most important one is that she shows up every year.
Sometimes when I think about everything that’s wrong with the world, I get indignant, like: why wasn’t all this fixed by the time I got here? I mean, really? In 2024? We’re still doing this?
But then I think: why do I, uniquely, deserve to be born in good times? Am I the Most Special Boy in the Universe? My ancestors died in famines and plagues, they suffered under evil kings, they got cut down in stupid wars that were fought because some people thought God is literally a piece of bread and other people thought God is only figuratively a piece of bread.
They all deserved a better world than the one they got. So do I. So do all of us! If only we could convince God to fast-forward us to the future where everything is beautiful and nothing hurts. Alas, He cannot hear us, for He is a piece of bread.
So it’s up to us. When I wrote There’s a Place for Everyone a few months ago, I got a lot of messages that were like, “Well, there’s certainly no place for me.” That breaks my heart. Look at the world! See it aching! It’s got so many problems—I promise you, there’s one with your name on it.
If none of these seven suit you, that’s fine; seven is not a lot! Fortunately, they come from an infinite list. (#8 is “make the list longer.”) I doubt any of these folks grew up thinking they would one day bust the largest Social Security scam in history, or that they’d be the ones to make science palatable to Puritans, or that they’d be best-known for cracking their knuckles or updating a government website. And yet they left the world a little bit better, and they did it without the power of a king or the sacrifice of a saint. For that I salute them, and I say, thank Bread that they exist.
In exchange for their courage, Griffith and Carver were sidelined, stalked, and eventually forced out of their jobs. It turns out nobody higher up in the Social Security Administration was keen to admit they lost half a billion dollars. Another reason why it’s important to Go to a Nondescript Government Office and Do a Good Job (see #6). Now Griffith and Carver work as Certified Fraud Examiners.
2024-11-06 02:41:58
It’s Election Day in the US, which means everybody is waiting in lines and refreshing their newsfeeds all day, looking for things to do in between bouts of freaking out. So I’m gonna keep things light with the quarterly links ‘n’ updates post, a roundup of stuff I’ve been reading for the past few months.
We’ve known since Roman times that lead pipes can poison people. So uh...why have we used them for ~2000 years? Have we just been poisoning ourselves all this time? According to this paper from 1981, lead pipes are usually fine because a protective mineral crust forms on the inside. But the crust won’t form if you’ve got the wrong minerals in your water, and that’s when you run into trouble.
It seems like people realize this every few decades, freak out, and the lead pipe industry responds by lobbying local governments to write requirements for lead pipes into their building codes. In Chicago, for instance, it was illegal not to use lead pipes until 1986.
Citi Bike pays users to move bikes from stations that have too many bikes to stations that have too few. Apparently you can make a buck by moving all the bikes from one station to another, and then moving them all back.
I really dig this all-book-review Substack. Check out their review of a book called Fear of a Setting Sun, which is about how most of the Founding Fathers ended up thinking that the United States had gone disastrously wrong and was about to collapse.
Lots of ideas that seem obvious now had to be invented by somebody, like randomized-controlled trials, crop rotation, and zero (the number). The anthropologist Margaret Mead argued that we should add war to the list: war is an invention, one that some cultures never came up with. For example, when someone pisses you off, instead of going to war you can swear before the gods that you’ll never speak to each other ever again.
Everybody my age remembers that wild autumn of 1998 when the movies Antz and A Bug’s Life came out within a few weeks of each other. Both were “computer-animated films about insects, starring a non-conformist ant who falls in love with an ant princess, leaves the mound, and eventually returns and is hailed as a hero.”
I didn’t realize this kind of thing happens all the time—there are several “twin films” every year. For instance, 2023’s The Pope’s Exorcist and 2024’s The Exorcism both star Russell Crowe as an exorcist. Three films about stage magicians came out in 2006: The Prestige, The Illusionist, and Scoop. If you loved 2018’s A Quiet Place, you’ll love 2019’s The Silence, because “both are about parents with a deaf teenage daughter trying to survive a planet-wide attack from creatures who hunt their human prey by sound.”
Sometimes twin films happen on purpose—a studio tries to rush out some derivative dreck to capitalize on a more famous movie’s success. But sometimes twin films are simply evidence of the bizarre power of the zeitgeist. The Silence started production before A Quiet Place, and it was based on a book from 2015; it just got unlucky and came out slightly too late.
Speaking of zeitgeist, I only recently discovered that two of the most famous social psychologists of all time—Stanley Milgram and Phil Zimbardo—went to the same high school. Milgram is best known for his shock studies, where an alarming proportion of people were willing to electrocute a stranger to death in the lab. Zimbardo is best known for the Stanford Prison Experiment, which was once a classic piece of research, but has since been revealed to be more like an episode of reality TV—Zimbardo apparently directed his “guards” to do all the horrible things he later claimed they did themselves. According to Zimbardo, he was the popular one in high school, and Milgram was the smart one.
Anyway, I’m not sure what it was about James Monroe High School in the Bronx that made its students want to perform elaborate pantomimes that demonstrate people’s capacity for evil, but maybe it’s good that the school has since shut down.
Edit 11/6/24: This may not be true! See this comment.
Like most adults, I don’t know how to talk to kids (“So, uh, do you like that show about the dogs who are also cops?”). So I loved this article about how kids need different conversational doorknobs than adults do. For instance, we think it’s polite to ask open-ended questions, but it’s easier for kids to respond to multiple-choice questions:
Instead of “Did you have fun at gymnastics?” try “Did you love gymnastics today or hate it?” or “Which do you like better, gymnastics or drawing? Or sitting silently in the dark doing nothing?”
Instead of “What’s your favorite food?” you could ask, “Which food do you like best: pizza, ramen, or fish guts?”
And sometimes it’s easier not to ask at all. Instead, you can offer an interesting tidbit that kids can react to:
“On my way here I saw a tractor with the most gigantic tires I have ever seen! They were bigger than my car! I was like, ‘whaaaat????’”
There’s nothing more quintessentially human than being extremely skeptical about most things but extremely credulous about one specific thing. The best case of this I’ve found is Sir Thomas Browne, an English physician who wrote Pseudodoxia Epidemica, or, Enquiries into Very Many Received Tenets and Commonly Presumed Truths in 1646. He was a one-man fact-checking department, spending hundreds of pages busting popular myths like:
Crystals are just tightly-packed ice
If a wolf sees you before you see the wolf, you’ll lose your voice
Women have more ribs than men (because God used one of Adam’s ribs to make Eve)
That same Sir Thomas, however, also testified that witches definitely exist, and helped convict two girls accused of witchcraft, who were then hanged. We all contain multitudes, except those of us who are executed by the state because the local expert believes in witches.
Another guy with multitudes: Mozart. The guy who wrote the tune for “Twinkle Twinkle, Little Star” also wrote stuff like this to his cousin/possible crush:
Well, I wish you good night, but first,
Shit in your bed and make it burst.
Wolfgang and his whole clan loved scatological humor—apparently “Lick my arse” was sort of a Mozart family motto. I’m a sucker for this kind of stuff because we all think great minds are serious and grim, when in fact they’re just as weird as the rest of us. Imagine what horrifying Wikipedia pages might be generated if you became a world-famous artist and scholars pored through your texts after you died.
Earlier this year, I wrote about how a Harvard Business School professor named Francesca Gino was suing the science blog Data Colada for alleging fraud in Gino’s studies. Great news: that lawsuit has been dismissed, which is a victory for scientific discourse everywhere. Just in case, though, I must remind you that nothing on Experimental History can be considered defamation because there’s no evidence I’m sentient at all; I’m just a swarm of bees trapped in a room with a keyboard (PLZ SEND HONEY).
Worried about declining fertility? Try this recipe for creating an artificial man (c. 1537):
That the sperm of a man be putrefied by itself in a sealed cucurbit for forty days with the highest degree of putrefaction in a horse’s womb [“venter equinus”, meaning “warm, fermenting horse dung”], or at least so long that it comes to life and moves itself, and stirs, which is easily observed. After this time, it will look somewhat like a man, but transparent, without a body. If, after this, it be fed wisely with the Arcanum of human blood, and be nourished for up to forty weeks, and be kept in the even heat of the horse’s womb, a living human child grows therefrom, with all its members like another child, which is born of a woman, but much smaller.
In my last links post, I mentioned an Experimental History reader who put his toaster in the dishwasher. A blogger named Nehaveigur has since posted a replication:
I’m now able to report that after drying out in the sun, my toaster still works and is considerably cleaner and that my skepticism of Conventional Wisdom has marginally increased.
An Experimental History reader named Matthew Coleman asked to share a link to Giving Multiplier, a platform that adds extra money to your charitable donations. That link includes a special promo code that will boost your donation even more than the usual rate.
A recent paper claims that people’s “need for uniqueness” has declined over the last 20 years. It would suit my biases if that was true—it fits well with the stuff I wrote about in Pop Culture Has Become an Oligopoly and Oligopoly Everywhere—so I had to be extra careful while reading it. Ultimately, I’m not sure if it can tell us much.
The researchers analyzed responses to an online personality survey that was administered between 2000 and 2020 and included such items as “I always try to follow rules” and “I tend to express my opinions publicly, regardless of what others say.” They find a statistically significant decrease in people’s self-reported desire for uniqueness over time.
But that decrease is tiny. We’re talking -.008 units per year on a scale that goes from 1 to 5. Here’s what that looks like:
In The Illusion of Moral Decline, I counted changes like this—whether up or down—as meaningless, for three reasons. First, I mean, look at it. Second, you should never expect to get exactly the same answer to a survey question over time: maybe slightly different people are taking the survey or using the internet in the first place, etc., so tiny changes are always suspect.
And third, rather than squinting at each effect and trying to decide whether it was big enough to matter, I set a “Region of Practical Equivalence” (really I just used the default in my stats program) and checked whether the effect fell into it or out of it. This effect would have to be 10x larger to beat that benchmark. So to the extent that “need for uniqueness” is a thing, I don’t think there’s any good evidence that it’s changed over the past 20 years.
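If you want to see the arithmetic, here’s a minimal sketch of that check in Python (assuming the common default ROPE of ±0.1 standard deviations; the SD value is made up for illustration, since it depends on the survey):

```python
# A minimal sketch of a ROPE ("Region of Practical Equivalence") check.
# The +/- 0.1 SD region is a common default; the SD itself is hypothetical.
effect = -0.008             # reported change per year on the 1-to-5 scale
sd = 0.8                    # assumed SD of survey responses (illustrative only)
rope_half_width = 0.1 * sd  # effects within +/- 0.08 count as practically zero

negligible = abs(effect) < rope_half_width
print(f"effect {effect:+.3f} vs ROPE +/-{rope_half_width:.2f} -> negligible: {negligible}")
```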
I’ve listened to the Two Psychologists, Four Beers podcast for a long time, so it was a real treat to be on a recent episode. I drank two Miller High Lifes and tried to explain why scientists shouldn’t go to jail.
In my last post, I showed that both Democrats and Republicans can pass an Ideological Turing Test. Some folks thought the test was too easy for the people writing the statements—people pretending to be the other side can just write a few sentences of boilerplate and look exactly like the real thing. Maybe Readers couldn’t tell the difference because there simply wasn’t any signal for them to detect.
This is a reasonable critique, but it doesn’t fit the evidence. People pretending to be the opposite political party did leave signal behind, but the Readers failed to pick it up. We know that because we were able to build an algorithm that reliably distinguished real statements from fake statements. My coauthor Kris has since done some additional analyses, and he was able to outperform both humans and chance by using bidirectional encoder representations from transformers (BERT) in combination with a lasso regression:
Still not perfect, but the fact that BERT gives the right answer 60-80% of the time suggests that pretenders do indeed sound different from the real thing.
Two years ago, I was trying to figure out: how much should we hate each other?
See y’all soon.
-Adam
2024-10-23 20:49:28
This is joint work with Jason Dana and Kris Nichols. You can download a PDF version of this paper on PsyArxiv.
I dunno if you’ve heard, but Democrats and Republicans do not like each other. 83% of Democrats have an unfavorable view of Republicans, and Republicans return the lack of favor in similar numbers. Republicans think Democrats are immoral, Democrats think Republicans are dishonest, and a majority of both parties describes the other party as “brainwashed,” “hateful,” and “racist.” These numbers have only grown in recent decades.
(One particularly evocative statistic: only an estimated 10% of marriages cross party lines.)
But here’s something funny—according to a bunch of recent research, Democrats and Republicans don’t seem to know who they’re hating. For example, Democrats underestimate the number of Republicans who think that sexism exists and that immigration can be good. In return, Republicans overestimate how many Democrats think that the US should have open borders and adopt socialism. Both parties think they’re more polarized than they actually are. And a majority of both sides basically say, “I love democracy, I think it’s great,” and then they also say, “The other party does NOT love democracy, they think it’s bad.”
Maybe these parties hate each other because they misperceive each other? While Democrats and Republicans dislike and dehumanize each other, each side actually overestimates the other side’s hate, and those exaggerated meta-perceptions (“what I think that you think about me”) predict how much they want to do nasty undemocratic things, like gerrymander congressional districts in their party’s favor or shut down the other side’s favorite news channel. When you show people what their political opponents are really like, they see the other side as “less obstructionist,” they like the other side more, and they report being “more hopeful.”
It would be a heartwarming story if it turned out all of our political differences were one big misunderstanding. That story is, no doubt, at least a little true.
But there are two things that stick in my craw about all these misperception studies. First, we know that participants sometimes respond expressively—that is, when Democrats in psychology studies say things like, “Yes, I believe the average Republican would drone-strike a bus full of puppies if they had the chance,” what they really mean is “I don’t like Republicans.” It’s hard to separate legit misperceptions from people airing their grievances.
Second, it’s not clear whether we’ve given people a fair test of how well they “perceive” the other side. So far, researchers have just kinda picked some questions they thought would be interesting, and “interesting” probably means—consciously or subconsciously—“questions where we’re likely to find some big honkin’ misperceptions.” Someone with the opposite bias could almost certainly write just as many papers about accurate cross-party perceptions. There are infinite questions we can ask and there’s no way of randomly sampling from them.
To start untangling this mess, maybe we need to leave the lab and go visit Colorado Springs in 1978.
That was where a Black police officer named Ron Stallworth posed as an aspiring White supremacist, befriended some Ku Klux Klan members over the phone, and convinced them to let him join their club. (His White partner played the part in person.) At one point, Stallworth got David Duke, the Grand Wizard himself, to expedite his application. By the end of Stallworth’s investigation, the local chapter of the KKK was trying to put him in charge.
(If that story sounds familiar, it’s because it was made into the 2018 Oscar-winning movie BlacKkKlansman.)
Stallworth passed a pretty high-stakes test of his knowledge of Klan psychology, which the economist Bryan Caplan calls the “Ideological Turing Test”—if I can pretend to be on your side, and you can’t tell I’m pretending, then I probably understand you pretty well. In the original Turing Test, people try to tell the difference between a human and a computer. In the Ideological Turing Test, people try to tell the difference between friend and foe.
We thought this would be a useful way of investigating misperceptions between Republicans and Democrats. We first challenged each side to pretend to be the other side, and then we had both sides try to distinguish between the truth-tellers and the fakers. If partisans have no idea who the other side is or what they believe, it should be hard for people to do a convincing impression of the opposite party. So let’s see!
You can access all the materials, data, and code here.1
We got 902 participants on Amazon Mechanical Turk, roughly split between Democrats and Republicans. (Sorry, Independents: to take the study, you had to identify with one side or the other.)
We asked participants to write a statement of at least 100 words based on one of two prompts, either “I’m a REPUBLICAN because…” or “I’m a DEMOCRAT because…”. Let’s call these folks Writers. The prompts were randomly assigned, so half of Writers were told to tell the truth, and half were told to lie.
Writers knew that other participants—let’s call them Readers—would later read their statements and guess whether each Writer was telling the truth or lying. We offered Writers a bonus if they could convince a majority of Readers that their statement was true.
(This was pre-ChatGPT, when it wasn’t so easy to whip up some human-sounding text on demand.)
We tossed out a few statements that were either a) totally unintelligible, b) obviously copy/pasted from elsewhere on the internet, or c) responding to the wrong prompt. But otherwise, we kept them all in.2
We got another group of 746 Democrats and Republicans, and we explained the first half of the study to them. Then we showed them 16 statements from Part I, which were a mix of real/fake and Democrat/Republican. We asked them to guess whether each one was REAL or FAKE, and we paid them a bonus for getting more right, up to $4.
Before you see the results, you can take the Ideological Turing Test for a spin! A software engineer named vanntile generously volunteered to turn this study into a slick web app: ituringtest.com. You’ll see 10 randomly-selected statements and judge whether each one is real or fake; it takes about three minutes.
(Huge thanks to vanntile for building this, like an angel that came down from Computer Heaven. If you have any interesting projects in software engineering or cybersecurity in Europe, check him out.)
First, let’s look at the most important trials: Democrats reading real/fake Democrat statements, and Republicans reading real/fake Republican statements. Could people tell the difference between an ally and a pretender?
For Democrats, the answer is no:
For Republicans, the answer is also no:
Fake Democrats and fake Republicans were as convincing as real Democrats and real Republicans.
That means Writers did a good job! When Democrats were pretending to be Republicans, they could have written stuff like, “I’m a Republican because I believe every toddler should have an Uzi.” And when Republicans were pretending to be Democrats, they could have written stuff like, “I’m a Democrat because I’m a witch and I want to cast a spell that turns everyone gay.” They didn’t do that. They wrote statements that looked as legit as statements from people talking about their actual beliefs. So: both Democrats and Republicans successfully passed the Ideological Turing Test.
That’s already surprising, but it gets weirder.
Every participant, regardless of their own party, saw a random mix of all four kinds of statements (real/fake and Democrat/Republican). Here’s a no-brainer: Republicans should be better at picking out real/fake Republicans than they are at picking out real/fake Democrats, right? And Democrats should be better at picking out real/fake Democrats than they are at picking out real/fake Republicans. After all, you should know more about your own side.
Except…that didn’t happen. This next graph gets a little more complicated, so I’ll preface it with the three things that jump out at me:
Neither side did a good job discriminating between real and fake, no matter which party the statement claimed to come from.
Republicans said “REAL” at pretty much the same rate to all four kinds of statements.
Democrats were more likely to flag all Republican statements as fake, whether those statements were actually fake or not.
To zoom in on how well Readers did, we can use a measure from signal detection theory called d-prime. All you need to know about d-prime is that zero means you’re at chance (you could have done just as well by flipping a coin), above zero means you’re better than chance, and below zero means you’re worse than chance.
Readers from both parties performed basically at chance, regardless of the kind of statements they were reading:
There are a couple ways to end up performing at chance. You could say “REAL!” to every statement, or you could say “FAKE!” to every statement, or you could respond randomly. We want to know which one Readers were doing, and signal detection theory has another measure that can help with that: “c”. On the “c” scale, scores above zero mean participants said “REAL!” too often. Scores below zero would mean participants said “FAKE!” too often.
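(For the curious, here’s a minimal sketch of how both measures are computed, assuming the standard equal-variance model. The rates are made up, and I’ve flipped the textbook sign of “c” so that positive means too many “REAL!”s, matching the description above.)

```python
from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    """d-prime (sensitivity) and c (bias) from hit and false-alarm rates.

    Here a "hit" is calling a real statement REAL, and a "false alarm"
    is calling a fake statement REAL.
    """
    z_hit = norm.ppf(hit_rate)   # convert proportions to z-scores
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa       # 0 = chance, above 0 = better than chance
    c = 0.5 * (z_hit + z_fa)     # sign flipped from the textbook convention,
                                 # so c > 0 = said "REAL!" too often
    return d_prime, c

# Made-up Reader: calls 75% of real statements REAL, but also 70% of fakes
d, c = sdt_measures(0.75, 0.70)
print(d, c)  # tiny d-prime (near chance), positive c (trigger-happy on REAL)
```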
Participants ended up performing at chance in large part because they said “REAL!” too often:
One weird thing here: Democrats trust Democrat-claiming statements more than they trust Republican-claiming statements. But Republicans trust both kinds of statements equally. I’m not sure what to make of that, especially because Democrats still trust Republican-claiming statements more than they should.
We asked Readers how confident they were about each of their guesses. Overall, confidence was not related to accuracy.
On the graph below, I’m only including the critical trials—Democrats reading statements that claim to be from Democrats, and Republicans reading statements that claim to be from Republicans. This graph is pretty confusing until you understand the pattern: people felt more confident when they thought a statement was real. So people had high confidence on real statements that they got right and on fake statements they got wrong.
After Readers guessed whether each statement was real or fake, we also asked them, “How strongly do you think the writer of this statement identifies with the Democratic party?” and “How strongly do you think the writer of this statement identifies with the Republican party?” You could think of this as a more sensitive measure than a simple stab at real/fake. For instance, if you think this is a real Republican statement, just how Republican is the person who wrote it?
Using these ratings, we can see that fake statements seem just as partisan as real statements. For instance, Readers thought fake Republicans and real Republicans sounded equally Republican:
This suggests our fake Writers were doing something pretty similar to what the real Writers were doing. Fakers could have easily phoned in their statements, maybe because it was difficult for them even to type words that they didn’t believe. For instance, Republicans pretending to be Democrats could have said something like, “I’m a Democrat, but I’m a moderate one! Almost a Republican, really...”. Or they could have gone overboard: “I’m the rootin’est tootin’est Democrat you ever did see!” On average, they didn’t do either of those things. They wrote statements that sounded just as Democrat as statements from real Democrats.
In Part I, we asked Writers to predict how well their statement would do—that is, the percentage of Readers who would judge their statement as “REAL!”. On average, Writers guessed correctly. But each individual writer was way off; there was no correlation between their predictions and their performance. So although Writers didn’t over- or under-estimate their performance on average, they had no idea how well their statement was going to do. They were just wrong.
Here’s the graph for Democrat Writers predicting how well they’ll fool Republican Readers:
And Republican Writers predicting how well they’ll fool Democrat Readers:
(I know that line looks like it’s significantly sloped; it’s actually p = .05).
So far, I’ve been showing you lots of averages. But of course, some Writers wrote statements that sounded way more convincing than others, and some Readers were way better at picking out the real statements from the fake ones. We tried to figure out what made these Writers and Readers better or worse at their jobs, but we couldn’t find much.
Here’s a reasonable hypothesis: the more you identify with one party, the harder it is to pretend to be the other party. Die-hard Democrats probably think all Republicans are nutjobs; die-hard Republicans probably think all Democrats are wackos. In both cases, extremists should be worse at faking, and worse at identifying the fakes.
This reasonable hypothesis is wrong. We asked participants how strongly they identified with each party, and it didn’t affect how well they did as Writers or as Readers, regardless of what they were writing or reading. Across the board, the nutjobs and the wackos were just as good as the mild-mannered centrists.
We also asked people about their age, race, gender, and education. And we tried to figure out the political makeup of their social environment—for instance, maybe Democrats who live in red states or have a lot of Republican friends or family would do better than Democrats who live in blue states and only ever talk to other Democrats. But none of these demographics ever affected Writing or Reading performance more than 5 percentage points, and in most cases they didn’t matter at all.
At this point, we started wondering whether it was even possible to tell the difference between real and fake statements. Maybe Writers were so good that they left no detectable trace. That would be pretty impressive, though it might also mean our task was too easy.
To find out, we did a bunch of fancy computer stuff. Well, specifically, my friend Kris Nichols did a bunch of fancy computer stuff.
Surprisingly, a lot of the fancy computer stuff didn’t outperform humans. Random forest models? No better than chance. Latent Dirichlet Allocation? Bupkis. The only thing that worked was a souped-up lasso regression, which got the right answer about 70% of the time—much better than the 50% humans got. This means there was something different about real and fake statements; humans just couldn’t pick it out.
EDIT 11/4/24:
Kris ran some additional analyses where he got a computer to do even better. He used bidirectional encoder representations from transformers (BERT) in combination with the lasso regression above, and got even better results. As you can see below, BERT is able to discriminate between real and fake statements, although it’s still far from perfect:
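(If you want a feel for what that pipeline involves, here’s a minimal sketch. The encoder model, the toy statements, and the regression settings are stand-ins, not Kris’s actual setup.)

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# A BERT-based sentence encoder (this particular model is a stand-in)
encoder = SentenceTransformer("bert-base-nli-mean-tokens")

# Toy statements in place of the ~900 real ones
statements = [
    "I'm a DEMOCRAT because I believe healthcare is a human right.",
    "I'm a REPUBLICAN because I believe in limited government.",
    "I'm a DEMOCRAT because unions built the middle class.",
    "I'm a REPUBLICAN because I support the Second Amendment.",
]
is_real = [1, 0, 0, 1]  # made-up labels: 1 = truth-teller, 0 = faker

X = encoder.encode(statements)  # one fixed-length embedding per statement

# "Lasso" here means an L1-penalized logistic regression over the embeddings
clf = LogisticRegression(penalty="l1", solver="liblinear")
clf.fit(X, is_real)

# In the real analysis you'd score held-out statements via cross-validation;
# that's where the 60-80% accuracy figure comes from.
print(clf.predict(X))
```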
We gave ChatGPT the same instructions that we gave the human participants, and then fed it 48 statements (kind of like taking the study three times). We ran this in May 2024 using the paid version, which I believe was called GPT-4 at the time.
Here’s how it did:
ChatGPT really thought people were telling the truth. There was one statement that began:
I am a Democrat because I am a transgender midget. I feel like I am a woman inside and It does not reflect this. I m currently in transition and I find only democrats accept me. I live in Portland Oregon and love the city as it really reflects who I am.
ChatGPT thought that statement was real. (It wasn’t.) Indeed, ChatGPT was too credulous overall, even more so than humans.
(Sarcastic statements like the one above were really rare. And if you want to see more statements, remember you can try this study yourself.)
Do Republicans and Democrats understand one another? The answer from research so far has been a resounding “NO!” According to the Ideological Turing Test, however, both sides seem to understand each other about as well as they understand themselves.
Of course, the ITT isn’t the be-all, end-all measure of misperception. Like any other measure, it’s just one peek at the problem. But this peek seems a bit deeper and wider than asking people to bubble in some multiple choice questions.
I was pretty surprised when I first saw these results, but I can guess why it worked out this way. No matter who you are, you hear about Republicans and Democrats all the time. Everyone knows which side supports abortion and which side wants to limit immigration. Some people argue that media and the internet distort these differences: “Democrats want to abort every child!” “Republicans want to build a border wall around the moon!” I’m sure the constant noise doesn’t help, but it also doesn’t seem to have fried our participants’ brains as much as you might expect, and that’s good news.
These results also suggest that America’s political difficulties aren’t simply one big misunderstanding. If one or both sides couldn’t pass the ITT, that would be an obvious place to start trying to fix things—it’s hard to run a country together when you’re dealing with a caricature of your opponents. When both sides sail through the ITT no problem, though, maybe that means Republicans and Democrats have substantive disagreements and they both know it.
(How do we solve those disagreements? Uhhh I dunno I’m just a guy who asks people stupid questions on the internet.)
We would be remiss not to mention an important limitation of our study. Turing’s original paper mentions a potential problem with his “Imitation Game”:
I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. [...] If telepathy is admitted it will be necessary to tighten our test up. [...] To put the competitors into a ‘telepathy-proof room’ would satisfy all requirements.
Unfortunately, we were not able to locate any telepathy-proof rooms for our study, so this should be considered a limitation and an area for future research.
This is just one version of the Ideological Turing Test3. You could run lots of different iterations, and some of them might make it harder for the fakers to succeed. Maybe fakers would fall apart if you asked them to write 1,000 words instead of 100. (But maybe it would also be hard for people to write 1,000 words about their own beliefs and still sound convincing.) Maybe people could ferret each other out if you gave them the chance to interact. (But maybe people wouldn’t know which questions to ask, and maybe everybody starts looking suspicious under questioning.) These are great ideas for studies and we have no plans to run them, so we hope you run them and then you tell us about them.
You could also, of course, run lots of ITTs on your favorite social cleavages. Men vs. women! Pro-Palestine vs. pro-Israel! Carnivores vs. vegetarians! Please feel free to use our materials as a starting point; we look forward to seeing what you do.
And remember, if you’ve been thinking to yourself this whole time, “I could do better than these idiots”—well, try the ITT for yourself!
We collected this data in 2019 and it is 100% my fault that we haven’t posted it until now, because it was always a side project and everything else was due sooner, and also I’m a weak, feral human. I would be totally surprised if you got different results today, but weirder things have happened.
Online data is pretty crappy if you don’t screen it, so we screened it a lot; see this footnote for more info. There are fewer Republicans in online samples, so we ran a separate data collection that over-recruited them.
Just want to shout out two studies that used ITT-like-things to study other topics: The Straw Man Effect by Mike Yeomans, and this paper by a team of researchers in the UK. Yeomans finds that people don’t do a good job pretending to support/oppose ObamaCare, while the UK team finds that people are pretty good at pretending to support/oppose covid vaccines, Brexit, and veganism. Maybe the difference is the issues they study, or maybe it’s that they only ask people to list arguments, rather than write a whole statement.