2025-05-27 22:41:00
We are currently living through the greatest experiment humankind has ever tried on itself, an experiment called the internet. As a species, we’ve done some wacky things before—domesticating wolves, planting seeds in the ground, having sex with Neanderthals, etc.—but all of those played out over millennia, whereas we’re kinda doing this one in a single lifetime.
So far the results are, I would say, mixed. But the weirdest part is that most people act like they’re spectators to this whole thing, like, “Oh, I have nothing to do with the outcome of this species-wide experiment, that’s up to other people. Hope it turns out good!” That sentiment makes no sense, because the internet is us. There are no sidelines there. Whatever you write, read, like, forward, comment on, subscribe to, pay for—that thing gets bigger. So if this experiment is gonna work, it’s because we make it work.
In her new book, the historian Ada Palmer argues that what made the Renaissance different was that “people said it was different, believed it was different, and claimed and felt that they were part of a project to transform the world on an unprecedented scale.”
Well, I feel that way right now. If you do too, let’s make a Renaissance.
The blogosphere has a particularly important role to play, because now more than ever, it’s where the ideas come from. Blog posts have launched movements, coined terms, raised millions, and influenced government policy, often without explicitly trying to do any of those things, and often written under goofy pseudonyms. Whatever the next vibe shift is, it’s gonna start right here.
The villains, scammers, and trolls have no compunctions about participating—to them, the internet is just another sandcastle to kick over, another crowded square where they can run a con. But well-meaning folks often hang back, abandoning the discourse to the people most interested in poisoning it. They do this, I think, for three bad reasons.
One: lots of people look at all the blogs out there and go, “Surely, there’s no room for lil ol’ me!” But there is. Blogging isn’t like riding an elevator, where each additional person makes the experience worse. It’s like a block party, where each additional person makes the experience better. As more people join, more sub-parties form—now there are enough vegan dads who want to grill mushrooms together, now there’s sufficient foot traffic to sustain a ring toss and dunk tank, now the menacing grad student next door finally has someone to talk to about Heidegger. The bigger the scene, the more numerous the niches.
Two: people keep to themselves because they assume that blogging is best left to the professionals, as if you’re only allowed to write text on the internet if it’s your full-time job. The whole point of this gatekeeper-less free-for-all is that you can do whatever you like. Wait ten years between posts, that’s fine! The only way to do this wrong is to worry about doing it wrong.
And three: people don’t want to participate because they’re afraid no one will listen. That’s certainly possible—on the internet, everyone gets a shot, but no one gets a guarantee. Still, I’ve seen first-time blog posts go gangbusters simply because they were good. And besides, the point isn’t to reach everybody; most words are irrelevant to most people. There may be six individuals out there who are waiting for exactly the thing that only you can write, and the internet has a magical way of switchboarding the right posts to the right people.
If that ain’t enough, I’ve seen people land jobs, make friends, and fall in love, simply by posting the right words in the right order. I’ve had key pieces of my cognitive architecture remodeled by strangers on the internet. And the party’s barely gotten started.
But I get it—it takes a little courage to walk out your front door and into the festivities, and it takes some gumption to meet new people there. That’s why I’m running the Second Annual Experimental History Blog Post Competition, Extravaganza, and Jamboree.
Submit your best unpublished blog post, and if I pick yours, I’ll send you real cash money and I’ll tell everybody I know how great you are.
You can see last year’s winners and honorable mentions here. They included: self-experiments, travelogues, tongue-in-cheek syllabi, reviews of books that don’t exist, literary essays, personal reveries, and one very upsetting post about picking your nose. The authors were sophomores, software engineers, professors, filmmakers, public health workers, affable Midwesterners, and straight up randos and normies. So there’s no one kind of thing I’m looking for, and no one kind of person I’m looking for.
That said, if you’re looking for some inspiration, here are some triumphs of the form:
Book Reviews: On the Natural Faculties, The Gossip Trap, Progress and Poverty, all of The Psmith’s Bookshelf
Deep Dives: Dynomight on air quality and air purifiers, Higher than the Shoulders of Giants, or a Scientist’s History of Drugs, How the Rockefeller Foundation Helped Bootstrap the Field of Molecular Biology, all of Age of Invention
Big Ideas: Ads Don’t Work That Way, On Progress and Historical Change, Meditations on Moloch, Reality Has a Surprising Amount of Detail, 10 Technologies that Won’t Exist in 5 Years
Personal Stories/Gonzo Journalism: No Evidence of Disease, It-Which-Must-Not-Be-Named, adventures with the homeless people outside my house, My Recent Divorce and/or Dior Homme Intense, The Potato People
Scientific Reports/Data Analysis: Lady Tasting Brine, Fahren-height, A Chemical Hunger, The Mind in the Wheel, all of Experimental Fat Loss, all of The Egg and the Rock
How-to and Exhortation: The Most Precious Resource Is Agency, How To Be More Agentic, Things You’re Allowed to Do, Are You Serious?, 50 Things I Know, On Befriending Kids
Good Posts Not Otherwise Categorized: The biggest little guy, Baldwin in Brahman, The Alameda-Weehawken Burrito Tunnel, Bay Area House Parties (1, 2, 3, etc.), Alchemy is ok, Ideas Are Alive and You Are Dead, If You’re So Smart Why Can’t You Die?, A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox
And of course:
Last year’s winners: We’re not going to run out of new anatomy anytime soon, The Best Antibiotic for Acne is Non-Prescription, and Medieval Basket Weaving
(By the way, if you have some all-time great blog posts, please leave them in the comments! I’d love to expand this list.)
Paste your post into a Google Doc.
VERY IMPORTANT STEP: Change the sharing setting to “Anyone with the link”. This is not the default setting, and if you don’t change it, I won’t be able to read your post.
First place: $500
Second place: $250
Third place: $100
I’ll also post an excerpt of your piece on Experimental History and heap praise upon it, and I’ll add your blog to my list of Substack recommendations for the next year. You’ll retain ownership of your writing, of course.
Only unpublished posts are eligible. As fun as it would be to read every blog post ever written, I want to push people to either write something new or finish something they’ve been sitting on for too long. You’re welcome to publish your post after you submit it. If you win, I’ll reach out beforehand and ask you for a direct link to your post so I can include it in mine.
One entry per person. Multiple authors are fine.
There’s technically no word limit, but if you send me a 100,000-word treatise I probably won’t finish it.
You don’t need to have a blog to submit, but if you win and you don’t have one, I will give you a rousing speech about why you should start one.
Previous top-three winners are not eligible to win again, but honorable mentions are.
Uhhh otherwise don’t break any laws I guess??
Submissions are due July 1. Submit here.
2025-05-13 21:49:57
I’ve complained a lot about the state of psychology, but eventually it’s time to stop whining and start building. That time is now.
My mad scientist friends who go by the name Slime Mold Time Mold (yes, really) have just published a book that lays out a new foundation for the science of the mind. It’s called The Mind in the Wheel, and it’s the most provocative thing I’ve read about psychology since I became a psychologist myself—this is probably the first time I’ve felt surprised by something in the field since 2016. It’s maybe right, it’s probably wrong, but there’s something here, something important, and anybody with a mind ought to take these ideas for a spin. I realize some people are skittish about reading books from pseudonymous strangers on the internet—isn’t that what your mom warned you not to do?—but baby, that’s what I’m here for! So let’s go—
Lots of people agree that psychology is stuck because it doesn’t have a paradigm, but that’s where the discussion ends. We all pat our pockets and go, “paradigm, paradigm...uh...hmm, I seem to have left mine at home, do you have one?”
Our minds turn to mush at this point because nobody has ever been clear on what a paradigm is. Thomas Kuhn, the guy who coined the term, was famously hard to understand.1 People assumed that “paradigm shift” just meant “a big change” and so they started using the term for everything: “We used to wear baggy jeans, now we wear skinny jeans! Paradigm shift!”
So let’s get clear: a paradigm is made out of units and rules. It says, “the part of the world I’m studying is made up of these entities, which can do these activities.”
In this way, doing science is a lot like reverse-engineering a board game. You have to figure out the units in play, like the tiles in Scrabble or the top hat in Monopoly. And then you have to figure out what those units can and can’t do: you can use your Scrabble tiles to spell “BUDDY” or “TREMBLE”, but not “GORFLBOP”. The top hat can be on Park Place, it can be on B&O Railroad, but it can never be inside your left nostril, or else you’re not playing Monopoly anymore.
A paradigm shift is when you make a major revision to the list of units or rules. And indeed, when you look back at the biggest breakthroughs in the history of science, they’re all about units and rules. Darwin’s big idea was that species (units) can change over time (rule). Newton’s big idea was that the rules of gravitation that govern the planets also govern everything down here on Earth. Atomic theory was a proposal about units (all matter is made up of things called atoms) and it came with a lot of rules (“atoms always combine in the same proportions”, “matter can’t be created or destroyed”, etc.). When molecular biologists figured out their “central dogma” in the mid-1900s, they expressed it in terms of units (DNA, RNA, proteins) and what those units can do (DNA makes RNA, RNA makes proteins).
If all this sounds obvious, that’s great. But in the ~150 years that psychology has existed, this is not what we’ve been doing.
When you’re making and testing conjectures about units and rules, let’s call that science. It’s easy to do two other things that look like science, but aren’t, and this is unfortunately what a lot of research in psychology is like.
First, we can do studies without any inkling about the units and rules at all. You know, screw around and find out! Just run some experiments, get some numbers, do a few tests! A good word for this is naive research. If you’re asking questions like “Do people look more attractive when they part their hair on the right vs. the left?” or “Does thinking about networking make people want soap?” or “Are people less likely to steal from the communal milk if you print out a picture of human eyes and hang it up in the break room?” you’re doing naive research.2
The name is slightly pejorative, but only slightly, and for good reason. On the one hand, some proportion of your research should be naive, because there’s always a chance you stumble onto something interesting. If you’re locked into a paradigm, naive research may be the only way you discover something that complicates the prevailing view.
On the other hand, you can do naive research forever without making any progress. If you’re trying to figure out how cars work, for instance, you can be like, “Does the car still work if we paint it blue?” *checks* “Okay, does the car still work if we...paint it a slightly lighter shade of blue??”
(As SMTM puts it: “To get to the moon, we didn’t build two groups of rockets and see which group made it to orbit.”)
There’s a second way to do research that’s non-scientific: you make up a bunch of hand-wavy words and then study them. A good name for this is impressionistic research. If you’re studying whether “action-awareness merging” leads to “flow”, or whether students’ “math self-efficacy” mediates the relationship between their “perceived classroom environment” and their scores on a math test, or whether “mindfulness” causes “resilience” by increasing “zest for life”, you are doing impressionistic research.
The problem with this approach is that it gets you tangled up in things that don’t actually exist. What is “zest for life”? It is literally “how you respond to the Zest for Life Scale”. And what does the Zest for Life Scale measure? It measures...zest for life. If you push hard enough on any psychological abstraction, you will eventually find a tautology like this. This is why impressionistic research makes heavy use of statistics: the only way you can claim you’ve discovered anything is to produce a significant p-value.
Naive and impressionistic research are often respectable-looking ways to go nowhere. For example, if you were trying to understand Monopoly using the tools of naive research, you might start by correlating “the number that appears on the dice” with “money earned”. That sounds like a reasonable idea, but you’d end up totally confused—sometimes people get money when they roll higher numbers, but sometimes they roll higher numbers and lose money, and sometimes they gain or lose money without rolling at all. These inconsistent results could spawn academic feuds that play out over decades: “The Monopoly Lab at Johns Hopkins finds that rolling a four is associated with an increase in wealth!” “No, the Monocle Group at UCLA did a preregistered replication and it actually turns out that odd numbers are good, but only if you fit a structural equation model and control for the past ten rolls!”
The impressionistic approach would be even more hopeless. At least dice and dollars are actual parts of the game; if you start studying abstractions like “capitalism proneness” and “top hat-titude”, you can spin your wheels forever. The only way you’ll ever understand Monopoly is by making guesses about the units and rules of the game, and then checking whether your guesses hold up. Otherwise, you might as well insert the top hat directly into your left nostril.
We’re going to get to psychology in a second, but first we have to avoid a very tempting detour. Whenever I talk to people about the units and rules of psychology, they’re immediately like, “Oh, so you’re saying psychology should be neuroscience. The units are neurons and—”
Lemme stop you right there, because that’s not where we’re going.
Let’s say you’re trying to fix the New York City transit system, so you’re thinking about trains, stations, passengers, etc. All of those things are made of smaller units, but you don’t get better at designing the system by thinking about the smallest units possible. If you start asking questions like, “How do I use a collection of iron atoms to transport a collection of carbon and hydrogen atoms?” you’ll miss the fact that some of those carbon and hydrogen atoms are in the shape of butts that need seats, or that some of them are in the shape of brains that need to be told when the train is arriving.
Those smaller units do matter, because you’re constrained by what they can and can’t do—you can’t build a train that goes faster than the speed of light, and you can’t expect riders to be able to phase through train doors like those two twin ghosts with dreadlocks from the second Matrix movie. But lower-level truths like the Planck constant, the chemical makeup of the human body, the cosmic background radiation of the universe, etc., are not going to help you figure out where to put the elevators in Grand Central, nor will they tell you how often you should run an express train.
Another example for all you computer folks out there: ultimately, all software engineering is just moving electrons around. But imagine how hard your job would be if you could only talk about electrons moving around. No arrays, stacks, nodes, graphs, algorithms—just those lil negatively charged bois and their comings and goings. I don’t know a lot about computers, but I don’t think this would work. Psychology is similar to software; you can’t touch it, but it’s still doing stuff.
So yes, anything you posit at the level of psychology has to be possible at the level of neuroscience. And everything in neuroscience has to be possible at the level of biochemistry, etc., all the way down, and ultimately it’s all at the whims of God. But if you try to reduce any of those levels to be “just” the level below it, you lose all of its useful detail.
That’s why we won’t be talking about neurons or potassium ions or whatever. We’re gonna be talking about thermostats.
Here’s the meat of The Mind in the Wheel: the mind is made out of units called control systems.
I talked about control systems before in You Can’t Be Too Happy, Literally, but here’s a brief recap. The classic example of a control system is a thermostat. You set a target temperature, and if the thermostat detects a temperature that’s lower than the target, it turns on the heat. If the temperature is higher than the target, it turns on the A/C. The difference between the target temperature and the actual temperature is called the “error”, and it’s the thermostat’s job to minimize it. That’s it! That’s a control system.
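If it helps to see that loop written out, here’s a toy version in code. It’s my sketch, not the book’s, and every name and number in it is made up:

```python
# A toy control system, thermostat-style. The names and numbers are invented;
# this is just the target/error/act loop described above.

class ControlSystem:
    def __init__(self, name, target):
        self.name = name
        self.target = target              # the set point (e.g., 70 degrees)

    def error(self, observed):
        # the gap between where things are and where they should be
        return self.target - observed

    def act(self, observed):
        err = self.error(observed)
        if err > 0:
            return "turn on the heat"     # too cold: push the variable up
        if err < 0:
            return "turn on the A/C"      # too hot: push it down
        return "do nothing"

thermostat = ControlSystem("temperature", target=70)
print(thermostat.act(observed=64))        # -> turn on the heat
```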
It seems likely that the mind contains lots of control systems because they are a really good way not to die. Humans are fragile, and we need to keep lots of different things at just the right level. We can’t be too warm or too cold. We need to eat food, but not too much. We need to be horny sometimes in order to reproduce, but if you’re horny all the time, you run into troubles of a different sort.
The science of control systems is called cybernetics, so let’s call this approach cybernetic psychology. It proposes that the mind is a stack of control systems, each responsible for monitoring one of these necessities. The units are the control systems themselves and their components, and the rules are the way those systems operate. Like a thermostat, they monitor some variable out in the world, compare it to the target level of that variable, and then act to reduce the difference between the two. For simplicity, we can refer to this error-reduction component as the “governor”. Unlike a simple thermostat, however, governors are both reactive and predictive—they try to reduce errors that have occurred, and they try to prevent those errors from occurring in the first place.
Every control system has a target level, and they each produce errors when they’re above or below that target. In cybernetic psychology, we call those errors “emotions”. So hunger is the error signal from the Nutrition Control System, pain is the error signal from the Body Damage Prevention System, and loneliness is the error signal from the Make Sure You Spend Time with Other People System.
(I’m making these names up; we don’t yet know how many systems there are, or what they control.)
Some of these emotions will probably correspond to words that people already use, but some won’t. For instance, “need to pee” will probably turn out to be an emotion, because it seems to be the error signal from some kind of Urine Control System. “Hunger” will probably turn out to be several emotions, each one driving us to consume a different macronutrient, or maybe a different texture or taste, who knows. If one of those drives is specifically for sugar, it would explain why people mysteriously have room for dessert after eating everything else: the Protein/Carbs/Fiber/etc. Control Systems are all satisfied, but the Sugar Control System is not.
The Sims actually did a reasonable job of identifying some of these drives:
I worry that all of this is sounding too normal so far, so let’s get weirder.
In cybernetic psychology, “happiness” is not an emotion, because it’s not an error from a control system. Instead, happiness is the result of correcting errors. As SMTM puts it, “Happiness is what happens when a thirsty person drinks, when a tired person rests, when a frightened person reaches safety.” It’s kind of like getting $200 for passing “Go” in Monopoly.
I’ll return to this later because I’ve got a bone to pick with it, but for now I just want to point out that “emotion” means something different in cybernetics than it does in common parlance, and that’s on purpose, because repurposing words is a natural part of paradigm-shifting. (In Aristotelian physics, for instance, “motion” means something different. If your face turns red, it is undergoing “motion” in terms of color.) When you get too familiar with the words you’re using, you forget that each one is packed with assumptions—assumptions that might be wrong, but you’ll never know unless you bust ‘em open like a piñata.
Okay, if the mind is made out of these cybernetic systems and their governors, what we really want to know is: how many are there? How do they work?
We’re not doing impressionistic research here, so we can’t just create control systems by fiat, the way you can create “zest for life” by creating a Zest for Life Scale. Instead, discovering the drives requires a new set of methodologies. You might start by noticing that people seem inexplicably driven to do some things (like play Candy Crush) or inexplicably not driven to do other things (like drink lemon juice when they’re suffering from scurvy, even though it would save their life). This could give you an inkling of what kind of drives exist. Then you could try to isolate one of those drives through methods like:
Prevention: If you stop someone from playing Candy Crush, what do they do instead?
Knockout: If you turn off the elements of Candy Crush one at a time—make it black and white, eliminate the scoring system, etc.—at what point do they no longer want to play?
Behavioral exhaustion (knockout in reverse): If you give people one component of Candy Crush at a time—maybe, categorizing things, earning points, seeing lots of colors, etc.—and let them do that as much as they want, do they still want to play Candy Crush afterward?
(See the methods sections for more).
With a few notable exceptions, you can pretty much only do one thing at a time, and each governor has a different opinion on what that thing should be. So unlike the thermostat in your house, which doesn’t have to contend with any other control systems, all of the governors of the mind have to fight with each other constantly. While we’re discovering the drives, then, we also have to figure out how the governors jockey for the right to choose behaviors.
For example, the Oxygen Governor can get a lot of votes really fast; no matter how cold, hungry, or lonely you are, you’ll always attend to your lack of air first. The Pain Governor can be overridden at low error levels (“my ankle kinda hurts but I have to finish this 5k”) but it gets a lot of sway at high error levels (“my ankle hurts so much I literally can’t walk on it”). Meanwhile, people can be really lonely for a long time without doing much about it, suggesting that the Loneliness Governor tops out at relatively few votes, or that it has a harder time figuring out what to vote for.
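To make the jockeying concrete, here’s a toy version of that fight. The particular vote curves below are mine, not SMTM’s, and they’re surely wrong in the details, but they show how governors with different tempers could compete for the same behavior slot:

```python
# A toy sketch of governors bidding for control of behavior. The vote curves
# are invented for illustration; the book doesn't commit to any particular math.

def oxygen_votes(error):
    return 100 * error                 # ramps up almost instantly

def pain_votes(error):
    return error ** 2                  # ignorable when small, overwhelming when large

def loneliness_votes(error):
    return min(error, 10)              # tops out at a fairly low ceiling

def choose_behavior(errors):
    bids = {
        "breathe": oxygen_votes(errors["oxygen"]),
        "rest the ankle": pain_votes(errors["pain"]),
        "call a friend": loneliness_votes(errors["loneliness"]),
    }
    return max(bids, key=bids.get)     # whoever bids highest picks the behavior

# Mild pain, serious loneliness, a small oxygen deficit -- oxygen still wins:
print(choose_behavior({"oxygen": 3, "pain": 2, "loneliness": 50}))  # -> breathe
```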
From the get-go, this raises a lot of questions. What are the governors governing—is the Loneliness Governor paying attention to something like eye contact or number of words spoken, or is it monitoring some kind of super abstract measure of socialization that we can’t even imagine yet? What happens when two governors are deadlocked? What happens when the vote is really close, is there a mental equivalent of a runoff election? And how do these governors “learn” what things to vote for? No one knows yet, but we’d like to!
Here’s where cybernetics really pops off: if you’re on board so far, you’ve already got a theory of personality and psychopathology.
If the mind is made out of control systems, and those control systems have different set points (that is, their target level) and sensitivities (that is, how hard they fight to maintain that target level), then “personality” is just how those set points and sensitivities differ from person to person. Someone who is more “extraverted”, for example, has a higher set point and/or greater sensitivity on their Sociality Control System (if such a thing exists). As in, they get an error if they don’t maintain a higher level of social interaction, or they respond to that error faster than other people do.
This is a major upgrade to how we think about personality. Right now, what is personality? If you corner a personality psychologist, they’ll tell you something like “traits and characteristics that are stable across time and situations”. Okay, but what’s a trait? What’s a characteristic? Push harder, and you’ll eventually discover that what we call “personality” is really “how you bubble things in on a personality test”. There are no units here, no rules, no theory about the underlying system and how it works. That’s why our best theory of personality performs about as well as the Enneagram, a theory that somebody just made up.
But that’s not all—cybernetics also gives you a systematic way of thinking about mental illness. When you lay out all the parts of a control system, you’ll realize that there are lots of ways it can break down, and each malfunction causes a different kind of pathology.
For instance, if you snip that line labeled “error”, you knock out almost the entire control system. There’s no voting, no behavior, and no happiness generated. You just sit there feeling nothing, which certainly sounds like a kind of depression. On the other hand, if all of your errors get turned way up, then you get tons of voting, lots of behavior, and—if your behavior is successful—lots of happiness. That sounds like mania. (And so on.)
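Here’s that claim as a sketch: one control system, where “personality” and “pathology” are just different knob settings. The knob names (set_point, sensitivity, error_gain) are ones I made up for illustration:

```python
# One control system, different knob settings. The parameter names are
# invented; the idea is that personality and pathology live in the settings.

def error_signal(observed, set_point, sensitivity, error_gain=1.0):
    return error_gain * sensitivity * max(0.0, set_point - observed)

hours_of_socializing = 3

introvert = error_signal(hours_of_socializing, set_point=2, sensitivity=1.0)
extravert = error_signal(hours_of_socializing, set_point=6, sensitivity=2.0)
print(introvert, extravert)   # 0.0 vs. 6.0 -- same system, different personalities

# "Snip the error line" (gain = 0): no error, no votes, no behavior, no happiness.
flattened = error_signal(hours_of_socializing, set_point=6, sensitivity=2.0, error_gain=0.0)

# Crank every error way up: constant urgent signals, frantic behavior.
cranked = error_signal(hours_of_socializing, set_point=6, sensitivity=2.0, error_gain=10.0)
print(flattened, cranked)     # 0.0 vs. 60.0 -- something like anhedonia vs. mania
```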
This is how units-and-rules thinking can get you farther than naive or impressionistic research. Right now, we describe mental disorders based on symptoms. Like: “You feel depressed because you feel sad.” We have no theory about the underlying system that causes the depression. Instead, we produce charts filled with abstractions, like this:
Imagine how hopeless we would be if we approached medicine this way, lumping together both Black Lung and the common cold as “coughing diseases”, even though the treatment for one of them is “bed rest and fluids” and the treatment for the other one is “get a different job”. This is, unfortunately, about the best we can do with a symptoms-based approach—maybe we’ll rearrange the chart as our statistical techniques get better, but we’ll never, ever cure depression. This is why we need a blueprint instead of a list: if we can trace a malfunction back to the part that broke, maybe we can fix it.
When we’re doing that, we’ll probably discover that what we think of as one thing is actually many things. “Depression”, for instance, may in fact be 17 different disorders. I mean, c’mon, one symptom of “depression” is sleeping too much, and another is sleeping too little. One symptom is weight gain; another is weight loss. Some people with depression feel extremely sad; others feel nothing. It’s crazy that we use one word to describe all of these syndromes, and it probably explains why we’re not very good at treating them.
I think there’s a lot of promise here, but now let me attack this idea a little bit, so you can see how disputes work differently when we’re working inside a paradigm.
To SMTM, happiness is not an emotion because it isn’t an error signal. Instead, it’s the thing you get for correcting an error signal. Eat a burrito when you’re hungry = happiness. Talk to a friend when you’re lonely = happiness. SMTM suspect that happiness operates like an overall guide for explore/exploit: when you’ve got a lot of happiness in the tank, keep doing the things you’re doing. When you’re low, change it up.
I think there’s something missing here. When you really gotta pee and you finally make it to a bathroom, that feels good. When you study all month for a big exam and then you get a 97%, that feels good too. But they feel like different kinds of good. The first kind is intense but fleeting; the second is more of a slow burn that could last for a whole day. I don’t see how you accomplish this with one common “pot” of happiness.
Or have you ever been underwater for a little too long? When you finally reach the surface, I guess it feels “good” to breathe again, but once you catch your breath, it’s not like you feel ecstatic. You feel more like, I dunno, what’s the emotion for “BREATHING IS VERY IMPORTANT, I WOULD ALWAYS LIKE TO BREATHE”?
I see two ways to solve this problem. One is to allow for several types of positive signals, so not all error correction gets dumped into “happiness”. Maybe there’s a separate bucket called “relief” that fills up when you correct dangerous errors like pain or suffocation. Unlike happiness, which is meant to encourage more of the same behaviors, relief might be a signal to do less of something.
Another solution is to allow for different governors to have different ratios between error correction and happiness generation. Right now we’re assuming that every unit of error that you correct becomes a unit of happiness gained. Let’s say that you’re really hungry, and your Hunger Governor is like “I GIVE THIS A -50!” and then you eat dinner and not only does your -50 go away, but you also get +50 happiness, that kind of feeling where you pat your belly and go “Now that’s good eatin’!”. (That’s my experience, anyway.) But maybe other governors work differently. If you feel like you’re drowning, your Oxygen Governor is like “I GIVE THIS A -1000!”. When you can breathe again, though, maybe you only get the -1000 to go away, and you don’t get any happiness on top of that. You feel much better than you did before, but you don’t feel good. You don’t pat your lungs and go, “Now that’s good breathin’!”
Ultimately, the way to test these ideas would be to build something. In this case, you’d start by building something like a Sim. If you program a lil computer dude with all of our conjectured control systems, does it act like a human does? Or does it keep half-drowning itself so it can get the pleasure of breathing again? Even better, if you build your Sim and it looks humanlike, can you then adjust the parameters to make it half-drown itself? After all, most people do not get their jollies from starving themselves of oxygen, but a few do3, so we ought to be able to explain even rare and pathological behavior by poking and prodding the underlying systems. I don’t think any of this would be easy, but unlike impressionistic research, it at least has a chance of being productive.
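For what it’s worth, here’s the crudest possible version of that Sim. Every governor, target, and gain below is invented; the point is just that the conjecture, including the “different happiness ratios” idea from a moment ago, is concrete enough to simulate and then break:

```python
# A back-of-the-envelope Sim. All governors, targets, and gains are made up;
# this only shows that the conjecture can be turned into something runnable.

import random

class Governor:
    def __init__(self, name, target, happiness_gain):
        self.name = name
        self.target = target
        self.level = target
        self.happiness_gain = happiness_gain   # how much corrected error becomes happiness

    @property
    def error(self):
        return max(0.0, self.target - self.level)

class Sim:
    def __init__(self, governors):
        self.governors = governors
        self.happiness = 0.0

    def step(self):
        for g in self.governors:
            g.level -= random.uniform(0.0, 1.0)        # needs drift downward over time
        neediest = max(self.governors, key=lambda g: g.error)
        self.happiness += neediest.happiness_gain * neediest.error
        neediest.level = neediest.target                # act on the loudest governor

sim = Sim([
    Governor("hunger", target=10, happiness_gain=1.0),  # "now that's good eatin'"
    Governor("oxygen", target=10, happiness_gain=0.0),   # relief, but no bonus happiness
])
for _ in range(100):
    sim.step()
print(round(sim.happiness, 1))   # only the hunger system ever adds to happiness
```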
So far, I’ve been talking like this cybernetics thing is mostly right. To be clear, I expect this to be mostly wrong. This might end up being a totally boneheaded way to think about psychology. That’s fine! The point of a paradigm is to be wrong in the right direction.
The philosopher of science Karl Popper famously said that real science is falsifiable. I think he didn’t go far enough. Real science is overturnable. That is, before something is worth refuting, it has to be worth believing. “I have two heads” is falsifiable, but you’d be wasting your time falsifying it. First we need to stake out some clear theoretical commitments, some strong models that at least some of us are willing to believe, and only then can we go to town trying to falsify them. Cunningham’s Law states, “The best way to get the right answer on the Internet is not to ask a question; it’s to post the wrong answer.” Science works the same way, and the bolder we can make our wrong answers, the better our right answers will be.
This has certainly been true in our history. The last time psychology made a great leap forward was when behaviorism went bust. Say what you will about John Watson and B.F. Skinner, but at least they believed in something. Their ideas were so strong and so specific, in fact, that a whole generation of scientists launched their careers by proving those ideas wrong.4 This is what it looks like to be wrong in the right direction: when your paradigm eventually falls, it sinks to the bottom of the ocean like a dead whale and a whole ecosystem grows up around its carcass.
When we killed behaviorism, though, we did not replace it with a set of equally overturnable beliefs. Instead, we kinda decided that anything goes. If you want to study whether people remember circles better than squares, or whether taller people are also more aggressive, or whether babies can tell time, that’s all fine, as long as you can put up some significant p-values.5 The result has been a decades-long buildup of findings, and each one has gone into its own cubbyhole. We sometimes call these things “theories” or “models”, but when you look closely, you don’t see a description of a system, but a way of squishing some findings together. Like this:
This is what impressionistic research looks like. You can shoehorn pretty much anything into a picture like that, and then you can argue for the rest of your career with people who prefer a different picture. Or, more often, everyone can make their own pictures, add ‘em to the pile, and then we all move on. Nothing gets overturned, only forgotten.
When you work with units and rules, it looks more like this:
It’s not that we should use boxes and lines instead of rainbows. It’s that the boxes and lines should mean something. This diagram claims that the “primary rate gyro package”, whatever that might be, is a critical component of the system. Without it, the “attitude control electronics” wouldn’t know what to do. If you remove any of those boxes and the system still works, you know you got the diagram wrong. (Of course, if you zoom into any of those blocks, you’ll find that each of them contains its own world of units; it’s units all the way down.) This is very different from the kinds of boxes and lines we produce right now, which contain a mishmash of statistics and abstractions:
When you’re doing impressionistic research like that, you can accommodate anything. That’s what I often find when I talk to psychologists about units and rules—they’ll light up and go, “Oh yes! I already do that!” And then they’ll describe something that is definitely not that. Like, “I study attention, and I find that people pay more attention when you make the room cold!” But...what is attention? Where does it go on the blueprint? The fact that we have a noun or a verb for something does not mean that it’s a unit or a rule. Until you know what board game you’re playing, you’re stuck describing things in terms of behaviors, symptoms, and abstractions.6
So, look. I do suspect that key pieces of the mind run on control systems. I also suspect that much of the mind has nothing to do with control systems at all. Language, memory, sensation—these processes might interface with control systems, but they themselves may not be cybernetic. In fact, cybernetic and non-cybernetic may turn out to be an important distinction in psychology. It would certainly make a lot more sense than dividing things into cognitive, social, developmental, clinical, etc., the way we do right now. Those divisions are given by the dean, not by nature.
But really, I like cybernetic psychology because it stands a chance of becoming overturnable. And I’d love to see it overturned! We’d learn a lot in the process, the same way overturning a rock in the woods reveals a whole new world of grubs ‘n’ worms ‘n’ things. I’d love to see other overturnable approaches, too, other paradigms that propose different universes of units and rules. If you hate control systems, that’s fine, what else you got? I like how cybernetics has unexpected implications for learning, animal welfare, and artificial intelligence, that’s fun for me, that tickles the underside of my brain, so if your paradigm also connects things that otherwise seem to have nothing to do with each other, please, tickle away!7
In a healthy scientific ecosystem, this kind of thing would be happening all the time. We’d have lots of little eddies and enclaves of people doing speculative work, and they’d grow in proportion to their success at explaining the universe. Alas, that’s not the world we have, but it’s the one we ought to build. If only we had more Zest for Life!
In his defense, Kuhn didn’t expect his book to blow up like it did, and so what he published was basically a rough draft of the idea. He spent the rest of his career trying to counter people’s misperceptions and critiques, but this didn’t really clear anything up for reasons that will be understandable to anyone who has gone several rounds with a peer reviewer.
It’s worth noting that two of these findings (the “networking makes you feel dirty” effect and the “eyes in the break room make you steal less milk” effect) have been the subject of several failed replications. The networking study was done by a researcher credibly suspected of fraud, but even if there wasn’t foul play, we should expect the results of naive research to be flimsy. If you have no idea how the underlying system works, then you have no idea why your effect occurred or how to get it again. The methods section of your paper is supposed to include all of the details necessary to replicate your effect, but this is of course a joke, because in psychology nobody knows which details are necessary to replicate their effects.
Apparently some folks even like to hold their pee for a long time so they can achieve a pee-gasm, so we should be able to model this as well. (I promise that link is as safe for work as it could be.)
Noam Chomsky, for instance, got famous for pointing out that behaviorism could not explain how kids acquire language. William Powers, the guy who first tried to apply cybernetics to psychology in the 1970s, was still beating up on behaviorism, decades after it had been dethroned. (Powers’ ideas were hot for a second and then went dormant for 50 years and no one knows why.)
Note that judgment and decision making, Prospect Theory, heuristics and biases, etc.—which are perhaps psychology’s greatest success since the fall of behaviorism—are themselves an overturning of expected utility theory.
I’ve now run into a few psychologists who are certain that their corner of the field has this nailed down, but whenever they lay out their theory, this is always the thing missing—there’s nothing left to fill in, nothing that unifies things we would intuitively see as separate, or separates things we would intuitively see as unified. But look, I err on the side of being too cynical. If you’ve got this figured out, great! Please do the rest of psychology next.
This is the most common failure mode for The Mind in the Wheel, but there are two others I’ve encountered. Some people think it’s too old: they’ll go, “Oh, this is just...” and then they’ll name some old-timey research that kinda sorta bears a resemblance and assume that settles things. Or they’ll think it’s too new: “Things are mostly fine right now, so why listen to these internet weirdos?” I find this usually breaks down by age—old people want to dismiss it, young people want to understand it. And hey, maybe the old timers will ultimately be proven right, but they’re definitely wrong to feel so confident about it, because no one knows how this will pan out. I always find it surprising when I meet someone whose job is to make knowledge and yet they seem really really invested in not thinking anything different from whatever they think right now.
2025-04-29 22:45:13
Here’s a fact I find hilarious: we only know about several early Christian heresies because we have records of people complaining about them.1 The original heretics’ writings, if they ever existed, have been lost.
I think about this whenever I am about to commit my complaints to text. Am I vanquishing my enemies’ ideas, or am I merely encasing them in amber, preserving them for eternity?
The poet Paul Valéry said that, for every poem you write, God gives you one line, and you supply the rest.2 Amy Lowell, another poet, described those in-between lines, the ones you provide, as “putty”. You get no credit for God’s lines; all artistry is in the puttying.
In the long run, every writer is misunderstood.
Apparently Sir Arthur Conan Doyle considered his Sherlock Holmes stories “a lower stratum of literary achievement” and thought his novels were far better. (Can you name any?) Borges once remarked, “I think of myself as a poet, though none of my friends do.” (Didn’t even know he wrote poems.) Sylvia Plath derided The Bell Jar as “a pot boiler”. (That is, a piece of art produced to keep the heat on.) Elizabeth Barrett Browning wrote poems about slavery and politics, but now the only poem anyone remembers is the one about how much she loves her husband (You know it: “How do I love thee? Let me count the ways”). After he published The Structure of Scientific Revolutions, Thomas Kuhn spent the rest of his life arguing with his critics (and—purportedly—throwing ashtrays at them).
I remember a young man in Paris after the war—you have never heard of this young man—and we all liked his first book very much and he liked it too, and one day he said to me, “This book will make literary history,” and I told him: “It will make some part of literary history, perhaps, but only if you go on making a new part every day and grow with the history you are making until you become part of it yourself.” But this young man never wrote another book and now he sits in Paris and searches sadly for the mention of his name in indexes.
The Wadsworth Constant says that you can safely skip the first 30% of anything you see online. (It was meant for YouTube videos, but it applies just as well to writing). This is one of those annoying pieces of advice that remains applicable even after you know it. Somehow, whenever I finish a draft, my first few paragraphs almost always contain ideas that were necessary for writing the rest of the piece, but that aren’t necessary for understanding it. It’s like I’m giving someone a tour of my hometown and I start by showing them all the dead ends.
Anyway, this reminds me of my favorite windup of all time:
The internet is full of smart people writing beautiful prose about how bad everything is, how it all sucks, how it’s embarrassing to like anything, how anything that appears good is, in fact, secretly bad. I find this confusing and tragic, like watching Olympic high-jumpers catapult themselves into a pit of tarantulas.
All emotions are useful for writing except for bitterness.
Good writing requires the consideration of other minds—after all, words only mean something when another mind decodes them. But bitterness can consider only itself. It demands sympathy but refuses to return it, sucks up oxygen and produces only carbon dioxide. It’s like sadness, but stuck eternally at a table for one.
Other emotions—anger, fear, contentment—are deep enough to snorkel in, and if you keep swimming around in them, you’ll find all sorts of bizarre creatures that dwell in the depths and demand description. Bitterness, on the other hand, is three inches of brackish water. Nothing lives in it. You can stand in it and see the bottom.
All writing about despair is ultimately insincere. Putting fingers to keys or pen to paper is secretly an act of hope, however faint—hope that someone will read your words, hope that someone will understand. Someone who truly feels despair wouldn’t bother to tell anyone about it because they wouldn’t expect it to do anything. All text produced in despair, then, is ultimately subtext. It shouts “All is lost!” but it whispers “Please find me.”
Or, as the writer D.H. Lawrence put it when he got into painting:
A picture has delight in it, or it isn’t a picture. [...] No artist, even the gloomiest, ever painted a picture without the curious delight in image-making.
So why do writers whine so much about writing? They’re always saying things like: “I hate to write, but I love having written,” or “Being a writer is like having homework every night for the rest of your life” or “There is nothing to writing. All you do is sit down at a typewriter and bleed.” In fact, it’s nearly impossible to trace those quotes back to their original source because apparently every writer who ever lived said something similar.
Maybe it’s because writing is inherently lonely, or maybe it’s because the only people who would try to make a living from writing are messed up in the head.
Personally, I think the reason is far more sinister: making art is painful because it forces the mind to do something it’s not meant to do. If you really want to get that sentence right, if you want that perfect brush stroke or that exquisite shot, then you have to squeeze your neurons until they scream. That level of precision is simply unnatural.
Maybe that’s why so few people write, and why a few people feel compelled to write. Every kind of pain is aversive to most humans, but addictive to a handful of them. Writers are addicted to the particular kind of pain you feel when you’re at a loss for words, and to the relief that comes from finding them.
I mean, here’s Ray Bradbury, sounding like a dope fiend:
If I let a day go by without writing, I grow uneasy. Two days and I am in a tremor. Three and I suspect lunacy. Four and I might as well be a hog suffering the flux in a wallow. An hour’s writing is tonic.3
Remember the adrenochrome conspiracy? It claimed that children produce a kind of magical hormone when under duress, and celebrities stay forever young by feeding upon it. This is false, of course. But what if this actually describes our relationship to artists? What if we all stay alive by feeding on the products of their suffering? What if a great piece of art is like a pearl: an irritant covered in a million attempts to make it go away?
I used to think that the phrase “Love the questions” was from the same pablum factory that produced “Live, laugh, love”—like, there must have been a global glut of throw pillows a few years ago, and someone figured out that you can sell them to Bed, Bath, and Beyond if you embroider them with fake-deep claptrap. Then I found out that “Love the questions” comes from a letter that the poet Rainer Maria Rilke wrote to a kid named Franz Xaver Kappus, who was trying to decide whether to become a writer or a soldier in the Austro-Hungarian army. In context, the quote is far less cringe:
You are so young, so much before all beginning, and I would like to beg you, dear Sir, as well as I can, to have patience with everything unresolved in your heart and to try to love the questions themselves as if they were locked rooms or books written in a very foreign language.
It’s even less cringe when you read the whole letter, and realize that a few sentences later Rilke appears to be counseling the poor Kappus on what to do about his excessive horniness:
Sex is difficult; yes. But those tasks that have been entrusted to us are difficult; almost everything serious is difficult; and everything is serious. If you just recognize this and manage [...] to achieve a wholly individual relation to sex (one that is not influenced by convention and custom), then you will no longer have to be afraid of losing yourself and becoming unworthy of your dearest possession.4
Here’s my point. Some people think that writing is merely the process of picking the right words and putting them in the right order, like stringing beads onto a necklace. But the power of those words, if there is any, doesn’t live inside the words themselves. On its own, “Love the questions” is nearly meaningless. Those words only come alive when they’re embedded in this rambling letter from a famous poet to a scared kid, a kid who is choosing between a life where he writes poems and a life where he shoots a machine gun at Bosnian rebels. The beauty ain’t in the necklace. It’s in the neck.
Maybe that’s my problem with AI-generated prose: it’s all necklace, no neck.
I worked in the Writing Center in college, and whenever a student came in with an essay, we were supposed to make sure it had two things: an argument (“thesis”) and a reason to make that argument (“motive”). Everybody understood what a “thesis” is, whether or not they actually had one. But nobody understood “motive”. If I asked a student why they wrote the essay in front of them, they’d look at me funny. “Because I had to,” they’d say.
Most writing is bad because it’s missing a motive. It feels dead because it hasn’t found its reason to live. You can’t accomplish a goal without having one in the first place—writing without a motive is like declaring war on no one in particular.
This is why it’s very difficult to teach people how to write, because first you have to teach them how to care. Or, really, you have to show them how to channel their caring, because they already care a lot, but they don’t know how to turn that into words, or they don’t see why they should.
Instead, we rob students of their reason for writing by giving it to them. “Write 500 words about the causes of the Civil War, because I said so.” It’s like forcing someone to do a bunch of jumping jacks in the hopes that they’ll develop an intrinsic desire to do more jumping jacks. But that’s not what will happen. They’ll simply learn that jumping jacks are a punishment, and they’ll try to avoid them in the future.
Usually, we try to teach motive by asking: “Why should I, the reader, care about this?”
This is reasonable advice, but it’s also wrong. You, the writer, don’t know me. You don’t have a clue what I care about. The only reasons you can give me are the reasons you could give to literally anyone. “This issue is important because understanding it could increase pleasure and reduce pain.” Uh huh, cool!
What I really want to know is: why do you care? You could have spent your time knitting a pair of mittens or petting your cat or eating a whole tube of Pringles. Why did you do this instead? What kind of sicko closes the YouTube tab and types 10,000 words into a Google doc? What’s wrong with you? If you show me that—implicitly, explicitly, I don’t care—I might just close my own YouTube tab and read what you wrote.
There’s a scene in the movie Her where Joaquin Phoenix realizes that the AI he’s fallen in love with has, in fact, fallen in love with hundreds of other people as well. The AI doesn’t think this is a big deal, but Mr. Phoenix does, as any human would, because we only know how to value things that are scarce. If love doesn’t cost you anything, is it even really love?
The same dynamics are at play in writing, even if we don’t think about them. Imagine how amazing it must have felt to receive a letter from Rilke, to know that a famous poet decided to spend his time writing those words to you, and only you. A mimeographed, boilerplate response wouldn’t have been nearly as potent: “Thanks for writing! I’m afraid I don’t have time to respond personally to every letter, but just remember, love the questions!”
Most writing, of course, isn’t exclusive in terms of access, but in terms of time. There’s something special about every word written by a human because they chose to do this thing instead of anything else. Something moved them, irked them, inspired them, possessed them, and then electricity shot everywhere in their brain and then—crucially—they laid fingers on keys and put that electricity inside the computer. Writing is a costly signal of caring about something. Good writing, in fact, might be a sign of pathological caring.
Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters. It has no context—or, really, it has infinite context, because the context for its outputs is every word ever written.
When you learn something about a writer, say, that Rousseau abandoned all of his children, it inflects the way you understand their writing. But even that isn’t quite the right example—it’s not about filling in the biographical details, it’s about realizing that the thing you’re reading comes from somewhere. Good writing is thick with that coming-from-ness.
Lots of people worry that AI will replace human writers. But I know something the computer doesn’t know, which is what it feels like inside my head. There is no text, no .jpg, no .csv that contains this information, because it is ineffable. My job is to carve off a sliver of the ineffable, and to eff it.
(William Wordsworth referred to this as “widening the sphere of human sensibility, for the delight, honor, and benefit of human nature [...] the introduction of a new element into the intellectual universe.”)
If I succeed, of course, the computer will come to know more and more of my secrets. It’s like I am slowly speaking my password aloud, and eventually the computer might be able to guess the characters that still remain, at which point it will breach all my systems and kill me. My only hope is that my password contains infinite characters, that the remaining dots keep changing their identities, so that no machine, no matter how powerful, can ever guess the rest.
We’ll see if I’m right!
I probably sound like one of those guys who is always like, “AI will never do this, or I’ll eat my hat!!” and then he has to get his stomach pumped because it’s full of hats. I’m not actually interested in fighting a rear-guard action against the machines, because it’s inherently depressing. We’ve already lost a lot of ground, and we’re going to lose more.
If you take that perspective, though, you miss the point. We’ve got a once-in-the-history-of-our-species opportunity here. It used to be that our only competitors were made of carbon. Now some of our competitors are made out of silicon. New competition should make us better at competing—this is our chance to be more thoughtful about writing than we’ve ever been before. No system can optimize for everything, so what are our minds optimized for, and how can I double down on that? How can I go even deeper into the territory where the machines fear to tread, territories that I only notice because they’re treacherous for machines?
Most of the students who came into the Writing Center thought the problem with their essay was located somewhere between their forehead and the paper in front of them. That is, they assumed their thinking was fine, but they were stuck on this last, annoying, arbitrary step where they had to find the right words for the contents of their minds.
But the problem was actually located between their ears. Their thoughts were not clear enough yet, and that’s why they refused to be shoehorned into words.
Which is to say: lots of people think they need to get better at writing, but nobody thinks they need to get better at thinking, and this is why they don’t get better at writing.
The more you think, the closer you get to the place where the most interesting writing happens: that tiny slip of land between “THINGS THAT ARE OBVIOUS” and “THINGS THAT ARE OBVIOUSLY WRONG”. Kinda like this:
For example, after Virginia Woolf finished the first part of To the Lighthouse, she jotted in her diary, “Is it nonsense? Is it brilliance?” In his own diary, John Steinbeck wrote: “Sometimes I seem to do a good little piece of work, but when it is done it slides into mediocrity.” (That work was The Grapes of Wrath.) Francis Bacon, the father of modern science, begins The Great Instauration by wondering whether he’s got a banger or a dud on his hands: “The matter at issue is either nothing, or a thing so great that it may well be content with its own merit, without seeking other recompense.” The first page of the book does make it clear, though, which way Bacon ultimately came down on that question:
I know some very great writers, writers you love who write beautifully and have made a great deal of money, and not one of them sits down routinely feeling wildly enthusiastic and confident. Not one of them writes elegant first drafts. All right, one of them does, but we do not like her very much. We do not think that she has a rich inner life or that God likes her or can even stand her.
I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.
Of course, that includes any post called “Notes on” something, like this very post you’re reading right now. Every writer, whether they know it or not, is subtweeting themselves. Whenever they rail against something, they are first and foremost railing against their own temptation to do that thing. To understand any author, picture them delivering their prose to a mirror.
That’s true for me, anyway. Wait, does no one else do this?
See: Marcionism, Montanism
Valéry was nominated for the Nobel Prize 27 times in 12 different years and never won, so maybe God was trying to make it up to him.
This is from Bradbury’s Zen in the Art of Writing. Here’s the poet Brewster Ghiselin saying the same thing:
Poets speak of the necessity of writing poetry rather than of a liking for doing it. It is a spiritual compulsion, a straining of the mind to attain heights surrounded by abysses and it cannot be entirely happy, for in the most important sense, the only reward worth having is absolutely denied: for, however confident a poet may be, he is never quite sure that all his energy is not misdirected nor that what he is writing is great poetry.
From the translation, I’m not sure if Rilke is talking about sex as in “doing the deed” or sex as in “being a man,” but I choose the more interesting interpretation.
2025-04-15 22:35:58
In 2016, I accidentally became a bit character in UK history.
I had bumbled my way onto a British reality show called Come Dine with Me, where four strangers take turns hosting, attending, and rating each other’s dinner parties, and the person with the highest score at the end of the week wins an extremely modest £1,000. Usually, the show is low-stakes—its version of “drama” is when someone sticks a whole whisk in their mouth. It’s the kind of trashy, easy-viewing TV you might watch while you’re recovering from having your appendix removed.
My episode was different. On the final day, when a contestant named Peter realized he had lost, he delivered a now-iconic denunciation of the winner and kicked the rest of us out of his house. The clip is routinely scrubbed from YouTube for copyright violations, but here’s a version that has lived on (I’m the guy in blue sitting sheepishly on the couch):
This is tame by American standards, of course, where our reality shows involve stripping people naked and dropping them in the woods. But for the Brits, this was as scandalous as the Queen showing cleavage. Peter’s blowup became national and international news. Bootleg versions of the episode racked up millions of views before being taken down. Voice actors, vintage shop employees, and precocious children posted their best Peter impressions, while enterprising Etsy sellers slapped his visage on coasters, t-shirts, spatulas, Christmas jumpers, doilies, and religious candles. Internet citizens turned his rant into an auto-tuned ballad, a ukulele ditty, and an honestly very catchy indie single.
Most memes die, but a few transcend. This one transcended. “You won, Jane” became a permanent part of British memetic vernacular, right up there with “Keep Calm and Carry On”, destined to be resurrected and remixed to fit whatever’s in the headlines. For instance, when covid struck, this appeared:
When England lost a soccer game:
When Keir Starmer became prime minister last year:
When Chappell Roan revealed the original title of her hit song “Good Luck, Babe!”:
If you went to the Edinburgh Fringe Festival in 2023, you could go see live actors reenact the whole Come Dine with Me debacle, with one caveat: “To avoid copyright infringement, we will recreate the episode using sock puppets”.1
Every generation casts down the memetic gods of their forefathers, and so I expect “You won, Jane” to eventually be displaced by Gen Z’s pantheon of rizz and skibidi toilet. But that hasn’t happened yet. Even to this day, although I no longer live in the UK, every once in a while I’ll see some stranger squinting at me, and they’ll walk over, and I’ll secretly be hoping that they’ll say “Hey, I read your blog” but instead they will say, “Hey, were you on that one episode of Come Dine with Me?”
I have resigned myself to the fact that, no matter what I do for the rest of my life, if I am remembered for anything at all, it will be for the thirty seconds I spent sitting idly by while a man ruined his life on national television. I haven’t said much about the whole episode since then, for reasons that will become clear later. But now that we’re nearly 10 years out, it’s time to unburden myself of these secrets I’ve been carrying ever since. Because what they show on TV ain’t the whole story. It ain’t even close.
2025-04-02 00:15:53
I got a bone to pick with an ancient meme. Remember five years ago, when a viral post claimed that you could judge someone’s capacity for “self-governing” by observing their behavior in the grocery store parking lot? Good people return their shopping carts, the theory went, and bad people don’t.
Like everything that comes out of the website 4chan, the Shopping Cart Test was designed to get people’s attention and then make their lives worse. These are the people who brought us Pizzagate, QAnon, Pedobear, the bikini bridge, and a high-profile heist of a flag owned by the actor Shia LaBeouf.1 So this meme, too, did exactly what it was supposed to do: it made people look and then it made them mad. The inevitable backlash came, the backlash to the backlash, the New York Times article, etc. People got bitter, then they got bored, then they moved on.
But not me. I think the 4channers accidentally discovered something useful. They’re right about one big thing: our moral mettle will be tested by dilemmas so small that they’re nearly imperceptible. As fun as it is to toss around Trolley Problems and Lifeboat Ethics, most of us will never actually have to decide whether to kill one to save five, or whether to toss a toddler into the sea to keep the boat afloat. You cannot purchase an experience machine, nor can you move to Omelas. But you will have to decide whether to take 30 seconds to push a cart 50 feet. If there really is a Saint Peter waiting for us at the pearly gates with a karmic dossier, it will be stuffed not with a few grand actions, but with a million mundanities like this one.
Because they’re trying to turn the whole world into a kingdom of trolls, however, the originators of the Shopping Cart Test made it nefarious instead of useful by pointing it in the wrong direction. The test is presented as a cudgel to use against your fellow citizens, a way to judge them worthy of coexistence or condemnation. (In the meme’s own words, anyone who fails the test is “no better than an animal.”) But Saint Peter is not like a district attorney, willing to let you walk into heaven in exchange for kompromat on your accomplices. (“I saw Bill from down the block leave a cart in the handicapped parking spot! Do I get to have eternal life now?”) No, the point of the Shopping Cart Test is not to administer it, but to pass it—to practice a tiny way of being good, because most ways of being good are, in fact, tiny.
There should be many more Shopping Cart Tests, then, and we should be pointing them at ourselves, rather than at each other. So I’ve been keeping track of them—those almost invisible moments when you can choose to do a bit of good or a bit of bad. I think of these as keyholes to the soul, ways of peeking into your innermost self, so you can make sure you like what you see in there. Here are seven of ‘em.
God help the driver who gives me control over the music in the car, because the second I get that Bluetooth connection, I become a madman. I take ‘em on a wild ride through my Spotify, hellbent on showing them just how interesting of a guy I am, and how cool and eclectic my tastes are. “They’re playing authentic medieval instruments,” I shout over the music, “But it’s also mixed with death metal!”
I was once on a third date where we needed to drive somewhere, and when my date connected to the car’s Bluetooth, I figured she would do like I do and show off her discography. Instead, my favorite tunes started coming out of the speakers. “Whoa, you like this too??” I asked. “No,” she said, “I’m playing it because you like it.”
I was awestruck and dumbstruck. I had never even entertained the idea of creating a good experience for someone else. And I had never realized that the only pleasure greater than playing the music you love is other people playing the music you love. The scales fell from my eyes as I thought of all the times I had been in charge and decided, without thinking, to cater to my own tastes: order the food I like, set the temperature to what’s comfortable for me, pack the itinerary full of stuff I want to do. I’ve come to think of this as the Bluetooth Test: when you’re given the smallest amount of power, do you use it to make things nice for everybody, or just yourself?
Anyway, me ‘n’ that girl are married now.
You know that moment when you’re at a party or a conference or whatever and you have no one to talk to, so you sidle up to a circle of people and then stand there awkwardly at the periphery, hoping for a chance to jump in? There is no more vulnerable position than this, to be teetering on the edge of personhood and oblivion, waiting to see whether a jury of your peers will decide that you exist.
My friend Wanda just doesn’t let that happen. If she’s in a circle and someone tries to join, she’ll go “Oh hey everybody this is Adam. Adam, we were just talking about...” and then we all go on normally. If she doesn’t know you, she’ll introduce herself quickly, tell you everybody’s names, and then pick up where the conversation left off.
This is the itsiest bitsiest mercy of all time, but when you’re on the receiving end of it, it feels like an angel has snatched you out of the maw of Hades. That’s why I call it the Circle of Hell Test: when you see someone writhing in social damnation, do you grab their hand, or do you let ‘em burn?
I think most people fail this test because they’re too anxious about their own status, like “Hey man how can I affirm your personhood?? I’m still waiting to see if they affirm my personhood!” But this is the wrong way of looking at it. Bringing someone else into the fold doesn’t cost you status. It gives you status. Taking the floor and then handing it to someone else is a big conversational move, and you look cool when you do it. Wanda ends up seeming like the most high-status person in any conversation for exactly this reason.
No one can convince my friend Micah of anything. He treats conversations like trench warfare—you have to send a full battalion to their deaths if you want to gain an inch of ground. When anybody tries to give him advice, he stares into the distance and waits for them to stop. Meanwhile, Micah is quick to tell other people how to live, and then he gets kinda huffy if you blow him off.
Oh, who am I kidding? Micah is me.
I’m usually skeptical of self-help books, but when I read that John Gottman’s Principle #4 for a Successful Marriage is “let your partner influence you,” I didn’t just feel seen. I felt caught. This is Gottman Test #4, and it applies not just to partners, but to people in general: do you expect to influence others, but refuse to be influenced yourself?
I think I’m so resistant to the idea of being swayed because I feel like I’m made out of opinions. Changing them would be like opening up my DNA and scrambling my nucleotides. That’s why I surround ’em with barbed wire and minefields and machine gunners. This assumes, of course, that I just happen to have all of my cytosines and guanines in exactly the right place, that none of my DNA is useless or mutated, and that every codon is critical—change one, and you change everything. I mean, if I admit my ranking of Bruce Springsteen albums is slightly out of order, then who am I anymore?
But it ain’t like that. Like genes, opinions acquire errors over time, and they have to be perpetually proofread and repaired, or else they start going wonky and you end up with a whole tumorous ideology. Unlike genes, however, those repairs generally have to come from outside. I always assume that it will feel frightening to let this happen, to reel in the barbed wire, to deactivate the minefield, to order the machine gunners to stand down. Instead, it feels like a relief to finally give up and agree that Born to Run is, in fact, superior to Born in the U.S.A.
I recently walked through a train station that was plastered with ads like: “There’s a lot of stigma against disabilities, but we’re here to change that!” and “80% of Americans have a prejudice against people with disabilities. It’s time to lower that number!”
If you actually wanted to reduce prejudice toward people with disabilities, you would never ever run a campaign like that, not in a million years. When you tell everybody that there’s a lot of stigma against something, you stand a pretty good chance of increasing that stigma. Maybe the nonprofit that paid for these ads has a different theory of human behavior, or maybe the ads looked reasonable because that nonprofit cannot actually imagine a future without stigma. Perhaps because if that stigma went away, the nonprofit would have to go, too.2
I think of this as the Codependent Problems Test: do you actually want to solve your problem, or are you secretly depending on its continued existence? If you showed up to fight the dragon and found it already slain, would you be elated or disappointed?
After all, a righteous crusade gives you meaning and camaraderie, to the point where you can become addicted to the crusading itself. It is possible to form an entire identity around being mad at things, and to make those things grow by pouring your rage on them, which in turn gives you more things to be mad at. This is, in fact, the business model for approximately half the internet.
When I was an academic, I used to worry about getting scooped—if someone debuts my idea before I do, they get all the glory. As soon as I stopped publishing in journals and started blogging my research instead, this fear went away. I realized that I wasn’t actually looking for knowledge; I was looking for credit. I was in a codependent relationship with ignorance: I wanted it to keep existing until the exact moment that I, and only I, could make it go away.
Everybody loves my friends Tim and Renee because they are willing to match your freak. If you do something weird, they’ll do something weirder. Dance an embarrassing little jig, talk like you’re a courtier to Louis XIV, pretend you’re from a universe where 9/11 never happened, and they’ll be right there with you: “That’s so crazy! We’re from a universe where Saddam Hussein became a famous lifestyle TikTokker!”
Every moment you’re with another person, you are implicitly asking, “If I’m a little bit weird, will you be a little bit weird too?” And when you’re with Tim and Renee the answer is yes, yes, a thousand times yes. It’s hard to describe how good it feels when someone passes the Match Your Freak Test, to know that no matter how far you put yourself out there, you won’t be left hanging.
Technically this is called “Yes, And,” but in our attempt to mass-produce that idea, we’ve made it mechanical and cringe. (Somehow you don’t get the vibe right when you put people in a circle with coworkers they hate and force them all to play “Zip, Zap, Zop”.) I knew plenty of talented improvisers who would never disobey the letter of improv law, but would still find a way to make it clear that they hated your choices.
Matching someone’s freak isn’t about reluctantly agreeing to their reality. It’s about declining the opportunity to judge them, and choosing instead to do something that’s even more judge-able. It’s the opposite of being a bully—it’s seeing someone with a mustard stain on their shirt, and instead of pointing and laughing, you grab a bottle of mustard and squirt it all over yourself.
In fourth grade, the teacher handed us all a blank map of the US and told us to color in every state that we had visited. Immediately, my mission was clear: I needed to be the kid who had been to the most states. As I was sharpening my Crayolas, though, I saw this kid Ian coloring in swaths of the Northeast—as if, knowing this showdown would one day come, he had gone on a road trip through New England specifically to juice his stats with all those tiny states.
Desperate, I hatched a plan: I had once flown from Ohio to Florida to visit my uncle, so hadn’t I technically been “in” all of those intervening states? This led to a pitched metaphysical debate with the teacher, and many thought experiments later (“What if you drive through Pennsylvania but you never get out of the car?”, “What if you walk through Delaware on stilts, so that your feet don’t technically touch the ground?”), she relented and allowed me to identify all of my “flyover” states in a different color. That put me just barely ahead of Ian, who soon moved out of town, I assume because of his shame, or to up his numbers for our inevitable rematch.
That story has stuck in my head for decades because that stupid, petty instinct has never left me. I am constantly failing the Pointless Status Test—whenever there’s some way I could consider myself better than other people, no matter how stupid or arbitrary it is, I feel compelled to compete.
The problem isn’t the competition itself; it’s only a vice when it doesn’t produce anything useful. So I’m proud of my fourth-grade self for demonstrating creativity in the face of adversity. I just wish I had used it to do something other than win a status game that existed entirely inside my own head.
That’s why, in my opinion, we should feel the same derision toward people who engage in pointless competition as we feel toward people who embezzle public funds. We all benefit from the public goods that society provides—safety, trust, knowledge—and so we all owe society some portion of our efforts in return. If instead you squander your talents on the acquisition of purely positional goods, you are robbing the world of its due. It’s like commandeering a nuclear power plant so you can heat up a Hot Pocket.
In college, I crammed my schedule so full that my GCal was one solid block of red: classes, extracurriculars, jobs, committees, shows, research. My improv group would routinely rehearse from 11:15pm to 1:15am because that was literally the only time left. It felt like every dial of my life was permanently turned up to 11 and it was great.
Well, mostly great. One spring, Maya, a good friend of mine, was performing a play she had written for her senior thesis. A sacred rule among theater kids is that you go to each other’s shows—otherwise, you might have to perform your Vietnam-era reimagining of The Music Man to no one. There was only one night I could possibly make it to Maya’s play, and when that night arrived I just...didn’t go. It wasn’t because I forgot. I decided. I wanted a few hours to finish an essay, to read, to respond to emails, to think, to sit motionless on my couch, and while I could have delayed all those things to some later time and nothing bad would have happened, I didn’t have the gumption to do it.
When Maya came by my room later that night, upset, and rightfully so, I realized for the first time: extreme busyness is a form of selfishness. When you’re running at 110% capacity, you’ve got nothing left for anybody else. Having slack in your life is prosocial, like carrying around spare change in your pocket in case someone needs it. My pockets were permanently empty—I was unable to bake anyone a birthday cake, proofread their essay, pick them up at the airport, or even, if I’m honest, think about them more than a few seconds. I was failing the Too Busy to Care Test. “Oh, you’re going through a breakup and need someone to talk to? No problem, just sign up for a 15-minute slot on my Calendly.”
I once read a study where they found that people’s perception of the care available to them was a better predictor of their mental health than the care they actually received. That made a lot of sense to me. It’s not every day that you need to call someone at 2am and bawl your eyes out. But every day you wonder: if I called, would someone pick up?
I’ve heard that assholes can occasionally transform into angels, but I’ve never seen it happen. Any improvement I’ve ever witnessed—in myself, in others, doesn’t matter—has been, to borrow Max Weber’s description of politics, “the strong and slow boring of hard boards.” That’s because there’s no switch in the mind marked “BE A BETTER PERSON”. Instead, becoming kinder and gentler is mainly the act of noticing—realizing that some small decision has moral weight, acting accordingly, and repeating that pattern over and over again.
It’s much easier, of course, to wait for a Road to Damascus moment, to put off any self-improvement to some dramatic day when the sky will open and God will reprimand you directly, so you can do all of your repenting and changing of ways at once. For me, anyway, that day is always permanently and conveniently located in the near future, so in the meantime I can enjoy my “Lord make me good, but not yet” phase.
If you accept that nothing is going to happen on the way to Damascus, to you or anyone else, if you let go of the myth of an imminent moral metamorphosis, you can instead enjoy a life lived under expectations that are both extremely consistent and extremely low. It is always possible to become a better person—even right this second!—but only a very very slightly better one. Whatever flaws you have today, you will probably have them tomorrow, and same goes for your loved ones. But you can shrink ‘em (your flaws, that is, not your loved ones) by the tiniest amount today, a bit more tomorrow, and a bit more after that.
It’s like you’re trying to move across the country, but each day, you can only move into the house that’s right next to yours. It might be months before you even make it to another zip code. But if you keep carrying your boxes from house to house, soon enough you’ll be on the other side of town, and then in the next state over, and then the next one after that. The most important thing to remember is: keep track of those states, because you never know when Ian might return.
In their earlier, more innocent days, they also brought us lolcats and rickrolling.
See also: the Shirky Principle.
2025-03-18 22:05:39
A couple years ago, I got a job interview at a big-name university and I had to decide whether to go undercover or not. On paper, I looked like a normal candidate. But on the internet, I was saying all sorts of wacko things, like how it’s cool to ditch scientific journals and just publish your papers as blog posts instead.
If the university wanted to hire me, I was pretty sure they wanted the normal me, not the wacko me. But I was also starting to suspect the wacko me was the real me. This was a big problem, because that job came with a paycheck, an office, and the approval of my peers. Could I maybe Trojan Horse myself into their department and only reveal my true nature after we’d signed all the documents (“Ha ha, you fools! Couldn’t you tell I’ve gone bananas??”)? Or could I lock up my wackiness, maybe forever, or at least until I had tenure, when I could let it loose again?
Those lies were extremely tempting, but they were also lies. I knew that if I went undercover, at some point they were going to show me a picture of my wacko self and ask, “Do you know this man?” and I would have to say, “Never heard of him.” As much as I love having health insurance, I couldn’t bring myself to do that.
So in the end I went as the real me, a normal guy who was in the process of becoming a wacko, like a caterpillar who had snuggled up inside his cocoon and was soon to emerge as a, I dunno, a teeny tiny walrus in a fedora. I gave my normal talk about my dissertation—a project I loved and was happy to present—but I ended by saying, “I don’t think we need a hundred more papers like this one. We need to turn psychology into a paradigmatic science, and I think I can help do that.” I told them why I thought psychology hadn’t made much progress in the past few decades, and how it could, and how I might be wrong about everything, but I would hopefully be wrong in a useful way. When people asked me if I planned to publish my papers in journals, I said no, and I explained why.
It probably doesn’t sound like much, but this was the scariest thing I’ve ever done. This is the academic equivalent of getting on stage and mooning the audience. You’re not supposed to say this stuff, especially not at a job interview. You’re risking a fate worse than death—people might think you’re weird.
That’s how it felt when I decided to do all that, anyway. But on the actual day, I didn’t feel afraid at all. I felt free. Invincible, even. I had already condemned myself to death, so what could they do to me now? In the unlikely event that they liked the real me, great! And if they didn’t, well, better to find that out sooner rather than later. The only real danger was if they hired the fake me and then I had to pretend to be the fake me for the rest of my life.
I expected to last like 15 minutes before someone said, “Oh wow we’ve made a terrible mistake, please leave.” Instead, people were nice. Some of them were excited—if nothing else, they had never seen someone get on stage and drop their drawers before. A couple people were like “Between you and me, I like what you’re doing, but I don’t know about the others.” I never encountered those supposed others, but maybe they waited until I left their office to bust out laughing. Some were skeptical, but in a “I’m taking you seriously” kind of way. And some people didn’t react at all, as if I was being so weird that they couldn’t even perceive it, like I was up there slapping my bare buttcheeks and they were like, “Um yeah I had a question about slide four?”
And then...I got the job!!
Haha no just kidding I didn’t get the job, are you nuts?
But fortunately it didn’t hurt at all, it actually felt great!
Haha no it sucked! Of course it hurts when people hang out with you all day and then decide they don’t ever want to hang out with you again. I might be a wacko, but I’m not a psychopath.
And yet the hurt was only skin deep. It felt like I got a paper cut when I expected to be sawn in half. I guess I imagined that being myself would make me feel saintly but grim, like one of those martyrs in a Renaissance painting who has a halo around his head and a sword through his belly. Kinda like this:
Instead, it made me feel like this:
This felt like a triumph, but it also felt confusing and silly. Why was it so hard to be myself in front of my potential employer? Why was I even considering trying to bamboozle my way into a job that I wasn’t sure I wanted?
Maybe it’s because, historically, doing your own thing and incurring the disapproval of others has been dangerous and stupid. Bucking the trend should make you feel crazy, because it often is crazy. Humans survived the ice age, the Black Plague, two world wars, and the Tide Pod Challenge. 99% of all species that ever lived are now extinct, but we’re still standing. Clearly we’re doing something right, and so it behooves you to look around and do exactly what everybody else is doing, even when it feels wrong. That’s how we’ve made it this far, and you’re unlikely to do better by deriving all your decisions from first principles.
Here’s an example1. Cassava is tasty, nutritious, and easy to grow, but it is also unfortunately full of cyanide. Thousands of years ago, humans developed a lengthy process that renders the root edible, which involves scraping, grating, washing, boiling, waiting, and baking. It’s a huge pain in the ass and it takes several days, but if you skip any of the steps, you get poisoned and maybe die. Of course, the generations of humans who came up with these techniques had no idea why they worked, and once they perfected the process, the subsequent generations would also have no idea why they were necessary.
The knowledge of cassava processing only survived—and indeed, humans themselves only survived—because of conformity, tradition, and superstition. The mavericks who were like, “You guys are dummies, I’m gonna eat my cassava right away” all perished from the Earth. The people who passed their genes down to us were the ones who were like, “Yes of course I’ll do a bunch of pointless and annoying tasks just because the elders say so and it’s what we’ve always done.” We are the sons and daughters of sheeple.
No wonder it’s hard to be yourself, even long after we’ve gotten all the cyanide out of our cassava. Looking normal, pleasing others, squelching your internal sense of self and surrendering to the standards of your society—that all feels like a matter of life and death because, until recently, it was.2
It’s funny that none of this dawned on me even though I’ve been staring at this fact for years. My home field, social psychology, got famous for demonstrating all the ways that people will conform to the norm: the Milgram shock experiments, the Asch line studies, the smoke-filled room study, the (now discredited) Stanford Prison Experiment, etc. Come to our classes and we’ll show you a hilarious Candid Camera clip where they send stooges to face the wrong way in the elevator, and the hapless civilians around them cave to peer pressure and end up facing the same way, too3:
“Ha ha! Look at what those dopes will do!” I thought to myself, profoundly missing the point. I didn’t realize at the time that demonstrating people’s willingness to conform is like demonstrating their willingness to doggy paddle when they get tossed in a lake—that’s what a lifesaving instinct looks like.
I guess what I’m saying is: everybody tells you to be yourself, but nobody tells you it’ll make you feel insane.
Maybe there are some lucky folks out there who are living Lowest Common Denominators, whose desires just magically line up with everything that is popular and socially acceptable, who would be happy living a life that could be approved by committee. But almost everyone is at least a little bit weird, and most people are very weird. If you’ve got even an ounce of strange inside you, at some point the right decision for you is not going to be the sensible one. You’re going to have to do something inadvisable, something alienating and illegible, something that makes your friends snicker and your mom complain. There will be a decision tucked behind glass that’s marked “ARE YOU SURE YOU WANT TO DO THIS?”, and you’ll have to shatter it with your elbow and reach through.
You shouldn’t do that thing on a whim—the snickers and complaints are often right, and evolution put the glass there for a reason. Nor should you do it in the hopes that it’ll make your life easier, because on the whole, it won’t. Sticking to the paved path puts you beyond reproach. If you, say, hit a pothole and go flipping into oncoming traffic, well, hey, that’s not on you. Go off-road, though, and you’re vulnerable to criticism. If you crash, it’s your fault. Why couldn’t you just drive on the street like a normal person?
When you make that crazy choice, things get easier in exactly one way: you don’t have to lie anymore. You can stop doing an impression of a more palatable person who was born without any inconvenient desires. Whatever you fear will happen when you drop the act, some of it won’t ultimately happen, but some will. And it’ll hurt. But for me, anyway, it didn’t hurt in the stupid, meaningless way that I was used to. It hurt in a different way, like “ow!...that’s all you got?” It felt crazy until I did it, and then it felt crazy to have waited so long.
Which is to say: once you moon the audience, you don’t have to worry anymore, because the worst is already over. All that’s left is to live the rest of your life in a world where people have seen your bare butt. And I can tell you, it’s better than you might expect.
This example comes from Joe Henrich’s The Secret of Our Success, pp. 97-99.
Our instinct to conform is so strong that it can even reprogram our taste buds. Once, at a party, some of my friends handed me a glass of wine and were like “Here, try this!” and the way they said it made me think there was something wrong with the wine, like it had gone off or something, and they wanted me to see how bad it was. (These are the kind of people I hang out with.) So I took a sip and went, “Oh, that’s rancid!”
My friends got quiet. One of them looked especially horrified. “That’s my favorite wine,” she said. She had brought it to share with everybody.
I took another sip and tried to backpedal. “Actually it’s not so bad! It’s good, even!” I said, looking insane. “I’m bad at tasting things,” I added, helplessly. (I didn’t yet have documented scientific evidence that I have a poor sense of smell and taste.) I wasn’t lying—it really had tasted bad at first, and as soon as I knew it was supposed to taste good, it did.
Whoever uploaded this clip called it “Groupthink,” but that’s…a different thing.