2025-03-04 21:39:48
This is the quarterly links and updates post, a selection of things I’ve been reading and doing for the past few months.
Everybody knows about the Milgram shock experiments, but I had never heard of the Hofling hospital study, which basically did Milgram in the field.
A guy introducing himself as “Dr. Smith” would call into the ward and ask a nurse to administer a mega dose of “Astroten” to a patient. The nurses had never met Dr. Smith (because he was made up), they had never dispensed Astroten (because it was also made up), they weren’t supposed to take orders over the phone, and the bottle itself clearly indicated that Dr. Smith’s requested dose was double the daily limit. When this situation was described to a sample of nurses in a separate study, almost all of them insisted that they would refuse the order. And yet 21/22 nurses who were subjected to the real situation made it all the way to the patient’s room with their overdose in tow before the research team stopped them.
In a sort-of replication 11 years later, 16/18 nurses protested in some way when a “little-known” doctor called and asked them to administer way too much Valium to a patient. Still, most of the nurses got pretty close to giving it, and nearly half of them said they would have continued if the doctor insisted.
People have tried to debunk the Milgram studies, but those debunkings have failed because they miss the whole point of experiments like these. It’s not that people are blindly obedient. It’s that we live in a world where the folks around us are usually acting normal and doing reasonable things, and it would be impolite, weird, and annoying to second-guess them all the time, so we generally don’t.
People always think the end times are around the corner (see my recent posts: 1, 2), but to be fair, they sometimes have good reasons. For instance, around the year 1500, there were terrifying reports of “monstrosities” being born—abominable creatures never seen before on Earth, whose arrival could only herald the coming apocalypse. The grandaddy of these was the “Papal Ass,” a Frankenstein of different beasts that supposedly washed up on the Tiber River in 1496.
Word on the street was that the Papal Ass was sent by God to demonstrate the depravity of Pope Alexander VI, AKA Rodrigo de Borgia. Pope Alex had indeed done some non-popely things, like having a bunch of illegitimate children, maybe killing off some of his political enemies, and making his 17-year-old son Cesare a cardinal. He is perhaps best known for appearing in Assassin’s Creed II, where he zaps the player with a magic staff.
On the other hand, sometimes weird births could be good omens. For instance, in 1595, a child was born in Silesia (now mostly Poland) with a golden tooth. A certain Dr. Horst investigated and made this report:
at the birth of the child the sun was in conjunction with Saturn, at the sign Aries. The event, therefore, though supernatural, was by no means alarming. The golden tooth was the precursor of a golden age, in which the emperor would drive the Turks from Christendom, and lay the foundations of an empire that would last for a thousand years.
On Christmas Eve 2020, the New York State Representative Brian Kolb wrote an op-ed warning people not to drive drunk over the holidays. On New Year’s Eve 2020, he got drunk and drove his state-issued SUV into a ditch.
Many people blame scientific stagnation on ideas getting harder to find. A new paper offers another explanation: maybe as more people have gone into research, the average quality of researchers has fallen. This makes some sense—if your company has 100 researchers and suddenly decides to hire 1,000 more, eventually you’ll have to lower your standards.
That said, I don’t think the evidence they present is all that convincing, because they assume they can measure researcher ability by looking at how much people make when they leave their research jobs. As in, if people are leaving their postdoc positions to start billion-dollar companies, they must be pretty smart. But if they’re leaving to go dig ditches, their research probably sucked. This is obviously pretty fraught—it assumes that people always choose the job that maximizes their income, and that the highest earners outside the research sector would be the top performers inside it. It also focuses on the average quality of researchers, when in fact science is a strong-link problem, so we should be asking whether the best has gotten worse.
Before I started a blog, I thought I would write a post like “How to like more things” and I never did, and now I’m glad, because someone recently wrote a better version: How to Like Everything More.
Smallpox is the only human disease that’s ever been eradicated. It used to kill an estimated 5 million people every year; now it kills zero. That’s in large part thanks to the smallpox vaccine developed by Edward Jenner. And yet Philosophical Transactions—one of the most prestigious scientific outlets of its day—rejected Jenner’s paper describing his results. And here’s how people felt about his vaccine:
The other day I was like, “wait, who lost money in Bernie Madoff’s ponzi scheme?” I did not expect the list of victims to include...Nobel Prize winning author of Night, Elie Wiesel. He apparently entrusted both his life savings and his foundation’s finances to Madoff, who then made them disappear. Here’s the punishment Wiesel recommended for his erstwhile friend:
I would like him to be in a solitary cell with a screen, and on that screen ... every day and every night there should be pictures of his victims, one after the other after the other, always saying, “Look, look what you have done.” ... He should not be able to avoid those faces, for years to come.
He added, “This is only a minimum punishment.”
On a single day in London, 1902, “fourteen people slipped on banana skins on Fleet Street and the Strand alone, and were injured enough to need treatment.”
I love this post from Uri Bram at Atoms vs. Bits. Hoeing a field is no fun, and so we’d love to know how much hoeing is necessary for maximizing crop yield, and do exactly that much. But nobody knew the optimal amount of hoeing until Jethro Tull (the inventor, not the band) designed the world’s simplest experiment. Just plant some seeds all in a row, hoe a little bit around the first seed, hoe a little bit more around the second one, and so on, like this:
Then watch ‘em grow. Wherever their growth plateaus, that’s how much you need to hoe. Someone could have done this a thousand years earlier, and if they had, we would have way more turnips by now.
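If you want the logic spelled out, here’s what Tull’s design boils down to—a minimal sketch in Python, with yield numbers I made up for illustration, not his actual data:

```python
# Tull's design in miniature: hoe each plant a different amount, then find
# the point where extra hoeing stops improving the yield.
# (Yields below are invented for illustration.)

hoeing_hours = [0, 1, 2, 3, 4, 5, 6]                   # effort spent per plant
yields =       [2.0, 3.1, 3.9, 4.4, 4.6, 4.65, 4.66]   # resulting crop yield

def plateau_point(xs, ys, min_gain=0.1):
    """Return the first x after which an extra unit of effort
    buys less than `min_gain` in yield."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if (y1 - y0) / (x1 - x0) < min_gain:
            return x0
    return xs[-1]

print(plateau_point(hoeing_hours, yields))  # -> 4: hoeing past ~4 hours is wasted effort
```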
One turnip I’m glad we have: , a Substack that posts rarely but always posts good. Their most recent: Mechanisms Too Simple for Humans to Design
Another sporadic-but-never-miss Substack is Desystemize. I thought I was tired of hearing about AI, but actually I was tired of hearing the same takes over and over. His recent post (If You’re So Smart, Why Can’t You Die?) is very good and comes with some useful tools for thinking that I hadn’t encountered before.
The sociologist Claude S. Fischer casts doubt on the idea of a “loneliness epidemic”:
We can provisionally conclude that, over the last half-century or more, friends have remained roughly constant, probably even expanding their roles in Americans’ lives. Yet, as we saw, that long history has usually been accompanied by repeated alarms about the loss of friendship.
Speaking of things that were supposed to be running out, the journalist Ed Conway intended to write a series about “the world’s lost minerals.” He now reports that he failed: “So far, we haven’t really, meaningfully run out of, well, pretty much anything.”
According to one analysis, after the film adaptation of Lolita came out in 1962, a few hundred parents decided to name their daughters “Lolita”. I wonder if they...watched the movie. Speaking of names, here are some things named after real people:
PageRank (Larry Page)
Taco Bell (Glen Bell)
shrapnel (Henry Shrapnel)
I would add boycott (Charles Boycott)
The French scientist Pierre Borel was one of the first people to argue that life exists on other planets. In A New Treatise Proving a Multiplicity of Worlds, published in 1657, he suggests that aliens are real and they’ve come to Earth—we call them “birds”. Specifically, birds of paradise:
This Bird is so beautiful, that no one in the Earth is to be compared to it; its figure is of so rare a form, and so extraordinary, that never the like hath been found […] no body ever saw its eggs, nor its nest; and it’s asserted, that it lives by the Air; this Bird never being found upon Earth, is it not consonant to Reason, that it may come from some other Starre, where it lives and breeds
When people are like “psychology is simply too complicated, we’ll never understand it much better than we do now,” I think of Dr. Borel, who was like “we’ve never seen this bird’s eggs, so we never will, so it must be from another planet.”
When I wrote about the fraud scandals surrounding Dan Ariely and Francesca Gino, some folks wondered how anyone could get away with fraud for very long. Doesn’t anyone notice? One answer: yes, people notice all the time, but nobody’s willing to speak up about it. A graduate student named Zoé Ziani originally noticed inconsistencies in Gino’s papers, but according to Ziani’s retrospective, her dissertation committee told her to bury her doubts instead of airing them:
After the defense, two members of the committee made it clear they would not sign off on my dissertation until I removed all traces of my criticism of [Gino’s paper] [...] one committee member implied that a criticism is fundamentally incompatible with the professional norms of academic research. She wrote that “academic research is like a conversation at a cocktail party”, and that my criticism was akin to me “storming in and shouting ‘you suck’”
This is a classic case where we were missing the second bravest person.
Speaking of which: “ is launching a program to promote investigations into research fraud and other serious misconduct.” If you’re in Ziani’s position, consider reaching out to them.
Most histories of the replication crisis (including mine) begin in 2011 with Daryl Bem publishing a paper in the Journal of Personality and Social Psychology “proving” that ESP is real. I didn’t realize that Bem was old news by then: in 1974, Nature published a paper confirming that the magician Uri Geller could read minds and see through walls.
The book was banned by the Catholic church in 1739.
Lastly, my friends Slime Mold Time Mold have a new series out called The Mind in the Wheel, which proposes a cybernetic paradigm for psychology. I’m extremely excited about this project and I’ll have a lot more to say about it in the future.
News ‘n’ updates from the burgeoning science-on-the-internet movement. Original post here; email me if you have an update.
Alex Chernavsky is doing lots of self-experiments and is looking for collaborators. Some of his recent studies include: the effects of creatine on reaction time, and potassium for weight loss. I love seeing work like this; if you join forces with Alex (or do similar stuff on your own), please send it to me and I’ll link to it here.
Another replication of putting your toaster in the dishwasher:
is pulling together people interested in putting memetics to good use—reach out to him if you want to collaborate.
, an official Experimental History-recommended Substack, publishes a terrific Best of Science Blogging series. Now they’re sharing their revenue with everyone they publish.
Asimov Press has a list of stories they’d like to publish. That includes a piece on “Alternatives to Peer Review,” which I of course would love to read myself. Asimov’s editors are lovely, so if you’re itching to do some science writing, don’t sleep on this.
A little out of date, but you can also pitch Works in Progress.
I was on Derek Thompson’s Plain English podcast talking about the lack of progress in psychology.
Chris Turner is a very funny standup comedian and also my friend; I went on his Godforsaken podcast to talk about the time we got chained up in a Hungarian basement as Brexit happened.
I contributed a piece to THE LOOP, which I would describe as a cross between Teen Vogue, a scientific journal, and a fever dream of the internet.
And finally, from the vault: in 2022, I started wondering, “is every popular movie really a remake these days?” I analyzed the data and discovered: yes, but it’s not only movies. It’s everything.
2025-02-18 23:08:00
It’s a grim time for science in America. The National Science Foundation might be forced to fire half its staff, grants are being frozen and reviewed for ideological purity, and universities may see their cut of those grants reduced by 40%.
We were bound to end up here sooner or later. Science funding has been riding an atomic shockwave since 1945, buoyed by the bomb and the Cold War and the conviction that we could vanquish our enemies if we just kept cutting checks to the geeks. Now the specter of Soviet submarines isn’t so scary anymore, and our most feared and hated enemies happen to share the same country with us—we’re out of our Red Dawn era and into our Civil War era. And so people are wondering: why are we spending billions on basic research, again?
There’s a good answer to that question, but nobody’s giving it. We all assumed that science was self-evidently worthwhile, thus allowing our arguments to atrophy and leaving us with two half-assed defenses. On the one hand, we have romantics who think science is important because “something something the beauty of the universe triumph of the human spirit look at this picture of a black hole!” And on the other hand we’ve got people who think science is like having a Geek Squad that you can call upon to solve your problems (“My tummy hurts! Scientists, fix it!!”). These points aren’t completely wrong, but they’re achingly incomplete—why should we pay for people to stand agog at the wonder of the universe, and why would we let them do anything that isn’t immediately related to some pressing problem?
And then there’s an alarmingly large contingent who thinks there isn’t any argument to make. In their heart of hearts, they think the NSF and the NIH are, in fact, charitable organizations. In this view, science funding is just welfare for eggheads, and scientists are a bunch of Dickensian beggars going, “Please sir, can you spare a few pence so I can run my computational models?” Witness, for instance, the Johns Hopkins cardiologist who thinks that all NIH-funded scientists owe an annual thank-you note to the American public.1
Even the folks who have a soft spot for science often think of it as a nice-to-have—you know, let’s first make sure we build enough battleships, mail enough checks to the right people, etc., and then if there’s a little left over we can toss a few bones to the nerds. If we’re all really good, we can have some science, as a treat.
If this is all we got, it’s no wonder that people wanna smash the science piggybank and distribute the pennies to their pet projects instead. The case for science should be a slam dunk; we’ve turned it into a self-own. So lemme take a crack at it.
There is only one way that we improve our lot over time: we get more knowledge. That’s it. Everything that has ever made our lives better has come from collecting, cultivating, and capitalizing on information.
When I say “knowledge,” I mean everything we’ve figured out, from the piddling (“cotton is more comfortable than burlap”) to the profound (“energy cannot be created or destroyed”). I mean the things you have to learn in school (“kidneys filter your blood”) and the things that now go without saying (“it’s better if the king isn’t allowed to kill whoever he wants”). Some of this knowledge looks science-y, like when engineers use the theory of relativity to make GPS work, and some of it looks folksy, like when a lady on Long Island invents a little plastic table that prevents pizza from sticking to the top of the box.
No matter where it comes from, it’s all part of the same great quest to de-stupify ourselves. If you’re doing anything remotely good—basically, unless you are shaking down local businesses for protection money or blowing up UNESCO World Heritage sites—you are either enabling, conducting, or cashing in on the search for knowledge, and therefore you’re part of the project.
We don’t talk about the history of our species this way. In school, I learned things like “the Egyptians made hieroglyphics” and “the Romans did aqueducts” and “the Pilgrims wore hats,” but nobody mentioned that if I had been born an ancient Egyptian, a Roman, or a Pilgrim, there’s a 50/50 chance I would have died before I turned 15. I might have starved or frozen, or maybe I would have been executed for believing in the wrong god, or maybe I’d be done in by microscopic invaders that I didn’t even know existed. (Making it to your quinceañera was once a much greater reason for celebration, not that anyone but the king could afford streamers and cake.) And if I wasn’t dead, I would be working—farming, shepherding, child soldiering, etc.—not sitting in social studies making dioramas.
Nor did anyone explain why I got to have a different life than my ancestors did: I was born into a society where we know more. We know how to grow enough food for everybody, how to keep the cold out, how to fend off the microscopic invaders, and how to get along—more or less—with people who worship different gods.2 None of this happened by chance, and none of it came for free. In fact, for most of our history, our stock of knowledge didn’t increase at all, and the people who dared to add to it were often ridiculed and sometimes killed.
Despite all that, we have made a lot of progress in the millennia-long project of diminishing our ignorance, and that’s why I get to eat focaccia and play Call of Duty, while my ancestors had to eat moldy bread and play a game called “hide from the marauding Visigoths”. But the project isn’t finished yet. It’s not even close. There’s still so much suffering, and we could escape it if only we knew how.
That’s why we fund science. We all pitch in to hunt down the knowledge that can’t be found any other way. We don’t seek the knowledge that will turn us a profit tomorrow—that’s what businesses are for—but the knowledge that will support a permanently better life. We do science that is speculative and strange because that’s where the breakthroughs will come from, the frontiers of knowledge where our intuitions stop working, where predictions fail, and where the things that seem sensible are unlikely to be important. We do this with public funding because it produces public goods. The things we discover are too important to be owned; they must be shared.
So yes, it’s beautiful, but that’s not why we do it. Yes, it’s practical, but not right away. And no, it’s not charity. You don’t “save” money by skimping on science, just like you don’t save money by sending second graders to the coal mines instead of the classroom. You could think of it as investment because it does pay off in the long run, but even that undersells it. Pooling our resources to discover new truths about the universe so that we can all have better lives, to strike back against disease, suffering, poverty, and violence, to reduce ignorance for the benefit of all—that’s literally the most badass thing we do.
Our past wasn’t inevitable and our future isn’t guaranteed. We have to choose to keep increasing our knowledge. That choice might seem like an easy one, but we have to contend with three tempting but false arguments for choosing to do the opposite, three Sirens of Entropy trying to seduce us into running the ship of civilization aground.
The first is rejection. Knowledge comes with tradeoffs—the chemistry that cures can also poison, the physics that builds space rockets can also make cruise missiles, and so on. Plus, the past always looks idyllic—as long as you don’t look too closely—and so it always seems like history has just recently gone wrong. Anyone could be lulled into believing that these tradeoffs cannot be managed or improved and must be avoided entirely, that the solution to our problems is less knowledge, rather than more. This view does require you to ignore or deny things like the near disappearance of extreme poverty, the end of child labor, the historically low rates of violent death, etc., but there will always be people willing to rise to that challenge, because the appearance of one problem will always be more salient than the disappearance of another.
The second is complacency. When the lines keep going up and to the right, it’s easy to assume that’s just what they do, and to forget that every increase ultimately comes from our expanding stock of knowledge. You can slip into thinking that our living standards rise of their own accord, that death and disease recede because we want them to, that the GDP fairy puts 4% growth under our pillows to reward us for being such good boys and girls. When you think that progress is a perpetual motion machine, you’ll see no need to top up its gas tank.
And the third is the pie apocalypse. Every time we grow the pie that we all share, we also have to figure out how to split it fairly. We get pissed off when the products of progress are disproportionately captured by the rich, and well we should—it’s like playing Hungry Hungry Hippos against someone who gets to mash two buttons instead of one. But it’s easy to focus on the pie-splitting problem and forget the pie-growing problem entirely. We can thus descend into an all-out war for pie, where the only way to get a bigger slice is to steal someone else’s. Meanwhile, the pie shrinks and shrinks, and we end up fighting over crumbs.3
So it’s a good idea to get smarter, and we can all contribute to that mission. But that doesn’t mean we’re doing a good job right now. People are right to be mad about the state of science funding in America: the fraud, the waste, the low ambitions, the dogmatism, politicking, and rent-seeking. Maybe this chaos is at least a chance to sunset some of the most outrageous parts of the system, so long as we’re committed to figuring out the best way to spend our science dollars, rather than throttling or lavishing funds the way a king dispenses dukedoms and decapitations. (“Sussex for my real friends, no necks for my sham friends.”)
Fortunately, this ain’t hard to do. There’s so much low-hanging fruit that we’re tripping over it. Here are three of the easiest, most obvious moves that we could make right away, and that we should have done half a century ago.
For-profit publishers make their money by privatizing public goods. The government pays the scientists to do the research, then publishers paywall it, and finally the government pays again so the scientists can read what they wrote. This gets obfuscated because publishers don’t charge the government directly. Instead, universities fork over millions for journal access, then charge it back to the government as part of the much-hated “indirect costs”. The taxpayers foot the bill for all of this, but they don’t even get to read the studies themselves unless they have a .edu email address.4
Everybody knows this is ridiculous. If this business didn’t already exist and you tried to pitch it on Shark Tank, Mark Cuban would laugh in your face. It only worked out this way because some schemers realized that academics are a captive market, so they bought up all the journals in the 70s.5 Individual scientists have tried to upend the system through self-immolation—that is, by refusing to publish in or review for journals owned by Elsevier and the like—but it hasn’t worked, because it’s hard to convince everyone to immolate at the same time.
This is the kind of coordination problem that only the government can solve. We could score such an easy win by paying for the minor costs of publishing directly (hosting, copyediting, the pittances occasionally paid to editors, etc.), rather than paying middlemen to do them at a ~40% markup. I’ve got plenty of issues with scientific journals, but if we’re going to have them, we shouldn’t also set money on fire. So if you want to smash things, smash this one!
By one estimate, principal investigators spend 44% of their time applying for grants. It takes an average of 116 hours to fill out a single NSF proposal. Most applications get rejected, and so most of that time is wasted. These costs don’t appear in the budget because nobody can say “if you pay me, I will spend half of my time trying to make sure that you will pay me again in the future.” But that’s what they’ll do, because failure to secure a grant is death for an academic. So whether federal agencies realize it or not, by making their applications so laborious and competitive, they are paying people to spend almost half of their time trying to get paid.
And that doesn’t include the cost of grant panels who have to sift through those applications, the bureaucracy that has to process the paperwork, the mandatory Institutional Review Boards that take six months to tell you whether it’s okay to ask someone if they own a lawnmower, etc. The government requires all of these things, and since the government is funding the whole enterprise, it also pays for them.
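To put a rough number on just the application-writing part, here’s a back-of-envelope sketch: the 116 hours per proposal comes from the estimate above, while the ~25% success rate and the 2,000-hour work year are assumptions of mine, not figures from anywhere official.

```python
# Back-of-envelope only. 116 hours per proposal is from the estimate above;
# the ~25% success rate and 2,000-hour work year are assumed for illustration.

hours_per_proposal = 116
success_rate = 0.25
work_year_hours = 2000

hours_per_funded_grant = hours_per_proposal / success_rate
print(hours_per_funded_grant)                    # 464 PI-hours of writing per funded grant
print(hours_per_funded_grant / work_year_hours)  # ~0.23: nearly a quarter of a work year, before panels and paperwork
```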
You might think we adopted all of these policies because we had evidence suggesting they would make us better off, but we didn’t. We adopted them because they sounded good at the time, and why would you check something that sounds good?
Whenever people complain that a lot of government-funded science ends up un-cited and unused, or that it’s hideously ideological6 or pointlessly incremental, I gotta laugh because those are the projects we picked. We got exactly what we asked for; it’s like ordering something from Amazon and then being angry when it arrives. The problem isn’t the product—it’s the picking.
If we want better outcomes, we should pick different projects. The whole point of funding science is to discover things that wouldn’t get discovered anywhere else. Pharmaceutical companies can make plenty of profit turning molecules into medicines, but they can’t go looking into the mouths of Gila monsters, asking “Any good drugs in here?” That’s how we got GLP-1 agonists, which are now used by millions of people.
So public funding should go to projects that are foundational, speculative, long-term, useful but unsexy, or big if true. Some of those projects can be identified by committee, but many can’t, and so we should pick them some other way: lotteries, golden tickets, trust windfalls, fast grants, bounties, prizes, retroactive public goods funding7, “people not projects,” and moonshots that are actually moonshots, to name just a few. We should be placing some of our bets outside the scientific consensus so that we don’t waste billions on one idea that turns out to be wrong. And we should really try to figure out how one guy funded almost every single person who won a Nobel Prize for molecular biology between 1954 and 1965...19 years before they won. It would be cool to do that again!
Many of these methods cost less than the standard “solicit one million pages of applications” procedure. If we tracked them long term—not “did this get into a good journal”, but “did this end up mattering”—we could figure out which ones work better for what ends, and we could get more science for less money. It is a national embarrassment that the agencies who fund experiments do basically zero experiments themselves. Would you trust a pulmonologist who smokes?
I understand why people think we can balance the budget by shrinking science, and I understand why we spend our limited science funding in such a cowardly way. We want everything to be accountable, and it’s hard to tell the difference between actual accountability and the mere appearance of it. Nobody gets blamed for “saving” money, even when it costs more in the long term. And nobody gets blamed for doing things by the book, even when the book turns out to be fiction.
But paying for science is different from paying for other things. When you pay for a bridge, you get a bridge. When you pay for Medicare, you get Medicare. When you pay for an F-35 fighter jet, you...pay an extra $500 billion, and then you get an F-35 fighter jet. In the short run, though, you can’t know what your science dollars are going to get you. That’s the whole point of doing the science!
The more we fight against that fact, the more we demand legibility in the form of applications and metrics, the more we try to squash and slash and cut just for the hell of it, the less we get the thing we’re actually trying to buy. There’s a sort of Heisenberg Uncertainty Principle at work here: you can’t spend your money wisely and make sure you’re spending your money wisely at the same time. It would be cool to only run experiments that were guaranteed to both work and teach us something, just like it would be cool to only buy stocks that are guaranteed to increase in value. Until that becomes possible, we’re gonna have to take some risks.
In the long run, however, you know exactly what you’re going to get. The only thing that lifts our boats is the rising tide of knowledge. One of the basic functions of the government is to help make that happen. It requires some patience and some money. In return, it gives us literally everything.
I’m fascinated by this logic. The government pays a lot of people to do a lot of things, so why are researchers uniquely indebted to the American public? Should your local police officer also send you a thank-you note? Should the Secretary of State? Do we all deserve a thoughtful box of mixed nuts from the guy who trims the bushes in front of the DMV?
It’s fitting that in Cixin Liu’s Three Body Problem series, hostile aliens try to ensure our defeat by halting our scientific progress.
This same tension between growing and splitting, by the way, is one of the core problems of negotiation.
The Biden administration tried to fix this by requiring government-funded research to be open-access, but they got outfoxed by the publishers, who started charging authors for publishing rather than charging readers for reading. The last paper I published in one of these journals made us pay $12,000 to make it “open-access”. Publishing my papers on Substack, on the other hand, costs me $0.
As one publisher recalls, “When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. Then you market the thing and your salespeople go out there to sell subscriptions, which is slow and tough, and you try to make the journal as good as possible. That’s what happened at Pergamon. And then we buy it and we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.”
Although we’re probably not funding all that much hideously ideological science, see this analysis and clarification
I’ve only seen very online/crypto-y people try to do this, but there’s no reason we couldn’t use it for regular stuff
2025-02-05 02:45:54
There are two kinds of problems in the world: strong-link problems and weak-link problems.1
Weak-link problems are problems where the overall quality depends on how good the worst stuff is. You fix weak-link problems by making the weakest links stronger, or by eliminating them entirely.
Food safety, for example, is a weak-link problem. You don’t want to eat anything that will kill you. That’s why it makes sense for the Food and Drug Administration to inspect processing plants, to set standards, and to ban dangerous foods. The upside is that, for example, any frozen asparagus you buy can only have “10% by count of spears or pieces infested with 6 or more attached asparagus beetle eggs and/or sacs.” The downside is that you don’t get to eat the supposedly delicious casu marzu, a Sardinian cheese with live maggots inside it.
It would be a big mistake for the FDA to instead focus on making the safest foods safer, or to throw the gates wide open so that we have a marketplace filled with a mix of extremely dangerous and extremely safe foods. In a weak-link problem like this, the right move is to minimize the number of asparagus beetle egg sacs.
Weak-link problems are everywhere. A car engine is a weak-link problem: it doesn’t matter how great your spark plugs are if your transmission is busted. Nuclear proliferation is a weak-link problem: it would be great if, say, France locked up their nukes even tighter, but the real danger is some rogue nation blowing up the world. Putting on too-tight pants is a weak-link problem: they’re gonna split at the seams.
It’s easy to assume that all problems are like this, but they’re not. Some problems are strong-link problems: overall quality depends on how good the best stuff is, and the bad stuff barely matters. Like music, for instance. You listen to the stuff you like the most and ignore the rest. When your favorite band releases a new album, you go “yippee!” When a band you’ve never heard of and wouldn’t like anyway releases a new album, you go…nothing at all, you don’t even know it’s happened. At worst, bad music makes it a little harder for you to find good music, or it annoys you by being played on the radio in the grocery store while you’re trying to buy your beetle-free asparagus.
Because music is a strong-link problem, it would be a big mistake to have an FDA for music. Imagine if you could only upload a song to Spotify after you got a degree in musicology, or memorized all the sharps in the key of A-sharp minor, or demonstrated competence with the oboe. Imagine if government inspectors showed up at music studios to ensure that no one was playing out of tune. You’d wipe out most of the great stuff and replace it with a bunch of music that checks all the boxes but doesn’t stir your soul, and gosh darn it, souls must be stirred.
Strong-link problems are everywhere; they’re just harder to spot. Winning the Olympics is a strong-link problem: all that matters is how good your country’s best athletes are. Friendships are a strong-link problem: you wouldn’t trade your ride-or-dies for better acquaintances. Venture capital is a strong-link problem: it’s fine to invest in a bunch of startups that go bust as long as one of them goes to a billion.
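If it helps to see the asymmetry laid out, here’s a toy sketch—my framing, not anything formal: treat overall quality as the minimum of the links in a weak-link problem and the maximum in a strong-link problem, and watch which intervention matters under which rule.

```python
# Toy model: overall quality is min() of the links in a weak-link problem
# and max() of the links in a strong-link problem.

links = [2, 5, 6, 9]                 # quality of individual links, arbitrary scale

raise_the_floor   = [5, 5, 6, 9]     # weak-link fix: improve the worst link (2 -> 5)
raise_the_ceiling = [2, 5, 6, 12]    # strong-link fix: improve the best link (9 -> 12)

print(min(raise_the_floor), max(raise_the_floor))      # 5 9  -> only the weak-link score moves
print(min(raise_the_ceiling), max(raise_the_ceiling))  # 2 12 -> only the strong-link score moves
```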
Figuring out whether a problem is strong-link or weak-link is important because the way you solve them is totally different:
When you’re looking to find a doctor for a routine procedure, you’re in a weak-link problem. It would be great to find the best doctor on the planet, of course, but an average doctor is fine—you just want to avoid someone who’s going to prescribe you snake oil or botch your wart removal. For you, it’s great to live in a world where doctors have to get medical degrees and maintain their licenses, and where drugs are thoroughly checked for side effects.
But if you’re diagnosed with a terminal disease, you’re suddenly in a strong-link problem. An average doctor won’t cut it for you anymore, because average means you die. You need a miracle, and you’re furious at anyone who would stop that from happening: the government for banning drugs that might help you, doctors who refuse to do risky treatments, and a medical establishment that’s more worried about preventing quacks than allowing the best healers to do as they please.
Science is a strong-link problem.
In the long run, the best stuff is basically all that matters, and the bad stuff doesn’t matter at all. The history of science is littered with the skulls of dead theories. No more phlogiston nor phlegm, no more luminiferous ether, no more geocentrism, no more measuring someone’s character by the bumps on their head, no more barnacles magically turning into geese, no more invisible rays shooting out of people’s eyes, no more plum pudding, and, perhaps saddest of all, no more little dudes curled up inside sperm cells:
Our current scientific beliefs are not a random mix of the dumbest and smartest ideas from all of human history, and that’s because the smarter ideas stuck around while the dumber ones kind of went nowhere, on average—the hallmark of a strong-link problem. That doesn’t mean better ideas win immediately. Worse ideas can soak up resources and waste our time, and frauds can mislead us temporarily. It can take longer than a human lifetime to figure out which ideas are better, and sometimes progress only happens when old scientists die. But when a theory does a better job of explaining the world, it tends to stick around.
(Science being a strong-link problem doesn’t mean that science is currently strong. I think we’re still living in the Dark Ages, just less dark than before.)
Here’s the crazy thing: most people treat science like it’s a weak-link problem.
Peer reviewing publications and grant proposals, for example, is a massive weak-link intervention. We spend ~15,000 collective years of effort every year trying to prevent bad research from being published. We force scientists to spend huge chunks of time filling out grant applications—most of which will be unsuccessful—because we want to make sure we aren’t wasting our money.
These policies, like all forms of gatekeeping, are potentially terrific solutions for weak-link problems because they can stamp out the worst research. But they’re terrible solutions for strong-link problems because they can stamp out the best research, too. Reviewers are less likely to greenlight papers and grants if they’re novel, risky, or interdisciplinary. When you’re trying to solve a strong-link problem, this is like swallowing a big lump of kryptonite.
(Peer review also does a pretty bad job at stamping out bad research too, oops.)
Giant replication projects—like this one, this one, this one, this one, and this one—also only make sense for weak-link problems. There’s no point in picking some studies that are convenient to replicate, doing ‘em over, and reporting “only 36% of them replicate!” In a strong-link situation, most studies don’t matter. To borrow the words of a wise colleague: “What do I care if it happened a second time? I didn’t care when it happened the first time!”
This is kind of like walking through a Barnes & Noble, grabbing whichever novels catch your eye, and reviewing them. “Only 36% of novels are any good!” you report. That’s fine! Novels are a strong-link problem: you read the best ones, and the worst ones merely take up shelf space. Most novels are written by Danielle Steel anyway.
(See also: Psychology might be a big stinkin’ load of hogwash and that’s just fine.)
I think there are two reasons why scientists act like science is a weak-link problem.
The first reason is fear. Competition for academic jobs, grants, and space in prestigious journals is more cutthroat than ever. When a single member of a grant panel, hiring committee, or editorial board can tank your career, you better stick to low-risk ideas. That’s fine when we’re trying to keep beetles out of asparagus, but it’s not fine when we’re trying to discover fundamental truths about the world.
(See also: Grant funding is broken. Here’s how to fix it.)
The second reason is status. I’ve talked to a lot of folks since I published The rise and fall of peer review and got a lot of comments, and I’ve realized that when scientists tell me, “We need to prevent bad research from being published!” they often mean, “We need to prevent people from gaining academic status that they don’t deserve!” That is, to them, the problem with bad research isn’t really that it distorts the scientific record. The problem with bad research is that it’s cheating.
I get that. It’s maddening to watch someone get ahead using shady tactics, and it might seem like the solution is to tighten the rules so we catch more of the cheaters. But that’s weak-link thinking. The real solution is to care less about the hierarchy. If you spend your life yelling at bad scientists, you’ll make yourself hoarse. If you spend your life trying to do great science, you might forever change the world for the better, which seems like a better use of time.
Here’s our reward for a generation of weak-link thinking.
The US government spends ~10x more on science today than it did in 1956, adjusted for inflation. We’ve got loads more scientists, and they publish way more papers. And yet science is less disruptive than ever, scientific productivity has been falling for decades, and scientists rate the discoveries of decades ago as worthier than the discoveries of today. (Reminder, if you want to blame this on ideas getting harder to find, I will fight you.)
We should have seen this coming, because the folks doing the strongest-link research have been warning us about it for a long time. One of my favorite genres is “Nobel Prize winner explains how it would be impossible to do their Nobel Prize-winning work today.” For instance, here’s Peter Higgs (Nobel Prize in Physics, 2013):
Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.
Sydney Brenner (Nobel Prize in Physiology or Medicine, 2002) on Frederick Sanger (Nobel Prize in Chemistry, 1958 & 1980):
A Fred Sanger would not survive today's world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labeled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.
Carol Greider (Nobel Prize in Physiology or Medicine, 2009):
“I’m not sure in the current climate we have for research funding that I would have received funding to be able to do the work that led to the Nobel Prize,” Greider said at a National Institutes of Health (NIH) event last month, adding that her early work on enzymes and cell biology was well outside the mainstream.
John Sulston (Nobel Prize in Physiology or Medicine, 2002):
I wandered along to the chemistry labs, more or less on the rebound, and asked about becoming a research student. It was the 60s, a time of university expansion: the doors were open and a 2:1 [roughly equivalent to a B] was good enough to get me in. I couldn’t have done it now.
Jeffrey C. Hall (Nobel Prize in Physiology or Medicine, 2017):
I admit that I resent running out of research money. [...] In my day you could get a faculty job with zero post-doc papers, as in the case of yours truly; but now the CV of a successful applicant looks like that of a newly minted full Professor from olden times. [...] US institutions (possibly also those in other countries) behave as though they and their PIs are entitled to research funding, which will magically materialize from elsewhere: ‘Get a grant, serf! If you can't do it quickly, or have trouble for some years — or if your funding doesn't get renewed, despite continuing productivity — forget it!’ But what if there are so many applicants (as there are nowadays) that even a meritorious proposal gets the supplicant nowhere or causes a research group to grind prematurely to a halt? [...] Thus, as I say ‘so long,’ one component of my last-gasp disquiet stems from pompously worrying about biologists who are starting out or are in mid-career.
It goes on and on like this. When the people doing the best work are saying “hey there’s no way you could do work like this anymore,” maybe we should listen to them.
I’ve got a hunch that science isn’t the only strong-link problem we’ve mistakenly diagnosed as a weak-link problem. It’s easy to get your knickers in a pinch about weak links—look at these bad things!! They’re so bad!! Can you believe how bad they are??
It’s even easier to never think about the strong links that were prevented from existing. The terrible study that gets published sounds like nails on a chalkboard, but the terrific study that never got funded sounds like nothing at all. Purge all the terrible at the cost of the terrific, and all you’re left with is the mediocre.
Of course, it’s also easy to make the opposite mistake, to think you’re facing a strong-link problem when in fact you’ve got a weak-link problem on your hands. It doesn’t really matter how rich the richest are when the poorest are starving. Issuing parking tickets is pointless when people are getting mugged on the sidewalk. Upgrading your wardrobe is a waste when you stink like a big fart.
Whether we realize it or not, we’re always making calls like this. Whenever we demand certificates, credentials, inspections, professionalism, standards, and regulations, we are saying: “this is a weak-link problem; we must prevent the bad!”
Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!”
When we get this right, we fill the world with good things and rid the world of bad things. When we don’t, we end up stunting science for a generation. Or we end up eating a lot of asparagus beetles.
I originally heard these terms on this podcast discussing this book.
2025-01-29 03:33:23
Here’s a loop I get stuck in all the time: I get lots of good questions and comments from readers, and so I’ll start working on a response, but then I’ll get sucked in because I want to give a thoughtful answer, and next thing I know I’ve spent the whole day on a single email. Then I’ll be like “oh no, if I keep this up I’ll never writ…
2025-01-21 23:18:29
The human mind is like a kid who got a magic kit for Christmas: it only knows like four tricks. What looks like an infinite list of biases and heuristics is in fact just the same few sleights of hand done over and over again. Uncovering those tricks has been psychology’s greatest achievement, a discovery so valuable that it’s won the Nobel Prize twice (1, 2), and in economics, no less, since there is no Nobel Prize for psychology.1
And yet, the best trick in the whole kit is one that most people have never heard of. It goes like this: “when you encounter a hard question, ignore it and answer an easier question instead.” Like this—
Psychologists call this “attribute substitution,” and if you haven’t encountered it before, that clunker of a name is probably why. But you’ve almost certainly met its avatars. Anchoring, the availability heuristic, social proof, status quo bias, and the representativeness heuristic are all incarnations of attribute substitution, just with better branding.
The cool thing about attribute substitution is that it makes all of human decision making possible. If someone asks you whether you would like an all-expenses-paid two-week trip to Bali, you can spend a millisecond imagining yourself sipping a mai tai on a jet ski, and go “Yes please.” Without attribute substitution, you’d have to spend two weeks picturing every moment of the trip in real time (“Hold on, I’ve only made it to the continental breakfast”). That’s why humans are the only animals who get to ride jet skis, with a few notable exceptions.
The uncool thing about attribute substitution is that it’s the main source of human folly and misery. The mind doesn’t warn you that it’s replacing a hard question with an easy one by, say, ringing a little bell; if it did, you’d hear nothing but ding-a-ling from the moment you wake up to the moment you fall back asleep. Instead, the swapping happens subconsciously, and when it goes wrong—which it often does—it leaves no trace and no explanation. It’s like magically pulling a rabbit out of a hat, except 10% of the time, the rabbit is a tarantula instead.
I think a lot of us are walking around with undiagnosed cases of attribute substitution gone awry. We routinely outsource important questions to the brain’s intern, who spends like three seconds Googling, types a few words into ChatGPT (the free version) and then is like, “Here’s that report you wanted.” Like this—
Lots of jobs have no clear stopping point. Doctors could always be reading more research, salespeople could always be making more cold calls, and memecoin speculators could always be pumping and dumping more DOGE, BONK, and FLOKI.2 When your work day isn’t bookended by the hours of 9 and 5, how do you know you’re doing enough?
Simple: you just work ‘til it hurts. If you click things and type things and have meetings about things until you’re nothing but a sludge pile in a desk chair, nobody can say you should be working harder. Bosses love to use this heuristic, too—if your underlings beat you to the office every morning and outlast you in the office every evening, then you must be getting good work out of them, right?
But of course, that level of output doesn’t feel satisfying. It can’t. That’s the whole point—if you feel good, then obviously you had a little more gas left to burn, and you can’t be sure you pushed yourself hard enough.
Perhaps there are some realized souls out there who make their to-do lists for the day, cross off each item in turn, and then pour themselves a drink and spend their evenings relaxing and contemplating how well-adjusted they are. I’ve never met them. Instead, everybody I know—myself included—reaches midnight with three-fourths of their to-dos still undone, flogging themselves because they didn’t eat enough frogs. None of us realize that we’ve chosen to measure our productivity in a way that guarantees we’ll fall short.
I think a lot of us, when pressed, justify our self-flagellation as motivational, rather than pathological. If you let yourself believe that you’ve succeeded, you might be tempted to do something shameful, like stop working. But as the essayist Tim Kreider puts it:
Idleness is not just a vacation, an indulgence or a vice; it is as indispensable to the brain as vitamin D is to the body, and deprived of it we suffer a mental affliction as disfiguring as rickets.
Which is to say, every day I wake up and go, “Rickets, please!”
Speaking of games you can never win—
Sometimes my friend Ramon3 gets stressed about how much he’s achieved in his life, so to make sure he’s “on track,” he looks up the résumés of his former college classmates and compares his record with theirs.
Ramon has accomplishments coming out the wazoo, but he always discovers he’s not on track. You know how some colleges will let you take the SAT multiple times and submit all your scores, and then they’ll take your best individual Reading and Math tests and combine them into one super-score? Ramon does the same thing for his high-achieving friends: he combines the greatest accomplishment from each of his old rivals into a sort of Frankenstein of nemeses, a mythical Success Monster who has written a bestselling book and started a multi-million dollar company and married a supermodel and just finished a term as the Secretary of Housing and Urban Development.
I think Ramon’s strategy is pretty common. It’s hard to tell whether you’re doing the right things. It’s a lot easier to look around and go, “Am I on top?” On top of what? “Everything.”
If you ask me to judge the state of the economy, I’ll give you an answer so quick and so confident that you can only assume I keep close track of indicators like “advance wholesale inventories” and “selected services revenue.” What I’m really doing is flashing through a couple stylized facts in my head, like “my cousin got laid off last month, seems bad” and “gas was $4.10 last time I filled up, that’s expensive,” and “the pile of oranges at the grocery store seemed a little smaller and less ripe than usual—supply chain??”
In fact, I don’t even need three data points. I can judge the state of the economy with a single fact: if my guy’s in the Oval Office, I feel decent. If my enemy is in there, I feel despondent. This is apparently what everybody else is doing, because you can watch these feelings flip in real time:
Two things jump out at me here. First, sentiment swings after the election, rather than the inauguration, meaning people are responding to a shift in vibes rather than policy. Second, those swings are 50-75% as large as the drop that happened at the beginning of the pandemic. When the opposing side wins the presidency, it feels almost as bad as it does when every business closes down, the government orders people to stay at home, and a thousand people die every day.
Here is actual GDP for that same span of time:
Pretty steady growth, except for one pandemic-sized divot that lasted a whole six months—and half of that time was recovery, rather than recession. GDP isn’t a tell-all measure of the economy, of course, but it’s a lot better than checking whether the president is wearing a blue tie or a red tie.
I was once in one of those let’s-all-go-around-the-table situations where everybody says their name and something about themselves, and in this version, for whatever reason, our avuncular facilitator wanted us all to reveal where we went to college. After each person announced their alma mater, the man would nod reverently and go “Oh, good school, good school,” as if he had been to all of them, like he had spent his whole life as a permanent undergraduate doing back-to-back bachelors from Berkeley to Bowdoin.
This kind of thing flies because we all think we know which colleges are the good ones. But we don’t. Nobody knows! All we know is which colleges people say are good, and those people are in turn relying on what other people say. The much-hated US News and World Report is at least explicit about this—the biggest component of their college rankings is the “Peer Assessment,” which is where they send out a survey to university presidents that basically says, “I dunno, how good do you think Pomona is?” It’s one big ouroboros of hearsay, which sounds like the kind of thing Dante would have put in the fourth ring of hell.
It has to be this way, because how could you ever know how “good” a college is? Do an Undercover Boss-style sting where you sit in on Physics 101 and make sure the professor knows how to calculate angular momentum? Sneak into the dorms and measure whether the beds are really “twin extra long” or just “twin”? Stage an impromptu trolley problem in the quad and check whether the students would kill one person to save five?
(And besides, what is a “good college” good for? Neuroscience? Serving ceviche in the dining hall? Making it all go away when the son of a billionaire drives his Lambo into the lake?)
Judging quality is often expensive and sometimes impossible, and that’s why we resort to judging reputation instead. But it’s easy to feel like you’ve run a full battery of tests when in fact you’ve merely taken a straw poll. So when someone is like “This place has a great nursing program,” what they mean is “I heard someone say this place has a great nursing program,” and that person probably just heard someone else say it has a great nursing program, and if you trace that chain of claims back to its origin you won’t find anyone who actually knows anything—no one will be like “Oh yeah, I personally run bleeding through these halls all the time and I always get prompt and effective treatment.”4
My friend Ricky once got tapped to do something very cool that I’m not allowed to talk about—basically, he was recruited to his profession’s equivalent of the NFL. Then, before Ricky even showed up to practice, and through no fault of his own, the team folded and his opportunity disappeared.
Ricky was bummed for weeks, and who wouldn’t be? But if you think about it for a second, Ricky’s disappointment gets a little more confusing. Ricky is right back where he was before he got the call, which is a pretty good place. Actually, he’s even better off—now he knows the bigwigs are looking at him, and that they think he’s got the juice. Sure, a good thing almost happened, and it’s too bad that it didn’t, but what about all the bad things that also could have happened: the team could never have thought about him at all, he could have shown up and broken his leg the first day, his teammates could have bullied him for being named after a character from a Will Ferrell movie, and so on, forever. There’s an infinite number of worse possible outcomes, so why think about this one and feel sad? I understand it’s “a human reaction,” Ricky, but...should it be?
Ricky essentially lived the real-life version of this old story from the psychologists Danny Kahneman and Amos Tversky:
Of course, this is people guessing about how Mr. D and Mr. C would feel, not actual reports from the men themselves. But let’s assume that everybody’s right and Mr. C is the one kicking himself because he almost made it aboard. Why is it extra upsetting to miss your flight by 5 minutes rather than 30? C and D are both equally un-airborne. Neither of them budgeted enough time to get to the airport, and both of them have to buy new tickets. It’s easy to imagine how Mr. C could have arrived in time, but it’s also easy to imagine him wearing a hat or being a chef or doing the worm in the airport terminal, so what does it matter that there was some imagined universe where he got on his flight? That ain’t the universe we live in.
All that is to say: Ricky, I'm so sorry for saying all this to you when you called me sobbing. Please text me back.
Like every heuristic, attribute substitution is good 90% of the time, and the problems only arise when you use it in the wrong situation. Kinda like how grapefruit juice is normally a delicious part of a balanced breakfast, but if you’re taking blood pressure medications, it can kill you instead.
Fortunately, there’s an antidote to attribute substitution. In fact, there’s two. They are straightforward, free, and hated by all.
Use telekinesis
A foolproof way to stop yourself from making stupid judgments is to avoid judgment altogether. When someone asks you how the economy is doing, just go “Gosh, I haven’t the faintest.”
The problem with this strategy is it requires a superhuman level of mental fortitude—if you’re capable of pulling it off, you’ve probably already ascended to a higher plane. You know how in movies whenever someone is using a psychic power—say, telekinesis—and their face gets all strained and their eyes start bugging out and their nose starts bleeding, and then after they’ve, like, lifted a car off of their friend, they collapse from exertion? That’s what it feels like to maintain uncertainty. Conclusions come to mind quickly and effortlessly; keeping them out would require playing a perpetual game of mental whack-a-mole.
Even if we can never whack all the moles, though, it’s still good practice to whack a few. Keeping track of what you know and what you don’t know is just basic epistemic hygiene—it’s hard to think clearly unless you’ve done that first, just like it’s hard to do pretty much any job if you haven’t brushed your teeth for two years. Separating your baseless conjectures from your justified convictions is also a recipe for avoiding pointless arguments, since most of them boil down to things like “I like it when the president wears a blue tie” vs. “I like it when the president wears a red tie.”
Plus, maintaining the appropriate level of uncertainty prevents you from becoming a one-man misinformation machine. A couple weeks ago, somebody asked me “What was the first year that most homes in New York City had indoor plumbing?” I didn’t know the answer to this question, and yet somehow I still found myself saying, matter-of-factly, “I think, like, 1950.” Why did I do that? Why didn’t I just say, “Gosh, I haven’t the faintest”? Am I a crazy person? Did I think I could open my mouth and the Holy Spirit would speak through me, except instead of endowing me with the ability to speak in tongues, the Divine would bless me with plumbing trivia? I inserted one unit of bullshit into everyone’s heads for no reason at all, and on top of that they now also think I’m some kind of toilet expert.
So we could all stand to cultivate a little more doubt. Ultimately, though, trying to prevent unwanted judgments by remaining uncertain is a bit like trying to prevent unwanted pregnancies by remaining abstinent: it works 100% of the time, until it doesn’t.
Wear clean underwear and eat a healthy breakfast
The other solution to attribute substitution is to make your judgments consciously and purposefully, rather than relying on whatever cockamamie shortcut your subconscious wants to take. When I taught negotiation, this was called “defining success,” and Day 1 was all about it. After all, how can you get what you want if you don’t know what you want?
Day 1 always flopped. Students hated defining success. It was like I was telling them to wear clean underwear and eat a healthy breakfast, and they were like “yeah yeah we’ve done all that.” But then it turned out most of them were secretly wearing yesterday’s boxers and their breakfast was three puffs on a Juul.
One student, let’s call him Zack, came up to me after class one day, asking how he could make sure to get an A. “I know I haven’t turned in most of my assignments so far,” Zack admitted, “But that’s because I’ve been getting divorced and my two startups have been having trouble. Anyway, could I do some extra credit?”
I didn’t have the guts to tell Zack that trying to get an A in my class was a waste of his time, and he should instead focus on putting out the various fires in his life. Nor did I have the guts to tell him that he shouldn’t get an A in my class, because obviously he hadn’t learned the most important thing I was trying to teach him, which was to get his priorities straight.
I can’t blame Zack—it’s not like I ever reordered my life because someone showed me some PowerPoint slides. This is one of those situations where you can’t reach the brain through the ears, so perhaps this lesson is best applied straight to the skull instead. Supposedly Zen masters sometimes hit their students with sticks as a way of teaching them a lesson, like “Master, what’s the key to enlightenment?” *THWACK*. There’s something about getting a lump on your head that drives the point home better even than the kindest, clearest words. “Your question is so misspecified that it shows you need to rethink your fundamental assumptions” just doesn’t have the same oomph as a crack to the cranium.
Most of us don’t have the benefit of a Zen master, but fortunately the world is always holding out a stick for us so we can run headlong into it over and over. All we have to do is notice the goose-eggs on our noggins, which is apparently the hardest part. Zack’s marriage was imploding, his startups were going under, he was flunking a class that was kinda designed to be un-flunkable—this was a guy who had basically pulped his skull by sprinting full-tilt into the stick, and yet he was still saying to himself, “Maybe I just need to run into the stick harder.”
I don’t know what more Zack needs, but for me, if I’m gonna stop running into the stick, I have to realize that I’m the kind of person who will, by default, spend 90 minutes deciding which movie to watch and 9 seconds deciding what I want out of life. I gotta ask myself: if I’m busting my ass from sunup to sundown, what am I hoping for in return? A thoroughly busted ass?
It feels stupid to ask myself these questions, because the answers either seem obvious or like foregone conclusions, but they aren’t. It’s like I’m in one of those self-driving jeeps from Jurassic Park and I’m heading straight toward a pack of Velociraptors and I’m just like, “Welp, I guess there was no way to avoid this,” as if I didn’t choose to accept an eccentric old man’s invitation to a dinosaur island.
The OG psychologist William James famously claimed that babies, because they have no idea what’s going on around them, must experience the world as a “blooming, buzzing confusion.” As they grow up and learn to make sense of things, the blooming and buzzing subside.
This idea is intuitive, beautiful, and—I suspect—wrong. Confusion is a sophisticated emotion: it requires knowing that you don’t know something. That kind of meta-cognition is difficult for a grownup, and it might be impossible for someone who still can’t insert their index finger into their own nose. Whenever I hang out with a baby, that certainly seems true—I’m confused about what to do with them, while they’re extremely certain that my car keys belong in their mouth.
Confusion, like every emotion, is a signal: it’s the ding-a-ling that tells you to think harder because things aren’t adding up. That’s why, as soon as we unlock the ability to feel confused, we also start learning all sorts of tricks for avoiding it in the first place, lest we ding-a-ling ourselves to death. That’s what every heuristic is—a way of short-circuiting our uncertainty, of decreasing the time spent scratching our heads so we can get back to what really matters (putting car keys in our mouths).
I think it’s cool that my mind can do all these tricks, but I’m trying to get comfortable scratching my head a little longer. Being alive is strange and mysterious, and I’d like to spend some time with that fact while I’ve got the chance, to visit the jagged shoreline where the bit that I know meets the infinite that I don’t know, and to be at peace sitting there a while, accompanied by nothing but the ring of my own confusion and the crunch of delicious car keys.
Technically, there is also no Nobel Prize for economics. There is only the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, established in 1968. In 2005, one of the living Nobels described the prize as “a PR coup by economists to improve their reputation,” which of course it is, much like the Nobel Prizes are themselves a PR coup by a guy who got rich selling explosives. Anyway, this is a helpful fact to have in your pocket if you’re ever hanging out with economists and you’d like to make things worse.
Prediction: when they reboot The Three Stooges in 2071, their names will be Doge, Bonk, and Floki.
All names have been changed to protect the innocent (it’s me, I am the innocent).
Okay actually on that note I think Brandeis University has a great EMT program, or at least it did c. 2012, which is when I went there on an improv tour and got gastroenteritis in the middle of the show and had to flee from the stage to the bathroom, puking all the way. The EMTs were there in seconds, and although none of them could help me, they were all very nice. So if you’re ever going to have a medical emergency, make sure your EMTs went to Brandeis roughly 13 years ago.
2025-01-07 23:33:58
Experimental History has just turned three. In blog years, that makes me old enough to light up a cigar and wax wise. And so, on my blog birthday, I wanna tell you about the two stupid facts I’ve learned in my time on the internet:
There are a lot of people in the world
Those people differ a lot from one another
These truths are so obvious that nobody even notices them, which is exactly why they’re so potent, and why they keep coming in handy over and over again. Lemme show you how.
When I got my first piece of hate mail, it felt like someone had lobbed a grenade through my front window. Someone’s mad at me?? I gotta skip town!!
But the brute logic of the Two Stupid Facts means that if you reach enough people, eventually you’re gonna bump into someone who doesn’t like what you’re doing. Haters are inevitable and therefore non-diagnostic—being heckled on the internet is like running a blood test and discovering that there’s blood inside of you.
I was once at a standup show where everybody was laughing except for one guy in the front row who, for some reason, had a major stick up his butt. After sitting stone-faced for half an hour, the guy eventually just got up and stormed out. The comic didn’t miss a beat. “Well, it’s not for everybody,” he said, and kept right on going.
That’s the beauty of the Stupid Facts: they set you free from the illusion that you can please everybody. What a relief to let that obligation go! To hear someone say, “I don’t like you!” and to be able to respond, “Well, that was guaranteed to happen.” You need that kind of serenity to do anything interesting, especially on the internet, where there’s an infinite supply of detractors standing by to shout you down.
I hate looking at my stats because it’s a recipe for getting Goodharted, but I’m gonna do it for a sec because I want to show you something. Here’s the graph of Experimental History’s readers over the past 90 days:
You see those periodic little bumps where I shed ~50 people? Those are days I posted. What’s happening there is a bunch of people got an email from me and decided to never receive emails from me again.
Most people wince when they see bumps like that. Thanks to the Two Stupid Facts, I laugh instead. Hey man, it ain’t for everybody!
You gotta be careful with these Facts, though, because they can turn you extra Stupid.
Once someone lobs a grenade into your house and the smoke clears, you realize something surprising: you now have a bigger house. That’s because, on the internet, attention is almost always good. Every complaint about you is also a commercial for you—after all, nobody bothers to yell at a nobody.
We all know this, but the first time it happened to me, it felt a little freaky, like I had just sat down to a business meeting with Satan and he was making some really good points. “When you make people mad, you succeed,” he says. “All fame is good fame. I mean, come on, Dior is still running ads with Johnny Depp!”
If you browse Substack—something the platform wisely makes difficult to do—you quickly see that lots of people have taken the devil’s bargain. Every outrage is on offer; if it ain’t fit to print, someone’s posting it instead.
This has made a lot of people very upset, but it shouldn’t surprise us, because it’s just the Two Stupid Facts again. In a big, diverse world, there’s a market for every opinion, and for its opposite. There will always be people who hate something because other people like it, and people who like something because other people hate it, so all that hate is ultimately just free advertising. Trying to squash the thing you despise is like squashing a stink bug; it just attracts more stink bugs who are like “yum do I smell stink in here??” That’s why being crucified is often a good business move—one guy who did it 2,000 years ago still has several billion followers.
As tempting as the devil’s deal may be, it is—surprise!—a bad one. Yes, you can enlarge your house by encouraging people to lob grenades at it. But the fact that you end up with a mansion doesn’t mean you’ve done anything interesting or useful. You’ve merely taken advantage of some Stupid Facts.
Mastering the Facts is helpful for handling haters and for not becoming one yourself, but their real power is that they can bust open your sense of what’s possible.
The Facts mean that even if you only appeal to a minuscule cadre of dorks, say, 0.1% of people, that’s still 8 million people. That’s roughly the population of Ireland and Uruguay combined. I would be honored to be read by every Irelander and Uruguayan, even if it meant being read by literally nobody else.
Once that really sunk in for me, I realized how silly it is to think there’s a single path to success. There’s no one thing that’ll please every 0.1%. Anyone peddling advice about how to do that is, at best, just telling you how to please their 0.1%.
Here’s an example from my world. Many successful people on Substack pump out a pretty good article every day or two, and they’ll tell you that the secret to succeeding on Substack is...pumping out a pretty good article every day or two. Substack’s official data-driven advice agrees: if you’re serious about getting ahead on this platform, you better ping your readers at least once a week, ideally more. Keep that content spigot open! That’s the sensible thing to do.
But when you’re only going for 0.1% of people, you don’t have to do the sensible thing. In fact, you shouldn’t do the sensible thing. You should do the thing that appeals to that small set of weirdos you’re trying to reach.1
That’s why I don’t do the 3-7x/week schedule. I’m trying to do something different, and I don’t care if it doesn’t appeal to 99.9% of people. To me, writing is like climbing a mountain and then telling people what you saw up there, except the mountain is in your head. Climbing 10% of the mountain is pretty easy and lots of people do it; climbing all the way to the top is hard and almost no one does it. That’s why climbing 10% of the mountain ten times is not as useful as climbing to the top once.
I wanna see that summit, even if I die halfway up the mountain. That’s why I’ll trash a whole post if I don’t surprise myself while writing it—if I already knew what I was going to say, so did the 0.1% that I’m writing for. And it’s why the default mode of my brain is permanently set to “blog”. I’m stringing sentences together in my head from the moment I open my eyes to the moment I close them again, even when I should be doing other things. When I one day absent-mindedly cross the street and get splattered across the windshield of a Kia Sorento, my last thoughts before I lose consciousness will be “This would make a good post.”
It’s a joy to live this way because I feel like I’m being useful. When you’re trying to write for everybody, you can’t actually care about your readers; they’re too numerous, too varied, too vague. But when you’re writing for that beautiful, tiny fraction, you can care a lot. I want to give those folks something good. I want to write them the post they bring up on their second date, the post they forward to their grandpa, the post they listen to on a road trip. I don’t have the upper body strength to pull people out of burning buildings, the steady hands to remove brain tumors, or the patience to teach first-graders right from wrong. But I can write words that a few people find useful, and damn it, I’m going to bust my butt to do my bit.
I think that’s exactly how it should feel to serve your slice of humanity. It shouldn’t be easy, like stealing a 2022 Kia Sorento. It should be hard in an interesting way, like stealing a 2024 Kia Sorento.
All of this writing about writing on the internet probably sounds foolish, what with the coming AI apocalypse and all. Surely, every blog is about to be automated, right? It kinda feels like spending my life savings to buy a house and then this guy moves in next door:
The Two Stupid Facts are the reasons why I haven’t quit yet. As long as there are humans, there will be human-shaped niches where the robots can’t fit, because the way you grow humans is inherently different from the way you grow robots.
A critical step in training large language models is “reinforcement learning from human feedback,” which is where you make the computer say something and then you either pat it on the head and go “good computer!!” or you hit it with a stick and go “bad computer!!” This is how you make an AI helpful and prevent it from going full Nazi.2
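For the curious, here’s roughly what that head-pat/stick-whack loop looks like in code. To be clear, this is a toy sketch, not how any actual lab does it (real RLHF trains a reward model on human preferences and then updates the language model with something like PPO), but the basic logic of rewarding the outputs people praise and punishing the ones they scold is the same. Every name in it is made up for illustration.

```python
import random

# Toy sketch of reinforcement learning from human feedback.
# Hypothetical and hugely simplified: a real pipeline uses a learned
# reward model and policy-gradient updates, not a dictionary of scores
# over three canned responses.

responses = ["helpful answer", "rude answer", "unhinged rant"]
scores = {r: 0.0 for r in responses}  # the model's learned preference for each response

def human_feedback(response: str) -> float:
    """+1.0 is 'good computer!!', -1.0 is 'bad computer!!'"""
    return 1.0 if response == "helpful answer" else -1.0

def sample_response() -> str:
    # Sample in proportion to learned preference, so praised responses
    # become more likely over time and scolded ones fade away.
    weights = [2.0 ** scores[r] for r in responses]
    return random.choices(responses, weights=weights, k=1)[0]

for _ in range(1000):
    r = sample_response()
    scores[r] += 0.1 * human_feedback(r)  # pat on the head or whack with the stick

print(max(scores, key=scores.get))  # after training: "helpful answer"
```

The point of the toy is just that the feedback is the whole curriculum: whatever the head-patters reward is what the machine becomes.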
Humans also undergo reinforcement learning from human feedback—we get yelled at, praised, flunked, kissed, laughed at (in a good way), laughed at (in a bad way), etc. This is how you make a human helpful and prevent it from going full Nazi, although clearly the procedure isn’t foolproof. But there are four important differences between the process that produces us and the one that makes machines:
No two humans get the same set of training data; our inputs are one-of-a-kind and non-replicable.
Rather than getting trained by a semi-random sample of humans who all have an equal hand in shaping us, we get trained very deeply by a few people—mainly parents, peers, and partners—and very shallowly by everybody else.
Because we’re born with different presets, even identical feedback can produce different people; some humans like getting hit by a stick.
We can choose to ignore our inputs, which you can confirm by having a single conversation with a toddler or a teenager.
This is a recipe for creating 8 billion different eccentrics with peculiar preferences and proclivities, the kind of people who are, at best, loved by a handful and tolerated by the rest.
I know most predictions about the future of AI are proven wrong in like ten minutes3, but I expect these four non-stupid facts to remain true because they’re baked into the business model. Tech companies want the big bucks, and that comes from mildly pleasing a billion people, not delighting a handful of them. That’s why those companies routinely make their products worse in the hopes of attracting the lowest common denominator. Even serving all of Ireland and Uruguay won’t earn you enough to run the cooling fans in your data centers, let alone get you the $7 trillion you need to build your chip factories. I, on the other hand, require only a single cooling fan, and my chip factory is the snack aisle at Costco.
I think that’s why the blogosphere hasn’t yet fallen to the bots. Ever since ChatGPT came out two years ago, anybody can press a button and get a passable essay for free. And yet, when a company called GPTZero checked a bunch of big Substacks for evidence of AI-generated text, 90% of them came out “Certified Human”4. I’m happy to report that Experimental History passed the bot check with flying colors:
You gotta take this with a grain of salt, of course. Maybe those Substacks are disguising their computer-generated text so well that the Robot Detector can’t detect them, and maybe aspiring slop-mongers just haven’t yet perfected their technique. The LLMs are only gonna get better, but they’re already pretty good, and eventually they’ll reach a point where pleasing one human more would mean pleasing another human less. That piddling little percentage point of people you piss off when you try to make your product appeal to everybody—those folks are my whole world.
So until the Terminators show up, I’m gonna be out here doing my thing. And if I’m wrong about all this, well, remember I was relying on Two Stupid Facts.
That’s what I learned. Here’s what I did!
The top posts from this year were:
My three favorite MYSTERY POSTS from this year (and coincidentally, the most-viewed) were:
I also did a few special projects and events:
I ran a seven-week Science House prototype.
I hosted a Blog Competition and Jamboree and found some great new (and old) writers.
I convened a meetup for Experimental History readers in NYC. Everybody was cool and nice and it made me proud to bring them together—look out for more of these in the future (like this one).
There’s one more project I’ve been working on in secret: I’m writing a book. There are some things I want to do that can’t be done with pixels on screens; they can only be done with paper and ink. You know how most nonfiction books should have been blog posts? I got obsessed with the idea of a book that can only be a book, the kind of thoughts you can only transmit when you print them out and bind them together. So it’s gonna be a weird kind of book, but weird in a good way, like Frankenstein in a bowler cap.
It’s about 25% written so the release date is still up in the air, but with any luck I’ll get it done before the Terminators arrive. Practically, this means I’ll be spending some blog time on the book instead. So if you don’t hear from me for longer than usual, I’m either neck-deep in a chapter, or I had a fateful encounter with a Kia. Paid subscribers will still get regular MYSTERY POSTS.
Getting to work on this book and this blog has been a dream come true, and it’s all thanks to you guys. When I started writing Experimental History, I was like “oh wouldn’t it be cool if 500 people read it.” Instead, it’s changed everything about my life, and for the better. Thank you to everyone who reads, and thank you to the paid subscribers who keep the blog afloat. I promise you: there are many more Stupid Facts to come.
Microsoft killed their chatbot Tay in less than a day, but they briefly revived it so it could say this:
My first research paper, published in 2021, included this line that remained true for about a year: “Conversation is common, but it is not simple, which is why modern computers can pilot aircraft and perform surgery but still cannot carry on anything but a parody of a conversation.”
The Substacks that failed the human test mainly offer investment advice, which, no offense, is a genre where there’s already high tolerance for slop.