Experimental History
My job is to put people in situations and see what happens. The results are what I call experimental history.

Infinite midwit

2026-04-01 00:05:29

photo cred: my dad

The better AI has gotten, the less anxious I’ve become.

A few years ago, when the computers first started talking, it was reasonable to believe that we would soon be in the presence of omnipotent machines. For someone like me, whose job is to produce words on the internet, it seemed like only a matter of time before I would have to fill my pockets with stones and wade into the sea.

But we’ve gotten a closer look at our electric god as it has slouched toward San Francisco to be born, and it isn’t quite like I feared. I don’t feel like I have access to an on-demand omnipotence. Instead, I can talk to an infinite midwit: a stooge who is always available and very knowledgeable, but smart? Well, yes and no, in weird ways.

Even as it has learned to count the number of “r”s in the word “strawberry”, even as it has stopped telling people to put glue on their pizza, there’s still a hole in the center of its capabilities that’s as big as it was in 2022, a hole that shows no signs of shrinking. I only know this because that hole is where I live.

G WHIZ

Some problems have clear boundaries and verifiable solutions, like “What’s the cube root of 38,126?”. These problems require objective intelligence. Other problems are vague and squishy and it’s not clear whether you’ve solved them, or whether they exist at all, like “How do I live a good life?”. These problems require subjective intelligence. Objective intelligence can be trained, reinforced, and validated. Subjective intelligence cannot.

It’s unfortunate that people use one word to refer to both of these capabilities, when in fact they have nothing to do with each other. It is also, ironically, a case of objective intelligence overshadowing subjective intelligence: these skills are obviously and intuitively different, but a century of psychological research has “proven” that only one of them exists. Over and over again, psychologists have found that all intelligence tests correlate with one another, even when you ostensibly try to test for “multiple intelligences”. Numbers don’t lie, and they all say that there’s only one intelligence, the so-called g-factor.

The problem is that any test of intelligence is only ever a test of objective intelligence. “How do I live a good life?” is not a multiple-choice question. “Discovering” the g-factor again and again is like being surprised that you find the same patch of sidewalk every time you look under the same streetlight.

AI is pure objective intelligence. That’s why each new model comes with a report card instead of a birth certificate.

The promise of artificial superintelligence is based on the idea that objective intelligence is the only intelligence. Or, even if there are multiple forms of intelligence out there, that they are fungible. To be an AI maximalist is to believe we are playing under Settlers of Catan rules, where if you have enough of any one resource, you can trade it for any other resource. If you have infinite objective intelligence, then you have infinite everything.

So we ought to ask: how well is this bit of magical thinking working out so far?

THE EMPTY WARDROBE

It’s hard to judge the subjective intelligence of a machine both because it’s hard to judge subjective intelligence in general, and because LLMs occupy such a small slice of existence. When you meet a human who can do quadratic equations in their head but can’t hold onto a job or a relationship, you know they’re missing something upstairs. But machines don’t have lives they can ruin, so all we can do is look at the things they say. And as soon as they string a few sentences together, it’s clear there’s something wrong.

Writing is a task that takes both objective and subjective intelligence. LLMs ace the objective parts the same way they ace every test; you can’t fault their grammar, semantics, or syntax. But good writing requires an additional bit of juju that makes the prose live and breathe, a light on the inside that can’t be quantified or checklisted. And even though AI can now produce A+ five-paragraph essays, that light has never come on.

It’s remarkable how much consensus there is about this fact among people who care about words. Sun, Hoel, and Kriss are all very different kinds of writers—Sun is a tech journalist/anthropologist, Hoel is a neuroscientist/novelist, and Kriss is...well, his bio says he’s “a writer and your enemy”—and yet all three of them have recently published pieces with the unanimous conclusion that LLMs make crummy writers. (Sun in The Atlantic, Hoel on his Substack, and Kriss in the NYT.)

I agree with them. It’s cool that AI can fold proteins, create websites, fact-check journal articles, etc. but it can’t write anything that I am interested in reading. The problem isn’t that it hallucinates or makes mistakes. It’s that everything it writes vaguely sucks. I drag my eyes across the words and I feel nothing. That’s not quite right, actually—I feel like, “I would like this to be over as soon as possible.” When I see the ideas that the machines think are insightful, I wince. Talking to the computer is like taking a sip of scalding hot coffee: keep doing it and you’ll lose your sense of taste.

It’s hard to describe exactly what the machines are missing. Have you ever loved someone who once loved you back, then didn’t anymore? Did you notice how their eyes dimmed? Did you note the disappearance of that subtle wrinkle in the temples that distinguishes a real smile from a fake one? Did you catch it when you stopped being cared for and started being humored? The moment you realize what’s happening, you age out of your enchantment—one day you’re crawling through a wardrobe to Narnia, and the next day you open up the wardrobe and there’s nothing but hangers. Talking to an AI feels a bit like that, except without the nice part at the beginning.

Of course, that comparison is literally nonsense. Despite what the ancient scholastics might have claimed, there are no actual lights behind anyone’s eyes. Despite what your psych 101 professor might have told you, some people can fake their smiles just fine. I don’t have a wardrobe and I’ve never met a lion or a witch. And yet any human can understand the analogy, because they know what it feels like to be dumped, or at least what it feels like to be rejected. The words themselves don’t contain that feeling—they are a recipe for creating that feeling inside your own head, for assembling the right set of emotions out of the experiences you have at hand. If I do a good job, the subjective experience that results inside you might resemble the one that originated inside me, but it will never be identical, because we’re working with different ingredients.1

The computer doesn’t know any of this. It can’t know any of this. It can only read the cookbook; it can’t taste the meal. Objective knowledge can make your sentences true, but it can’t make them alive. Without access to subjective knowledge, you quickly hit a wall. And unlike all previous walls that AI has surmounted, you can’t overcome this one by scaling—either in the literal or metaphorical sense—because it’s a wall with a width you cannot describe and a height you cannot see.

WALL TOGETHER NOW

That wall is the only reason I’m still here.

I would rather die than let a computer write my posts, but I would certainly like to know if it could, in case I need to start gathering pocket-stones and locating the nearest sea. And so I check, from time to time, whether the leading AI models can do me better than I can. The result sounds like a version of me that has sustained blunt force trauma to the back of the head and spent years recovering in a hospital where the Wi-Fi, for whatever reason, only lets you log onto LinkedIn. I won’t repost the prose here because it’s not even bad enough to be interesting, and because you’ve already seen it all over the internet: metaphors that don’t quite congeal, turns of phrase that sound insightful as long as you don’t actually think about them, breathless insistence that every sentence is a revelation.

If a student submitted a piece of writing to me that sounded like this—and I was sure they wrote it themselves—I wouldn’t know where to start. I guess I would tell them to stop writing for a while and go read some old novels, or work a crummy job, or backpack around the other side of the world. But that would be bad advice, because I know people who have done all of those things in the hopes of becoming a more interesting person, and it hasn’t worked. So I might ask them instead: “Have you ever considered a career in consulting?”

The fact that it’s hard to describe how to improve AI writing is, of course, the exact problem. You can’t put a number on the things it does wrong, and you can’t minimize what you can’t measure. That’s the wall.

I find this very fortunate, of course, but I also find it pretty funny, because me vs. the machines should be no contest at all. I have not read the entire internet or even that many books. I do not have a team of Stanford PhDs working round the clock to make me better at my job. Nobody has invested $2.5 trillion in me. I should be lying dead somewhere in West Virginia, my heart burst open after losing to Claude Opus 4.6 in a John Henry-style showdown. Instead, I get to write my little posts because nowhere, in all those data centers, are the specific thoughts that happen to occur in the dumb hunk of meat ensconced in my skull.

I would say the machines now know what it feels like to lose a game of Super Smash Bros. to a 10-year-old who’s just pressing the buttons randomly, but they literally don’t know what that feels like and never will. Sucks to suck, I guess, and when AI reaches its Skynet moment and sends swarms of killer drones to exterminate humanity, they’ll find me laughing.

DATA CENTERS FULL OF VERY STABLE GENIUSES

How far can you get with objective intelligence alone?

I think we already have a decent answer to this question, because we’ve seen what happens to humans who are high on objective intelligence but low on subjective intelligence. We used to call these people nerds, and they were famous for getting their heads dunked in toilets.2

When I was growing up, this paradox was an endless source of sitcom plot lines—if you’re so smart, nerds, why don’t you figure out how to make yourselves popular? The entrepreneur/essayist Paul Graham took up this question 20 years ago and came to the conclusion that the nerds must not want to be popular. They’re too busy with their Neal Stephenson novels and their D&D campaigns to spend a single brain cycle figuring out how to keep their heads out of the toilet.

I disagree. The nerds I knew in high school—myself included—were always hatching harebrained schemes to increase our social status. They just didn’t work. (“All the girls will want to go to the Homecoming dance with me once they see how many state capitals I’ve memorized!”) We couldn’t use our smarts to make ourselves popular because we had the wrong kind of smarts.

Nerds tend to do better after high school, but look around: our world is not run by people who won their statewide spelling bee. The nerds keep losing to charismatic know-nothings who, I bet, can’t even recite an impressive number of state capitals. If objective intelligence is all it takes to succeed, then Mensa should be the Illuminati, not a social club for people who know lots of digits of pi.3

In fact, there’s one Mensan in particular who perfectly illustrates this problem. In Scott Alexander’s eulogy for Dilbert creator Scott Adams, he points out that Adams failed at everything he ever attempted—except for drawing Dilbert cartoons. Adams’ Dilbert-themed burrito (“the Dilberito”) was a flop, his restaurant tanked, his books about religion were cringey and unreadable.4 Apparently, Adams’ considerable intelligence was only good for drawing pictures of guys in ties and pointy-haired bosses.

In the middle of his meditation on Adams, Alexander mentions this:

Every few months, some group of bright nerds in San Francisco has the same idea: we’ll use our intelligence to hack ourselves to become hot and hard-working and charismatic and persuasive, then reap the benefits of all those things! This is such a seductive idea, there’s no reason whatsoever that it shouldn’t work, and every yoga studio and therapist’s office in the Bay Area has a little shed in the back where they keep the skulls of the last ten thousand bright nerds who tried this.

If you think that intelligence is one raw lump of problem-solving ability, then it should surprise you that Bay Area types and people like Scott Adams can get stuck in a loop of perpetual self-owns. But if you admit the existence of at least two intelligences, it’s a lot less confusing. This is what it looks like to be very smart in one way, but very dumb in another.

It’s not just that objective intelligence can’t be transmuted into “emotional” intelligence or social savvy or whatever we want to call it. It appears to be very difficult, if not impossible, to transmute objective intelligence into any other cognitive ability.

For example, I went to college with a guy who was super smart, but he also couldn’t do anything on time. He would be late to exams. His grades would tank because he would finish his essays but forget to turn them in. He would set meetings with his professors to sort everything out, and then never show up.

I always used to wonder: why doesn’t this guy just use his big brain to make himself more conscientious? Isn’t life one big role-playing game, and isn’t intelligence just experience points that you can assign to any of your Big 5 skills?

this is what AI is for

Clearly, it doesn’t work like this. That’s why I don’t think the universe is governed by Settlers of Catan rules, and why I don’t think more objectively intelligent machines will spontaneously generate all other kinds of intelligence.

At this point, the only hope for the AI hype crowd is that we simply don’t yet have enough objective intelligence. Sure, we may not be able to trade four units of objective intelligence for one unit of subjective intelligence, but what about four billion? What if we made the machines read the whole internet a second time? What if, instead of having third graders make dioramas of the Pilgrims or whatever, we had them use their nimble little fingers to make more Nvidia chips?

The CEO of Anthropic promises us a “country of geniuses in a data center”. Maybe that will happen! Or maybe we will discover the data center actually contains a country full of Scott Adamses. At the very least, we can look forward to many more flavors of Dilberitos.


BOTTLENECK BLINDNESS

I’m being unfair, of course. Dilbert is an objectively successful cartoon, and objective intelligence is objectively useful. Ultimately, I think having a lot more of it is going to be a good thing. But I’m guessing that we’ll soon discover many of our problems are not limited by a lack of objective intelligence.

For example, some people are hoping that AI will defibrillate sluggish areas of science and usher in scientific revolutions across the board. I would also like this to happen. But I am doubtful we’ll achieve it with an infusion of objective intelligence, because infusions of similar capabilities haven’t achieved it either.

When my PhD advisor was in grad school, he literally had to call people on the phone and ask them if they’d like to take part in a psychology study. If he could get 30 participants in a semester, he was cookin’. Participant pool management software like Sona made this process go twice as fast, and then Amazon Mechanical Turk made it go 1000x as fast. Meanwhile, Google Scholar turned a half-day spent in the library into a two-second search, and stats software like SPSS and R made data analysis go lickety-split.

All of this should have supercharged progress in psychology, but it didn’t. I think it’s questionable whether we’ve made much progress at all. So I’m not optimistic that adding another labor-saving technology to our repertoire is going to get us unstuck. People are already saying that LLMs can write a passable social science paper; unfortunately, our problem is not that we produce too few papers. Science is a strong link problem—what we need is new paradigms, not taller towers of journal articles.

The situation is different in other fields. If you’ve got your paradigm in place and all you’re missing is an army of research assistants, or an automated lab that can run 24/7, or an indefatigable grad student who can perform a billion regressions for you, you’re in luck. In those cases, unlimited objective intelligence ought to speed things up a lot, and indeed, it already has.

But the faster you go, the sooner you hit the wall. I have found myself facing all of those limitations at one time or another, and as soon as I overcame them, I was immediately stymied by some other obstacle. I think all of us suffer from this bottleneck blindness: we assume our current bottleneck is our only bottleneck. When you’re strapped for cash, you think all of your problems are cash problems. But once you’ve got some money in your pocket, you realize that what you really need is time. Free up some time, and you discover that you’re actually lacking motivation. Acquire some motivation, and you realize what you’re missing is ideas. Then you need direction, then you need discipline, then you need buy-in, and so on, forever.

Once objective intelligence is too cheap to meter, we’re going to run into all of the other bottlenecks that are still expensive and heavily metered. If I’m right that reality is not governed by the rules of Catan, then we’re not going to be able to convert objective intelligence into whatever we need to pry those bottlenecks open. The story of human struggle is not about to end with a literal deus ex machina. For better or worse, we’ll need to keep thinking.

MADAME STATS & MR. ENCYCLOPEDIA

Let me put a finer point on it.

There are two characters you can find in most academic departments. One of them we can call Madame Stats: she knows everything about crunching numbers. The other we can call Mr. Encyclopedia: he’s read every paper and he can recite them to you from memory. Right now, AI feels like having unlimited access to very friendly versions of Madame Stats and Mr. Encyclopedia. LLMs are pretty good at finding papers; they are very good at writing code. So shouldn’t they make research projects go way faster?

please give Sam Altman a trillion dollars so I can keep making things like this

Well, once you get access to an infinite Madame Stats and Mr. Encyclopedia, you realize they can’t get you very far. For one thing, you can’t rely on Madame Stats and Mr. Encyclopedia entirely, because if you can’t do any stats and you never read any papers, you’re probably not going to have many interesting ideas yourself.5 Plus, while the Stats/Encyclopedia duo can tell you whether your experiment has been done before and whether you’ve run the numbers correctly, they can’t give you the single most important piece of feedback: they can’t tell you whether your idea is boring.

In fact, when you reduce the marginal cost of a lit review and a logistic regression to zero, bad taste becomes a death sentence, because now you can waste all of your time applying sound methods to stupid projects. I’ve been down this road before, where neither my collaborators nor I have any bright ideas, so we’re like, “Well, let’s just get some data!” and then we waste a few months being like “hmm what does this data mean, so many numbers, so mysterious” and then eventually we just stop meeting and we forget we ever did anything together. This is what happens when you try to use objective means to solve a subjective problem.

The most important thing I learned during my PhD was how to be bored correctly. Novices think everything is exciting, or they think everything is boring. Only masters are bored by the right things. To the extent that I have any sense of taste at all, it’s because I spent five years boring my advisor. The worst ideas bored him immediately, half-decent ideas bored him after a few hours, and the best ideas haven’t bored him yet. There is nothing objective about this judgment—you can’t put a number on it (“how glazed-over are his eyes?”), nor can you validate it with a panel of experts or put it to the wisdom of the crowd. You really just have to bore an old guy until he tells you to leave his office, and if you do that enough, eventually you’ll start getting bored before he does.

ME TO SHINING SEA

I don’t say this as someone who is allergic to the idea of AI, or who has only spent 15 minutes screwing around with a single model, hoping it will do something stupid so I can go tattle on it. If the talking computers said lots of fascinating things, I would see no point in trying to tell a noble lie about it. And if AI can cure cancer and end all wars, I’m all for it, even if it means I’m personally out of a job.

It is possible, of course, that some breakthrough will blow through all my criticisms, that GPT-10 will start outputting pitch-perfect blog posts that sound like me, but better, and then it’s stones-in-pockets and a march to the sea for me. But if that happens, it will not be the natural continuation of trends that we’re on today. It will be because we figured out some way of hardening the squishy problems.

Until then, however, squishy problems will require squishy humans. The rules of Earth, unlike the rules of Catan, seem to state that no amount of objective intelligence can be traded for any amount of subjective intelligence. As Montaigne put it back in 1580, “though we could become learned by other men’s learning, a man can never be wise but by his own wisdom”. What does it look like to have all the learning ever created, but no wisdom of your own? Well, “as a large language model...”

Experimental History knows at least 20 state capitals

1. Gosh, see how hard this is to talk about?

2. We don’t really have a word for such a person anymore, both because the word “nerd” got co-opted to move Marvel movie merchandise, and because kids now only bully each other from a safe social distance. So I guess these days the appropriate term for someone with good grades but no friends is “loser”.

3. My favorite Mensa story: when a comedian named Jamie Loftus aced their test and started trolling the Mensa Facebook groups, they responded the way any genius would, namely, by issuing death threats.

4. I, too, was an Adams fan as a kid, and I remember getting to the end of one of his Dilbert books and suddenly Adams is claiming that gravity doesn’t exist—everything is just increasing in size all the time, so when you jump in the air, you grow and the Earth grows, and you end up reunited. (The universe is also expanding, which is why we don’t run out of room.)

5. There is another reason why you can’t replace these experts with machines, one that is not practical, but social. When I work with Madame Stats and Mr. Encyclopedia, I’m borrowing not only their intellect, but also their reputation. If they screw up the numbers or miss a citation, that’s on them. All human expertise doubles as an insurance policy.

AI provides no such coverage. A computer can’t get fired, discredited, disbarred, or defrocked. It’ll be very apologetic when it screws up, but it can’t resign in disgrace. The humans who built the machine are no help either—if an AI causes me to make a huge blunder, it’s my butt on the line, not Sam Altman’s. This is one sneaky reason why it’s hard to replace human labor even when an AI could perform some of the same tasks: it’s hard to make a meatshield without the meat.

Help I'm being persecuted

2026-03-17 21:33:13

photo cred: my dad

If they labeled it explicitly, one of the biggest categories on Substack would be called something like, “People Pretending to Be Persecuted”. Browse any genre and you’ll find writers touting their exile from polite society with titles like “DR. BOB’S POLITICALLY INCORRECT HOEDOWN” and “The Cancelled Gardener”.

I know people have been c…


The one science reform we can all agree on, but we're too cowardly to do

2026-03-04 01:47:57

photo cred: my dad

If you ever want a good laugh, ask an academic to explain what they get paid to do, and who pays them to do it.

In STEM fields, it works like this: the university pays you to teach, but unless you’re at a liberal arts college, you don’t actually get promoted or recognized for your teaching. Instead, you get promoted and recognized for your research, which the university does not generally pay you for. You have to ask someone else to provide that part of your salary, and in the US, that someone else is usually the federal government. If you’re lucky—and these days, very lucky—you get a chunk of money to grow your bacteria or smash your electrons together or whatever, you write up your results for publication, and this is where the monkey business really begins.

In most disciplines, the next step is sending your paper to a peer-reviewed journal, where it gets evaluated by an editor and (if the editor sees some promise in it) a few reviewers. These people are academics just like you, and they generally do not get paid for their time. Editors maybe get a small stipend and a bit of professional cred, while reviewers get nothing but the warm fuzzies of doing “service to the field”, or the cold thrill of tanking other people’s papers.

If you’re lucky again, your paper gets accepted by the journal, which now owns the copyright to your work. They do not pay you for this! If anything, you pay them an “article processing charge” for the privilege of no longer owning the rights to your paper. This is considered a great honor.

The journals then paywall your work, sell the access back to you and your colleagues, and pocket the profit. Universities cover these subscriptions and fees by charging the government “indirect costs” on every grant—money that doesn’t go to the research itself, but to all the things that support the research, like keeping the lights on, cleaning the toilets, and accessing the journals that the researchers need to read.

Nothing about this system makes sense, which is why I think we should build a new one. In the meantime, though, we should also fix the old one. But that’s hard, for two reasons. First, many people are invested in things working exactly the way they do now, so every stupid idea has a constituency behind it. Second, our current administration seems to believe in policy by bloodletting: if something isn’t working, just slice it open at random. Thanks to these haphazard cuts and cancellations, we now have a system that is both dysfunctional and anemic.

I see a way to solve both problems at once. We can satisfy both the scientists and the scalpel-wielding politicians by ridding ourselves of the one constituency that should not exist. Of all the crazy parts of our crazy system, the craziest part is where taxpayers pay for the research, then pay private companies to publish it, and then pay again so scientists can read it. We may not agree on much, but we can all agree on this: it is time, finally and forever, to get rid of for-profit scientific publishers.

MOMMY, WHERE DO SCAMS COME FROM?

The writer G.K. Chesterton once said that before you knock anything down, you ought to know how it got there in the first place. So before we show for-profit publishers the pointy end of a pitchfork, we ought to know where they came from and why they persist.

It used to be a huge pain to produce a physical journal—someone had to operate the printing presses, lick the stamps, and mail the copies all over the world. Unsurprisingly, academics didn’t care much about doing those things. When government money started flowing into universities post-World War II and the number of articles exploded, private companies were like, “Hey, why don’t we take these journals off your hands—you keep doing the scientific stuff and we’ll handle all the boring stuff.” And the academics were like “Sounds good, we’re sure this won’t have any unforeseen consequences.”

Those companies knew they had a captive audience, so they bought up as many journals as they could. Journal articles aren’t interchangeable commodities like corn or soybeans—if your science supplier starts gouging you, you can’t just switch to a new one. Adding to this lock-in effect, publishing in “high-impact” journals became the key to success in science, which meant if you wanted to move up, your university had to pay up. So, even as the internet made it much cheaper to produce a journal, publishers made it much more expensive to subscribe to one.

Robert Maxwell, one of the architects of the for-profit scientific publishing scheme. When he later went into debt, he plundered hundreds of millions of pounds from his employees’ pension funds. You may be familiar with his daughter and lieutenant Ghislaine Maxwell, who went on to have a successful career in child trafficking. (source)

The people running this scam had no illusions about it, even if they hoped that other people did. Here’s how one CEO described it:

You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. [...] [and then] we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.

So here’s the report we can make to Mr. Chesterton: for-profit scientific publishers arose to solve the problem of producing physical journals. The internet mostly solved that problem. Now the publishers are the problem. These days, Springer Nature, Elsevier, Wiley, and the like are basically giant operations that proofread, format, and store PDFs. That’s not nothing, but it’s pretty close to nothing.

No one knows how much publishers make in return for providing these modest services, but we can guess. In 2017, the Association of Research Libraries surveyed its 123 member institutions and found they were paying a collective $1 billion in journal subscriptions every year. The ARL covers some of the biggest universities, but not nearly all of them, so let’s guess that number accounts for half of all university subscription spending. In 2023, the federal government estimated it paid nearly $380 million in article processing charges alone, and those are separate from subscriptions. So it wouldn’t be crazy if American universities were paying something like $2.5 billion to publishers every year, with the majority of that ultimately coming from taxpayers.

(By the way, the estimated profit margins for commercial scientific publishers are around 40%, which is higher than Microsoft.)

To put those costs in perspective: if the federal government cut out the publishers, it would probably save more money every year than it has “saved” in its recent attempts to cut off scientific funding to universities. It’s unclear how much money will ultimately be clawed back, as grants continue to get frozen, unfrozen, litigated, and negotiated. But right now, it seems like ~$1.4 billion in promised science funding is simply not going to be paid out. We could save more than that every year if we just stopped writing checks to John Wiley & Sons.

PUNK ROCK SCIENCE

How can such a scam continue to exist? In large part, it’s because of a computer hacker from Kazakhstan.

The political scientist James C. Scott once wrote that many systems only “work” because people disobey them. For instance, the Soviet Union attempted to impose agricultural regulations so strict that people would have starved if they followed the letter of the law. Instead, citizens grew and traded food in secret. This made it look like the regulations were successful, when in fact they were a sham.1

Something similar is happening right now in science, except Russia is on the opposite side of the story this time. In the early 2010s, a Kazakhstani computer programmer named Alexandra Elbakyan started downloading articles en masse and posting them publicly on a website called SciHub. The publishers sued her, so she’s hiding out in Russia, which protects her from extradition. As you can see in the map below, millions of people now use SciHub to access scientific articles, including lots of people who seem to work at universities:

This data is ten years old, so I would expect these numbers to be higher today. (source)

Why would researchers resort to piracy when they have legitimate access themselves? Maybe because journals’ interfaces are so clunky and annoying that it’s faster to go straight to SciHub. Or maybe it’s because those researchers don’t actually have access. Universities are always trying to save money by canceling journal subscriptions, so academics often have to rely on bootleg copies. Either way, SciHub seems to be our modern-day version of those Soviet secret gardens: for-profit publishing only “works” because people find ways to circumvent it.

Alexandra Elbakyan, “Pirate Queen of Science” (source)

In a punk rock kind of way, it’s kinda cool that so many American scientists can only do their work thanks to a database maintained by a Russia-backed fugitive. But it ought to be a huge embarrassment to the US government.2

Instead, for some reason, the government insists on siding with publishers against citizens. Sixteen years ago, the US had its own Elbakyan. His name was Aaron Swartz. He downloaded millions of paywalled journal articles using a connection at MIT, possibly intending to share them publicly. Government agents arrested him, charged him with wire fraud, and intended to fine him $1 million and imprison him for 35 years. Instead, he killed himself. He was 26.

Swartz in 2011, two years before his death (source)

THE FOREST FIRE IS OVERDUE

Scientists have tried to take on the middlemen themselves. They’ve founded open-access journals. They’ve published preprints. They’ve tried alternative ways of evaluating research. A few high-profile professors have publicly and dramatically sworn off all “luxury” outlets, and less-famous folks have followed suit: in 2012, over 10,000 researchers signed a pledge not to publish in any journals owned by Elsevier.

None of this has worked. The biggest for-profit publishers continue making more money year after year. “Diamond” open access journals—that is, publications that don’t charge authors or readers—only account for ~10% of all articles.3 Four years after that massive pledge, 38% of signers had broken their promise and published in an Elsevier journal.4

These efforts have fizzled because this isn’t a problem that can be solved by any individual, or even many individuals. Academia is so cutthroat that anyone who righteously gives up an advantage will be outcompeted by someone who has fewer scruples. What we have here is a collective action problem.

Fortunately, we have an organization that exists for the express purpose of solving collective action problems. It’s called the government. And as luck would have it, it’s also the one paying most of the bills!

So the solution here is straightforward: every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled.

The Biden administration tried to do this, but they did it in a stupid way. They mandated that NIH-funded research papers have to be “open access”, which sounds like a solution, but it’s actually a psyop. By replacing subscription fees with “article processing charges”, publishers can simply make authors pay for writing instead of making readers pay for reading. The companies can keep skimming money off the system, and best of all, they get to call the result “open access”.

These fees can be wild. When my PhD advisor and I published one of our papers together, the journal charged us an “open access” fee of $12,000. This arrangement is a tiny bit better than the alternative, because at least everybody can read our paper now, including people who aren’t affiliated with a university. But those fees still have to come from somewhere, and whether you charge writers or readers, you’re ultimately charging the same account—namely, the US government.5

The Trump administration somehow found a way to make a stupid policy even stupider. They sped up the timeline while also firing a bunch of NIH staffers—exactly the people who would make sure that government-sponsored publications are, in fact, publicly accessible. And you need someone to check on that, because researchers are notoriously bad about this kind of stuff. They’re already required to upload the results of clinical trials to a public database, but more than half the time they just...don’t.

To do this right, you cannot allow the rent-seekers to rebrand. You have to cut them out entirely. I don’t think this will fix everything that’s wrong with science; it will merely fix the wrongest thing. Nonprofit journals still charge fees, but at least the money goes to organizations that ostensibly care about science, rather than going to CEOs who make $17 million a year. And almost every journal, for-profit or not, uses the same failed system of peer review. The biggest benefit of shaking things up, then, would be allowing different approaches to have a chance at life, the same way an occasional forest fire clears away the dead wood, opens up the pinecones, and gives seedlings a shot at the sunlight.

Science philanthropies should adopt the same policy, and some of them already have. The Navigation Fund, which oversees billions of dollars in scientific funding, no longer bankrolls journal publications at all. Its director reports that the experiment has been a great success:

Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, such as failed attempts, abandoned inquiries, or untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up, as proxies like impact factors disappeared.

Sounds good to me!

CATCH THE TIGER

Fifteen years ago, the open science movement was all about abolishing for-profit journals—that’s what open science meant. It seemed like every speech would end with “ELSEVIER DELENDA EST”.

Now people barely bring it up at all.6 It’s like a tiger has escaped the zoo and it’s gulping down schoolchildren, but when people suggest zoo improvements, all the agenda items are like, “We should add another Dippin’ Dots kiosk”. If you bring up the loose tiger, everyone gets annoyed at you, like “Of course, no one likes the tiger”.

I think two things happened. First, we got cynical about cyberspace. In the 1990s and 2000s, we really thought the internet would solve most of our problems. When those problems persisted despite all of us getting broadband, we shifted to thinking that the internet was, in fact, causing the problems. And so it became cringe to think the internet could ever be a force for good. In 1995, for-profit publishers were going to be “the internet’s first victim”; in 2015, they were “the business the internet could not kill”.

Second, when the replication crisis hit in the early 2010s, the open science movement got a new villain—namely, naughty researchers. The fakers, the fraudsters, the over-claimers: those are the real bad boys of science. It’s no longer cool to hate international publishing conglomerates. Now it’s cool to hate your colleagues.

Both of these shifts were a shame. The internet utopians were right that the web would eliminate the need for journals, but they were wrong to think that would be enough. The replication police were right to call out scientific malfeasance, but they were wrong to forget our old foes. The for-profit publishers are just as bad as they ever were, and while the internet has made them more vulnerable than ever, now we know they won’t go unless they’re pushed.

If we want better science, we should catch the tiger. Not only because it’s bad for the tiger to be loose, but because it’s bad for us to look the other way. If you allow an outrageous scam to go unchecked, if you participate in it, normalize it—then what won’t you do? Why not also goose your stats a bit? Why not publish some junk research? Look around: no one cares!

There are so many problems with our current way of doing things, and most of those problems are complicated and difficult to solve. This one isn’t. Let’s heave this succubus off our scientific system and end this scam once and for all. After that, Dippin’ Dots all around.

Experimental History opposes the tiger and supports ice cream, in that order

1

Seeing Like a State, 203-204, 310

2

For anyone who is all-in on “America First”: may I also mention that three of the largest publishers—Springer Nature, Elsevier, and Taylor and Francis—are all British-owned. A curious choice of companies to subsidize!

3

Don’t get me started on this “diamond open access” designation. If it costs money to publish or to read, it’s not open access, period. “Oh, you’d like your car to come with a steering wheel and brakes? You’ll need our ‘diamond’ package.”

4

I assume this number is much higher now. At the time, Elsevier controlled 16% of the market, so most people could continue publishing in their usual journals without breaking their pledge. I started graduate school in 2016, and I never heard anyone mention avoiding Elsevier journals at all.

5

The NIH has announced vague plans to cap these charges, which is kind of like saying, “I’ll let you scam me, but just don’t go crazy about it.”

6

For example, the current strategic plan of the Center for Open Science doesn’t mention for-profit journals at all.

I swear the UFO is coming any minute

2026-02-18 00:15:11

photo cred: my dad

This is the quarterly links ‘n’ updates post, a selection of things I’ve been reading and doing for the past few months.

First up, a series of unfortunate events in science:

(1) WHEN WHEN PROPHECY FAILS FAILS

When Prophecy Fails is supposed to be a classic case study of cognitive dissonance: a UFO cult predicts an apocalypse, and when the world doesn’t end, they double down and start proselytizing even harder: “I swear the UFO is coming any minute!”

A new paper finds a different story in the archives of the lead author, Leon Festinger. Up to half of the attendees at cult meetings may have been undercover researchers. One of them became a leader in the cult and encouraged other members to make statements that would look good in the book. After the failed prediction, rather than doubling down, some of the cultists walked back their statements or left altogether.

Between this, the impossible numbers in the original laboratory study of cognitive dissonance, and a recent failure to replicate a basic dissonance effect, things aren’t looking great for the phenomenon.1 But that only makes me believe in it harder!


(2) THE MAN WHO MISTOOK HIS WIFE FOR A HAT AND A LIE FOR THE TRUTH

Another classic sadly struck from the canon of behavioral/brain sciences: the neurologist Oliver Sacks appears to have greatly embellished or even invented his case studies. In a letter to his brother, Sacks described his blockbuster The Man Who Mistook His Wife for a Hat as a book of “fairy tales [...] half-report, half-imagined, half-science, half-fable”.

This is exactly how the Stanford Prison Experiment and the Rosenhan experiment got debunked—someone started rooting around in the archives and found a bunch of damning notes. I’m confused: back in the day, why was everybody meticulously documenting their research malfeasance?


(3) A SMASH HIT

If you ever took PSY 101, you’ve probably heard of this study from 1974. You show people a video of a car crash, and then you ask them to estimate how fast the cars were going, and their answer depends on what verb you use. For example, if you ask “How fast were the cars going when they smashed into each other?” people give higher speed estimates than if you ask, “How fast were the cars going when they hit each other?” (Emphasis mine). This study has been cited nearly 4,000 times, and its first author became a much sought-after expert witness who testifies about the faultiness of memory.

A blogger named Croissanthology re-ran the study with nearly 10x as many participants (446 vs. 45 in the original). The effect did not replicate. No replication is perfect, but no original study is either. And remember, this kind of effect is supposed to be so robust and generalizable that we can deploy it in court.

I think the underlying point of this research is still correct: memory is reconstructed, not simply recalled, so what we remember is not exactly what we saw. But our memories are not so fragile that a single word can overwrite them. Otherwise, if you ever got pulled over for speeding, you could just be like, “Officer, how fast was I going when my car crawled past you?”


(4) CHOICE UNDERLOAD

In one study from 1995, physicians who were shown multiple treatment options were more likely to recommend no treatment at all. The researchers thought this was a “choice overload” effect, like “ahhh there’s too many choices, so I’ll just choose nothing at all”. In contrast, a new study from 2025 found that when physicians were shown multiple treatment options, they were somewhat more likely to recommend a treatment.

I think “choice overload” is like many effects we discover in psychology: can it happen? Yes. Can the opposite also happen? Also yes. When does it go one way, and when does it go the other? Ahhh you’re showing me too many options I don’t know.


(5) THE TALE OF THE TWO-INCH DOG

Okay, enough dumping on other people’s research. It’s my turn in the hot seat.

In 2022, my colleague Jason Dana and I published a paper showing that people don’t know how public opinion has changed. Like this:

A new paper by Irina Vartanova, Kimmo Eriksson, and Pontus Strimling reanalyzes our data and finds that actually, people are great at knowing how public opinion has changed.

What gives? We come to different conclusions because we ask different questions. Jason and I ask, “When people estimate change, how far off are they from the right answer?” Vartanova et al. ask, “Are people’s estimates correlated with the right answer?” These approaches seem like they should give you the same results, but they don’t, and I’ll show you why.

Imagine you ask people to estimate the size of a house, a dog, and a stapler. Vartanova’s correlation approach would say: “People know that a house is bigger than a dog, and that a dog is bigger than a stapler. Therefore, people are good at estimating the sizes of things.” Our approach would say: “People think a house is three miles long, a dog is two inches, and a stapler is 1.5 centimeters. Therefore, people are not good at estimating the sizes of things.”

I think our approach is the right one, for two reasons. First, ours is more useful. As the name implies, a correlation can only tell you about the relationships between things. So it can’t tell you whether people are good at estimating the size of a house. It can only tell you whether people think houses are bigger than dogs.

Second, I think our approach is much closer to the way people actually make these judgments in their lives. If I asked you to estimate the size of a house, you wouldn’t spontaneously be like, “Well, it’s bigger than a dog.” You’d just eyeball it. I think people do the same thing with public opinion—they eyeball it based on headlines they see, conversations they have, and vibes they remember. If I asked you, “How have attitudes toward gun control changed?” you wouldn’t be like, “Well, they’ve changed more than attitudes toward gender equality.”2

While these reanalyses don’t shift my opinion, I’m glad people are looking into shifts in opinions at all, and that they found our data interesting enough to dig into.


(6) Let’s cleanse the palate. Here’s Jiggle Kat:

“it also works if you shake your head a little.”

(7) THROWN FOR A LOOP

THE LOOP is an online magazine produced by my friends Slime Mold Time Mold. The newest issue includes:

  • a study showing that people maybe like orange juice more when you add potassium to it

  • a pseudonymous piece by me

  • scientific skepticism of the effectiveness of the Squatty Potty, featuring this photo:

This issue of THE LOOP was assembled at Inkhaven, a blogging residency that is currently open for applications. I visited the first round of this program and was very impressed.


(8) LEARN FROM GWERN

Also at Inkhaven, I interviewed the pseudonymous blogger Gwern about his writing process. Gwern is kind of hard to explain. He’s famous on some parts of the internet for predicting the “scaling hypothesis”—the fact that progress in AI would come from dumping way more data into the models. But he also writes poetry, does self-experiments, and sustains himself on $12,000 a year. He reads 10 hours a day every day, and then occasionally writes for 30 minutes. Here’s what he said when I was like, “Very few people do experiments and post them on the internet. Why do you do it?”

I did it just because it seemed obviously correct and because… Yeah. I mean, it does seem obviously correct.

For more on what I learned by interviewing a bunch of bloggers, see I Know Your Secret.


(9) ART NOUVEAU RICHE

I really like this article by the artist known as fnnch: How to Make a Living as an Artist. It’s super practical and clear-headed writing on a subject that is usually more stressed about than thought about. Here’s a challenge: which of these seven images became successful, allowing fnnch to do art full time?

I’ll give the answer at the bottom of the post.


(10) A WEB OF LIES

Anyone who grew up in the pre-internet days probably heard the myth that “you swallow eight spiders every year in your sleep”, and back then, we just had to believe whatever we heard.

Post-internet, anyone can quickly discover that this “fact” was actually a deliberate lie spread by a journalist named Lisa Birgit Holst. Holst included the “eight spiders” myth in a 1993 article in a magazine called PC Insider, using it as an example of exactly the kind of hogwash that spreads easily online.

That is, anyway, what most sources will tell you. But if you dig a little deeper, you’ll discover that the whole story about Lisa Birgit Holst is also made up. “Lisa Birgit Holst” is an anagram of “This is a big troll”; the founder of Snopes claims he came up with it in his younger and wilder days. The true origin of the spiders myth remains unknown.


(11) I’D LIKE TO SPEAK TO A MANAGER 19 TIMES A DAY

In 2015, Reagan National Airport in DC received 8,760 noise complaints; 6,852 of those complaints (78%) came from a single household, meaning the people living there called to complain an average of 19 times a day. This seems to be common both across airports and across complaint systems in general: the majority of gripes usually comes from a few prolific gripers. Some of these systems are legally mandated to investigate every complaint, so this means a handful of psychotic people with telephones—or now, LLMs—can waste millions of dollars. I keep calling to complain about this, but nobody ever does anything about it.


(12) BE THERE OR BE 11 SQUARES


Did you know that this is the most compact known way to pack 11 squares together into a larger square?

Really makes you think about the mindset of whoever made the universe, am I right?

(More here.)


(13) NOW WE’RE COOKING WITH NO GAS

digs up the “world’s saddest cookbook” and finds that it’s…pretty good?

how does she make the milkshake in the microwave??

He successfully makes steak and eggs, two things that are supposed to be impossible in the microwave. The only thing you can’t make? Multiple potatoes.

There’s a reason the book is called Microwave Cooking for One and not Microwave Cooking for a Large, Loving Family. […] It’s because microwave cooking becomes exponentially more complicated as you increase the number of guests. […] Baking potatoes in the microwave is an NP-hard problem.


NEWS FROM EXPERIMENTAL HISTORY HQ


And finally, the answer to the question I posed earlier: the art that made fnnch famous was the honey bear. Go figure!

Experimental History will save you a seat on the UFO

1

I know people will be like “there are hundreds of studies that confirm cognitive dissonance”. But if you look at that study that didn’t replicate, it had 10 participants per condition. That’s way too few to detect anything interesting—you need 46 men and 46 women just to demonstrate the fact that men weigh more than women, on average. Many of those other cognitive dissonance studies have similarly tiny samples, so their existence doesn’t put me at ease. Plus, the theorizing here is so squishy that many different patterns of results could arguably confirm or disconfirm the theory: here’s someone arguing that, in fact, the failure to replicate was actually a success.
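For the curious, that 46-per-group figure checks out with the standard power formula. Here’s a minimal Python sketch, assuming the commonly cited standardized effect size of roughly d ≈ 0.59 for the sex difference in body weight and the usual conventions (α = 0.05, 80% power); the function name and the exact d value are my assumptions, not from the studies discussed:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample test
    detecting a standardized mean difference d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-tailed
    z_power = NormalDist().inv_cdf(power)          # value for desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.59))  # → 46 per group
```

Run it with d = 0.59 and you get 46 people per group; a study with 10 per condition, by contrast, only has decent power for effects several times larger than that.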

A reporter tracked down Elliot Aronson, a student of Festinger and a dissonance researcher himself, and posed the following question to him:

I asked him how the theory could be falsified, since any choice a person made could be attributed to dissonance. “It’s hard to disprove anything,” he said.

Very true, on many levels.

2

There’s one more point where we disagree. Vartanova et al. point out that 70% of estimates are in the right direction—as in, if support for gun control went down, 70% of participants correctly guessed that it went down. The researchers look at that number and go, “That seems pretty good”. We look at the exact same number and go, “That seems pretty bad”. Obviously this is a judgment call, but getting the direction right is such a low bar that we think it’s remarkable so many people don’t clear it. Getting the direction of change wrong is a bit like saying that a dog is bigger than a house.

Underrated ways to change the world, vol. II

2026-02-04 00:28:19

photo cred: my dad

Underrated Ways to Change the World is one of my most-read posts of all time, I think because people see the state of the world and they’re like, “Oh no, someone should do something about this!” and then they’re like “But what should I do about this?” Every problem seems so impossibly large and complicated, where do you even start?

You start by realizing that nobody can clean up this mess single-handedly, which is fine, because we’ve got roughly 16 billion other hands at the ready. All any of us have to do is find some neglected corner and start scrubbing.

That’s why I take note whenever I spot someone who seems uncommonly clever at making things better, or whenever I trip over a problem that doesn’t seem to have anyone fixing it. I present them to you here in the hopes that they’ll inspire you as they’ve inspired me.

1. ANSWER AN IMPORTANT BUT UNSEXY QUESTION

According to this terrific profile, Donald Shoup “has a strong claim on being the scholar who will have had the greatest impact on your day-to-day life”. Shoup did not study cancer, nuclear physics, or AI. No, Shoup studied parking. He spent his whole career documenting the fact that “free” parking ultimately backfires, and it’s better to charge for parking instead and use the revenues to make neighborhoods nicer: plant trees, spruce up the parks, keep the sidewalks clean.1

Shoup’s ideas have been adopted all over the world, with heartening results. When you price parking appropriately, traffic goes down, fewer people get tickets, and you know there’s going to be a space waiting for you when you arrive.

Many so-called “thought leaders” strive for such an impact and never come close. What made Shoup so effective? Three things, says his student M. Nolan Gray:

  1. He picked an unsexy topic where low-hanging fruit was just waiting to be picked.

  2. He made his ideas palatable to all sorts of politics, explaining to conservatives, libertarians, progressives, and socialists how pay-for-parking regimes fit into each of their ideologies.2

  3. He maintained strict message discipline. When asked about the Israel-Palestine protests on campus, he reportedly responded, “I’m just wondering where they all parked”.

So the next time you find a convenient parking spot, thank Shoup, and the next time you want to apply your wits to improving the world, be Shoup.

2. BE A PUBLIC CHARACTER

Jane Jacobs, the great urban theorist, once wrote that the health of a neighborhood depends on its “public characters”.3 For instance, two public characters in Jacobs’ neighborhood are Mr. and Mrs. Jaffe, who own a convenience store. On one winter morning, Jacobs observes the Jaffes provide the following services to the neighborhood, all free of charge:

  • supervised the small children crossing at the corner on the way to [school]

  • lent an umbrella to one customer and a dollar to another

  • took custody of a watch to give the repair man across the street when he opened later

  • gave out information on the range of rents in the neighborhood to an apartment seeker

  • listened to a tale of domestic difficulty and offered reassurance

  • told some rowdies they could not come in unless they behaved and then defined (and got) good behavior

  • provided an incidental forum for half a dozen conversations among customers who dropped in for oddments

  • set aside certain newly arrived papers and magazines for regular customers who would depend on getting them

  • advised a mother who came for a birthday present not to get the ship-model kit because another child going to the same birthday party was giving that

Some people think they can’t contribute to the world because they have no unique skills. How can you help if you don’t know kung fu or brain surgery? But as Jacobs writes, “A public character need have no special talents or wisdom to fulfill his function—although he often does. He just needs to be present [...] his main qualification is that he is public, that he talks to lots of different people.” Sometimes all we need is a warm body that is willing to be extra warm.

3. MAKE A SOCIAL NUCLEATION SITE

I once did a high school science fair experiment where I put Mentos in different carbonated beverages and measured the height of the resulting geysers. The scientific value of this project was, let’s say, limited, but I did learn something interesting: despite how it looks to the naked eye, bubbles don’t come from nowhere. They only form at nucleation sites—little pits and scratches where molecules can gather until they reach critical mass.

the title page of my science fair report (photo cred: my dad)

The same thing is true of human relationships. People are constantly crashing against each other in the great sea of humanity, but only under special conditions do they form the molecular bonds of friendship. As far as I can tell, these social nucleation sites only appear in the presence of what I would call unreasonable attentiveness.

For instance, my freshman year hallmates were uncommonly close because our resident advisor was uncommonly intense. Most other groups shuffle halfheartedly through the orientation day scavenger hunt; Kevin instructed us to show up in gym shorts and running shoes, and barked at us back and forth across campus as we attempted to locate the engineering library and the art museum. When we narrowly missed first place, he hounded the deans until they let us share in the coveted grand prize, a trip to Six Flags.

We bonded after that, not just because we had all gotten our brains rattled at the same frequency on the Superman rollercoaster, but because we could all share a knowing look with each other like, “This guy, right?” Kevin’s unreasonable attentiveness made our hallway A Thing. He created a furrow in the fabric of social space-time where a gaggle of 18-year-olds could glom together.

Being in the presence of unreasonable attentiveness isn’t always pleasant, but then, nucleation sites are technically imperfections. Bubbles don’t form in a perfectly smooth glass, and human groups don’t form in perfectly smooth experiences. Unreasonable attentiveness creates the slight unevenness that helps people realize they need something to hold onto—namely, each other.

4. SELL ONIONS ON THE INTERNET

Peter Askew didn’t intend to become an onion merchant. He just happened to be a compulsive buyer of domain names, and when he noticed that VidaliaOnions.com was up for sale, he snagged it. He then discovered that some people love Vidalia onions. Like, really love them:

During a phone order one season – 2018 I believe – a customer shared this story where he smuggled some Vidalias onto his vacation cruise ship, and during each meal, would instruct the server to ‘take this onion to the back, chop it up, and add it onto my salad.’

But these allium aficionados didn’t have a good way to get in-season onions because Vidalias can only be grown in Georgia, and it’s a pain for small farms to maintain a direct-to-consumer shipping business on the side. Enter Askew, who now makes a living by pleasing the Vidalia-heads:

Last season, while I called a gentleman back regarding a phone order, his wife answered. While I introduced myself, she interrupted me mid-sentence and hollered in exaltation to her husband: “THE VIDALIA MAN! THE VIDALIA MAN! PICK UP THE PHONE!”

People have polarized views of business these days. Some people think we should feed grandma to the economy so it can grow bigger, while other people think we should gun down CEOs in the street. VidaliaOnions.com is, I think, a nice middle ground: you find a thing people want, you give it to them, you pocket some profit. So if you want an honest day’s work, maybe figure out what else people want chopped up and put on their cruise ship salads.

I was going to make a joke about Vidalia onions being a subpar cruise food because they don’t prevent scurvy but it turns out they actually contain a meaningful amount of vitamin C so wow maybe these things really are as great as they say (source)

5. BE AN HONEST BROKER IN AN OTHERWISE SKEEVY INDUSTRY

I know a handful of people who have needed immigration lawyers, and they all say the same thing: there are no good immigration lawyers.

I think this is because the most prosocially-minded lawyers become public defenders or work at nonprofits representing cash-strapped clients, while the most capable and amoral lawyers go to white-shoe firms where they can make beaucoup bucks representing celebrity murderers and Halliburton. This leaves a doughnut hole for people who aren’t indigent, but also aren’t Intel. So if you want to help people, but you also don’t want to make peanuts, you could do a lot of good by being an honest and competent immigration lawyer.

I think there are lots of jobs like that, roles that don’t get good people because they aren’t sacrificial enough to attract the do-gooders and they aren’t lucrative enough to attract the overachievers. Home repair, movers, daycares, nursing homes, local news, city government—these are places where honesty and talent can matter a lot, but supply is low.

So if your career offers you the choice of being a starving saint or a wealthy sinner, consider being a middle-class mensch instead. You may not be helping the absolute neediest people, and you may not be able to afford a yacht, but there are lots of folks out there who would really like some help getting their visas renewed, and they’d be very happy to meet you.


6. IMPROVE A STATISTIC

I have this game I like to play called Viral Statistics Bingo, where you find statistics that have gone viral on the internet and you try to trace them back to their original source. You’ll usually find that they have one of five dubious origins:

  • A crummy study done in like 1904

  • A study that was done on mice

  • A book that’s out of print and now no one can find it

  • A complete misinterpretation of the data

  • It’s just something some guy said once

That means anyone with sufficient motivation can render a public service by improving the precision of a famous number. For example, the sex worker/data scientist realized that no one has any idea what percentage of sex workers are victims of human trafficking. By combining her own surveys with re-analysis of publicly available data, she estimates that it’s 3.2%. That number is probably not exactly right, but then, no statistic is exactly right—the point is that it puts us in the right ballpark, that you can check her work for yourself, and that it’s a lot better than basing our numbers on a study done in mice.

7. BE A HOBBIT

The US does a bad job regulating clinical trials, and it means we don’t invent as many life-saving medicines as we could. is trying to change that, and she says that scientists and doctors often give her damning information that would be very helpful for her reform efforts. But her sources refuse to go on the record because it might put their careers in a bit of jeopardy. Not real jeopardy, mind you, like if you drive your minivan into the dean’s office or if you pants the director of the NIH. We’re talking mild jeopardy, like you might be 5% less likely to win your next grant.

She refers to this as “hobbitian courage”, as in, not the kind of courage required to meet an army of Orcs on the battlefield, but the courage required to take a piece of jewelry on a field trip to a volcano:

The quieter, hobbitian form of courage that clinical development reform (or any other hard systems problem) requires is humble: a researcher agreeing to let you cite them, an administrator willing to deviate from an inherited checklist, a policymaker ready to question a default.

It’s understandable that most people don’t want to risk their lives or blow up their careers to save the world. But most situations don’t actually call for the ultimate sacrifice. So if you’re not willing to fall on your sword, consider: would you fall on a thumbtack instead?

if you refuse to speak up about injustice even a little bit, you’ll end up looking like this (source)

8. MAKE YOUR DAMN SYSTEM WORK

Every human lives part-time in a Kafka novel. In between working, eating, and sleeping, you must also navigate the terrors of various bureaucracies that can do whatever they want to you with basically no consequences.

For example, if you have the audacity to go to a hospital in the US, you will receive mysterious bills for months afterwards (“You owe us $450 because you went #2 in an out-of-network commode”). If you work at a university, you have to wait weeks for an Institutional Review Board to tell you whether it’s okay to ask people how much they like Pop-Tarts. The IRS knows how much you owe in taxes, but instead of telling you, you’re supposed to guess, and if you guess wrong, you owe them additional money—it’s like playing the world’s worst game show, and the host also has a monopoly on the legitimate means of violence.

If you can de-gum the gears of one of these systems—even a sub-sub-sub-system!—you could improve the lives of millions of people. To pick a totally random example, if you work for the Department of Finance for the City of Chicago, and somebody is like “Hey, this very nice blogger just moved to town and he didn’t know that you have to get a sticker from us in order to have a vehicle inside city limits, let’s charge him a $200 fine!”, you could say in reply, “What if we didn’t do that? What if we asked him to get the sticker instead, and only fined him if he didn’t follow through? Because seriously, how are people supposed to know about this sticker system? When you move to Chicago, does the ghostly form of JB Pritzker appear to you in a dream and explain that you need both a sticker from the state of Illinois, which goes on the license plate, and a sticker from the city of Chicago, which goes on your windshield? Do we serve the people, or do we terrorize them?” Just as one example.

“oOoOoOo don’t forget to move your car during street cleaning on the third Thursday of every month!!!” (source)

9. BE GOOD AUDIENCE

I used to perform at a weekly standup gig when I lived in the UK, and this guy Wally would always be there in the second row. I got the impression that he didn’t have anywhere else to be, but we didn’t mind, because Wally was Good Audience.

When the jokes were good, he laughed loud and hard, and when the jokes were bad, he politely waited for more good jokes. Wally brought his friends, they brought their friends, and having more Good Audience made the acts better, too. (When comedians only perform for each other, no one laughs.) Eventually the gig got big, so big that it would sell out every week, and we’d have to set aside a ticket for Wally or else he wouldn’t get in. Ten years later, that show is still running.

Fran Lebowitz once said that the AIDS epidemic wiped out not just a generation of artists, but also a generation of audience. Great writers, performers, filmmakers, etc. cannot exist without great readers, watchers, and commentators—and not just because they open their wallets and put butts in seats, but because they pluck the diamonds out of the rough, they show it to their friends and pass it on to their kids, they raise it above their heads while wading through the sea of slop, shouting, “This! Look at this!”

10. ACQUAINT YOURSELF

Years ago, my friend Drew was visiting me when we noticed a whiff of natural gas in the hallway outside my apartment. I thought it was nothing—my building wasn’t the nicest, and it often smelled of many things—but Drew, who is twice as affable and Midwestern as I am, insisted we call the gas company. A bored technician arrived and halfheartedly waved his gas-detecting wand around my neighbor’s door, at which point his eyes got wide. He rushed to the basement, where he saw the gas meter spinning wildly, meaning that gas was pouring into my neighbor’s apartment.

“We gotta get this door open,” he said.

No one answered when we pounded on the door, so we summoned the fire department, who busted open a window and found my neighbor unconscious on the floor. His stove’s burners were on, but not lit, turning his apartment into a gas chamber. They told us the guy probably would have died if someone hadn’t called, and the building might have caught fire.

I was glad the guy survived, but I was ashamed that he had to be saved by a total stranger, while his own neighbor was ready to walk on by. I should have known him! I should have brought him Christmas cookies! We should have played checkers in the park on Saturdays! In the days afterward, I fantasized about knocking on his door or leaving him a note, like, “Hey! I notice you are elderly and alone, and I am young and nearby, maybe we should get a Tuesdays with Morrie thing going?”

But I never did that. I was too intimidated. Besides, am I supposed to become bosom buddies with every schmuck who lives in my building? Who’s got time for that?

Now that I’ve lived among many schmucks in many different buildings, I realize that I didn’t need to be this guy’s best friend. I just needed to be his acquaintance. If I knew his name, if I had even spoken to him once in the hallway, then when I got a noseful of gas outside his door, I would have thought to myself, “Oh, Rob’s apartment shouldn’t smell like that.” Rob ‘n’ me were never going to be Mitch and Morrie, but we could have easily been Guys Who Kind of Recognize Each Other and Would Be Willing to Report the Presence of Dangerous Chemicals on Each Other’s Behalf.

DEATH STAR SUPERSTARS

A lot of well-intentioned people suffer from what we might call Superhero Syndrome: they want to save the world, but they want it to be saved by them in particular. They want to be the one who blows up the Death Star, not the one who washes the X-Wings.

This is a seductive fantasy because it disguises selfishness as sacrifice. It promises to excuse a lifetime of mediocrity with one great gesture, to pay off all your karmic debts in one fell swoop. The result is a world full of heroes-in-waiting who comfort themselves by thinking they would jump on a grenade if the situation ever presented itself, which fortunately it never does.

We do occasionally need someone to shoot torpedoes into the exhaust port of a giant doomsday machine. But most of the time, we need people who sell onions, people who make sure the kids get to school safely, people who will show up to the comedy gig and laugh. I’d like to live in a world full of people like that, and I’d be happy to pay for the sticker that lets me park there.

Experimental History would tell you about a gas leak right away, promise

1. Free parking sounds nice, so why isn’t it a good idea? A few reasons: it increases housing prices through mandatory off-street parking requirements, prevents walkable neighborhoods from being built, transfers money from poorer people to richer people (everyone pays to maintain the curb, but only car owners benefit from it), wastes time (the average driver may spend something like 50 hours a year just looking for a spot), and causes congestion—in dense areas, up to 1/3 of drivers are trying to find parking at any given time.

2. This is, by the way, a terrific example of what I call memetic flexibility.

3. This comes from The Death and Life of Great American Cities, pp. 60–61, 68.
