
Co-founder of Y Combinator. English computer scientist, essayist, entrepreneur, investor, and author.

Blog of Paul Graham

What to Do


March 2025

What should one do? That may seem a strange question, but it's not meaningless or unanswerable. It's the sort of question kids ask before they learn not to ask big questions. I only came across it myself in the process of investigating something else. But once I did, I thought I should at least try to answer it.

So what should one do? One should help people, and take care of the world. Those two are obvious. But is there anything else? When I ask that, the answer that pops up is Make good new things.

I can't prove that one should do this, any more than I can prove that one should help people or take care of the world. We're talking about first principles here. But I can explain why this principle makes sense. The most impressive thing humans can do is to think. It may be the most impressive thing that can be done. And the best kind of thinking, or more precisely the best proof that one has thought well, is to make good new things.

I mean new things in a very general sense. Newton's physics was a good new thing. Indeed, the first version of this principle was to have good new ideas. But that didn't seem general enough: it didn't include making art or music, for example, except insofar as they embody new ideas. And while they may embody new ideas, that's not all they embody, unless you stretch the word "idea" so uselessly thin that it includes everything that goes through your nervous system.

Even for ideas that one has consciously, though, I prefer the phrasing "make good new things." There are other ways to describe the best kind of thinking. To make discoveries, for example, or to understand something more deeply than others have. But how well do you understand something if you can't make a model of it, or write about it? Indeed, trying to express what you understand is not just a way to prove that you understand it, but a way to understand it better.

Another reason I like this phrasing is that it biases us toward creation. It causes us to prefer the kind of ideas that are naturally seen as making things rather than, say, making critical observations about things other people have made. Those are ideas too, and sometimes valuable ones, but it's easy to trick oneself into believing they're more valuable than they are. Criticism seems sophisticated, and making new things often seems awkward, especially at first; and yet it's precisely those first steps that are most rare and valuable.

Is newness essential? I think so. Obviously it's essential in science. If you copied a paper of someone else's and published it as your own, it would seem not merely unimpressive but dishonest. And it's similar in the arts. A copy of a good painting can be a pleasing thing, but it's not impressive in the way the original was. Which in turn implies it's not impressive to make the same thing over and over, however well; you're just copying yourself.

Note though that we're talking about a different kind of should with this principle. Taking care of people and the world are shoulds in the sense that they're one's duty, but making good new things is a should in the sense that this is how to live to one's full potential. Historically most rules about how to live have been a mix of both kinds of should, though usually with more of the former than the latter. [1]

For most of history the question "What should one do?" got much the same answer everywhere, whether you asked Cicero or Confucius. You should be wise, brave, honest, temperate, and just, uphold tradition, and serve the public interest. There was a long stretch where in some parts of the world the answer became "Serve God," but in practice it was still considered good to be wise, brave, honest, temperate, and just, uphold tradition, and serve the public interest. And indeed this recipe would have seemed right to most Victorians. But there's nothing in it about taking care of the world or making new things, and that's a bit worrying, because it seems like this question should be a timeless one. The answer shouldn't change much.

I'm not too worried that the traditional answers don't mention taking care of the world. Obviously people only started to care about that once it became clear we could ruin it. But how can making good new things be important if the traditional answers don't mention it?

The traditional answers were answers to a slightly different question. They were answers to the question of how to be, rather than what to do. The audience didn't have a lot of choice about what to do. The audience up till recent centuries was the landowning class, which was also the political class. They weren't choosing between doing physics and writing novels. Their work was foreordained: manage their estates, participate in politics, fight when necessary. It was ok to do certain other kinds of work in one's spare time, but ideally one didn't have any. Cicero's De Officiis is one of the great classical answers to the question of how to live, and in it he explicitly says that he wouldn't even be writing it if he hadn't been excluded from public life by recent political upheavals. [2]

There were of course people doing what we would now call "original work," and they were often admired for it, but they weren't seen as models. Archimedes knew that he was the first to prove that a sphere has 2/3 the volume of the smallest enclosing cylinder and was very pleased about it. But you don't find ancient writers urging their readers to emulate him. They regarded him more as a prodigy than a model.
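
A quick check of that ratio, using the standard volume formulas rather than Archimedes's own argument: a sphere of radius $r$ has volume $V_{\text{sphere}} = \tfrac{4}{3}\pi r^3$, and its smallest enclosing cylinder has radius $r$ and height $2r$, so

$$V_{\text{cylinder}} = \pi r^2 \cdot 2r = 2\pi r^3, \qquad \frac{V_{\text{sphere}}}{V_{\text{cylinder}}} = \frac{\tfrac{4}{3}\pi r^3}{2\pi r^3} = \frac{2}{3}.$$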

Now many more of us can follow Archimedes's example and devote most of our attention to one kind of work. He turned out to be a model after all, along with a collection of other people that his contemporaries would have found it strange to treat as a distinct group, because the vein of people making new things ran at right angles to the social hierarchy.

What kinds of new things count? I'd rather leave that question to the makers of them. It would be a risky business to try to define any kind of threshold, because new kinds of work are often despised at first. Raymond Chandler was writing literal pulp fiction, and he's now recognized as one of the best writers of the twentieth century. Indeed this pattern is so common that you can use it as a recipe: if you're excited about some kind of work that's not considered prestigious and you can explain what everyone else is overlooking about it, then this is not merely a kind of work that's ok to do, but one to seek out.

The other reason I wouldn't want to define any thresholds is that we don't need them. The kind of people who make good new things don't need rules to keep them honest.

So there's my guess at a set of principles to live by: take care of people and the world, and make good new things. Different people will do these to varying degrees. There will presumably be lots who focus entirely on taking care of people. There will be a few who focus mostly on making new things. But even if you're one of those, you should at least make sure that the new things you make don't net harm people or the world. And if you go a step further and try to make things that help them, you may find you're ahead on the trade. You'll be more constrained in what you can make, but you'll make it with more energy.

On the other hand, if you make something amazing, you'll often be helping people or the world even if you didn't mean to. Newton was driven by curiosity and ambition, not by any practical effect his work might have, and yet the practical effect of his work has been enormous. And this seems the rule rather than the exception. So if you think you can make something amazing, you should probably just go ahead and do it.











Notes

[1] We could treat all three as the same kind of should by saying that it's one's duty to live well — for example by saying, as some Christians have, that it's one's duty to make the most of one's God-given gifts. But this seems one of those casuistries people invented to evade the stern requirements of religion: you could spend time studying math instead of praying or performing acts of charity because otherwise you were rejecting a gift God had given you. A useful casuistry no doubt, but we don't need it.

We could also combine the first two principles, since people are part of the world. Why should our species get special treatment? I won't try to justify this choice, but I'm skeptical that anyone who claims to think differently actually lives according to their principles.

[2] Confucius was also excluded from public life after ending up on the losing end of a power struggle, and presumably he too would not be so famous now if it hadn't been for this long stretch of enforced leisure.



Thanks to Trevor Blackwell, Jessica Livingston, and Robert Morris for reading drafts of this.

The Origins of Wokeness


January 2025

The word "prig" isn't very common now, but if you look up the definition, it will sound familiar. Google's isn't bad:

A self-righteously moralistic person who behaves as if superior to others.

This sense of the word originated in the 18th century, and its age is an important clue: it shows that although wokeness is a comparatively recent phenomenon, it's an instance of a much older one.

There's a certain kind of person who's attracted to a shallow, exacting kind of moral purity, and who demonstrates his purity by attacking anyone who breaks the rules. Every society has these people. All that changes is the rules they enforce. In Victorian England it was Christian virtue. In Stalin's Russia it was orthodox Marxism-Leninism. For the woke, it's social justice.

So if you want to understand wokeness, the question to ask is not why people behave this way. Every society has prigs. The question to ask is why our prigs are priggish about these ideas, at this moment. And to answer that we have to ask when and where wokeness began.

The answer to the first question is the 1980s. Wokeness is a second, more aggressive wave of political correctness, which started in the late 1980s, died down in the late 1990s, and then returned with a vengeance in the early 2010s, finally peaking after the riots of 2020.

This was not the original meaning of "woke," but it's rarely used in the original sense now. Now the pejorative sense is the dominant one. What does it mean now? I've often been asked to define both wokeness and political correctness by people who think they're meaningless labels, so I will. They both have the same definition:

An aggressively performative focus on social justice.

In other words, it's people being prigs about social justice. And that's the real problem — the performativeness, not the social justice.

Racism, for example, is a genuine problem. Not a problem on the scale that the woke believe it to be, but a genuine one. I don't think any reasonable person would deny that. The problem with political correctness was not that it focused on marginalized groups, but the shallow, aggressive way in which it did so. Instead of going out into the world and quietly helping members of marginalized groups, the politically correct focused on getting people in trouble for using the wrong words to talk about them.

As for where political correctness began, if you think about it, you probably already know the answer. Did it begin outside universities and spread to them from this external source? Obviously not; it has always been most extreme in universities. So where in universities did it begin? Did it begin in math, or the hard sciences, or engineering, and spread from there to the humanities and social sciences? Those are amusing images, but no, obviously it began in the humanities and social sciences.

Why there? And why then? What happened in the humanities and social sciences in the 1980s?

A successful theory of the origin of political correctness has to be able to explain why it didn't happen earlier. Why didn't it happen during the protest movements of the 1960s, for example? They were concerned with much the same issues. [1]

The reason the student protests of the 1960s didn't lead to political correctness was precisely that — they were student movements. They didn't have any real power. The students may have been talking a lot about women's liberation and black power, but it was not what they were being taught in their classes. Not yet.

But in the early 1970s the student protestors of the 1960s began to finish their dissertations and get hired as professors. At first they were neither powerful nor numerous. But as more of their peers joined them and the previous generation of professors started to retire, they gradually became both.

The reason political correctness began in the humanities and social sciences was that these fields offered more scope for the injection of politics. A 1960s radical who got a job as a physics professor could still attend protests, but his political beliefs wouldn't affect his work. Whereas research in sociology and modern literature can be made as political as you like. [2]

I saw political correctness arise. When I started college in 1982 it was not yet a thing. Female students might object if someone said something they considered sexist, but no one was getting reported for it. It was still not a thing when I started grad school in 1986. It was definitely a thing in 1988 though, and by the early 1990s it seemed to pervade campus life.

What happened? How did protest become punishment? Why were the late 1980s the point at which protests against male chauvinism (as it used to be called) morphed into formal complaints to university authorities about sexism? Basically, the 1960s radicals got tenure. They became the Establishment they'd protested against two decades before. Now they were in a position not just to speak out about their ideas, but to enforce them.

A new set of moral rules to enforce was exciting news to a certain kind of student. What made it particularly exciting was that they were allowed to attack professors. I remember noticing that aspect of political correctness at the time. It wasn't simply a grass-roots student movement. It was faculty members encouraging students to attack other faculty members. In that respect it was like the Cultural Revolution. That wasn't a grass-roots movement either; that was Mao unleashing the younger generation on his political opponents. And in fact when Roderick MacFarquhar started teaching a class on the Cultural Revolution at Harvard in the late 1980s, many saw it as a comment on current events. I don't know if it actually was, but people thought it was, and that means the similarities were obvious. [3]

College students larp. It's their nature. It's usually harmless. But larping morality turned out to be a poisonous combination. The result was a kind of moral etiquette, superficial but very complicated. Imagine having to explain to a well-meaning visitor from another planet why using the phrase "people of color" is considered particularly enlightened, but saying "colored people" gets you fired. And why exactly one isn't supposed to use the word "negro" now, even though Martin Luther King used it constantly in his speeches. There are no underlying principles. You'd just have to give him a long list of rules to memorize. [4]

The danger of these rules was not just that they created land mines for the unwary, but that their elaborateness made them an effective substitute for virtue. Whenever a society has a concept of heresy and orthodoxy, orthodoxy becomes a substitute for virtue. You can be the worst person in the world, but as long as you're orthodox you're better than everyone who isn't. This makes orthodoxy very attractive to bad people.

But for it to work as a substitute for virtue, orthodoxy must be difficult. If all you have to do to be orthodox is wear some garment or avoid saying some word, everyone knows to do it, and the only way to seem more virtuous than other people is to actually be virtuous. The shallow, complicated, and frequently changing rules of political correctness made it the perfect substitute for actual virtue. And the result was a world in which good people who weren't up to date on current moral fashions were brought down by people whose characters would make you recoil in horror if you could see them.

One big contributing factor in the rise of political correctness was the lack of other things to be morally pure about. Previous generations of prigs had been prigs mostly about religion and sex. But among the cultural elite these were the deadest of dead letters by the 1980s; if you were religious, or a virgin, this was something you tended to conceal rather than advertise. So the sort of people who enjoy being moral enforcers had become starved of things to enforce. A new set of rules was just what they'd been waiting for.

Curiously enough, the tolerant side of the 1960s left helped create the conditions in which the intolerant side prevailed. The relaxed social rules advocated by the old, easy-going hippy left became the dominant ones, at least among the elite, and this left nothing for the naturally intolerant to be intolerant about.

Another possibly contributing factor was the fall of the Soviet empire. Marxism had been a popular focus of moral purity on the left before political correctness emerged as a competitor, but the pro-democracy movements in Eastern Bloc countries took most of the shine off it. Especially the fall of the Berlin Wall in 1989. You couldn't be on the side of the Stasi. I remember looking at the moribund Soviet Studies section of a used bookshop in Cambridge in the late 1980s and thinking "what will those people go on about now?" As it turned out the answer was right under my nose.

One thing I noticed at the time about the first phase of political correctness was that it was more popular with women than men. As many writers (perhaps most eloquently George Orwell) have observed, women seem more attracted than men to the idea of being moral enforcers. But there was another more specific reason women tended to be the enforcers of political correctness. There was at this time a great backlash against sexual harassment; the mid 1980s were the point when the definition of sexual harassment was expanded from explicit sexual advances to creating a "hostile environment." Within universities the classic form of accusation was for a (female) student to say that a professor made her "feel uncomfortable." But the vagueness of this accusation allowed the radius of forbidden behavior to expand to include talking about heterodox ideas. Those make people uncomfortable too. [5]

Was it sexist to propose that Darwin's greater male variability hypothesis might explain some variation in human performance? Sexist enough to get Larry Summers pushed out as president of Harvard, apparently. One woman who heard the talk in which he mentioned this idea said it made her feel "physically ill" and that she had to leave halfway through. If the test of a hostile environment is how it makes people feel, this certainly sounds like one. And yet it does seem plausible that greater male variability explains some of the variation in human performance. So which should prevail, comfort or truth? Surely if truth should prevail anywhere, it should be in universities; that's supposed to be their specialty; but for decades starting in the late 1980s the politically correct tried to pretend this conflict didn't exist. [6]

Political correctness seemed to burn out in the second half of the 1990s. One reason, perhaps the main reason, was that it literally became a joke. It offered rich material for comedians, who performed their usual disinfectant action upon it. Humor is one of the most powerful weapons against priggishness of any sort, because prigs, being humorless, can't respond in kind. Humor was what defeated Victorian prudishness, and by 2000 it seemed to have done the same thing to political correctness.

Unfortunately this was an illusion. Within universities the embers of political correctness were still glowing brightly. After all, the forces that created it were still there. The professors who started it were now becoming deans and department heads. And in addition to their departments there were now a bunch of new ones explicitly focused on social justice. Students were still hungry for things to be morally pure about. And there had been an explosion in the number of university administrators, many of whose jobs involved enforcing various forms of political correctness.

In the early 2010s the embers of political correctness burst into flame anew. There were several differences between this new phase and the original one. It was more virulent. It spread further into the real world, although it still burned hottest within universities. And it was concerned with a wider variety of sins. In the first phase of political correctness there were really only three things people got accused of: sexism, racism, and homophobia (which at the time was a neologism invented for the purpose). But between then and 2010 a lot of people had spent a lot of time trying to invent new kinds of -isms and -phobias and seeing which could be made to stick.

The second phase was, in multiple senses, political correctness metastasized. Why did it happen when it did? My guess is that it was due to the rise of social media, particularly Tumblr and Twitter, because one of the most distinctive features of the second wave of political correctness was the cancel mob: a mob of angry people uniting on social media to get someone ostracized or fired. Indeed this second wave of political correctness was originally called "cancel culture"; it didn't start to be called "wokeness" till the 2020s.

One aspect of social media that surprised almost everyone at first was the popularity of outrage. Users seemed to like being outraged. We're so used to this idea now that we take it for granted, but really it's pretty strange. Being outraged is not a pleasant feeling. You wouldn't expect people to seek it out. But they do. And above all, they want to share it. I happened to be running a forum from 2007 to 2014, so I can actually quantify how much they want to share it: our users were about three times more likely to upvote something if it outraged them.

This tilt toward outrage wasn't due to wokeness. It's an inherent feature of social media, or at least this generation of it. But it did make social media the perfect mechanism for fanning the flames of wokeness. [7]

It wasn't just public social networks that drove the rise of wokeness though. Group chat apps were also critical, especially in the final step, cancellation. Imagine if a group of employees trying to get someone fired had to do it using only email. It would be hard to organize a mob. But once you have group chat, mobs form naturally.

Another contributing factor in this second wave of political correctness was the dramatic increase in the polarization of the press. In the print era, newspapers were constrained to be, or at least seem, politically neutral. The department stores that ran ads in the New York Times wanted to reach everyone in the region, both liberal and conservative, so the Times had to serve both. But the Times didn't regard this neutrality as something forced upon them. They embraced it as their duty as a paper of record — as one of the big newspapers that aimed to be chronicles of their times, reporting every sufficiently important story from a neutral point of view.

When I grew up the papers of record seemed timeless, almost sacred institutions. Papers like the New York Times and Washington Post had immense prestige, partly because other sources of news were limited, but also because they did make some effort to be neutral.

Unfortunately it turned out that the paper of record was mostly an artifact of the constraints imposed by print. [8] When your market was determined by geography, you had to be neutral. But publishing online enabled — in fact probably forced — newspapers to switch to serving markets defined by ideology instead of geography. Most that remained in business fell in the direction they'd already been leaning: left. On October 11, 2020 the New York Times announced that "The paper is in the midst of an evolution from the stodgy paper of record into a juicy collection of great narratives." [9] Meanwhile journalists, of a sort, had arisen to serve the right as well. And so journalism, which in the previous era had been one of the great centralizing forces, now became one of the great polarizing ones.

The rise of social media and the increasing polarization of journalism reinforced one another. In fact there arose a new variety of journalism involving a loop through social media. Someone would say something controversial on social media. Within hours it would become a news story. Outraged readers would then post links to the story on social media, driving further arguments online. It was the cheapest source of clicks imaginable. You didn't have to maintain overseas news bureaus or pay for month-long investigations. All you had to do was watch Twitter for controversial remarks and repost them on your site, with some additional comments to inflame readers further.

For the press there was money in wokeness. But they weren't the only ones. That was one of the biggest differences between the two waves of political correctness: the first was driven almost entirely by amateurs, but the second was often driven by professionals. For some it was their whole job. By 2010 a new class of administrators had arisen whose job was basically to enforce wokeness. They played a role similar to that of the political commissars who got attached to military and industrial organizations in the USSR: they weren't directly in the flow of the organization's work, but watched from the side to ensure that nothing improper happened in the doing of it. These new administrators could often be recognized by the word "inclusion" in their titles. Within institutions this was the preferred euphemism for wokeness; a new list of banned words, for example, would usually be called an "inclusive language guide." [10]

This new class of bureaucrats pursued a woke agenda as if their jobs depended on it, because they did. If you hire people to keep watch for a particular type of problem, they're going to find it, because otherwise there's no justification for their existence. [11] But these bureaucrats also represented a second and possibly even greater danger. Many were involved in hiring, and when possible they tried to ensure their employers hired only people who shared their political beliefs. The most egregious cases were the new "DEI statements" that some universities started to require from faculty candidates, proving their commitment to wokeness. Some universities used these statements as the initial filter and only even considered candidates who scored high enough on them. You're not hiring Einstein that way; imagine what you get instead.

Another factor in the rise of wokeness was the Black Lives Matter movement, which started in 2013 when a white man was acquitted after killing a black teenager in Florida. But this didn't launch wokeness; it was well underway by 2013.

Similarly for the Me Too Movement, which took off in 2017 after the first news stories about Harvey Weinstein's history of raping women. It accelerated wokeness, but didn't play the same role in launching it that the 80s version did in launching political correctness.

The election of Donald Trump in 2016 also accelerated wokeness, particularly in the press, where outrage now meant traffic. Trump made the New York Times a lot of money: headlines during his first administration mentioned his name at about four times the rate of previous presidents.

In 2020 we saw the biggest accelerant of all, after a white police officer asphyxiated a black suspect on video. At this point the metaphorical fire became a literal one, as violent protests broke out across America. But in retrospect this turned out to be peak woke, or close to it. By every measure I've seen, wokeness peaked in 2020 or 2021.

Wokeness is sometimes described as a mind-virus. What makes it viral is that it defines new types of impropriety. Most people are afraid of impropriety; they're never exactly sure what the social rules are or which ones they might be breaking. Especially if the rules change rapidly. And since most people already worry that they might be breaking rules they don't know about, if you tell them they're breaking a rule, their default reaction is to believe you. Especially if multiple people tell them. Which in turn is a recipe for exponential growth. Zealots invent some new impropriety to avoid. The first people to adopt it are fellow zealots, eager for new ways to signal their virtue. If there are enough of these, the initial group of zealots is followed by a much larger group, motivated by fear. They're not trying to signal virtue; they're just trying to avoid getting in trouble. At this point the new impropriety is now firmly established. Plus its success has increased the rate of change in social rules, which, remember, is one of the reasons people are nervous about which rules they might be breaking. So the cycle accelerates. [12]

What's true of individuals is even more true of organizations. Especially organizations without a powerful leader. Such organizations do everything based on "best practices." There's no higher authority; if some new "best practice" achieves critical mass, they must adopt it. And in this case the organization can't do what it usually does when it's uncertain: delay. It might be committing improprieties right now! So it's surprisingly easy for a small group of zealots to capture this type of organization by describing new improprieties it might be guilty of. [13]

How does this kind of cycle ever end? Eventually it leads to disaster, and people start to say enough is enough. The excesses of 2020 made a lot of people say that.

Since then wokeness has been in gradual but continual retreat. Corporate CEOs, starting with Brian Armstrong, have openly rejected it. Universities, led by the University of Chicago and MIT, have explicitly confirmed their commitment to free speech. Twitter, which was arguably the hub of wokeness, was bought by Elon Musk in order to neutralize it, and he seems to have succeeded — and not, incidentally, by censoring left-wing users the way Twitter used to censor right-wing ones, but without censoring either. [14] Consumers have emphatically rejected brands that ventured too far into wokeness. The Bud Light brand may have been permanently damaged by it. I'm not going to claim Trump's second victory in 2024 was a referendum on wokeness; I think he won, as presidential candidates always do, because he was more charismatic; but voters' disgust with wokeness must have helped.

So what do we do now? Wokeness is already in retreat. Obviously we should help it along. What's the best way to do that? And more importantly, how do we avoid a third outbreak? After all, it seemed to be dead once, but came back worse than ever.

In fact there's an even more ambitious goal: is there a way to prevent any similar outbreak of aggressively performative moralism in the future — not just a third outbreak of political correctness, but the next thing like it? Because there will be a next thing. Prigs are prigs by nature. They need rules to obey and enforce, and now that Darwin has cut off their traditional supply of rules, they're constantly hungry for new ones. All they need is someone to meet them halfway by defining a new way to be morally pure, and we'll see the same phenomenon again.

Let's start with the easier problem. Is there a simple, principled way to deal with wokeness? I think there is: to use the customs we already have for dealing with religion. Wokeness is effectively a religion, just with God replaced by protected classes. It's not even the first religion of this kind; Marxism had a similar form, with God replaced by the masses. [15] And we already have well-established customs for dealing with religion within organizations. You can express your own religious identity and explain your beliefs, but you can't call your coworkers infidels if they disagree, or try to ban them from saying things that contradict your religion's doctrines, or insist that the organization adopt yours as its official religion.

If we're not sure what to do about any particular manifestation of wokeness, imagine we were dealing with some other religion, like Christianity. Should we have people within organizations whose jobs are to enforce woke orthodoxy? No, because we wouldn't have people whose jobs were to enforce Christian orthodoxy. Should we censor writers or scientists whose work contradicts woke doctrines? No, because we wouldn't do this to people whose work contradicted Christian teachings. Should job candidates be required to write DEI statements? Of course not; imagine an employer requiring proof of one's religious beliefs. Should students and employees have to participate in woke indoctrination sessions in which they're required to answer questions about their beliefs to ensure compliance? No, because we wouldn't dream of catechizing people in this way about their religion. [16]

One shouldn't feel bad about not wanting to watch woke movies any more than one would feel bad about not wanting to listen to Christian rock. In my twenties I drove across America several times, listening to local radio stations. Occasionally I'd turn the dial and hear some new song. But the moment anyone mentioned Jesus I'd turn the dial again. Even the tiniest bit of being preached to was enough to make me lose interest.

But by the same token we should not automatically reject everything the woke believe. I'm not a Christian, but I can see that many Christian principles are good ones. It would be a mistake to discard them all just because one didn't share the religion that espoused them. It would be the sort of thing a religious zealot would do.

If we have genuine pluralism, I think we'll be safe from future outbreaks of woke intolerance. Wokeness itself won't go away. There will for the foreseeable future continue to be pockets of woke zealots inventing new moral fashions. The key is not to let them treat their fashions as normative. They can change what their coreligionists are allowed to say every few months if they like, but they mustn't be allowed to change what we're allowed to say. [17]

The more general problem — how to prevent similar outbreaks of aggressively performative moralism — is of course harder. Here we're up against human nature. There will always be prigs. And in particular there will always be the enforcers among them, the aggressively conventional-minded. These people are born that way. Every society has them. So the best we can do is to keep them bottled up.

The aggressively conventional-minded aren't always on the rampage. Usually they just enforce whatever random rules are nearest to hand. They only become dangerous when some new ideology gets a lot of them pointed in the same direction at once. That's what happened during the Cultural Revolution, and to a lesser extent (thank God) in the two waves of political correctness we've experienced.

We can't get rid of the aggressively conventional-minded. [18] And we couldn't prevent people from creating new ideologies that appealed to them even if we wanted to. So if we want to keep them bottled up, we have to do it one step downstream. Fortunately when the aggressively conventional-minded go on the rampage they always do one thing that gives them away: they define new heresies to punish people for. So the best way to protect ourselves from future outbreaks of things like wokeness is to have powerful antibodies against the concept of heresy.

We should have a conscious bias against defining new forms of heresy. Whenever anyone tries to ban saying something that we'd previously been able to say, our initial assumption should be that they're wrong. Only our initial assumption of course. If they can prove we should stop saying it, then we should. But the burden of proof is on them. In liberal democracies, people trying to prevent something from being said will usually claim they're not merely engaging in censorship, but trying to prevent some form of "harm". And maybe they're right. But once again, the burden of proof is on them. It's not enough to claim harm; they have to prove it.

As long as the aggressively conventional-minded continue to give themselves away by banning heresies, we'll always be able to notice when they become aligned behind some new ideology. And if we always fight back at that point, with any luck we can stop them in their tracks.

The number of true things we can't say should not increase. If it does, something is wrong.









Notes

[1] Why did 1960s radicals focus on the causes they did? One of the people who reviewed drafts of this essay explained this so well that I asked if I could quote him:

The middle-class student protestors of the New Left rejected the socialist/Marxist left as unhip. They were interested in sexier forms of oppression uncovered by cultural analysis (Marcuse) and abstruse "Theory". Labor politics became stodgy and old-fashioned. This took a couple generations to work through. The woke ideology's conspicuous lack of interest in the working class is the tell-tale sign. Such fragments as are, er, left of the old left are anti-woke, and meanwhile the actual working class shifted to the populist right and gave us Trump. Trump and wokeness are cousins.

The middle-class origins of wokeness smoothed its way through the institutions because it had no interest in "seizing the means of production" (how quaint such phrases seem now), which would quickly have run up against hard state and corporate power. The fact that wokeness only expressed interest in other kinds of class (race, sex, etc) signalled compromise with existing power: give us power within your system and we'll bestow the resource we control — moral rectitude — upon you. As an ideological stalking horse for gaining control over discourse and institutions, this succeeded where a more ambitious revolutionary program would not have.

[2] It helped that the humanities and social sciences also included some of the biggest and easiest undergrad majors. If a political movement had to start with physics students, it could never get off the ground; there would be too few of them, and they wouldn't have the time to spare.

At the top universities these majors are not as big as they used to be, though. A 2022 survey found that only 7% of Harvard undergrads plan to major in the humanities, vs nearly 30% during the 1970s. I expect wokeness is at least part of the reason; when undergrads consider majoring in English, it's presumably because they love the written word and not because they want to listen to lectures about racism.

[3] The puppet-master-and-puppet character of political correctness became clearly visible when a bakery near Oberlin College was falsely accused of race discrimination in 2016. In the subsequent civil trial, lawyers for the bakery produced a text message from Oberlin Dean of Students Meredith Raimondo that read "I'd say unleash the students if I wasn't convinced this needs to be put behind us."

[4] The woke sometimes claim that wokeness is simply treating people with respect. But if it were, that would be the only rule you'd have to remember, and this is comically far from being the case. My younger son likes to imitate voices, and at one point when he was about seven I had to explain which accents it was currently safe to imitate publicly and which not. It took about ten minutes, and I still hadn't covered all the cases.

[5] In 1986 the Supreme Court ruled that creating a hostile work environment could constitute sex discrimination, which in turn affected universities via Title IX. The court specified that the test of a hostile environment was whether it would bother a reasonable person, but since for a professor merely being the subject of a sexual harassment complaint would be a disaster whether the complainant was reasonable or not, in practice any joke or remark remotely connected with sex was now effectively forbidden. Which meant we'd now come full circle to Victorian codes of behavior, when there was a large class of things that might not be said "with ladies present."

[6] Much as they tried to pretend there was no conflict between diversity and quality. But you can't simultaneously optimize for two things that aren't identical. What diversity actually means, judging from the way the term is used, is proportional representation, and unless you're selecting a group whose purpose is to be representative, like poll respondents, optimizing for proportional representation has to come at the expense of quality. This is not because of anything about representation; it's the nature of optimization; optimizing for x has to come at the expense of y unless x and y are identical.

[7] Maybe societies will eventually develop antibodies to viral outrage. Maybe we were just the first to be exposed to it, so it tore through us like an epidemic through a previously isolated population. I'm fairly confident that it would be possible to create new social media apps that were less driven by outrage, and an app of this type would have a good chance of stealing users from existing ones, because the smartest people would tend to migrate to it.

[8] I say "mostly" because I have hopes that journalistic neutrality will return in some form. There is some market for unbiased news, and while it may be small, it's valuable. The rich and powerful want to know what's really going on; that's how they became rich and powerful.

[9] The Times made this momentous announcement very informally, in passing in the middle of an article about a Times reporter who'd been criticized for inaccuracy. It's quite possible no senior editor even approved it. But it's somehow appropriate that this particular universe ended with a whimper rather than a bang.

[10] As the acronym DEI goes out of fashion, many of these bureaucrats will try to go underground by changing their titles. It looks like "belonging" will be a popular option.

[11] If you've ever wondered why our legal system includes protections like the separation of prosecutor, judge, and jury, the right to examine evidence and cross-examine witnesses, and the right to be represented by legal counsel, the de facto parallel legal system established by Title IX makes that all too clear.

[12] The invention of new improprieties is most visible in the rapid evolution of woke nomenclature. This is particularly annoying to me as a writer, because the new names are always worse. Any religious observance has to be inconvenient and slightly absurd; otherwise gentiles would do it too. So "slaves" becomes "enslaved individuals." But web search can show us the leading edge of moral growth in real time: if you search for "individuals experiencing slavery" you will as of this writing find five legit attempts to use the phrase, and you'll even find two for "individuals experiencing enslavement."

[13] Organizations that do dubious things are particularly concerned with propriety, which is how you end up with absurdities like tobacco and oil companies having higher ESG ratings than Tesla.

[14] Elon did something else that tilted Twitter rightward though: he gave more visibility to paying users. Paying users lean right on average, because people on the far left dislike Elon and don't want to give him money. Elon presumably knew this would happen. On the other hand, the people on the far left have only themselves to blame; they could tilt Twitter back to the left tomorrow if they wanted to.

[15] It even, as James Lindsay and Peter Boghossian pointed out, has a concept of original sin: privilege. Which means that, unlike in Christianity's egalitarian version, people have varying degrees of it. An able-bodied straight white American male is born with such a load of sin that only by the most abject repentance can he be saved.

Wokeness also shares something rather funny with many actual versions of Christianity: like God, the people for whose sake wokeness purports to act are often revolted by the things done in their name.

[16] There is one exception to most of these rules: actual religious organizations. It's reasonable for them to insist on orthodoxy. But they in turn should declare that they're religious organizations. It's rightly considered shady when something that appears to be an ordinary business or publication turns out to be a religious organization.

[17] I don't want to give the impression that it will be simple to roll back wokeness. There will be places where the fight inevitably gets messy — particularly within universities, which everyone has to share, yet which are currently the most pervaded by wokeness of any institutions.

[18] You can however get rid of aggressively conventional-minded people within an organization, and in many if not most organizations this would be an excellent idea. Even a handful of them can do a lot of damage. I bet you'd feel a noticeable improvement going from a handful to none.



Thanks to Sam Altman, Ben Miller, Daniel Gackle, Robin Hanson, Jessica Livingston, Greg Lukianoff, Harj Taggar, Garry Tan, and Tim Urban for reading drafts of this.

Writes and Write-Nots


October 2024

I'm usually reluctant to make predictions about technology, but I feel fairly confident about this one: in a couple decades there won't be many people who can write.

One of the strangest things you learn if you're a writer is how many people have trouble writing. Doctors know how many people have a mole they're worried about; people who are good at setting up computers know how many people aren't; writers know how many people need help writing.

The reason so many people have trouble writing is that it's fundamentally difficult. To write well you have to think clearly, and thinking clearly is hard.

And yet writing pervades many jobs, and the more prestigious the job, the more writing it tends to require.

These two powerful opposing forces, the pervasive expectation of writing and the irreducible difficulty of doing it, create enormous pressure. This is why eminent professors often turn out to have resorted to plagiarism. The most striking thing to me about these cases is the pettiness of the thefts. The stuff they steal is usually the most mundane boilerplate — the sort of thing that anyone who was even halfway decent at writing could turn out with no effort at all. Which means they're not even halfway decent at writing.

Till recently there was no convenient escape valve for the pressure created by these opposing forces. You could pay someone to write for you, like JFK, or plagiarize, like MLK, but if you couldn't buy or steal words, you had to write them yourself. And as a result nearly everyone who was expected to write had to learn how.

Not anymore. AI has blown this world open. Almost all pressure to write has dissipated. You can have AI do it for you, both in school and at work.

The result will be a world divided into writes and write-nots. There will still be some people who can write. Some of us like it. But the middle ground between those who are good at writing and those who can't write at all will disappear. Instead of good writers, ok writers, and people who can't write, there will just be good writers and people who can't write.

Is that so bad? Isn't it common for skills to disappear when technology makes them obsolete? There aren't many blacksmiths left, and it doesn't seem to be a problem.

Yes, it's bad. The reason is something I mentioned earlier: writing is thinking. In fact there's a kind of thinking that can only be done by writing. You can't make this point better than Leslie Lamport did:

If you're thinking without writing, you only think you're thinking.

So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too.

This situation is not unprecedented. In preindustrial times most people's jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be.

It will be the same with writing. There will still be smart people, but only those who choose to be.









Thanks to Jessica Livingston, Ben Miller, and Robert Morris for reading drafts of this.

Founder Mode


September 2024

At a YC event last week Brian Chesky gave a talk that everyone who was there will remember. Most founders I talked to afterward said it was the best they'd ever heard. Ron Conway, for the first time in his life, forgot to take notes. I'm not going to try to reproduce it here. Instead I want to talk about a question it raised.

The theme of Brian's talk was that the conventional wisdom about how to run larger companies is mistaken. As Airbnb grew, well-meaning people advised him that he had to run the company in a certain way for it to scale. Their advice could be optimistically summarized as "hire good people and give them room to do their jobs." He followed this advice and the results were disastrous. So he had to figure out a better way on his own, which he did partly by studying how Steve Jobs ran Apple. So far it seems to be working. Airbnb's free cash flow margin is now among the best in Silicon Valley.

The audience at this event included a lot of the most successful founders we've funded, and one after another said that the same thing had happened to them. They'd been given the same advice about how to run their companies as they grew, but instead of helping their companies, it had damaged them.

Why was everyone telling these founders the wrong thing? That was the big mystery to me. And after mulling it over for a bit I figured out the answer: what they were being told was how to run a company you hadn't founded — how to run a company if you're merely a professional manager. But this m.o. is so much less effective that to founders it feels broken. There are things founders can do that managers can't, and not doing them feels wrong to founders, because it is.

In effect there are two different ways to run a company: founder mode and manager mode. Till now most people even in Silicon Valley have implicitly assumed that scaling a startup meant switching to manager mode. But we can infer the existence of another mode from the dismay of founders who've tried it, and the success of their attempts to escape from it.

There are as far as I know no books specifically about founder mode. Business schools don't know it exists. All we have so far are the experiments of individual founders who've been figuring it out for themselves. But now that we know what we're looking for, we can search for it. I hope in a few years founder mode will be as well understood as manager mode. We can already guess at some of the ways it will differ.

The way managers are taught to run companies seems to be like modular design in the sense that you treat subtrees of the org chart as black boxes. You tell your direct reports what to do, and it's up to them to figure out how. But you don't get involved in the details of what they do. That would be micromanaging them, which is bad.

Hire good people and give them room to do their jobs. Sounds great when it's described that way, doesn't it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.

One theme I noticed both in Brian's talk and when talking to founders afterward was the idea of being gaslit. Founders feel like they're being gaslit from both sides — by the people telling them they have to run their companies like managers, and by the people working for them when they do. Usually when everyone around you disagrees with you, your default assumption should be that you're mistaken. But this is one of the rare exceptions. VCs who haven't been founders themselves don't know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world. [1]

Whatever founder mode consists of, it's pretty clear that it's going to break the principle that the CEO should engage with the company only via his or her direct reports. "Skip-level" meetings will become the norm instead of a practice so unusual that there's a name for it. And once you abandon that constraint there are a huge number of permutations to choose from.

For example, Steve Jobs used to run an annual retreat for what he considered the 100 most important people at Apple, and these were not the 100 people highest on the org chart. Can you imagine the force of will it would take to do this at the average company? And yet imagine how useful such a thing could be. It could make a big company feel like a startup. Steve presumably wouldn't have kept having these retreats if they didn't work. But I've never heard of another company doing this. So is it a good idea, or a bad one? We still don't know. That's how little we know about founder mode. [2]

Obviously founders can't keep running a 2000 person company the way they ran it when it had 20. There's going to have to be some amount of delegation. Where the borders of autonomy end up, and how sharp they are, will probably vary from company to company. They'll even vary from time to time within the same company, as managers earn trust. So founder mode will be more complicated than manager mode. But it will also work better. We already know that from the examples of individual founders groping their way toward it.

Indeed, another prediction I'll make about founder mode is that once we figure out what it is, we'll find that a number of individual founders were already most of the way there — except that in doing what they did they were regarded by many as eccentric or worse. [3]

Curiously enough it's an encouraging thought that we still know so little about founder mode. Look at what founders have achieved already, and yet they've achieved this against a headwind of bad advice. Imagine what they'll do once we can tell them how to run their companies like Steve Jobs instead of John Sculley.









Notes

[1] The more diplomatic way of phrasing this statement would be to say that experienced C-level execs are often very skilled at managing up. And I don't think anyone with knowledge of this world would dispute that.

[2] If the practice of having such retreats became so widespread that even mature companies dominated by politics started to do it, we could quantify the senescence of companies by the average depth on the org chart of those invited.

[3] I also have another less optimistic prediction: as soon as the concept of founder mode becomes established, people will start misusing it. Founders who are unable to delegate even things they should will use founder mode as the excuse. Or managers who aren't founders will decide they should try to act like founders. That may even work, to some extent, but the results will be messy when it doesn't; the modular approach does at least limit the damage a bad CEO can do.



Thanks to Brian Chesky, Patrick Collison, Ron Conway, Jessica Livingston, Elon Musk, Ryan Petersen, Harj Taggar, and Garry Tan for reading drafts of this.

When To Do What You Love


September 2024

There's some debate about whether it's a good idea to "follow your passion." In fact the question is impossible to answer with a simple yes or no. Sometimes you should and sometimes you shouldn't, but the border between should and shouldn't is very complicated. The only way to give a general answer is to trace it.

When people talk about this question, there's always an implicit "instead of." All other things being equal, why wouldn't you work on what interests you the most? So even raising the question implies that all other things aren't equal, and that you have to choose between working on what interests you the most and something else, like what pays the best.

And indeed if your main goal is to make money, you can't usually afford to work on what interests you the most. People pay you for doing what they want, not what you want. But there's an obvious exception: when you both want the same thing. For example, if you love football, and you're good enough at it, you can get paid a lot to play it.

Of course the odds are against you in a case like football, because so many other people like playing it too. This is not to say you shouldn't try though. It depends how much ability you have and how hard you're willing to work.

The odds are better when you have strange tastes: when you like something that pays well and that few other people like. For example, it's clear that Bill Gates truly loved running a software company. He didn't just love programming, which a lot of people do. He loved writing software for customers. That is a very strange taste indeed, but if you have it, you can make a lot by indulging it.

There are even some people who have a genuine intellectual interest in making money. This is distinct from mere greed. They just can't help noticing when something is mispriced, and can't help doing something about it. It's like a puzzle for them. [1]

In fact there's an edge case here so spectacular that it turns all the preceding advice on its head. If you want to make a really huge amount of money — hundreds of millions or even billions of dollars — it turns out to be very useful to work on what interests you the most. The reason is not the extra motivation you get from doing this, but that the way to make a really large amount of money is to start a startup, and working on what interests you is an excellent way to discover startup ideas.

Many if not most of the biggest startups began as projects the founders were doing for fun. Apple, Google, and Facebook all began that way. Why is this pattern so common? Because the best ideas tend to be such outliers that you'd overlook them if you were consciously looking for ways to make money. Whereas if you're young and good at technology, your unconscious instincts about what would be interesting to work on are very well aligned with what needs to be built.

So there's something like a midwit peak for making money. If you don't need to make much, you can work on whatever you're most interested in; if you want to become moderately rich, you can't usually afford to; but if you want to become super rich, and you're young and good at technology, working on what you're most interested in becomes a good idea again.

What if you're not sure what you want? What if you're attracted to the idea of making money and more attracted to some kinds of work than others, but neither attraction predominates? How do you break ties?

The key here is to understand that such ties are only apparent. When you have trouble choosing between following your interests and making money, it's never because you have complete knowledge of yourself and of the types of work you're choosing between, and the options are perfectly balanced. When you can't decide which path to take, it's almost always due to ignorance. In fact you're usually suffering from three kinds of ignorance simultaneously: you don't know what makes you happy, what the various kinds of work are really like, or how well you could do them. [2]

In a way this ignorance is excusable. It's often hard to predict these things, and no one even tells you that you need to. If you're ambitious you're told you should go to college, and this is good advice so far as it goes, but that's where it usually ends. No one tells you how to figure out what to work on, or how hard this can be.

What do you do in the face of uncertainty? Get more certainty. And probably the best way to do that is to try working on things you're interested in. That will get you more information about how interested you are in them, how good you are at them, and how much scope they offer for ambition.

Don't wait. Don't wait till the end of college to figure out what to work on. Don't even wait for internships during college. You don't necessarily need a job doing x in order to work on x; often you can just start doing it in some form yourself. And since figuring out what to work on is a problem that could take years to solve, the sooner you start, the better.

One useful trick for judging different kinds of work is to look at who your colleagues will be. You'll become like whoever you work with. Do you want to become like these people?

Indeed, the difference in character between different kinds of work is magnified by the fact that everyone else is facing the same decisions as you. If you choose a kind of work mainly for how well it pays, you'll be surrounded by other people who chose it for the same reason, and that will make it even more soul-sucking than it seems from the outside. Whereas if you choose work you're genuinely interested in, you'll be surrounded mostly by other people who are genuinely interested in it, and that will make it extra inspiring. [3]

The other thing you do in the face of uncertainty is to make choices that are uncertainty-proof. The less sure you are about what to do, the more important it is to choose options that give you more options in the future. I call this "staying upwind." If you're unsure whether to major in math or economics, for example, choose math; math is upwind of economics in the sense that it will be easier to switch later from math to economics than from economics to math.

There's one case, though, where it's easy to say whether you should work on what interests you the most: if you want to do great work. This is not a sufficient condition for doing great work, but it is a necessary one.

There's a lot of selection bias in advice about whether to "follow your passion," and this is the reason. Most such advice comes from people who are famously successful, and if you ask someone who's famously successful how to do what they did, most will tell you that you have to work on what you're most interested in. And this is in fact true.

That doesn't mean it's the right advice for everyone. Not everyone can do great work, or wants to. But if you do want to, the complicated question of whether or not to work on what interests you the most becomes simple. The answer is yes. The root of great work is a sort of ambitious curiosity, and you can't manufacture that.









Notes

[1] These examples show why it's a mistake to assume that economic inequality must be evidence of some kind of brokenness or unfairness. It's obvious that different people have different interests, and that some interests yield far more money than others, so how can it not be obvious that some people will end up much richer than others? In a world where some people like to write enterprise software and others like to make studio pottery, economic inequality is the natural outcome.

[2] Difficulty choosing between interests is a different matter. That's not always due to ignorance. It's often intrinsically difficult. I still have trouble doing it.

[3] You can't always take people at their word on this. Since it's more prestigious to work on things you're interested in than to be driven by money, people who are driven mainly by money will often claim to be more interested in their work than they actually are. One way to test such claims is by doing the following thought experiment: if their work didn't pay well, would they take day jobs doing something else in order to do it in their spare time? Lots of mathematicians and scientists and engineers would. Historically lots have. But I don't think as many investment bankers would.

This thought experiment is also useful for distinguishing between university departments.



Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston, Robert Morris, Harj Taggar, and Garry Tan for reading drafts of this.

The Right Kind of Stubborn

2024-07-01 00:00:00

July 2024

Successful people tend to be persistent. New ideas often don't work at first, but they're not deterred. They keep trying and eventually find something that does.

Mere obstinacy, on the other hand, is a recipe for failure. Obstinate people are so annoying. They won't listen. They beat their heads against a wall and get nowhere.

But is there any real difference between these two cases? Are persistent and obstinate people actually behaving differently? Or are they doing the same thing, and we just label them later as persistent or obstinate depending on whether they turned out to be right or not?

If that's the only difference then there's nothing to be learned from the distinction. Telling someone to be persistent rather than obstinate would just be telling them to be right rather than wrong, and they already know that. Whereas if persistence and obstinacy are actually different kinds of behavior, it would be worthwhile to tease them apart. [1]

I've talked to a lot of determined people, and it seems to me that persistence and obstinacy really are different kinds of behavior. I've often walked away from a conversation thinking either "Wow, that guy is determined" or "Damn, that guy is stubborn," and I don't think I'm just talking about whether they seemed right or not. That's part of it, but not all of it.

There's something annoying about the obstinate that's not simply due to being mistaken. They won't listen. And that's not true of all determined people. I can't think of anyone more determined than the Collison brothers, and when you point out a problem to them, they not only listen, but listen with an almost predatory intensity. Is there a hole in the bottom of their boat? Probably not, but if there is, they want to know about it.

It's the same with most successful people. They're never more engaged than when you disagree with them. Whereas the obstinate don't want to hear you. When you point out problems, their eyes glaze over, and their replies sound like ideologues talking about matters of doctrine. [2]

The reason the persistent and the obstinate seem similar is that they're both hard to stop. But they're hard to stop in different senses. The persistent are like boats whose engines can't be throttled back. The obstinate are like boats whose rudders can't be turned. [3]

In the degenerate case they're indistinguishable: when there's only one way to solve a problem, your only choice is whether to give up or not, and persistence and obstinacy both say no. This is presumably why the two are so often conflated in popular culture. It assumes simple problems. But as problems get more complicated, we can see the difference between them. The persistent are much more attached to points high in the decision tree than to minor ones lower down, while the obstinate spray "don't give up" indiscriminately over the whole tree.

The persistent are attached to the goal. The obstinate are attached to their ideas about how to reach it.

Worse still, that means they'll tend to be attached to their first ideas about how to solve a problem, even though these are the least informed by the experience of working on it. So the obstinate aren't merely attached to details, but disproportionately likely to be attached to wrong ones.



Why are they like this? Why are the obstinate obstinate? One possibility is that they're overwhelmed. They're not very capable. They take on a hard problem. They're immediately in over their head. So they grab onto ideas the way someone on the deck of a rolling ship might grab onto the nearest handhold.

That was my initial theory, but on examination it doesn't hold up. If being obstinate were simply a consequence of being in over one's head, you could make persistent people become obstinate by making them solve harder problems. But that's not what happens. If you handed the Collisons an extremely hard problem to solve, they wouldn't become obstinate. If anything they'd become less obstinate. They'd know they had to be open to anything.

Similarly, if obstinacy were caused by the situation, the obstinate would stop being obstinate when solving easier problems. But they don't. And if obstinacy isn't caused by the situation, it must come from within. It must be a feature of one's personality.

Obstinacy is a reflexive resistance to changing one's ideas. This is not identical with stupidity, but they're closely related. A reflexive resistance to changing one's ideas becomes a sort of induced stupidity as contrary evidence mounts. And obstinacy is a form of not giving up that's easily practiced by the stupid. You don't have to consider complicated tradeoffs; you just dig in your heels. It even works, up to a point.

The fact that obstinacy works for simple problems is an important clue. Persistence and obstinacy aren't opposites. The relationship between them is more like the relationship between the two kinds of respiration we can do: aerobic respiration, and the anaerobic respiration we inherited from our most distant ancestors. Anaerobic respiration is a more primitive process, but it has its uses. When you leap suddenly away from a threat, that's what you're using.

The optimal amount of obstinacy is not zero. It can be good if your initial reaction to a setback is an unthinking "I won't give up," because this helps prevent panic. But unthinking only gets you so far. The further someone is toward the obstinate end of the continuum, the less likely they are to succeed in solving hard problems. [4]



Obstinacy is a simple thing. Animals have it. But persistence turns out to have a fairly complicated internal structure.

One thing that distinguishes the persistent is their energy. At the risk of putting too much weight on words, they persist rather than merely resist. They keep trying things. Which means the persistent must also be imaginative. To keep trying things, you have to keep thinking of things to try.

Energy and imagination make a wonderful combination. Each gets the best out of the other. Energy creates demand for the ideas produced by imagination, which thus produces more, and imagination gives energy somewhere to go. [5]

Merely having energy and imagination is quite rare. But to solve hard problems you need three more qualities: resilience, good judgement, and a focus on some kind of goal.

Resilience means not having one's morale destroyed by setbacks. Setbacks are inevitable once problems reach a certain size, so if you can't bounce back from them, you can only do good work on a small scale. But resilience is not the same as obstinacy. Resilience means setbacks can't change your morale, not that they can't change your mind.

Indeed, persistence often requires that one change one's mind. That's where good judgement comes in. The persistent are quite rational. They focus on expected value. It's this, not recklessness, that lets them work on things that are unlikely to succeed.

There is one point at which the persistent are often irrational though: at the very top of the decision tree. When they choose between two problems of roughly equal expected value, the choice usually comes down to personal preference. Indeed, they'll often classify projects into deliberately wide bands of expected value in order to ensure that the one they want to work on still qualifies.

Empirically this doesn't seem to be a problem. It's ok to be irrational near the top of the decision tree. One reason is that we humans will work harder on a problem we love. But there's another more subtle factor involved as well: our preferences among problems aren't random. When we love a problem that other people don't, it's often because we've unconsciously noticed that it's more important than they realize.

Which leads to our fifth quality: there needs to be some overall goal. If you're like me you began, as a kid, merely with the desire to do something great. In theory that should be the most powerful motivator of all, since it includes everything that could possibly be done. But in practice it's not much use, precisely because it includes too much. It doesn't tell you what to do at this moment.

So in practice your energy and imagination and resilience and good judgement have to be directed toward some fairly specific goal. Not too specific, or you might miss a great discovery adjacent to what you're searching for, but not too general, or it won't work to motivate you. [6]

When you look at the internal structure of persistence, it doesn't resemble obstinacy at all. It's so much more complex. Five distinct qualities — energy, imagination, resilience, good judgement, and focus on a goal — combine to produce a phenomenon that seems a bit like obstinacy in the sense that it causes you not to give up. But the way you don't give up is completely different. Instead of merely resisting change, you're driven toward a goal by energy and resilience, through paths discovered by imagination and optimized by judgement. You'll give way on any point low down in the decision tree, if its expected value drops sufficiently, but energy and resilience keep pushing you toward whatever you chose higher up.

Considering what it's made of, it's not surprising that the right kind of stubbornness is so much rarer than the wrong kind, or that it gets so much better results. Anyone can do obstinacy. Indeed, kids and drunks and fools are best at it. Whereas very few people have enough of all five of the qualities that produce the right kind of stubbornness, but when they do the results are magical.







Notes

[1] I'm going to use "persistent" for the good kind of stubborn and "obstinate" for the bad kind, but I can't claim I'm simply following current usage. Conventional opinion barely distinguishes between good and bad kinds of stubbornness, and usage is correspondingly promiscuous. I could have invented a new word for the good kind, but it seemed better just to stretch "persistent."

[2] There are some domains where one can succeed by being obstinate. Some political leaders have been notorious for it. But it won't work in situations where you have to pass external tests. And indeed the political leaders who are famous for being obstinate are famous for getting power, not for using it well.

[3] There will be some resistance to turning the rudder of a persistent person, because there's some cost to changing direction.

[4] The obstinate do sometimes succeed in solving hard problems. One way is through luck: like the stopped clock that's right twice a day, they seize onto some arbitrary idea, and it turns out to be right. Another is when their obstinacy cancels out some other form of error. For example, if a leader has overcautious subordinates, their estimates of the probability of success will always be off in the same direction. So if he mindlessly says "push ahead regardless" in every borderline case, he'll usually turn out to be right.

[5] If you stop there, at just energy and imagination, you get the conventional caricature of an artist or poet.

[6] Start by erring on the small side. If you're inexperienced you'll inevitably err on one side or the other, and if you err on the side of making the goal too broad, you won't get anywhere. Whereas if you err on the small side you'll at least be moving forward. Then, once you're moving, you expand the goal.



Thanks to Trevor Blackwell, Jessica Livingston, Jackie McDonough, Courtenay Pipkin, Harj Taggar, and Garry Tan for reading drafts of this.