
The Mind in the Wheel – Part XII: Help Wanted

2025-05-08 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]
[INTERLUDE – I LOVE YOU FOR PSYCHOLOGICAL REASONS]
[PART VIII – ARTIFICIAL INTELLIGENCE]
[PART IX – ANIMAL WELFARE]
[PART X – DYNAMIC METHODS]
[PART XI – OTHER METHODS]


“Alright, gang, let’s split up and search for clues.”

— Fred Jones, Scooby-Doo

This has been our proposal for a new paradigm for psychology. 

If the proposal is more or less right, then this is the start of a scientific revolution. And while we can’t make any guarantees, it’s always good to plan for success. So, in case these ideas do turn out to be successful: welcome to psychology’s first paradigm. Let’s discuss what we do from here.

In looking for a paradigm, we’re looking for new ways to describe the mysteries that pop up on the regular. When a good description arrives, some of those mysteries will become puzzles, problems that look like they can be solved with the tools at hand, that look like they will have a clear solution, the kind of solution we’ll recognize when we see it. Because a shared paradigm gives us a shared commitment to the same rules, standards, and assumptions, it can let us move very quickly. 

All of which is to say: if this paradigm has any promise, then there should be a lot of normal science, a lot of puzzle-solving to do. A new paradigm is like an empty expert-level sudoku: there’s a kind of internal logic, but also a lot of tricky blanks that need filling in. So, we need your help. Here are some things you can do.

Experimentation

First, experimentalists can help us develop methods for figuring out how many cybernetic drives people have, what each drive controls, and different parameters of how they work. In the last two sections we did our best to speculate about what these methods might look like, but there are probably a lot of good ideas we missed. 

Then, we need people to actually go out and use these methods. The first task is probably to discover all of the different drives that exist in human psychology, to fill out the “periodic table” of motivation as completely as we can. Finding all of the different drives will generate many new mysteries, which will lead to more lines of research and more discoveries.

We will also want to study other animals. There are a few reasons to study animals in addition to humans. First of all, most animals don’t have the complex social drives that humans do. The less social an animal is, the easier it will be to study its non-social drives in isolation. Second, it’s possible to have more control over an animal’s environment. We can raise an animal so that it never encounters certain things, or only encounters some things together. Finally, we can use somewhat more invasive techniques with animals than we can with humans. 

Some animals have the bad emotions.

Computational Modeling

Computational models will be especially important for developing a better understanding of depression, anxiety, and other mental illnesses. With a model, we can test different changes to the design and parameters, and see which kinds of models and what parameter values lead to the behaviors and tendencies that we recognize as depression. This will ultimately help us determine how many different types of depression there are, come to an understanding of their etiology, and in time develop interventions and treatments. 

Computational models should provide similar insight into tendencies like addiction and self-harm. The first step is to show that models of this kind can give rise to behavior that looks like addiction. Then, we see what other predictions the model makes about addictive behavior, and about behavior in general, and we test those predictions with studies and experiments. 

If we discover more than one computational model that leads to addictive behavior, we can compare the different models to real-world cases of addiction, and see which is more accurate. Once we have models that provide a reasonably good fit, we can use them to develop new approaches for treatment and prevention. 
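
To make this concrete, here is a toy sketch of the kind of model we mean, with one governor treated as a simple control loop. None of the numbers or rules below are claims about the mind (the gain, the drift, and the cost of acting are all placeholders), but turning the gain way down already produces a sluggish, inactive pattern with a large uncorrected error, the sort of qualitative signature you could then compare against real depressive behavior.

import random

def simulate(gain, steps=200, drift=0.05, action_cost=0.1):
    # One governor as a toy control loop. The controlled variable drifts off
    # target each step; the governor's urge to act is gain * error, and it
    # acts (fully correcting the error, in this toy) only when that urge
    # exceeds a fixed cost of acting.
    set_point = 0.0
    value = set_point
    actions = 0
    total_error = 0.0
    for _ in range(steps):
        value += drift * random.uniform(0.5, 1.5)  # the world nudges the variable off target
        error = value - set_point
        total_error += error
        if gain * error > action_cost:
            value = set_point                      # corrective behavior resets the variable
            actions += 1
    return actions, total_error / steps

for gain in (1.0, 0.3, 0.05):
    acted, avg_error = simulate(gain)
    print(f"gain={gain}: acted on {acted} of 200 steps, average error {avg_error:.2f}")
# low gain: the governor rarely musters enough of a vote to act, and its error piles up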

Biology and Chemistry

Those of you who tend more towards biology or neuroscience can help figure out exactly how these concepts are implemented in our biology. Understanding the computational side of how the mind works is important, but the possible interventions we can take (like treating depression) will be limited if we don’t know how each part of the computation is carried out in an organism. 

For example: every governor tracks and controls some kind of signal. The fear governor tracks something like “danger”. This is a complicated neurological construct that probably doesn’t correspond to some specific part of biology. But other governors probably track biological signals that may even be as simple as the concentrations of specific minerals or hormones in the bloodstream.

For example, the hormone leptin seems to be involved in regulating hunger. Does one of the hunger governors act to control leptin levels in our blood? Or is leptin involved in some other part of the hunger-control process? What do the hunger, thirst, sleep, and other basic governors control, and what are their set points? 

Biologists may be able to answer some of these questions. Some of these questions may even have already been answered in neuroscience, biology, or medicine, in which case the work will be in bundling them together under this new perspective. 

Design

Running studies and inventing better methods sounds very scientific and important, but we suspect the most important contributions might actually come from graphic design.

The first “affinity table” was developed in 1718 by Étienne François Geoffroy. Substances are identified by their alchemical symbol and grouped by “affinity”. 

At the head of each column is a substance, and below it are listed all the substances that are known to combine with it. “The idea that some substances could unite more easily than others was not new,” reports French Wikipedia, “but the credit for bringing together all the available information into a large general table, later called the affinity table, goes to Geoffroy.”

Here is a later affinity table with one additional column, the Tabula Affinitatum, commissioned around 1766 for the apothecary’s shop of the Grand Duke of Florence, now to be found in the Museo Galileo.

These old attempts at classification are charming, and it’s tempting to blame their shortcomings on the fact that chemists of the time didn’t yet understand that elements fall into some fairly distinct categories. But chemical tables remained lacking even after the discovery of the periodic law.

Russian chemist Dmitri Mendeleev is often credited with inventing the periodic table, but he did not immediately give us the periodic table as we know it today. His original 1869 table looked like this: 

And his update in 1871 still looked like this: 

It wasn’t until 1905 that we got something resembling the modern form, the first 32-column table developed by Alfred Werner:

They tried a lot of crazy things on the way to the periodic table we know and love, and not all of these ideas made it. We’ll share just one example here, Otto Theodor Benfey’s spiral periodic table from 1964:

When a new paradigm arrives, the first tools for thinking about it, whether tables, charts, diagrams, metaphors, or anything else, are not going to be very good. Instead we start with something that is both a little confused and a little confusing, but that half-works, and iterate from there.

The first affinity table by Étienne François Geoffroy in 1718 was not very good. It was missing dozens of elements. It contained bizarre entries like “absorbent earth” and “oily principle”. And it was a simple list of reactions, with no underlying theory to speak of. 

But it was still good enough for Fourcroy, a later chemist, to write:

No discovery is more brilliant in this era of great works and continued research, none has done more honor to this century of renewed and perfected chemistry, none finally has led to more important results than that which is relative to the determination of affinities between bodies, and to the exposition of the degrees of this force between different natural substances. It is to Geoffroy the elder … that we owe this beautiful idea of the table of chemical ratios or affinities. … We must see in this incorrect and inexact work only an ingenious outline of one of the most beautiful and most useful discoveries which have been made. This luminous idea served as a torch to guide the steps of chemists, and it produced a large number of useful works. … chemists have constantly added to this first work; they have corrected the errors, repaired the omissions, and completed the gaps.

It took about two hundred years, and the efforts of many thousands of chemists, to get us from Geoffroy’s first affinity table to the periodic table we use today. So we should not worry if our first efforts are incomplete, or a little rough around the edges. We should expect this to take some effort, and we should be patient. 

Better tools do not happen by accident. We do not get them for free — someone has to make them. And if you want, that someone can be you.


That’s all, folks!

Thank you for reading to the end of the series! We hope you enjoyed.

We need your help, your questions, your disagreement. Consider reaching out to discuss collaborating, or to just toss around ideas, especially if they’re ideas that could lead to empirical tests. You can contact us by email or join the constant fray of public discussion on twitter.

If you find these ideas promising and want to see more of this research happen, consider donating. Our research is funded through Whylome, a 501(c)(3) nonprofit that relies on independent donations for support. Donations will go towards further theoretical, modeling, and empirical work.


The Mind in the Wheel – Part XI: Other Methods

2025-05-01 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]
[INTERLUDE – I LOVE YOU FOR PSYCHOLOGICAL REASONS]
[PART VIII – ARTIFICIAL INTELLIGENCE]
[PART IX – ANIMAL WELFARE]
[PART X – DYNAMIC METHODS]


There’s a fascinating little paper called “Physiological responses to maximal eating in men”.

The researchers recruited fourteen men (mean age: 28 years old) and invited them back to the lab to eat “a homogenous mixed-macronutrient meal (pizza)”. The authors note that “this study was open to males and females but no females signed up.” 

They invited each man to visit the lab two separate times. On one occasion, the man was asked to eat pizza until “comfortably full”. The other time, he was asked to eat pizza until he “could not eat another bite”.

When asked to eat until “comfortably full”, the men ate an average of about 1500 calories of pizza. But when asked to eat until they “could not eat another bite”, the men ate an average of more than 3000 calories. 

Study Materials

The authors view this as a study about nutrition, but we saw it and immediately went, “Aha! Pizza psychology!”

While this isn’t a lot of data — only fourteen men, and they only tried the challenges one time each — it shows some promise as a first step towards a personality measure of hunger and satiety, because it measures both how hungry these boys are, and also how much they can eat before they have to stop.

When asked to aim for “could not eat another bite”, the men could on average eat about twice as much pizza compared to when they were asked to aim for “comfortably full”. But there was quite a lot of variation in this ratio for different men:  

All the men ate more when they were asked to eat as much as they could than when they were asked to eat as much as they liked. But there’s a lot of diversity in the ratio between those two values. When instructed to eat until they “could not eat another bite”, some men ate only a little bit more than they ate ad libitum. But one man ate almost three times as much when he was told to go as hard as he could. 

People have some emotions that drive them to eat (collectively known as hunger), and other emotions that drive them to stop eating (collectively known as satiety). While these pizza measurements are very rough, they suggest something about the relationship between these two sets of drives in these men. If nothing else, it’s reassuring to see that for each individual, the “could not eat another bite” number is always higher. 

It’s a little early to start using this as a personality measure, but with a little legwork to make it reliable, we might find something interesting. It could be the case, for example, that there are some men with very little daylight between “comfortably full” and “could not eat another bite”, and other men for whom these two occasions are like day and night. That would suggest that some men’s hunger governor(s) are quite strong compared to their satiety governor(s), and other men’s are relatively weak. 
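
As a sketch of the kind of crude index you could compute from a study like this, take the ratio of the two conditions (all the numbers below are invented, not the study’s data):

# kcal eaten in each condition; these numbers are invented for illustration
ad_libitum = {"participant_1": 1400, "participant_2": 1600, "participant_3": 1500}
maximal = {"participant_1": 1700, "participant_2": 4400, "participant_3": 3100}

for person in ad_libitum:
    ratio = maximal[person] / ad_libitum[person]
    print(f"{person}: maximal / ad libitum = {ratio:.2f}")

# a ratio near 1 suggests satiety kicks in soon after hunger is satisfied;
# a ratio near 3 suggests a comparatively weak satiety governor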

The general principle of personality in cybernetic psychology is “some drives are stronger than others”. So for personality, we want to invent methods that can get at the question of how strong different drives are, and how they stack up against each other. Get in loser, we’re making a tier list of the emotions. 

We may not be able to look at a drive and say exactly how strong it is, since we don’t yet know how to measure the strength of a drive. We don’t even know the units. When this is eventually discovered, it will probably come from an unexpected place, like how John Dalton’s work in meteorology gave him the idea for the atomic theory. 

But we can still get a decent sense of how strong one drive is compared to another drive. This is possible whenever we can take two drives and make them fight. 

Personality psychology be like

Some drives are naturally in opposition — this pizza study is a good example. The satiety governor(s) exist specifically to check the hunger governor(s). Hunger was invented to start eating, and satiety was invented to make it stop. So it’s easy to set up a situation where the two of them are in conflict. 

Or somewhat easy. We think it’s more accurate to model the pizza study as the interaction between three (groups of) emotions. When asked to eat until “comfortably full”, the hunger governor voted for “eat pizza” until its error was close to zero, then it stopped voting for “eat pizza”, so the man stopped. That condition was simple and mainly involved just the one governor.

The other condition was more complex. When asked to eat until they “could not eat another bite”, the hunger governor first voted for “eat pizza” until its error was close to zero. Then, some kind of “please the researchers” governor(s) kept voting for “eat pizza” to please the researchers. 

At some point this started running up against the satiety governor. The satiety governor tracks something like how full you are, so as the man started to get too full, the satiety governor started voting against “eat pizza”. The man kept eating until the vote from the “please the researchers” governor(s) was just as strong as the vote from the satiety governor, at which point the two votes cancelled out and the man could not eat another bite. 

This reveals the problem. In one sense, hunger and satiety are naturally in opposition. Hunger tries to make you eat enough and satiety tries to make sure you don’t eat too much. But in a healthy person, there’s plenty of daylight between the set points of these two drives, and they don’t come into conflict. 

Same thing with hot and cold — the drive that tries to keep you warm is in some sense “in opposition” to the drive that tries to keep you from overheating, but they don’t normally fight. If you have a sane and normal mind, you don’t put on 20 sweaters, then overheat, then in a fit of revenge take off all of your clothes and jump in a snowbank, etc. These drives oppose each other along a single axis, but when they are working correctly, they keep the variable they care about in a range that they agree on. Hunger and satiety, and all the paired governors, are more often allies than enemies. 

But any two drives can come into conflict when the things they want to do become mutually exclusive, or even just trade off against each other. Even if you can do everything you want, the drives will still need to argue about who gets to go first. Take something you want, anything at all, and put it next to a tiger. Congratulations, fear is now in conflict with that original desire. 

Many people experience this conflict almost every morning: the full bladder versus the warm bed.

This is actually a more complicated situation, where the governors have formed factions. The pee governor wants to let loose on your bladder. But your hygiene governor votes against wetting the bed. Together they settle on a compromise where you get up and pee in the toilet instead, since this satisfies both of their goals (bladder relief + hygiene). 

But the governor that keeps you warm, the sleep governor (who wants to drift back into unconsciousness), and any other governors with an interest in being cozy, strenuously oppose this policy. They want you to stay in your warm, comfy bed. So you are at an impasse until the bladder governor eventually has such a strong error signal — you have to take a leak so bad — that it has the votes to overrule the cozy coalition and motivate you to get up and go to the bathroom. 

The point is, the bladder governor, warmth governor, and sleep governor don’t fundamentally have anything to do with each other. They all care about very different things. But when you have to pee in the middle of the night, their interests happen to be opposed. They draw up into factions, and this leads to a power struggle — one so universal that there are memes about it. And as is always the case in politics, a power struggle is a good chance to get a sense of the relative strength of the factions involved.

If you met someone who said they didn’t relate to this — they always get up in the middle of the night to pee without any hesitation or inner struggle — this would suggest that their bladder governor is very strong, or that their warmth and/or sleep governors are unusually weak. Whatever the case, their bladder governor wins such disagreements so quickly that there doesn’t even appear to be a dispute. 

In contrast, if your friend confesses that they have such a hard time getting up that they sometimes wet the bed, this suggests that their bladder governor, and probably their hygiene governor, are unusually weak compared to the governors voting for them to stay in bed. 

To understand these methods, we have to understand the difference between two kinds of “strength”. 

In general, when we say that a drive is strong, we mean that it can meet its goals: its votes for the actions it wants tend to win. This is why we can learn something about the relative strength of two drives by letting them fight — we can present the organism with mutually exclusive options (truth or dare?) and see which option it picks. If we have some reasonable idea which drive would pick which option, we can tell which drive is stronger from which option is picked. 

However! Another way a drive can be strong is that it can have a big error signal in that moment. If you are ravenously hungry, you will eat before anything else. If you are in excruciating pain, you will pull your hand off the stove before doing anything else. This kind of urgency tells us that the current error is big, but it doesn’t tell us much about the governor. 

A drive does get a stronger vote when its variable is further off target. But it’s also true that for a given person, some drives seem stronger in all situations. 

The normal sense of strength gets at the fact that a governor can be stronger or weaker for a given error. Some people can go to sleep hungry without any problem. For other people, even the slightest hint of appetite will keep them awake. When we talk about someone being aggressive, we mean that they will drop other concerns if they see a chance to dominate someone; if we talk about someone being meek, we mean the opposite. 

The current strength of any drive is a function of the size of its current error signal and the overall strength or “weight” of the governor. Unfortunately, we don’t know what that function is. Also, it might be a function of more than just those two things. Uh-oh! 

Ideally, what we would do is hold the size of the error constant. If we could make sure that the error on the salt governor is 10 units, and the error on the sweet governor is 10 units, then we could figure out which governor is stronger by seeing which one the person would choose first, skittles or olives. This is based on the assumption that the strength of the vote for each option is a combination of the size of the errors and the strength of the governor itself. Since in this hypothetical we know that the size of the errors is exactly the same, the difference in choice should be entirely the result of the difference in the strength of the governors.
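
As a sketch of that assumption (the multiplicative form and every number here are placeholders, not measurements):

def vote(error, weight):
    # assumed form: vote strength = size of the error * weight of the governor
    return error * weight

salt_weight, sweet_weight = 1.4, 1.0   # hypothetical per-person parameters
salt_error = sweet_error = 10.0        # errors held (approximately) equal

if vote(salt_error, salt_weight) > vote(sweet_error, sweet_weight):
    print("reaches for the olives first")
else:
    print("reaches for the skittles first")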

Unfortunately we don’t know how to do that either. We don’t know how to measure the errors directly, let alone how to hold the size of the errors constant. 

But we can use techniques that should make the size of some error approximately constant, and base our research on that. The closer the approximation, the better. 

The important insight here is that even when we can’t make measurements in absolute terms, we can often make ordinal comparisons. “How strong is this drive” is an impossible question to answer until we know more about how strength is implemented mechanically, but we can make very reasonable guesses about which of two drives is stronger, what order their strengths are in, i.e. ordinal measurements. 

We can do this two ways: we can compare one of your drives to everyone else’s version of that same drive, or we can compare one of your drives to your other drives.

Compare One of Your Drives to Everyone Else’s Version of that Same Drive

The first is that we can compare one of a person’s drives to the same drive in other people. 

It’s reasonable to ask if your hunger, fear, pain, or shame drive is stronger or weaker than average. To do this, we can look at two or more individuals and ask if the drive is stronger for one of them or for the other. 

This will offer a personality measure like: your salt governor is stronger than 98% of people. You a salty boy.

Again, to get a measure of strength, we need to make everyone’s errors approximately constant. One way we can make errors approximately constant is by fully satisfying the drive. So if we identify a drive, like the drive for salt, we can exhaust the drive by letting people eat as much salt or salty food(s) as they want. Now all their errors should be close to zero. Then we can see how long it takes before they go eat something salty again. If someone goes to get salty foods sooner, then other things being equal, this is a sign that their salt governor is unusually strong.

This won’t be perfectly the same, and other things will not be perfectly equal. Some people’s salt error may increase more quickly than others’, like maybe they metabolize salt faster, or something. So after 5 hours without salty foods, some people’s error may be much bigger than others’. But it should be approximately equal, and certainly we would learn something important if we saw one guy who couldn’t go 10 minutes without eating something salty, and someone else who literally never seemed to seek it out. 

When we say things like, “Johnnie is a very social person. If he has to spend even 30 minutes by himself he gets very lonely, so he’s always out and spending time with people. But Suzie will go weeks or even months without seeing anyone,” this is a casual version of the same reasoning, and we think it’s justified. It may not get exactly at the true nature of personality, but it’s a start. 

When we figure out what the targets are for some governors, we’ll be able to do one better. For example, let’s imagine that we find out that thirst is the error for a governor that controls blood osmolality, and through careful experimentation, we find out that almost everyone’s target for blood osmolality is 280 mOsm/kg. Given the opportunity, behavior drives blood osmolality to 280 mOsm/kg and then stops.

If we measure people’s blood osmolality, we can dehydrate them to the point where their blood osmolality is precisely 285 mOsm/kg. We know that this will be an error of 5 mOsm/kg, because that’s 5 units more than the target. Then we would know almost exactly what their error is, and we could estimate the relative strength of their thirst governor by measuring how hard they fight to get a drink of water. 

On that note, it’s possible that a better measure than time would be effort. For example, you could take a bunch of rats and figure out the ideal cage temperature for each of them. Separately, you teach them that pushing a lever will raise the temperature of their cage by a small amount each time they press it. 

Then, you set the cage temperature 5 degrees colder than they prefer. This should give them all errors of similar magnitude — they are all about 5 degrees colder than they’d like. Then you give them the same lever they were trained on. But this time, it’s disconnected. You count how many times they press the lever before they give up. This will presumably give you a rough measure of how much each rat is bothered by being 5 degrees below target, and so presumably an estimate of the strength of that governor. If nothing else, you should observe some kind of individual difference. 
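
The analysis of that experiment could be as simple as a ranking. With the error held at roughly the same five degrees for every rat, the press counts (invented below) give an ordinal comparison of the warmth governor’s strength, nothing more:

# presses on the disconnected lever before giving up; all counts invented
presses = {"rat_1": 12, "rat_2": 85, "rat_3": 40}

for rat in sorted(presses, key=presses.get, reverse=True):
    print(rat, presses[rat])

# rat_2 fights hardest to get warm, rat_1 gives up almost immediately: a rough
# ordinal ranking of the warmth governor, not an absolute measure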

Compare One of Your Drives to Your Other Drives

The second approach is to ask how your drives compare to each other, basically a ranking. We can look at a single person and ask, in this person, is drive A stronger than drive B? 

The main way to do this is to give the person a forced choice between two options, one choice that satisfies governor A, and the other that satisfies governor B. This doesn’t have to be cruel — you can let them take both options, you just have to make them choose which they want to do first.

This would offer a personality measure like: you are more driven by cleanliness than by loneliness, which is why you keep blowing off all your friends to stay in and scrub your toilet.

There are some drives that make us want to be comfortable and other drives that make us want to be fashionable; there are at least some tradeoffs between comfort and fashion; if you reflect on each of the people in your life, it’s likely that you already know which coalition of drives tends to be stronger in each person.  

Every time you see someone skip work to play videogames, refuse to shower even when it ruins all their friendships, blow up their life to have an affair with the 23-year-old at the office, or stay up late memorizing digits of pi, you are making this kind of personality judgment implicitly. People have all kinds of different drives, and you can learn a lot about which ones are strongest by seeing which drives are totally neglected, and which drives lead people to blithely sacrifice all other concerns, as though they’re blind to the consequences.

The Bene Gesserit, a sect of eugenicist, utopian nuns from the Dune universe, use a simplified version of this method in their famous human awareness test, better known as the gom jabbar. Candidates are subjected to extreme pain and ordered not to pull away, at penalty of taking a poisoned needle in the neck. In his success, Paul demonstrates that some kind of self-control governor is much stronger than his pain governor, even when his pain error is turned way up.

“What’s in the box?” “A personality test.”

But no shade to the Bene Gesserit, this is not a very precise measure. By turning the pain governor’s error extremely high, they can show that a candidate has exceptional self-control. But this doesn’t let them see if self-control is in general stronger than pain, because the error gets so huge. To compare the strength of governors, you ideally want the error signals to be as similar as possible.

As before, the best way to get at strength is to take two drives, try to make their errors as similar as possible, and then see which drive gets priority. Other things being equal, that drive must be stronger. 

When we were trying to compare personality between people, this was relatively easy. If nothing else, we were at least looking at the same error. We can’t get an exact measure of the error, but we could at least say, both of these people have gone 10 hours without eating, or 20 hours without sleep, or are ten degrees hotter than they find comfortable. These are the same kinds of things and they are equal for both people.

But to compare two governors within a single person, we are comparing two different errors, and we have no idea what the units are. So it may be hard to demonstrate differences between the strength of the governors when those differences are small. If one governor’s vote is ten times stronger than the other’s, then we assume that governor will win nearly all competitions between the two of them. If its vote is only 1.05 times stronger, that governor has an edge, but will often get sidelined when there are other forces at play.

But like the common-sense examples above, it should be possible to make some comparisons, especially when differences are clear. For example, if we deprive a person of both sleep and food for 48 hours (with their consent of course), then offer them a forced choice between food and sleep, and they take the food, that suggests that their drive to eat may be stronger than their drive to sleep. This is especially true if we see that other people in the same situation take the option to sleep instead. 

If we deprive the person of sleep for 48 hours and food for only 4 hours, and they still choose the food over sleep, that is even better evidence that their drive to eat is stronger than their drive to sleep, probably a lot stronger. 

While these methods are designed to discover something inside an individual person, they might also shed some light on personality differences between people. For example, we might find that in most people, the sugar governor is stronger than the salt governor. But maybe for you, your salt governor is much stronger than your sugar governor. That tells us something about your personality in isolation (that one drive is stronger than another), and also tells us something about your personality compared to other people (you have an uncommon ordering of drives). 

Return to Pizza Study

The pizza study is interesting because it kind of combines these techniques.

Each person was compared on two tasks — “comfortably full” and “could not eat another bite”, which gives us a very rough sense of how strong their hunger and satiety governors are. If you ate 10 slices to get to “comfortably full” and only 12 slices to get to “could not eat another bite”, your satiety governor is probably pretty strong, since it kicks in not long after you’ve eaten as much as you need. (There could be other interpretations, but you get the gist.) 

In addition, each person can be compared to all the other people. Some men could eat only a little more when they were asked to get to “could not eat another bite”. But one man ate almost three times as much as his “comfortably full” amount. This man’s satiety governor is probably weaker than average. There are certainly other factors involved, but it still took a long time before that governor forced him to stop eating, suggesting it is weak. 

A final note on strength. The strength of a governor is probably somewhat innate. But it may also be somewhat the result of experience. If someone is more motivated by safety than by other drives, some of that may be genetic, but some of that may be learned. It would not be ridiculous to think that your mind might be able to tune things so that if you have been very unsafe in your life, you will pay more attention to safety in the future.

Even the part that’s genetic (or otherwise innate) still has to be implemented in some specific way. When one of your governors is unusually strong, does that governor have a stronger connection to the selector? Does it have the same connection as usual, but it can shout louder? Does it shout as loud as normal, but it can shout twice as often? We don’t know the details yet, but keep in mind that all of this will be implemented in biology and will include all kinds of gritty details. 

Deeper Questions

People can differ in more ways than just having some of their drives be stronger than others. For example, some people are more active than other people in general, more active for every kind of drive. They do more things every single day. 

Some people seem to get more happiness from the same level of accomplishment. For some people, cooking dinner is a celebration. For others, routine is routine. 

Some people seem more anxious by default. Even a small thing will make them nervous. 

These seem like they might be other dimensions on which people can differ, and they don’t seem like they are linked to specific governors. 

Studying the strength of the governors is nice because the governors are all built on basically the same blueprint, so the logic needed to puzzle out one of them should mostly work to puzzle out any of the others. If you find techniques to measure the strength of one governor, you should be able to use those techniques, with only minor tweaks, to measure the strength of any governor.

But other ways in which people differ seem more idiosyncratic. They are probably the result of different parameters that tune features that are more global, each of which interacts with the whole system in a unique and different way. So we will probably need to invent new methods for each of them. 

That means we can’t yet write a section on the different methods that will be useful. These methods still need to be invented. And we might only get to these methods once we have learned most of what there is to know about the differences in strength between the governors, and have to track down the remaining unexplained differences between people. But we can give a few examples to illustrate what some of these questions and methods might look like.

Learning

Every governor has to have some way of learning which behaviors increase or decrease its error. We don’t know exactly how this learning works yet, but we can point to a few questions that we think will be fruitful.

For example, is learning “both ways”? 

The hot governor (keeps you from getting too hot) and the cold governor (keeps you from getting too cold) both care about the same variable, body temperature. Certainly if you are too cold and you turn on a gas fireplace, your cold governor will notice that this corrects its error and will learn that turning on the gas fireplace is a good option. So when you get too cold in the future, that governor will sometimes vote for “turn on the gas fireplace”.

But what if you are too hot and you turn on the gas fireplace? Well, your hot governor will notice that this increases its error, and will learn that this is a bad option, which it will vote against if you’re in danger of getting too hot. 

What does your cold governor learn in this situation? Maybe it learns the same thing your hot governor does — that the gas fireplace increases temperature. The hot governor thinks that’s a bad outcome, but the cold governor thinks it’s a good outcome. If so, then next time you are cold, the cold governor might vote for you to turn on the gas fireplace. 

But maybe a governor only learns when its error is changed. After all, each governor only really cares about the error it’s trying to send to zero. And if that error isn’t changed, maybe the governor doesn’t pay attention. If the error is very small, maybe that governor more or less turns off, and stops paying attention, to conserve energy. Then it might not do any learning at all. 

If this were the case, the cold governor shouldn’t learn from any actions you take when you’re too hot, even when these actions influence your body temperature. And the hot governor shouldn’t learn from anything you do when you’re too cold, same deal. 

You could test this by putting a mouse in a cage that is uncomfortably hot, and that contains a number of switches. Each switch will either temporarily increase or temporarily decrease the temperature of the cage. With this setup, the mouse should quickly learn which switches to trip (makes the cage cooler) and which switches to avoid (makes the cage even more uncomfortably hot). 

Once the mouse has completely learned the switches, then you make the cage uncomfortably cold instead, and see what happens. If the cold governor has also been learning, then the mouse should simply invert its choice of switches, and will be just as good at regulating the cage temperature as before. 

But if the cold governor wasn’t paying close attention to the hot governor’s mistakes, then the mouse will have to do some learning to catch up. If the cold governor wasn’t learning from the hot governor’s mistakes at all, then the mouse will be back at square one, and might even have to re-learn all the switches through trial and error.

We might well expect the former outcome, but you have to admit that the latter outcome would be pretty interesting. 
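
Here is a toy simulation of that experiment, just to show that the two hypotheses make visibly different predictions. Everything in it (the switch effects, the learning rule, the learning rate) is a placeholder rather than a claim about how governors actually learn:

import random

# temperature change produced by each switch (made-up values)
SWITCH_EFFECTS = {"A": -1.0, "B": +1.0, "C": -0.5, "D": +0.5}

def train_in_hot_cage(both_ways, trials=500, lr=0.2):
    # The cage is too hot, so the hot governor is the one with an active error.
    # Each governor keeps a value for each switch: how useful that switch is
    # for correcting its own error. If both_ways is True, the idle cold
    # governor also updates its values from the observed temperature changes.
    hot_values = {s: 0.0 for s in SWITCH_EFFECTS}
    cold_values = {s: 0.0 for s in SWITCH_EFFECTS}
    for _ in range(trials):
        switch = random.choice(list(SWITCH_EFFECTS))
        change = SWITCH_EFFECTS[switch]
        hot_values[switch] += lr * (-change - hot_values[switch])       # cooling is good for the hot governor
        if both_ways:
            cold_values[switch] += lr * (change - cold_values[switch])  # warming is good for the cold governor
    return cold_values

for both_ways in (True, False):
    cold = train_in_hot_cage(both_ways)
    print(f"both-ways learning = {both_ways}:",
          {s: round(v, 2) for s, v in cold.items()})

# With both-ways learning, the cold governor already knows that switch B warms
# the cage when the cage is later made too cold; with own-error-only learning,
# its values are all zero and the mouse starts over from scratch.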

The Model of Happiness

Or consider the possibility that happiness might drive learning.

This would explain why happiness exists in the first place. It’s not just pleasant, it’s a signal to flag successful behavior and make sure that it’s recorded. When something makes you happy, that signals some system to record the link between the recent action and the error correction.

This would also explain why it often feels like we are motivated by happiness as a reward. We aren’t actually motivated by happiness itself, but when something has made us happy, we tend to do it more often in the future. 

Previously we said that happiness is equal to the change in an error. In short, when you correct one of your errors, that creates a proportional amount of happiness. This happiness sticks around for a while but slowly decays over time. 

That’s a fine model as a starting point, but it’s very simple. Here’s a slightly more complicated model of happiness, which may be more accurate than the model we suggested earlier. Maybe happiness is equal to the reduction in error times the total sum of all errors, like so:

happiness = delta_error * sum_errors

If happiness is just the result of the correction of an error, then you get the same amount of happiness from correcting that error in any circumstance. But that seems a little naïve. A drink of water in the morning after a night at a five-star hotel is an accomplishment, but the same drink of water drawn while hungry and in pain, lost in the wilderness, is a much greater feat. Remembering the strategy that led to that success might be more important. 

If you multiply the correction by the total amount of error, then correcting an error when you are in a rough situation overall leads to a much greater reward, which would encourage the governors to put a greater weight on successes that are pulled off in difficult situations. If you correct an error when all your other errors are near zero, you will get some happiness. But if you are more out of alignment generally — more tired, cold, lonely, or whatever — you get more happiness from the same correction.

This might explain fetishes. Why do so many sexual fetishes include things that cause fear, pain, disgust, or embarrassment? Surely the fear, pain, disgust, and embarrassment governors would vote against these things. 

We have to assume that the horny governor is voting for these things. The question is, why would it vote for anything more than getting your rocks off? Why would an orgasm plus embarrassment be in any way superior to an orgasm in isolation?

If learning is based on happiness rather than raw reduction in error, then governors will learn to vote for things that have caused past happiness.

And if happiness is a function of total error, not just correction in the error they care about, governors will sometimes vote for things that increase the total error just before their own error is corrected. 

The point is, if happiness is a function of total error, governors will actually prefer to reduce their errors in a state of greater disequilibrium. This doesn’t decrease their error any more than in a state of general calm, but it does lead to more happiness, greater learning, and so they learn to perform that action more often. And in some cases they will actually vote to increase the errors of other governors, when they can get the votes.

The horny governor only cares about you having an orgasm. But since it learns from happiness, not from the raw correction in its error, it has learned to vote for you to become afraid and embarrassed just before the moment of climax, because that increases your total error, which increases happiness. And since the horny governor has the votes, it overrules the governors who would vote against those things.

We don’t know how to quantify any of the factors involved, so we can’t test precise models. There are probably constants in these equations, but we can’t figure those out either, at least not yet.

But we can still make reasonable tests of general classes of models. We can make very decent guesses about whether or not something is a function of something else, and we can probably figure out if these relationships are sums or products, whether relationships are linear or exponential, and so on. For example:

happiness = delta_error

This is the original model we proposed, and it’s the simplest. In this case, happiness is caused when an organism corrects any error, and the amount of happiness produced is a direct function of how big an error was corrected. Eating a cheeseburger makes you happy because, assuming you are hungry, it corrects that error signal. The cheeseburger error. 

Not shown in that equation is the kind of relationship. Maybe it’s linear, but maybe it’s exponential. Does eating two cheeseburgers cause more than twice as much happiness as eating one?

This very simple model has the virtue of being very simple. And it seems like it lines up with the basic facts — eating, sleeping, drinking, and fucking do tend to make us happy, especially if we are quite hungry, tired, thirsty, or horny. 

But we should also think about more complex models and see if any of them are any better. For example:

happiness = delta_error * product_errors

In this case, the correction in an error is multiplied not by the sum, but by the product of all other errors. So eating a cheeseburger while tired and lonely will be much more pleasurable than eating a cheeseburger while merely tired or merely lonely. 

This seems pretty unlikely at first glance. If happiness were dependent on the product of your other errors, that seems like it would be pretty noticeable, because the difference between correcting an error while largely satisfied and largely unsatisfied would be huge and thus obvious. But this is also something that you could test empirically, and maybe there could be some kind of truth to it. 

Is this a better model? Not entirely clear, but it certainly makes predictions that can be compared to parts of life we’re familiar with, and it can be tested empirically. That’s a pretty good start.

Or another example:

happiness = delta_error / sum_errors

Instead of multiplying the correction to produce happiness, this time we tried dividing it. In this case, happiness is smaller when the total amount of error is bigger. So correcting the same error leads to less happiness if you’re more out of alignment. 

This one seems right out. The joy we get from a cup of hot chocolate is greater when we are lonely, not less. Living in extremis seems like it should only magnify the satisfaction of our experiences. It’s possible that this doesn’t stand up to closer inspection, but people certainly find the idea intuitive.

Finally, one more example. You remember the TD-learning equation from the learning and memory section above. 

Another model of happiness is that happiness is proportional to the TD error in that equation, or the equivalent in whatever system our brain really uses. The TD error is the difference between the actual (plus projected) outcome of the action and the outcome the governor expected. So in this model, we get happiness when something corrects an error by more than the governor expects.
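
In the same rough notation as the other models in this section, that error is something like:

TD_error = (actual_outcome + projected_future_outcome) - expected_outcome

happiness = k * TD_error

where k is some unknown constant of proportionality.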

Having an especially great sandwich for the first time feels great. This is because you didn’t know how good it would be. But having the same sandwich for the 100th time isn’t as good, even if it corrects the same amount of error. This is because you anticipated it would be that good, so there’s no TD error. In fact, if the sandwich hits the spot less than usual, you’ll be disappointed, even if it’s still pretty good. 

In this model, you’d expect that doing the same enjoyable stuff over and over wouldn’t keep you happy for very long. You’d have to mix it up and try new things that correct your errors.

This model does seem to capture something important. But that said, in real life correcting a big enough error usually creates some happiness. So happiness doesn’t seem like it could be entirely based on how unexpected the correction is. Some amount of happiness seems to come from any correction. But it does seem like more unexpected corrections usually make us more happy. 
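
To make that concrete, here is a sketch of the kind of side-by-side comparison we mean. The functions just restate the candidate models above (reading “sum of all errors” as including the error being corrected, which is one possible interpretation), and the two scenarios use invented numbers: a drink of water when you are otherwise fine, versus the same drink when you are hungry and in pain.

def just_the_correction(delta, other_errors):        # happiness = delta_error
    return delta

def scaled_by_sum(delta, other_errors):              # happiness = delta_error * sum_errors
    return delta * (delta + sum(other_errors))

def scaled_by_product(delta, other_errors):          # happiness = delta_error * product_errors
    product = 1.0
    for e in other_errors:
        product *= e
    return delta * product

def divided_by_sum(delta, other_errors):             # happiness = delta_error / sum_errors
    return delta / (delta + sum(other_errors))

scenarios = {
    "drink of water, otherwise fine": (5.0, [1.0, 1.0]),
    "same drink, hungry and in pain": (5.0, [8.0, 9.0]),
}

for name, (delta, others) in scenarios.items():
    print(name)
    for model in (just_the_correction, scaled_by_sum, scaled_by_product, divided_by_sum):
        print(f"  {model.__name__}: {model(delta, others):.2f}")

# Only the multiplicative models predict more happiness from the same
# correction in the rougher situation; the division model predicts less.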

So this is an example of how we can test general models, even before we can make precise measurements. We can think about classes of models, bring them to their limits, ask how the implications of these models compare to other things we already know about life and happiness, things we experience every day.

Just thinking of these questions mechanically, thinking of them as models, prompts us to ask questions like — What is the minimum amount of happiness? Can happiness only go down to zero, or can there be negative happiness? Is there a maximum amount of happiness? Even if a maximum wasn’t designed intentionally, surely there is some kind of limit to the value the hardware can represent? Can you get happiness overflow errors? What is the quantum of happiness? What are the units? — questions that psychologists wouldn’t normally ask. 


[Next: HELP WANTED]


The Mind in the Wheel – Part X: Dynamic Methods

2025-04-24 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]
[INTERLUDE – I LOVE YOU FOR PSYCHOLOGICAL REASONS]
[PART VIII – ARTIFICIAL INTELLIGENCE]
[PART IX – ANIMAL WELFARE]


Since behavioral feedback of any significance is always negative, it follows that there will always be a tendency to move toward a zero-error condition calling for no effort, and (if clever enough) one will always be able to discover the reference condition. By the same token, one will always be able to discover what the subject is controlling, for if disturbances are applied that do not in fact disturb the controlled aspect of the environment, the subject’s behavior will not oppose the disturbance. Only when one has found the correct definition will the proposed controlled quantity be protected against disturbance by the subject’s actions.

— William Powers, Behavior: The Control of Perception

What we wrote in the previous parts is only a start. Here are the things we need to figure out next.

First, we should try to discover all the basic drives of human psychology. We should learn about their error signals, which we identify as emotions. 

When possible, we should also figure out what signal each governor is actually controlling, and the target it is controlling that signal towards. It’s a good start to know that there is a drive with an error we know as thirst, but it would be better to confirm that thirst is the error of a governor controlling blood osmolality. And it would be even better to then confirm that this governor controls blood osmolality towards a target of 280-295 mOsm/kg (or perhaps some biological proxy of that target). 

For example, we may find that there is a hunger governor controlling the hormone leptin, a tiredness governor controlling the hormone melatonin, and so on. The answers we find probably won’t be quite that simple, but we’re looking for something along these lines. 
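
As a sketch of what that kind of description might eventually look like, a governor could be written down as a small record: the signal it monitors, the target it drives that signal toward, and the error it reports as an emotion. The specific signal and numbers below are placeholders, not established findings.

from dataclasses import dataclass

@dataclass
class Governor:
    emotion: str     # what we call the error signal
    signal: str      # the biological quantity being controlled (hypothesized)
    target: float    # set point, in the signal's own units
    current: float   # latest measurement of the signal

    def error(self) -> float:
        return self.current - self.target

# a placeholder entry, not an established finding
thirst = Governor("thirst", "blood osmolality (mOsm/kg)", target=285.0, current=292.0)
print(thirst.emotion, "error:", thirst.error())   # positive here means "go drink"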

We should also try to characterize signals like happiness and curiosity, which don’t seem to be errors from a control system (if nothing else, they aren’t actively driven towards zero!), but do seem to be important signals that interact with the other drives and with motivation in other ways.

Second, we should try to discover the parameters that tune the governors. It’s clear that some governors can be “stronger” than others, and that these patterns of strength and weakness differ between different people. People are more or less brave, more or less neat and clean, etc. We’d like to find out what it means, in a precise sense, for one governor to be stronger than another. 

We’d like to know whether parameters are individual to each governor, or global to all of them, or if there are some of both. For example, we’d like to know if each governor has an individual parameter that adjusts how it balances exploration vs. exploitation, or if there is an explore/exploit parameter that influences all the governors globally. 

One of our long-term goals is to find ways of measuring these parameters for each person. For example, we might want to ask if someone’s fear governor is stronger than their thirst governor, perhaps even how much stronger. This will give us the start of a true measure of personality. 

Third, we will want to discover the laws of what’s known as selection, the detailed parliamentary procedure and rules that control how the governors vote on actions. 

As before, there will be parameters that adjust these laws, and make people different from one another. Learning how to measure these parameters will give us an even stronger theory of personality.

Fourth, as we develop a better understanding of the drives, the governors, and the laws that dictate their behavior, we can start working to characterize well-known behaviors in terms of these governors and their parameters. 

Here are some things we might be able to understand in terms of this new paradigm: personality, anxiety, depression, personality disorders, possibly other psychiatric disorders, self-harm, high-risk behavior, drugs, and addiction. 

If cybernetic principles lead to models that have natural outcomes that look just like anxiety, depression, addiction, etc., that will establish the promise of this approach. Then we can look at the points where the models fail, consider alternative models, refine the approach, and make the models even better. 

But this last project is kind of “for the rest of time”. If building the paradigm is successful, people can spend the next few hundred years applying it. But first we have to build it.

1. Considerations

First, a few considerations, issues that might come up when trying to discover the drives.

1.1 Are Emotions Constructed?

One of the questions academics keep asking about emotions is whether or not they are “culturally constructed”. 

This may seem like a weird question, but to people on the inside of academic psychology, it’s a major topic.

But we’re not here to revisit those debates, we’re here to put them to rest. The cybernetic perspective gives a very clear answer to the question of whether or not emotions are constructed: yes, and no.

All emotions are biologically hard-wired, because they are the error signals from our most fundamental drives, all of which are necessary for survival. These are not at all constructed. While we don’t yet know the details, we understand that at some level they are physically distinct from each other, controlling different biological signals towards different set points. 

But emotion categories are culturally constructed. There are a huge number — dozens, maybe hundreds, maybe even thousands — of individual emotions, but we don’t have a word for each of them. Instead we group them together in ways that make practical sense for the needs of our culture. 

As usual, hunger is a good example. We treat hunger as if it is just one signal, when in fact hunger is easily a dozen different emotions, maybe more. But because these emotions are all addressed by similar actions (stuffing something in your maw) most languages treat them as one thing.

We can unpack our emotion words when we need to — we can talk about craving salt, or talk about specific cravings that come from this drive, like craving pickles. We can say things like, “I’m stuffed but I’m still hungry!” and so on. But the hunger drives are closely intertwined most of the time, so most languages don’t make any serious distinction between them.

Desert mice almost never drink water; they get almost all their water from their food, from eating seeds. So if desert mice developed a language, they would probably come up with a single word that meant both hungry and thirsty. In their experience, hunger and thirst are addressed by one action, eating seeds, and it’s more useful to combine these ideas than to keep them separate. 

No group of humans is as extreme as the desert mouse — but still, we do wonder if there are cultures where people get most of their water from their food, and if those cultures would bother to distinguish between hunger and thirst, or if they would have one word covering both. 

1.2 Redundancy

Basic needs, especially needs that are critical to our survival, are probably supported by more than just one drive. 

Elevators are built not only to support the weight they are rated to carry, but to support many times that weight, and they have multiple brakes and other failsafes in case of crisis. If one brake or failsafe malfunctions, the others kick in to prevent disaster. 

For the same reason, we should expect drives to be redundant, sometimes massively redundant. Humans tend to create systems that are highly efficient but fragile. But nature tends to create systems that are inefficient but resilient. If an animal has only one drive that tells it to eat, then if anything goes wrong with that drive, it will die. Better to have multiple drives, so that the animal is able to survive even if it is born with a surprise mutation or gets an unexpected brain injury. 

The more important a need is to survival, the more likely it is that there will be built-in redundancy. A need that is critically important may be supported by not one governor but by many separate governors that all control different measures of the same need. 

2. Observational Methods

One of the most foundational projects is to discover the list of drives and emotions. Above anything else, we should figure out how many different drives there are, and do our best to identify each of them. 

We can do this in two ways. We can use methods that are observational (looking at historical data, case studies, etc.) and methods that are empirical (let’s collect some data). Let’s look at these methods one at a time, starting with observational methods. 

2.1 Pure Observation

We can draw a lot of reasonable conclusions about the list of drives based on our everyday experiences of what it’s like to be human, and what we know about what it takes to survive. 

For example, we know that people have drives that lead to hunger and pain because we all experience those emotions, and it’s clear that they motivate our behavior. Most behaviors you encounter can be explained in terms of a basic drive.

Drives aren’t linked directly to each specific behavior, of course. There isn’t a drive to watch operas, or to play shuffleboard. For one thing, those options didn’t exist for our ancestors. People are probably driven to do these things because of some kind of general social emotions. But any behavior that can’t be explained in terms of a known basic drive may point to a basic drive of its own.

For example, it seems possible that humans have a basic drive to look at animals. As strange as this might sound, we go to great lengths to look at animals, even in private when no one is watching, and it seems like we are driven to this for no other reason than to look at them. It’s hard to explain these behaviors in terms of another drive, so the drive to look at animals may itself be basic.

Think about all the time, space, and money we spend on zoos. Think of how we plaster the walls of our kindergartens with pictures of lions. Think of how many hours you personally have spent watching nature documentaries, or animal videos on YouTube. 

Before dog people got online, everyone knew that cat pictures ruled the internet. Animal pictures still rule the internet. As of this writing, the subreddit r/aww (mostly pictures of animals) is the 6th largest subreddit, with 37 million members. This may also be why people get pets in the first place, so they have animals to look at whenever they want.

If the desire to look at animals is a drive, then it should be homeostatic and conserved; you should want to go to the zoo for a while, then you should be ready to go home. If we keep you from going to the zoo, you will look at geese in the park instead. And if we keep you from looking at any animals at all, you may eventually become nearly frantic with your desire to do so, especially if this drive is unusually strong in you.  
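
Just to make the “homeostatic and conserved” prediction concrete, here is a toy model of what such a drive would look like. The drive itself is only hypothesized, and every number below is invented.

```python
# A toy model of the hypothetical "look at animals" drive, just to show what
# "homeostatic and conserved" would predict. Every number here is invented.

def animal_drive_error(days, zoo_days, buildup=1.0, relief=5.0):
    """Track the error of the hypothetical governor day by day."""
    error, history = 0.0, []
    for day in range(days):
        error += buildup                      # pressure to look at animals accumulates
        if day in zoo_days:
            error = max(0.0, error - relief)  # a trip to the zoo discharges the error
        history.append(error)
    return history

print(animal_drive_error(10, zoo_days={3}))   # error drops after the day-3 zoo trip
print(animal_drive_error(10, zoo_days=set())) # deprived: the urge just keeps climbing
```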

Games like 2048 and Candy Crush suggest that there might be some kind of drive that causes sorting behavior, though maybe this is just an unusual manifestation of a drive for decorating or cleaning your environment. 

Like, let’s draw out how weird it is that people play these games. What the fuck is going on? Why is it so engrossing to watch two little blocks labeled “2” combine to form a block labeled “4”? People will do this for hours. It sounds so dumb, and yet when you’re on a plane sitting behind someone playing this on their seatback screen, you can’t look away.

When we find something extremely engrossing, it might be because it has concentrated the exact thing our drive is trying to control. If the drive here is something like “sorting”, there aren’t many naturally-occurring situations where you’re only sorting. But a game can provide you with pure, unadulterated sorting. (Compare: superstimuli.)

Another unlikely-sounding drive is some kind of drive to dig holes. The strongest evidence for this is in hobby tunneling, where people wake up one day and start digging vast networks of tunnels, usually for no apparent reason. They often do their digging in secret, and they’ll keep doing it even if there is a social or material cost, even when it’s forbidden. This suggests that it’s not done for social reasons, but in fact is done in spite of them.

What else could explain the incredible popularity of Minecraft? Why would children flock to a game about digging, instead of a game about anything else? As they say, the children yearn for the mines. 

When it is hard for us to do an activity itself, watching the activity can sometimes serve as an acceptable substitute. In this way, a drive for excavation might explain what the Italians call umarell. You’ve probably seen them — old men who spend their days watching construction sites, especially dig sites, standing there entranced with their hands clasped behind their back. This is enough of a universal across time and space that Jerry Seinfeld even has a bit about it.

Of course, these Italian men are so old-fashioned. Today the boys get all their construction watching on TikTok.

There may even be a drive to seek out weapons, expressed especially strongly in boys. If you have ever been a boy, or spent any time around boys, this will probably sound familiar. Check out this passage from the Cyropaedia, a 370 BC biography of Cyrus the Great: 

And to-day a battle is before us where no man need teach us how to fight: we have the trick of it by nature, as a bull knows how to use his horns, or a horse his hoofs, or a dog his teeth, or a wild boar his tusks. The animals know well enough,” he added, “when and where to guard themselves: they need no master to tell them that. I myself, when I was a little lad, I knew by instinct how to shield myself from the blow I saw descending: if I had nothing else, I had my two fists, and used them with all my force against my foe: no one taught me how to do it, on the contrary they beat me if they saw me clench my fists. And a knife, I remember, I never could resist: I clutched the thing whenever I caught sight of it: not a soul showed me how to hold it, only nature herself, I do aver. I did it, not because I was taught to do it, but in spite of being forbidden, like many another thing to which nature drove me, in spite of my father and mother both. Yes, and I was never tired of hacking and hewing with my knife whenever I got the chance: it did not seem merely natural, like walking or running, it was positive joy.

Consider this collection, and what could possibly have driven someone to put it together with such care.

People seem stuck on the idea that complex behaviors like digging or pretending a cool stick is a weapon couldn’t possibly be innate. But obviously they can be. Breeds of dogs whose ancestors were bred to herd animals will herd animals without having to be taught. Spiders spin webs. People usually become attracted to adult members of the same species, rather than becoming attracted to furniture or the moon. If evolution has enough discretion to latch our sexual drives onto reasonable targets most of the time, then surely it can latch other drives onto other complex targets, like a stick that reminds you of an AK-47.

While we can see evidence of these drives as they express themselves in specific kinds of behavior, we don’t immediately know what is actually being controlled. A drive to dig might be implemented as something like a drive to smell freshly-turned earth, because in general that target would lead to digging behavior. You could imagine how tangential behaviors, like gardening, might be other, confused results of this drive. 

2.2 Ecological 

We can also draw some reasonable conclusions from our understanding of biology. 

All of our psychological drives were put into us by evolution to keep us alive and reproducing. So generally speaking, we should find in ourselves at least one drive (and matching emotion) for each thing that we need to stay alive, and at least one drive for each thing that has been necessary for evolutionary success.

You need to eat to stay alive, which is another reason to expect at least one drive for hunger. You don’t need sex to stay alive, but the species does need a sex drive to go on being a species at all, which is why evolution made us horny. Things that are necessary for survival (like breathing and sleeping) must be backed up by drives.

However, there are a few drives that are conspicuously missing — we don’t have quite every drive we need. See the example of scurvy, the horrible disease caused by a deficiency of vitamin C. You might think that people suffering from scurvy would seek out foods that contain the thing they lack, but as far as we know they don’t crave lemons or cabbage, which is why the cure took so long to discover. Vitamin C is necessary for survival, but people don’t appear to have a drive to seek it out.

There seem to be two main reasons we lack this drive.

First, most foods contain at least a little vitamin C, so most of our ancestors would have survived just fine without a drive telling them to seek it out. If you eat any kind of normal diet, you’ll end up with plenty of vitamin C by default. Only in very weird situations where you get no fresh food at all, like being a 15th century mariner or an arctic explorer, does this become a problem.

Second, this is a specific case where humans happen to be very unusual. We are one of the very small number of animals that can’t synthesize our own vitamin C, which is why we need to find it in our food. Most animals don’t need to consume any vitamin C because they make their own, so most animals would have no need for a vitamin C drive at all.

We probably inherit most of our drives from designs that are common to all mammals, and since the default mammal package doesn’t include a drive for vitamin C (because most mammals make their own), humans would have had to evolve such a drive from scratch. But given that vitamin C is so abundant in everything we normally eat, it’s easy to imagine why we didn’t bother.  

We have a vegetarian friend who used to struggle with random fatigue and low energy. Then he tried taking vitamin B12, and immediately felt a huge difference. But he didn’t seem to crave foods high in B12 before, suggesting that vitamin B12 also lacks a governor, despite being an essential nutrient.

This may be a common feature of many vitamins — in fact, it may be part of what it means for us to call something “a vitamin”. Most vitamins were discovered by people trying to cure diseases of deficiency, where people weren’t getting enough of the vitamin. But it’s hard to develop a deficiency of something you have a drive for: you’d develop cravings, and both the deficiency and the cure would be obvious. So if we do have a drive for a substance, it probably never gets classified as a vitamin in the first place.

Some essential minerals probably have governors, but others may not, and it’s not entirely clear which is which.

But there will be signs. If you have a drive for a mineral, it should be pretty hard to develop a deficiency in that mineral, since you will normally be driven to consume it. But if you don’t have a drive for a mineral, then just like with vitamins, you’re at risk of developing deficiencies in that mineral, since you don’t have any natural motivation to seek it out. If there’s a mineral that people are always getting deficient in, that’s probably a sign that it doesn’t have a drive.

Iodine is a necessary mineral — if you don’t get enough, you develop terrible diseases of deficiency, especially goiter. This happens pretty frequently, or at least it did until people discovered the connection and started supplementing salt with iodine. Again this seems like possible evidence that there’s no iodine drive and no iodine governor. If there were, then all these Swiss people suffering from goiter would have been sitting around in their mountain cabins going “damn I would kill for some seafood right now” (seafood is high in iodine). On the other hand, maybe they were saying that, and history simply didn’t record it.

This seems like the kind of thing we should already have a clear answer for, but the literature on iodine is pretty unclear — there are a few studies, like this one that says that children aged 8-10 can’t tell the difference between traditionally prepared pickles made with iodized salt and traditionally prepared pickles made with non-iodized salt. Most of the existing research agrees, though there isn’t much of it. 

But we’ve collected a bit of data on this already, and found that while most people indeed seem unable to distinguish between iodized and non-iodized salt, a few people can pick them out of a lineup at rates somewhat better than chance. It’s also possible that most people can’t distinguish between iodized and non-iodized salt because most people aren’t iodine deficient, so that drive is inactive. 

Another slightly odd possibility is that maybe some people have iodine governors and other people don’t. Maybe this depends on where your ancestors lived, and whether they naturally got iodine in their diet (like if they were fisherpeople) or whether they had to actively seek it out to get enough (#hillpeople).

We are probably “missing” some other governors, especially governors for things that are not necessary to stay alive per se, but things that would be nice to have. 

For example, there appears to be no emotion that drives us to go and get more sunshine. Lack of sunshine is pretty bad for you, but there’s just no system making sure you go out and get it. Just as with vitamin C, our ancestors were exposed to so much sunlight that evolution never bothered to give us a sunlight drive.

This is why you have to use your human intellect, or a phone reminder or something, to remember to get your daily sunlight. You have a hard time building an association between sunlight and health because you don’t have a dedicated system keeping tabs on it.

2.3 Resistance

Another way to identify the drives is to ask ourselves what kinds of things make people angry when you try to stop them from doing those things. 

This provides some justification for drives like privacy and territoriality. Most people will go nuts if they’re not allowed some amount of personal territory; think of the teenager with the STAY OUT sign on their door.

This is also the reason to believe in various social emotions, like an emotion that arises when we feel we are being taken advantage of. This governor has a target that’s something like “I am doing 1/x of the work in this group, where x is the number of people in this group”. If you are doing much more than your fair share of the work, putting you far from this target, then you get an error signal that feels something like being exploited, or being played.
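
As a toy illustration of what that target could amount to, the error might be computed something like the sketch below. This is only a guess at the controlled variable, and the numbers are invented.

```python
# A toy version of the proposed exploitation governor, assuming the target really
# is "I do about 1/n of the group's work." The numbers below are illustrative.

def exploitation_error(my_hours, everyone_hours):
    """Positive error when you do more than your fair share of the work."""
    fair_share = sum(everyone_hours) / len(everyone_hours)
    return max(0.0, my_hours - fair_share)

# Three roommates: you clean 6 hours a week, the others clean 1 hour each.
print(exploitation_error(6, [6, 1, 1]))  # about 3.3 "hours" worth of feeling played
```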

This is why roommate situations are so stressful. People have different setpoints for cleanliness, and you might expect that each person would just clean the apartment up to their preferred level. An animal that had no social emotions would probably do exactly that. But people are social animals, and for people living in groups, the desire for cleanliness is in conflict with the desire not to get taken advantage of. 

We can also take the argument from depression in reverse. When someone is in the depths of a serious depression, we think that’s a result of all of their drives being turned way down. What do people conspicuously stop doing when they are depressed? The answer is hygiene. They let both their body and their living space become unkempt, even filthy, which suggests that in ordinary times, keeping clean is supported by a drive of its own.

If you try to stop someone from getting/doing something, and they resist, that’s a drive. This is especially useful for drives like privacy, where it may not look like people are actively doing anything. But a drive for privacy becomes apparent when you don’t let people have it, because then they will fight for it.

2.4 Knockout

Sometimes a drive is conspicuously absent in a few individuals, throwing into stark relief the fact that it’s present in everyone else. This can give us a surprisingly clear picture of the missing drive — the shape of something can be more obvious from its absence than its presence (or at least you can learn different things about its shape from the absence).

Cases of total or near-total knockout, where a person or animal is entirely missing a drive or an emotion, provide pretty strong evidence that the drive is present in everyone else. Consider the patient known as SM-046, a woman with severe amygdala damage who experiences almost no fear, and yet who, when researchers had her inhale carbon-dioxide-enriched air, still felt panic at the sensation of suffocating.

While the researchers behind this study don’t seem to understand its significance, we see this as strong evidence that fear and suffocation are separate emotions arising from separate drives.

SM has a complete fear knockout, and never experiences fear, no matter how dangerous the situation. She just doesn’t have that governor, or her copy of the governor is totally turned off. But she will still feel “air hunger” when she is suffocating, because breathing is handled by a different governor. It produces an entirely different error signal, one that’s easy to mistake for fear if you’re not looking carefully. 

Fear is pretty important to survival, so it seems like one of those cases where you might expect evolution to have added some redundancy. It seems reasonable to have different fear governors for different things, so if you knock your head wrong once and are no longer afraid of snakes, at least you’re still afraid of tigers. But SM doesn’t seem to have any backup fears that are still online. 

This suggests two possibilities. 1) Maybe there is really only one governor that accounts for every kind of fear. SM isn’t just missing some kinds of fear, she’s missing every kind, because there’s a single point of failure. 2) There are multiple fear governors, but they are organized in a way where it’s possible to knock all of them out at once. For example, maybe there are multiple governors but her ability to generate the perception of danger is knocked out, so all the governors are totally inactive. 
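
Here is a schematic way to hold both possibilities in your head at once. Nothing about it is meant to be anatomically real; it only shows where a single point of failure could sit in each architecture.

```python
# Two schematic architectures for fear, to make the two possibilities concrete.
# Nothing here is meant to be anatomically real.

def single_governor_fear(danger, governor_working=True):
    """Possibility 1: one fear governor. Knock it out and all fear is gone."""
    return danger if governor_working else 0.0

def multi_governor_fear(snake_danger, tiger_danger, perceives_danger=True):
    """Possibility 2: several fear governors downstream of one danger-perception stage.
    If that shared stage fails, every fear governor goes quiet at once."""
    if not perceives_danger:
        return 0.0
    return snake_danger + tiger_danger

print(single_governor_fear(7.0, governor_working=False))      # 0.0: no fear at all
print(multi_governor_fear(3.0, 4.0, perceives_danger=False))  # 0.0: same observable result
```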

There are also some very rare genetic conditions that leave people with no experience of physical pain. These conditions are very rare because pain is very important. Without pain, you usually die, because you have no motivation not to put your arm in a wood chipper. One patient said, “at a young age, I would like to bang my head against the wall because I liked the feeling of vibration”.

This suggests that like fear, pain might be a single emotion, because it can be so cleanly toggled on or off. As far as we know, there aren’t genetic conditions where you can feel burning but you can’t feel cutting, or vice versa. People seem to either have pain basically working or have it basically gone, across the board.

That said, there do appear to be shades of pain insensitivity. For example, Jo Cameron has a version of pain insensitivity where she still experiences pain in the sense that she can avoid harming herself, but her subjective experience of pain isn’t at all unpleasant. She can tell that she’s been burned or cut, but she doesn’t mind. She described childbirth as “a tickle”, and said, “I could feel that my body was changing, but it didn’t hurt me.”

While Jo’s case is extreme, this kind of variation seems common. Some people experience pain but don’t mind it, other people don’t notice pain at all, and there are shades in between. So maybe there are tightly-linked drives or subcomponents that can eventually be distinguished with enough examination.

Extreme personality disorders may also be a kind of knockout. The average psychopath behaves a lot like a person with weights near zero on certain social governors, the governors that normally make people feel emotions like empathy and shame. 

Compare the stories of patient SM and Jo Cameron to this podcast interview with the sociopath M.E. Thomas / “Jamie”. There’s a lot of interesting stuff in here, but we want to highlight this one section where Spencer, the interviewer, asks her about fear:

SPENCER: I know a handful of sociopaths, and one thing I’ve asked them about is fear. Some of them say that they don’t think they have fear, or at least not in the normal way that other people do. What’s your relationship with fear?

JAMIE: Yeah, I totally agree with that. … Sometimes that’s gotten me in trouble because I will not take adequate precautions. Sometimes I do things that can maybe seem like I’m a little accident-prone. For instance, when I go mountain biking, I probably crash like 20% of the time, which I’ve heard is high.

SPENCER: Yeah, you mention in your book how you cut yourself in the kitchen a lot with knives by accident. Can you talk about that?

JAMIE: Yes, I still have a plastic safety knife. It’s kind of like the type that you carve pumpkins with, or little children can carve pumpkins with. I almost always use that knife. Here and there, I think it actually is safer for me to just use a bigger metal knife, but then I have to be very, very conscientious. I’m the same way too with train tracks. There are some train tracks close to where I live, and I cross them basically every day, but I know that I’m bad at paying attention and being careful for my own self. So I really talk to myself when I’m doing it, I’m like, “Here we come, 15 feet from the train tracks, 10 feet from the train tracks. Look right, left, right, left, right.” It’s this very belt and suspenders approach to kind of rein in my brain, which naturally doesn’t care, doesn’t even pay attention to things like that.

Sometimes psychopaths like to say that they are more rational than other people, like in this excerpt: 

JAMIE: I think you can always cooperate with psychopaths when your incentives align, and when you’re able to convince a psychopath that the incentives do align, then the psychopath is a very good team member.

SPENCER: And why are they a good team member?

JAMIE: Because once their incentives are aligned that way, they’re almost like a robot. They will always behave in a way that is in alignment with their incentives. Essentially, you can trust — in economics, they talk about the rational actor, who always behaves rationally — in a lot of ways, the psychopath, as long as they’re not experiencing gray rage or maybe some weird hormones or a situation like that, they basically are the economic rational actor.

But assuming self-preservation is one of your values, what is so rational about crashing 20% of the time you go mountain biking? 

A different interpretation is that psychopaths aren’t more rational, but they are less conflicted. What they describe as a lack of ego is perhaps a lack of the self-suppressing social emotions that include certain types of fear of social consequences (for example, shame). 

In a normal person, these prosocial emotions are in conflict with selfish desires that might lead someone to cheat, lie, steal, and so on. But psychopaths mostly lack these emotions; they are entirely un-self-conscious. This means that they feel little hesitation about bending the rules. But it also has the relaxing side effect of leading to less inner conflict, which might make one feel very rational. After all, the experience is of having clear desires and working towards them without any second thoughts.

This might also be why psychopaths are often so charming and charismatic — we find a lack of inner conflict very attractive, and the lack of tension even shows in the face.

3. Empirical Methods

So far we’ve looked at observational techniques only. Now we’re gonna get off our asses and (describe how to) collect some data. Here’s how we might do it.

3.1 Artificial Knockout

Natural knockouts are the clearest-cut examples, and teach us the strongest lessons. But we can learn similar lessons by knocking out emotions artificially, like with drugs.

Drugs don’t usually seem to reduce the weight on a governor to zero. But they do often seem to turn the weight (or error) on a drive down, and sometimes they seem to turn it up. For example, alcohol seems to temporarily reduce the weights on the fear and shame governors, making people less driven by fear and shame. In contrast, it doesn’t seem to have much impact on the hunger governor. Drunk people seem just as hungry as normal. Or maybe alcohol turns hunger up; it seems like everyone wants fried food after a couple of pints, but maybe this is driven more by the sudden lack of shame.

Sometimes the changes caused by drugs are what we would normally think of as “side effects”, but all effects are really just effects. When we talk about SSRIs having sexual side effects, this may cash out as them interfering in some way with the horny governor.
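
If drugs really do act by scaling governor weights, the bookkeeping would look something like the sketch below. The baseline weights, errors, and multipliers are pure invention; only the shape of the model is the point.

```python
# A sketch of the "drugs scale governor weights" idea. The baseline weights,
# errors, and multipliers are invented; only the bookkeeping is the point.

def drive_strength(weight, error):
    """How hard a governor pushes behavior: its weight times its current error."""
    return weight * error

governors = {"fear": (1.0, 8.0), "shame": (1.0, 5.0), "hunger": (1.0, 4.0)}

# Suppose alcohol cuts the effective weight on fear and shame but leaves hunger alone.
alcohol = {"fear": 0.3, "shame": 0.3, "hunger": 1.0}

for name, (weight, error) in governors.items():
    sober = drive_strength(weight, error)
    drunk = drive_strength(weight * alcohol[name], error)
    print(f"{name}: sober {sober:.1f}, drunk {drunk:.1f}")
```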

There are some extreme circumstances that are almost like knockouts, and may help us distinguish between emotions in similar ways. Our favorite example, of course, is the potato diet. When people eat almost nothing but potatoes for several days, some of them find that the normal sensation of hunger becomes very weird. They say things like:

  • “It’s been very easy for me to not eat enough doing this and not realize that’s why I feel off. Might be worth a PSA. Hunger literally feels different on this diet.”
  • “finding myself completely forgetting about food, even as something i need to do to live. not experiencing any hunger. no urge to snack. i am certain i’m not drinking enough water. i definitely have more energy, and more focus, despite this … not sure if i’m actually hungry but haven’t eaten nearly enough.”
  • “I did get more tired throughout, and my appetite actually continually decreased. Had to remind myself to eat quite often and actually made a schedule.”
  • “On 100% potatoes, I don’t ever feel ‘hungry’ the way hunger usually feels, I’ll notice that I’m low-energy or fading, and that’s my signal that I should eat again”
  • “the normal feeling of hunger was entirely gone for me – what was left was a feeling of being almost faint and feeling not great when I went too long without eating. Took a lot of adjusting to.”

We think that “hunger” is actually a number of different emotions that come from several different drives. Because eating a well-rounded meal satisfies most of these drives at the same time, we don’t normally experience these emotions independently from one another, which is why we call them by a single name. 

We interpret the comments from the potato diet as reflecting a situation where some hunger emotions are unbundled from others, creating unusual subjective experiences. 

We think it went something like this: let’s say there’s one hunger drive for calories and then a bunch of drives for micronutrients like magnesium, sugar, or whatever. Normally the calorie governor drives most eating behavior, since that’s the strongest signal. The other signals rise and fall with the calorie signal anyway, because if you’re getting enough calories from a mixed diet, you will be getting approximately the right amount of the other things you need. They only chime in if you happen to be getting a diet really low in magnesium or whatever.

But something about the potato diet convinces your body that its weight set point should be lower, so it starts removing calories from your fat stores instead of adding them. This makes the calorie governor stay quiet. It doesn’t have to vote for you to eat to get calories anymore; they are being added directly to the bloodstream.

But your micronutrient governors don’t have the same kinds of reserves, so they keep sending out their error signals as normal. You’re not used to responding to these micronutrient errors in isolation, and they’re not used to running the show. You feel vaguely weird and bad, but it’s not something you’re used to thinking of as hunger, and you don’t immediately know what to do about it. That’s why hunger feels so strange on the potato diet.
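
Here is the same story as a toy calculation, with made-up governor names and numbers, just to show how the familiar bundle could come apart.

```python
# The unbundling story as a toy calculation. Governor names and numbers are invented;
# "hunger" is just the whole bundle of error signals read together.

def hunger_signals(calorie_error, other_errors):
    signals = {"calories": calorie_error}
    signals.update(other_errors)
    return signals

# Mixed diet, a few hours after a meal: the calorie signal dominates, and the
# smaller signals rise and fall along with it. This bundle is what "hungry" feels like.
print(hunger_signals(8.0, {"salt": 2.0, "magnesium": 1.0}))

# Potato diet, if fat stores are quietly covering calories: the calorie governor
# stays near zero while the smaller errors keep arriving on their own -- an
# unfamiliar pattern that doesn't read as normal hunger.
print(hunger_signals(0.5, {"salt": 2.0, "magnesium": 1.0}))
```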

Or here’s a slightly different model: If there are hunger governors for five different things and your diet only provides the nutrients that satisfy four of them, you’ll seem to experience hunger normally: very hungry before meals, full after meals (because of a fullness governor switching on). But there’s one governor that continues to vote for eating, which is later joined by the other four as time passes. So if switching to the potato diet suddenly satisfies all the hunger governors, you might experience the complete satisfaction of your hunger governors for the first time.

Which drives and emotions have been unbundled, and why exactly that would happen on potatoes, remains an open question. 

3.2 Behavioral Exhaustion

You can discover the root of a drive by separating the target of that drive into its component parts, and feeding each one into the system in turn. 

Let’s say you’re craving a cranberry juice cocktail. A natural question might be to wonder why you crave it so bad. Any craving presumably comes from one or more of your drives, but which one(s)?

A reasonable guess is that you don’t crave the whole cranberry juice cocktail, you actually crave one or more of its ingredients. You can test this by consuming the ingredients one at a time. If you first let yourself drink as much water as you want, and you still crave the cranberry juice cocktail, clearly you did not want it just because you were thirsty per se.

So you look at the other ingredients. There’s lots of sugar in the cocktail, maybe you are craving something sweet. So next you eat as much sugar as you want. If you’re still craving the cranberry juice cocktail, then it must have been something else.

In principle you can follow this process as far as you want, to discover precisely the ingredient you were craving. And once you discover the ingredient, you can follow the same process even further. You can go as far as centrifuging the original cranberry juice and eating different strata to determine exactly what part of it you were after. With enough effort, you might be able to identify the exact molecule.
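
Written out as a procedure, the method looks something like the sketch below. The helper functions stand in for real experimental steps and are entirely hypothetical.

```python
# The exhaustion method written out as a procedure. The helper functions passed in
# (consume_freely, still_craving) stand in for real experimental steps; everything
# here is hypothetical.

def find_craved_component(craved_item, components, consume_freely, still_craving):
    """Offer each component ad libitum in turn. The first one whose exhaustion
    abolishes the craving is (at least part of) what the craving was really about."""
    for component in components:
        consume_freely(component)           # e.g. drink as much water as you want
        if not still_craving(craved_item):  # craving gone: this component mattered
            return component
    return None  # the craving may be for a combination, not any single ingredient

# e.g. find_craved_component("cranberry juice cocktail",
#                            ["water", "sugar", "cranberry solids"],
#                            consume_freely, still_craving)
```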

In practice, things probably won’t be so simple. From oral rehydration formula, we know that some combinations of sugar, salt, and water are much more hydrating than others. If you mix the wrong combination, it can even become dehydrating. So in some cases, cravings may be holistic, your drives may really vote for something that is greater than the sum of its parts. This may be why some foods, like beans and rice, are often eaten together and seem much more delicious than the sum of their parts. In our pursuit of a better understanding of psychology, we can’t forget about biology. There is probably a reason why people prefer to drink lemonade instead of consuming water, sugar, and lemon juice in isolation. And by golly, we’re gonna find it.

In general, exhaustion shows that 1) there is a drive for the pure thing being exhausted (or else why would the organism keep taking/doing it), and 2) any behavior remaining after exhaustion cannot be caused in this case by the exhausted drive, though the exhausted drive might also vote for that behavior if it were not exhausted. 

3.3 Fungibility

Another angle is to look at impulses for different actions and try to determine whether, and to what degree, they are fungible.

The thermostat only cares about the temperature in the house. When the house is too cold, actions that raise the temperature in any way are all equally successful, since they all correct the thermostat’s error. So from the thermostat’s perspective, actions that raise the house temperature are totally fungible. It is just as happy to turn on the baseboard heating as it is to turn on the forced-air heating, in this absurd hypothetical where your house, for some reason, has both.

We can use other fungible actions in the same way, and trace them back to their common origin. For example, you may notice that you feel hungry. You want bananas. You interrogate that feeling — what else sounds good? The other things that come to mind are avocados, potatoes, and spinach. All of them sound great. 

In many ways these foods are very different — for example, the avocado is high in fat and the banana is not. But you realize that all of the foods that sound good have something in common: they are all high in potassium. So instead of eating any of these foods, you drink some straight potassium chloride in water.

You may find that you no longer feel hungry at all, suggesting that what you thought of as a general sense of hunger was in fact a single drive for potassium. Your potassium governor was happy to be satisfied in a number of different ways, so it was willing to vote for bananas, avocado, spinach, anything that would reduce its error. And when you drank straight potassium chloride in water, that also satisfied the drive, so the error signal went away. We don’t know if this would happen, but if it did, that would be fairly strong evidence for a potassium drive.
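
The logic of the inference is basically an intersection: what do all the equally-appealing options have in common? Something like the sketch below, with a toy nutrient table that is rough and purely illustrative.

```python
# The fungibility inference as an intersection. The nutrient table below is
# rough and purely illustrative.

NUTRIENTS = {
    "banana":  {"potassium", "sugar"},
    "avocado": {"potassium", "fat"},
    "potato":  {"potassium", "starch"},
    "spinach": {"potassium", "iron"},
}

def shared_candidates(craved_foods):
    """Intersect the nutrient sets of the craved foods to find common targets."""
    return set.intersection(*(NUTRIENTS[food] for food in craved_foods))

print(shared_candidates(["banana", "avocado", "potato", "spinach"]))  # {'potassium'}
```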

Similarly, you might notice you have a craving for eggs and broccoli. Then you eat some nutritional yeast, which is basically nothing but B vitamins. Five minutes later, you don’t crave those foods anymore. Same deal.

3.4 Prevention

This is a version of fungibility in reverse, or an empirical version of resistance. To see what drive is behind a behavior, keep the person (or animal) from performing it. If an organism tries to do something, stop it. What does it do instead? That substitute behavior is probably an expression of the same drive.

If you do this enough, you can triangulate all these behaviors and infer what variable the drive is controlling. You might also learn that two behaviors you thought were different are both expressions of the same drive. 

It’s also interesting that at some point the organism might make a substitution, e.g. look at pictures of food if it can’t manage to eat. When it can’t substitute any further, you have the drive surrounded.

3.5 Effort

A similar method is to see what goals an animal will expend large amounts of effort to reach. A rat will push a lever 1000 times for water if that’s its only way to get hydration. It wouldn’t do this if the desire for water were an epiphenomenon of some other drive. The rat really wants water, specifically wants water, and will accept no substitutes. The fact that it puts in so much effort is the evidence.

3.6 Division

To alchemists, Fire was considered “the true and Universal Analyzer of all Mixt Bodies”, capable of dividing any substance into its baser components.

But there were some problems with this approach. The alchemists were shaken when they discovered that Fire was not the only thing that could divide a substance into simpler components. They found that liquids like urine, beer, and wine would separate when put out in extreme cold.

Worse, there were some elements that fire couldn’t separate at all. Robert Boyle relates a story of gold being kept in a furnace for two months straight. The gold stayed a liquid the whole time, but it never separated into baser substances. Apparently fire had failed to separate gold into its elementary ingredients. Some true and Universal Analyzer! Or, more radically, maybe this meant that gold didn’t have more basic components, that gold itself was an element.

Observations like these threw alchemy into a state of chaos. In the preface to his Elements of Chemistry, where he pitches his homies on a new way of doing things, Antoine Lavoisier explains this history. He apologizes for not including a list of all the elements, saying (emphasis ours):

It will, no doubt, be a matter of surprise, that in a treatise upon the elements of chemistry, there should be no chapter on the constituent and elementary parts of matter; but I shall take occasion, in this place, to remark, that the fondness for reducing all the bodies in nature to three or four elements, proceeds from a prejudice which has descended to us from the Greek Philosophers. The notion of four elements, which, by the variety of their proportions, compose all the known substances in nature, is a mere hypothesis, assumed long before the first principles of experimental philosophy or of chemistry had any existence. In those days, without possessing facts, they framed systems; while we, who have collected facts, seem determined to reject them, when they do not agree with our prejudices. The authority of these fathers of human philosophy still carry great weight, and there is reason to fear that it will even bear hard upon generations yet to come.

It is very remarkable, that, notwithstanding of the number of philosophical chemists who have supported the doctrine of the four elements, there is not one who has not been led by the evidence of facts to admit a greater number of elements into their theory. The first chemists that wrote after the revival of letters, considered sulphur and salt as elementary substances entering into the composition of a great number of substances; hence, instead of four, they admitted the existence of six elements. Beccher assumes the existence of three kinds of earth, from the combination of which, in different proportions, he supposed all the varieties of metallic substances to be produced. Stahl gave a new modification to this system; and succeeding chemists have taken the liberty to make or to imagine changes and additions of a similar nature. All these chemists were carried along by the influence of the genius of the age in which they lived, which contented itself with assertions without proofs; or, at least, often admitted as proofs the slightest degrees of probability, unsupported by that strictly rigorous analysis required by modern philosophy.

Lavoisier doesn’t claim that he knows what is an element and what is not. He says that we are going to need some very serious analysis before any of us can be sure. So instead of starting with a list of the elements, Lavoisier proposes a new method for figuring them out:

If, by the term elements, we mean to express those simple and indivisible atoms of which matter is composed, it is extremely probable we know nothing at all about them; but, if we apply the term elements, or principles of bodies, to express our idea of the last point which analysis is capable of reaching, we must admit, as elements, all the substances into which we are capable, by any means, to reduce bodies by decomposition. Not that we are entitled to affirm, that these substances we consider as simple may not be compounded of two, or even of a greater number of principles; but, since these principles cannot be separated, or rather since we have not hitherto discovered the means of separating them, they act with regard to us as simple substances, and we ought never to suppose them compounded until experiment and observation has proved them to be so.

To put this in more modern language: What we mean by “element” is “something that can’t be divided”. If we’ve discovered a way to divide some substance into different components, that substance can’t be an element. Elements are by definition the basic building blocks of matter that cannot be divided — so if it can be divided in any way, it’s not an element. (Ignore atomic chemistry for the moment, they wouldn’t discover that for a hundred years.)

Substances that we can’t divide are candidates. They might be elements — after all, they seem entirely indivisible so far. But some day we might discover a way to divide them into different components, which would prove they’re not elements after all. So they’re not elements for sure, only candidates.

If you put wood into a fire, it will be divided into ashes, smoke, etc. This makes it pretty clear that wood isn’t an element. But as of 1789, no one has found a way to divide gold into anything else, and it’s not for lack of trying. So gold should be considered an element, at least for now. To Lavoisier, gold is provisionally an element. Other things can be divided in a way that yields gold, but he’s never been able to confirm a way to divide gold into anything simpler. 

In short, it’s impossible to prove that something is an element, but you can prove that something is not, simply by dividing it. Anything we know how to divide is proven to be a compound, not an element. But anything we don’t know how to divide is only a possible element, because we may yet discover some way to divide it. 

We find ourselves in a similar situation today, and we can use something like Lavoisier’s approach to discover the full set of psychological drives (each with a corresponding emotion and governor), just like the chemists used his approach to discover the full set of elements.

The difference between these methods and the methods from the previous sections is that the methods in the previous sections start with observed behaviors, and try to figure out what drive(s) are behind them. These methods start with established or proposed drive(s) and try to learn more.

A good place to start is hunger. We think that hunger is not one emotion, it’s a common term applied to many emotions. The reason these signals are all mistakenly called by the same name, at least in English, is that they all come from governors that vote for eating behavior. These behaviors all look superficially similar, but in fact we put things in our mouths for a variety of reasons.  

Humans come with several different hunger drives because we need to eat several different things to remain healthy. We’ll call these things-you-need-to-eat “nutrients”, though this may be a little different from the common usage of that word.

Most foods contain more than one nutrient, so most foods satisfy more than one governor. A decent burrito will satisfy almost everything — your salt, carbs, fat, and guacamole governors, etc. This makes these emotions hard to disentangle, so most cultures don’t bother. It’s still possible to express these drives — “I’m really craving pickles” or “I would kill for some mozzarella right now” — and there are some related idioms like “sweet tooth”. But we don’t have dedicated words for each individual emotion, we just lump them together as “hunger”.

If you’ve messed around with your diet in really strange ways, as we have, you can sometimes get to the point where the different hunger drives become obvious. When we supplemented potassium, it was very clear to us that this increased our cravings for salt.

Like Lavoisier, we can try to break hunger down into individual drives, until we find drives we can’t distinguish any further. Those drives that can’t be divided are probably basic drives, at least until proven otherwise.

Let’s play through some examples. We think that there is probably at least one drive for salt (likely for sodium, but maybe there is a drive for chloride too) and at least one drive for fatty foods.

Now consider Joey, who wants to eat a pile of onion rings. If this is simply unalloyed hunger, a general desire for calories, then if you give Joey any other food, and he eats that food to exhaustion, he should no longer want to eat the onion rings.

However, if we assume there is one drive for salty foods, and a separate drive for fatty foods, we might suspect that the strong desire for onion rings reflects a combination of these desires, leading him to seek a food that is both salty and fatty. If true, he will also be at least somewhat interested in foods that are salty but not fatty, and in foods that are fatty but not salty.

Then, if we let Joey eat as much as he wants of a food that is salty but not fatty (perhaps mini pretzels), he will still be interested in foods that are fatty but not salty. And if we let him eat as much as he wants of a food that is fatty but not salty (perhaps avocado), he will still be interested in foods that are salty but not fatty. This would demonstrate that these are different drives. 
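
As an experimental design, this is a pair of exhaust-then-probe sessions. Here is a sketch, with hypothetical helper functions standing in for the actual procedures.

```python
# The Joey design as a pair of exhaust-then-probe sessions. The helper functions
# (offer_ad_libitum, rate_interest) are hypothetical stand-ins for real procedures.

def exhaust_then_probe(exhaust_food, probe_food, offer_ad_libitum, rate_interest):
    """One session: exhaust one candidate drive with a 'pure' food, then probe the other."""
    offer_ad_libitum(exhaust_food)    # e.g. as many mini pretzels as Joey wants
    return rate_interest(probe_food)  # how appealing is the probe food now?

# Run as two separate sessions, ideally on different days and counterbalanced:
#   session 1: exhaust_then_probe("mini pretzels", "avocado", ...)   # salt out, probe fat
#   session 2: exhaust_then_probe("avocado", "mini pretzels", ...)   # fat out, probe salt
# If interest in the probe food stays high in both sessions, that supports separate
# salt and fat drives rather than one undifferentiated hunger.
```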

It probably has not escaped your attention that most foods that are salty are also fatty, and vice versa (french fries, olives, peanut butter, etc.). Perhaps this indicates some kind of drive specifically for foods that are both fatty and salty, a drive that cannot be extinguished by salt or fat in isolation. We will probably discover some outcomes at least this weird, and we should try not to stick too closely to any assumptions. The early chemists really didn’t expect to some day discover isotopes.

Evolution is doing her own thing, and she has no obligation to provide categories that make any sense to us. Governors might be controlling anything at all. There might be an important hunger governor that controls a proxy of a proxy of the ratio between sodium and potassium in the bloodstream. That’s not something that a human will find intuitive — but it’s not about being intuitive to the humans! The only law is, whatever works! 

But assuming for a moment that our study with Joey did support the idea that there’s both a salt governor and a fat governor, similar techniques could be used to discover whether there’s just one governor controlling fat-hunger, or if there are separate drives for different kinds of fat. Perhaps one drive for saturated and another drive for unsaturated fat. Or perhaps one drive for sterols? The truth will probably be stranger than we expect. 

A relatable example of this is the “dessert stomach”. If you can eat a big meal and still have room for dessert, it must be because your sugar or fat governor (or both) is still active. You can exhaust chicken-hunger while not exhausting chocolate-lava-cake hunger. This is clear evidence that there are at least two hunger drives.

3.7 The Parable of Rat C13

A lot of the studies we’ve suggested would be difficult or unethical to run on humans. But it may be easier to run this kind of study with animals. 

First of all, we can have more control over an animal’s diet than we usually would over a human’s. And second, humans might try to eat more or less of something to show the researchers how virtuous or how tough they are, but animals won’t have anything to prove — they’ll express their hunger drives with little interference from drives about impressing the research team. 

A design might look something like this: Restrict the animal’s food for a while so we know it will be hungry. Then, give it as much butter as it wants and let it eat until it stops eating. This way, we can assume that it should be fully corrected for any nutrient in the butter. 

Then, give the animal access to olive oil. If it eats an appreciable amount of olive oil, that suggests there’s a drive for at least one nutrient in olive oil that is not in butter. Further tests should be able to isolate the exact nutrients. You could also try this in the opposite order, to find if there are drives for nutrients in butter that are not found in olive oil. 

And in fact, some of these studies have already been run on animals. As one example, consider a 1968 paper by Paul Rozin. In this study, Rozin housed Sprague-Dawley rats in cages that contained water, a salt-vitamin mix, and a “liquid cafeteria” of three foods: 1) sucrose in water, 2) a 30% protein solution, and 3) Mazola oil for fat. All the rats responded well to this cafeteria, growing bigger and showing a lot of stability in their choices of liquids.

Rats clearly had protein targets and were able to hit them without blinking. When offered protein solution diluted by ½ or ¼, they increased how much solution they drank to compensate, so that their protein intake was approximately constant, though they didn’t compensate quite as well for the ¼ solution as they did for the ½ solution. Some rats were better than others at keeping their protein intake constant. This starts looking like an early form of cybernetic personality testing — at least in rats. 
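
The compensation itself is simple arithmetic: to hold protein intake constant, a rat drinking a half-strength solution has to drink twice the volume. A toy version, with invented target and concentration values:

```python
# Back-of-the-envelope compensation. The target and concentration values below
# are invented; only the logic is the point.

def volume_needed(protein_target_g, concentration_g_per_ml):
    """Volume a perfectly-compensating rat would drink to hit its protein target."""
    return protein_target_g / concentration_g_per_ml

full_strength = 0.30   # hypothetical protein solution, 0.30 g per ml
daily_target = 4.5     # hypothetical daily protein target, in grams

print(volume_needed(daily_target, full_strength))       # baseline volume
print(volume_needed(daily_target, full_strength / 2))   # half-strength: twice the volume
print(volume_needed(daily_target, full_strength / 4))   # quarter-strength: four times
# Rozin's rats compensated well at the 1/2 dilution and less perfectly at 1/4 --
# imperfect, but recognizably a control system holding protein intake near a target.
```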

Even when Rozin added quinine hydrochloride to the diluted solution, a flavor that rats normally hate, they still compensated and drank more of the diluted protein solution. This suggests they really were controlling protein intake, not just drinking for taste. That said, Rat C13 seemed to like the quinine just fine, and didn’t show any preference for the solution without it. Another sign of personality — that Rat C13, what a character!

In contrast, when Rozin diluted the sucrose solution, their source of carbohydrates, the rats only drank a little more sucrose solution to compensate. Some rats didn’t drink more sucrose solution at all. This is kind of surprising, because under normal circumstances all the rats took at least 50% of their calories from sucrose. 

Similarly, when rats were deprived of protein for a few days, they would drink more protein solution to make up for it. But when rats were deprived of sucrose solution for a few days, they would actually drink slightly less sucrose solution when it came back. The effects of being deprived were also noticeably different. Rats lost more than twice as much weight when deprived of protein as when deprived of carbohydrates.

We wish that we could provide similar comparisons for fat, but Rozin says that, “due to the very low levels of fat intake, no meaningful compensation value could be calculated.”

This isn’t evidence that carbohydrates are totally unregulated — they may just be regulated on a timescale that isn’t noticeable over a few days. The author speculates that, “this failure may have occurred because the highly palatable 35% sucrose solution is consumed at levels well above a physiological minimum.” And of course, the regulation may just be too complex to see in Rozin’s data. But it does at least look like evidence that protein is closely controlled, and controlled separately from overall calorie intake, at least in rats. 

Score one for Lavoisier’s method. Assuming that these findings are reliable, this seems like clear evidence against the idea that there is just one elemental drive for hunger. It also seems like evidence in favor of a drive for protein. Whether that drive for protein is elemental, or whether it too can be broken down into a collection of more basic drives, perhaps drives for individual amino acids, remains to be seen. 

Similar studies have suggested that cows have something like 16 different hunger drives:

The idea of feeding minerals “free choice” to livestock came about by a need to decrease over-consumption of a liquid supplement containing phosphoric acid, protein, molasses, and other minerals. Upon investigation, it was found that the liquid supplement was being used heavily by the animal as a source of phosphorous. Consequently, we discovered if animals had access to a phosphorous source on a free choice basis, over-consumption of the liquid ceased. We then extended this concept to other vitamins and minerals: if the animal was able to select phosphorous on a free choice basis, perhaps calcium could be selected in the same manner – success!

…In time, potassium, sulfur, silicon, magnesium, vitamins, and trace minerals were added to the list. Finally, there were 16 separate vitamins and minerals fed free choice.

These findings should be independently and widely replicated before we treat them as strong evidence, but if true, this suggests that cows have drives for each of these vitamins and minerals. If they didn’t have a drive for sulfur, why would they spend their time eating it? 

4. In Which We Speculate About What Emotions There Are

The first major achievement for psychology may be a complete list of all the drives, governors, and emotions — each drive comes from a governor, and the emotion is that governor’s error signal. The most obvious analogy is to chemistry. This will be our version of the periodic table. 

We’re still a long way off from this list being completed, but we can make some educated guesses about what will be on there once it’s finished. You just heard a lot of those guesses in the previous sections — now, we’ll put those guesses together into a rough draft. 

A slightly unorthodox, yet promising list of the emotions

For now, we’ll try to call each governor by the name of its error signal — drives to eat come from a governor whose error is hunger, so these are hunger governors. The drive to keep yourself from physical harm comes from a governor whose error is pain, so this is the pain governor. 

That said, there are a few cases where it’s easier to call a governor by some other name. It’s nice when we have existing terms like “thirst” already on hand, but there are some emotions that don’t have a common name, at least not in English. So sometimes we will punt and call these drives only “a drive to do X”, where X is the characteristic behavior that makes us suspect there’s a drive there in the first place. 

The big question at this point is whether this can be more than just a list. The chemical elements have a periodic structure, their properties repeat in a regular pattern. This repetition, or periodicity, is visually organized in the periodic table, where elements are grouped into rows and columns to highlight these patterns. That’s the whole reason to have the periodic table in the first place — it’s more than just a list, and it eventually led to a better understanding of how the properties of elements are related to their atomic structure. 

Maybe there is no structure or pattern to the drives, and we will just end up with a long list. But if there’s any kind of pattern or structure, we’d love to come up with an organization that highlights that structure, instead of just listing the drives one by one. 

To make an early attempt, for now we will group the drives into three categories: physiological emotions, which attend to the basics needed to keep the body functioning; environmental emotions, which attend to the qualities of a person’s immediate external environment; and social emotions, which attend to a person’s social status and relations.

  • Physiological
    • Suffocation/Panic
    • Pain
    • Hot
    • Cold
    • Exhaustion
    • Waking
    • Thirst
    • Hunger (actually several drives)
    • Satiety (stops us from eating; also probably several drives)
    • A drive to fidget and be active that burns excess calories
    • “Zoomies” (this may be the same as the drive to fidget; consider also that rodents need wheels in their cages, and if you give them a wheel in the wild, they’ll run on that too!)
    • Horny
    • A drive to pee (The Sims called it “Bladder”)
    • A drive to shit
  • Environmental
    • Fear
    • Disgust
    • A drive to have a clean and organized living space (The Sims called it “Room”)
    • A drive to have a clean and well-groomed body
    • Possibly decorative drives (though these may be an extension of the cleanliness drives)
    • Possibly a drive to dig
    • Possibly a drive to look at animals
    • Possibly a drive to collect or hoard
    • Possibly a drive to sort
  • Social
    • A drive to regulate social status up
    • A drive to keep social status from growing too fast
    • A drive for physical contact; “touch starved”
    • A drive for privacy, perhaps territorial
    • A drive for autonomy
    • A drive to socially dominate
    • Possibly a desire to follow or submit 
    • Self-consciousness (an error when you are not acting consistently or normatively)
    • Empathy
    • Grief (the drive is to care for others, but the error signal is grief)
    • Loneliness
    • Anger
    • Shame

The list should also include other signals that are not cybernetic control errors. Here’s our current best guess for that list: 

  • Happiness
  • Surprise
  • Curiosity

Happiness and surprise are two things we subjectively experience all the time, but they don’t seem to be cybernetic control errors. They also don’t seem to drive behavior. 

In contrast, curiosity doesn’t seem to be a cybernetic control error, because it doesn’t seem to drive a target to zero, but curiosity does seem to drive behavior. As we speculated above, we think curiosity may be an adversarial signal that teaches us about the world by voting for us to explore options that our governors wouldn’t vote for on their own. 

If the history of psychology is any indication, people will want to jump straight to figuring out the social emotions. We think this is a mistake. The social emotions will probably be the hardest to uncover.

There are two reasons to leave the social emotions for later.

First, we don’t know what the social emotions might be controlling. If there really is a dominance emotion, what is it targeting? It can’t literally be “the image of someone wailing at your feet.” It’s going to be something more subtle, and we don’t currently know how to capture or measure that thing.

Second, investigating the social emotions is impractical. If you want to be able to alter someone’s social status at will for an experiment, you kinda have to put people in a Biodome or a VR world. Even then, it’s hard to be sure you’re really evoking what goes on in the regular world.

We have much stronger suspicions about what the physiological drives control, and investigating them doesn’t require us to build a whole alternative society. You can just make people eat salt or not eat salt and see what happens. 

And because other animals probably share a lot of our physiological emotions, we can run studies on them that would be unethical or impractical to run on humans. You can’t study the social emotions in other animals because other animals probably don’t have most of the social emotions that humans do. Maybe dolphins or elephants, but they’re hard to study.

We should start with something easier. We should start by studying emotions like hunger and fatigue, then use what we’ve learned to eventually understand the social emotions.

In chemistry, we discovered the gases first, then later got around to the other elements. In psychology, we will probably learn about the physiological drives first. We may cut our teeth on hunger before working up to things like fatigue, pain, fear, and eventually the social emotions, which are probably the most baroque and complex. 

It’s true that the social drives are the most interesting, and it might seem like understanding them would be more important, would solve more of the problems you care about. But be patient. You have to spend some time rolling balls down ramps before you can go to the moon.


[Next: OTHER METHODS]


The Mind in the Wheel – Part IX: Animal Welfare

2025-04-17 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]
[INTERLUDE – I LOVE YOU FOR PSYCHOLOGICAL REASONS]
[PART VIII – ARTIFICIAL INTELLIGENCE]


When people talk about the ethical treatment of animals, they tend to hash it out in terms of consciousness.

But figuring out whether animals have consciousness, and figuring out what consciousness even is, are philosophical problems so hard they may be impossible to solve.

There’s not much common ground. The main thing people are generally willing to agree on is that since they themselves are conscious, other humans are probably conscious too, because other humans behave more or less like they do and are built in more or less the same way.

So a better question might be whether or not animals feel specific emotions, especially fear and pain. 

The cybernetic paradigm gives a pretty clear answer to this question: Anything that controls threat and danger has an error signal that is equivalent to fear. And anything that controls injury has an error signal that is equivalent to pain. 

This allows us to say with some confidence that animals like cows and rats feel fear, pain, and many other sophisticated emotions.

There’s no reason to suspect that a cow or a rat’s subjective experience of fear is meaningfully different from a human’s. We can’t prove this, but we can appeal to the same intuition that tells you that since you are conscious, other humans are probably conscious as well. 

You believe that other humans feel fear, and that their fear is as subjectively terrifying to them as your fear is to you, for a simple reason: you notice that another person’s external behavior is much the same as yours is when you feel afraid, and is happening under similar circumstances. Then, you make the reasonable assumption that since all humans are biologically similar to one another, their external behavior is likely caused by similar internal rules and structures. Since there’s no reason to suspect that basically the same behavior created by basically the same structures would be any different phenomenologically, you conclude that other humans probably have the same kind of subjective experience.

With a better model for the emotions, this same logic can extend to other animals. Assuming we are right that a cow also has a governor dedicated to keeping it safe, which generates an error signal of increasing strength as danger increases, which drives behavior much like the behavior we engage in when we are afraid, there is little reason to suspect that the cow’s subjective experience is meaningfully different from our own. At the very least, if you accept the conclusion for humans, it’s not clear why you would reject it for other animals. 

This is a relatively easy conclusion to draw for other complex, social mammals. They almost certainly feel fear and pain, because we see the outward signs, and because the inside machinery is overall so similar. But it’s harder to tell as animals become less and less closely related to humans.

An animal that doesn’t bother to avoid danger or injury clearly isn’t controlling for them. But most animals do. So the question is whether these animals actually represent danger and injury within a control system, trying to minimize some error, or if they simply avoid danger and injury through stimulus-response. 

Dogs probably feel fear, and even without dissecting their brains, we can reasonably assume that they use much the same mechanisms we do. They’re built on the same basic mammalian plan and inherit the same hardware. But what about squid, or clams? These animals probably avoid danger in some way, but it’s not clear that they use an approach anything like ours.

If an animal cybernetically controls for danger and injury, then it is producing an error signal. In this case, the argument from above applies — there’s no reason to suspect that a creature using the same algorithms to accomplish the same thing is having a notably different experience. Its error signal is probably perceived as an emotion similar to our emotions.

But if an animal’s reaction to danger is instead a programmed response to a set stimulus, then there is no control system, no feedback loop, and no error signal. 

For example, we might encounter an arthropod that freezes when we walk nearby. At first this looks like a fear response. We imagine that the arthropod is terrified and trying to avoid being seen and eaten. 

But through trial and error, we show that whenever a shadow passes over it, the arthropod always freezes for exactly 2.5 seconds. Let’s further say that the arthropod shows no other signs of danger avoidance. If you “threaten” it in other ways, put it in other apparently dangerous situations, it changes its behavior not at all. The only thing it responds to is a shadow suddenly passing overhead.

ARE YOU CONSCIOUS???

This suggests that, at least for the purposes of handling danger, this arthropod operates purely on stimulus-response. As a result, it probably does not feel anything like the human emotion of fear. Even if we allow that the arthropod is conscious in some sense, its conscious experience is probably very different from ours because it is based on a different kind of mechanism. 
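To make the contrast concrete, here is a toy sketch in Python. The thresholds and numbers are invented, but the structural difference is the point: the stimulus-response routine has no feedback at all, while the cybernetic version keeps comparing its perception to a target and keeps acting until the error signal (the thing we’re suggesting gets felt as fear) is gone.

    def stimulus_response_arthropod(shadow_overhead):
        # One trigger, one canned response, no feedback loop.
        return "freeze for 2.5 seconds" if shadow_overhead else "carry on"

    def cybernetic_animal(perceived_danger):
        # Compare perception to a target of zero danger and keep acting
        # until the error is gone; the size of the error scales the response.
        error = perceived_danger - 0.0
        if error <= 0:
            return "relax"
        elif error < 0.5:                       # invented threshold
            return "freeze, keep watching, reassess"
        else:
            return "flee until the danger reading drops"

    print(stimulus_response_arthropod(True))    # always the same canned freeze
    print(cybernetic_animal(0.8))               # behavior scales with the error signal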

Here’s a similar example from Russell & Norvig’s Artificial Intelligence: A Modern Approach. We can’t confirm that what they describe is actually true of dung beetles — it may be apocryphal — but it’s a good illustration of the idea: 

Consider the lowly dung beetle. After digging its nest and laying its eggs, it fetches a ball of dung from a nearby heap to plug the entrance. If the ball of dung is removed from its grasp en route, the beetle continues its task and pantomimes plugging the nest with the nonexistent dung ball, never noticing that it is missing. Evolution has built an assumption into the beetle’s behavior, and when it is violated, unsuccessful behavior results. 

It’s hard to figure out whether an organism is controlling some variable, or whether it is running some kind of brute stimulus-response, especially if the stimulus-response routine is at all complicated. We may need to develop new experimental techniques to do this.

But every organism has to maintain homeostasis of some kind, and almost all multicellular animals have a nervous system, which suggests they’re running some kind of feedback loop, which means some kind of error signal, which means some kind of emotion. 

For now, we think this is a relatively strong argument that most other mammals experience fear and pain the same way that we do — at least as strong as the argument that other humans experience fear and pain the same way that you do. 

Figuring out whether you are in danger requires much more of a brain than figuring out whether you have been cut or injured. So while most animals probably feel pain, some animals may not feel fear, especially those with simple nervous systems, those with very little ability to perceive their environment, and those who are immobile. There’s no value in being able to perceive danger if you can’t do anything about it. 


[Next: DYNAMIC METHODS]


The Mind in the Wheel – Part VIII: Artificial Intelligence

2025-04-10 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]
[INTERLUDE – I LOVE YOU FOR PSYCHOLOGICAL REASONS]


Some “artificial intelligence” is designed like a tool. You put in some text, it spits out an image. You give it a prompt, it keeps predicting the following tokens. End of story. 

But other “artificial intelligence” is more like an organism. These agentic AIs are designed to have goals, and to meet them. 

Agentic AI is usually designed around a reward function, a description of things that are “rewarding” to the agent, in the somewhat circular sense that the agent is designed to maximize reward.

Reward-maximizing agents are inherently dangerous, for a few reasons that can be stated plainly. 

First, Goodhart’s law (“When a measure becomes a target, it ceases to be a good measure.”) means that the goal we intend the agent to have will almost never be the goal it ends up aiming for.

For example, you might want a system that designs creatures that can move very fast. So you give the agent a reward function that rewards the design of creatures with high velocities. Unfortunately the agent responds with the strategy, “creatures grow really tall and generate high velocities by falling over”. This matches the goal as stated, but does not really give you what you want.

Even with very simple agents, this happens all the time. The agent does not have to be very “intelligent” in the normal sense to make this happen. It’s just in the nature of reward functions.

This is also called goal mis-specification. Whatever goal you think you have specified, you almost always specify something else by mistake. When the agent pursues its real goal, that may cause problems. 
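Here’s a deliberately silly sketch of that failure mode, with made-up designs and numbers. The stated reward is peak velocity; what we actually wanted was locomotion, and the optimizer has no way of knowing that.

    # The goal as stated: maximize peak velocity. All entries are invented.
    designs = {
        "runner":          {"peak_velocity": 5.0,  "actually_locomotes": True},
        "hopper":          {"peak_velocity": 7.0,  "actually_locomotes": True},
        "very_tall_tower": {"peak_velocity": 40.0, "actually_locomotes": False},  # falls over once
    }

    def reward(design):
        return design["peak_velocity"]          # the goal we wrote down

    best = max(designs, key=lambda name: reward(designs[name]))
    print(best)   # "very_tall_tower": maximizes the stated reward, gives us nothing we wanted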

Second, complexity. Simple goals are hard enough. But anything with complex behavior will need to have a complex reward function. This makes it very difficult to know you’re pointing it in the right direction.

You might think you can train your agent to have complex goals. Let it try various things and say, “yes, more of that” and “no, less of that” until it has built up a reward function that tends to give the behavior you want. This might work in the training environment, but because the reward function has been inferred through training, you don’t know what that reward function really is. It might actually be maximizing something weird. And you might not learn what it’s really maximizing until it’s too late to stop it. 

The third and most serious reason is that anything insatiable is dangerous. Something that always wants more, and will stop at nothing to get it, is a problem. For a reward-maximizing agent, no amount of reward can ever be enough. It will always try to drive reward to infinity. 

This seems fine

This is part of why AI fears usually center around runaway maximizers. The silly but canonical example is an AI whose reward function has a soft spot for office supplies, so it converts all matter in the universe into paperclips.

The same basic idea applies to any reward maximizer. If the United States Postal Service made an AI to deliver packages, and designed it to get a reward every time a package was delivered, that AI would be incentivized to find a way to deliver as many packages as possible, under the loosest possible definitions of “deliver” and “packages”, by any means necessary. This would probably lead to the destruction of all humans and soon all life on earth.

But there are reasons to be optimistic. 

For starters, the main reason to expect that artificial intelligence is possible is the existence of natural intelligence. If you can build a human-level intelligence out of carbon, it seems reasonably likely that you could build something similar out of silicon. 

But humans and all other biological intelligences are cybernetic minimizers, not reward maximizers. We track multiple error signals and try to reduce them to zero. If all our errors are at zero — if you’re on the beach in Tahiti, a drink in your hand, air and water both the perfect temperature — we are mostly content to lounge around on our chaise.

Could an artificial intelligence do THIS?

As a result, it’s not actually clear if it’s possible to build a maximizing intelligence. The only intelligences that exist are minimizing. There has never been a truly intelligent reward maximizer (if there had, we would likely all be dead), so there is no proof of concept. Remember, the main reason to suspect AI is possible at all is that natural intelligence already exists, and every natural intelligence we know of is a minimizer.

That said, it may still be possible to build a maximizing agent. If we do, there’s reason to suspect it will be very different from us, since it will be built on different principles. And there’s reason to suspect it would be very dangerous.

A reward maximizer doesn’t need to be intelligent to be dangerous. Maximizing pseudointelligences could still be very dangerous. Viruses are not very smart, but they can still kill you and your whole family.

We should avoid building things with reward functions, since they’re inherently dangerous. Instead, if you must build artificial intelligences, make them cybernetic, like us.

This is preferable because cybernetic minimizers are relatively safe. Once they get to their equivalent of “lying on the beach in Tahiti with a piña colada in hand” they won’t take any actions. 

If the United States Postal Service designed an AI so that it minimizes the delivery time of packages instead of being rewarded for each successful delivery, it might still stage a coup to prevent any new packages from being sent. But once no more packages are being sent, it should be perfectly content to go to sleep. It will not try to consume the universe — it just wants to keep a number near zero. 
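Here is a minimal sketch of that package-minimizing design in Python. The queue is hypothetical; the structural point is that once the error reaches zero, the loop has nothing left to do.

    pending = ["parcel_1", "parcel_2"]          # hypothetical delivery queue

    def step():
        error = len(pending)                    # controlled quantity: undelivered packages
        if error == 0:
            return "idle"                       # error is zero, so take no action at all
        return f"delivered {pending.pop(0)}"    # one corrective action, then re-check

    for _ in range(4):
        print(step())   # delivered parcel_1, delivered parcel_2, idle, idle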

Reward maximizers are always unstable. Even very simple reinforcement learning agents show crazy specification-gaming behaviors. But control systems can be made very stable. They have their own problems, but we use them all the time, in thermostats, cruise control, satellites, and nuclear engineering. These systems work just fine. When control systems do fail, they usually fail by overreacting, underreacting, oscillating wildly, freaking out in an endless loop, giving up and doing nothing, and/or exploding. This is bad for the system, and bad when the system controls something important, like a nuclear power plant. But it doesn’t destroy the universe.

At the most basic level, these two approaches are the two kinds of feedback loops. Cybernetic agents run on negative feedback loops, which generally go towards zero and are relatively safe. Reward-maximizing agents are an example of positive feedback loops, which given enough resources will always go towards infinity, so they’re almost always dangerous. Remember, a nuclear explosion is a classic positive feedback loop. The only reason nuclear explosions stop is that they run out of fuel. 
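The difference is easy to see numerically. In this sketch (arbitrary starting values and gain), the negative feedback loop removes a fraction of its error each step and settles toward zero, while the positive feedback loop adds a fraction of itself each step and grows until something runs out.

    gain = 0.5
    negative, positive = 10.0, 10.0

    for step in range(6):
        negative -= gain * negative     # negative feedback: shrink the error
        positive += gain * positive     # positive feedback: amplify the quantity
        print(f"step {step}: negative -> {negative:.2f}, positive -> {positive:.2f}")
    # negative settles toward 0; positive heads toward infinity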

A possible rebuttal to this argument is that even if an agent is happy to move towards a resting state and then do nothing, it will still be interested in gaining as much power as possible so it can achieve its goals in the future. The technical term here is instrumental convergence.

Here we can appeal to observations of the cybernetic intelligences all around us. Humans, dogs, deer, mice, squid, etc. do not empirically seem to spend every second of their downtime maniacally working to gather more resources and power. Even with our unique human ability to plan far ahead, we often seem to use our free time to watch TV. 

This suggests that instrumental convergence is not a problem for cybernetic agents. When more power is needed to correct its error, maybe a governor will vote for actions that increase the agent’s power. But if it already has enough power to correct its error, the governor will prefer to correct its error straightaway. This suggests we pursue instrumental goals like “gather more power and resources” mainly when we don’t have the capabilities we need to effectively cover all our drives.
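One way to picture the claim, with invented numbers: score each option by how much error you expect to be left afterwards, plus a small delay cost for going the indirect route. “Gather more power” only wins when the agent can’t correct the error with the capability it already has.

    def pick_action(error, capability):
        act_now = max(0.0, error - capability)                   # error left if we act directly
        gather_first = max(0.0, error - 2 * capability) + 0.1    # more capability, but a delay cost
        return "correct the error directly" if act_now <= gather_first else "gather more power first"

    print(pick_action(error=0.5, capability=1.0))   # enough power already: act directly
    print(pick_action(error=2.0, capability=0.5))   # not enough: gathering power scores better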

Finally, a few things we should mention. 

Cybernetic intelligences can’t become paperclip maximizers, but they can still be dangerous for other reasons. Hitler was not a paperclip maximizer, but even as a mere cybernetic organism, he was still pretty dangerous. So be careful with AI nonetheless.

Cybernetically controlling one or more values is good, natural even. But controlling derivatives (the rate of change in some value) is bad! You will end up with runaway growth that looks almost the same as a reward maximizer. If you design your cybernetic greenhouse AI to control the rate of growth of plants in your greenhouse (twice as many plants every week!), very soon it will need to control the whole universe to give you the number of plants you implicitly requested.

Controlling second derivatives (rate of change of the rate of change) is VERY BAD. Controlling third and further derivatives is right out.
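A toy greenhouse example, with invented numbers, shows why. A controller whose target is a level settles once it reaches the level; a controller whose target is a growth rate can only stay on target by growing forever.

    def control_level(plants, target=100, weeks=8):
        for _ in range(weeks):
            error = target - plants
            plants += max(0, error)          # plant just enough to close the gap
        return plants                        # settles at the target and stays there

    def control_growth_rate(plants, weekly_factor=2, weeks=8):
        for _ in range(weeks):
            plants *= weekly_factor          # the only way to keep the rate on target
        return plants

    print(control_level(10))             # 100
    print(control_growth_rate(100))      # 25600 plants after two months, and climbing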


[Next: ANIMAL WELFARE]


The Mind in the Wheel – Interlude: I Love You for Psychological Reasons

2025-04-03 23:11:00

[PROLOGUE – EVERYBODY WANTS A ROCK]
[PART I – THERMOSTAT]
[PART II – MOTIVATION]
[PART III – PERSONALITY AND INDIVIDUAL DIFFERENCES]
[PART IV – LEARNING]
[PART V – DEPRESSION AND OTHER DIAGNOSES]
[PART VI – CONFLICT AND OSCILLATION]
[PART VII – NO REALLY, SERIOUSLY, WHAT IS GOING ON?]


In 1796, Astronomer Royal Nevil Maskelyne noticed that his layabout assistant, David Kinnebrook, was getting measurements of celestial events that were a whole half-second different from his own. Maskelyne told Kinnebrook he had better shape up, but this didn’t help — Kinnebrook’s errors increased to around 8/10 of a second, so Maskelyne fired him. 

Later astronomers looked into this more closely and discovered that there was actually nothing wrong with poor Kinnebrook. The issue is that people all have slightly different reaction times. When a star passes in front of a wire, it takes you some very small amount of time to react and record your observation. So when different people look at the same celestial event, they get slightly different results. You might even say that the fault is not in our stars, but in ourselves.

More importantly, these differences aren’t random. Kinnebrook’s measurements were always slightly later than Maskelyne’s, and always later by about the same amount. This is a consistent and personal bias, so they came up with the term “personal equation” to describe these differences. 

As astronomers learned to measure these personal equations with more and more accuracy, they found that people can’t distinguish anything less than 0.10 seconds, which eventually spiraled into what has been called the tenth-of-a-second crisis. Further investigation of this effect, combined with similar research in physiology and statistics, eventually led to the invention of a new field: psychology. 

The personal equation is frequently mentioned in psychology from the 19th and early 20th century. Edwin G. Boring devoted an entire chapter of his 1929 book to the personal equation, the story of which, he said, “every psychologist knows”. Even as late as 1961, he was writing about “the sacred 0.10 sec.”

Or take a look at this passage from the introduction of Hugo Münsterberg’s 1908 book, Essays on Psychology and Crime. He says: 

Experimental psychology did not even start with experiments of its own; it rather took its problems at first from the neighbouring sciences. There was the physiologist or the physician who made careful experiments on the functions of the eye and the ear and the skin and the muscles, and who got in this way somewhat as by-products interesting experimental results on seeing and hearing and touching and acting; and yet all these by-products evidently had psychological importance. Or there was the physicist who had to make experiments to find out how far our human senses can furnish us an exact knowledge of the outer world; and again his results could not but be of importance for the psychology of perception. Or there was perhaps the astronomer who was bothered with his “personal equation,” as he was alarmed to find that it took different astronomers different times to register the passing of a star. The astronomers had, therefore, in the interest of their calculations, to make experiments to find out with what rapidity an impression is noticed and reacted upon. But this again was an experimental result which evidently concerned, first of all, the student of mental life.

All three of these examples, including the personal equation, are about perception — physiologists studying the sense organs, and physicists studying the limits of those senses. Given this foundation, it will come as no surprise to hear that for most of its history, psychology’s main focus has been perception. Even in the early days of psychology, perception was baked in.

This was most obvious in the earliest forms of psychology. In 1898, E. Bradford Titchener wrote a paper describing the layout of his psychology lab at Cornell. This lab not only had a room for optics, but also separate rooms for acoustics, haptics, and one “taste and smell room”. Olfactometry does not come up much in modern psychology, but the Cornell psychologists of the 1890s had an entire room dedicated to it:

Room 1, the ‘optics room,’ is a large room, lighted from three sides, with walls and ceiling painted a cream. Room 2, intended for the private room of the laboratory assistants, now serves the purposes to which room 12 will ultimately be put. Room 3 is the ‘acoustics,’ room 4 the ‘haptics room.’ Room 5 is a dark room employed for drill-work, demonstration and photography. Room 6 is the ‘work,’ and room 7 the ‘lecture-room’. Room 8 is the director’s private room ; room 9 the ‘reaction,’ and room 10 the ‘taste and smell room’. Room 11, which faces north, will be fitted up as a research dark room; room 12 will be furnished with the instruments used in the investigation of the physiological processes underlying affective consciousness, —pulse, respiration, volume and muscular tone.

Even today, the closest thing to a true law of psychology is the Weber-Fechner law, about the minimum possible change needed to be able to distinguish between two similar stimuli; in other words, about perception. And the most impressive artifacts of psychology are still visual illusions like this one: 

The two orange circles are exactly the same size; however, the one on the right appears larger.

During the cognitive revolution, a lot of sacred cows were tipped, but not perception. Instead, perception was reaffirmed as the absolute main topic of psychological study. Ulric Neisser’s 1967 book Cognitive Psychology consists of: 

  • One introductory chapter, titled “The Cognitive Approach”
  • Five chapters on visual processes
  • Four chapters on hearing
  • And one last chapter, about which he says: “The final chapter on memory and thought is essentially an epilogue, different in structure from the rest of the book.”

That’s it! 

In a footnote, Neisser apologizes… for not covering the other senses. “Sense modalities other than vision and hearing are largely ignored in this book,” he says, “because so little is known about the cognitive processing involved.” But he doesn’t apologize for skipping over nearly every other aspect of psychology, which seems like a stunning omission. 

At least Neisser is self-aware about this. He makes it very clear that he knows many different directions psychology could take, and that he is picking this one, cognition, over all the others. It’s just that he is fully committed to the promise of the cognitive approach, and that means he’s fully committed to the idea that perception should hold center stage — not just top billing, but to the point of excluding other parts of psychology. 

Even given psychology’s previous hundred years of focus on perception, this was a pretty radical position. Titchener would probably be scandalized that Neisser didn’t include a chapter on taste and smell. 

But the most surprising omission of all might be “individual differences”, the psychologist’s fancy term for personality. Because once upon a time, personality was almost as central to psychology as perception was. 

Recall that the personal equation, one of the problems that kicked off psychology in the first place, was itself an idea about individual differences — every individual had a personal difference in their reaction times when looking at the celestial spheres. You can’t have a personal equation without individual differences, so as much as the personal equation came with an interest in the laws of perception, it also came with a committed interest in personality. 

Almost as old as the personal equation is the idea of mental tests. Most of the credit and the blame for these goes to Sir Francis Galton. After hearing about the theory of evolution from his cousin, Charles Darwin, Galton started wondering if mental traits ran in families. He became obsessed with measuring differences in people’s minds and bodies, and these ideas directly led to the invention of IQ tests (and also eugenics). These unpleasant grandchildren aside, for a long time mental tests were a really central part of psychology. Until one day they weren’t. 

Neisser does offer a defense of his position in the last chapter of his book. We think the final paragraph is especially interesting, where he says: 

It is no accident that the cognitive approach gives us no way to know what the subject will think of next. We cannot possibly know this, unless we have a detailed understanding of what he is trying to do, and why. For this reason, a really satisfactory theory of the higher mental processes can only come into being when we also have theories of motivation, personality, and social interaction. The study of cognition is only one fraction of psychology, and it cannot stand alone.

Cybernetics

Norbert Wiener coined the term “cybernetics” in the summer of 1947, but for the full story, we have to go much further back. 

Wiener places the earliest origins of these ideas with the 17th century German polymath Gottfried Wilhelm Leibniz. “If I were to choose a patron saint for cybernetics out of the history of science,” Wiener wrote in the introduction to his book, “I should have to choose Leibniz. The philosophy of Leibniz centers about two closely related concepts—that of a universal symbolism and that of a calculus of reasoning. From these are descended the mathematical notation and the symbolic logic of the present day.”

Simple control systems have been in use for more than two thousand years, but things really picked up when Leibniz’ early math tutor, Christiaan Huygens, derived the laws for centrifugal force and invented an early centrifugal governor.

Over the centuries people slowly made improvements to Huygens’ design, most notably James Watt, who added one to his steam engine. These systems caught the attention of James Clerk Maxwell, who in 1868 wrote a paper titled “On Governors”, where he explained instabilities exhibited by the flyball governor by modeling it as a control system.

When explaining why he chose to call his new field “cybernetics”, Wiener wrote, “in choosing this term, we wish to recognize that the first significant paper on feedback mechanisms is an article on governors, which was published by Clerk Maxwell in 1868, and that governor is derived from a Latin corruption of κυβερνήτης.”

Using this background, Norbert Wiener and Arturo Rosenblueth sat down and made the field explicit in the 1940s, and gave it a name. Then in 1948 Wiener published his book Cybernetics: Or Control and Communication in the Animal and the Machine, and the field went public. 

The new field went in a number of directions, many of them unproductive, but the one most important to us today is the direction taken up by a certain William T. Powers.

Loop Me In

Psychology and cybernetics were making eyes at each other across the room from the very start. “The need of including psychologists had indeed been obvious from the beginning,” wrote Wiener. “He who studies the nervous system cannot forget the mind, and he who studies the mind cannot forget the nervous system.” And the psychologists returned Wiener’s affections: Kurt Lewin, one of the founders of modern social psychology, attended the first “Macy Conference” on cybernetics, all the way back in 1946, before it was even called cybernetics, and Wiener mentions Lewin (and some other psychologists) by name in his book. 

But in the 1940s and 1950s, psychologists felt they were doing pretty all right. Lewin and the social psychologists were a relatively small slice of psychology, the minority faction by far, and their interest didn’t carry much weight. Cybernetics might be nice to flirt with at the party, but there was no real chance of inviting it home. 

But fast forward to the 1970s, and psychology was in crisis. For a long time psychology had been ruled by behaviorism, a paradigm which took the stance that while behavior could be studied scientifically, the idea of studying thoughts or mental states was wooly nonsense. Mental states like thoughts and feelings were certainly unworthy of study, and possibly didn’t exist. 

Behaviorists also thought that animals are born without anything at all in their brains — that the mind at birth is a blank slate, and that everything an animal learns to do comes from pure stimulus-response learning built up over time.

Behaviorism seemed like a sure bet in the 1920s, but those assumptions were looking more and more shaky every day. People had discovered that animals did seem to have inborn tendencies to associate some things with other things. They learned that you could make reasonable inferences about mental states. And the invention of the digital computer made the study of mental states seem much more scientific. The old king was dying, and no one could agree who was the rightful heir to the throne. 

it looked exactly like this

The son of a “well-known cement scientist”, William T. Powers wasn’t even a psychologist. His training was in physics and astronomy. But while working at a cancer research hospital, and later while designing astronomy equipment, Powers started pulling different threads together and eventually came up with his own very electrical-engineering-inspired paradigm for psychology, which he called Perceptual Control Theory.

In 1973 Powers published both a book and an article in Science about his ideas. While Powers was obviously an outsider, psychologists took this work seriously. Even in the 1970s, fringe ideas didn’t get published in a journal as big as Science — Powers and his arguments were mainstream, at least for a little while. 

Psychologists really thought that cybernetics might be one of the ways forward. Stanley Milgram, who did the famous experiments on obedience to authority — the ones where participants thought they might be delivering lethal electric shocks to a man with a heart condition, but mostly kept increasing the voltage when politely asked to continue — even includes a brief section on cybernetics in his 1974 book about those studies. “While these somewhat general [cybernetic] principles may seem far removed from the behavior of participants in the experiment,” he says, “I am convinced that they are very much at the root of the behavior in question.”

And Thomas Kuhn himself, the greatest authority on crisis and revolution in science (he did write the book on it), wrote a glowing review of Powers’ book, saying: 

Powers’ manuscript, “Behavior: The Control of Perception”, is among the most exciting I have read in some time. The problems are of vast importance, and not only to psychologists; the achieved synthesis is thoroughly original and the presentation is often convincing and almost invariably suggestive. I shall be watching with interest what happens in the directions in which Powers points.

But there were a few problems.

The first is that Powers’ work, especially his 1973 Science article, doesn’t exactly make the case that cybernetics is a good way of thinking about psychology. It’s more of an argument that cybernetics is better than behaviorism. The paper is filled with beautiful and specific arguments, but they’re arguments against the behaviorist paradigm. The article is even titled Feedback: Beyond Behaviorism.

You can see why Powers would frame things this way. As far as he could tell, behaviorism was the system to beat, and his arguments against behaviorism really are compelling.

Unfortunately, by 1973 behaviorism was already on its way out. Six years before, in 1967, Ulric Neisser wrote:

A generation ago, a book like this one would have needed at least a chapter of self-defense against the behaviorist position. Today, happily, the climate of opinion has changed, and little or no defense is necessary. Indeed, stimulus-response theorists themselves are inventing hypothetical mechanisms with vigor and enthusiasm and only faint twinges of conscience.

Powers’ work arrived early enough that psychologists were still interested in what he had to say. They still felt that their field was in crisis, they were still looking around for new tools and new perspectives. They were still willing to publish his paper, and everybody read his book. 

But it came late enough in the crisis that there was strong competition. New schools of thought were already mustering their forces, already had serious claims to the throne. People were already picking sides. And most people were already picking cognitive psychology.

It’s not entirely clear exactly why cognitive psychology won, but there are a few things that made its claim especially strong. For one, some of the strongest evidence against behaviorism came from an information theory angle, and this looked really good for cognitive psychology, which proposed that we think of the mind in terms of how it handles and transforms information. 

Maybe most importantly, the metaphor of the digital computer promised to provide the objectivity that behaviorism was never able to deliver. Whatever else might be going on in human minds, computers definitely exist, they can add and subtract, and that looks a lot like thinking! Cognitive psychology eventually won out.

which is why all psychologists now look like this

Another problem is that cybernetics is what they call “dynamic”. This is a distinction people don’t usually make any more, but Ulric Neisser gives this definition:  

Dynamic psychology, which begins with motives rather than with sensory input, is a case in point. Instead of asking how a man’s actions and experiences result from what he saw, remembered, or believed, the dynamic psychologist asks how they follow from the subject’s goals, needs, or instincts.

Cybernetics makes for a dynamic school of psychology because, however you slice it, control systems are always about getting signals back in alignment, so they’re always about goals (what’s the target value) and needs (which signals are controlled). If you think about psychology in terms of control systems, whatever you come up with is going to be dynamic. 

Dynamic theories were very popular in the first half of the 20th century, but they ended up falling out of favor in the back half. Again, we’re not entirely sure why this happened the way it did, but we can provide some reasonable speculation.

The most famous dynamic school of psychology is Freudian psychodynamic therapy. If you’ve ever wondered why it has “dynamic” in the name, this is it: it’s a paradigm that focuses on how people are motivated by drives and/or needs. Freudians originally saw all behavior as motivated by libido, the sex or pleasure principle. But later on they added a second drive or set of drives called mortido, the drive for death.

Most schools of psychology are more dynamic than they would like to admit — even behaviorism. Sure, behaviorists had an extremely reductive understanding of drives (mostly “reward” and “punishment”), but at their heart they were a dynamic school too. Reward and punishment are a theory of motivation; it’s only one drive, but it’s right there, and central to the paradigm. And behaviorists did sometimes admit other drives, most blatantly in Clark Hull’s drive reduction theory, which allowed for drives like thirst and hunger.

Behaviorists have to accept some kind of dynamics because they assume that reward and punishment are behind all behavior, except perhaps the most instinctual. Even if they didn’t tend to think of this as a drive, it’s clearly the motive force that behaviorists used to explain all behavior — organisms are maximizing reward and minimizing punishment. 

(In a totally different kind of problem, dynamic psychology is always a bit risky because it’s inherently annoying to speculate about someone’s motives. The Freudians really ran afoul of this one.)

The point is, by the time William Powers was arguing for cybernetics, dynamic psychology was on the downswing. Its reputation was tainted by the Freudians, and maybe it also smelled a bit too much like the behaviorists, with their focus on reward and punishment. This might be another reason why cybernetics was passed over in favor of something that seemed a bit more fresh and promising. 

It’s not like people hated cybernetics, but it’s interesting to see how conscious the decision was. Near the end of his book, Neisser says:

An analogy to the “executive routines” of computer programs shows that an agent need not be a homunculus. However, it is clear that motivation enters at several points in these processes to determine their outcome. Thus, an integration of cognitive and dynamic psychology is necessary to the understanding of the higher mental processes.

But the rest of cognitive psychology did not inherit this understanding, and this integration was never carried out; as far as we know it was never even attempted. In any case, dynamic paradigms were out. 

Other schools like social psychology, neuroscience, and psychiatry kept going with what they were doing, since they were not seen to be in crisis, and they gained more sway as behaviorism fell apart. Or perhaps a better read on things is that the ground previously held by behaviorism was partitioned, with cognitive psychology gaining the most, social psychologists also receiving a large chunk, clinical psychologists gaining some, etc. Cybernetics received none, and fell into obscurity.

Or possibly it was diluted into a vague branch of the humanities. The full title of Wiener’s book was Cybernetics: Or Control and Communication in the Animal and the Machine. Some anthropologists may have taken the “communication” part too seriously — they started using the term more and more vaguely, until eventually they used it to refer to anything at all involving communication, which is probably where the internet got the vague epithet of “cyberspace”. 

Today, psychology generally acts as though drives do not exist. If you look in your textbook you will usually see a brief mention of drives, but they’re not a priority. For example, one psychology textbook says,

All organisms are born with some motivations and acquire others through experience, but calling these motivations “instincts” describes them without explaining how they operate. Drive-reduction theory explains how they operate by suggesting that disequilibrium of the body produces drives that organisms are motivated to reduce. 

But the very next sentence concludes: 

Neither instinct nor drive are widely used concepts in modern psychology, but both have something useful to teach us.

The subtext in today’s psychology is that there is only one real drive, with two poles, reward and punishment. When psychologists explicitly name this assumption, they call it “the hedonic principle”. Despite any lip service paid to other drives, simple hedonism is the theory of motivation that psychologists actually use.

A cruel irony is that modern cognitive psychology, as far as we can tell, inherited this theory of motivation directly from behaviorism. This is just good ol’ reward and punishment. Even though they held the cognitive revolution to throw out behaviorism and replace it with something new, they weren’t able to disinherit themselves of some of the sneakier assumptions. 

The other funny thing is that when outsiders come up with their own version of psychology, they usually end up including drives. Our favorite example continues to be The Sims. To get somewhat realistic behavior out of their Sims, Maxis had to give them several different drives, so they did. Even psychologists can’t help inventing new drives by accident. If you hang around psychology long enough, you’ll run into various “need for whatever” scales, like the famous need for cognition.

This reminds us a lot of what happened in alchemy. Alchemists were supposed to believe in air, fire, water, and earth, and explain the world in terms of those four elements. But belief in four elements was impossible, and the alchemists told on themselves, because they couldn’t stop inventing new ones. In the preface to his book on chemistry, Lavoisier says (emphasis added): 

The notion of four elements, which, by the variety of their proportions, compose all the known substances in nature, is a mere hypothesis, assumed long before the first principles of experimental philosophy or of chemistry had any existence. In those days, without possessing facts, they framed systems; while we, who have collected facts, seem determined to reject them, when they do not agree with our prejudices. The authority of these fathers of human philosophy still carry great weight, and there is reason to fear that it will even bear hard upon generations yet to come.

It is very remarkable, that, notwithstanding of the number of philosophical chemists who have supported the doctrine of the four elements, there is not one who has not been led by the evidence of facts to admit a greater number of elements into their theory. The first chemists that wrote after the revival of letters, considered sulphur and salt as elementary substances entering into the composition of a great number of substances; hence, instead of four, they admitted the existence of six elements. Beccher assumes the existence of three kinds of earth, from the combination of which, in different proportions, he supposed all the varieties of metallic substances to be produced.

So likewise, notwithstanding the number of psychologists who have supported the doctrine of reward and punishment, there is not one who has not been led by the evidence of facts to admit a greater number of drives into their theory.

Let’s not beat around the bush. This series is an attempt to introduce a new cybernetic paradigm for psychology, and cause a scientific revolution, just like the ones they had in chemistry and astronomy and physics, just like Thomas Kuhn talked about.

We think that cybernetics will allow an angle of attack on many problems in psychology, and we’re going to do our best to make that case. For example, one of psychology’s biggest hidden commitments is that for most of its history, it has focused on perception, sometimes to the exclusion of everything else. But perception may not be the right way to approach the study of the mind. Problems that remain unsolved for a long time should always be suspected of being questions asked in the wrong way.

Cybernetics benefits because it doesn’t have such a strong commitment to perception — instead, it’s dynamic. The fact that dynamics is so different from the perception-based approach that has dominated psychology for most of the 200 years it’s been around seems like reason for optimism.

A lot of what we have to say about cybernetics comes from cyberneticists, especially Wiener and Powers. Some of what we say about psychological drives comes from earlier drive theorists. And some of what we think of as original will probably in fact turn out to be reinventing the wheel. Finally, everything we say comes from treating previous psychology as some mix of thesis, antithesis, and synthesis.

To most psychologists, asking “what emotions do rats have?” would be rather vague. But to a cybernetic psychologist, it makes perfect sense. It also makes sense to ask, “what emotions do rats and humans have in common?” From a cybernetic standpoint, there’s probably a precise answer to such questions.

Some of these questions may be disturbing in new and exciting ways. Are fish thirsty? Again, there may be a precise answer to this question.

There is something new in this work, but it’s also contiguous. We don’t want this to come across as though we’re saying this is unprecedented; this is all firmly grounded in historical traditions, it’s all inspired by things that have come before. 


[Next: ARTIFICIAL INTELLIGENCE]