Blog of Spencer Greenberg

Founder and CEO of Spark Wave, a psychological research organization and startup foundry

Does The Music You Listen To Predict Your Personality?

2025-05-24 07:44:14

Does whether you like rock music rather than pop or country say something about your personality? I would have thought not, but we ran a study, and it turns out yes – in the U.S., your music tastes predict aspects of your personality!

Much to my surprise, liking rock and classical music predicts the same things about your personality: having greater “openness to experience” (a personality trait from the Big Five framework) and being more intellectual.

Makes sense for classical, but who would have guessed that’s true of rock?

Another surprise to me was that enjoying dance/electronic music, country music, and jazz music predicted similar traits: being more group-oriented (e.g., gravitating toward group rather than one-on-one interactions), being more extroverted, and being more spontaneous.

But each of these three genres also stood out in its own way. Enjoying country was associated with being more emotional, enjoying dance/electronic was associated with higher openness to experience, and enjoying jazz was associated with being less attention-seeking than the other two groups.

Enjoyment of both pop music and hip-hop was associated with being more emotional, but pop music enjoyers were more group-oriented, whereas hip-hop music enjoyers were more spontaneous.

All the correlations discussed here are between r=0.3 and r=0.45 in size, so they are moderately large. It would be neat to see whether this generalizes to non-U.S. samples.
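(For readers who want to see what computing one of these correlations looks like in practice, here is a minimal sketch in Python. The data and column names are made up purely for illustration; this is not the actual study code.)

```python
# Minimal sketch (illustrative only, not the actual study code):
# computing a genre-personality correlation from survey responses.
import pandas as pd

# Each row is one (made-up) respondent: a 1-7 rating of how much they
# like rock, and a Big Five openness score (e.g., a mean of scale items).
df = pd.DataFrame({
    "likes_rock": [6, 2, 7, 4, 5, 1, 6, 3],
    "openness":   [5.2, 3.1, 6.0, 4.0, 4.8, 2.9, 5.5, 3.6],
})

# Pearson correlation between the two columns (pandas' default method).
r = df["likes_rock"].corr(df["openness"])
print(f"r = {r:.2f}")

# Rough convention: r around 0.1 is small, 0.3 moderate, and 0.5 large,
# so correlations between 0.3 and 0.45 are moderately large.
```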

You can explore all of these music genre correlations, plus over a million more correlations about humans, for free using PersonalityMap: https://personalitymap.io


This piece was first written on May 23, 2025, and first appeared on my website on May 29, 2025.

The Oddly Absent “Wesearch”

2025-05-24 02:18:46

You might think that fields would very often apply their own methods to themselves.

For instance, economists could conduct a supply/demand or incentives-based analysis of the field of economics itself to understand why it focuses on some areas and not others, or why it has become more math-heavy over time.

Psychologists can also study the psychology of academic psychologists to understand the underlying psychological drivers that determine which areas of study are popular or why the replication crisis occurred (from the perspective of the psychology of those who precipitated and enabled it).

Sociologists may also apply ethnographic methods to examine the institutions, practices, and self-concepts of sociologists.

After all, what could be more available to study and more at the top of your mind than your own group? And why not apply your field’s methods to your group since you are already applying them to everything else?

But in my experience, this kind of “Wesearch” – if you’ll allow me to coin a term – is quite rare and niche.

If I’m right about this, why would that be? I suspect part of the reason is that people want to see themselves as not being merely governed by simple forces.

It’s all well and good to model other anonymous people as merely responding to incentives, being impacted by severe confirmation bias, mimicking each other’s behavior for social status, etc. But we don’t want to think of ourselves, and the colleagues we respect, in that way. It feels reductionist (and inaccurate) to do so. Our colleagues might even feel insulted to be modeled in such a way. These are models only for everyone else.

An interesting exception, pointed out by a reader, is that academic psychologists have often run studies on graduate students (e.g., on their mental health and the other psychological challenges they face). But even then, that’s only an example of the field studying one aspect of itself.


This piece was first written on May 23, 2025, and first appeared on my website on May 26, 2025.

For Health And Longevity, Be Wary Of Mechanisms

2025-05-09 08:18:40

Often in health and longevity discussions, you’ll hear arguments about mechanisms. For instance:

Antioxidants -> reduced free radicals -> less DNA damage -> less cancer

Unfortunately, these biologically plausible-sounding claims usually don’t work when rigorously tested.

Are mechanistic arguments useless?

No. They are a great source of *hypotheses*. While most of these hypotheses fail, some eventually lead to important new treatments.

Unfortunately, health gurus, podcasters, and sometimes even (though they should know better) doctors and scientists use mechanistic arguments to convince the public about treatments for which we have little evidence.

Mechanistic arguments in health sound scientific and impressive. They make the speaker seem authoritative and knowledgeable. And they *seem* very hard to argue with. However, there is one general argument that works for most of them: “That sounds nice, but let’s look at randomized experiments in humans to check if it works.”

Why is it so common that health-related mechanistic claims don’t work when rigorously tested in randomized trials?

What goes wrong with X->Y biological thinking?

1) Causality: The first issue is that an X->Y claim may be true in terms of associations, without the links being *causal*. It’s typically a lot easier to establish that people with higher X also tend to have higher Y than to show that increasing X causes Y to increase (the toy simulation after this list illustrates the difference).

Alzheimer’s research seems to be experiencing this problem in a major way. The hypothesis:

Amyloid plaques -> Alzheimer’s

seems to be oversimplified or perhaps mostly associational (rather than causal), as drugs that reduce brain plaques have had disappointing results.

2) Multiple mechanisms: even if X really is in the causal chain for Y, Y may also be heavily influenced by other mechanisms, so changing X may not change Y that much, even if you control X completely.

3) Other effects: even if the mechanism is completely correct, there may be alternative effects of the treatment. These could undermine the original benefit through other pathways, or cause other forms of harm that mean the benefit is not worth it.

4) Equilibrium: even if mechanistically X->Y, the body may work hard to maintain a balance of Y (much as it does to keep your core body temperature roughly constant regardless of whether you’re drinking a hot beverage or standing outside in the cold). Hence, the effects of intervening on X may not create lasting impacts on Y because your body works to restore homeostasis.

5) Evaluability: unlike arguments based on empirical evidence (we gave patients this treatment in a study, and here’s how their outcomes differed from the control group) and logical arguments, which a reasonably knowledgeable non-expert can understand and assess to at least some degree, arguments based on biological mechanisms can’t really be evaluated by non-experts at all. Take this claim, for example. Is it sound? See if you can tell:

“Subcutaneous WPP9 injections activate orexinergic neurons via Gq-coupled receptor agonism in the lateral hypothalamus, which increases daytime cortisol rhythm, leading to increased alertness.”

So, is this a valid mechanistic argument about human biology? Well, no, but I only know that because I prompted an LLM to generate a biologically plausible-sounding but entirely made-up argument. An expert on the topic might immediately identify it as implausible, but anyone else is going to have no realistic way of evaluating its soundness without help.
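To make point (1) above concrete, here is a toy simulation in Python (the variables and numbers are invented purely for illustration, not drawn from any real study). X and Y end up strongly correlated only because a hidden factor drives both, so intervening on X does nothing to Y:

```python
# Toy simulation (illustrative only): association without causation.
# A hidden confounder Z drives both a biomarker X and an outcome Y, so
# X and Y correlate observationally, but changing X leaves Y untouched.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)       # hidden confounder
x = z + rng.normal(size=n)   # "biomarker": driven by Z
y = z + rng.normal(size=n)   # "outcome": also driven by Z, not by X

print("Observational correlation of X and Y:",
      round(float(np.corrcoef(x, y)[0, 1]), 2))  # ~0.5, so X looks like it matters

# Simulate a randomized experiment: half the sample gets a "drug" that
# raises X by 2 units. Since Y does not actually depend on X, the
# treated and control groups end up with the same average outcome.
treated = rng.random(n) < 0.5
x_after_drug = x + 2 * treated  # X really does rise in the treated arm
print("Mean X, treated vs control:",
      round(float(x_after_drug[treated].mean()), 2),
      round(float(x_after_drug[~treated].mean()), 2))
print("Mean Y, treated vs control:",
      round(float(y[treated].mean()), 3),
      round(float(y[~treated].mean()), 3))
```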

So, what’s the takeaway here? Well, when a podcaster or health guru tells you that we know a treatment works because [insert biological mechanistic argument here], remember that it isn’t strong evidence, no matter how impressive it sounds. We need careful randomized experiments (or other high-quality evidence) to be confident it’s true. Mechanistic arguments are for generating hypotheses; they give us a reason to collect more data and run studies to see if an idea pans out – they don’t themselves serve as strong evidence for what’s true.

Of course, we don’t always need strong evidence before trying a treatment. If a treatment is inexpensive and low risk, if we’d be able to tell whether it’s working, and if we don’t have more evidence-backed alternatives, then experimenting with it (even if it only has weak evidence) can still be a good idea. But we shouldn’t mistake “worth experimenting with” for “having strong evidence for”.


This piece was first written on May 8, 2025, and first appeared on my website on May 15, 2025.

When is it worth it to argue over definitions?

2025-04-11 05:42:24

It’s almost always a waste of time debating definitions with people (“semantic debates”).

Just stop for a moment to define terms or switch to using the other person’s definition so you don’t talk past each other. Definitions can be whatever we want them to be, and most of the time the important thing is just that our definitions match closely enough so that we can communicate effectively. Attempts to argue about definitions usually are a fool’s errand.

And yet… there are some situations where disagreeing about definitions or trying to convince the other person to adopt a different definition may be wise:

  1. Ambiguity. When someone attempts to use an ambiguous definition, that can cause reasoning about the topic to become sloppy. You can suggest an alternative, more precise definition.
  2. Nonstandardness. When someone uses a word in a way that is out of sync with how most people use that word, it can create a lot of confusion. You can suggest switching to a standard definition or using a different word/phrase for what they are referring to.
  3. Emotionality. Sometimes, people sneak judgments, offensiveness, or slants into arguments with an emotion-laden word. For instance, “slut” will sound negative to many, even if the speaker insists on giving it a neutral definition. You can suggest switching to a neutral word. Sometimes this can also go in the reverse direction, where someone tries to make something awful sound okay by giving it a very benign phrase.
  4. Centrality. Sometimes, a definition is too broad or does not capture the core of what’s under debate. For instance, defining “criminal” as anyone who’s broken ANY law may make it hard to discuss “criminal justice reform.” You can suggest a new definition that’s better focused.
  5. Circularity. Sometimes, people will try to win an argument by using an unusual definition that makes them right by definition. For instance, in a debate about the cost-effectiveness of medical care, if someone defines “routine medical care” in such a way as to exclude all non-cost-effective medical care, then, by definition, routine medical care will be cost-effective. In such cases, you can suggest using a widely accepted definition that doesn’t make the other person automatically right (by definition).
  6. Benefits. Sometimes, using one definition is more useful or more beneficial to the world than using another definition. In such cases, it might be valuable for you to argue that the other person should switch to using a different definition just for these pragmatic benefits.
  7. Shifting. Sometimes people make an unreasonable or false claim using one definition but then, when they’re challenged, they’ll switch to using another definition that makes their claim much more easily defensible (a “motte-and-bailey” fallacy). In such cases, you can argue against their usage of the fallback definition so as to pin down their claims.
  8. Objective. There are some special situations (though they’re rare) in which there really is only one good way to define something. For instance, this sometimes happens in physics and mathematics, where any other definition (that’s not equivalent) fails to have the properties we want. I would argue that “evidence” is like this too – I believe there is only one definition that has all the properties we’d want “evidence” to have.

So, most of the time, when disagreements over definitions come up, you shouldn’t debate definitions. It’s simply a waste of time. These conversations are usually unresolvable because there are no agreed-upon criteria for deciding which definition is better, and they amount to pointlessly trading intuitions. Fundamentally, definitions are things we make up, so it’s usually best just to agree on definitions upfront or to adopt the other person’s definition so effective communication can happen.

But, as we’ve seen here, there are a handful of interesting cases where it’s actually helpful to propose a potentially “better” definition and to try to get the other person to agree to it before proceeding with the discussion!


This piece was first written on March 16, 2025, and first appeared on my website on April 10, 2025.

When are tariffs beneficial?

2025-04-08 02:42:54

What is the point of tariffs, in general? Lots of countries have them, to at least a small degree. It’s rarer that countries use them to a large degree. Why?

My understanding is that there are four main reasons tariffs get put in place:

(1) Special interests that benefit from tariffs lobby for them at the expense of everyone else. This is obviously a bad reason to have tariffs.

(2) Sometimes countries have an interest in building out capabilities in very specific industries as part of a long-term wealth/self-preservation plan, which can be rational and wise when narrow and well thought out (e.g., “in 20 years, we want to have a globally competitive automotive industry” or “we want to have a strong local industry in steel in case of war”). This can be a good reason to have tariffs, but it also requires a carefully thought-out, specific, long-term plan that is well executed. And these kinds of plans often fail.

(3) Tariffs can be used as a punishment tool to retaliate against or threaten other countries (though they impose costs on the country levying them at the same time, so they’re a very costly form of punishment).

(4) Irrationality from leaders about the purposes and real effects of tariffs. In such cases, the tariffs mainly cause harm to all involved parties.

There are also some other special cases where tariffs get used. For instance, when one country subsidizes an industry and exports to another country, the importing country may put tariffs in place so that its own producers can compete on a level playing field. Or, when a country can’t raise taxes for some reason, it can try to use tariffs as a substitute. Tariffs can also sometimes be used as an attempt to increase local jobs in times of high unemployment, but that is a rare situation.

Trade is usually good: if A and B trade freely, it’s because they both believe they are better off making the trade than not. Hence, gains from trade. Tariffs distort this process and make the involved parties worse off. So unless a tariff is well justified, it’s very likely making things worse. That’s why I think tariffs should be viewed as a special measure: something that countries should avoid by default but use when they have special reasons to do so.
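To see the basic logic with concrete numbers, here is a tiny worked example in Python (the dollar figures are made up, and this is just the simplest single-trade case, ignoring many real-world complications):

```python
# Toy example (made-up numbers): gains from trade, and how a large
# enough tariff can wipe them out entirely in a single trade.
buyer_value = 10.0   # the most the buyer in country A will pay for one unit
seller_cost = 6.0    # what it costs the seller in country B to supply one unit

# If they trade freely at any price between 6 and 10, both sides gain;
# the total gains from trade are 10 - 6 = 4.
print("Gains from free trade:", buyer_value - seller_cost)

for tariff in (2.0, 5.0):
    # The trade can only happen if, after the tariff wedge, some price still
    # leaves the buyer paying <= 10 and the seller netting >= 6.
    trade_happens = buyer_value - seller_cost >= tariff
    private_surplus = (buyer_value - seller_cost - tariff) if trade_happens else 0.0
    tariff_revenue = tariff if trade_happens else 0.0
    print(f"Tariff {tariff}: trade happens={trade_happens}, "
          f"buyer+seller surplus={private_surplus}, government revenue={tariff_revenue}")
```

With the smaller tariff, the trade survives but part of the surplus shifts from the traders to the government; with a tariff larger than the gains from trade, the mutually beneficial trade never happens and everyone, including the government, gets nothing.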


This piece was first written on April 7, 2025, and first appeared on my website on April 22, 2025.

Can you trust survey responses?

2025-03-30 07:24:00

Self-reporting on surveys seems ridiculously unreliable. People can lie or may not pay attention. People misremember things. People often lack self-insight. And YET, self-reporting fairly often works remarkably well in measuring things. Here are some examples:

(1) In a large study we ran, IQ (measured by performance on intelligence tasks) had a strong correlation with self-reported (remembered) performance on the math portion of the SAT exam (r=0.61, n=714), which most participants would have taken MANY YEARS prior. This suggests that they were neither lying that often nor were their memories (of a long-ago event) that bad. Though I think there was likely some inflation in their self-reported scores, those scores still contained a lot of signal.

(2) When we asked people how much they agreed with the extremely subjective statement “I feel satisfied with my income” (on a 7-point scale), their responses correlated quite strongly with their self-reported household income (r=0.43, n=639). This suggests that self-reported subjective feelings can map pretty well onto actual realities.

(3) When we asked, “Have you ever been diagnosed with depression by a medical or mental health professional?”, responses correlated reasonably well with the level of agreement with the single statement “I often tell myself that ‘I am not good enough’” (r=0.37, n=509). This suggests that agreement with vague-seeming statements can say a surprisingly large amount about a person.

(4) You might think that even on an anonymous survey, people wouldn’t be willing to admit to socially undesirable behaviors or traits, socially stigmatized events, or highly personal things. But, in our experience, many people are willing to indicate that these apply to them. In one study, when asked if they had ever cheated on a romantic partner, 32% of respondents admitted to having done so. In another, when asked whether, in childhood, an adult in their home ever hit, beat, kicked, or physically hurt them, 26% agreed. When asked if they had been raped, more than 20% agreed. While it’s certain that some people lie, many people are willing to honestly talk about their experiences in anonymous surveys.

While it’s important to be cautious about self-reporting on surveys – people can lie, they may not be paying attention, they may not remember, and they may lack self-insight – in my experience, it often works (perhaps surprisingly) well at tapping important traits!


This piece was first written on March 29, 2025, and first appeared on my website on April 24, 2025.