Blog of Dynomight

Dating: A mysterious constellation of facts

2025-10-30 08:00:00

Here are a few things that seem to be true:

  1. Dating apps are very popular.
  2. Lots of people hate dating apps.
  3. They hate them so much that there’s supposedly a resurgence in alternatives like speed dating.

None of those are too controversial, I think. (Let’s stress supposedly in #3.) But if you stare at them for a while, it’s hard to see how they can all be true at the same time.

Because, why do people hate dating apps? People complain that they’re bad in various ways, such as being ineffective, dehumanizing, or expensive. (And such small portions!) But if they’re bad, then why? Technologically speaking, a dating app is not difficult to make. If dating apps are so bad, why don’t new non-bad ones emerge and outcompete them?

The typical answer is network effects. A dating app’s value depends on how many other people are on it. So everyone gravitates to the popular ones and eventually most of the market is captured by a few winners. To displace them, you’d have to spend a huge amount of money on advertising. So—the theory goes—the winners are an oligopoly that gleefully focus on extracting money from their clients instead of making those clients happy.

That isn’t obviously wrong. Match Group (which owns Tinder, Match, Plenty of Fish, OK Cupid, Hinge, and many others) has recently had an operating margin of ~25%. That’s more like a crazy-profitable entrenched tech company (Apple manages ~30%) than a nervous business in a crowded market.

But wait a second. How many people go to a speed dating event? Maybe 30? I don’t know if the speed dating “resurgence” is real, but it doesn’t matter. Some people definitely do find love at real-life events with small numbers of people. If that’s possible, then shouldn’t it also be possible to create a dating app that’s useful even with only a small number of users? Meaning good apps should have emerged long ago and displaced the crappy incumbents? And so the surviving dating apps should be non-hated?

We’ve got ourselves a contradiction. So something is wrong with that argument. But what?

Theory 1: Selection

Perhaps speed dating attendees are more likely to be good matches than people on dating apps. This might be true because they tend to be similar in terms of income, education, etc., and people tend to mate assortatively. People who go to such events might also have some similarities in terms of personality or what they’re looking for in a relationship.

You could also theorize that people at speed dating events are higher “quality”. For example, maybe it’s easier to conceal negative traits on dating apps than it is in person. If so, this might lead to some kind of adverse selection where people without secret negative traits get frustrated and stop using the apps.

I’m not sure either of those are true. But even if they are, consider the magnitudes. While a speed dating event might have 30 people, a dating app in a large city could easily have 30,000 users. While the fraction of good matches might be lower on a dating app, the absolute number is still surely far higher.
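
To put rough numbers on that comparison (all of them invented), consider: even if an app’s rate of good matches were a hundred times lower than a speed dating event’s, the much larger pool would still win on absolute counts. A quick sketch:

```python
# Back-of-envelope comparison with invented numbers: absolute count of plausible matches.
speed_dating_pool = 30      # people at one event
app_pool = 30_000           # active users in a large city

p_good_in_person = 0.10     # assumed: 1 in 10 attendees is a plausible match
p_good_on_app = 0.001       # assumed: a rate 100x lower on the app

print("Speed dating:", speed_dating_pool * p_good_in_person)  # 3.0 plausible matches
print("Dating app:  ", app_pool * p_good_on_app)              # 30.0 plausible matches
```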

Theory 2: Bandwidth

Perhaps even if you have fewer potential matches at a speed dating event, you have better odds of actually finding them, because in-person interactions reveal information that dating apps don’t.

People often complain that dating apps are superficial, that there’s too much focus on pictures. Personally, I don’t think pictures deserve so much criticism. Yes, they show how hot you are. But pictures also give lots of information about important non-superficial things, like your personality, values, social class, and lifestyle. I’m convinced people use pictures for all that stuff as much as hotness.

But you know what’s even better than pictures? Actually talking to someone!

Many people seem to think that a few minutes of small talk isn’t enough time to learn anything about someone. Personally, I think evolution spent millions of years training us to do exactly that. I’d even claim that this is why small talk exists.

(I have friends with varying levels of extroversion and agreeableness, but all of my friends seem to have high openness to experience. When I meet someone new, I’m convinced I can guess their openness to ±10% by the time they’ve completed five sentences.)

So maybe the information a dating app provides just isn’t all that useful compared to a few minutes of casual conversation. If so, then dating apps might be incredibly inefficient. You have to go through some silly texting courtship ritual, set up a time to meet, physically go there, and then pretend to smile for an hour even if you immediately hate them.

Under this theory, dating apps provide a tiny amount of information about a gigantic pool of people, while speed dating provides a ton of information about a small number of people. Maybe that’s a win, at least sometimes.

Theory 3: Behavior

Maybe the benefit of real-life events isn’t that they provide more information, but that they change how we behave.

For example, maybe people are nicer in person? Because only then can we sense that others are also sentient beings with internal lives and so on?

I’m pretty sure that’s true. But it’s not obvious it helps with our mystery, since people from dating apps eventually meet in person, too. If they’re still nice when they do, then this just resolves into “in-person interactions provide more information”, and is already covered by the previous theory. To help resolve our mystery, you’d need to claim that people at real-life events act differently than they do when meeting up as a result of a dating app.

That could happen as a result of a “behavioral equilibrium”. Some people take dating apps seriously and some take them casually. But it’s hard to tell what category someone else is in, so everyone proceeds with caution. But by showing up at an in-person event, everyone has demonstrated some level of seriousness. And maybe this makes everyone behave differently? Perhaps, but I don’t really see it.

Obscure theories

I can think of a few other possible explanations.

  1. Maybe speed dating serves a niche. Just like Fitafy / Bristlr / High There! serve people who love fitness / beards / marijuana, maybe speed dating just serves some small-ish fraction of the population but not others.

  2. Maybe the people who succeed at speed dating would also have succeeded no matter what. So they don’t offer any general lessons.

  3. Maybe creating a dating app is in fact very technologically difficult. So while the dating apps are profit-extracting oligopolies, that’s because of technological moat, not network effects.

I don’t really buy any of these.

Drumroll

So what’s really happening? I am not confident, but here’s my best guess:

  1. Selection is not a major factor.

  2. The high bandwidth of in-person interactions is a major factor.

  3. The fact that people are nicer or more open-minded in person is not a major factor, other than through making in-person interactions higher bandwidth.

  4. None of the obscure theories are major factors.

  5. Dating apps are an oligopoly, driven by network effects.

Basically, a key “filter” in finding love is finding someone where you both feel optimistic after talking for five minutes. Speed dating is (somewhat / sometimes) effective because it efficiently crams a lot of people into the top of that filter.

Meanwhile, because dating apps are low-bandwidth, they need a large pool to be viable. Thus, they’re subject to network effects, and the winners can turn the screws to extract maximum profits from their users.

Partly I’m not confident in that story just because it has so many moving parts. But something else worries me too. If it’s true, then why aren’t dating apps trying harder to provide the same information that in-person interactions do?

If anything, I understand they’re moving in the opposite direction. Maybe Match Group would have no interest in that, since they’re busy enjoying their precious network effects. But why not startups? Hell, why not philanthropies? (Think of all the utility you could create!) For the above story to hold together, you have to believe that it’s a very difficult problem.

Pointing machines, population pyramids, post office scandal, type species, and horse urine

2025-10-23 08:00:00

I recently wondered if explainer posts might go extinct. In response, you all assured me that I have nothing to worry about, because you already don’t care about my explanations—you just like it when I point at stuff.

Well OK then!

Pointing machines

How did Michelangelo make this?

[Image: Michelangelo’s David]

What I mean is—marble is unforgiving. If you accidentally remove some material, it’s gone. You can’t fix it by adding another layer of paint. Did Michelangelo somehow plan everything out in advance and then execute everything perfectly the first time, with no mistakes?

I learned a few years ago that sculptors have long used a simple but ingenious invention called a pointing machine. This allows you to create a sculpture in clay and, in effect, “copy” it into stone. That sounds magical, but it’s really just an articulated pointer that you move between anchor points attached to the (finished) clay and the (incomplete) stone sculpture. If you position the pointer based on the clay sculpture and then move it to the stone sculpture, anything the pointer hits should be removed. Repeat that thousands of times and the sculpture is copied.

[Image: pointing machines]

I was sad to learn that Michelangelo was a talentless hack, but I dutifully spent the last few years telling everyone that all sculptures were made this way and actually sculpture is extremely easy, etc.

Last week I noticed that Michelangelo died in 1564, which was over 200 years before the pointing machine was invented.

Except, apparently since ancient times sculptors have used a technique sometimes called the “compass method” which is sort of like a pointing machine except more complex and involving a variety of tools and measurements. This was used by the ancient Romans to make copies of older Greek sculptures. And most people seem to think that Michelangelo probably did use that.

Population pyramids

I think this is one of the greatest data visualizations ever invented.

[Image: population pyramids of India and China]

Sure, it’s basically just a histogram turned on the side. But compare India’s smooth and calm teardrop with China’s jagged chaos. There aren’t many charts that simultaneously tell you so much about the past and the future.
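
If it helps to see the “histogram turned on the side” directly, here’s a minimal matplotlib sketch of how such a pyramid is usually drawn. The age groups and counts are made up; one sex is plotted with negative bar widths so its bars extend to the left.

```python
# Minimal population pyramid: two horizontal bar charts sharing an age axis.
# All numbers are invented, for illustration only.
import matplotlib.pyplot as plt

age_groups = ["0-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70+"]
male = [95, 100, 110, 90, 80, 60, 40, 20]        # hypothetical counts (millions)
female = [90, 95, 105, 88, 82, 65, 45, 28]

fig, ax = plt.subplots()
ax.barh(age_groups, [-m for m in male], label="Male")   # negative widths: bars go left
ax.barh(age_groups, female, label="Female")             # positive widths: bars go right
ax.set_xlabel("Population (millions; left side shown as negative)")
ax.set_ylabel("Age group")
ax.legend()
plt.show()
```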

It turns out that this visualization was invented by Francis Amasa Walker. He was apparently such an impressive person that this invention doesn’t even merit a mention on his Wikipedia page, but he used it in creating these visualizations for the 1874 US atlas:

[Image: population pyramids from the 1874 US atlas]

I think those are the first population pyramids ever made. The atlas also contains many other beautiful visualizations, for example this one of church attendance:

[Image: church attendance visualization from the atlas]

Or this one on debt and public expenditures:

[Image: visualization of debt and public expenditures]

Post office scandal

If you haven’t heard about the British Post Office scandal, here’s what happened: In 1999, Fujitsu delivered buggy accounting software to the British Post Office that incorrectly determined that thousands of subpostmasters were stealing. Based on this faulty data, the post office prosecuted and convicted close to a thousand people, of whom 236 went to prison. Many others lost their jobs or were forced to “pay back” the “shortfalls” from their own pockets.

Of course, this is infuriating. But beyond that, I notice I am confused. It doesn’t seem like anyone wanted to hurt all those subpostmasters. The cause seems to be only arrogance, stupidity, and negligence.

I would have predicted that before you could punish thousands of people based on the same piece of fake evidence, something would happen that would stop you. Obviously, I was wrong. But I find it hard to think of good historical analogies. Maybe negligence in police crime labs or convictions of parents for “shaken baby syndrome”? Neither of these is a good analogy.

One theory is that the post office scandal happened because the post office—the “victim”—had the power to itself bring prosecutions. But in hundreds of cases things were done the normal way, with police “investigating” the alleged crimes and then sending the cases to be brought by normal prosecutors. Many cases were also pursued in Scotland and Northern Ireland, where the Post Office lacks this power.

Another theory would be:

  1. Prosecutors have incredible latitude in choosing who they want to prosecute.

  2. Like other humans, some prosecutors are arrogant/stupid/negligent.

  3. It’s actually pretty easy for prosecutors to convict an innocent person if they really want to, as long as they have some kind of vaguely-incriminating evidence.

Under this theory, similar miscarriages of justice happen frequently. But they only involve a single person, and so they don’t make the news.

Type species

Type species - Wikipedia

I link to this not because it’s interesting but because it’s so impressively incomprehensible. If there’s someone nearby, I challenge you to read this to them without losing composure.

In zoological nomenclature, a type species (species typica) is the species whose name is considered to be permanently taxonomically associated with the name of a genus or subgenus. In other words, it is the species that contains the biological type specimen or specimens of the genus or subgenus. A similar concept is used for groups ranked above the genus and called a type genus.

In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types.

In bacteriology, a type species is assigned for each genus. Whether or not currently recognized as valid, every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type.

Can such a thing be created unintentionally? I tried to parody this by creating an equally-useless description of an everyday object. But in the end, I don’t think it’s very funny, because it’s almost impossible to create something worse than the above passage.

A funnel is a tool first created in antiquity with rudimentary versions fabricated from organic substrates such as cucurbitaceae or broadleaf foliage by early hominid cultures. The etymology of fundibulum (Latin), provides limited insight into its functional parameters, despite its characteristic broad proximal aperture and a constricted distal orifice.

Compositionally, funnels may comprise organic polymers or inorganic compounds, including but not limited to, synthetic plastics or metallic alloys and may range in weight from several grams to multiple kilograms. Geometrically, the device exhibits a truncated conical or pyramidal morphology, featuring an internal declination angle generally between 30 and 60 degrees.

Within cultural semiotics, funnels frequently manifest in artistic representations, serving as an emblem of domestic ephemerality.

The good news is that the Sri Lankan elephant is the type species for the Asian elephant, whatever that is.

Hormones

I previously mentioned that some hormonal medications used to be made from the urine of pregnant mares. But only after reading The History of Estrogen Therapy (h/t SCPantera) did I realize that it’s right there in the name:

    Premarin = PREgnant MARe’s urINe

If you—like me—struggle to believe that a pharmaceutical company would actually do this, note that this was in 1941. Even earlier, the urine of pregnant humans was used. Tragically, this was marketed as “Emmenin” rather than “Prehumin”.

Will the explainer post go extinct?

2025-10-09 08:00:00

Will short-form non-fiction internet writing go extinct? This may seem like a strange question to ask. After all, short-form non-fiction internet writing is currently, if anything, on the ascent—at least for politics, money, and culture war—driven by the shocking discovery that many people will pay the cost equivalent of four hardback books each year to support their favorite internet writers.

But, particularly for “explainer” posts, the long-term prospects seem dim. I write about random stuff and then send it to you. If you just want to understand something, why would you read my rambling if AI could explain it equally well, in a style customized for your tastes, and then patiently answer your questions forever?

I mean, say you can explain some topic better than AI. That’s cool, but once you’ve published your explanation, AI companies will put it in their datasets, thankyouverymuch, after which AIs will start regurgitating your explanation. And then—wait a second—suddenly you can’t explain that topic better than AI anymore.

This is all perfectly legal, since you can’t copyright ideas, only presentations of ideas. It used to take work to create a new presentation of someone else’s ideas. And there used to be a social norm to give credit to whoever first came up with some idea. This created incentives to create ideas, even if they weren’t legally protected. But AI can instantly slap a new presentation on your ideas, and no one expects AI to give credit for its training data. Why spend time creating content just so it can be nostrified by the Borg? And why read other humans if the Borg will curate their best material for you?

So will the explainer post survive?

Let’s start with an easier question: Already today, AI will happily explain anything. Yet many people read human-written explanations anyway. Why do they do that? I can think of seven reasons:

  1. Accuracy. Current AI is unreliable. If I ask about information theory or how to replace the battery on my laptop, it’s very impressive but makes some mistakes. But if I ask about heritability, the answers are three balls of gibberish stacked on top of each other in a trench-coat. Of course, random humans make mistakes, too. But if you find a quality human source, it is far less likely to contain egregious mistakes. This is particularly true across “large contexts” and for tasks where solutions are hard to verify.

  2. AI is boring. At least, writing from current popular AI tools is boring, by default.

  3. Parasocial relationships. If I’ve been reading someone for a long time, I start to feel like I have a kind of relationship with them. If you’ve followed this blog for a long time, you might feel like you have a relationship with me. Calling these “parasocial relationships” makes them sound sinister, but I think this is normal and actually a clever way of using our tribal-band programming to help us navigate the modern world. Just like in “real” relationships, when I read someone I have a parasocial relationship with, I have extra context that makes it easier to understand them, I feel a sense of human connection, and I feel like I’m getting a sort of update on their “story”. I don’t get any of that with (current) AI.

  4. Skin in the game. If a human screws something up, it’s embarrassing. They lose respect and readers. On a meta-level, AI companies have similar incentives not to screw things up. But AI itself doesn’t (seem to) care. Human nature makes it easier to trust someone when we know they’re putting some kind of reputation on the line.

  5. Conspicuous consumption. Since I read Reasons and Persons, I can brag to everyone that I read Reasons and Persons. If I had read some equally good AI-written book, probably no one would care.

  6. Coordination points. Partly, I read Reasons and Persons because I liked it. And, I guess, maybe I read it so I can brag about the fact that I read it. (Hey everyone, have I mentioned that I read Reasons and Persons?) But I also read it because other people read it. When I talk to those people, we have a shared vocabulary and set of ideas that makes it easier to talk about other things. This wouldn’t work if we had all explored the same ideas through fragmented AI “tutoring”.

  7. Change is slow. Here we are 600 years after the invention of the printing press, and the primary mode of advanced education is still for people to physically go to a room where an expert is talking and write down stuff the expert says. If we’re that slow to adapt, then maybe we read human-written explainers simply out of habit.

How much do each of these really matter? How much confidence should they give us that explainer posts will still exist a decade from now? Let’s handle them in reverse order.

Argument 7: Change is slow

Sure, society takes time to adapt to technological change. But I don’t think college lectures are a good example of this, or that they’re a medieval relic that only survives out of inertia. On the contrary, I think they survive because we don’t really have any other model of education that’s fundamentally better.

Take paper letters. One hundred years ago, these were the primary form of long-distance communication. But after the telephone was widely distributed, it took only a few decades to kill the letter in almost all cases where the phone was better. When email and texting showed up, they killed off almost all remaining use of paper letters. They still exist, but they’re niche.

The same basic story holds for horses, the telegraph, card catalogs, slide rules, VHS tapes, vacuum tubes, steam engines, ice boxes, answering machines, sailboats, typewriters, the short story, and the divine right of kings. When we have something that’s actually better, we drop the old ways pretty quickly. Inertia alone might keep explainer posts alive for a few years, but not more than that.

Arguments 5 and 6: Coordination points and conspicuous consumption

Western civilization began with the Iliad. Or, at least, we’ve decided to pretend it did. If you read the Iliad, then you can brag about reading the Iliad (good) and you have more context to engage with everyone else who read it (very good). So people keep reading the Iliad. I think this will continue indefinitely.

But so what? The Iliad is in that position because people have been reading/listening to it for thousands of years. But if you write something new and there’s no “normal” reason to read it, then it has no way to establish that kind of self-sustaining legacy.

Non-fiction in general has a very short half-life. And even when coordination points exist, people often rely on secondary sources anyway. Personally, I’ve tried to read Wittgenstein, but I found it incomprehensible. Yet I think I’ve absorbed his most useful idea by reading other people’s descriptions. I wonder how much “Wittgenstein” is really a source at this point as opposed to a label.

Also… explainer posts typically aren’t the Iliad. So I don’t think this will do much to keep explainer posts alive, either.

(Aside: I’ve never understood why philosophy is so fixated on original sources, instead of continually developing new presentations of old ideas like math and physics do. Is this related to the fact that philosophers go to conferences and literally read their papers out loud?)

Argument 4: Skin in the game

I trust people more when I know they’re putting their reputation on the line, for the same reason I trust restaurants more when I know they rely on repeat customers. AI doesn’t give me this same reason for confidence.

But so what? This is a loose heuristic. If AI were truly more accurate than human writing, I’m sure most people would learn to trust it in a matter of weeks. If AI were ultra-reliable but people really needed someone to hold accountable, AI companies could perhaps offer some kind of “insurance”. So I don’t see this as keeping explainers alive, either.

Argument 3: Parasocial relationships

Humans are social creatures. If bears had a secret bear Wikipedia and you went to the entry on humans, it would surely say, “Humans are obsessed with relationships.” I feel confident this will remain true.

I also feel confident that we will continue to be interested in what people we like and respect think about matters of fact. It seems plausible that we’ll continue to enjoy getting that information bundled together with little jokes or bursts of personality. So I expect our social instincts will provide at least some reason for explainers to survive.

But how strong will this effect be? When explainer posts are read today, what fraction of readers are familiar enough to have a parasocial relationship with the author? Maybe 40%? And when people are familiar, what fraction of their motivation comes from the parasocial relationship, as opposed to just wanting to understand the content? Maybe another 40%? Those are made-up numbers, but I think it’s hard to avoid the conclusion that parasocial relationships explain only a fraction of why people read explainers today.

And there’s another issue. How do parasocial relationships get started if there’s no other reason to read someone? These might keep established authors going for a while at reduced levels, but it seems like it would make it hard for new people to rise up.

Argument 2: Boring-ness

Maybe popular AIs are a bit boring, today. But I think this is mostly due to the final reinforcement learning step. If you interact with “base models”, they are very good at picking up style cues and not boring at all. So I highly doubt that there’s some fundamental limitation here.

And anyway, does anyone care? If you just want to understand why vitamin D is technically a type of steroid, how much does style really matter, as opposed to clarity? I think style mostly matters in the context of a parasocial relationship, meaning we’ve already accounted for it above.

Argument 1: Accuracy

I don’t know for sure if AI will ever be as accurate as a high-quality human source. Though it seems very unlikely that physics somehow precludes creating systems that are more accurate than humans.

But if AI is that accurate, then I think this exercise suggests that explainer posts are basically toast. All the above arguments are just too weak to explain most of why people read human-written explainers now. So I think it’s mostly just accuracy. When that human advantage goes, I expect human-written explainers to go with it.

Counter-arguments

I can think of three main counterarguments.

First, maybe AI will fix discovery. Currently, potential readers of explainers often have no way to find potential writers. Search engines have utterly capitulated to SEO spam. Social media soft-bans outward links. If you write for a long time, you can build up an audience, but few people have the time and determination to do that. If you write a single explainer in your life, no one will read it. The rare exceptions to this rule either come from people contributing to established (non-social media) communities or from people with exceptional social connections. So—this argument goes—most potential readers don’t bother trying to find explainers, and most potential writers don’t bother creating them. If AI solves that matching problem, explainers could thrive.

Second, maybe society will figure out some new way to reward people who create information. Maybe we fool around with intellectual property law. Maybe we create some crazy Xanadu-like system where in order to read some text, you have to first sign a contract to pay the author based on the value you derive, and this is recursively enforced on everyone who’s downstream of you. Hell, maybe AI companies decide to solve the data wall problem by paying people to write stuff. But I doubt it.

Third, maybe explainers will follow a trajectory like chess. Up until perhaps the early 1990s, humans were so much better than computers at chess that computers were irrelevant. After Deep Blue beat Kasparov in 1997, people quickly realized that while computers could beat humans, human+computer teams could still beat computers. This was called Advanced Chess. Within 15-20 years, however, humans became irrelevant. Maybe there will be a similar Advanced Explainer era? (I kid, that era started five years ago.)

TLDR

Will the explainer post go extinct? My guess is mostly yes, if and when AI reaches human-level accuracy.

Incidentally, since there’s so much techno-pessimism these days: I think this outcome would be… great? It’s a little grim to think of humans all communicating with AI instead of each other, yes. But the upside is all of humanity having access to more accurate and accessible explanations of basically everything. If this is the worst effect of AGI, bring it on.

Y’all are over-complicating these AI-risk arguments

2025-10-02 08:00:00

Say an alien spaceship is headed for Earth. It has 30 aliens on it. The aliens are weak and small. They have no weapons and carry no diseases. They breed at rates similar to humans. They are bringing no new technology. No other ships are coming. There’s no trick—except that they each have an IQ of 300. Would you find that concerning?

Of course, the aliens might be great. They might cure cancer and help us reach world peace and higher consciousness. But would you be sure they’d be great?

Suppose you were worried about the aliens but I scoffed, “Tell me specifically how the aliens would hurt us. They’re small and weak! They can’t do anything unless we let them.” Would you find that counter-argument convincing?

I claim that most people would be concerned about the arrival of the aliens, would not be sure that their arrival would be good, and would not find that counter-argument convincing.

I bring this up because most AI-risk arguments I see go something like this:

  1. There will be a fast takeoff in AI capabilities.
  2. Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
  3. These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

These arguments have always struck me as overcomplicated. So I’d like to submit the following undercomplicated alternative:

  1. Obviously, if an alien race with IQs of 300 were going to arrive on Earth soon, that would be concerning.
  2. In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
  3. No one knows what AI with an IQ of 300 would be like. So it might as well be an alien.

Our subject for today is: Why might one prefer one of these arguments to the other?

The case for the simple argument

The obvious reason to prefer the simple argument is that it’s more likely to be true. The complex argument has a lot of steps. Personally, I think they’re all individually plausible. But are we really confident that there will be a fast takeoff in AI capabilities and that the AI will pursue dangerous subgoals and that it will thereby gain a decisive strategic advantage?

I find that confidence unreasonable. I’ve often been puzzled why so many seemingly-reasonable people will discuss these arguments without rejecting them as overconfident.

I think the explanation is that there are implicitly two versions of the complex argument. The “strong” version claims that fast takeoff et al. will happen, while the “weak” version merely claims that it’s a plausible scenario that we should take seriously. It’s often hard to tell which version people are endorsing.

The distinction is crucial, because these two versions have different weaknesses. I find the strong version wildly overconfident. I agree with the weak version, but I still think it’s unsatisfying.

Say you think there’s a >50% chance things do not go as suggested by the complex argument. Maybe there’s a slow takeoff, or maybe the AI can’t build a decisive strategic advantage, whatever. Now what?

Well, maybe everything turns out great and you live for millions of years, exploring the galaxy, reading poetry, meditating, and eating pie. That would be nice. But it also seems possible that humanity still ends up screwed, just in a different way. The complex argument doesn’t speak to what happens when one of the steps fails. This might give the impression that without any of the steps, everything is fine. But that is not the case.

The simple argument is also more convincing. Partly I think that’s because—well—it’s easier to convince people of things when they’re true. But beyond that, the simple argument doesn’t require any new concepts or abstractions, and it leverages our existing intuitions for how more intelligent entities can be dangerous in unexpected ways.

I actually prefer the simple argument in an inverted form: If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. “If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
  2. “There’s no way that AI with an IQ of 300 will arrive within the next few decades.”
  3. “We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.”

I think all those bullets are unbiteable. Hence, I think AI-risk is real.

But if you make the complex argument, then you seem to be left with the burden of arguing for fast takeoff and alignment difficulty and so on. People who hear that argument also often demand an explanation of just how AI could hurt people (“Nanotechnology? Bioweapons? What kind of bioweapon?”) I think this is a mistake for the same reason it would be a mistake to demand to know how a car accident would happen before putting on your seatbelt. As long as the Complex Scenario is possible, it’s a risk we need to manage. But many people don’t look at things that way.

But I think the biggest advantage of the simple argument is something else: It reveals the crux of disagreement.

I’ve talked to many people who find the complex argument completely implausible. Since I think it is plausible—just not a sure thing—I often ask why. People give widely varying reasons. Some claim that alignment will be easy, some that AI will never really be an “agent”, some talk about the dangers of evolved vs. engineered systems, and some have technical arguments based on NP-hardness or the nature of consciousness.

I’ve never made much progress convincing these people to change their minds. I have succeeded in convincing some people that certain arguments don’t work. (For example, I’ve convinced people that NP-hardness and the nature of consciousness are probably irrelevant.) But when people abandon those arguments, they don’t turn around and accept the whole Scenario as plausible. They just switch to different objections.

So I started giving my simple argument instead. When I did this, here’s what I discovered: None of these people actually accept that AI with an IQ of 300 could happen.

Sure, they often say that they accept this. But if you pin them down, they’re inevitably picturing an AI that lacks some core human capability. Often, the AI can prove theorems or answer questions, but it’s not an “agent” that wants things and does stuff and has relationships and makes long-term plans.

So I conjecture that this is the crux of the issue with AI-risk. People who truly accept that AI with an IQ of 300 and all human capabilities may appear are almost always at least somewhat worried about AI-risk. And people who are not worried about AI-risk almost always don’t truly accept that AI with an IQ of 300 could appear. If that’s the crux, then we should get to it as quickly as possible. And that’s done by the simple argument.

The case for the complex argument

I won’t claim to be neutral. As hinted by the title, I started writing this post intending to make the case for the simple argument, and I still think that case is strong. But I figured I should consider arguments for the other side and—there are some good ones.

Above, I suggested that there are two versions of the complex argument: A “strong” version that claims the scenario it lays out will definitely happen, and a “weak” version that merely claims it’s plausible. I rejected the strong version as overconfident. And I rejected the weak version because there are lots of other scenarios where things could also go wrong for humanity, so why give this one so much focus?

Well, there’s also a middle version of the complex argument: You could claim that the scenario it lays out is not certain, but that if things go wrong for humanity, then they will probably go wrong as in that scenario. This avoids both of my objections—it’s less overconfident, and it gives a good reason to focus on this particular scenario.

Personally, I don’t buy it, because I think other bad scenarios like gradual disempowerment are plausible. But maybe I’m wrong. It doesn’t seem crazy to claim that the Complex Scenario captures most of the probability mass of bad outcomes. And if that’s true, I want to know it.

Now, some people suggest favoring certain arguments for the sake of optics: Even if you accept the complex argument, maybe you’d want to make the simple one because it’s more convincing or is better optics for the AI-risk community. (“We don’t want to look like crazy people.”)

Personally, I am allergic to that whole category of argument. I have a strong presumption that you should argue the thing you actually believe, not some watered-down thing you invented because you think it will manipulate people into believing what you want them to believe. So even if my simpler argument is more convincing, so what?

But say you accept the middle version of the complex argument, yet you think my simple argument is more convincing. And say you’re not as bloody-minded as me, so you want to calibrate your messaging to be more effective. Should you use my simple argument? I’m not sure you should.

The typical human bias is to think other people are similar to us. (How many people favor mandatory pet insurance funded by a land-value tax? At least 80%, right?) But as far as I can tell, the situation with AI-risk is the opposite. Most people I know are at least mildly concerned, but have the impression that “normal people” think that AI-risk is science fiction nonsense.

Yet, here are some recent polls:

Gallup, June 2-15 2025: “[AI is] very different from the technological advancements that came before, and threatens to harm humans and society” (49% agree)
Reuters / Ipsos, August 13-18 2025: “AI could risk the future of humankind” (58% agree)
YouGov, March 5-7 2025: “How concerned, if at all, are you about the possibility that artificial intelligence (AI) will cause the end of the human race on Earth?” (37% very or somewhat concerned)
YouGov, June 27-30 2025: same question as above (43% very or somewhat concerned)

Being concerned about AI is hardly a fringe position. People are already worried, and becoming more so.

I used to picture my simple argument as a sensible middle-ground, arguing for taking AI-risk seriously, but not overconfident:

[Image: spectrum of positions on AI-risk, with the simple argument in the middle]

But I’m starting to wonder if my “obvious argument” is in fact obvious, and something that people can figure out on their own. From looking at the polling data, it seems like the actual situation is more like this, with people on the left gradually wandering towards the middle:

[Image: the same spectrum, with most people on the left gradually wandering towards the middle]

If anything, the optics may favor a confident argument over my simple argument. In principle, they suggest similar actions: Move quickly to reduce existential risk. But what I actually see is that most people—even people working on AI—feel powerless and are just sort of clenching up and hoping for the best.

I don’t think you should advocate for something you don’t believe. But if you buy the complex argument, and you’re holding yourself back for the sake of optics, I don’t really see the point.

Shoes, Algernon, Pangea, and Sea Peoples

2025-09-25 08:00:00

I fear we are in the waning days of the People Read Blog Posts About Random Well-Understood Topics Instead of Asking Their Automatons Era. So before I lose my chance, here is a blog post about some random well-understood topics.

Marathons are stupidly fast

You probably know that people can now run marathons in just over 2 hours. But do you realize how insane that is?

That’s an average speed of 21.1 km per hour, or 13.1 miles per hour. You can think of that as running a mile in 4:35 (world record: 3:45), except doing it 26.2 times in a row. Or, you can think of that as running 100 meters in 17.06 seconds (world record: 9.58 seconds), except doing it 421.6 times in a row. I’d guess that only around half of the people reading this could run 100 meters in 17.06 seconds once.
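
For anyone who wants to check the arithmetic, here’s a quick sketch assuming a flat 2:00:00 finish over the official 42.195 km distance (the figures above use slightly different rounding):

```python
# Sanity-checking the marathon numbers, assuming exactly 2 hours for 42.195 km.
marathon_km = 42.195
marathon_mi = marathon_km / 1.609344            # ~26.22 miles
hours = 2.0

print(marathon_km / hours)                      # ~21.1 km/h
print(marathon_mi / hours)                      # ~13.1 mph
print(hours * 60 / marathon_mi)                 # ~4.58 min per mile, i.e. about 4:35
print(marathon_km * 1000 / 100)                 # ~422 back-to-back 100 m stretches
print(hours * 3600 / (marathon_km * 10))        # ~17.1 seconds per 100 m
```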

This crazy marathon running speed is mostly due to humans being well-adapted for running and generally tenacious. But some of it is due to new shoes with carbon-fiber plates that came out in the late 2010s.

The theory behind these shoes is quite interesting. When you run, you mainly use four joints:

  1. Hips
  2. Knees
  3. Ankles
  4. Metatarsophalangeal

If you haven’t heard of the last of these, they’re pronounced “met-uh-tar-so-fuh-lan-jee-ul” or “MTP”. These are the joints inside your feet behind your big toes.

Besides sounding made-up, they’re different from the other joints in a practical way: The other joints are all attached to large muscles and tendons that stretch out and return energy while running sort of like springs. These can apparently recover around 60% of the energy expended in each stride. (Kangaroos seemingly do even better.) But the MTP joints are only attached to small muscles and tendons, so the energy that goes into them is mostly lost.

These new shoe designs have complex constructions of foam and plates that can do the same job as the MTP joints, but—unlike the MTP joints—store and return that energy to the runner. A recent meta-analysis estimated that this reduced total oxygen consumption by ~2.7% and marathon times by ~2.18%.
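
To make that ~2.18% concrete, here’s what it works out to for a hypothetical 2:05:00 marathoner:

```python
# What a ~2.18% time reduction means for a hypothetical 2:05:00 marathoner.
baseline_s = 2 * 3600 + 5 * 60       # 7500 seconds
saving_s = 0.0218 * baseline_s       # ~163 seconds
print(round(saving_s / 60, 1))       # ~2.7 minutes saved, i.e. roughly a 2:02:17 finish
```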

Algernon

I wonder if these shoes are useful as a test case for the Algernon argument. In general, that argument is that there shouldn’t be any simple technology that would make humans dramatically smarter, since if there was, then evolution would have already found it.

You can apply the same kind of argument to running: We have been optimized very hard by evolution to be good at running, so there shouldn’t be any “easy” technologies that would make us dramatically faster or more efficient.

In the context of the shoes, I think that argument does… OK? The shoes definitely help. But carbon fiber plates are pretty hard to make, and the benefit is pretty modest. Maybe this is some evidence that Algernon isn’t a hard “wall”, but rather a steep slope.

Or, perhaps thinking is just different from running. If you start running, you will get better at it, in a way that spills over into lots of other physical abilities. But there doesn’t seem to be any cognitive task you can practice that makes you better at other cognitive tasks.

If you have some shoes that will make me 2.7% smarter, I’ll buy them.

Pangea

Pangea was a supercontinent that contained roughly all the land on Earth. At the beginning of the Jurassic 200 million years ago, it broke up and eventually formed the current continents. But isn’t the Earth 4.5 billion years old? Why would all the land stick together for 95% of that time and then suddenly break up?

The accepted theory is that it didn’t. Instead, it’s believed that Earth cycles between super-continents and dispersed continents, and Pangea is merely the most recent super-continent.

But why would there be such a cycle? We can break that down into two sub-questions.

First, why would dispersed continents fuse together into a supercontinent? Well, you can think of the Earth as a big ball of rock, warmed half by primordial heat from when the planet formed and half by radioactive decay. Since the surface is exposed to space, it cools, resulting in solid chunks that sort of slide around on the warm magma in the upper mantle. Some of those chunks are denser than others, which causes them to sink into the mantle a bit and get covered with water. So when a “land chunk” crashes into a “water chunk”, the land chunk slides on top. But if two land chunks crash into each other, they tend to crumple together into mountains and stick to each other.

You can see this by comparing this map of all the current plates:

To this map of elevation:

OK, but once a super-continent forms, why would it break apart? Well, compared to the ocean floor, land chunks are thicker and lighter. So they trap heat from inside the planet sort of like a blanket. With no cool ocean floor sliding back into the warm magma beneath, that magma keeps getting warmer and warmer. After tens of millions of years, it heats up so much that it stretches the land above and finally rips it apart.

It’s expected that a new supercontinent “Pangea Ultima” will form in 250 million years. By that time, the sun will be putting out around 2.3% more energy, making things hotter. On top of that, it’s suspected that Pangea Ultima, for extremely complicated reasons, will greatly increase the amount of CO₂ in the atmosphere, likely making the planet uninhabitable by mammals. So we’ve got that going for us.

Egypt and the Sea Peoples

The Sea Peoples are a group of people from… somewhere… that appeared in the Eastern Mediterranean around 1200 BC and left a trail of destruction from modern Turkey down to modern Egypt. They are thought to be either a cause or symptom of the Late Bronze Age collapse.

But did you know the Egyptians made carvings of the situation while they were under attack? Apparently the battle looked like this:

In the inscription, Pharaoh Ramesses III reports:

Those who reached my boundary, their seed is not; their hearts and their souls are finished forever and ever. As for those who had assembled before them on the sea, the full flame was their front before the harbor mouths, and a wall of metal upon the shore surrounded them. They were dragged, overturned, and laid low upon the beach; slain and made heaps from stern to bow of their galleys, while all their things were cast upon the water.

Dear PendingKetchup

2025-09-11 08:00:00

PendingKetchup comments on my recent post on what it means for something to be heritable:

The article seems pretty good at math and thinking through unusual implications, but my armchair Substack eugenics alarm that I keep in the back of my brain is beeping.

Saying that variance was “invented for the purpose of defining heritability” is technically correct, but that might not be the best kind of correct in this case, because it was invented by the founder of the University of Cambridge Eugenics Society who had decided, presumably to support that project, that he wanted to define something called “heritability”.

His particular formula for heritability is presented in the article as if it has odd traits but is obviously basically a sound thing to want to calculate, despite the purpose it was designed for.

The vigorous “educational attainment is 40% heritable, well OK maybe not but it’s a lot heritable, stop quibbling” hand waving sounds like a person who wants to show but can’t support a large figure. And that framing of education, as something “attained” by people, rather than something afforded to or invested in them, is almost completely backwards at least through college.

The various examples about evil despots and unstoppable crabs highlight how heritability can look large or small independent of more straightforward biologically-mechanistic effects of DNA. But they still give the impression that those are the unusual or exceptional cases.

In reality, there are in fact a lot of evil crabs, doing things like systematically carting away resources from Black children’s* schools, and then throwing them in jail. We should expect evil-crab-based explanations of differences between people to be the predominant ones.

*Not to say that being Black “is genetic”. Things from accent to how you style your hair to how you dress to what country you happen to be standing in all contribute to racial judgements used for racism. But “heritability” may not be the right tool to disentangle those effects.

Dear PendingKetchup,

Thanks for complimenting my math (♡), for reading all the way to the evil crabs, and for not explicitly calling me a racist or eugenicist. I also appreciate that you chose sincerity over boring sarcasm and that you painted such a vibrant picture of what you were thinking while reading my post. I hope you won’t mind if I respond in the same spirit.

To start, I’d like to admit something. When I wrote that post, I suspected some people might have reactions similar to yours. I don’t like that. I prefer positive feedback! But I’ve basically decided to just let reactions like yours happen, because I don’t know how to avoid them without compromising on other core goals.

It sounds like my post gave you a weird feeling. Would it be fair to describe it as a feeling that I’m not being totally upfront about what I really think about race / history / intelligence / biological determinism / the ideal organization of society?

Because if so, you’re right. It’s not supposed to be a secret, but it’s true.

Why? Well, you may doubt this, but when I wrote that post, my goal was that people who read it would come away with a better understanding of the meaning of heritability and how weird it is. That’s it.

Do I have some deeper and darker motivations? Probably. If I probe my subconscious, I find traces of various embarrassing things like “draw attention to myself” or “make people think I am smart” or “after I die, live forever in the world of ideas through my amazing invention of blue-eye-seeking / human-growth-hormone-injecting crabs.”

What I don’t find are any goals related to eugenics, Ronald Fisher, the heritability of educational attainment, if “educational attainment” is good terminology, racism, oppression, schools, the justice system, or how society should be organized.

These were all non-goals for basically two reasons:

  1. My views on those issues aren’t very interesting or notable. I didn’t think anyone would (or should) care about them.

  2. Surely, there is some place in the world for things that just try to explain what heritability really means? If that’s what’s promised, then it seems weird to drop in a surprise morality / politics lecture.

At the same time, let me concede something else. The weird feeling you got as you read my post might be grounded in statistical truth. That is, it might be true that many people who blog about things like heritability have social views you wouldn’t like. And it might be true that some of them pretend at truth-seeking but are mostly just charlatans out to promote those unliked-by-you social views.

You’re dead wrong to think that’s what I’m doing. All your theories of things I’m trying to suggest or imply are unequivocally false. But given the statistical realities, I guess I can’t blame you too much for having your suspicions.

So you might ask—if my goal is just to explain heritability, why not make that explicit? Why not have a disclaimer that says, “OK I understand that heritability is fraught and blah blah blah, but I just want to focus on the technical meaning because…”?

One reason is that I think that’s boring and condescending. I don’t think people need me to tell them that heritability is fraught. You clearly did not need me to tell you that.

Also, I don’t think such disclaimers make you look neutral. Everyone knows that people with certain social views (likely similar to yours) are more likely to give such disclaimers. And they apply the same style of statistical reasoning you used to conclude I might be a eugenicist. I don’t want people who disagree with those social views to think they can’t trust me.

Paradoxically, such disclaimers often seem to invite more objections from people who share the views they’re correlated with, too. Perhaps that’s because the more signals we get that someone is on “our” side, the more we tend to notice ideological violations. (I’d refer here to the narcissism of small differences, though I worry you may find that reference objectionable.)

If you want to focus on the facts, the best strategy seems to be serene and spiky: to demonstrate by your actions that you are on no one’s side, that you don’t care about being on anyone’s side, and that your only loyalty is to readers who want to understand the facts and make up their own damned mind about everything else.

I’m not offended by your comment. I do think it’s a little strange that you’d publicly suggest someone might be a eugenicist on the basis of such limited evidence. But no one is forcing me to write things and put them on the internet.

The reason I’m writing to you is that you were polite and civil and seem well-intentioned. So I wanted you to know that your world model is inaccurate. You seem to think that because my post did not explicitly support your social views, it must have been written with the goal of undermining those views. And that is wrong.

The truth is, I wrote that post without supporting your (or any) social views because I think mixing up facts and social views is bad. Partly, that’s just an aesthetic preference. But if I’m being fully upfront, I also think it’s bad in the consequentialist sense that it makes the world a worse place.

Why do I think this? Well, recall that I pointed out that if there were crabs that injected blue-eyed babies with human growth hormone, that would increase the heritability of height. You suggest I had sinister motives for giving this example, as if I was trying to conceal the corollary that if the environment provided more resources to people with certain genes (e.g. skin color) that could increase the heritability of other things (e.g. educational attainment).

Do you really think you’re the only reader to notice that corollary?

The degree to which things are “heritable” depends on the nature of society. This is a fact. It’s a fact that many people are not aware of. It’s also a fact that—I guess—fits pretty well with your social views. I wanted people to understand that. Not out of loyalty to your social views, but because it is true.
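
If it’s useful, here’s a tiny simulation sketch of the crab example from that post, with every number invented. “Heritability” below is just the share of height variance explained by genetic factors, and it goes up once the environment hands out growth hormone based on a genetic marker (eye color).

```python
# Minimal simulation of the crab thought experiment (invented numbers).
# "Heritability" here is just the fraction of height variance explained by genetic factors.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genes = rng.normal(size=n)                          # polygenic "height score"
blue_eyes = (rng.random(n) < 0.3).astype(float)     # eye color, also genetic
noise = rng.normal(size=n)                          # non-genetic factors

def variance_explained(height, *genetic_factors):
    # R^2 of a linear fit of height on the genetic factors (plus an intercept).
    X = np.column_stack([np.ones(n), *genetic_factors])
    fitted = X @ np.linalg.lstsq(X, height, rcond=None)[0]
    return fitted.var() / height.var()

height_no_crabs = genes + noise
height_with_crabs = genes + noise + 2.0 * blue_eyes  # crabs inject blue-eyed babies with HGH

print(variance_explained(height_no_crabs, genes, blue_eyes))    # ~0.50
print(variance_explained(height_with_crabs, genes, blue_eyes))  # ~0.65
```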

It seems that you’re annoyed that I didn’t phrase all my examples in terms of culture war. I could have done that. But I didn’t, because I think my examples are easier to understand, and because the degree to which changing society might change the heritability of some trait is a contentious empirical question.

But OK. Imagine I had done that. And imagine all the examples were perfectly aligned with your social views. Do you think that would have made the post more or less effective in convincing people that the fact we’re talking about is true? I think the answer is: Far less effective.

I’ll leave you with two questions:

Question 1: Do you care about the facts? Do you believe the facts are on your side?

Question 2: Did you really think I wrote that post with the goal of promoting eugenics?

If you really did think that, then great! I imagine you’ll be interested to learn that you were incorrect.

But just as you had an alarm beeping in your head as you read my post, I had one beeping in my head as I read your comment. My alarm was that you were playing a bit of a game. It’s not that you really think I wanted to promote eugenics, but rather that you’re trying to enforce a norm that everyone must give constant screaming support to your social views and anyone who’s even slightly ambiguous should be ostracized.

Of course, this might be a false alarm! But if that is what you’re doing, I have to tell you: I think that’s a dirty trick, and a perfect example of why mixing facts and social views is bad.

You may disagree with all my motivations. That’s fine. (I won’t assume that means you are a eugenicist.) All I ask is that you disapprove accurately.

xox
dynomight