
Make product worse, get money

2025-11-20 08:00:00

I recently asked why people seem to hate dating apps so much. In response, 80% of you emailed me some version of the following theory:

The thing about dating apps is that if they do a good job and match people up, then the matched people will quit the app and stop paying. So they have an incentive to string people along but not to actually help people find long-term relationships.

May I explain why I don’t find this type of theory very helpful?

I’m not saying that I think it’s wrong, mind you. Rather, my objection is that while the theory is phrased in terms of dating apps, the same basic pattern applies to basically anyone who is trying to make money by doing anything.

For example, consider a pizza restaurant. Try these theories on for size:

  • Pizza: “The thing about pizza restaurants is that if they use expensive ingredients or labor-intensive pizza-making techniques, then it costs more to make pizza. So they have an incentive to use low-cost ingredients and labor-saving shortcuts.”

  • Pizza II: “The thing about pizza restaurants is that if they have nice tables separated at a comfortable distance, then they can’t fit as many customers. So they have an incentive to use tiny tables and cram people in cheek by jowl.”

  • Pizza III: “The thing about pizza restaurants is that if they sell big pizzas, then people will eat them and stop being hungry, meaning they don’t buy additional pizza. So they have an incentive to serve tiny low-calorie pizzas.”

See what I mean? You can construct similar theories for other domains, too:

  • Cars: “The thing about automakers is that making cars safe is expensive. So they have an incentive to make unsafe cars.”

  • Videos: “The thing about video streaming is that high-resolution video uses more expensive bandwidth. So they have an incentive to use low-resolution.”

  • Blogging: “The thing about bloggers is that research is time-consuming. So they have an incentive to be sloppy about the facts.”

  • Durability: “The thing about {lightbulb, car, phone, refrigerator, cargo ship} manufacturing is that if you make a {lightbulb, car, phone, refrigerator, cargo ship} that lasts a long time, then people won’t buy new ones. So there’s an incentive to make {lightbulbs, cars, phones, refrigerators, cargo ships} that break quickly.”

All these theories can be thought of as instances of two general patterns:

  • Make product worse, get money: “The thing about selling goods or services is that making goods or services better costs money. So people have an incentive to make goods and services worse.”

  • Raise price, get money: “The thing about selling goods and services is that if you raise prices, then you get more money. So people have an incentive to raise prices.”

Are these theories wrong? Not exactly. But it sure seems like something is missing.

I’m sure most pizza restaurateurs would be thrilled to sell lukewarm 5 cm cardboard discs for $300 each. They do in fact have an incentive to do that, just as predicted by these theories! Yet, in reality, pizza restaurants usually sell pizzas that are made out of food. So clearly these theories aren’t telling the whole story.

Say you have a lucrative business selling 5 cm cardboard discs for $300. I am likely to think, “I like money. Why don’t I sell pizzas that are only mostly cardboard, but also partly made of flour? And why don’t I sell them for $200, so I can steal Valued Reader’s customers?” But if I did that, then someone else would probably set prices at only $100, or even introduce cardboard-free pizzas, and this would continue until hitting some kind of equilibrium.
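To make that undercutting dynamic concrete, here’s a toy sketch (every number is made up; the $8 is a hypothetical production cost for a real pizza):

```python
# Toy price war: each new entrant halves the incumbent's markup over
# production cost. All numbers are illustrative, not real data.
cost = 8.0     # hypothetical cost to make an actual flour-based pizza
price = 300.0  # the cardboard-disc incumbent's price

while price - cost > 1.0:
    price = cost + (price - cost) / 2  # undercut the current leader
    print(f"new entrant charges ${price:.2f}")

# Prices collapse from $300 toward $9: a crude competitive equilibrium.
```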

Sure, producers want to charge infinity dollars for things that cost them zero dollars to make. But consumers want to pay zero dollars for stuff that’s infinitely valuable. It’s in the conflict between these desires that all interesting theories live.

This is why I don’t think it’s helpful to point out that people have an incentive to make their products worse. Of course they do. The interesting question is, why are they able to get away with it?

Reasons stuff is bad

First reason stuff is bad: People are cheap

Why are seats so cramped on planes? Is it because airlines are greedy? Sure. But while they might be greedy, I don’t think they’re dumb. If you do a little math, you can calculate that if airlines were to remove a single row of seats, they could add perhaps 2.5 cm (1 in) of extra legroom for everyone, while only decreasing the number of paying customers by around 3%. (This is based on a single-class 737, but you get the idea.)

So why don’t airlines rip out a row of seats, raise prices by 3% and enjoy the reduced costs for fuel and customer service? The only answer I can see is that people, on average, aren’t actually willing to pay 3% more for 2.5 cm more legroom. We want a worse but cheaper product, and so that’s what we get.
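If you want to check that back-of-envelope math, here’s a sketch (the 32 rows and 79 cm seat pitch are rough assumptions about a single-class 737, not airline data):

```python
# Rough legroom-vs-capacity trade-off for a single-class 737.
rows = 32        # assumed number of economy rows
pitch_cm = 79.0  # assumed seat pitch (distance between rows)

seats_lost = 1 / rows                  # fraction of paying customers lost
extra_legroom = pitch_cm / (rows - 1)  # freed-up space shared by the rest

print(f"capacity lost:  {seats_lost:.1%}")        # ~3.1%
print(f"extra legroom:  {extra_legroom:.1f} cm")  # ~2.5 cm per row
```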

I think this is the most common reason stuff is “bad”. It’s why Subway sandwiches are so soggy, why video games are so buggy, and why IKEA furniture and Primark clothes fall apart so quickly.

It’s good when things are bad for this reason. Or at least, that’s the premise of capitalism: When companies cut costs, that’s the invisible hand redirecting resources to maximize social value, or whatever. Companies may be motivated by greed. And you may not like it, since you want to pay zero dollars for infinite value. But this is markets working as designed.

Second reason stuff is bad: Information asymmetries

Why is it that almost every book / blog / podcast about longevity is such garbage? Well, we don’t actually know many things that will reliably increase longevity. And those things are mostly all boring / hard / non-fun. And even if you do all of them, it probably only adds a couple of years in expectation. And telling people these facts is not a good way to find suckers who will pay you lots of money for your unproven supplements / seminars / etc.

True! But it doesn’t explain why all longevity stuff is so bad. Why don’t honest people tell the true story and drive all the hucksters out of business? I suspect the answer is that unless you have a lot of scientific training and do a lot of research, it’s basically impossible to figure out just how huckstery all the hucksters really are.

I think this same basic phenomenon explains why some supplements contain heavy metals, why some food contains microplastics, why restaurants use so much butter and salt, why rentals often have crappy insulation, and why most cars seem to only be safe along dimensions included in crash test scores. When consumers can’t tell good from evil, evil triumphs.

Third reason stuff is bad: People have bad taste

Sometimes stuff is bad because people just don’t appreciate the stuff you consider good. Examples are definitionally controversial, but I think this includes restaurants in cities where all restaurants are bad, North American tea, and travel pants. This reason has a blurry boundary with information asymmetries, as seen in ultrasonic humidifiers or products that use Sucralose instead of aspartame for “safety”.

Fourth reason stuff is bad: Pricing power

Finally, sometimes stuff is bad because markets aren’t working. Sometimes a company is selling a product but has some kind of “moat” that makes it hard for anyone else to compete with them, e.g. because of some technological or regulatory barrier, control of some key resource or location, intellectual property, a beloved brand, or network effects.

If that’s true, then those companies don’t have to worry as much about someone else stealing their business, and so (because everyone is axiomatically greedy) they will find ways to make their product cheaper and/or raise prices up until the price is equal to the full value it provides to the marginal consumer.

Conclusion

Why is food so expensive at sporting events? Yes, people have no alternatives. But people know food is expensive at sporting events. And they don’t like it. Instead of selling water for $17, why don’t venues sell water for $2 and raise ticket prices instead? I don’t know. Probably something complicated, like that expensive food allows you to extract extra money from rich people without losing business from non-rich people.

So of course dating apps would love to string people along for years instead of finding them long-term relationships, so they keep paying money each month. I wouldn’t be surprised if some people at those companies have literally thought, “Maybe we should string people along for years instead of finding them long-term relationships, so they keep paying money each month, I love money so much.”

But if they are actually doing that (which is unclear to me) or if they are bad in some other way, then how do they get away with it? Why doesn’t someone else create a competing app that’s better and thereby steal all their business? It seems like the answer has to be either “because that’s impossible” or “because people don’t really want that”. That’s where the mystery begins.

Dating: A mysterious constellation of facts

2025-10-30 08:00:00

Here are a few things that seem to be true:

  1. Dating apps are very popular.
  2. Lots of people hate dating apps.
  3. They hate them so much that there’s supposedly a resurgence in alternatives like speed dating.

None of those are too controversial, I think. (Let’s stress supposedly in #3.) But if you stare at them for a while, it’s hard to see how they can all be true at the same time.

Because, why do people hate dating apps? People complain that they’re bad in various ways, such as being ineffective, dehumanizing, or expensive. (And such small portions!) But if they’re bad, then why? Technologically speaking, a dating app is not difficult to make. If dating apps are so bad, why don’t new non-bad ones emerge and outcompete them?

The typical answer is network effects. A dating app’s value depends on how many other people are on it. So everyone gravitates to the popular ones and eventually most of the market is captured by a few winners. To displace them, you’d have to spend a huge amount of money on advertising. So—the theory goes—the winners are an oligopoly that gleefully focus on extracting money from their clients instead of making those clients happy.

That isn’t obviously wrong. Match Group (which owns Tinder, Match, Plenty of Fish, OK Cupid, Hinge, and many others) has recently had an operating margin of ~25%. That’s more like a crazy-profitable entrenched tech company (Apple manages ~30%) than a nervous business in a crowded market.

But wait a second. How many people go to a speed dating event? Maybe 30? I don’t know if the speed dating “resurgence” is real, but it doesn’t matter. Some people definitely do find love at real-life events with small numbers of people. If that’s possible, then shouldn’t it also be possible to create a dating app that’s useful even with only a small number of users? Meaning good apps should have emerged long ago and displaced the crappy incumbents? And so the surviving dating apps should be non-hated?

We’ve got ourselves a contradiction. So something is wrong with that argument. But what?

Theory 1: Selection

Perhaps speed dating attendees are more likely to be good matches than people on dating apps. This might be true because they tend to be similar in terms of income, education, etc., and people tend to mate assortatively. People who go to such events might also have some similarities in terms of personality or what they’re looking for in a relationship.

You could also theorize that people at speed dating events are higher “quality”. For example, maybe it’s easier to conceal negative traits on dating apps than it is in person. If so, this might lead to some kind of adverse selection where people without secret negative traits get frustrated and stop using the apps.

I’m not sure either of those are true. But even if they are, consider the magnitudes. While a speed dating event might have 30 people, a dating app in a large city could easily have 30,000 users. While the fraction of good matches might be lower on a dating app, the absolute number is still surely far higher.
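To put toy numbers on that magnitude argument (all rates invented for illustration):

```python
# Even if an app's per-person rate of good matches is 10x worse,
# the much larger pool still wins on absolute numbers.
event_pool, event_rate = 30, 0.10  # hypothetical speed dating event
app_pool, app_rate = 30_000, 0.01  # hypothetical dating app in a big city

print(event_pool * event_rate)  # 3 expected good matches
print(app_pool * app_rate)      # 300 expected good matches
```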

Theory 2: Bandwidth

Perhaps even if you have fewer potential matches at a speed dating event, you have better odds of actually finding them, because in-person interactions reveal information that dating apps don’t.

People often complain that dating apps are superficial, that there’s too much focus on pictures. Personally, I don’t think pictures deserve so much criticism. Yes, they show how hot you are. But pictures also give lots of information about important non-superficial things, like your personality, values, social class, and lifestyle. I’m convinced people use pictures for all that stuff as much as hotness.

But you know what’s even better than pictures? Actually talking to someone!

Many people seem to think that a few minutes of small talk isn’t enough time to learn anything about someone. Personally, I think evolution spent millions of years training us to do exactly that. I’d even claim that this is why small talk exists.

(I have friends with varying levels of extroversion and agreeableness, but all of my friends seem to have high openness to experience. When I meet someone new, I’m convinced I can guess their openness to within ±10% by the time they’ve completed five sentences.)

So maybe the information a dating app provides just isn’t all that useful compared to a few minutes of casual conversation. If so, then dating apps might be incredibly inefficient. You have to go through some silly texting courtship ritual, set up a time to meet, physically go there, and then pretend to smile for an hour even if you immediately hate them.

Under this theory, dating apps provide a tiny amount of information about a gigantic pool of people, while speed dating provides a ton of information about a small number of people. Maybe that’s a win, at least sometimes.

Theory 3: Behavior

Maybe the benefit of real-life events isn’t that they provide more information, but that they change how we behave.

For example, maybe people are nicer in person? Because only then can we sense that others are also sentient beings with internal lives and so on?

I’m pretty sure that’s true. But it’s not obvious it helps with our mystery, since people from dating apps eventually meet in person, too. If they’re still nice when they do, then this just resolves into “in-person interactions provide more information”, and is already covered by the previous theory. To help resolve our mystery, you’d need to claim that people at real-life events act differently than they do when meeting up as a result of a dating app.

That could happen as a result of a “behavioral equilibrium”. Some people take dating apps seriously and some take them casually. But it’s hard to tell what category someone else is in, so everyone proceeds with caution. But by showing up at an in-person event, everyone has demonstrated some level of seriousness. And maybe this makes everyone behave differently? Perhaps, but I don’t really see it.

Obscure theories

I can think of a few other possible explanations.

  1. Maybe speed dating serves a niche. Just like Fitafy / Bristlr / High There! serve people who love fitness / beards / marijuana, maybe speed dating just serves some small-ish fraction of the population but not others.

  2. Maybe the people who succeed at speed dating would also have succeeded no matter what. So they don’t offer any general lessons.

  3. Maybe creating a dating app is in fact very technologically difficult. So while the dating apps are profit-extracting oligopolies, that’s because of a technological moat, not network effects.

I don’t really buy any of these.

Drumroll

So what’s really happening? I am not confident, but here’s my best guess:

  1. Selection is not a major factor.

  2. The high bandwidth of in-person interactions is a major factor.

  3. The fact that people are nicer or more open-minded in person is not a major factor, other than through making in-person interactions higher bandwidth.

  4. None of the obscure theories are major factors.

  5. Dating apps are an oligopoly, driven by network effects.

Basically, a key “filter” in finding love is finding someone where you both feel optimistic after talking for five minutes. Speed dating is (somewhat / sometimes) effective because it efficiently crams a lot of people into the top of that filter.

Meanwhile, because dating apps are low-bandwidth, they need a large pool to be viable. Thus, they’re subject to network effects, and the winners can turn the screws to extract maximum profits from their users.

Partly I’m not confident in that story just because it has so many moving parts. But something else worries me too. If it’s true, then why aren’t dating apps trying harder to provide that same information that in-person interactions do?

If anything, I understand they’re moving in the opposite direction. Maybe Match Group would have no interest in that, since they’re busy enjoying their precious network effects. But why not startups? Hell, why not philanthropies? (Think of all the utility you could create!) For the above story to hold together, you have to believe that it’s a very difficult problem.

Pointing machines, population pyramids, post office scandal, type species, and horse urine

2025-10-23 08:00:00

I recently wondered if explainer posts might go extinct. In response, you all assured me that I have nothing to worry about, because you already don’t care about my explanations—you just like it when I point at stuff.

Well OK then!

Pointing machines

How did Michelangelo make this?

[Image: Michelangelo’s David]

What I mean is—marble is unforgiving. If you accidentally remove some material, it’s gone. You can’t fix it by adding another layer of paint. Did Michelangelo somehow plan everything out in advance and then execute everything perfectly the first time, with no mistakes?

I learned a few years ago that sculptors have long used a simple but ingenious invention called a pointing machine. This allows you to create a sculpture in clay and, in effect, “copy” it into stone. That sounds magical, but it’s really just an articulated pointer that you move between anchor points attached to the (finished) clay and the (incomplete) stone sculpture. If you position the pointer based on the clay sculpture and then move it to the stone sculpture, anything the pointer hits should be removed. Repeat that thousands of times and the sculpture is copied.

[Image: pointing machines]

I was sad to learn that Michelangelo was a talentless hack, but I dutifully spent the last few years telling everyone that all sculptures were made this way and actually sculpture is extremely easy, etc.

Last week I noticed that Michelangelo died in 1564, which was over 200 years before the pointing machine was invented.

Except, apparently since ancient times sculptors have used a technique sometimes called the “compass method” which is sort of like a pointing machine except more complex and involving a variety of tools and measurements. This was used by the ancient Romans to make copies of older Greek sculptures. And most people seem to think that Michelangelo probably did use that.

Population pyramids

I think this is one of the greatest data visualizations ever invented.

[Image: population pyramids of India and China]

Sure, it’s basically just a histogram turned on the side. But compare India’s smooth and calm teardrop with China’s jagged chaos. There aren’t many charts that simultaneously tell you so much about the past and the future.

It turns out that this visualization was invented by Francis Amasa Walker. He was apparently such an impressive person that this invention doesn’t even merit a mention on his Wikipedia page, but he used it in creating these visualizations for the 1874 US atlas:

[Image: population pyramids from the 1874 US atlas]

I think those are the first population pyramids ever made. The atlas also contains many other beautiful visualizations, for example this one of church attendance:

[Image: 1874 atlas visualization of church attendance]

Or this one on debt and public expenditures:

[Image: 1874 atlas visualization of debt and public expenditures]

Post office scandal

If you haven’t heard about the British Post Office scandal, here’s what happened: In 1999, Fujitsu delivered buggy accounting software to the British Post Office that incorrectly determined that thousands of subpostmasters were stealing. Based on this faulty data, the post office prosecuted and convicted close to a thousand people, of whom 236 went to prison. Many others lost their jobs or were forced to “pay back” the “shortfalls” from their own pockets.

Of course, this is infuriating. But beyond that, I notice I am confused. It doesn’t seem like anyone wanted to hurt all those subpostmasters. The cause seems to be only arrogance, stupidity, and negligence.

I would have predicted that before you could punish thousands of people based on the same piece of fake evidence, something would happen that would stop you. Obviously, I was wrong. But I find it hard to think of good historical analogies. Maybe negligence in police crime labs or convictions of parents for “shaken baby syndrome”? Neither of these is a good analogy.

One theory is that the post office scandal happened because the post office—the “victim”—had the power to itself bring prosecutions. But in hundreds of cases things were done the normal way, with police “investigating” the alleged crimes and then sending the cases to be brought by normal prosecutors. Many cases were also pursued in Scotland and Northern Ireland, where the Post Office lacks this power.

Another theory would be:

  1. Prosecutors have incredible latitude in choosing who they want to prosecute.

  2. Like other humans, some prosecutors are arrogant/stupid/negligent.

  3. It’s actually pretty easy for prosecutors to convict an innocent person if they really want to, as long as they have some kind of vaguely-incriminating evidence.

Under this theory, similar miscarriages of justice happen frequently. But they only involve a single person, and so they don’t make the news.

Type species

Type species - Wikipedia

I link to this not because it’s interesting but because it’s so impressively incomprehensible. If there’s someone nearby, I challenge you to read this to them without losing composure.

In zoological nomenclature, a type species (species typica) is the species whose name is considered to be permanently taxonomically associated with the name of a genus or subgenus. In other words, it is the species that contains the biological type specimen or specimens of the genus or subgenus. A similar concept is used for groups ranked above the genus and called a type genus.

In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types.

In bacteriology, a type species is assigned for each genus. Whether or not currently recognized as valid, every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type.

Can such a thing be created unintentionally? I tried to parody this by creating an equally-useless description of an everyday object. But in the end, I don’t think it’s very funny, because it’s almost impossible to create something worse than the above passage.

A funnel is a tool first created in antiquity with rudimentary versions fabricated from organic substrates such as cucurbitaceae or broadleaf foliage by early hominid cultures. The etymology of fundibulum (Latin) provides limited insight into its functional parameters, despite its characteristic broad proximal aperture and a constricted distal orifice.

Compositionally, funnels may comprise organic polymers or inorganic compounds, including but not limited to, synthetic plastics or metallic alloys and may range in weight from several grams to multiple kilograms. Geometrically, the device exhibits a truncated conical or pyramidal morphology, featuring an internal declination angle generally between 30 and 60 degrees.

Within cultural semiotics, funnels frequently manifest in artistic representations, serving as an emblem of domestic ephemerality.

The good news is that the Sri Lankan elephant is the type species for the Asian elephant, whatever that is.

Hormones

I previously mentioned that some hormonal medications used to be made from the urine of pregnant mares. But only after reading The History of Estrogen Therapy (h/t SCPantera) did I realize that it’s right there in the name:

    Premarin = PREgnant MARe’s urINe

If you—like me—struggle to believe that a pharmaceutical company would actually do this, note that this was in 1941. Even earlier, the urine of pregnant humans was used. Tragically, this was marketed as “Emmenin” rather than “Prehumin”.

Will the explainer post go extinct?

2025-10-09 08:00:00

Will short-form non-fiction internet writing go extinct? This may seem like a strange question to ask. After all, short-form non-fiction internet writing is currently, if anything, on the ascent—at least for politics, money, and culture war—driven by the shocking discovery that many people will pay the cost equivalent of four hardback books each year to support their favorite internet writers.

But, particularly for “explainer” posts, the long-term prospects seem dim. I write about random stuff and then send it to you. If you just want to understand something, why would you read my rambling if AI could explain it equally well, in a style customized for your tastes, and then patiently answer your questions forever?

I mean, say you can explain some topic better than AI. That’s cool, but once you’ve published your explanation, AI companies will put it in their datasets, thankyouverymuch, after which AIs will start regurgitating your explanation. And then—wait a second—suddenly you can’t explain that topic better than AI anymore.

This is all perfectly legal, since you can’t copyright ideas, only presentations of ideas. It used to take work to create a new presentation of someone else’s ideas. And there used to be a social norm to give credit to whoever first came up with some idea. This created incentives to create ideas, even if they weren’t legally protected. But AI can instantly slap a new presentation on your ideas, and no one expects AI to give credit for its training data. Why spend time creating content just so it can be nostrified by the Borg? And why read other humans if the Borg will curate their best material for you?

So will the explainer post survive?

Let’s start with an easier question: Already today, AI will happily explain anything. Yet many people read human-written explanations anyway. Why do they do that? I can think of seven reasons:

  1. Accuracy. Current AI is unreliable. If I ask about information theory or how to replace the battery on my laptop, it’s very impressive but makes some mistakes. But if I ask about heritability, the answers are three balls of gibberish stacked on top of each other in a trench-coat. Of course, random humans make mistakes, too. But if you find a quality human source, it is far less likely to contain egregious mistakes. This is particularly true across “large contexts” and for tasks where solutions are hard to verify.

  2. AI is boring. At least, writing from current popular AI tools is boring, by default.

  3. Parasocial relationships. If I’ve been reading someone for a long time, I start to feel like I have a kind of relationship with them. If you’ve followed this blog for a long time, you might feel like you have a relationship with me. Calling these “parasocial relationships” makes them sound sinister, but I think this is normal and actually a clever way of using our tribal-band programming to help us navigate the modern world. Just like in “real” relationships, when I read someone I have a parasocial relationship with, I have extra context that makes it easier to understand them, I feel a sense of human connection, and I feel like I’m getting a sort of update on their “story”. I don’t get any of that with (current) AI.

  4. Skin in the game. If a human screws something up, it’s embarrassing. They lose respect and readers. On a meta-level, AI companies have similar incentives not to screw things up. But AI itself doesn’t (seem to) care. Human nature makes it easier to trust someone when we know they’re putting some kind of reputation on the line.

  5. Conspicuous consumption. Since I read Reasons and Persons, I can brag to everyone that I read Reasons and Persons. If I had read some equally good AI-written book, probably no one would care.

  6. Coordination points. Partly, I read Reasons and Persons because I liked it. And, I guess, I read it so I can brag about the fact that I read it. (Hey everyone, have I mentioned that I read Reasons and Persons?) But I also read it because other people read it. When I talk to those people, we have a shared vocabulary and set of ideas that makes it easier to talk about other things. This wouldn’t work if we had all explored the same ideas through fragmented AI “tutoring”.

  7. Change is slow. Here we are 600 years after the invention of the printing press, and the primary mode of advanced education is still for people to physically go to a room where an expert is talking and write down stuff the expert says. If we’re that slow to adapt, then maybe we read human-written explainers simply out of habit.

How much do each of these really matter? How much confidence should they give us that explainer posts will still exist a decade from now? Let’s handle them in reverse order.

Argument 7: Change is slow

Sure, society takes time to adapt to technological change. But I don’t think college lectures are a good example of this, or that they’re a medieval relic that survives only out of inertia. On the contrary, I think they survive because we don’t really have any other model of education that’s fundamentally better.

Take paper letters. One hundred years ago, these were the primary form of long-distance communication. But after the telephone was widely distributed, it only took a few decades to kill the letter in almost all cases where the phone is better. When email and texting showed up, they killed off almost all remaining use of paper letters. They still exist, but they’re niche.

The same basic story holds for horses, the telegraph, card catalogs, slide rules, VHS tapes, vacuum tubes, steam engines, ice boxes, answering machines, sailboats, typewriters, the short story, and the divine right of kings. When we have something that’s actually better, we drop the old ways pretty quickly. Inertia alone might keep explainer posts alive for a few years, but not more than that.

Arguments 5 and 6: Coordination points and conspicuous consumption

Western civilization began with the Iliad. Or, at least, we’ve decided to pretend it did. If you read the Iliad, then you can brag about reading the Iliad (good) and you have more context to engage with everyone else who read it (very good). So people keep reading the Iliad. I think this will continue indefinitely.

But so what? The Iliad is in that position because people have been reading/listening to it for thousands of years. But if you write something new and there’s no “normal” reason to read it, then it has no way to establish that kind of self-sustaining legacy.

Non-fiction in general has a very short half-life. And even when coordination points exist, people often rely on secondary sources anyway. Personally, I’ve tried to read Wittgenstein, but I found it incomprehensible. Yet I think I’ve absorbed his most useful idea by reading other people’s descriptions. I wonder how much “Wittgenstein” is really a source at this point as opposed to a label.

Also… explainer posts typically aren’t the Iliad. So I don’t think this will do much to keep explainer posts alive, either.

(Aside: I’ve never understood why philosophy is so fixated on original sources, instead of continually developing new presentations of old ideas like math and physics do. Is this related to the fact that philosophers go to conferences and literally read their papers out loud?)

Argument 4: Skin in the game

I trust people more when I know they’re putting their reputation on the line, for the same reason I trust restaurants more when I know they rely on repeat customers. AI doesn’t give me this same reason for confidence.

But so what? This is a loose heuristic. If AI were truly more accurate than human writing, I’m sure most people would learn to trust it in a matter of weeks. If AI was ultra-reliable but people really needed someone to hold accountable, AI companies could perhaps offer some kind of “insurance”. So I don’t see this as keeping explainers alive, either.

Argument 3: Parasocial relationships

Humans are social creatures. If bears had a secret bear Wikipedia and you went to the entry on humans, it would surely say, “Humans are obsessed with relationships.” I feel confident this will remain true.

I also feel confident that we will continue to be interested in what people we like and respect think about matters of fact. It seems plausible that we’ll continue to enjoy getting that information bundled together with little jokes or bursts of personality. So I expect our social instincts will provide at least some reason for explainers to survive.

But how strong will this effect be? When explainer posts are read today, what fraction of readers are familiar enough to have a parasocial relationship with the author? Maybe 40%? And when people are familiar, what fraction of their motivation comes from the parasocial relationship, as opposed to just wanting to understand the content? Maybe another 40%? Those are made-up numbers, but multiplying them through (40% × 40% = 16%) makes it hard to avoid the conclusion that parasocial relationships explain only a fraction of why people read explainers today.

And there’s another issue. How do parasocial relationships get started if there’s no other reason to read someone? These might keep established authors going for a while at reduced levels, but it seems like it would make it hard for new people to rise up.

Argument 2: Boring-ness

Maybe popular AIs are a bit boring, today. But I think this is mostly due to the final reinforcement learning step. If you interact with “base models”, they are very good at picking up style cues and not boring at all. So I highly doubt that there’s some fundamental limitation here.

And anyway, does anyone care? If you just want to understand why vitamin D is technically a type of steroid, how much does style really matter, as opposed to clarity? I think style mostly matters in the context of a parasocial relationship, meaning we’ve already accounted for it above.

Argument 1: Accuracy

I don’t know for sure if AI will ever be as accurate as a high-quality human source. Though it seems very unlikely that physics somehow precludes creating systems that are more accurate than humans.

But if AI is that accurate, then I think this exercise suggests that explainer posts are basically toast. All the above arguments are just too weak to explain most of why people read human-written explainers now. So I think it’s mostly just accuracy. When that human advantage goes, I expect human-written explainers to go with it.

Counter-arguments

I can think of three main counterarguments.

First, maybe AI will fix discovery. Currently, potential readers of explainers often have no way to find potential writers. Search engines have utterly capitulated to SEO spam. Social media soft-bans outward links. If you write for a long time, you can build up an audience, but few people have the time and determination to do that. If you write a single explainer in your life, no one will read it. The rare exceptions to this rule either come from people contributing to established (non-social media) communities or from people with exceptional social connections. So—this argument goes—most potential readers don’t bother trying to find explainers, and most potential writers don’t bother creating them. If AI solves that matching problem, explainers could thrive.

Second, maybe society will figure out some new way to reward people who create information. Maybe we fool around with intellectual property law. Maybe we create some crazy Xanadu-like system where in order to read some text, you have to first sign a contract to pay its author based on the value you derive, and this is recursively enforced on everyone who’s downstream of you. Hell, maybe AI companies decide to solve the data wall problem by paying people to write stuff. But I doubt it.

Third, maybe explainers will follow a trajectory like chess. Up until perhaps the early 1990s, humans were so much better than computers at chess that computers were irrelevant. After Deep Blue beat Kasparov in 1997, people quickly realized that while computers could beat humans, human+computer teams could still beat computers. This was called Advanced Chess. Within 15-20 years, however, humans became irrelevant. Maybe there will be a similar Advanced Explainer era? (I kid, that era started five years ago.)

TLDR

Will the explainer post go extinct? My guess is mostly yes, if and when AI reaches human-level accuracy.

Incidentally, since there’s so much techno-pessimism these days: I think this outcome would be… great? It’s a little grim to think of humans all communicating with AI instead of each other, yes. But the upside is all of humanity having access to more accurate and accessible explanations of basically everything. If this is the worst effect of AGI, bring it on.

Y’all are over-complicating these AI-risk arguments

2025-10-02 08:00:00

Say an alien spaceship is headed for Earth. It has 30 aliens on it. The aliens are weak and small. They have no weapons and carry no diseases. They breed at rates similar to humans. They are bringing no new technology. No other ships are coming. There’s no trick—except that they each have an IQ of 300. Would you find that concerning?

Of course, the aliens might be great. They might cure cancer and help us reach world peace and higher consciousness. But would you be sure they’d be great?

Suppose you were worried about the aliens but I scoffed, “Tell me specifically how the aliens would hurt us. They’re small and weak! They can’t do anything unless we let them.” Would you find that counter-argument convincing?

I claim that most people would be concerned about the arrival of the aliens, would not be sure that their arrival would be good, and would not find that counter-argument convincing.

I bring this up because most AI-risk arguments I see go something like this:

  1. There will be a fast takeoff in AI capabilities.
  2. Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
  3. These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

These arguments have always struck me as overcomplicated. So I’d like to submit the following undercomplicated alternative:

  1. Obviously, if an alien race with IQs of 300 were going to arrive on Earth soon, that would be concerning.
  2. In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
  3. No one knows what AI with an IQ of 300 would be like. So it might as well be an alien.

Our subject for today is: Why might one prefer one of these arguments to the other?

The case for the simple argument

The obvious reason to prefer the simple argument is that it’s more likely to be true. The complex argument has a lot of steps. Personally, I think they’re all individually plausible. But are we really confident that there will be a fast takeoff in AI capabilities and that the AI will pursue dangerous subgoals and that it will thereby gain a decisive strategic advantage?

I find that confidence unreasonable. I’ve often been puzzled why so many seemingly-reasonable people will discuss these arguments without rejecting that confidence.

I think the explanation is that there are implicitly two versions of the complex argument. The “strong” version claims that fast takeoff et al. will happen, while the “weak” version merely claims that it’s a plausible scenario that we should take seriously. It’s often hard to tell which version people are endorsing.

The distinction is crucial, because these two versions have different weaknesses. I find the strong version wildly overconfident. I agree with the weak version, but I still think it’s unsatisfying.

Say you think there’s a >50% chance things do not go as suggested by the complex argument. Maybe there’s a slow takeoff, or maybe the AI can’t build a decisive strategic advantage, whatever. Now what?

Well, maybe everything turns out great and you live for millions of years, exploring the galaxy, reading poetry, meditating, and eating pie. That would be nice. But it also seems possible that humanity still ends up screwed, just in a different way. The complex argument doesn’t speak to what happens when one of the steps fails. This might give the impression that without any of the steps, everything is fine. But that is not the case.

The simple argument is also more convincing. Partly I think that’s because—well—it’s easier to convince people of things when they’re true. But beyond that, the simple argument doesn’t require any new concepts or abstractions, and it leverages our existing intuitions for how more intelligent entities can be dangerous in unexpected ways.

I actually prefer the simple argument in an inverted form: If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. “If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
  2. “There’s no way that AI with an IQ of 300 will arrive within the next few decades.”
  3. “We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.”

I think all those bullets are unbiteable. Hence, I think AI-risk is real.

But if you make the complex argument, then you seem to be left with the burden of arguing for fast takeoff and alignment difficulty and so on. People who hear that argument also often demand an explanation of just how AI could hurt people (“Nanotechnology? Bioweapons? What kind of bioweapon?”) I think this is a mistake for the same reason it would be a mistake to demand to know how a car accident would happen before putting on your seatbelt. As long as the Complex Scenario is possible, it’s a risk we need to manage. But many people don’t look at things that way.

But I think the biggest advantage of the simple argument is something else: It reveals the crux of disagreement.

I’ve talked to many people who find the complex argument completely implausible. Since I think it is plausible—just not a sure thing—I often ask why. People give widely varying reasons. Some claim that alignment will be easy, some that AI will never really be an “agent”, some talk about the dangers of evolved vs. engineered systems, and some have technical arguments based on NP-hardness or the nature of consciousness.

I’ve never made much progress convincing these people to change their minds. I have succeeded in convincing some people that certain arguments don’t work. (For example, I’ve convinced people that NP-hardness and the nature of consciousness are probably irrelevant.) But when people abandon those arguments, they don’t turn around and accept the whole Scenario as plausible. They just switch to different objections.

So I started giving my simple argument instead. When I did this, here’s what I discovered: None of these people actually accept that AI with an IQ of 300 could happen.

Sure, they often say that they accept this. But if you pin them down, they’re inevitably picturing an AI that lacks some core human capability. Often, the AI can prove theorems or answer questions, but it’s not an “agent” that wants things and does stuff and has relationships and makes long-term plans.

So I conjecture that this is the crux of the issue with AI-risk. People who truly accept that AI with an IQ of 300 and all human capabilities may appear are almost always at least somewhat worried about AI-risk. And people who are not worried about AI-risk almost always don’t truly accept that AI with an IQ of 300 could appear. If that’s the crux, then we should get to it as quickly as possible. And that’s done by the simple argument.

The case for the complex argument

I won’t claim to be neutral. As hinted by the title, I started writing this post intending to make the case for the simple argument, and I still think that case is strong. But I figured I should consider arguments for the other side and—there are some good ones.

Above, I suggested that there are two versions of the complex argument: A “strong” version that claims the scenario it lays out will definitely happen, and a “weak” version that merely claims it’s plausible. I rejected the strong version as overconfident. And I rejected the weak version because there are lots of other scenarios where things could also go wrong for humanity, so why give this one so much focus?

Well, there’s also a middle version of the complex argument: You could claim that the scenario it lays out is not certain, but that if things go wrong for humanity, then they will probably go wrong as in that scenario. This avoids both of my objections—it’s less overconfident, and it gives a good reason to focus on this particular scenario.

Personally, I don’t buy it, because I think other bad scenarios like gradual disempowerment are plausible. But maybe I’m wrong. It doesn’t seem crazy to claim that the Complex Scenario captures most of the probability mass of bad outcomes. And if that’s true, I want to know it.

Now, some people suggest favoring certain arguments for the sake of optics: Even if you accept the complex argument, maybe you’d want to make the simple one because it’s more convincing or is better optics for the AI-risk community. (“We don’t want to look like crazy people.”)

Personally, I am allergic to that whole category of argument. I have a strong presumption that you should argue the thing you actually believe, not some watered-down thing you invented because you think it will manipulate people into believing what you want them to believe. So even if my simpler argument is more convincing, so what?

But say you accept the middle version of the complex argument, yet you think my simple argument is more convincing. And say you’re not as bloody-minded as me, so you want to calibrate your messaging to be more effective. Should you use my simple argument? I’m not sure you should.

The typical human bias is to think other people are similar to us. (How many people favor mandatory pet insurance funded by a land-value tax? At least 80%, right?) But as far as I can tell, the situation with AI-risk is the opposite. Most people I know are at least mildly concerned, but have the impression that “normal people” think that AI-risk is science fiction nonsense.

Yet, here are some recent polls:

| Poll | Date | Statement | Agree |
| --- | --- | --- | --- |
| Gallup | June 2-15, 2025 | [AI is] very different from the technological advancements that came before, and threatens to harm humans and society | 49% |
| Reuters / Ipsos | August 13-18, 2025 | AI could risk the future of humankind | 58% |
| YouGov | March 5-7, 2025 | How concerned, if at all, are you about the possibility that artificial intelligence (AI) will cause the end of the human race on Earth? (Very or somewhat concerned) | 37% |
| YouGov | June 27-30, 2025 | How concerned, if at all, are you about the possibility that artificial intelligence (AI) will cause the end of the human race on Earth? (Very or somewhat concerned) | 43% |

Being concerned about AI is hardly a fringe position. People are already worried, and becoming more so.

I used to picture my simple argument as a sensible middle-ground, arguing for taking AI-risk seriously, but not overconfident:

[Image: spectrum of positions on AI-risk, with the simple argument in the middle]

But I’m starting to wonder if my “obvious argument” is in fact obvious, and something that people can figure out on their own. From looking at the polling data, it seems like the actual situation is more like this, with people on the left gradually wandering towards the middle:

[Image: revised spectrum of positions on AI-risk, with people wandering toward the middle]

If anything, the optics may favor a confident argument over my simple argument. In principle, they suggest similar actions: Move quickly to reduce existential risk. But what I actually see is that most people—even people working on AI—feel powerless and are just sort of clenching up and hoping for the best.

I don’t think you should advocate for something you don’t believe. But if you buy the complex argument, and you’re holding yourself back for the sake of optics, I don’t really see the point.

Shoes, Algernon, Pangea, and Sea Peoples

2025-09-25 08:00:00

I fear we are in the waning days of the People Read Blog Posts About Random Well-Understood Topics Instead of Asking Their Automatons Era. So before I lose my chance, here is a blog post about some random well-understood topics.

Marathons are stupidly fast

You probably know that people can now run marathons in just over 2 hours. But do you realize how insane that is?

That’s an average speed of 21.1 km per hour, or 13.1 miles per hour. You can think of that as running a mile in 4:35 (world record: 3:45), except doing it 26.2 times in a row. Or, you can think of that as running 100 meters in 17.06 seconds (world record: 9.58 seconds), except doing it 421.95 times in a row. I’d guess that only around half of the people reading this could run 100 meters in 17.06 seconds once.
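If you’d like to verify that arithmetic, here’s a quick sketch (it assumes a flat 2:00:00 marathon for simplicity; the actual record is slightly over):

```python
# Re-derive the pace claims from a 2-hour marathon.
marathon_m = 42_195  # marathon distance in meters
time_s = 2 * 60 * 60 # 2:00:00 in seconds

speed = marathon_m / time_s               # ~5.86 m/s
print(f"{speed * 3.6:.1f} km/h")          # ~21.1 km/h
print(f"{1609.34 / speed:.0f} s/mile")    # ~275 s, i.e. ~4:35 per mile
print(f"{100 / speed:.2f} s/100m")        # ~17.06 s per 100 m
print(f"{marathon_m / 100:.2f} repeats")  # ~421.95 consecutive 100s
```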

This crazy marathon running speed is mostly due to humans being well-adapted for running and generally tenacious. But some of it is due to new shoes with carbon-fiber plates that came out in the late 2010s.

The theory behind these shoes is quite interesting. When you run, you mainly use four joints:

  1. Hips
  2. Knees
  3. Ankles
  4. Metatarsophalangeal

If you haven’t heard of the last of these, they’re pronounced “met-uh-tar-so-fuh-lan-jee-ul” or “MTP”. These are the joints inside your feet behind your big toes.

Besides sounding made-up, they’re different from the other joints in a practical way: The other joints are all attached to large muscles and tendons that stretch out and return energy while running sort of like springs. These can apparently recover around 60% of the energy expended in each stride. (Kangaroos seemingly do even better.) But the MTP joints are only attached to small muscles and tendons, so the energy that goes into them is mostly lost.

These new shoe designs have complex constructions of foam and plates that can do the same job as the MTP joints, but—unlike the MTP joints—store and return that energy to the runner. A recent meta-analysis estimated that this reduced total oxygen consumption by ~2.7% and marathon times by ~2.18%.
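To get a feel for what a ~2.18% reduction means, a quick sketch (the 2:06:00 baseline is just an assumed typical elite marathon time, not a figure from the meta-analysis):

```python
# Time saved by a ~2.18% improvement on an elite marathon.
base_s = (2 * 60 + 6) * 60  # 2:06:00 expressed in seconds
saved_s = base_s * 0.0218   # estimated effect of the shoes

print(f"{saved_s / 60:.1f} minutes saved")  # ~2.7 minutes
```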

Algernon

I wonder if these shoes are useful as a test case for the Algernon argument. In general, that argument is that there shouldn’t be any simple technology that would make humans dramatically smarter, since if there was, then evolution would have already found it.

You can apply the same kind of argument to running: We have been optimized very hard by evolution to be good at running, so there shouldn’t be any “easy” technologies that would make us dramatically faster or more efficient.

In the context of the shoes, I think that argument does… OK? The shoes definitely help. But carbon fiber plates are pretty hard to make, and the benefit is pretty modest. Maybe this is some evidence that Algernon isn’t a hard “wall”, but rather a steep slope.

Or, perhaps thinking is just different from running. If you start running, you will get better at it, in a way that spills over into lots of other physical abilities. But there doesn’t seem to be any cognitive task you can practice that makes you better at other cognitive tasks.

If you have some shoes that will make me 2.7% smarter, I’ll buy them.

Pangea

Pangea was a supercontinent that contained roughly all the land on Earth. At the beginning of the Jurassic 200 million years ago, it broke up and eventually formed the current continents. But isn’t the Earth 4.5 billion years old? Why would all the land stick together for 95% of that time and then suddenly break up?

The accepted theory is that it didn’t. Instead, it’s believed that Earth cycles between super-continents and dispersed continents, and Pangea is merely the most recent super-continent.

But why would there be such a cycle? We can break that down into two sub-questions.

First, why would dispersed continents fuse together into a supercontinent? Well, you can think of the Earth as a big ball of rock, warmed half by primordial heat from when the planet formed and half by radioactive decay. Since the surface is exposed to space, it cools, resulting in solid chunks that sort of slide around on the warm magma in the upper mantle. Some of those chunks are denser than others, which causes them to sink into the mantle a bit and get covered with water. So when a “land chunk” crashes into a “water chunk”, the land chunk slides on top. But if two land chunks crash into each other, they tend to crumple together into mountains and stick to each other.

You can see this by comparing this map of all the current plates:

[Image: map of the current tectonic plates]

To this map of elevation:

[Image: world elevation map]

OK, but once a super-continent forms, why would it break apart? Well, compared to the ocean floor, land chunks are thicker and lighter. So they trap heat from inside the planet sort of like a blanket. With no cool ocean floor sliding back into the warm magma beneath, that magma keeps getting warmer and warmer. After tens of millions of years, it heats up so much that it stretches the land above and finally rips it apart.

It’s expected that a new supercontinent “Pangea Ultima” will form in 250 million years. By that time, the sun will be putting out around 2.3% more energy, making things hotter. On top of that, it’s suspected that Pangea Ultima, for extremely complicated reasons, will greatly increase the amount of CO₂ in the atmosphere, likely making the planet uninhabitable by mammals. So we’ve got that going for us.

Egypt and the Sea Peoples

The Sea Peoples are a group of people from… somewhere… that appeared in the Eastern Mediterranean around 1200 BC and left a trail of destruction from modern Turkey down to modern Egypt. They are thought to be either a cause or symptom of the Late Bronze Age collapse.

But did you know the Egyptians made carvings of the situation while they were under attack? Apparently the battle looked like this:

[Image: Egyptian relief carving of the battle]

In the inscription, Pharaoh Ramesses III reports:

Those who reached my boundary, their seed is not; their hearts and their souls are finished forever and ever. As for those who had assembled before them on the sea, the full flame was their front before the harbor mouths, and a wall of metal upon the shore surrounded them. They were dragged, overturned, and laid low upon the beach; slain and made heaps from stern to bow of their galleys, while all their things were cast upon the water.