Blog of Dynomight

Underrated reasons to be thankful V

2025-11-27 08:00:00

  1. That your dog, while she appears to love you only because she’s been adapted by evolution to appear to love you, really does love you.

  2. That if you’re a life form and you cook up a baby and copy your genes to them, you’ll find that the genes have been degraded due to oxidative stress et al., which isn’t cause for celebration, but if you find some other hopefully-hot person and randomly swap in half of their genes, your baby will still be somewhat less fit compared to you and your hopefully-hot friend on average, but now there is variance, so if you cook up several babies, one of them might be as fit or even fitter than you, and that one will likely have more babies than your other babies have, and thus complex life can persist in a universe with increasing entropy.

  3. That if we wanted to, we surely could figure out which of the 300-ish strains of rhinovirus are circulating in a given area at a given time and rapidly vaccinate people to stop it and thereby finally “cure” the common cold, and though this is too annoying to pursue right now, it seems like it’s just a matter of time.

  4. That if you look back at history, you see that plagues went from Europe to the Americas but not the other way, which suggests that urbanization and travel are great allies for infectious disease, and these both continue today but are held in check by sanitation and vaccines even while we have lots of tricks like UVC light and high-frequency sound and air filtration and waste monitoring and paying people to stay home that we’ve barely even put in play.

  5. That while engineered infectious diseases loom ever-larger as a potential very big problem, we also have lots of crazier tricks we could pull out like panopticon viral screening or toilet monitors or daily individualized saliva sampling or engineered microbe-resistant surfaces or even dividing society into cells with rotating interlocks or having people walk around in little personal spacesuits, and while admittedly most of this doesn’t sound awesome, I see no reason this shouldn’t be a battle that we would win.

  6. That clean water, unlimited, almost free.

  7. That dentistry.

  8. That tongues.

  9. That radioactive atoms either release a ton of energy but also quickly stop existing—a gram of Rubidium-90 scattered around your kitchen emits as much energy as ~200,000 incandescent lightbulbs but after an hour only 0.000000113g is left—or don’t put out very much energy but keep existing for a long time—a gram of Carbon-14 only puts out the equivalent of 0.0000212 light bulbs but if you start with a gram, you’ll still have 0.999879g after a year—so it isn’t actually that easy to permanently poison the environment with radiation although Cobalt-60 with its medium energy output and medium half-life is unfortunate, medical applications notwithstanding I still wish Cobalt-60 didn’t exist, screw you Cobalt-60. (A quick sanity check of this decay arithmetic appears after this list.)

  10. That while curing all cancer would only increase life expectancy by ~3 years and curing all heart disease would only increase life expectancy by ~3 years, and preventing all accidents would only increase life expectancy by ~1.5 years, if we did all of these at the same time and then a lot of other stuff too, eventually the effects would go nonlinear, so trying to cure cancer isn’t actually a waste of time, thankfully.

  11. That the peroxisome, while the mitochondria and their stupid Krebs cycle get all the attention, when a fatty-acid that’s too long for them to catabolize comes along, who you gonna call.

  12. That we have preferences, that there’s no agreed ordering of how good different things are, which is neat, and not something that would obviously be true for an alien species, and given our limited resources probably makes us happier on net.

  13. That cardamom, it is cheap but tastes expensive, if cardamom cost 1000× more, people would brag about how they flew to Sri Lanka so they could taste chai made with fresh cardamom and swear that it changed their whole life.

  14. That Gregory of Nyssa, he was right.

  15. That Grandma Moses, it’s not too late.

  16. That sleep, that probably evolution first made a low-energy mode so we don’t starve so fast and then layered on some maintenance processes, but the effect is that we live in a cycle and when things aren’t going your way it’s comforting that reality doesn’t stretch out before you indefinitely but instead you can look forward to a reset and a pause that’s somehow neither experienced nor skipped.

  17. That, glamorous or not, comfortable or not, cheap or not, carbon emitting or not, air travel is very safe.

  18. That, for most of the things you’re worried about, the markets are less worried than you and they have the better track record, though not the issue of your mortality.

  19. That sexual attraction to romantic love to economic unit to reproduction, it’s a strange bundle, but who are we to argue with success.

  20. That every symbolic expression recursively built from differentiable elementary functions has a derivative that can also be written as a recursive combination of elementary functions, although the latter expression may require vastly more terms.

  21. That every expression graph built from differentiable elementary functions and producing a scalar output has a gradient that can itself be written as an expression graph, and furthermore that the latter expression graph is always the same size as the first one and is easy to find, and thus that it’s possible to fit very large expression graphs to data (a toy sketch of this property appears after this list).

  22. That, eerily, biological life and biological intelligence do not appear to make use of that property of expression graphs.

  23. That if you look at something and move your head around, you observe the entire light field, which is a five-dimensional function of three spatial coordinates and two angles, and yet if you do something fancy with lasers, somehow that entire light field can be stored on a single piece of normal two-dimensional film and then replayed later.

  24. That, as far as I can tell, the reason five-dimensional light fields can be stored on two-dimensional film simply cannot be explained without quite a lot of wave mechanics, a vivid example of the strangeness of this place and proof that all those physicists with their diffractions and phase conjugations really are up to something.

  25. That disposable plastic, littered or not, harmless when consumed as thousands of small particles or not, is popular for a reason.

  26. That disposable plastic, when disposed of correctly, is literally carbon sequestration, and that if/when air-derived plastic replaces dead-plankton-derived plastic, this might be incredibly convenient, although it must be said that currently the carbon in disposable plastic only represents a single-digit percentage of total carbon emissions.

  27. That rocks can be broken into pieces and then you can’t un-break the pieces but you can check that they came from the same rock, it’s basically cryptography.

  28. That the deal society has made is that if you have kids then everyone you encounter is obligated to chip in a bit to assist you, and this seems to mostly work without the need for constant grimy negotiated transactions as Econ 101 would suggest, although the exact contours of this deal seem to be a bit murky.

  29. That of all the humans that have ever lived, the majority lived under some kind of autocracy, with the rest distributed among tribal bands, chiefdoms, failed states, and flawed democracies, and only something like 1% enjoyed free elections and the rule of law and civil liberties and minimal corruption, yet we endured and today that number is closer to 10%, and so if you find yourself outside that set, do not lose heart.

  30. That if you were in two dimensions and you tried to eat something then maybe your body would split into two pieces since the whole path from mouth to anus would have to be disconnected, so be thankful you’re in three dimensions, although maybe you could have some kind of jigsaw-shaped digestive tract so your two pieces would only jiggle around or maybe you could use the same orifice for both purposes, remember that if you ever find yourself in two dimensions, I guess.
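
A quick sanity check of the decay arithmetic in item 9, as a minimal sketch (not from the original post): it assumes half-lives of roughly 158 seconds for Rubidium-90 and roughly 5,730 years for Carbon-14, so the decimals differ slightly from the item’s figures, but the orders of magnitude match.

```python
# Exponential decay sanity check for item 9 (assumed half-lives: ~158 s for
# Rb-90, ~5,730 years for C-14; these are not figures taken from the post).

def remaining_grams(initial_g, half_life_s, elapsed_s):
    """N(t) = N0 * 2^(-t / half_life)."""
    return initial_g * 2 ** (-elapsed_s / half_life_s)

HOUR = 3600
YEAR = 365.25 * 24 * 3600

print(remaining_grams(1.0, 158, HOUR))          # Rb-90 after 1 hour: ~1e-7 g
print(remaining_grams(1.0, 5730 * YEAR, YEAR))  # C-14 after 1 year: ~0.999879 g
```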
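
Item 21 is, in effect, describing the property behind reverse-mode automatic differentiation (backpropagation): one backward sweep over the same expression graph yields the gradient. Here is a toy sketch of that idea; the Node class and function names are illustrative choices, not anything from the post.

```python
# Toy reverse-mode autodiff: the gradient of a scalar output is computed by a
# single backward sweep over the same expression graph that produced it.
import math

class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def tanh(self):
        t = math.tanh(self.value)
        return Node(t, [(self, 1 - t * t)])

def backward(output):
    """Accumulate d(output)/d(node) into each node's .grad via the chain rule."""
    order, seen = [], set()
    def topo(node):  # depth-first topological ordering of the graph
        if node not in seen:
            seen.add(node)
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
    topo(output)
    output.grad = 1.0
    for node in reversed(order):  # backward sweep over the same graph
        for parent, local in node.parents:
            parent.grad += node.grad * local

x, y = Node(2.0), Node(3.0)
z = (x * y + x).tanh()  # scalar output built from elementary functions
backward(z)
print(z.value, x.grad, y.grad)  # output plus gradient w.r.t. each input
```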

(previously, previously, previously, previously)

Make product worse, get money

2025-11-20 08:00:00

I recently asked why people seem to hate dating apps so much. In response, 80% of you emailed me some version of the following theory:

The thing about dating apps is that if they do a good job and match people up, then the matched people will quit the app and stop paying. So they have an incentive to string people along but not to actually help people find long-term relationships.

May I explain why I don’t find this type of theory very helpful?

I’m not saying that I think it’s wrong, mind you. Rather, my objection is that while the theory is phrased in terms of dating apps, the same basic pattern applies to basically anyone who is trying to make money by doing anything.

For example, consider a pizza restaurant. Try these theories on for size:

  • Pizza: “The thing about pizza restaurants is that if they use expensive ingredients or labor-intensive pizza-making techniques, then it costs more to make pizza. So they have an incentive to use low-cost ingredients and labor-saving shortcuts.”

  • Pizza II: “The thing about pizza restaurants is that if they have nice tables separated at a comfortable distance, then they can’t fit as many customers. So they have an incentive to use tiny tables and cram people in cheek by jowl.”

  • Pizza III: “The thing about pizza restaurants is that if they sell big pizzas, then people will eat them and stop being hungry, meaning they don’t buy additional pizza. So they have an incentive to serve tiny low-calorie pizzas.”

See what I mean? You can construct similar theories for other domains, too:

  • Cars: “The thing about automakers is that making cars safe is expensive. So they have an incentive to make unsafe cars.”

  • Videos: “The thing about video streaming is that high-resolution video uses more expensive bandwidth. So they have an incentive to use low-resolution.”

  • Blogging: “The thing about bloggers is that research is time-consuming. So they have an incentive to be sloppy about the facts.”

  • Durability: “The thing about {lightbulb, car, phone, refrigerator, cargo ship} manufacturing is that if you make a {lightbulb, car, phone, refrigerator, cargo ship} that lasts a long time, then people won’t buy new ones. So there’s an incentive to make {lightbulbs, cars, phones, refrigerators, cargo ships} that break quickly.”

All these theories can be thought of as instances of two general patterns:

  • Make product worse, get money: “The thing about selling goods or services is that making goods or services better costs money. So people have an incentive to make goods and services worse.”

  • Raise price, get money: “The thing about selling goods and services is that if you raise prices, then you get more money. So people have an incentive to raise prices.”

Are these theories wrong? Not exactly. But it sure seems like something is missing.

I’m sure most pizza restaurateurs would be thrilled to sell lukewarm 5 cm cardboard discs for $300 each. They do in fact have an incentive to do that, just as predicted by these theories! Yet, in reality, pizza restaurants usually sell pizzas that are made out of food. So clearly these theories aren’t telling the whole story.

Say you have a lucrative business selling 5 cm cardboard discs for $300. I am likely to think, “I like money. Why don’t I sell pizzas that are only mostly cardboard, but also partly made of flour? And why don’t I sell them for $200, so I can steal Valued Reader’s customers?” But if I did that, then someone else would probably set prices at only $100, or even introduce cardboard-free pizzas, and this would continue until hitting some kind of equilibrium.

Sure, producers want to charge infinity dollars for things that cost them zero dollars to make. But consumers want to pay zero dollars for stuff that’s infinitely valuable. It’s in the conflict between these desires that all interesting theories live.

This is why I don’t think it’s helpful to point out that people have an incentive to make their products worse. Of course they do. The interesting question is, why are they able to get away with it?

Reasons stuff is bad

First reason stuff is bad: People are cheap

Why are seats so cramped on planes? Is it because airlines are greedy? Sure. But while they might be greedy, I don’t think they’re dumb. If you do a little math, you can calculate that if airlines were to remove a single row of seats, they could add perhaps 2.5 cm (1 in) of extra legroom for everyone, while only decreasing the number of paying customers by around 3%. (This is based on a single-class 737, but you get the idea.)
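
Here is that back-of-the-envelope calculation as a minimal sketch; the row count, seats per row, and seat pitch are assumptions loosely modeled on a single-class 737, not figures from the post.

```python
# Back-of-the-envelope legroom trade-off (assumed single-class 737 numbers).
rows = 31            # assumed number of rows
seats_per_row = 6
seat_pitch_cm = 76   # assumed ~30 in of pitch per row

seats_lost_pct = 100 * seats_per_row / (rows * seats_per_row)
extra_legroom_cm = seat_pitch_cm / (rows - 1)

print(f"Seats lost: {seats_lost_pct:.1f}%")                 # ~3.2%
print(f"Extra legroom per row: {extra_legroom_cm:.1f} cm")  # ~2.5 cm
```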

So why don’t airlines rip out a row of seats, raise prices by 3% and enjoy the reduced costs for fuel and customer service? The only answer I can see is that people, on average, aren’t actually willing to pay 3% more for 2.5 cm more legroom. We want a worse but cheaper product, and so that’s what we get.

I think this is the most common reason stuff is “bad”. It’s why Subway sandwiches are so soggy, why video games are so buggy, and why IKEA furniture and Primark clothes fall apart so quickly.

It’s good when things are bad for this reason. Or at least, that’s the premise of capitalism: When companies cut costs, that’s the invisible hand redirecting resources to maximize social value, or whatever. Companies may be motivated by greed. And you may not like it, since you want to pay zero dollars for infinite value. But this is markets working as designed.

Second reason stuff is bad: Information asymmetries

Why is it that almost every book / blog / podcast about longevity is such garbage? Well, we don’t actually know many things that will reliably increase longevity. And those things are mostly all boring / hard / non-fun. And even if you do all of them, it probably only adds a couple of years in expectation. And telling people these facts is not a good way to find suckers who will pay you lots of money for your unproven supplements / seminars / etc.

True! But it doesn’t explain why all longevity stuff is so bad. Why don’t honest people tell the true story and drive all the hucksters out of business? I suspect the answer is that unless you have a lot of scientific training and do a lot of research, it’s basically impossible to figure out just how huckstery all the hucksters really are.

I think this same basic phenomenon explains why some supplements contain heavy metals, why some food contains microplastics, why restaurants use so much butter and salt, why rentals often have crappy insulation, and why most cars seem to only be safe along dimensions included in crash test scores. When consumers can’t tell good from evil, evil triumphs.

Third reason stuff is bad: People have bad taste

Sometimes stuff is bad because people just don’t appreciate the stuff you consider good. Examples are definitionally controversial, but I think this includes restaurants in cities where all restaurants are bad, North American tea, and travel pants. This reason has a blurry boundary with information asymmetries, as seen in ultrasonic humidifiers or products that use Sucralose instead of aspartame for “safety”.

Fourth reason stuff is bad: Pricing power

Finally, sometimes stuff is bad because markets aren’t working. Sometimes a company is selling a product but has some kind of “moat” that makes it hard for anyone else to compete with them, e.g. because of some technological or regulatory barrier, control of some key resource or location, intellectual property, a beloved brand, or network effects.

If that’s true, then those companies don’t have to worry as much about someone else stealing their business, and so (because everyone is axiomatically greedy) they will find ways to make their product cheaper and/or raise prices up until the price is equal to the full value it provides to the marginal consumer.

Conclusion

Why is food so expensive at sporting events? Yes, people have no alternatives. But people know food is expensive at sporting events. And they don’t like it. Instead of selling water for $17, why don’t venues sell water for $2 and raise ticket prices instead? I don’t know. Probably something complicated, like that expensive food allows you to extract extra money from rich people without losing business from non-rich people.

So of course dating apps would love to string people along for years instead of finding them long-term relationships, so they keep paying money each month. I wouldn’t be surprised if some people at those companies have literally thought, “Maybe we should string people along for years instead of finding them long-term relationships, so they keep paying money each month, I love money so much.”

But if they are actually doing that (which is unclear to me) or if they are bad in some other way, then how do they get away with it? Why doesn’t someone else create a competing app that’s better and thereby steal all their business? It seems like the answer has to be either “because that’s impossible” or “because people don’t really want that”. That’s where the mystery begins.

Dating: A mysterious constellation of facts

2025-10-30 08:00:00

Here are a few things that seem to be true:

  1. Dating apps are very popular.
  2. Lots of people hate dating apps.
  3. They hate them so much that there’s supposedly a resurgence in alternatives like speed dating.

None of those are too controversial, I think. (Let’s stress supposedly in #3.) But if you stare at them for a while, it’s hard to see how they can all be true at the same time.

Because, why do people hate dating apps? People complain that they’re bad in various ways, such as being ineffective, dehumanizing, or expensive. (And such small portions!) But if they’re bad, then why? Technologically speaking, a dating app is not difficult to make. If dating apps are so bad, why don’t new non-bad ones emerge and outcompete them?

The typical answer is network effects. A dating app’s value depends on how many other people are on it. So everyone gravitates to the popular ones and eventually most of the market is captured by a few winners. To displace them, you’d have to spend a huge amount of money on advertising. So—the theory goes—the winners are an oligopoly that gleefully focus on extracting money from their clients instead of making those clients happy.

That isn’t obviously wrong. Match Group (which owns Tinder, Match, Plenty of Fish, OK Cupid, Hinge, and many others) has recently had an operating margin of ~25%. That’s more like a crazy-profitable entrenched tech company (Apple manages ~30%) than a nervous business in a crowded market.

But wait a second. How many people go to a speed dating event? Maybe 30? I don’t know if the speed dating “resurgence” is real, but it doesn’t matter. Some people definitely do find love at real-life events with small numbers of people. If that’s possible, then shouldn’t it also be possible to create a dating app that’s useful even with only a small number of users? Meaning good apps should have emerged long ago and displaced the crappy incumbents? And so the surviving dating apps should be non-hated?

We’ve got ourselves a contradiction. So something is wrong with that argument. But what?

Theory 1: Selection

Perhaps speed dating attendees are more likely to be good matches than people on dating apps. This might be true because they tend to be similar in terms of income, education, etc., and people tend to mate assortatively. People who go to such events might also have some similarities in terms of personality or what they’re looking for in a relationship.

You could also theorize that people at speed dating events are higher “quality”. For example, maybe it’s easier to conceal negative traits on dating apps than it is in person. If so, this might lead to some kind of adverse selection where people without secret negative traits get frustrated and stop using the apps.

I’m not sure either of those are true. But even if they are, consider the magnitudes. While a speed dating event might have 30 people, a dating app in a large city could easily have 30,000 users. While the fraction of good matches might be lower on a dating app, the absolute number is still surely far higher.
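
To make the magnitude point concrete, here is a toy comparison; both match rates below are pure assumptions for illustration, not estimates of anything.

```python
# Toy comparison of absolute numbers of plausible matches (rates are made up).
speed_dating_pool, speed_dating_match_rate = 30, 0.10  # assumed 10% match rate
app_pool, app_match_rate = 30_000, 0.001               # assumed 0.1% match rate

print(speed_dating_pool * speed_dating_match_rate)  # ~3 plausible matches
print(app_pool * app_match_rate)                     # ~30 plausible matches
```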

Theory 2: Bandwidth

Perhaps even if you have fewer potential matches at a speed dating event, you have better odds of actually finding them, because in-person interactions reveal information that dating apps don’t.

People often complain that dating apps are superficial, that there’s too much focus on pictures. Personally, I don’t think pictures deserve so much criticism. Yes, they show how hot you are. But pictures also give lots of information about important non-superficial things, like your personality, values, social class, and lifestyle. I’m convinced people use pictures for all that stuff as much as hotness.

But you know what’s even better than pictures? Actually talking to someone!

Many people seem to think that a few minutes of small talk isn’t enough time to learn anything about someone. Personally, I think evolution spent millions of years training us to do exactly that. I’d even claim that this is why small talk exists.

(I have friends with varying levels of extroversion and agreeableness, but all of my friends seem to have high openness to experience. When I meet someone new, I’m convinced I can guess their openness to ±10% by the time they’ve completed five sentences.)

So maybe the information a dating app provides just isn’t all that useful compared to a few minutes of casual conversation. If so, then dating apps might be incredibly inefficient. You have to go through some silly texting courtship ritual, set up a time to meet, physically go there, and then pretend to smile for an hour even if you immediately hate them.

Under this theory, dating apps provide a tiny amount of information about a gigantic pool of people, while speed dating provides a ton of information about a small number of people. Maybe that’s a win, at least sometimes.

Theory 3: Behavior

Maybe the benefit of real-life events isn’t that they provide more information, but that they change how we behave.

For example, maybe people are nicer in person? Because only then can we sense that others are also sentient beings with internal lives and so on?

I’m pretty sure that’s true. But it’s not obvious it helps with our mystery, since people from dating apps eventually meet in person, too. If they’re still nice when they do, then this just resolves into “in-person interactions provide more information”, and is already covered by the previous theory. To help resolve our mystery, you’d need to claim that people at real-life events act differently than they do when meeting up as a result of a dating app.

That could happen as a result of a “behavioral equilibrium”. Some people take dating apps seriously and some take them casually. But it’s hard to tell what category someone else is in, so everyone proceeds with caution. But by showing up at an in-person event, everyone has demonstrated some level of seriousness. And maybe this makes everyone behave differently? Perhaps, but I don’t really see it.

Obscure theories

I can think of a few other possible explanations.

  1. Maybe speed dating serves a niche. Just like Fitafy / Bristlr / High There! serve people who love fitness / beards / marijuana, maybe speed dating just serves some small-ish fraction of the population but not others.

  2. Maybe the people who succeed at speed dating would also have succeeded no matter what. So they don’t offer any general lessons.

  3. Maybe creating a dating app is in fact very technologically difficult. So while the dating apps are profit-extracting oligopolies, that’s because of technological moat, not network effects.

I don’t really buy any of these.

Drumroll

So what’s really happening? I am not confident, but here’s my best guess:

  1. Selection is not a major factor.

  2. The high bandwidth of in-person interactions is a major factor.

  3. The fact that people are nicer or more open-minded in person is not a major factor, other than through making in-person interactions higher bandwidth.

  4. None of the obscure theories are major factors.

  5. Dating apps are an oligopoly, driven by network effects.

Basically, a key “filter” in finding love is finding someone where you both feel optimistic after talking for five minutes. Speed dating is (somewhat / sometimes) effective because it efficiently crams a lot of people into the top of that filter.

Meanwhile, because dating apps are low-bandwidth, they need a large pool to be viable. Thus, they’re subject to network effects, and the winners can turn the screws to extract maximum profits from their users.

Partly I’m not confident in that story just because it has so many moving parts. But something else worries me too. If it’s true, then why aren’t dating apps trying harder to provide that same information that in-person interactions do?

If anything, I understand they’re moving in the opposite direction. Maybe Match Group would have no interest in that, since they’re busy enjoying their precious network effects. But why not startups? Hell, why not philanthropies? (Think of all the utility you could create!) For the above story to hold together, you have to believe that it’s a very difficult problem.

Pointing machines, population pyramids, post office scandal, type species, and horse urine

2025-10-23 08:00:00

I recently wondered if explainer posts might go extinct. In response, you all assured me that I have nothing to worry about, because you already don’t care about my explanations—you just like it when I point at stuff.

Well OK then!

Pointing machines

How did Michelangelo make this?

[Image: Michelangelo’s David]

What I mean is—marble is unforgiving. If you accidentally remove some material, it’s gone. You can’t fix it by adding another layer of paint. Did Michelangelo somehow plan everything out in advance and then execute everything perfectly the first time, with no mistakes?

I learned a few years ago that sculptors have long used a simple but ingenious invention called a pointing machine. This allows you to create a sculpture in clay and, in effect, “copy” it into stone. That sounds magical, but it’s really just an articulated pointer that you move between anchor points attached to the (finished) clay and the (incomplete) stone sculpture. If you position the pointer based on the clay sculpture and then move it to the stone sculpture, anything the pointer hits should be removed. Repeat that thousands of times and the sculpture is copied.

[Image: pointing machines]

I was sad to learn that Michelangelo was a talentless hack, but I dutifully spent the last few years telling everyone that all sculptures were made this way and actually sculpture is extremely easy, etc.

Last week I noticed that Michelangelo died in 1564, which was over 200 years before the pointing machine was invented.

Except, apparently since ancient times sculptors have used a technique sometimes called the “compass method” which is sort of like a pointing machine except more complex and involving a variety of tools and measurements. This was used by the ancient Romans to make copies of older Greek sculptures. And most people seem to think that Michelangelo probably did use that.

Population pyramids

I think this is one of the greatest data visualizations ever invented.

[Image: population pyramid]

Sure, it’s basically just a histogram turned on the side. But compare India’s smooth and calm teardrop with China’s jagged chaos. There aren’t many charts that simultaneously tell you so much about the past and the future.

It turns out that this visualization was invented by Francis Amasa Walker. He was apparently such an impressive person that this invention doesn’t even merit a mention on his Wikipedia page, but he used it in creating these visualizations for the 1874 US atlas:

[Image: population pyramids from the 1874 US atlas]

I think those are the first population pyramids ever made. The atlas also contains many other beautiful visualizations, for example this one of church attendance:

[Image: church attendance visualization]

Or this one on debt and public expenditures:

[Image: debt and public expenditures visualization]

Post office scandal

If you haven’t heard about the British Post Office scandal, here’s what happened: In 1999, Fujitsu delivered buggy accounting software to the British Post Office that incorrectly determined that thousands of subpostmasters were stealing. Based on this faulty data, the post office prosecuted and convicted close to a thousand people, of whom 236 went to prison. Many others lost their jobs or were forced to “pay back” the “shortfalls” from their own pockets.

Of course, this is infuriating. But beyond that, I notice I am confused. It doesn’t seem like anyone wanted to hurt all those subpostmasters. The cause seems to be only arrogance, stupidity, and negligence.

I would have predicted that before you could punish thousands of people based on the same piece of fake evidence, something would happen that would stop you. Obviously, I was wrong. But I find it hard to think of good historical analogies. Maybe negligence in police crime labs or convictions of parents for “shaken baby syndrome”? Neither of these is a good analogy.

One theory is that the post office scandal happened because the post office—the “victim”—had the power to itself bring prosecutions. But in hundreds of cases things were done the normal way, with police “investigating” the alleged crimes and then sending the cases to be brought by normal prosecutors. Many cases were also pursued in Scotland and Northern Ireland, where the Post Office lacks this power.

Another theory would be:

  1. Prosecutors have incredible latitude in choosing who they want to prosecute.

  2. Like other humans, some prosecutors are arrogant/stupid/negligent.

  3. It’s actually pretty easy for prosecutors to convict an innocent person if they really want to, as long as they have some kind of vaguely-incriminating evidence.

Under this theory, similar miscarriages of justice happen frequently. But they only involve a single person, and so they don’t make the news.

Type species

Type species - Wikipedia

I link to this not because it’s interesting but because it’s so impressively incomprehensible. If there’s someone nearby, I challenge you to read this to them without losing composure.

In zoological nomenclature, a type species (species typica) is the species whose name is considered to be permanently taxonomically associated with the name of a genus or subgenus. In other words, it is the species that contains the biological type specimen or specimens of the genus or subgenus. A similar concept is used for groups ranked above the genus and called a type genus.

In botanical nomenclature, these terms have no formal standing under the code of nomenclature, but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types.

In bacteriology, a type species is assigned for each genus. Whether or not currently recognized as valid, every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type.

Can such a thing be created unintentionally? I tried to parody this by creating an equally-useless description of an everyday object. But in the end, I don’t think it’s very funny, because it’s almost impossible to create something worse than the above passage.

A funnel is a tool first created in antiquity with rudimentary versions fabricated from organic substrates such as cucurbitaceae or broadleaf foliage by early hominid cultures. The etymology of fundibulum (Latin), provides limited insight into its functional parameters, despite its characteristic broad proximal aperture and a constricted distal orifice.

Compositionally, funnels may comprise organic polymers or inorganic compounds, including but not limited to, synthetic plastics or metallic alloys and may range in weight from several grams to multiple kilograms. Geometrically, the device exhibits a truncated conical or pyramidal morphology, featuring an internal declination angle generally between 30 and 60 degrees.

Within cultural semiotics, funnels frequently manifest in artistic representations, serving as an emblem of domestic ephemerality.

The good news is that the Sri Lankan elephant is the type species for the Asian elephant, whatever that is.

Hormones

I previously mentioned that some hormonal medications used to be made from the urine of pregnant mares. But only after reading The History of Estrogen Therapy (h/t SCPantera) did I realize that it’s right there in the name:

    Premarin = PREgnant MARe’s urINe

If you—like me—struggle to believe that a pharmaceutical company would actually do this, note that this was in 1941. Even earlier, the urine of pregnant humans was used. Tragically, this was marketed as “Emmenin” rather than “Prehumin”.

Will the explainer post go extinct?

2025-10-09 08:00:00

Will short-form non-fiction internet writing go extinct? This may seem like a strange question to ask. After all, short-form non-fiction internet writing is currently, if anything, on the ascent—at least for politics, money, and culture war—driven by the shocking discovery that many people will pay the cost equivalent of four hardback books each year to support their favorite internet writers.

But, particularly for “explainer” posts, the long-term prospects seem dim. I write about random stuff and then send it to you. If you just want to understand something, why would you read my rambling if AI could explain it equally well, in a style customized for your tastes, and then patiently answer your questions forever?

I mean, say you can explain some topic better than AI. That’s cool, but once you’ve published your explanation, AI companies will put it in their datasets, thankyouverymuch, after which AIs will start regurgitating your explanation. And then—wait a second—suddenly you can’t explain that topic better than AI anymore.

This is all perfectly legal, since you can’t copyright ideas, only presentations of ideas. It used to take work to create a new presentation of someone else’s ideas. And there used to be a social norm to give credit to whoever first came up with some idea. This created incentives to create ideas, even if they weren’t legally protected. But AI can instantly slap a new presentation on your ideas, and no one expects AI to give credit for its training data. Why spend time creating content just so it can be nostrified by the Borg? And why read other humans if the Borg will curate their best material for you?

So will the explainer post survive?

Let’s start with an easier question: Already today, AI will happily explain anything. Yet many people read human-written explanations anyway. Why do they do that? I can think of seven reasons:

  1. Accuracy. Current AI is unreliable. If I ask about information theory or how to replace the battery on my laptop, it’s very impressive but makes some mistakes. But if I ask about heritability, the answers are three balls of gibberish stacked on top of each other in a trench-coat. Of course, random humans make mistakes, too. But if you find a quality human source, it is far less likely to contain egregious mistakes. This is particularly true across “large contexts” and for tasks where solutions are hard to verify.

  2. AI is boring. At least, writing from current popular AI tools is boring, by default.

  3. Parasocial relationships. If I’ve been reading someone for a long time, I start to feel like I have a kind of relationship with them. If you’ve followed this blog for a long time, you might feel like you have a relationship with me. Calling these “parasocial relationships” makes them sound sinister, but I think this is normal and actually a clever way of using our tribal-band programming to help us navigate the modern world. Just like in “real” relationships, when I read someone I have a parasocial relationship with, I have extra context that makes it easier to understand them, I feel a sense of human connection, and I feel like I’m getting a sort of update on their “story”. I don’t get any of that with (current) AI.

  4. Skin in the game. If a human screws something up, it’s embarrassing. They lose respect and readers. On a meta-level, AI companies have similar incentives not to screw things up. But AI itself doesn’t (seem to) care. Human nature makes it easier to trust someone when we know they’re putting some kind of reputation on the line.

  5. Conspicuous consumption. Since I read Reasons and Persons, I can brag to everyone that I read Reasons and Persons. If I had read some equally good AI-written book, probably no one would care.

  6. Coordination points. Partly, I read Reasons and Persons because I liked it. And I guess maybe I read it so I can brag about the fact that I read it. (Hey everyone, have I mentioned that I read Reasons and Persons?) But I also read it because other people read it. When I talk to those people, we have a shared vocabulary and set of ideas that makes it easier to talk about other things. This wouldn’t work if we had all explored the same ideas through fragmented AI “tutoring”.

  7. Change is slow. Here we are 600 years after the invention of the printing press, and the primary mode of advanced education is still for people to physically go to a room where an expert is talking and write down stuff the expert says. If we’re that slow to adapt, then maybe we read human-written explainers simply out of habit.

How much do each of these really matter? How much confidence should they give us that explainer posts will still exist a decade from now? Let’s handle them in reverse order.

Argument 7: Change is slow

Sure, society takes time to adapt to technological change. But I don’t think college lectures are a good example of this, or that they’re a medieval relic that only survives out of inertia. On the contrary, I think they survive because we haven’t really found any other model of education that’s fundamentally better.

Take paper letters. One hundred years ago, these were the primary form of long-distance communication. But after the telephone was widely distributed, it only took it a few decades to kill the letter in almost all cases where the phone is better. When email and texting showed up, they killed off almost all remaining use of paper letters. They still exist, but they’re niche.

The same basic story holds for horses, the telegraph, card catalogs, slide rules, VHS tapes, vacuum tubes, steam engines, ice boxes, answering machines, sailboats, typewriters, the short story, and the divine right of kings. When we have something that’s actually better, we drop the old ways pretty quickly. Inertia alone might keep explainer posts alive for a few years, but not more than that.

Arguments 5 and 6: Coordination points and conspicuous consumption

Western civilization began with the Iliad. Or, at least, we’ve decided to pretend it did. If you read the Iliad, then you can brag about reading the Iliad (good) and you have more context to engage with everyone else who read it (very good). So people keep reading the Iliad. I think this will continue indefinitely.

But so what? The Iliad is in that position because people have been reading/listening to it for thousands of years. But if you write something new and there’s no “normal” reason to read it, then it has no way to establish that kind of self-sustaining legacy.

Non-fiction in general has a very short half-life. And even when coordination points exist, people often rely on secondary sources anyway. Personally, I’ve tried to read Wittgenstein, but I found it incomprehensible. Yet I think I’ve absorbed his most useful idea by reading other people’s descriptions. I wonder how much “Wittgenstein” is really a source at this point as opposed to a label.

Also… explainer posts typically aren’t the Iliad. So I don’t think this will do much to keep explainer posts alive, either.

(Aside: I’ve never understood why philosophy is so fixated on original sources, instead of continually developing new presentations of old ideas like math and physics do. Is this related to the fact that philosophers go to conferences and literally read their papers out loud?)

Argument 4: Skin in the game

I trust people more when I know they’re putting their reputation on the line, for the same reason I trust restaurants more when I know they rely on repeat customers. AI doesn’t give me this same reason for confidence.

But so what? This is a loose heuristic. If AI were truly more accurate than human writing, I’m sure most people would learn to trust it in a matter of weeks. If AI was ultra-reliable but people really needed someone to hold accountable, AI companies could perhaps offer some kind of “insurance”. So I don’t see this as keeping explainers alive, either.

Argument 3: Parasocial relationships

Humans are social creatures. If bears had a secret bear Wikipedia and you went to the entry on humans, it would surely say, “Humans are obsessed with relationships.” I feel confident this will remain true.

I also feel confident that we will continue to be interested in what people we like and respect think about matters of fact. It seems plausible that we’ll continue to enjoy getting that information bundled together with little jokes or bursts of personality. So I expect our social instincts will provide at least some reason for explainers to survive.

But how strong will this effect be? When explainer posts are read today, what fraction of readers are familiar enough to have a parasocial relationship with the author? Maybe 40%? And when people are familiar, what fraction of their motivation comes from the parasocial relationship, as opposed to just wanting to understand the content? Maybe another 40%? Those are made-up numbers, but multiply them together (0.4 × 0.4 = 16%) and it’s hard to avoid the conclusion that parasocial relationships explain only a small fraction of why people read explainers today.

And there’s another issue. How do parasocial relationships get started if there’s no other reason to read someone? These might keep established authors going for a while at reduced levels, but it seems like it would make it hard for new people to rise up.

Argument 2: Boring-ness

Maybe popular AIs are a bit boring, today. But I think this is mostly due to the final reinforcement learning step. If you interact with “base models”, they are very good at picking up style cues and not boring at all. So I highly doubt that there’s some fundamental limitation here.

And anyway, does anyone care? If you just want to understand why vitamin D is technically a type of steroid, how much does style really matter, as opposed to clarity? I think style mostly matters in the context of a parasocial relationship, meaning we’ve already accounted for it above.

Argument 1: Accuracy

I don’t know for sure if AI will ever be as accurate as a high-quality human source. Though it seems very unlikely that physics somehow precludes creating systems that are more accurate than humans.

But if AI is that accurate, then I think this exercise suggests that explainer posts are basically toast. All the above arguments are just too weak to explain most of why people read human-written explainers now. So I think it’s mostly just accuracy. When that human advantage goes, I expect human-written explainers to go with it.

Counter-arguments

I can think of three main counterarguments.

First, maybe AI will fix discovery. Currently, potential readers of explainers often have no way to find potential writers. Search engines have utterly capitulated to SEO spam. Social media soft-bans outward links. If you write for a long time, you can build up an audience, but few people have the time and determination to do that. If you write a single explainer in your life, no one will read it. The rare exceptions to this rule either come from people contributing to established (non-social media) communities or from people with exceptional social connections. So—this argument goes—most potential readers don’t bother trying to find explainers, and most potential writers don’t bother creating them. If AI solves that matching problem, explainers could thrive.

Second, maybe society will figure out some new way to reward people who create information. Maybe we fool around with intellectual property law. Maybe we create some crazy Xanadu-like system where in order to read some text, you have to first sign a contract to pay the author based on the value you derive, and this is recursively enforced on everyone who’s downstream of you. Hell, maybe AI companies decide to solve the data wall problem by paying people to write stuff. But I doubt it.

Third, maybe explainers will follow a trajectory like chess. Up until perhaps the early 1990s, humans were so much better than computers at chess that computers were irrelevant. After Deep Blue beat Kasparov in 1997, people quickly realized that while computers could beat humans, human+computer teams could still beat computers. This was called Advanced Chess. Within 15-20 years, however, humans became irrelevant. Maybe there will be a similar Advanced Explainer era? (I kid, that era started five years ago.)

TLDR

Will the explainer post go extinct? My guess is mostly yes, if and when AI reaches human-level accuracy.

Incidentally, since there’s so much techno-pessimism these days: I think this outcome would be… great? It’s a little grim to think of humans all communicating with AI instead of each other, yes. But the upside is all of humanity having access to more accurate and accessible explanations of basically everything. If this is the worst effect of AGI, bring it on.

Y’all are over-complicating these AI-risk arguments

2025-10-02 08:00:00

Say an alien spaceship is headed for Earth. It has 30 aliens on it. The aliens are weak and small. They have no weapons and carry no diseases. They breed at rates similar to humans. They are bringing no new technology. No other ships are coming. There’s no trick—except that they each have an IQ of 300. Would you find that concerning?

Of course, the aliens might be great. They might cure cancer and help us reach world peace and higher consciousness. But would you be sure they’d be great?

Suppose you were worried about the aliens but I scoffed, “Tell me specifically how the aliens would hurt us. They’re small and weak! They can’t do anything unless we let them.” Would you find that counter-argument convincing?

I claim that most people would be concerned about the arrival of the aliens, would not be sure that their arrival would be good, and would not find that counter-argument convincing.

I bring this up because most AI-risk arguments I see go something like this:

  1. There will be a fast takeoff in AI capabilities.
  2. Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.
  3. These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

These arguments have always struck me as overcomplicated. So I’d like to submit the following undercomplicated alternative:

  1. Obviously, if an alien race with IQs of 300 were going to arrive on Earth soon, that would be concerning.
  2. In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.
  3. No one knows what AI with an IQ of 300 would be like. So it might as well be an alien.

Our subject for today is: Why might one prefer one of these arguments to the other?

The case for the simple argument

The obvious reason to prefer the simple argument is that it’s more likely to be true. The complex argument has a lot of steps. Personally, I think they’re all individually plausible. But are we really confident that there will be a fast takeoff in AI capabilities and that the AI will pursue dangerous subgoals and that it will thereby gain a decisive strategic advantage?

I find that confidence unreasonable. I’ve often been puzzled why so many seemingly-reasonable people will discuss these arguments without rejecting the confidence.

I think the explanation is that there are implicitly two versions of the complex argument. The “strong” version claims that fast takeoff et al. will happen, while the “weak” version merely claims that it’s a plausible scenario that we should take seriously. It’s often hard to tell which version people are endorsing.

The distinction is crucial, because these two versions have different weaknesses. I find the strong version wildly overconfident. I agree with the weak version, but I still think it’s unsatisfying.

Say you think there’s a >50% chance things do not go as suggested by the complex argument. Maybe there’s a slow takeoff, or maybe the AI can’t build a decisive strategic advantage, whatever. Now what?

Well, maybe everything turns out great and you live for millions of years, exploring the galaxy, reading poetry, meditating, and eating pie. That would be nice. But it also seems possible that humanity still ends up screwed, just in a different way. The complex argument doesn’t speak to what happens when one of the steps fails. This might give the impression that without any of the steps, everything is fine. But that is not the case.

The simple argument is also more convincing. Partly I think that’s because—well—it’s easier to convince people of things when they’re true. But beyond that, the simple argument doesn’t require any new concepts or abstractions, and it leverages our existing intuitions for how more intelligent entities can be dangerous in unexpected ways.

I actually prefer the simple argument in an inverted form: If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. “If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
  2. “There’s no way that AI with an IQ of 300 will arrive within the next few decades.”
  3. “We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.”

I think all those bullets are unbiteable. Hence, I think AI-risk is real.

But if you make the complex argument, then you seem to be left with the burden of arguing for fast takeoff and alignment difficulty and so on. People who hear that argument also often demand an explanation of just how AI could hurt people (“Nanotechnology? Bioweapons? What kind of bioweapon?”) I think this is a mistake for the same reason it would be a mistake to demand to know how a car accident would happen before putting on your seatbelt. As long as the Complex Scenario is possible, it’s a risk we need to manage. But many people don’t look at things that way.

But I think the biggest advantage of the simple argument is something else: It reveals the crux of disagreement.

I’ve talked to many people who find the complex argument completely implausible. Since I think it is plausible—just not a sure thing—I often ask why. People give widely varying reasons. Some claim that alignment will be easy, some that AI will never really be an “agent”, some talk about the dangers of evolved vs. engineered systems, and some have technical arguments based on NP-hardness or the nature of consciousness.

I’ve never made much progress convincing these people to change their minds. I have succeeded in convincing some people that certain arguments don’t work. (For example, I’ve convinced people that NP-hardness and the nature of consciousness are probably irrelevant.) But when people abandon those arguments, they don’t turn around and accept the whole Scenario as plausible. They just switch to different objections.

So I started giving my simple argument instead. When I did this, here’s what I discovered: None of these people actually accept that AI with an IQ of 300 could happen.

Sure, they often say that they accept this. But if you pin them down, they’re inevitably picturing an AI that lacks some core human capability. Often, the AI can prove theorems or answer questions, but it’s not an “agent” that wants things and does stuff and has relationships and makes long-term plans.

So I conjecture that this is the crux of the issue with AI-risk. People who truly accept that AI with an IQ of 300 and all human capabilities may appear are almost always at least somewhat worried about AI-risk. And people who are not worried about AI-risk almost always don’t truly accept that AI with an IQ of 300 could appear. If that’s the crux, then we should get to it as quickly as possible. And that’s done by the simple argument.

The case for the complex argument

I won’t claim to be neutral. As hinted by the title, I started writing this post intending to make the case for the simple argument, and I still think that case is strong. But I figured I should consider arguments for the other side and—there are some good ones.

Above, I suggested that there are two versions of the complex argument: A “strong” version that claims the scenario it lays out will definitely happen, and a “weak” version that merely claims it’s plausible. I rejected the strong version as overconfident. And I rejected the weak version because there are lots of other scenarios where things could also go wrong for humanity, so why give this one so much focus?

Well, there’s also a middle version of the complex argument: You could claim that the scenario it lays out is not certain, but that if things go wrong for humanity, then they will probably go wrong as in that scenario. This avoids both of my objections—it’s less overconfident, and it gives a good reason to focus on this particular scenario.

Personally, I don’t buy it, because I think other bad scenarios like gradual disempowerment are plausible. But maybe I’m wrong. It doesn’t seem crazy to claim that the Complex Scenario captures most of the probability mass of bad outcomes. And if that’s true, I want to know it.

Now, some people suggest favoring certain arguments for the sake of optics: Even if you accept the complex argument, maybe you’d want to make the simple one because it’s more convincing or is better optics for the AI-risk community. (“We don’t want to look like crazy people.”)

Personally, I am allergic to that whole category of argument. I have a strong presumption that you should argue the thing you actually believe, not some watered-down thing you invented because you think it will manipulate people into believing what you want them to believe. So even if my simpler argument is more convincing, so what?

But say you accept the middle version of the complex argument, yet you think my simple argument is more convincing. And say you’re not as bloody-minded as me, so you want to calibrate your messaging to be more effective. Should you use my simple argument? I’m not sure you should.

The typical human bias is to think other people are similar to us. (How many people favor mandatory pet insurance funded by a land-value tax? At least 80%, right?) But as far as I can tell, the situation with AI-risk is the opposite. Most people I know are at least mildly concerned, but have the impression that “normal people” think that AI-risk is science fiction nonsense.

Yet, here are some recent polls:

  • Gallup, June 2-15 2025: “[AI is] very different from the technological advancements that came before, and threatens to harm humans and society” (49% agree)

  • Reuters / Ipsos, August 13-18 2025: “AI could risk the future of humankind” (58% agree)

  • YouGov, March 5-7 2025: “How concerned, if at all, are you about the possibility that artificial intelligence (AI) will cause the end of the human race on Earth?” (37% very or somewhat concerned)

  • YouGov, June 27-30 2025: same question (43% very or somewhat concerned)

Being concerned about AI is hardly a fringe position. People are already worried, and becoming more so.

I used to picture my simple argument as a sensible middle-ground, arguing for taking AI-risk seriously, but not overconfident:

[Image: spectrum diagram 1]

But I’m starting to wonder if my “obvious argument” is in fact obvious, and something that people can figure out on their own. From looking at the polling data, it seems like the actual situation is more like this, with people on the left gradually wandering towards the middle:

[Image: spectrum diagram 2]

If anything, the optics may favor a confident argument over my simple argument. In principle, they suggest similar actions: Move quickly to reduce existential risk. But what I actually see is that most people—even people working on AI—feel powerless and are just sort of clenching up and hoping for the best.

I don’t think you should advocate for something you don’t believe. But if you buy the complex argument, and you’re holding yourself back for the sake of optics, I don’t really see the point.