
Radiology Automation Does Not Generalize to Other Jobs

2025-12-16 22:32:07

Published on December 16, 2025 2:32 PM GMT

  1. The NYT article Your A.I. Radiologist Will Not Be With You Soon reports, “Leaders at OpenAI, Anthropic and other companies in Silicon Valley now predict that A.I. will eclipse humans in most cognitive tasks within a few years… The predicted extinction of radiologists provides a telling case study. So far, A.I. is proving to be a powerful medical tool to increase efficiency and magnify human abilities, rather than take anyone’s job.”[1]

  2. I disagree that this is a “telling case study.”[2] Radiology has several attributes which make it hard to generalize to other jobs:

    1. Patients are legally prohibited from using AI to replace human radiologists.[3] 

    2. Medical providers are legally prohibited from billing for AI radiologists.[4]

    3. Malpractice insurance does not cover AI radiology.[5]

  3. Moreover, the article frames Geoff Hinton as having confidently predicted that AI would replace radiologists, and that prediction as having been proven wrong, but his statement felt to me more like an offhand remark/hope.
  4. Takeaways from this incident I endorse:[6]

    1. Offhand remarks from ML researchers aren’t reliable economic forecasts

    2. People trying to predict the effects of automation/AI capabilities should consider that employees often perform valuable services which aren’t easily captured in evals, such as “bedside manner” and “regulatory capture”

    3. If you have a job where a) your customers are legally prohibited from hiring someone other than you, b) even if an enterprising competitor decides to run the legal risk of replacing you they still have to pay you, and c) anyone who replaces you is likely to be sued, you probably have reasonable job security

  5. Takeaways I don’t endorse:
    1. Radiology’s impacts being less than Hinton thought means that we should disbelieve:
      1. Claims that AI has already driven decreased wages, e.g. Azar et al. 2025 or Brynjolfsson et al. 2025
      2. Claims that future AI could drive wages even lower, e.g. Barnett 2025
      3. Or really any claim which is supported by something more than an offhand remark
    2. Many people work in jobs similar to radiology where e.g. it is illegal to replace them with AI, and therefore we can easily extrapolate from limited wage impacts in radiology to job loss in other sectors of the economy

Appendix: Data and Methodology for the sample of AI Radiology products

Data

The following products were included in my random sample:

| Product | Legally usable by patients? | Notes |
| --- | --- | --- |
| Viz.AI Contact | No | |
| Aidoc | No | |
| HeartFlow FFRct | No | |
| Arterys Cardio DL | No | |
| QuantX | No | |
| ProFound AI for Digital Breast Tomosynthesis | No | |
| OsteoDetect | No | |
| Lunit INSIGHT CXR Triage | No | |
| Caption Guidance | No | Not intended to assist radiologists; intended to assist ultrasound techs. |
| SubtlePET | No | |

Methodology

I asked GPT 5.1 to randomly sample products and record whether they were legally usable by patients. Transcript here. I then manually verified each product.
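
For anyone who wants to re-run the tally, here is a minimal sketch of the counting step. The product names and verdicts are copied from the table above; the data structure and variable names are purely illustrative.

```python
# Tally of the random sample from the appendix table.
# Verdicts ("legally usable by patients?") are transcribed from the post;
# the dict itself is just an illustrative way to hold them.
sample = {
    "Viz.AI Contact": False,
    "Aidoc": False,
    "HeartFlow FFRct": False,
    "Arterys Cardio DL": False,
    "QuantX": False,
    "ProFound AI for Digital Breast Tomosynthesis": False,
    "OsteoDetect": False,
    "Lunit INSIGHT CXR Triage": False,
    "Caption Guidance": False,
    "SubtlePET": False,
}

usable = sum(sample.values())
print(f"{usable} of {len(sample)} sampled products are legally usable by patients")
# Expected output: "0 of 10 sampled products are legally usable by patients"
```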

  1. ^

     Note that, because the supply of radiologists is artificially limited, a drop in demand needn’t actually cause a change in the number of radiologists employed. It would be expected to decrease their wages though. In the rest of this post, I will respond to a steelman of the NYT which is talking about a decrease in the wage of radiologists, not a decrease in the number employed.

  2. ^

     I get vague vibes from the NYT article like “predictions of job loss from AI automation aren’t trustworthy”, but they don’t make a very clear argument, so it’s possible that I am misunderstanding their point. My apologies if so. Thanks to Yarrow for this point.

  3. ^

     I randomly sampled 10 AI radiology products and found that patients are legally allowed to purchase 0 of them. See appendix.

  4. ^

     Medical billing is complex, but, roughly, providers are reimbursed for the labor they put into seeing the patient, not the patient’s improved outcomes. In my sample of 10 AI products, only 1 of the ten had a CPT code (meaning that for the other nine, providers can’t bill even $0.01 more for using the AI product than for using a non-AI tool), and the one that did could only be billed in combination with human labor.

  5. ^

     Possibly at some point in the future, juries will acknowledge the supremacy of AI systems, but I doubt a present day jury would be very sympathetic to a hospital that replaced human radiologists with an AI that made a mistake. Some insurers have a blanket exclusion for AI-caused malpractice. Radiology has one of the highest rates of malpractice lawsuits. Thanks to Jason for this point.

  6. ^

     Works in Progress has an article which goes into more detail about the current state of radiology automation and is helpful for understanding it, though I think they are underselling the regulatory barriers.




Fermi paradox solutions map

2025-12-16 22:21:36

Published on December 16, 2025 2:21 PM GMT

I heard that there are around 100 solutions to the Fermi paradox. Here I have tried to collect the largest possible list, and I am open to new suggestions. Download the pdf with links here.

Also, see here an AI-updated version of the map, which includes probabilities and Global vs Local solution distinctions. If you click on any text, it will provide a more detailed explanation. But this AI version may have subtle errors. The probabilities in it are AI-generated and are just illustrative.
 




According to doctors, how feasible is preserving the dying for future revival?

2025-12-16 21:18:05

Published on December 16, 2025 1:18 PM GMT

Whenever I give a public talk - after I’ve finished explaining how neuroscientists can now selectively manipulate or erase a mouse’s memories, or how patients sometimes have their heart and brain activity entirely stopped during surgery by cooling them below 20°C - and I’ve finished making the case for preserving the brains of dying people so as to give them a chance of future revival, I’m inevitably asked the following question during Q&A:

“OK, but what do your colleagues think?”

It’s a good question.

When I say ‘brain preservation may be able to stop people from dying’, I’m making a weird, bold claim. Weird, bold claims are mostly wrong. Still, if the claim is being espoused by a community of relevant experts - rather than just a few enthusiastic advocates - it’s much more likely to be worth taking seriously.

Of course, in an ideal world, I wouldn’t need to cite expert opinion at all; I’d just be able to show you unambiguous evidence of the procedure working, from preservation through to revival. I dream of the day when the first revived laboratory mouse - revived from cryogenic temperatures, or uploaded into a robotic murine body - runs through a maze just as it did before it was preserved. (Perhaps perversely, I would even appreciate some definitive proof that preservation cannot work - at least then we’d know this avenue was closed, and could turn our efforts elsewhere.)

But preservation is a two-part technology. The preservation half happens now; the revival half happens (maybe) in the future. We can’t run the full experiment just yet. For now, we’re stuck doing what we do for any consequential question where we can’t just wait and see (as with climate change projections or assessing the trajectory of AI development): we gather the best available evidence, and we ask the people most qualified to interpret it.

So, to return to my perennial audience question then, what do my scientific and medical colleagues think of the prospects of preservation?

Last year, Andrew McKenzie, Emil Kendziorra, and I surveyed 312 neuroscientists about the neurophysiological foundations of memory and whether preservation could enable indefinite memory retention. We found the typical respondent believed there’s a 40% probability that a well-preserved brain retains its long-term memories and could eventually be uploaded (publication, blog post, news article).

This year, we turned to the medical community. Using Sermo, an online survey platform for healthcare workers in the US, we asked 334 doctors how likely preservation is to work, whether they’d endorse medical interventions that could improve preservation outcomes, and what they thought about the ethics of the whole enterprise.

If you’re a nerd, here’s the preprint, the survey, and the raw data. Otherwise, read on for a summary.

Feasibility

Our central question was how likely doctors thought preservation was to actually work. To assess this, we presented them with an idealised scenario: an elderly patient who wanted preservation, who suffered a cardiac arrest, and who was preserved within minutes. Follow-up imaging and biopsies confirmed intact brain structure down to the synaptic level. Given all that, how probable did they think it was that this patient could eventually be revived?

The median response was 25%.

‘During end-of-life planning discussions, an elderly, cognitively-intact patient expressed a desire for preservation. Imagine that later they suffer a cardiac arrest and are successfully preserved within minutes of the event. Follow-up imaging and brain biopsies show intact brain structure down to the synaptic level, including the spatial distribution of key biomolecules. How probable do you think it is that a significant amount of the neurally-encoded information required for long-term memory and personality is still preserved in their brain, such that it may be technologically possible to revive this patient, even in the distant future?’

We also asked the question more qualitatively: “How plausible do you find the idea that preservation could potentially allow for some form of revival in the future?” Here, 27.9% of respondents found it somewhat or very plausible, while 47% found it somewhat or very implausible. The remaining quarter were neutral - which, given the uncertainty involved, seems fair enough.

When we broke the probability estimates down by specialty, nothing dramatic emerged. Neurosurgeons were slightly more optimistic, palliative care doctors slightly more pessimistic, but medians hovered between 20% and 30% across the board. (We weren’t really powered to detect subtle specialty differences anyway, so I wouldn’t read too much into the variation.)
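
For anyone who wants to recompute these numbers from the raw data linked above, a sketch along the following lines should work. The file name and column names here are assumptions about the layout, not the actual schema of the released data.

```python
# Hypothetical sketch: summarise probability estimates overall and by specialty.
# Assumes a CSV with one row per respondent and columns "specialty" and
# "probability_estimate" (0-100); the real file may be organised differently.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical filename

# Overall median (the post reports 25%)
print("Overall median:", df["probability_estimate"].median())

# Median by specialty (the post reports medians of roughly 20-30%)
print(df.groupby("specialty")["probability_estimate"].median())
```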

Interventions...

[See the rest at the link above]




A friction in my dealings with friends who have not yet bought into the reality of AI risk

2025-12-16 16:12:00

Published on December 16, 2025 8:12 AM GMT

(This is a cross-post of my blog post at Crunch Time for Humanity: https://haggstrom.substack.com/p/a-friction-in-my-dealings-with-friends)

A few months ago I was invited to a panel discussion whose title (translated from Swedish) was AI: opportunities and fears. I didn’t quite like the ring of this, because it seemed to me that “fears” could be read as a suggestion that the kind of AI risk I like to talk about at public events is mostly just in my head. My reply to the organizers was therefore something along the lines of “I would be happy to participate, but only if you change the title to AI: opportunities and risks, because I want to focus on the actual risks, the facts and the evidence we are facing, not on fluffy talk about fears and other emotions” — a change they were fine with.

Given the aversion to touchy-feely AI discussions that I expressed then, the fluffy, emotion-laden musings of the present blog post will perhaps come as a surprise. But here we go.

If the psychology I am about to describe rings familiar to some readers, I would be very interested to hear about it in the comments. But preferably only from those of you who are on board with the idea of existential AI risk as a real thing. This is not just because these readers are the most likely to have first-hand experience with the kind of psychology I have in mind, but also because comments (no matter how kindly and empathically phrased) from those who are not yet on board are likely to contribute to exactly the sort of annoying friction I will come to in a moment.

Anyway, enough throat-clearing. The social situation I have in mind, and which happens to me relatively frequently, is when I have a friendly chat with a friend or acquaintance who has not bought into my view of the urgent reality of risks arising from the possibility of a superintelligent AI deciding that it wants to wrest control over the world from humanity; this person knows, however, how engaged I am (professionally and otherwise) with this topic. Since a standard turn in friendly chats is “what have you been up to lately?”, it is perfectly normal and very much to be expected that they ask me about my work on AI risk. Such a conversational direction almost inevitably leads to me painting a somewhat dark image of the situation facing humanity and the prospects of finding a good solution, because I refuse to whitewash this, and I certainly don’t want to give any false impression that I have had (or am about to have) any significant success in my ambition to help mitigate the risk, or even in the subgoal of raising public awareness of the risk. Usually, the darker the discussion gets, the more kindness and compassion my conversation partner feeds into the conversation.

And this is often the point at which I become annoyed. Which seems kind of bad of me — because being met with kindness and compassion is not the sort of thing that one ought to be annoyed with. But my annoyance comes from having a sense that the problem that my conversation partner is addressing is not AI risk itself (which they don’t think is real) but my state of mind.[1] [2] Typically they don’t need to be as blunt as saying “it must be tough living with this fear that the world is about to end”, because this can be expressed with subtler cues.

What happens next varies, not just because the details of the conversation and my exact relation to the conversation partner vary, but also because I have deliberately tried different strategies. I have yet to find an approach that I am happy with. The smoothest way is to steer away from the topic under discussion and move to some lighter conversation topic, but this is unsatisfactory because AI risk and how we view it and address it is a tremendously important topic that I ought to take every opportunity to discuss, rather than avoiding it out of convenience.

Another option is to go into further detail about what I and others can do to mitigate existential AI risk, while ignoring all invitations to discuss my personal psychology. However, from the point of view of my conversation partner there is no real AI risk problem to be solved, and a typical consequence of this is that their side of the conversation will not be very constructive. So it sometimes happens that when the discussion enters the realm of AI governance (as it nowadays tends to do fairly quickly, because as opposed to just 4-5 years ago I no longer believe that technical AI alignment on its own has much chance of saving us from AI catastrophe without assistance from politics and legislation), they will bombard me with questions such as “What about China?”, “What about Trump?”, “What about the relentless market forces?”, and so on. These are all valid questions, but as the deck is currently stacked they are also extremely difficult,[3] to the extent that even a moderately clever discussion partner who is not interested in actually solving the problem but merely in playing devil’s advocate is likely to win the argument and conclude that I have no clear and feasible plan for avoiding catastrophe, so why am I wasting people’s time by going on and on about AI existential risk?[4]

A third option is to declare explicitly the way that I think the conversation partner and I are talking past each other, namely that while I’m talking about the global risk caused by AI, they are talking about the very local problem of what this (perceived) risk does to my mind. I will then go on to explain my insistence on steering the conversation towards the former and much bigger problem by pointing out that talking about the latter problem is to focus on a comparatively trivial symptom rather than on the underlying cause. It’s not that I am unaware of the possibility that my personal well-being might improve if I think less about short AI timelines and AI risk, but I am offended by suggestions that this is a solution: worsening my epistemics via (say) religion or opium would contribute nothing to solving AI risk, and it would be antithetical to who I am.

This can easily take the discussion in a similar direction as in option two above, with a possible difference being that I will repeatedly interject something along the lines of “you keep talking about this as a problem that it falls upon me to solve, while in reality we are all sitting in the same boat with respect to existential AI risk, so that you in fact have as much reason as me to try to work towards a solution where we are not all murdered by superintelligent AIs a few years down the road”. However, on at least one occasion where I’ve employed this option, the conversation turned sour.

I am honestly unsure about how to handle these conversations, given the twin goals of keeping them pleasant and of not missing out on any opportunity to convince my conversation partner about the reality of AI risk and the need to do something about it.

  1. ^

    Note the similarity with my reaction to the panel discussion title I started out with complaining about.

  2. ^

    In a recent LessWrong post, Eliezer Yudkowsky describes a situation not entirely unlike mine:

    “How are you coping with the end of the world?” journalists sometimes ask me, and I sometimes reply, “By remembering that it’s not about me.” They have no hope of understanding what I mean by this, I predict, because to them I am the subject of the story and it has not occurred to them that there’s a whole planet out there too to be the story-subject. I think there’s probably a sense in which the Earth itself is not a real thing to most modern journalists.

    The journalist is imagining a story that is about me, and about whether or not I am going insane, not just because it is an easy cliche to write, but because personality is the only real thing to the journalist.

  3. ^

    One of my current favorite texts about this extremely difficult situation and how to think in a structured and constructive way about it is Early US policy priorities for AGI by Nick Marsh over at AI Futures Project.

  4. ^

    Most of this paragraph is taken from my earlier text Pro tip on discussions about AI xrisk: don’t get sidetracked, which continues:

    And here’s the thing. Many of those who play the devil’s advocate in this way will be aiming for exactly that turn of the conversation, and will implicitly and possibly unconsciously believe that at that point, they have arrived at a reductio ad absurdum where the assumption that AI xrisk is real has been shown to be absurd and therefore false. But the reasoning leading to this reductio doesn’t work, because it relies on (something like) the assumption that the universe is a sufficiently benign place to not put humanity in a situation where we are utterly doomed. Although this assumption is central to various Christian thinkers, it is in fact unwarranted, a horrible realization which is core to the so-called Deep Atheism of Eliezer Yudkowsky, further elaborated in recent work by Joe Carlsmith.

    To reiterate, I do think that the AI governance questions on how to stop actors from building an apocalyptically dangerous AI are important, and I am very interested in discussing them. They are also difficult — difficult enough that I don’t know of any path forward that will clearly work, yet we have far from exhausted all such possibilities, so the challenges cannot at this stage be dismissed as impossible. I want to explore potential ways forward in intellectual exchanges, but am only prepared to do it with someone who actually wants to help, because the field is so full of real difficulties of which we who work in it are so highly aware that our primary need is not for additional devil’s advocates to repeat these difficulties to us. Our primary need is for the discussions to be serious and constructive, and for that we need discussion partners who take seriously the possibility of AI xrisk being real.




A Rationalist Christmas

2025-12-16 15:23:57

Published on December 16, 2025 7:23 AM GMT

My wife and I are veering off the path of some of the typical American Christmas traditions. In our experience, Christmas with kids is a lot of things it shouldn't be: consumerist, stressful, and overwhelming. That list should come as no surprise to the reader as these are THE common Christmas complaints. What surprises me is that, in the face of these complaints, many people repeat the same traditions over and over again hoping that maybe, this year, Christmas will be more jolly.

There are various reasons for this tradition lock-in (some of which I will get into later). For now, I will note that the current state of affairs is particularly sad because of what Christmas could be to those that celebrate it. In the darkest time of year, parents and children get time off from school to be together. Togetherness and joy set against a sad season should be the hallmarks of Christmas. Any traditions we follow should add to rather than detract from this vision.

Below is a list of traditions my wife and I have altered to better suit our Christmas needs. Before I jump in, a quick table-setter: I have a 2 y/o and a 5 y/o. We have implemented several of these ideas in past years, but this will be the first year we implement all the ideas together. I can report back on how it goes.

Gift Giving

There's a lot you could say about gift-giving. Buying gifts for others is often Pareto inefficient. Cheap or suboptimal gifts seem wasteful and bad for the environment. Lots of gifts at once (particularly for children) can be overwhelming and overstimulating.[1] 

Between ourselves, my wife and I have mostly solved the problem of gift giving. We instruct each other to purchase the practical items we need, and these purchases may or may not fall on Christmas.

But with our kids, we've found that we cannot simply ignore this tradition.[2] Christmas as "super awesome toy day" has made its way into their psyche through friends, children's books, songs, and media. Each night, we like to ask for a "rose, thorn, and bud" from our son to have him talk about what he liked and disliked about his day, and what he looks forward to in the coming days. Christmas has been a bud for three weeks straight now, and not because of my grand vision of "togetherness".

So, for us, there must be gifts. And I think gifts have their place—I am not allergic to the joy of children. But we've selected the following practices to optimize for joy, simplicity, and efficiency.

  • We will find several lesser used toys in our house and place them under the tree. This is to demonstrate gratitude for the things we have and encourage rediscovery.
  • We created our own simple "catalogue" (6 pictures printed on 1 white page) where our kids can circle one toy they want. We picked things that are sturdy and likely to be enjoyed over a longer time period as the options (so no to fart-noise putty, yes to magnet tiles).
  • We will supplement our gifts under the tree with 6 library books we thought they would love, wrapped up because unwrapping things is fun. I will do a little expectation setting with my kids so they  will understand/be excited about this.
  • No one will be rushed on Christmas Day to move from present-to-present. If either kid wants to pause to play with their gift or read a book, they can take as much time as they want. 

Stocking Stuffers

As with the gifts, we take pride in not buying throwaway junk that will spend the majority of its second life (lives?) becoming microplastics in our water supply. We're going to fill the stockings with oranges, two pieces of dark chocolate, and hard-to-open nuts (e.g., walnuts in the shell) along with a normal-looking kitchen nutcracker.[3]

The pièce de résistance will be waiting at the bottom of the stocking: nattou packs (Japanese fermented soy beans)—my kids' favorite food. My kids are weird. We also are not Japanese, but they are 100% into it and typically deplete any nattou stores we purchase within 1 or 2 days.

I'll note here that Christmas doesn't necessarily have to be over-the-top toy and sugar day. Nattou will spark joy for my kids, and maybe something else will work for other kids (coconuts?).

The Tree

We decorate our houseplant each year with crafts and ornaments the kids make or bring from school. It works!

Santa Claus

I guess we've become the parents that the other parents are going to be mad at. We've been open with our kids about the Big Man. We frame him as a metaphor for everyone who gives something to someone else on Christmas Day.[4]

We've also told our oldest that some parents like to pretend Santa is real, and he should try not telling the other kids. Anyways, sorry in advance.

Interestingly, my wife and I agreed to this approach from different angles. I view transparency here as a simple extension of the relationship I want with my kids: one where they know I tell them what I really think about any topic. 

My wife, who is more religiously minded than I am, agrees with that approach and also thinks that lying to the kids about Santa Claus sets them up for a faith crisis later on. Interestingly, she knows several religious people who, in middle school, pattern-matched pretty quickly from "All the adults in my life lied to me about Santa Claus" to "Everyone is lying to me about God."

The Christmas Day Experience

In lieu of an overwhelming number of gifts, we are aiming to enjoy the day together reading books, playing with the new toy, and (taking cues from the kids) doing a few simple crafts.

We've already mentioned trying to crack nuts. We also got some coffee filters to make snowflakes, and we got some fun sprinkles to decorate cookies together in the evening.

For dinner that evening, we picked something simple and easy to prepare but unique for the occasion: scallops with pasta and white sauce, and an easy salad on the side.


* * * * *

These are the things we're doing this year. Here's to hoping for a jolly, peaceful day!

  1. ^

    Hanukkah seems like it has a better designed gift-giving tradition from this perspective.

  2. ^

    Jehovah's Witnesses would have to confront this, and I am curious what that was like for any readers from that background.

  3. ^

    Not the Christmas-y ones. Bah-humbug!

  4. ^

    If you plan to do this, it may be important to set aside time to make sure you get buy-in for this approach from the grandparents. When my oldest was 3 years old, his grandparents disliked our approach and tried to persuade him that Santa was indeed real—look at these pictures!—and his parents were not telling the truth. This was uncomfortable.




Why do LLMs so often say "It's not a(n) X, it's a Y"?

2025-12-16 09:02:12

Published on December 16, 2025 1:02 AM GMT

There seem to be common patterns in how LLMs write text that are shared across the LLMs of different companies, and these language patterns differ from typical human writing.

How much do we know about why LLMs pick certain patterns? Do we know why they use "It's not a(n) X, it's a Y"? If not, might understanding why they pick these patterns help us better understand how LLMs reason?
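
One low-tech way to start investigating is to count how often the construction appears in a corpus of model outputs. Here is a rough sketch; the regex is only a crude approximation of the pattern, and the corpus file name is hypothetical.

```python
# Rough sketch: count "it's not X, it's Y"-style constructions in a text file
# of model outputs. The regex is a crude approximation and the filename is
# hypothetical.
import re

pattern = re.compile(
    r"\b(?:it|this|that)['’]s not\b[^.?!;]{0,60}?[,;]\s*(?:it|this|that)['’]s\b",
    re.IGNORECASE,
)

with open("model_outputs.txt", encoding="utf-8") as f:
    text = f.read()

matches = pattern.findall(text)
print(f"Found {len(matches)} candidate \"it's not X, it's Y\" constructions")
```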


