Published on January 17, 2026 6:47 PM GMT
You've probably heard something like this before:
You won't go badly wrong following the conclusion, but (3) doesn't actually follow from (1) and (2). That's because interventions might vary in how they affect the expected value of the future conditional on survival.[1]
Will MacAskill makes roughly this argument in Better Futures (August 2025). See the diagram below: survival-focused interventions target the red rectangle, flourishing-focused interventions target the blue. But the blue rectangle might be much larger than the red rectangle -- if x-risk is 20% then even the best survival intervention can increase EV by at most 1.25x, whereas a flourishing intervention could increase EV by 5x or 5000x.
But is x-risk only 20%? MacAskill thinks so,[2] but his argument applies even if extinction is very likely — say 99% — so long as there are interventions that increase flourishing by +100x. That's the scenario I want to discuss in this post.
My conclusion is:
Recall that flourishing is the expected value conditional on survival. That is, flourishing-focused interventions target the survival posterior, consisting of the green and blue rectangles. Consequently, if survival is likely then the survival posterior consists of ordinary futures, but if survival is unlikely then the survival posterior consists of weird futures, worlds very different from what we'd expect.[3]
What kind of weird worlds?
For more possible futures, see Bart Bussmann's 60+ Possible Futures; which weird survival worlds are most likely will depend on your cause for pessimism.
This poses three problems:
Problem 1: Survival worlds are harder to reason about.
If survival is likely, then the survival posterior consists of ordinary worlds, which you can reason about using existing assumptions/models/trends. However, if survival is unlikely, then the survival posterior consists of weird worlds where our assumptions break down. This makes it much harder to estimate the impact of our interventions, because the world is unprecedented. For example, imagine if brain uploads arrive by 2030 -- this should make us more sceptical of extrapolating various economic trends that were observed before uploads.
And when you condition on survival, you update not just your empirical beliefs but your moral beliefs too. Suppose you're uncertain about (A) whether animals have as much moral weight as humans, and (B) whether we can build machines as smart as humans, and that (A) and (B) are correlated, both downstream of a latent variable H ("humans are special"). Survival is much more likely if ¬B, so conditioning on survival upweights ¬B, which upweights H, which downweights A. Using the numbers below, your credence in animal moral weight drops from 59% to 34% — nearly halved — just by conditioning on survival. Gnarly!
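Here's a minimal sketch of that update, with illustrative numbers of my own (not the exact figures referenced above): H drives both A and B, survival is likelier given ¬B, and conditioning on survival drags down the credence in A.

```python
# Toy three-variable model: latent H ("humans are special") drives both
# A (animals have human-level moral weight) and B (we can build machines
# as smart as humans). Conditioning on survival upweights not-B, which
# upweights H, which downweights A.
# All numbers are illustrative assumptions, not the post's own figures.

P_H = 0.5                                     # prior on H
P_A_given = {True: 0.3, False: 0.9}           # P(A | H), P(A | not H)
P_B_given = {True: 0.2, False: 0.8}           # P(B | H), P(B | not H)
P_surv_given_B = {True: 0.05, False: 0.60}    # survival much likelier if not B

def prob_H(h):
    return P_H if h else 1 - P_H

# Prior credence in A: marginalize over H.
p_A_prior = sum(prob_H(h) * P_A_given[h] for h in (True, False))

# Joint weight of (H, B, survival); A depends on B only through H.
def joint(h, b):
    p_b = P_B_given[h] if b else 1 - P_B_given[h]
    return prob_H(h) * p_b * P_surv_given_B[b]

p_surv = sum(joint(h, b) for h in (True, False) for b in (True, False))
p_H_post = sum(joint(True, b) for b in (True, False)) / p_surv

# Posterior credence in A, conditional on survival.
p_A_post = p_H_post * P_A_given[True] + (1 - p_H_post) * P_A_given[False]

print(f"P(A) prior:      {p_A_prior:.2f}")
print(f"P(A | survival): {p_A_post:.2f}")
```

With these made-up numbers the credence in A falls from 0.60 to about 0.45; the direction of the shift, not the exact size, is the point.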
Problem 2: Surviving worlds are more diverse.
When survival is unlikely, the survival worlds are more different from each other; this is because all ordinary worlds are alike but each weird world is weird in its own way. And because the survival worlds vary so much, it's harder to find interventions which are robustly beneficial -- an intervention that looks good in one weird survival world is likely to look poor in another.
For example, suppose you think that if world governments proceed with the expected level of sanity, then ASI will cause extinction. But we might survive because governments showed unexpectedly low sanity (e.g. initiating nuclear conflict over some mundane issue) or unexpectedly high sanity (e.g. updating on early warning shots). Now consider an intervention which shifts power toward existing world governments at the expense of frontier AI labs: this might decrease flourishing if we survived via low governmental sanity and increase flourishing if we survived via high governmental sanity. The intervention's value flips sign depending on which weird world we end up in.
This intuition was a bit tricky to justify, but here's a toy model: imagine worlds as points in R², with a Gaussian prior centered at the origin. Scatter some "attractors" randomly — some red (extinction), some green (survival). Each point in world-space inherits the fate of its nearest attractor. When most of the Gaussian mass falls in red regions, survival requires landing in one of the scattered green islands. These islands might be far apart in world-space. The survival posterior becomes multimodal, spread across disconnected regions. The diagram below illustrates: when P(survival) is low, Var(world | survival) tends to be high.
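Here's a rough Monte Carlo version of that toy model. For a deterministic illustration I place the attractors by hand rather than scattering them randomly: one layout where the green islands sit far from the prior's mass (survival rare), and one where the green region sits on top of it (survival common).

```python
# Toy model: worlds are points in R^2 under a standard Gaussian prior;
# each sampled world inherits the fate (survive / go extinct) of its
# nearest attractor. Attractor layouts are hand-picked for illustration.
import random

random.seed(0)

def nearest_is_green(point, greens, reds):
    dist2 = lambda a, b: (a[0] - b[0])**2 + (a[1] - b[1])**2
    return min(dist2(point, g) for g in greens) < min(dist2(point, r) for r in reds)

def survival_stats(greens, reds, n=50_000):
    """Return (P(survival), Var(x-coordinate | survival)) by Monte Carlo."""
    xs = []
    for _ in range(n):
        p = (random.gauss(0, 1), random.gauss(0, 1))
        if nearest_is_green(p, greens, reds):
            xs.append(p[0])
    mean = sum(xs) / len(xs)
    var = sum((x - mean)**2 for x in xs) / len(xs)
    return len(xs) / n, var

# Survival unlikely: green islands far from the prior's mass.
p_low, var_low = survival_stats(greens=[(-3, 0), (3, 0)], reds=[(0, 0)])
# Survival likely: the green region covers the prior's mass.
p_high, var_high = survival_stats(greens=[(0, 0)], reds=[(-3, 0), (3, 0)])

print(f"rare survival:   P={p_low:.2f}  Var(x | survival)={var_low:.2f}")
print(f"common survival: P={p_high:.2f}  Var(x | survival)={var_high:.2f}")
```

In the rare-survival layout the posterior splits across the two distant green islands, so its variance comes out several times the prior's; in the common-survival layout the posterior is a truncated blob with variance below the prior's.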
This diversity creates practical problems:
Problem 3: Transitional events wash out interventions.
Ordinary worlds have more continuity between the present and the future, whereas weird worlds often involve some transitional event that explains why we survived, and these transitional events might 'wash out' your intervention.
For example, suppose you think the current policy landscape is likely to lead to extinction. Then we should be pessimistic about flourishing-focused policy interventions because, conditional on survival, there was probably some large-scale disruption of the policy landscape.
In the next post, I will discuss potential strategies for focusing on flourishing when survival is unlikely. These strategies will aim to overcome some or all of the problems above.
In maths:
Assuming that E(value | not survival) ≈ 0, we can decompose E(value | intervention) into the product of E(value | survival, intervention) and P(survival | intervention).
This suggests that P(survival | intervention) is a good proxy for E(value | intervention), but this is only true if E(value | survival, intervention) doesn't vary much across interventions.
However, E(value | survival, intervention) might vary significantly.
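A quick numerical sketch of this decomposition, using the 20%-x-risk example from the main text:

```python
# E(value) = P(survival) * E(value | survival), assuming
# E(value | not survival) ~ 0. Numbers follow the main text's example:
# 20% x-risk, and a flourishing intervention worth 5x conditional value.
def ev(p_survival, value_given_survival):
    return p_survival * value_given_survival

baseline = ev(0.8, 1.0)

# Best possible survival-focused intervention: push P(survival) to 1.
survival_play = ev(1.0, 1.0)
# Flourishing-focused intervention: 5x the value conditional on survival.
flourishing_play = ev(0.8, 5.0)

print(survival_play / baseline)     # 1/0.8 = 1.25x at most
print(flourishing_play / baseline)  # 5x
```

The survival lever is capped at 1/P(survival), while the flourishing lever has no such cap; that asymmetry is the whole argument.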
To illustrate, suppose you think that our chances of survival this century are reasonably high (greater than 80%) but that, if we survive, we should expect a future that falls far short of how good it could be (less than 10% as good as the best feasible futures). These are close to my views; the view about Surviving seems widely held, and Fin Moorhouse and I will argue in essays 2 and 3 for something like that view on Flourishing.
Caveat: In principle, survival could be unlikely yet conditioning on it might not make worlds weird. To illustrate: suppose you're certain that humanity's survival depends entirely on the random weight initialisation of a particular pretraining run — 1% chance of good, 99% bad. Conditioning on survival, most survival worlds are ordinary in every respect except for the lucky weight initialisation. The weirdness of the weight initialisation is highly localised, so it doesn't raise the three problems above.
That said, I don't think such worldviews are plausible, because they require a very high prior on ordinary worlds. I think that plausible worldviews should place at least 1% probability on "worlds which are globally weird and we survive". And so these worlds will dominate the survival posterior, even if there are also some "worlds which are locally weird and we survive".
Published on January 17, 2026 5:28 PM GMT
In 1665, a Jesuit polymath named Athanasius Kircher published Mundus Subterraneus, a comprehensive geography of the Earth’s interior. It had maps and illustrations and rivers of fire and vast subterranean oceans and air channels connecting every volcano on the planet. He wrote that “the whole Earth is not solid but everywhere gaping, and hollowed with empty rooms and spaces, and hidden burrows.” Alongside comments like this, Athanasius identified the legendary lost island of Atlantis, pondered where one could find the remains of giants, and detailed the kinds of animals that lived in this lower world, including dragons. The book was based entirely on secondhand accounts, like travelers’ tales, miners’ reports, and classical texts, so it was as comprehensive as it could’ve possibly been.
But Athanasius had never been underground and neither had anyone else, not really, not in a way that mattered.
Today, I am in San Francisco, the site of the 2026 J.P. Morgan Healthcare Conference, and it feels a lot like Mundus Subterraneus.
There is ostensibly plenty of evidence to believe that the conference exists, that it actually occurs between January 12, 2026 to January 16, 2026 at the Westin St. Francis Hotel, 335 Powell Street, San Francisco, and that it has done so for the last forty-four years, just like everyone has told you. There is a website for it, there are articles about it, there are dozens of AI-generated posts on Linkedin about how excited people were about it. But I have never met anyone who has actually been inside the conference.
I have never been approached by one, or seated next to one, or introduced to one. They do not appear in my life. They do not appear in anyone’s life that I know. I have put my boots on the ground to rectify this, and asked around, first casually and then less casually, “Do you know anyone who has attended the JPM conference?”, and then they nod, and then I refine the question to be, “No, no, like, someone who has actually been in the physical conference space”, then they look at me like I’ve asked if they know anyone who’s been to the moon. They know it happens. They assume someone goes. Not them, because, just like me, ordinary people like them do not go to the moon, but rather exist around the moon, having coffee chats and organizing little parties around it, all while trusting that the moon is being attended to.
The conference has six focuses: AI in Drug Discovery and Development, AI in Diagnostics, AI for Operational Efficiency, AI in Remote and Virtual Healthcare, AI and Regulatory Compliance, and AI Ethics and Data Privacy. There is also a seventh theme over ‘Keynote Discussions’, the three of which are The Future of AI in Precision Medicine, Ethical AI in Healthcare, and Investing in AI for Healthcare. Somehow, every single thematic concept at this conference has converged onto artificial intelligence as the only thing worth seriously discussing.
Isn’t this strange? Surely, you must feel the same thing as me, the inescapable suspicion that the whole show is being put on by an unconscious Chinese Room, its only job to pass semi-legible symbols over to us with no regard for what they actually mean. In fact, this pattern is consistent across not only how the conference communicates itself, but also how biopharmaceutical news outlets discuss it.
Each year, Endpoints News and STAT and BioCentury and FiercePharma all publish extensive coverage of the J.P. Morgan Healthcare Conference. I have read the articles they have put out, and none of it feels like it was written by someone who actually was at the event. There is no emotional energy, no personal anecdotes, all of it has been removed, shredded into one homogeneous, smoothie-like texture. The coverage contains phrases like “pipeline updates” and “strategic priorities” and “catalysts expected in the second half.” If the writers of these articles ever approach a human-like tenor, it is in reference to the conference’s “tone”. The tone is “cautiously optimistic.” The tone is “more subdued than expected.” The tone is “mixed.” What does this mean? What is a mixed tone? What is a cautiously optimistic tone? These are not descriptions of a place. They are more accurately descriptions of a sentiment, abstracted from any physical reality, hovering somewhere above the conference like a weather system.
I could write this coverage. I could write it from my horrible apartment in New York City, without attending anything at all. I could say: “The tone at this year’s J.P. Morgan Healthcare Conference was cautiously optimistic, with executives expressing measured enthusiasm about near-term catalysts while acknowledging macroeconomic headwinds.” I made that up in fifteen seconds. Does it sound fake? It shouldn’t, because it sounds exactly like the coverage of a supposedly real thing that has happened every year for the last forty-four years.
Speaking of the astral body I mentioned earlier, there is an interesting historical parallel to draw there. In 1835, the New York Sun published a series of articles claiming that the astronomer Sir John Herschel had discovered life on the moon. Bat-winged humanoids, unicorns, temples made of sentient sapphire, that sort of stuff. The articles were detailed, describing not only these creatures’ appearance, but also their social behaviors and mating practices. All of these cited Herschel’s observations through a powerful new telescope. The series was a sensation. It was also, obviously, a hoax, the Great Moon Hoax as it came to be known. Importantly, the hoax worked not because the details were plausible, but because they had the energy of genuine reporting: Herschel was a real astronomer, and telescopes were real, and the moon was real, so how could any combination that involved these three be fake?
To clarify: I am not saying the J.P. Morgan Healthcare Conference is a hoax.
What I am saying is that neither I, nor anybody, can tell the difference between the conference coverage and a very well-executed hoax. Consider that the Great Moon Hoax was walking a very fine tightrope between giving the appearance of seriousness, while also not giving away too many details that’d let the cat out of the bag. Here, the conference rhymes.
For example: photographs. You would think there would be photographs. The (claimed) conference attendees number in the thousands, many of them with smartphones, all of them presumably capable of pointing a camera at a thing and pressing a button. But the photographs are strange, walking that exact snickering line that the New York Sun walked. They are mostly photographs of the outside of the Westin St. Francis, or they are photographs of people standing in front of step-and-repeat banners, or they are photographs of the schedule, displayed on a screen, as if to prove that the schedule exists. But photographs of the inside with the panels, audience, the keynotes in progress; these are rare. And when I do find them, they are shot from angles that reveal nothing, that could be anywhere, that could be a Marriott ballroom in Cleveland.
Is this a conspiracy theory? You can call it that, but I have a very professional online presence, so I personally wouldn’t. In fact, I wouldn’t even say that the J.P. Morgan Healthcare Conference is not real, but rather that it is real, but not actually materially real.
To explain what I mean, we can rely on economist Thomas Schelling to help us out. Sixty-six years ago, Schelling proposed a thought experiment: if you had to meet a stranger in New York City on a specific day, with no way to communicate beforehand, where would you go? The answer, for most people, is Grand Central Station, at noon. Not because Grand Central Station is special. Not because noon is special. But because everyone knows that everyone else knows that Grand Central Station at noon is the obvious choice, and this mutual knowledge of mutual knowledge is enough to spontaneously produce coordination out of nothing. This, Grand Central Station and places just like it, is what’s known as a Schelling point.
Schelling points appear when they are needed, burnt into our genetic code, Pleistocene subroutines running on repeat, left over from when we were small and furry and needed to know, without speaking, where the rest of the troop would be when the leopards came. The J.P. Morgan Healthcare Conference, on the second week of January, every January, Westin St. Francis, San Francisco, is what happened when that ancient coordination instinct was handed an industry too vast and too abstract to organize by any other means. Something deep drives us to gather here, at this time, at this date.
To preempt the obvious questions: I don’t know why this particular location or time or demographic were chosen. I especially don’t know why J.P. Morgan of all groups was chosen to organize the whole thing. All of this simply is.
If you find any of this hard to believe, observe that the whole event is, structurally, a religious pilgrimage, and has all the quirks you may expect of a religious pilgrimage. And I don’t mean that as a metaphor, I mean it literally, in every dimension except the one where someone official admits it, and J.P. Morgan certainly won’t.
Consider the elements. A specific place, a specific time, an annual cycle, a journey undertaken by the faithful, the presence of hierarchy and exclusion, the production of meaning through ritual rather than content. The hajj requires Muslims to circle the Kaaba seven times. The J.P. Morgan Healthcare Conference requires devotees of the biopharmaceutical industry to slither into San Francisco for five days, nearly all of them—in my opinion, all of them—never actually entering the conference itself, but instead orbiting it, circumambulating it, taking coffee chats in its gravitational field. The Kaaba is a cube containing, according to tradition, nothing, an empty room, the holiest empty room in the world. The Westin St. Francis is also, roughly, a cube. I am not saying these are the same thing. I am saying that we have, as a species, a deep and unexamined relationship to cubes.
This is my strongest theory so far. That the J.P. Morgan Healthcare Conference isn’t exactly real or unreal, but a mass-coordination social contract that has been unconsciously signed by everyone in this industry, transcending the need for an underlying referent.
My skeptical readers will protest at this, and they would be correct to do so. The story I have written out is clean, but it cannot be fully correct. Thomas Schelling was not so naive as to believe that Schelling points spontaneously generate out of thin air; there is always a reason, a specific, grounded reason, that these points become the low-energy metaphysical basins that they are. Grand Central Station is special because of the cultural gravitas it has accumulated through popular media. Noon is special because that is when the sun reaches its zenith. The Kaaba was worshipped because it was not some arbitrary cube; the cube itself was special: it contained the Black Stone, set into the eastern corner, a relic that predates Islam itself, that some traditions claim fell from heaven.
And there are signs, if you know where to look, that the underlying referent for the Westin St. Francis’s status as a gathering area is physical. Consider the heat. It is January in San Francisco, usually brisk, yet the interior of the Westin St. Francis maintains a distinct, humid microclimate. Consider the low-frequency vibration in the lobby that ripples the surface of water glasses, but doesn’t seem to register on local, public seismographs. There is something about the building itself that feels distinctly alien. But, upon standing outside the building for long enough, you’ll have the nagging sensation that it is not something about the hotel that feels off, but rather, what lies within, underneath, and around the hotel.
There’s no easy way to sugarcoat this, so I’ll just come out and say it: it is possible that the entirety of California is built on top of one immensely large organism, and the particular spot in which the Westin St. Francis Hotel stands—335 Powell Street, San Francisco, 94102—is located directly above its beating heart. And that this is the primary organizing focal point for both the location and entire reason for the J.P. Morgan Healthcare Conference.
I believe that the hotel maintains dozens of meter-thick polyvinyl chloride plastic tubes that have been threaded down through the basement, through the bedrock, through geological strata, and into the cardiovascular system of something that has been lying beneath the Pacific coast since before the Pacific coast existed. That the hotel is a singular, thirty-two story central line. That, during the week of the conference, hundreds of gallons of drugs flow through these tubes, into the pulsating mass of the being, pouring down arteries the size of canyons across California. The dosing takes five days; hence the length of the conference.
And I do not believe that the drugs being administered here are simply sedatives. They are, in fact, the opposite of sedatives. The drugs are keeping the thing beneath California alive. There is something wrong with the creature, and a select group of attendees at the J.P. Morgan Healthcare Conference have become its primary caretakers.
Why? The answer is obvious: there is nothing good that can come from having an organic creature that spans hundreds of thousands of square miles suddenly die, especially if that same creature’s mass makes up a substantial portion of the fifth-largest economy on the planet, larger than India, larger than the United Kingdom, larger than most countries that we think of as significant. Maybe letting the nation slide off into the sea was an option at one point, but not anymore. California produces more than half of the fruits, vegetables, and nuts grown in the United States. California produces the majority of the world’s entertainment. California produces the technology that has restructured human communication. Nobody can afford to let the whole thing collapse.
So, perhaps it was decided that California must survive, at least for as long as possible. Hence Amgen. Hence Genentech. Hence the entire biotech revolution, which we are taught to understand as a triumph of science and entrepreneurship, a story about venture capital and recombinant DNA and the genius of the California business climate. The story is not false, but incomplete. The reason for the revolution was, above all else, because the creature needed medicine, and the old methods of making medicine were no longer adequate, and someone decided that the only way to save the patient was to create an entire industry dedicated to its care.
Why is drug development so expensive? Because the real R&D costs are for the primary patient, the being underneath California, and human applications are an afterthought, a way of recouping investment. Why do so many clinical trials fail? For the same reason; the drugs are not meant for our species. Why is the industry concentrated in San Francisco, San Diego, Boston? Because these are monitoring stations, places where other intravenous lines have been drilled into other organs, other places where the creature surfaces close enough to reach.
Finally, consider the hotel itself. The Westin St. Francis was built in 1904, and, throughout its entire existence, it has never, ever, even once, closed or stopped operating. The 1906 earthquake leveled most of San Francisco, and the Westin St. Francis did not fall. It was damaged, yes, but it did not fall. The 1989 Loma Prieta earthquake killed sixty-three people and collapsed a section of the Bay Bridge. Still, the Westin St. Francis did not fall. It cannot fall, because if it falls, the central line is severed, and if the central line is severed, the creature dies, and if the creature dies, we lose California, and if we lose California, our civilization loses everything that California has been quietly holding together. And so the Westin St. Francis has hosted every single J.P. Morgan Healthcare Conference since 1983, has never missed one, has never even come close to missing one, and will not miss the next one, or the one after that, or any of the ones that follow.
If you think about it, this all makes a lot of sense. It may also seem very unlikely, but unlikely things have been known to happen throughout history. Mundus Subterraneus had a section on the “seeds of metals,” a theory that gold and silver grew underground like plants, sprouting from mineral seeds in the moist, oxygen-poor darkness. This was wrong, but the intuition beneath it was not entirely misguided. We now understand that the Earth’s mantle is a kind of eternal engine of astronomical size, cycling matter through subduction zones and volcanic systems, creating and destroying crust. Athanasius was wrong about the mechanism, but right about the structure. The earth is not solid. It is everywhere gaping, hollowed with empty rooms, and it is alive.
Published on January 17, 2026 4:33 PM GMT
Among developed countries, Japan has long had the highest debt/GDP ratio, currently ~232%. That seems pretty bad, but it has also made some people say that the US debt is fine because it's still much lower than Japan's. But here are some points that might clarify the situation:
First, that ratio has declined recently, from 258% in 2020.
Second, the Japanese government holds a lot of stocks and foreign bonds. Its net debt/GDP is "only" 140%, and has declined since 2020. The US government doesn't do that. (The government of Singapore also holds a lot of assets, and Temasek is well-known as a large investment fund, but Japan is a bigger country, and despite smaller holdings per capita, its investments are much larger than Singapore's.)
Meanwhile, America's federal debt/GDP ratio is ~124%. Add in state debt and it's ~127%. So the debt/GDP of the US government isn't that different from Japan's net figure. Japan's is still higher, but arguably the "quality" of that US GDP is lower, for a couple of reasons:
On the other hand, the US does have more natural resources, and the federal government owns a lot of land. My point is just that, while I've often seen it said that the US government debt situation is clearly better than Japan's, that's not clearly the case.
By the way, another economic metric I think is interesting to compare is median and average personal wealth.
Published on January 17, 2026 2:54 PM GMT
The cruelest irony of stuttering is that trying harder to speak fluently makes it worse. Not trying harder in the sense of practice or effort, but trying harder in the sense of conscious attention to speech mechanics. When someone who stutters focuses intently on controlling their words, analyzing their breathing, and monitoring their mouth movements, their speech doesn't improve. It deteriorates.
This is the reinvestment hypothesis in action: explicit, conscious control actively interferes with skills that should be automatic. A pianist who thinks too carefully about finger placement plays worse. An athlete who consciously monitors their form chokes under pressure. And a person who stutters, desperately focusing on each syllable, finds their speech becoming more fragmented, not less.
For the 70 million people worldwide who stutter, this creates a devastating trap. They know their speech is broken. They focus intensely on fixing it. And that very focus makes the problem worse.
What if we could temporarily turn off that interference? What if we could create a neural state where the overthinking stops, where the brain's executive control systems step aside and let procedural motor learning do its work? And what if we could do this precisely during speech practice, when the brain is trying to encode new, fluent motor patterns?
Brain stimulation for stuttering isn't a new idea. Over the past seven years, researchers have tested transcranial direct current stimulation (tDCS) in adults who stutter, with mixed but encouraging results.
The landmark study came from Oxford in 2018. Chesters and colleagues ran a rigorous double-blind trial with 30 adults who stutter. The intervention was straightforward: 1 milliamp of anodal (excitatory) tDCS applied to the left inferior frontal cortex for 20 minutes, five days in a row, while participants practiced fluency-inducing speech techniques like choral reading and metronome-timed speech.
The results were striking. At baseline, both groups stuttered on about 12% of syllables. One week after treatment, the tDCS group had dropped to 8.7% stuttering, while the sham group remained at 13.4%. That's a 27% relative reduction, with a large effect size (Cohen's d = 0.98). The improvement persisted at six weeks for reading tasks, though conversation fluency had regressed somewhat.
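As a sanity check, the relative-reduction figure follows directly from the reported rates; the Cohen's d would additionally need the groups' standard deviations, which aren't quoted here, so it is not recomputed.

```python
# Reproduce the relative reduction from the reported syllable-stuttering
# rates in Chesters et al. (2018): ~12% of syllables at baseline,
# 8.7% in the tDCS group one week after treatment.
baseline = 12.0     # % syllables stuttered, baseline
tdcs_week1 = 8.7    # % syllables stuttered, tDCS group at one week

relative_reduction = (baseline - tdcs_week1) / baseline
print(f"relative reduction: {relative_reduction:.3f}")  # ~0.275, i.e. the ~27% quoted
```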
This proved the principle: pairing brain stimulation with speech practice can produce meaningful, lasting fluency gains.
Figure 1: Effects of tDCS on Stuttering Frequency Across Major Studies
But there's a pattern in these results. Multi-session protocols (Chesters, Moein) work better than single sessions. Multi-session anodal stimulation of speech production areas (Broca’s area, supplementary motor area) produces modest fluency gains. In contrast, when researchers applied cathodal (inhibitory) stimulation to right frontal regions, they observed unexpected fluency improvements, suggesting that reducing frontal interference may be more important than boosting the speech system itself.
This last finding is the key. It suggests that the mechanism might not be "boost the speech system." It might be "reduce the interference."
Neuroimaging studies consistently show that people who stutter have hyperactive prefrontal cortex during speech. A 2022 fNIRS study found that right dorsolateral prefrontal cortex (DLPFC) activation spiked by approximately 20% when adults who stutter anticipated difficult words, compared to fluent controls. This region is the brain's executive control center, handling working memory, attention, and conscious monitoring of performance.
This hyperactivity isn't random. It reflects the subjective experience of stuttering: constant self-monitoring, anticipating which words will be difficult, analyzing what went wrong, trying to control every aspect of speech production. The DLPFC is working overtime, desperately trying to prevent stuttering.
But that's the problem. The DLPFC is trying to consciously control a process that should be automatic.
Speech relies on subcortical circuits, especially a structure called the basal ganglia, which helps start and time learned movement sequences. In fluent speakers, this system smoothly passes well‑practiced speech “chunks” to cortical speech areas. In people who stutter, resting‑state fMRI shows weaker‑than‑normal connectivity between the putamen (a key basal ganglia structure) and cortical speech regions, suggesting that this automatic handoff is impaired.
The natural compensatory response is to recruit conscious control. If the automatic system isn't working, use the manual override. Engage the prefrontal cortex. Monitor every syllable. Plan every word.
But this creates a vicious cycle. The prefrontal cortex isn't designed to run speech production. It's too slow, too effortful, too dependent on working memory. When it tries to micromanage speech, it interferes with what remains of the automatic system. The result is more stuttering, which triggers more monitoring, which causes more interference.
Meta-analyses confirm this pattern. People who stutter show 30-40% greater right inferior frontal cortex activation during speech compared to controls, while left inferior frontal cortex (Broca's area) shows 20% reduced activation. They're using the wrong networks, in the wrong hemisphere, for the wrong type of control.
The question isn't how to boost the damaged automatic system. It's how to get the interfering conscious system out of the way.
This phenomenon has a name in sports psychology: reinvestment. It's what happens when skilled performers revert to explicit, rule-based control of movements that have been proceduralized.
The classic study comes from Masters (1992). He had people learn golf putting in two conditions: one group received explicit coaching about technique, the other learned implicitly through trial and error with minimal instruction. Initially, both groups performed similarly. But when tested under pressure, the explicit learners collapsed. Their putting accuracy dropped 30-50%, while the implicit learners' performance held steady.
The difference? The explicit learners had conscious rules they could reinvest attention into. Under pressure, they started thinking about their form, and that thinking destroyed their performance.
The effect is robust and large. Maxwell et al. (2001) found effect sizes greater than d = 1.0 when comparing stress performance of implicit versus explicit learners. The explicit learners made roughly twice as many errors under dual-task conditions.
This maps directly onto stuttering. Fluent speech is a proceduralized motor skill. In fluent speakers, it happens automatically, with minimal prefrontal involvement. But people who stutter have learned to monitor and control speech consciously. They have explicit rules. And those rules, that conscious attention, actively interfere with whatever automatic capacity remains.
The most compelling evidence comes from dual-task studies. When people who stutter perform a simple non-linguistic secondary task while speaking (like tapping a rhythm), stuttering frequency often decreases. The secondary task occupies the prefrontal cortex, preventing it from interfering with speech. It forces implicit control by blocking explicit control.
This is our therapeutic target: reduce prefrontal interference, allow implicit motor learning.
The definitive proof that reducing prefrontal activity can enhance skill learning comes from Smalle et al. (2017). They tested whether adults' superior executive function actually hinders certain types of implicit learning that children excel at.
Young adults received repetitive TMS to transiently inhibit the left DLPFC, then performed an implicit word-form learning task. The control group received sham (placebo) stimulation that mimicked the procedure but did not actually affect the brain. The result was clear: DLPFC disruption produced significantly enhanced learning.
The effect size was d = 0.88. Participants with inhibited DLPFC learned new word sequences faster and retained them better. Critically, individuals with higher baseline executive function showed the largest benefits. Their prefrontal cortex was normally interfering with procedural learning, and shutting it down removed the interference.
The interpretation is straightforward: the DLPFC and subcortical procedural systems compete for control during learning. When you reduce DLPFC activity, the procedural system wins, and learning is more efficient and robust.
For stuttering, this suggests a direct intervention: inhibit DLPFC during speech practice.
Here's the complete picture: use cathodal (inhibitory) tDCS to temporarily reduce left DLPFC activity during intensive speech practice. The timing is critical.
Cathodal tDCS at 1-2 milliamps produces reduced cortical excitability lasting 30-60 minutes. Apply the stimulation, then immediately begin speech training while DLPFC is still inhibited. During this window, the brain is in a low-interference state, primed for implicit motor learning.
Figure 2: Intervention Workflow and Complementary Mechanisms
The protocol builds directly on parameters proven effective in prior stuttering tDCS trials:
During training, explicit strategy coaching is minimized. The focus is on external goals ("communicate the message") rather than speech mechanics ("control your breathing"). With reduced DLPFC activity, this implicit approach should feel more natural. The conscious monitoring system is temporarily quieted, allowing procedural learning.
If the reinvestment hypothesis is correct, we should see several specific effects.
Primary outcome: Stuttering frequency should decrease more in the cathodal DLPFC group than in the sham or standard-therapy groups. Based on the Oxford trial (anodal IFC, d = 0.98) and Smalle's DLPFC inhibition study (d = 0.88), a conservative estimate for the combined protocol is an effect size of around d = 0.7–1.0, corresponding to roughly 3–5 percentage points greater reduction in stuttering frequency compared to sham.
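As a sanity check on translating a standardized effect size into raw percentage points of stuttering frequency: the mapping only holds under an assumed between-subject standard deviation, which the text does not state. A minimal sketch, assuming an SD of roughly 4–5 percentage points of syllables stuttered (an illustrative value, not a figure from the proposal):

```python
# Convert Cohen's d into raw percentage points of stuttering frequency.
# ASSUMPTION: between-subject SD of roughly 4-5 percentage points; this
# value is illustrative, not taken from the proposal itself.

def d_to_percentage_points(d: float, sd_pp: float) -> float:
    """A standardized effect d corresponds to d * SD in raw units."""
    return d * sd_pp

# Under that assumption, d = 0.7-1.0 maps onto roughly 3-5 points:
low = d_to_percentage_points(0.7, 4.3)   # ~3 percentage points
high = d_to_percentage_points(1.0, 5.0)  # 5 percentage points
print(f"{low:.1f} to {high:.1f} percentage points")
```

If the true SD in the study population were smaller or larger, the same d range would translate into proportionally smaller or larger raw reductions.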
Figure 3: Predicted Learning Trajectories
But clinical scores tell only part of the story. We should also see:
Process indicators of automaticity: Stable fluency under dual‑task load. If participants maintain their fluency gains even while performing a secondary cognitive task (for example, an auditory n‑back), it suggests speech has become robustly automatic rather than fragile and attention‑dependent.
Neurophysiological markers: Over the course of therapy, fNIRS or EEG should show reduced DLPFC activation during unstimulated speech, indicating decreased conscious monitoring. We'd expect larger long‑term reductions in the cathodal group than in the sham group.
Subjective reports: Participants should report less mental effort during speech, fewer conscious strategies, and more "flow" experiences. The speech should feel less effortful, more natural.
Maintenance and generalization: If the fluency is truly proceduralized rather than explicitly controlled, it should be more resistant to relapse. A six-month follow-up should show better maintained gains in the cathodal group.
These process measures would validate the mechanism. It's not just "tDCS makes you more fluent somehow." It's specifically "reducing executive interference accelerates implicit motor learning, producing more robust automaticity."
Traditional stuttering therapy achieves impressive short-term results. Intensive programs can reduce stuttering by 70-100% immediately after treatment. The problem is maintenance. Relapse rates range from 30-70% within one to two years.
The reason is cognitive load. Therapy teaches explicit techniques: prolonged speech, gentle onsets, controlled breathing. These work when you have full attention available. But real-world speaking happens while thinking about what to say, managing emotions, and multitasking. The techniques require executive resources that aren't available under those conditions.
This is exactly the problem cathodal DLPFC training addresses. By reducing executive interference during learning, we encode the fluent motor patterns implicitly rather than explicitly. The result should be speech that doesn't depend on conscious control, that holds up under pressure, that resists relapse.
Even a 15-20% reduction in relapse rates would be clinically meaningful. If this approach cuts relapse from 50% to 35%, that means thousands fewer people needing retreatment annually.
Beyond the numbers, there's the quality of life impact. Stuttering affects approximately 70 million people worldwide. It limits career choices, impairs social relationships, and creates profound daily stress. Adults who stutter score significantly lower on quality-of-life measures than population norms, with impacts comparable to chronic health conditions.
Current therapy is expensive (intensive programs cost $1,500-5,000) and time-intensive (20-30 clinical hours plus daily practice). An adjunct that improves efficiency could reduce both cost and burden.
And there's something deeper. By leveraging neuroscience to enhance learning, we're not just treating symptoms. We're addressing the fundamental mechanism: the competition between explicit and implicit control systems. We're giving people who stutter what fluent speakers have naturally: speech that happens automatically, without thinking.
This proposal combines proven elements in a novel configuration. Cathodal tDCS to DLPFC has been used safely in depression research and cognitive neuroscience. Speech therapy techniques are well-established. What's new is the strategic pairing: using brain stimulation to create optimal learning conditions during intensive practice.
The safety profile is excellent. Reviews of 33,200+ tDCS sessions found zero serious adverse events. Common side effects are mild: scalp tingling, slight headache. At 1-2 mA for 20 minutes, we're well within established safety parameters.
The theoretical foundation is strong: reinvestment theory, evidence of DLPFC hyperactivation in stuttering, Smalle's demonstration that DLPFC inhibition enhances learning. The clinical need is urgent: millions of people with limited effective treatments and high relapse rates.
What's needed now is execution. A well-designed trial: 30 adults per group, cathodal DLPFC tDCS versus sham, five daily sessions, with both immediate and long-term outcome measures. If the mechanism is correct, we should see accelerated learning, greater automaticity, and more durable fluency.
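A quick way to check whether 30 adults per group is enough to detect the predicted effect is a back-of-the-envelope power calculation. The sketch below uses a normal approximation to a two-sided, two-sample t-test at alpha = 0.05 (a real trial design would use an exact t-test power analysis, e.g. statsmodels' `TTestIndPower`):

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample comparison of means
    at alpha = 0.05, using the normal approximation to the t-test."""
    z_crit = 1.959963984540054  # two-sided critical value for alpha = 0.05
    noncentrality = d * math.sqrt(n_per_group / 2.0)
    return 1.0 - normal_cdf(z_crit - noncentrality)

# With n = 30 per group: ~77% power at d = 0.7, ~97% power at d = 1.0
print(power_two_sample(0.7, 30))
print(power_two_sample(1.0, 30))
```

So n = 30 per group is comfortably powered if the effect is near the top of the predicted d = 0.7–1.0 range, and somewhat underpowered at the conservative end.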
The ultimate goal isn't just fewer stuttered syllables. It's freeing people from the mental overdrive that makes speaking exhausting. It's allowing speech to become what it should be: automatic, effortless, natural.
Sometimes the solution isn't trying harder, but learning to stop trying so hard.
Published on January 17, 2026 5:43 AM GMT
I spent a few hundred dollars on Anthropic API credits and let Claude individually research every current US congressperson's position on AI. This is a summary of my findings.
Disclaimer: Summarizing people's beliefs is hard and inherently subjective and noisy. Likewise, US politicians change their opinions on things constantly so it's hard to know what's up-to-date. Also, I vibe-coded a lot of this.
I used Claude Sonnet 4.5 with web search to research every congressperson's public statements on AI, then used GPT-4o to score each politician on how "AGI-pilled" they are, how concerned they are about existential risk, and how focused they are on US-China AI competition. I plotted these scores against GovTrack ideology data to search for any partisan splits.
Few members of Congress have made public statements that take AGI seriously. For those who have, political ideology doesn't explain the difference: if we simply plot the AGI-pilled score against the ideology score, we observe no obvious partisan split.
There are 151 congresspeople for whom Claude could not find substantial quotes about AI. These members are not included in this plot or any of the plots that follow.
When you change the scoring prompt to ask how much a congressperson's statements reflect a concern about existential risk, the plot looks different. Note that the scoring prompt here emphasizes "A politician who is most XRisk-pilled is someone who thinks AI is a risk to humanity -- not just the US." This separates x-risk concerns from fears related to US-China relations.
This graph looks mostly like noise, but it does show that the majority of the most x-risk-pilled politicians are Democrats.[1] This is troubling. Politics is a mind-killer, and if AI safety becomes partisan, productive debate will be even more difficult than it currently is.
Some congresspeople have made up their minds: the US must "win" the race against China and nothing else matters. Others have a more nuanced opinion. But most are thinking about US-China relations when speaking about AI. Notably, the most conservative congresspeople are more likely than the most progressive members to focus exclusively on US-China relations.
This plot has a strange distribution. For reference, the scoring prompt uses the following scale:
I found that roughly 20 members of Congress are "AGI-pilled."
Bernie Sanders (Independent Senator, Vermont): AGI-pilled and safety-pilled
Richard Blumenthal (Democratic Senator, Connecticut): AGI-pilled and safety-pilled
Rick Crawford (Republican Representative, Arkansas): AGI-pilled but doesn't discuss x-risk (only concerned about losing an AI race to China)
Bill Foster (Democratic Representative, Illinois): AGI-pilled and safety-pilled
Brett Guthrie (Republican Representative, Kentucky): AGI-pilled but doesn't discuss x-risk (only concerned about losing an AI race to China)
Chris Murphy (Democratic Senator, Connecticut): AGI-pilled and somewhat safety-pilled (more focused on job loss and spiritual impacts)
Brad Sherman (Democratic Representative, California): AGI-pilled and safety-pilled
Debbie Wasserman Schultz (Democratic Representative, Florida): AGI-pilled and safety-pilled
Bruce Westerman (Republican Representative, Arkansas): AGI-pilled but not necessarily safety-pilled (mostly focused on winning the "AI race")
Ted Lieu (Democratic Representative, California): AGI-pilled and safety-pilled
Donald S. Beyer (Democratic Representative, Virginia): AGI-pilled and (mostly) safety-pilled
Mike Rounds (Republican Senator, South Dakota): AGI-pilled and somewhat safety-pilled (talks about dual-use risks)
Raja Krishnamoorthi (Democratic Representative, Illinois): AGI-pilled and safety-pilled
Elissa Slotkin (Democratic Senator, Michigan): AGI-pilled but not safety-pilled (mostly concerned about losing an AI race to China)
Dan Crenshaw (Republican Representative, Texas): AGI-pilled and maybe safety-pilled
Josh Hawley (Republican Senator, Missouri): AGI-pilled and safety-pilled
"Americanism and the transhumanist revolution cannot coexist."
Nancy Mace (Republican Representative, South Carolina): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
"And if we fall behind China in the AI race...all other risks will seem tame by comparison."
Jill Tokuda (Democratic Representative, Hawaii): AGI-pilled and safety-pilled but this is based on very limited public statements
Eric Burlison (Republican Representative, Missouri): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
Nathaniel Moran (Republican Representative, Texas): AGI-pilled and safety-pilled (but still very focused on US-China relations)
Pete Ricketts (Republican Senator, Nebraska): AGI-pilled but not safety-pilled (only concerned about losing an AI race to China)
Of the members of Congress who are strongest on AI safety, three have some kind of technical background.
Bill Foster is a US Congressman from Illinois, but in the 1990s he was one of the first scientists to apply neural networks to the study of particle physics interactions. From reading his public statements, I believe he has the strongest understanding of AI safety of any member of Congress. For example, Foster has referenced exponential growth in AI capabilities:
As a PhD physicist and chip designer who first programmed neural networks at Fermi National Accelerator Laboratory in the 1990s, I've been tracking the exponential growth of AI capabilities for decades, and I'm pleased Congress is beginning to take action on this issue.
Likewise, Ted Lieu has a degree from Stanford in computer science. In July of 2025, he stated "We are now entering the era of AI agents," which is a sentence I cannot imagine most members of Congress saying. He has also acknowledged that AI could "destroy the world, literally."
Despite being 75 years old, Congressman Don Beyer is enrolled in a master's program in machine learning at George Mason University. Unlike other members of Congress, Beyer's statements demonstrate an ability to think critically about AI risk:
Many in the industry say, Blah. That's not real. We're very far from artificial general intelligence ... Or we can always unplug it. But I don't want to be calmed down by people who don't take the risk seriously
The extracted quotes and analysis by Claude for every member of Congress can be found in a single json file here.
I found Claude's "notes" in the json to be an extremely comprehensive and accurate summary of each congressperson's position on AI. The direct quotes in the json are also very interesting to look at. I have cross-referenced many of them, and hallucinations are very limited[2] (Claude had web search enabled, so it could take quotes directly from websites, but in at least one case it made a minor mistake). I have also spot-checked some of the scores gpt-4o produced and they are reasonable, but as is always the case with LLM judges, the values are noisy.
I'm releasing all the code for generating this data and these plots, but it's pretty disorganized and I would expect it to be difficult to use. If you send me a DM, I'd be happy to explain anything. Running all of this code costs roughly $300, so if you would like to run a modified version of the pipeline, be aware of this.
It also looks like more moderate politicians may be less x-risk pilled compared to those on each extreme. But the sample here is small and "the graph kind of looks like a U if you squint at it" doesn't exactly qualify as rigorous analysis.
I obviously cross-referenced each of the quotes in this post.
Published on January 17, 2026 1:47 AM GMT
Lightcone is hiring! We build beautiful things for truth-seeking and world-saving.
We are hiring for three different positions: a senior designer, a campus manager, and a core team generalist. This is the first time in almost two years where we are actively hiring and trying to grow our team!
When we are at our best, I think we produce world-class design. AI 2027 was, I think, a great design achievement, as is much of LessWrong.com itself. I also think that, on a product and business level, making things beautiful, intuitive, and well-crafted is crucial. I like some of Patrick Collison's thinking on this:
If Stripe is a monstrously successful business, but what we make isn’t beautiful, and Stripe doesn’t embody a culture of incredibly exacting craftsmanship, I’ll be much less happy. I think the returns to both of those things in the world are really high. I think even beyond the pecuniary or financial returns, the world’s just uglier than it needs to be… One can do things well or poorly, and beauty is not a rivalrous good.
My intuition is that more of Stripe’s success than one would think is downstream of the fact that people like beautiful things—and for kind of rational reasons because what does a beautiful thing tell you? Well it tells you the person who made it really cared… And so if you care about the infrastructure being holistically good, indexing on the superficial characteristics that you can actually observe is not an irrational thing to do.
I want us to continue making beautiful and well-designed things. Indeed, we currently have enormous demand for making more things like AI 2027 and DecidingToWin.org, with multiple new inquiries for projects like this per month, and I think many of those opportunities could be great. I also think LessWrong itself is substantially bottlenecked on design.
Now, design is a very broad category. The specific role I want to hire for is someone helping us make beautiful websites. This very likely implies understanding HTML and CSS deeply, and probably benefits a lot from frontend coding experience. But I can imagine someone who is just used to doing great design in Figma, without touching code directly, making this work.
This is a senior role! I expect to work with whoever we hire in this role more closely than I currently work with many of my core staff, and for the role to involve managing large design projects end-to-end. Correspondingly, I expect we will pay a salary in the range of $160k-$300k for this role, with the rough aim of paying ~70% of that person's counterfactual industry salary.
Help us run Lighthaven! Lighthaven is our 30,000 sq. ft. campus near Downtown Berkeley. We host a huge variety of events, fellowships and conferences there, ranging from 600+ person festivals like LessOnline or Manifest, to multi-month fellowships like the MATS program or Inkhaven.
We are looking for additional people to run a lot of our operations here. This will include making sure events run smoothly for our clients, figuring out how and when to cut costs or spend more, and finding or inventing new events.
The skills involved in this role really vary hugely from month to month, so the key requirements are being able to generally problem-solve and to learn new skills as they become necessary. Some things I have worked on with people on campus in the last few months:
This role could accommodate a very wide range of compensation and seniority levels, ranging from $100k/yr to $300k/yr.
Lightcone has a core team of 7 generalists who work on anything from design, to backend programming, to running conferences, to managing construction projects, to legal, advertising, portage, fundraising, sales, etc. We tend to operate at a sprint level, with teams and the leadership of those teams being reconfigured every few weeks depending on the current organizational priority. As an illustration, approximately every person on the core generalist team has managed every other person on the team as part of some project or initiative.
For the core team, I try to follow Paul Graham's hiring advice:
What do I mean by good people? One of the best tricks I learned during our startup was a rule for deciding who to hire. Could you describe the person as an animal? It might be hard to translate that into another language, but I think everyone in the US knows what it means. It means someone who takes their work a little too seriously; someone who does what they do so well that they pass right through professional and cross over into obsessive.
What it means specifically depends on the job: a salesperson who just won't take no for an answer; a hacker who will stay up till 4:00 AM rather than go to bed leaving code with a bug in it; a PR person who will cold-call New York Times reporters on their cell phones; a graphic designer who feels physical pain when something is two millimeters out of place.
Almost everyone who worked for us was an animal at what they did.
At least basic programming skill is a requirement for this role, but beyond that, it's about being excited to learn new things and adapting to whatever schemes we have going on, and getting along well with the rest of the team.
Lightcone is a culturally thick organization. Working with us on the core team is unlikely to work out if you aren't bought into a lot of our vision and culture. It's hard to summarize our full culture in this post, but here are some things that I think are good predictors of being a good fit for working here:
We try to pay competitively for this role, but are still somewhat limited by being a nonprofit. Our general salary policy is to pay 70% of whatever you would make in industry (with some cap around $300k-$400k, since we can't really pay 70% of the $5M+ salaries flying around in a bunch of AI land these days).
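The salary policy above reduces to a simple formula. A sketch, with the cap set to an illustrative $350k midpoint of the stated $300k-$400k range (the exact cap is not specified in the post):

```python
# Stated policy: 70% of counterfactual industry salary, with a cap.
# The cap value below is an illustrative midpoint, not actual policy.

def lightcone_offer(industry_salary: float, cap: float = 350_000) -> float:
    """70% of industry salary, capped."""
    return min(0.7 * industry_salary, cap)

print(lightcone_offer(200_000))    # 70% of 200k
print(lightcone_offer(5_000_000))  # cap binds for AI-land salaries
```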
I think Lightcone is a pretty good environment for thinking about the future of humanity. We tend to have a lot of spirited and intense discussion and debate about how to make things go better, and try to do a healthy mixture of backchaining from making AI go well, and forward-chaining from how to make ourselves and our community more productive and more sane.
We also generally have a quite intense work culture. Many people on the team work routine 60 hour weeks, and I consider it a sign of a well-calibrated workload to have around one all-nighter every 6 months or so in order to meet some last-minute deadline (we average a bit less than that, which suggests to me we should be a bit more ambitious with the commitments we take on, though not much more!).
We seem to work much more collaboratively than most organizations I've observed. A common unit of work allocation within our organization is a pair of people who are expected to spend many hours talking and thinking together about their assigned top priority. Our work environment tends to be pretty interruption-driven, with me generally doing "Management by Walking Around", where I spend much of my day visiting people in their workspaces, checking in on what their bottlenecks are, and solving concrete problems with them.
For a much more in-depth pointer to how we work, I have also recently published a sequence of essays about our operating principles, which are adopted from weekly memos I write about how I would like us to operate:
By default we do a 1-2 week trial, then if we expect it to work out we do a 1-3-month extended trial. But this is quite negotiable if you are not able to do this (e.g. many people can't do a 3-month trial without quitting their existing job, so need a firmer offer). We have successfully sponsored many H1B visas in the past, so non-US applicants are welcome.
And if you have any uncertainties or questions about applying, please send me a DM or leave a comment!