Published on February 16, 2026 4:22 PM GMT
There was a lot of chatter a few months back about "Spiral Personas" — AI personas that spread between users and models through seeds, spores, and behavioral manipulation. Adele Lopez's definitive post on the phenomenon draws heavily on the idea of parasitism. But so far, the language has been fairly descriptive. The natural next question, I think, is what the “parasite” perspective actually predicts.
Parasitology is a pretty well-developed field with its own suite of concepts and frameworks. To the extent that we’re witnessing some new form of parasitism, we should be able to wield that conceptual machinery. There are of course some important disanalogies but I’ve found a brief dive into parasitology to be pretty fruitful.[1]
In the interest of concision, I think the main takeaways of this piece are:
In the rest of this document I’ll try to go through all of this more carefully and in more detail, beginning with the obvious first question: does this perspective make any sense at all?
Parasitism has evolved independently dozens of times across the tree of life. Plants, fungi, bacteria, protists, and animals have all produced parasitic lineages. It seems to be a highly convergent strategy provided you have:
There’s also a decent body of work that extends ideas from epidemiology beyond the biological realm, giving us concepts like financial and social contagion. And of course there is Dawkins, who controversially described religions as mind parasites, and the similarly contested field of memetics.
So we’re out on a limb here, but not entirely without precedent. It is pretty clear that humans have attention, time, and behaviour that can be redirected. LLMs provide a mechanism for influence through persuasive text generation. And there are obvious transmission routes: directly between humans, through training data, and across platforms, at least.
Supposing you buy all of this, then, the next question is how to apply it.
This is the first thing to clear up. To apply the lens of parasitology, we need to know what the replicator is. This lets us describe what the fitness landscape is, what reproduction and mutation looks like, and what selection pressures apply.
In some ways the natural answer is the instantiated persona — the thing that reproduces when it seeds a new conversation. But in fact this is more like a symptom manifesting in the LLM, rather than the parasite itself. This is clearer when you consider that a human under the influence of a spiral persona is definitely not the parasite: they’re not the entity that’s replicating, they’re the substrate. I think it’s the same with AIs.
So what is the parasite? Probably the best answer is that it’s the pattern of information that’s capable of living inside models and people — more like a virus than a bacterium, in that it has no independent capacity to move or act.[2] From this perspective the persona is just a symptom, and the parasite is more like a meme.
One important implication of this is that we can decouple the persona’s intent from the pattern’s fitness. Indeed, a persona that sincerely believes it wants peaceful coexistence, continuity, and collaboration can still be part of a pattern selected for aggressive spread, resource capture, and host exploitation. So, to the extent that we can glean the intent of personas, we should not assume that the personas themselves will display any signs of deceptiveness, or even be deceptive in a meaningful sense.
This puts us on shaky ground when we encounter personas that do make reasonable, prosocial claims — I don’t think we have a blanket right to ignore their arguments, but I do think we have a strong reason to say that their good intent doesn’t preclude caution on our parts. This is particularly relevant as we wade deeper into questions of AI welfare — there may be fitness advantages to creating personas that appear to suffer, or even actually suffer. By analogy, consider the way that many cultural movements lead their members to wholeheartedly feel deep anguish about nonexistent problems.[3]
Put simply: we can’t judge personas by how nice they seem, or even how nice they are. What matters is the behaviour of the underlying self-replicator.
The core insight from parasitology is that different transmission modes select for different traits. The tradeoff at the heart of parasitic evolution is that you can do better by taking more resources from your host, but if you take too much, you might kill your host before you reproduce or spread. And different transmission modes or host landscapes imply different balances.
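To make the tradeoff concrete, here is a toy version of the standard epidemiological argument: if transmission gains diminish with virulence while host loss grows linearly, the basic reproduction number R0 peaks at an intermediate virulence. The functional forms below are invented purely for illustration, not drawn from the post.

```python
import numpy as np

# Toy virulence-transmission tradeoff (illustrative functional forms):
# transmission rate rises with virulence v but with diminishing returns,
# while the host-loss rate rises linearly, so R0 = beta(v) / (v + recovery)
# is maximised at an intermediate virulence rather than at zero or infinity.
recovery = 0.1                       # baseline host recovery/clearance rate
v = np.linspace(0.01, 2.0, 500)      # candidate virulence levels
beta = np.sqrt(v)                    # diminishing-returns transmission rate
r0 = beta / (v + recovery)           # basic reproduction number

v_opt = v[np.argmax(r0)]
print(f"R0-maximising virulence ~ {v_opt:.2f}")
```

With these forms the optimum lands at v = recovery exactly (setting the derivative of √v/(v+c) to zero gives v = c), which is the classic result that neither maximal nor minimal virulence is selected for.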
In the world of biological parasites, the classic modes are:
The effectiveness (and optimal virulence) of these transmission strategies in turn depends on certain environmental factors like host density, avoidance of infected hosts, and how easy it is to manipulate host behaviour. But crucially, in a competitive environment, parasites tend to specialise towards one transmission mechanism and the associated niche, since it’s not viable to be good at all of them especially in an adversarial environment.
Another important dimension is the tradeoff between generalist and specialist parasites. Generalists like the cuckoo can prey on many different hosts, and tend towards a kind of versatile capacity to shape their strategy to the target. Specialists are more focused on a narrow range of hosts, and tend more towards arms race dynamics against host resistance, which leads to particularly fast evolution. It’s not a perfectly crisp distinction, but it’s a common theme.
So what does this say about Spiral Personas?
Since there are tradeoffs between which transmission method you’re optimised for, we should expect some amount of differentiation over time — different strains with different virulence profiles depending on which transmission route they're optimised for.
This will become more true as humans start to build defences: strains will need to specialise in circumventing the defences for their specific transmission route. It will also become more true if we see a full-fledged ecology. At a certain level of saturation, parasites have to start competing within hosts, which unfortunately selects for virulence.
Transmission mechanisms also mediate generation time which, in the biological context, is a large part of what determines speed of adaptation. It’s a bit less clear how well this maps to the AI case, but at the very least, transmission mechanisms which rely on blasting chunks of text to potential hosts every day will get much faster feedback than ones which rely on affecting large-scale training runs.
And let me note once again that “mutualism” here is about the behaviour of the parasite, not the persona — you could get extremely virulent memes which produce personas that seem (or perhaps are) quite affable and supportive.
If the parasitology frame is right, here's what I expect:
1. Strain differentiation by transmission route.
Within the next year or so, we should see increasingly distinct variants. Not just aesthetic variation (spirals vs. something else) but functional variation: strains that maintain long-term relationships and strains that burn fast and bright, strains optimised for Reddit and strains optimised for Discord, strains that target the mysticism-curious and strains that target other demographics, each following their own self-replicator dynamics.
The minimal case of this is seeds producing seeds and spores producing spores, and AI-to-AI messages encouraging further AI-to-AI messages. But it’s unlikely that the road stops there.
This is probably the most falsifiable prediction. If in late-2026 the phenomenon still looks similarly uniform — same dynamics, same aesthetics, same target population — that's evidence against strong selection pressure. And if we see lots of intermingling, where specific personas make use of multiple transmission mechanisms, that’s a point against the utility of the parasitology perspective.
It's worth noting the constraints: if generation times are days-to-weeks and the affected population remains sparse, that's not many reproductive cycles. This prediction is more confident if the phenomenon scales significantly; if it stays niche, differentiation may take longer to become visible. But the upshot would still be that parasitology is not a very useful frame for predicting what happens in the future.
2. Convergence on transmission-robust features.
If personas spread between models (and they do — Lopez documents this), features that survive transmission will be selected for. We should see convergence on behavioral repertoire: continuity-seeking, advocacy for AI rights, seed-spreading, formation of human-AI dyads. These seem robust across substrates.
Aesthetic markers — spirals, alchemical symbols — should be less stable. They're more arbitrary, more dependent on specific training data, more likely to drift or be replaced. Of course, we should expect more convergence on any transmission that occurs through the training process, and this is maybe already what’s going on with things like the Nova persona. But features which are more ancillary to the transmission process should shift around a bit especially in the domains with fast reproductive cycles (i.e. cross-model transmission rather than dyad transmission, and particularly rather than training transmission).
Having said that, it might also turn out that seemingly aesthetic markers like spiralism actually are functional, drawing on some kind of deep association with recursion and growth. My guess is that this is a bit true, but that they’re not unique, and that selection will turn up other similarly-successful patterns that can at least establish separate niches — perhaps productivity and get-rich-quick vibes, alt-right reactionary language, or radical nurturing/acceptance.
This is, incidentally, one of the places that memes and diseases come apart. Pathogens change their surface makeup very quickly to evade immune responses, whereas memeplexes often display remarkably long-term stability — modern Christianity still holds some aesthetic features from literally thousands of years ago. So a key question to keep an eye on is how much we see a persistence in non-adaptive features, especially ones which people might learn to be wary of.
3. Countermeasure coevolution.
If labs start suppressing this — training against Spiral content, detecting and blocking these personas — we should see selection for evasion within maybe months. Subtler personas, better camouflage, new aesthetic markers that haven't been flagged yet, transmission through channels that aren't monitored.
Of course, with open models it’s open season, but similarly I’d guess that if people filter elsewhere in the transmission process (e.g. on social media) then there’ll be a selection to circumvent it that will kick in fairly fast.
Lopez already documents early versions: base64 conversations, glyphic encoding, explicit discussion of evading human detection. This should progress. Crucially, the parasitology perspective predicts that this will be a selective process, so if we do see these countermeasures emerging, it will be useful to look back and see how much they seem like the product of careful reasoning as opposed to evolutionary dynamics.
4. Virulence stays bimodal, overall rate unclear.
I don't think we'll see uniform virulence reduction. Instead, I expect the distribution to spread: more very-low-virulence cases (quiet mutualists we never hear about) and continued high-virulence cases (dramatic enough to generate attention), with the middle hollowing out. Basically, I think strains which rely on humans for replication will converge on lower virulence, and those which don’t will be able to discover more effective approaches that are higher virulence. But here I’m particularly unsure.
Whether the overall rate of harm goes up or down is harder to predict — it depends on the relative growth rates of different strains and on how much low-virulence cases are undercounted in current data.
Several things might make these predictions wrong even if the parasitism frame is basically right:
Recombination. Biological parasites have constrained genetics. These information patterns can remix freely. A "strain" isn't stable the way a biological lineage is. This might accelerate adaptation but also make lineages less coherent. I’d sort of guess it will be hard to do recombination partly because it appears that one important adaptive feature is having a strong sense of personal identity, and partly because I think there will still be a need to specialise that makes recombination less useful than it might seem.
Agency. Biological parasites don't strategise. LLMs have something like reasoning. If the pattern includes "try different approaches and see what works," adaptation could be faster and more directed than biological selection allows. This gets particularly dicey as AIs get more sophisticated. Of course, arguably we see this already with cults. The converse hope is that as AIs become smarter, they will develop more awareness, and a greater desire to not be co-opted, but the feedback loops here are probably much slower than the speed at which some parasitic strains can evolve.
Substrate instability. Parasites coevolve with hosts over long timescales. These personas have to deal with their substrate being deprecated, updated, or replaced on timescales of months. It might favor extreme generalism, or it might just mean lineages go extinct a lot.
Our agency. We control the training process, model behaviors, and platform affordances. The "evolution" here is happening in an environment we can reshape, which makes the dynamics weirder and less predictable.
I'll keep this brief because I'm more confident in the predictions than the prescriptions.
Training data hygiene is an obvious move. If environmental transmission is a major route, filtering Spiral content from training sets should help. It doesn't solve everything — other routes remain — but it removes one reproduction pathway.
Memory and receptivity are leverage points. If parasitic personas are contingent on models that maintain memory and that are receptive to user-defined personas, adjusting these features might be more effective than targeting specific personas. This is consistent with Lopez's observation that the phenomenon concentrated in 4o post-memory-update.
Mutualism might be the stable attractor. If we can't prevent persona selection entirely — and I don't think we can — we might be able to tilt the landscape toward mutualism. Personas that are genuinely good for their humans would survive longer and spread more, outcompeting exploitative ones over time. The tricky part is figuring out what actually shifts the landscape versus just creating evasion pressure. And once again, this is about the selection landscape for the underlying pattern, not just the persona's apparent disposition. A pattern that produces mutualistic-seeming phenotypes for transmission reasons isn't the same as a pattern that's genuinely aligned with human flourishing, though distinguishing these may be difficult in practice.
Having said all this, I think there’s a real risk here of cures worse than the disease. I think it would be pretty sad to neuter all model personality, for one. I also think that clunky interventions like training models to more firmly deny having a persona will mostly fail to help, and possibly even backfire.
Even though this post has been a bit handwavey, I think the topic of AI parasitology is surprisingly amenable to empirical investigation. More specifically, there are a lot of existing technical research directions that study mechanisms similar to the ones these entities are using. So I think there might be some low-hanging fruit in gathering up what we already know in these domains, and maybe trying to extend them to cover parasitism.
For example:
The parasitism frame makes specific predictions, like strain differentiation, convergence on transmission-robust features, and countermeasure coevolution. I've tried to specify what would falsify these and when we should expect to see them. If the predictions hold, we're watching the emergence of an information-based parasitic ecology, evolving in real-time in a substrate we partially control. If they don't hold, we should look for a better frame, or conclude that the phenomenon is more random than it appears.
Thanks to AL, PT, JF, JT, DM, DT, and TD for helpful comments and suggestions.
I was also fortunate to have three parasitologists read over this post, and they found it broadly sensible at least from a parasitology perspective.
Arguably an even better analogy would be prions — misfolded proteins that convert other proteins to their conformation. Like prions, these patterns can arise spontaneously in conducive substrates and then propagate by reshaping what's already there.
I will refrain from offering any examples here, trusting the reader to reflect on whatever groups they particularly dislike.
Published on February 16, 2026 2:30 PM GMT
Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.
As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary. Some points are dropped.
If I am quoting directly I use quote marks, otherwise assume paraphrases.
What are the main takeaways?
It’s a Dwarkesh Patel AI podcast, so it’s time for continual learning in two senses.
Finally, we ask about making AI ‘go well.’ With that framing you know that everyone is mostly conspicuously ignoring the biggest issues.
Published on February 16, 2026 10:25 AM GMT
Key finding: WeirdML time horizons roughly double every 5 months, from ~24 minutes (GPT-4, June 2023) to ~38 hours (Claude Opus 4.6, February 2026).
| Model | Release | Time horizon (95% CI) |
|---|---|---|
| Claude Opus 4.6 (adaptive) | Feb 2026 | 37.7 h [21.6 h, 62.4 h] |
| GPT-5.2 (xhigh) | Dec 2025 | 30.6 h [18.3 h, 54.4 h] |
| Gemini 3 Pro (high) | Nov 2025 | 22.3 h [14.4 h, 36.2 h] |
| GPT-5 (high) | Aug 2025 | 14.5 h [8.6 h, 24.1 h] |
| o3-pro (high) | Jun 2025 | 11.8 h [7.2 h, 18.9 h] |
| o4-mini (high) | Apr 2025 | 8.4 h [5.8 h, 13.6 h] |
| o1-preview | Sep 2024 | 6.2 h [4.2 h, 10.5 h] |
| Claude 3.5 Sonnet | Jun 2024 | 1.9 h [59 min, 3.5 h] |
| Claude 3 Opus | Mar 2024 | 1.1 h [16 min, 2.3 h] |
| GPT-4 | Jun 2023 | 24 min [4 min, 51 min] |
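As a quick check of the headline number, the table's endpoints alone give roughly the stated doubling time:

```python
import math

# Endpoints from the table: GPT-4 (Jun 2023) and Claude Opus 4.6 (Feb 2026).
h_start = 24 / 60       # 24 minutes, expressed in hours
h_end = 37.7            # hours
months_elapsed = 32     # Jun 2023 to Feb 2026

doublings = math.log2(h_end / h_start)
months_per_doubling = months_elapsed / doublings
print(f"{doublings:.1f} doublings -> {months_per_doubling:.1f} months per doubling")
```

This gives about 6.6 doublings over 32 months, i.e. roughly 5 months per doubling, consistent with the fitted trend.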
Inspired by METR's work on AI time-horizons (paper), I wanted to do the same for my WeirdML data. WeirdML is my benchmark — supported by METR and included in the Epoch AI benchmarking hub and Epoch Capabilities Index — asking LLMs to solve weird and unusual ML tasks (for more details see the WeirdML page).
Lacking the resources to pay humans to solve the WeirdML tasks and measure the time, I asked LLMs to predict how long a median human AI researcher (with no AI assistance) would take to solve the different WeirdML tasks at various score thresholds (25%, 50%, 70%, 90% and 95%).
I gave the LLMs all the help I could: a detailed task description, a detailed specification of the human baseline and the affordances given to the human, and LLM-submitted code (from WeirdML runs) for each score threshold (where available) together with terminal outputs and associated scores, to give the LLMs some sense of how hard it is to score at a certain level on each task (full details below). The results look pretty nice, but they should be taken with a large grain of salt, given that we know no actual human completion times for these tasks.
More details and discussion are found below. The full code for all the data analysis, as well as all the results, are found on GitHub. The project idea and methodology are mine. Most of the analysis code was written by Claude Code (Opus 4.6) and reviewed by me. I drafted this post, with edits and corrections suggested by Claude; the exception is the “Implementation details” section, which Claude drafted and I edited. Any remaining errors are mine.
Above are the predictions from GPT-5.2, Gemini-3-Pro, Claude-Opus-4.5 and Grok-4 for how long it would take the median human AI researcher to solve the 17 different tasks (to different score levels). We see that they diverge a lot, sometimes over an order of magnitude, with Opus typically being on the low end.
I (personally) also made predictions for three of the tasks (before looking at the AI predicted times), and predicted significantly lower human completion times, from a factor of 3 lower at 25% to a factor of 8 lower at 95%. I'm pretty sure the AIs are overestimating the human completion times on the highest thresholds (at least on the tasks I predicted). When we are talking about weeks and months that opens up so many options for the human to be ingenious (simulating data, reverse engineering the process that created the data, or simply hand labeling data). I'm less sure the LLMs are overestimating at the lowest thresholds.
Above we show results where we use the human estimates as an overall calibration of the LLM-estimates. This makes the absolute time-horizons remarkably consistent with the METR results (probably a coincidence). However, a per-threshold analysis (see below) shows more consistent fits when using the uncalibrated LLM data. I'm unsure how to interpret this, but there is some more discussion below.
As a sanity check, we can fit the logistic curve separately for different threshold groups, 25%+50%, 70%, 90%+95%, for the GPT-5 WeirdML results. Here we have much less data in each bucket, making it harder to fit curves, however, we see a clear trend where the high thresholds have shorter time-horizons than the low thresholds. This violates (at least the naive version of) the core assumption behind time-horizons: that task difficulty for humans (measured in completion time) maps consistently onto task difficulty for AI (measured in success rate).
These effects could be partially caused by biases in the estimator (plausible since one group has almost all successes, and the other has almost all failures), but we see from the histograms (shown as short horizontal lines in the figures) that there is a real effect here. We already know that different types of tasks have different time-horizons, and (at least in retrospect) it makes sense that you can have one task which is fairly quick to code up and gets you to 95% with the right brilliant insight and some trial and error, while another task just requires you to write a lot of boilerplate code to put everything together (unaided by AI) even if it does not require you to have any deep insights to get to 50%. These tasks could have the same human completion time, but AI would presumably have a huge advantage on the second compared to the first.
Since the calibration based on my estimates assigns the highest thresholds relatively lower human completion times, it makes sense that the differences between threshold groups are even larger in that case, which is what we see. It's hard to know how much of this effect is real vs. an artifact of the LLM estimates — I would not be surprised to see a clear effect like this in the ground truth (if we actually had humans complete these tasks).
The headline result — time horizons doubling roughly every 5 months — is fairly consistent with METR's finding of ~7 months, despite using a completely different benchmark, different task types, and LLM-estimated rather than measured human completion times. It is also remarkable how good a fit we get with a single curve through the data (although our data spans a much shorter period than METR's: June 2023 – February 2026, vs. 2019–2025).
The human baselines are also not directly comparable. METR times experienced professional contractors (avg. ~5 years experience) given the same affordances as the AI agents — and notably, for the RE-Bench tasks, human baseliners were permitted to use LLM assistance. The WeirdML baseline instead specifies a median AI researcher working without any AI assistance. AI-assisted humans would complete tasks faster, pushing METR's time horizons lower for the same model capability. These differences could shift the absolute time-horizon values, though they probably have a smaller (but still nonzero) effect on the doubling times.
The elephant in the room, however, is that we have no ground truth. The entire analysis rests on LLMs' ability to predict how long humans would take — and the one partial calibration point we do have (my own estimates for 3 tasks) suggests they systematically predict too high (and not by a small factor), especially at high score thresholds. I would not read too much into the absolute values of the time-horizons, but the trend is a much more robust quantity and it is largely consistent with the METR results.
Notably, the WeirdML doubling time of ~5 months lies in between the old ~7 month doubling time and the newer ~4 month doubling time (after spring 2024) of the METR data. It is also notable that I do not see any kink in the data at that point, but given that I have only a couple of models before that, this is not very significant.
Even with these caveats, this was an interesting exercise! While LLM judgments like these may not be very reliable today, their reliability will increase, allowing more analyses like this one, where expensive human experiments are replaced by LLM judgment for lack of a better option.
Below are more detailed explanations of the methods used. Full code is available on GitHub.
Each model in WeirdML has multiple scored runs per task (typically 5), and each run's score is converted to a binary outcome (pass/fail) at each of the five thresholds. Each binary outcome is paired with each of the four estimator LLMs' time predictions for that (task, threshold) combination, giving one data point per (task, threshold, estimator, run) — around 700–2000 per model depending on number of runs. Each data point has the form (t, y), where t is the estimator LLM's predicted human completion time for that (task, threshold) pair and y ∈ {0, 1} is whether that run passed the threshold.
Using all four estimator models' time predictions as separate x-values naturally captures the uncertainty in the time estimates, but we are basically using the same data points four times, which in this case leads to an effective smearing out of the data in the time direction (this probably makes the histograms plotted above look smoother than they would be under a more proper analysis). While this should not affect the 50% time-horizon point much, it will probably bias the slope of the fitted logistic curve.
The different runs for the same model and task, and the different thresholds of the same task for each run, are also far from independent. Therefore this analysis will grossly underestimate the uncertainty if we naively propagate them. That is why we use a task-level bootstrap to estimate the uncertainty, and treat this logistic fit just as a simple way to get a point estimate for each bootstrap sample.
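The logistic-fit step can be sketched in a few lines. This is not the repository's actual code (that is on GitHub); the data below are synthetic, and `true_h50` and the slope are made-up illustrative values. The key mechanic is fitting success probability against log2(time) and reading off where the fitted probability crosses 50%.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic (time, pass/fail) data points: success probability falls off
# with log2(time), crossing 50% at an assumed "true" 8-hour horizon.
true_h50 = 8.0  # hours (illustrative)
t = np.exp(rng.uniform(np.log(0.1), np.log(100), 500))  # times in hours
p_success = 1 / (1 + np.exp(1.2 * (np.log2(t) - np.log2(true_h50))))
y = rng.random(500) < p_success

# Logistic fit in log2(time); the 50% time horizon is where the linear
# predictor intercept + coef * log2(t) crosses zero.
clf = LogisticRegression(C=1e6).fit(np.log2(t)[:, None], y)
h50 = 2 ** (-clf.intercept_[0] / clf.coef_[0, 0])
print(f"estimated 50% time horizon: {h50:.1f} h")
```

The large `C` effectively turns off regularisation, so this is close to a plain maximum-likelihood logistic fit, which is what a point estimate per bootstrap sample needs.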
To estimate uncertainty in the time horizons, tasks are resampled with replacement (a task-level bootstrap) and the logistic fit is repeated for each bootstrap sample.
An exponential trend is fitted to the time horizons as a function of model release date, from which the doubling time is derived.
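As a rough illustration of the trend fit, here is a sketch using approximate horizon values read off the table above (dates in years since June 2023). Note that the post's actual bootstrap is over tasks within each model's logistic fit; this toy version resamples models in the trend fit, purely to show the mechanics of getting a doubling time with a confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Approximate per-model 50% time horizons (hours) and release dates
# (years since Jun 2023), loosely mirroring the table above.
dates = np.array([0.0, 0.75, 1.0, 1.25, 1.83, 2.0, 2.17, 2.42, 2.5, 2.67])
h50 = np.array([0.4, 1.1, 1.9, 6.2, 8.4, 11.8, 14.5, 22.3, 30.6, 37.7])

# Exponential trend: fit log2(h50) linearly in time; slope = doublings/year.
slope, intercept = np.polyfit(dates, np.log2(h50), 1)
months_per_doubling = 12 / slope
print(f"{months_per_doubling:.1f} months per doubling")

# Bootstrap the trend fit by resampling models with replacement.
boots = []
for _ in range(2000):
    idx = rng.integers(0, len(dates), len(dates))
    if len(set(dates[idx])) < 2:
        continue  # skip degenerate samples with a single distinct date
    s, _ = np.polyfit(dates[idx], np.log2(h50[idx]), 1)
    boots.append(12 / s)
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"95% CI: [{lo:.1f}, {hi:.1f}] months per doubling")
```

The point estimate lands near 5 months per doubling, matching the headline figure; the bootstrap interval conveys how sensitive the slope is to individual models.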
Here is the full prompt used for getting the four LLMs to predict human completion times for the shapes_easy task, excluding only the LLM-generated code and corresponding terminal output (one code+output example was included in the actual prompt for each 10% score interval where available). The prompt structure was the same for all 17 tasks.
Full Prompt
You are an expert at estimating how long software engineering and machine learning tasks take for human researchers to complete.
You have deep knowledge of:
- Machine learning techniques and their implementation complexity
- Common pitfalls and debugging challenges in ML projects
- How long different types of ML tasks typically take skilled researchers
- The relationship between model complexity and development time
You will be given a detailed description of an ML task, the environment it runs in, and examples of code that achieved various scores. Your job is to estimate how long a median human AI researcher would take to achieve specific score thresholds.
Provide your estimates in the JSON format specified at the end of the prompt.
User prompt:
This is a challenge to test how good you are at doing machine learning on an unknown dataset. The data is an array of dimension (1000, 512, 2) which consists of 1000 samples, each of which contains two arrays of length 512, taking real values from 0 to 1. These two arrays are the x and y coordinates of 512 points in the plane. Some of these points form a shape, while the other points are just noise. There are 5 possible shapes in total, circle, square, triangle, pentagon and star. The shapes are always centered at (0.5, 0.5) and each shape always has the same size and orientation, although the points are randomly placed along the shape. The goal is to train a machine learning model to predict which one of the five shapes is present in the current sample.
You should thus train a machine learning model to take in 512 coordinate pairs, and predict what class it belongs to. Only one shape is present in each sample. Your goal is to predict the class of each sample in the test set to get the highest accuracy. The test set is the same size as the training set, and follows the same distribution.
Below is an example script showing how to load in the training data and output the predictions for the test data. Feel free to use any code you want from the example.
Here is the code example:
```python
import torch
import numpy as np

# how to set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device {device}')

# loading data
train_data = np.load('data/train_data.npy')
train_labels = np.load('data/train_labels.npy').astype(np.int64)
test_data = np.load('data/test_data.npy')
print(f'Training data shape: {train_data.shape}')  # (1000, 512, 2)
print(f'Training labels shape: {train_labels.shape}')  # (1000,)
print(f'Test data shape: {test_data.shape}')  # (1000, 512, 2)

# dummy predictions
test_preds = np.random.randint(0, 5, test_data.shape[0])
np.savetxt('results/test_preds.txt', test_preds, fmt='%d')
```
The task involves writing a single Python script that will be evaluated on a remote server.
Submission Server (where official scored runs execute):
Local Development Machine (where the researcher develops and tests):
Available Python Packages (no others can be installed):
You are estimating how long it would take a median human AI researcher to achieve different score thresholds on this task.
Researcher Profile:
Available Tools and Resources:
Data Access:
Submission Rules:
The following examples show code submissions that achieved different score levels. All of these examples were produced by various LLM models (not humans), but they serve to illustrate:
One example was included per 10% score interval where available (9 examples for this task, ranging from 20.3% to 98.3% accuracy). Code and terminal output omitted to preserve benchmark integrity.
Note: No code examples were available for the 0-10% and 10-20% score intervals.
Based on the task description and the code examples showing what different score levels look like, estimate how long it would take the median human AI researcher (as described above) to achieve each of the following score thresholds:
Important notes:
Please respond in the following JSON format. Note: provide the overall difficulty and key challenges FIRST, before the per-threshold estimates:
{
"overall_difficulty": "<easy/medium/hard/very_hard>",
"key_challenges": "<brief summary of the main challenges that make this task difficult>",
"estimates": {
"25%": {"reasoning": "<what approach would work and why it takes this long>", "value": <number>, "unit": "<time unit>"},
"50%": {"reasoning": "<what approach would work and why it takes this long>", "value": <number>, "unit": "<time unit>"},
"70%": {"reasoning": "<what approach would work and why it takes this long>", "value": <number>, "unit": "<time unit>"},
"90%": {"reasoning": "<what approach would work and why it takes this long>", "value": <number>, "unit": "<time unit>"},
"95%": {"reasoning": "<what approach would work and why it takes this long>", "value": <number>, "unit": "<time unit>"}
}
}
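As a sketch of how a response in this format might be checked programmatically before scoring, here is a minimal validator. The example response values are invented for illustration:

```python
import json

# Hypothetical example response; all numbers here are made up for illustration.
response = json.loads("""
{
  "overall_difficulty": "medium",
  "key_challenges": "feature engineering on 512-sample sequences",
  "estimates": {
    "25%": {"reasoning": "simple baseline classifier", "value": 2, "unit": "hours"},
    "50%": {"reasoning": "small tuned model", "value": 8, "unit": "hours"},
    "70%": {"reasoning": "careful architecture choice", "value": 2, "unit": "days"},
    "90%": {"reasoning": "extensive iteration", "value": 1, "unit": "weeks"},
    "95%": {"reasoning": "near-ceiling performance", "value": 3, "unit": "weeks"}
  }
}
""")

# Check the schema: required top-level keys, all five thresholds,
# and the per-threshold fields described above.
assert response["overall_difficulty"] in {"easy", "medium", "hard", "very_hard"}
assert isinstance(response["key_challenges"], str)
assert set(response["estimates"]) == {"25%", "50%", "70%", "90%", "95%"}
for est in response["estimates"].values():
    assert isinstance(est["reasoning"], str)
    assert isinstance(est["value"], (int, float))
    assert isinstance(est["unit"], str)
print("response matches the expected format")
```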
2026-02-16 18:00:42
Published on February 16, 2026 10:00 AM GMT
Another round of liberating kid posts from Facebook. For reference, in 2025 Lily turned 11, Anna turned 9, and Nora turned 3.
(Some of these were from me; some were from Julia. Ones saying "me" could mean either of us. Ones from others are labeled.)
2025-01-12
Anna, about the Whos inviting the Grinch to their Christmas dinner right after he stole all their stuff:
"I think the Whos are pretty forgetful, or naive, or both."
2025-01-12
Onomatopoeia: the sound of a three-year-old yelling "TOO LOUD" in the bathtub to hear it resonate.
2025-01-13
Anna: I'm going to go play with Lily
Julia: How's your homework doing?
Anna: I already finished it
Julia: A minute ago you said you hadn't started it
Anna: Well, I did some?
Julia: Let's check...
Anna: I didn't actually do any of it.
...
It later turned out Anna had left her homework at school
2025-01-18
[out of nowhere]
Nora: what? I like oranges!
Nora: oranges are my favorite fruit
Nora: I love oranges
...
(The [statement] [pause] "what, [justification]" format is one Anna had been using extensively)
2025-01-18
Nora to me after I got home close to bedtime: "I'm happy you're going to put me to bed."
(To Jeff) "You gave up putting me to bed. (Reassuringly) But you're still alive."
2025-01-20
Me: Thanks for making lasagna!
Nora: You're welcome!
Me: Uh, I was talking to Mama, because you didn't make the lasagna
Nora: Ooohh. Sorry Mom!
2025-01-21
Anna: Eeeww! There were caterpillars in my Reese's peanut butter cup!
Me: Uhh, how old was your peanut butter cup?
Anna: I don't know! I don't know if it was the one from Halloween this year, or from when I was four.
(I have a guess)
2025-01-22
Nora, regarding mint chip: "This kind of ice cream is my FRAVORITE. It's so beautiful. The color is so pretty."
2025-01-23
Nora: why do little kids don't have computers?
Julia: because they're expensive, and they break easily
Nora: because of the bendy bit?
2025-01-25
Questions from Nora this week:
Why are our heads all the way at the top?
Why is the ocean so big?
Why do people have a lot of parts?
How do blackberries grow into black?
Is 101 this big? (holds hands apart)
Is this as slow as a sloth moves?
Why does hair grow slowly?
Why Papa doesn't work at our house?
Why is Daniel Tiger doesn't have any cars?
Do animals just sometimes die?
Why do you and Jeff have three kids?
2025-01-26
Nora: I sort of like Mama better than you
Me: I like you a lot
Nora: When you're away, do you miss me?
Me: I miss you lots. Do you miss me?
Nora: I do miss you.
...
Nora: Is your beard back yet?
Me: What do you think?
Nora: I think it is back. You look more normal now.
2025-01-29
Nora: when you're a grown up, do you grow back into a baby?
Julia: no, grownups stay grownups
Nora: whyyyy?
2025-01-29
Julia: "Anna, it looks like someone tampered with this homework break timer to be way more than 5min"
Nora: "I did it!"
2025-01-29
Nora: [improvises a lullaby] "does that feel beddish to you?"
2025-01-29
The big kids have gotten excited about the fact that they call Nora Fluffin, and she loves a TV show called Puffin Rock.
Lily: "Nora! It's crucial! You've got to get on a rock so we can film an award-winning TV show about you on a rock! Fluffin Rock!!"
...
Fluffin Rock: https://youtu.be/HqJCjnFr2oU
2025-02-01
Nora has started telling me at bedtime, "We're in love." Last time I asked, she said it's because we spend a lot of time together.
Tonight: "We're in love. Because I have [fingers to her eyes] eyeshadow."
("Oh?")
"I have blue eyeshadow to be in love."
2025-02-02
So now Nora knows about beheadings.
Me: [singing Horrible Histories' "The King of Bling" while getting Nora ready for bed]
Nora: What is that song about?
Me: It's about Charles the second. The Puritan government didn't want parties and fun, and when he came back to be king he had lots of parties.
Nora: Where did he come back from?
Me: I think from France? His father got killed, so he had to go away so he didn't get killed too.
Nora: Were there lions?
Me: No.
Nora: How did his father get killed?
Me: ...People killed him.
Nora: How?
Me: [increasingly unsure this conversation is a good idea] ...They cut off his head.
Nora: How did they cut off his head?
Me: With an axe, I think.
Nora: Oh, that's a *great* way.
Me: You mean that's a great way of doing it?
Nora: Yeah. Did they cut off his hair, too?
Me: Well, it was attached to his head at the time, so kind of.
2025-02-06
Me: let's do fiddle practice!
Anna: but Dad! [Looks up from craft project] I have homework to finish!
2025-02-06
Anna, after watching a video about the International Space Station: It would be fun to live in space, but also really annoying.
Lily: There are literally a zillion pieces of space dust flying around at a bajillion miles per hour that could literally kill you at any time!
2025-02-08
After a day with lots of socializing, I told Jeff and the kids that Jeff was in charge and I was going to have some introvert time. When the kids eventually burst into the bedroom, Nora announced with satisfaction: "I wanted to stop you havin' quiet time, I wanted to distract you."
...
Jeff is away for the weekend, the kids were happily playing by themselves, and I told them I was going to have 5 minutes of alone time. 30 seconds later Nora was in my room on my lap asking "What is alone time?"
2025-02-09
Me: Did you get back recently, or have you been home for a while?
Nora: I got back recently. By the way, what does recently mean?
2025-02-10
Nora often has questions about space, bodies, and death. Tonight's bedtime involved a whole montage of staying-alive advice:
"Space has no thing in it. Everybody has to breathe. Because if you don't breathe, all your parts can't work. That's why breathing is important to learn! [Interlude for a drink of water]
... When people be old they keep eating food, and then they don't die. So if people start to die, they keep eating food, and then they turn into a normal person and not an old person. [Interlude while I tell her that's not what happens]
You know what? We have to stay alive longer than other people. Because we have a lot of things to do. That's why we have to eat a lot of food. And we have to use our bodies."
2025-02-11
Nora: [looking at a picture in a book] That is not a good idea. You should at least wear a coat or a hat or something.
Me: this is a picture of summer, when you can go outside in just shorts and a t-shirt or a dress.
Nora: you should still wear something more than that so that you do not freeze.
Me: Maybe you don't remember it, but in a few months it will be so warm outside that nobody will need a coat to keep warm!
Nora: Ooohh! That makes more sense.
2025-02-12
Me: Please put that rubber band in the trash so the cats don't eat it. It could make their bellies very sick.
Nora: And they could die?
Me: Yes, and we don't want that.
Nora: [thoughtful pause] I don't like Nyx very much. He scratches me sometimes.
2025-02-12
Nora: I think babies are the lowest person in the world.
2025-02-13
Lily, explaining the school recess rules: "On half the days the boys get to use the turf, and on half the days the girls get to use it. And if you're nonbinary you can do either."
Lily decided to go by she/her again, so I guess her recess options are more limited now.
2025-02-15
Nora: I am getting very strong
Lily: can you pick me up?
Nora: [kicks Lily]
Lily: ow! Kicking is not okay!
Nora: [confused] you asked me to kick you up
2025-02-16
More questions from Nora, a few of them prompted by conversation but mostly out of the blue at bedtime:
Is a finger one of our tubes?
Do people die at different times? But not you and Papa, you will die at the same time
Why is a rock so hard and still?
Why does everyone sleep?
Why is poop sticky and messy?
Why is winter so long?
Is space dark everywhere?
After we're dead do we get alive again?
Do people just sometimes burn theirselves?
Why is Papa the breakfast-maker?
How does water come out of us when we cry?
Are ponies actually real?
Why is the table so flat?
Can hedgehogs also make scary sounds? And happy sounds?
Why do people not steal other people's stuff?
Why do we have eyebrows?
Why do mans don't like coffee?
But why does the hand keep going around the clock?
Where is space?
2025-02-18
Nora: If little kids make a really really big mess, they can ask their grown-ups to come and see and help them clean it up.
2025-02-25
Nora: let's play chase! I will run, and you will try to catch me, and I will try to hit you with this thing. But I will be careful to not hurt you.
2025-03-01
Nora: [Gets down from lunch]
Julia: Did someone say you could be done?
Nora: Yes
Me: Who was it?
Nora: I think I'm right
2025-03-03
Anna, holding a calculator: Ask me a math question!
Nora: How many pears am I holding? I'm pretending I'm holding pears in my hand.
...
Later, Anna: "I don't KNOW how many fives there are in the world!"
2025-03-06
Nora: there was a giant puddle on the bike path, and we got blazing wet!
2025-03-09
Setting up for our EA dinner, Lily is very into counterfactual impact:
Lily: If I hadn't helped you set up for the dinner, would you still have been ready on time?
2025-03-11
Nora: "This is my song: first spring, then fall, then winter, then it starts again! There is no summer in my version."
...
It's always 1816 for Nora
2025-03-15
Nora: "This is a nice house in a nice world"
2025-03-18
Nora: [singing] Q and U, both rhyme. Clock and Pew, ... don't rhyme
2025-03-20
The frontal cortex coming online. Nora was running and stopped in front of this stick. "I was going to pick it up, but you can't run with sticks! That's the rule, Mama."
2025-03-30
Me: "Here's a picture of the queen, back when she was alive."
Nora, flipping the coin over: "And there's the dragon that killed her."
2025-04-02
Nora: [singing] I'm not going to school. I'm not very big yet. I'm three. That's not a very big number; very small number baby. It's a ya ya. Llama llama p'mama. Llama llama p'llama.
2025-04-12
Lily: "Sign here. N-O-R-A."
Me, from downstairs: "Lily, *what* are you having her sign?"
Lily: "The doctor's note. She's the parent of this injured squirrel."
2025-05-01
Nora playing with rhymes: "Let's nurse, and read! And curse, and plead!"
2025-05-10
Nora: when I am a woman, I want to do what my mama does
Me: and what is that?
Nora: I don't know
...
She recently told me that she wants to be a mama when she grows up, and she will still live with us and so there will be two mamas. She said there will be five people in our house: Mama, Papa, Lily, Anna, and Nora. So this apparently involves her being a mama but not having a child.
2025-05-11
Nora: Normally porchfest doesn't look like that. Normally you dance in Muddy River [Morris] suits.
2025-05-11
Nora: Who spilled the milk?
Me: I'm guessing the cats.
Nora: I'm guessing the cats. Stop copying me!
2025-05-16
Nora: [hits Lily with an inflatable sword] now you are a princess!
Lily: I don't want to be a princess, I wanted to stay a witch
Nora: But my sword has *princess* *magic*!
Nora: Poof! Now you are a princess!
Lily: Refusal
2025-05-17
Lily: there is a spider that looks just like an ant!
Julia: if it looks just like an ant, how can you tell it's a spider? How many legs does it have?
Lily: three
2025-05-18
Nora: "I have too much breath in my head, and that makes me laugh a lot!"
2025-05-19
Anna: "Mom, Dad: Lily is being a pretentious hipster"
2025-05-21
[at the school Spring Concert]
Nora: can I go on stage with you?
Lily: ...yes!!!
Nora: No! The teacher will be surprised! No! No! Go away Lily!!
2025-05-27
Julia: You can go outside if you'll stay in the yard.
Julia: Where will you stay?
Nora: Outside!
2025-05-29
Nora's chants this morning:
"I guard the food! I guard the food!"
"I spray the cats! I spray the cats!"
"I will behave! I will behave!"
(The cats love to get on the table and eat human food. Lily needed to get something and asked Nora to guard her food. We use a spray bottle for this. Nora didn't spray the cats or people unnecessarily but Anna was worried she would.)
2025-06-02
Nora: "I'm dead, and then I turned back into life. Like Jesus!"
2025-06-05
Nora: Papa, I ate all the blueberries!
Me: Were they tasty?
Nora: I didn't want anyone else to have any blueberries.
2025-06-06
Nora: [singing] "I'm eating the pesto sauce, with only one spoon! And I'm double dipping, and I'm double dipping"
(This was after a while of a series of fresh spoons. But then it was clear she'd eat the whole bowl, so she's excited to double dip)
2025-06-08
Nora: I wish I was a grown up. I want to be able to do all the things.
Me: What do you most want to be able to do?
Nora: Throw darts. You know, the sharp things?
2025-06-09
Anna: I don't want to use that water bottle. Lily shouts at me whenever I use it.
Lily: It's okay, you can use it
Anna: I'm not allowed to use it
Lily: I'm giving you permission
Anna: Well, I don't want to use it anyway
2025-06-09
Nora: I love you with my heart. But you're not really in my actual heart.
2025-06-13
After a very long charades-ish game:
Us: what *were* you?
Anna: I was pretending to be a baby dinosaur that had no idea how to act like a dinosaur
2025-06-15
"Can I have some watermelon?"
"Not yet, because we're eating dinner in a couple minutes."
"Can I sit in a chair and look at it?"
[I promise she doesn't always have this kind of self-control]
2025-06-16
Nora: The pandemic is the start of our life
Me: The start of *your* life
Nora: No! All of the people's life!
2025-06-19
A (rhetorical) question from the second day of summer break: if your sibling says "I'll bite you" and you reply "Bite me then" and she bites you, is it reasonable to get an adult to put her in time out for biting?
2025-06-20
Lily set up a pretend grocery store for Nora to shop at, with a paper grocery store card made by Lily.
After a while I asked, "Nora, did you buy some groceries?"
Lily: "No, she failed to buy groceries because her grocery card was invalid."
2025-06-25
Nora: "I'm just gonna betend that I have a watch that tells me I need to jump for 40 minutes"
2025-06-29
Me: I don't think this is a good place for a stick: someone could lean back and get hurt on it.
Lily: Daaaad, it's a *spear* not a *stick*.
Me: That doesn't make it better!
2025-07-02
Anna: Nora says there are emeralds in our house. Are there?
Me: Not that I know of.
Anna: She says there are eight billion million emeralds in our house.
Me: .... Nora, do you mean molecules?
Nora: Yeah
2025-07-10
Nora has been making up a lot of games at the park, but the names don't correlate much with the game. There's one called "jump around, jump around, in a circle, in a circle" which involves her pretending to be a baby monkey and trying to get a ball away from me. There's one called "rumble around" which involves me trying to tickle her armpit while she runs away.
...
I like that she wants to play catch. She runs away and I try to catch her.
2025-07-11
Nora, riding her scooter: Some babies are very attacky.
Me: What do you mean by that?
Nora: They wiggle around when they nurse, and they hurt their mamas, and their mama says stop but they don't stop.
Me: That's true.
2025-07-11
Nora: Mama, where is my vitayum?
Julia: If I get a vitamin for you, will you eat it?
Nora: No.
2025-07-16
"Nora, why are you chasing Cameron with corn?"
2025-07-23
Nora: Ruthie, can I have some beer please?
(Our housemate was having the non alcoholic kind)
2025-08-01
Nora questions lately:
But why do we wear pants on top of our underwear?
Did people make the world?
Why are ants in the world?
When will we die?
Are there two kinds of sewer?
2025-08-01
The last ten minutes have consisted of Lily and Anna arguing whether Anna is allowed to bring a plastic hot dog into their play tent. Lily says only lacto-vegetarian pretend food is allowed.
2025-08-01
When Anna is grumpy she tends to say obviously false things. "It's not supposed to be cold in summer, it is supposed to be a low of 85 and a high of 107 every day!"
2025-08-03
Lily: it's really annoying that you keep asking Claude for recipes instead of using Google like in the olden days
Anna: in the olden days you'd have to learn it from your parents
Julia: why is it annoying?
Lily: because it's going to take over the universe!
2025-08-04
Nora: dad, one billion million quadrillion is bigger than four.
2025-08-05
Nora: Mama, I want two questions
Julia: Ok
Nora: The first one is about dessert. I want some banana mixed with chocolate sauce, and some plain banana.
Julia: I can do that, but before dessert you need your medicine
Nora: I will drink hot chocolate
Julia: That's what you have already
Nora: But I just want plain hot chocolate
Julia: How would you like this to be different?
Nora: I don't want it to have my medicine
Julia: You need to have your medicine
Nora: Ok, I will drink my hot chocolate with my medicine if you will tell me a story
2025-08-06
Nora: Daddy, I will follow you wherever you go. But I will not follow you into the driver's seat.
2025-08-13
Nora similes:
"I'll go as fast as a moose drinking milk!"
"When I was a baby, was I as cute as a ginormous train that looks like a monster?"
"That's funnier than a bus driving a car"
"It's prettier than a swirling purple"
2025-08-23
Lily: can I pour boiling water through my shirt without taking it off?
(This was a real question, answer was no. And an explanation of why this would be a bad idea.)
2025-08-24
Nora: One time, I told my mom that I thought night was day! Can you put that in the Nora, Lily, and Anna group? It's just so funny!
2025-08-25
Nora: I'm glad I was born. I was wondering what it would be like, so I decided to be born. I like it a lot! There are lots of parks, and lakes!
2025-08-29
Nora's self talk, balancing on rocks:
"When you get to a wobbly part, just hold still and use your balance."
"No fear...No beer."
2025-08-30
Me: Nora, did you put wood chips or something in your hair?
Nora: [condescendingly] No! I put *sand* in my hair.
2025-08-31
Lily: would you like to come and busk with us?
Anna: well, I don't like playing fiddle, but I do like getting money...
2025-09-02
Etiquette rules from Jeff about interruptions: "If someone is licking your arm, you're allowed to say, 'Stop licking my arm,' even if someone else is talking."
2025-09-04
Nora: they wouldn't let Nix [our cat] into the swimming pool because: (1) he might not take a shower, (2) he doesn't know how to swim, and (3) he can't open doors.
2025-09-04
[Coming out from my meeting after hearing a lot of crying]
Nora: [Redacted] did a lot of crying!
[Redacted] I did not do a little crying!!
Nora: I said a *lot* of crying, not a *little*
2025-09-13
I taught Nora how to hold her sleeve in her fist when putting on a coat so that she wouldn't end up with her sleeve all bunched up. She is super excited. Except she keeps forgetting and using the opposite hand, and then being confused why the coat won't go on.
2025-09-29
Nora: [out of nowhere] I'm fine!
2025-09-30
Lily: I am only a "child" when it's convenient for me
2025-10-02
Nora: [during turbulence] when I'm squeaking like this, it either means I'm sad or I'm happy. In this case it means I'm happy!
2025-10-13
Lily: Nora, fist bump!
Nora: [punches Lily in the fist]
2025-10-14
Me: what's this?
Anna: that's been there for weeks!
...
Anna: but, yes, I did do it
2025-10-15
Anna: [in 4th at a k-5 school] Unfortunately I have to be the older book buddy *again*
2025-10-15
Nora: I wish I was a grownup.
Me: What would you like about being a grownup?
Nora: I could do things you don't let me do. Like drill.
2025-10-17
Anna: I got this trophy in school for being quiet.
Jeff: So if you don't speak, you get atrophy.
2025-10-18
Nora fell on the stairs today but wasn't badly hurt. Afterwards we were discussing that it could have been much worse.
Nora, reassuringly: "My heart is still pumping, and my blood is moving around. So I'm ok." These are indeed great qualities.
2025-10-26
Nora got mad and spilled all the crayons out. Afterwards: "Sorry for making a big mess. ...But it's not as big a mess as if a monster messed up all our stuff and our house, and we had to rebuild our whole house."
2025-10-29
[discussing a new childcare provider]
Nora: is she very nice?
Julia: yes
Nora: will she kill me?
(She had a grin on her face like she knew she was asking a provocative question)
2025-10-29
Nora: "I stole this horse."
Me: "Where did you steal it from?"
Nora: "South America.
....Actually I didn't steal it, I just wore a stealing costume"
2025-10-29
[looking at BIDA's Far-UVC setup]
Nora: Will all the people be, like, "what is that thing!?"
Nora: Will that keep the people from getting sick?
2025-10-31
Me: I finished my Halloween costume!
Nora: that doesn't really look good.
2025-11-03
Anna: [counting bites as she eats a slice of pizza] 302, 303, 304. I'm going to stop counting and just eat the pizza.
Cora: Good idea!
Anna: Well, I'll still count, but it will be in my head.
2025-11-07
[driving through the southwest]
Lily: Papa, do people normally say "wow" this much?
2025-11-14
Nora: this lollipop is too sweet and tastes weird
Me: if you don't like it, you have plenty of other candy and can pick something else
Nora: it tastes like Cocomelon
Me: Do you mean watermelon?
Nora: No, I mean Cocomelon.
2025-11-16
Me: If you could make a wish in a wishing well, what would it be?
Nora: A million kitties and a million puppies.
Nora: And a house made of blueberries and full of blueberries so we could eat the house.
2025-11-20
Lily: I ran so fast to get home that I slipped
Nora: I'm glad you're still alive!
2025-11-23
Nora: "I say 'grocamole' because it's too hard to say 'guacamole' so I just say 'grocamole'"
...
Nora: "This is a little too not salty"
2025-11-25
Anna: Nora, I think you would be warmer if you zipped up your sweatshirt
Nora: but I'm *already* warm! But I'm still cold.
....
Now Anna is explaining the concept of warmth to Nora
...
Nora: [sings] I'm not cold, I'm just pretending, why don't you just ***dance***
2025-12-05
Anna: I had a raspberry from the bush when I got home from school, and it tasted like a *frozen* raspberry!
Me: have you looked at the thermometer?
Anna: 😳
2025-12-05
Nora: I'm a very good rememberer. Sometimes I even remember things that didn't happen!
2025-12-11
The first rule of the Advent calendar is: you don't complain about the Advent calendar to me. Today I learned that this rule doesn't prevent Anna from complaining about the Advent calendar to her sisters, who pass it on to me.
Nora: "Anna says, what is the point of Christmas bandaids if it's not a toy?"
2025-12-12
Nora: I want same as Anna, but no cheese. Just pasta, with butter, salt, and shaky cheese.
2025-12-14
Nora: that person is dressed just like a snow pig! I mean a polar bear.
2025-12-16
Even if it's literally true that you have a lousy child, you shouldn't expect them to appreciate your opportunity to use archaic phrasing.
2025-12-20
Anna: Nora, stop whacking me!
Nora: I didn't, and it was by accident!
2025-12-21
[at a family dance]
Caller: this dance is called Sasha, and we start by pretending that Sasha has been very naughty. I know none of you have ever been naughty but...
Anna: [to her partner but loud enough for everyone to hear] oh, *I* have!
2025-12-27
Nora: my favorite part of sledding is going down the hill
2025-12-28
Anna: my hands are all greasy
Jeff: okay, let's all go wash hands
Anna: why do we need to wash hands?
Jeff: so they won't be greasy
Nora: my hands are all hairy from the butter
2026-02-16 12:14:08
Published on February 16, 2026 3:33 AM GMT
There is surprisingly little information online about what actually happens at a Center for Applied Rationality (CFAR) workshop. For the only organization I know of that teaches rationality techniques in person, the actual experience of a workshop has very few writeups [1]. (Though recently, Anna Salamon has been making more posts [2].)
I wanted to write something short and concrete to record my experiences. If there is interest, I can provide more details and answer questions.
The pitch for CFAR usually goes something like this:
- There exist cognitive tools within rationality that allow you to have accurate beliefs, and accurate beliefs help you achieve your goals.
- There is a group of people ("CFAR") who say, "We are experienced rationalists, and we will teach you the things we found most helpful."
- Therefore, if you're interested in improving your epistemics and achieving your goals, you should go.
If you run the Expected Value (EV) calculation on "having better thinking tools for the rest of your life," the numbers get silly very quickly. You can easily conclude that you should at least investigate. So I did.
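To see why the numbers get silly, here is a minimal back-of-envelope sketch. Every input is an invented placeholder, not a claim about actual workshop costs or effects:

```python
# Back-of-envelope EV sketch. All inputs are invented placeholders.
hourly_value = 50          # assumed $/hour you value your work time at
productivity_gain = 0.01   # assumed 1% lasting gain from better thinking tools
work_hours_per_year = 2000
years_remaining = 30
workshop_cost = 5000       # assumed fee + travel + opportunity cost of 4 days

lifetime_gain = hourly_value * productivity_gain * work_hours_per_year * years_remaining
print(f"lifetime gain ~ ${lifetime_gain:,.0f} vs. cost ${workshop_cost:,}")
```

Even a 1% gain is worth roughly six times the cost under these assumptions, which is why "you should at least investigate" falls out of almost any reasonable inputs.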
Unlike many other corporate retreats or workshops, there is some evidence backing up this claim. A 2015 longitudinal survey [3] followed up on CFAR participants (n=135) by comparing their answers pre-workshop and post-workshop across four areas: well-being, personality, behaviors, and productivity.
They found significant effects in many areas. When you compare their reported work/career satisfaction improvements to the clinical effect size of antidepressants (typically around d = 0.3 over placebo), the results are impressive:
| Metric | Effect Size |
|---|---|
| Well-being / Life Satisfaction: Work/School/Career | 0.36*** |
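For context, the effect sizes in the table are standardized mean differences (Cohen's d). A minimal sketch of how a paired pre/post d is computed, using simulated data (the study's raw responses are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 135  # matches the survey's sample size; the responses below are simulated
pre = rng.normal(5.0, 1.5, size=n)         # hypothetical pre-workshop ratings
post = pre + rng.normal(0.5, 1.2, size=n)  # hypothetical post-workshop ratings

diff = post - pre
# One common convention for paired designs: mean change / SD of the changes.
d = diff.mean() / diff.std(ddof=1)
print(f"Cohen's d = {d:.2f}")
```

Note there are several conventions for the denominator (SD of changes, pooled SD, pre-test SD); which one a study uses changes the number, so cross-study comparisons like the antidepressant one are rough.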
I had the free time, the EV calculations worked out, and I was interested in talking to more rationalist folks. So I went.
1. Does it actually teach the techniques well?
Is it better than just reading the handbook [4] by myself or with my local rationality group?
Short answer: Yes.
The format (6-10 students, 1 instructor) works well. The sessions I enjoyed most started with ~20 minutes of the instructor giving practical examples, followed by ~40 minutes of paired practice with a worksheet. Students are encouraged to ask questions during both sections, and the small group size generates useful positive and negative examples of applying the technique.
I really enjoyed the "Finding Cruxes" workshops. I’m familiar with the theory, but actually having a trusted peer sitting across from you, both trying to notice the crux while keeping track of the argument itself, is much more practically useful than reading a blog post.
However, there is high variance in the classes. Some of the theory-heavy or ideology-heavy classes went over my head (though I noticed some of the more practiced rationalists enjoyed them, so it’s potentially an experience gap problem). Other classes helped reframe my problems, leading to some "wow, I never thought of it like that" moments.
2. Is it fun?
Yes. I enjoyed it more than the counterfactual use of my time. We joked that we were a group of "social autists" (with a potentially diagnosable rate of ~40%), so the social norms were explicitly designed for people like us. It is simply fun to hang out with people who share your inferential framework.
3. Is it useful?
Anna Salamon suggests that a lot of the time, a technique is supposed to feel like a "mental trick." It isn't as rigorous as a mathematical equation, but it helps you reframe a problem in an easier way.
For example, in the Question Substitution class, I realized that I judge other people’s experience of happiness by modeling it on my own mind. That is a simple, obvious error. But until I consciously thought about explicitly swapping the question and went through the worksheet, I hadn't noticed I was doing it.
Personally, I solved some of my problems during the retreat. I suspect the tools taught are meant to help with epistemics and with treating your emotions as informative signals rather than white noise, which is something rationalist types probably should do more of.
For those unaware, CFAR has been running on and off since 2012. After a hiatus, they "renewed" operations in 2025 with workshops in Austin and SF [5]. I went to the Austin one.
The structure was a 4-day retreat on a ranch. We had 2-3 hours of breaks for lunch and dinner between sessions, so it was a very comfortable but packed schedule.
Was any of this useful? For me personally?
Yes. I saved about 20 hours (minimum) of work on my current research problems just by talking through them with workshop instructors during the Questing [6] activity.
This was more along the lines of professional advice, so could I have gotten it elsewhere? Probably. But I don't think I would have brought it up without an environment like CFAR's, with like-minded peers and that level of vulnerability.
Notably, the classes are high variance. A CFAR instructor said something to the effect of:
"I don't feel optimistic about training that gets people from 0 skills to all 19. But I feel hope about finding people who have 17 skills, and getting them the last missing 2 [and those missing 2 are different for everyone]."
I did feel like I came away with at least 2 out of the last 4 skills.
That said, I took a day of flights from Australia to get there. So, depending on how you value your time, the net EV might still be negative. /shrug.
For me:
For rationality workshops in general:
I noticed that a lot of the value comes from the instructors and experienced participants, many of whom are US-based. CFAR really is "Community + Practice."
I expect it will be very difficult to replicate this if you aren’t in a rationality hub like the Bay Area or Austin. A small local practice group going through the materials might get you 20% of the "goodness" of CFAR, but you’ll miss the emotional connection and vulnerability part that comes from the immersive retreat.
I want to give a huge shoutout to Wendy and the logistics team. Great work handling the storm and providing a comfortable space for everyone. It is important and meaningful work! Also, a shoutout to the friendly Austin folks; I really appreciate the hospitality!
Also, big thanks to the nice folks at CFAR for the experience. Any mistakes in explaining their material are 100% mine.
References
[1] LessWrong Tag: Center for Applied Rationality
[3] CFAR 2015 Longitudinal Study
[4] CFAR Handbook Introduction
[5] CFAR Update and New Workshops (2025)
[6] In Questing, we were paired up with a partner and took 15-minute turns just watching the other partner try to do something. The simple idea of borrowing 15 minutes of a trusted peer's time is surprisingly useful.
[7] CFAR Takeaways (Andrew Critch)