
Many can write faster asm than the compiler, yet don't. Why?


Published on December 30, 2025 8:40 AM GMT

There's a take I've seen going around, which goes approximately like this:

It used to be the case that you had to write assembly to make computers do things, but then compilers came along. Now we have optimizing compilers, and those optimizing compilers can write assembly better than pretty much any human. Because of that, basically nobody writes assembly anymore. The same is about to be true of regular programming.

I 85% agree with this take.

However, I think there's one important inaccuracy: even today, finding places where your optimizing compiler failed to produce optimal code is often pretty straightforward, and once you've identified those places, a 10x+ speedup for that specific program on that specific hardware is often possible[1]. The reason nobody writes assembly anymore is the difficulty of mixing hand-written assembly with machine-generated assembly.

The issue is that it's easy to have the compiler write all of the assembly in your project, and it's easy from a build perspective to have the compiler write none of the assembly in your project, but having the compiler write most but not all of the assembly in your project is hard. As with many things in programming, having two sources of truth leads to sadness. You have many choices for what to do if you spot an optimization the compiler missed, and all of them are bad:

  1. Hope there's a pragma or compiler flag. If one exists, great! Add it and pray that your codebase doesn't change such that your pragma now hurts perf.
  2. Inline assembly. Now you're maintaining two mental models: the C semantics the rest of your code assumes, and the register/memory state your asm block manipulates. The compiler can't optimize across inline asm boundaries. There are lots of other pitfalls as well - using inline asm feels to me like a knife, except the handle has been replaced by a second blade so you can have twice as much knife per knife. (A minimal sketch follows this list.)
  3. Factor the hot path into a separate .s file, write an ABI-compliant assembly function, and link it in. This works fine, but it's an awful lot of effort, and your cross-platform testing story is also a bit sadder.
  4. Patch the compiler's output: not a real option, but it's informative to think about why it's not a real option. The issue is that you'd have to redo the optimization on every build. Figuring out how to repeatably perform specific code transforms that preserve behavior but improve performance is hard. So hard, in fact, that we have a name for the sort of programs that can do it. Which brings us to
  5. Improve the compiler itself. The "correct" solution, in some sense[2] — make everyone benefit from your insight. Writing the transform is kinda hard though. Figuring out when to apply the transform, and when not to, is harder. Proving that your transform will never cause other programs to start behaving incorrectly is harder still.
  6. Shrug and move on. The compiler's output is 14x slower than what you could write, but it's fast enough for your use case. You have other work to do.
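
To make option 2 concrete, here's a minimal sketch of GCC-style extended inline asm on x86-64 (a hypothetical toy, not drawn from any real codebase). The operand constraints are exactly the second mental model complained about above: they're a contract with the compiler about which registers the block reads and writes, and getting them wrong silently miscompiles the surrounding code.

    #include <stdio.h>

    // Toy example of GCC extended inline asm: add two longs on x86-64.
    static inline long add_asm(long a, long b) {
        long result;
        __asm__("addq %2, %0"
                : "=r"(result)       // output: any general-purpose register
                : "0"(a), "r"(b));   // inputs: "0" ties a to the output register
        return result;
    }

    int main(void) {
        printf("%ld\n", add_asm(2, 40));  // prints 42
        return 0;
    }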

I think most of these strategies have fairly direct analogues for a codebase that an LLM agent generates from a natural language spec, and that the pitfalls are analogous as well. Specifically:

  1. Tweak your prompt or your spec.
  2. Write a snippet of code to accomplish some concrete subtask, and tell the LLM to use the code you wrote.
  3. Extract some subset of functionality to a library that you lovingly craft yourself, tell the LLM to use that library.
  4. Edit the code the LLM wrote, with the knowledge that it's just going to repeat the same bad pattern the next time it sees the same situation (unless you also tweak the prompt/spec to avoid that).
  5. I don't know what the analogue is here. Better scaffolding? More capable LLM?
  6. Shrug and move on.

One implication of this worldview is that as long as there are identifiable high-leverage places where humans still write better code than LLMs[3], and you are capable of identifying good boundaries for libraries / services / APIs that package a coherent bundle of functionality, you will probably still find significant demand for your services as a developer.

Of course, if AI capabilities stop being so "spiky" relative to human capabilities, this analogy will break down, and also there's a significant chance that we all die[4]. Aside from that, though, this feels like an interesting and fruitful near-term forecasting/extrapolation exercise.

 

  1. ^

    For a slightly contrived concrete example that rhymes with stuff that occurs in the wild, let's say you do something along the lines of "half-fill a hash table with entries, then iterate through the same keys in the same order summing the values in the hash table"

    Like so:

    // Throw 5M entries into a hashmap of size 10M
    // (HashMap, hashmap_set, hashmap_get, and randn are sketch helpers)
    HashMap *h = malloc(sizeof(HashMap));
    h->keys = calloc(10000000, sizeof(int));
    h->values = calloc(10000000, sizeof(double));
    for (int k = 0; k < 5000000; k++) {
        hashmap_set(h, k, randn(0, 1));
    }
    
    // ... later, when we know the keys we care about are 0..4999999
    double sum = 0.0;
    for (int k = 0; k < 5000000; k++) {
        sum += hashmap_get(h, k);
    }
    printf("sum=%.6f\n", sum);
    

     

    Your optimizing compiler will spit out assembly which iterates through the keys, fetches the value of each one, and adds it to the total. The memory access patterns will not be pretty.
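
    For reference, the probe loop the compiler is chewing on looks something like this sketch of the hypothetical hashmap_get above (the capacity field and the -1 empty-slot sentinel, which the generated asm below compares against, are assumptions of this sketch):

    // Hash, then linear-probe until we hit our key or an empty slot (-1).
    double hashmap_get(HashMap *h, int key) {
        size_t idx = hash(key) % h->capacity;
        while (h->keys[idx] != key && h->keys[idx] != -1) {
            idx++;  // linear probe; wraparound omitted for brevity
        }
        return h->values[idx];  // 0.0 if the key was absent (calloc'd slot)
    }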

    Example asm generated by gcc -O3

     

    ...
    # ... stuff ...
                                            # key pos = hash(key) % capacity
    .L29:                                   # linear probe loop to find idx of our key
        cmpl    %eax, %esi
        je      .L28
        leaq    1(%rcx), %rcx
        movl    (%r8,%rcx,4), %eax
        cmpl    $-1, %eax
        jne     .L29
    .L28:
        vaddsd  (%r11,%rcx,8), %xmm0, %xmm0  # sum += values[idx]
    # ... stuff ...
    

     

    This is the best your compiler can do: since the ordering of floating point operations can matter, it has to iterate through the keys in the order you gave. However, you the programmer might have some knowledge your compiler lacks, like "actually the backing array is zero-initialized, half-full, and we're going to be reading every value in it and summing". So you can replace the compiler-generated code with something like "Go through the entire backing array in memory order and add all values".
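
    In C terms, that replacement might look like this sketch (reusing the backing array from the snippet above; since empty slots hold 0.0 from calloc, adding them is a no-op, and this is the "go through the entire backing array in memory order" transform just described):

    // Sum the whole backing array in memory order instead of probing
    // key-by-key; the extra additions of 0.0 from empty slots are harmless.
    double sum = 0.0;
    for (int i = 0; i < 10000000; i++) {
        sum += h->values[i];
    }
    printf("sum=%.6f\n", sum);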

    Example lovingly hand-written asm by someone who is not very good at writing asm

     

    # ... stuff ...
    .L31:
        vaddsd  (%rdi), %xmm0, %xmm0
        vaddsd  8(%rdi), %xmm0, %xmm0
        vaddsd  16(%rdi), %xmm0, %xmm0
        vaddsd  24(%rdi), %xmm0, %xmm0
        addq    $32, %rdi
        cmpq    %rdi, %rax
        jne     .L31
    # ... stuff ...

     

    I observe a ~14x speedup with the hand-rolled assembly here.

    In real life, I would basically never hand-roll assembly here, though I might replace the C code with the optimized version and a giant block comment explaining the terrible hack I was doing, why I was doing it, and why the compiler didn't do the code transform for me. I would, of course, only do this if this were in a hot region of code.

  2. ^

    Whenever someone says something is "true in some sense", that means that thing is false in most senses.

  3. ^

    Likely somewhere between 25 weeks and 25 years

  4. ^

    AI capabilities remaining "spiky" won't necessarily help with this




Chromosome identification methods


Published on December 30, 2025 6:02 AM GMT

PDF version. berkeleygenomics.org. x.com. bluesky.

This is a linkpost for "Chromosome identification methods"; a few of the initial sections are reproduced here.

Abstract

Chromosome selection is a hypothetical technology that assembles the genome of a new living cell out of whole chromosomes taken from multiple source cells. To do chromosome selection, you need a method for chromosome identification—distinguishing between chromosomes by number, and ideally also by allele content. This article investigates methods for chromosome identification. It seems that existing methods are subject to a tradeoff where they either destroy or damage the chromosomes they measure, or else they fail to confidently identify chromosomes. A paradigm for non-destructive high-confidence chromosome identification is proposed, based on the idea of complementary identification. The idea is to isolate a single chromosome taken from a single cell, destructively identify all the remaining chromosomes from that cell, and thus infer the identity of the preserved chromosome. The overall aim is to eventually develop a non-destructive, low-cost, accurate way to identify single chromosomes, to apply as part of a chromosome selection protocol.

Context

Reprogenetics is biotechnology to empower parents to make genomic choices on behalf of their future children. One key operation that's needed for reprogenetics is genomic vectoring: creating a cell with a genome that's been modified in some specific direction.

Chromosome selection is one possible genomic vectoring method. It could be fairly powerful if applied to sperm chromosomes or applied to multiple donors. The basic idea is to take several starting cells, select one or more chromosomes from each of those cells, and then put all those chromosomes together into one new cell:

There are three fundamental elements needed to perform chromosome selection:

  • Transmission and Exclusion. Get some chromosomes into the final cell, while excluding some other chromosomes.

  • Targeting. Differentially apply transmission and exclusion to different chromosomes.

This article deals with the targeting element. Future articles will deal with the other elements. Specifically, this article tries to answer the question:

How can we identify chromosomes?

That is, how can we come to know the number of one or more chromosomes that we are handling (i.e. is it chromosome 1, or chromosome 2, etc.)? Further, how can we come to know what alleles are contained in the specific chromosome we are handling, among whatever alleles are present among the chromosomes we're selecting from?

This problem has been approached from many angles. There are several central staples of molecular biology, such as DNA sequencing, karyotyping, flow cytometry, CRISPR-Cas9, and FISH; and there are several speculative attempts to study chromosomes in unusual ways, such as acoustics, laser scattering, hydrodynamic sorting, and electrokinesis.

This article presents an attempt to sort through these methods and find ones that will work well as part of a chromosome selection method. This goal induces various constraints on methods for chromosome identification; hopefully future articles will further clarify those constraints.

Synopsis and takeaways

A human cell has 46 chromosomes, 2 of each number, with each number (and X and Y) being of different sizes:

[(Figure 1.3 from Gallegos (2022) [1]. © publisher)]

We want to identify chromosomes. Technically, that means we want to be able to somehow operate differently on chromosomes of different numbers. In practice, for the most part, what we want is to isolate one or more chromosomes, and then learn what number(s) they are. (If possible, we also want to learn what alleles they carry.)

How do we identify chromosomes? We have to measure them somehow.

There's a tradeoff between different ways of measuring chromosomes: How much access do you have to the DNA inside the chromosome? (Chromosomes are not just DNA; they also incorporate many proteins.)

On one extreme, there is, for example, standard DNA sequencing. In this method, you have lots of direct access to the DNA, so you can easily measure it with very high confidence, and learn the number of a chromosome and almost all of the alleles it carries. However, this method is also completely destructive. You strip away all the proteins from the DNA, you disrupt the epigenetic state of the DNA, and you chop up the DNA into tiny little fragments. High DNA access comes with high information, but also comes with high destructiveness.

On the other extreme, there is, for example, standard light microscopy. In this method, you have very little direct access to the chromosome's DNA. You just shine light on the chromosome and see what you can see. This method is not at all destructive; the chromosome's DNA, structural proteins, and epigenetic state are all left perfectly intact. However, this method definitely cannot tell you what alleles the chromosome carries, and may not even be able to distinguish many chromosomes by number. Low DNA access comes with low destructiveness, but also comes with low information.

If we're assembling a new cell (for example, to use in place of a sperm), we cannot use chromosomes that we have destroyed. We also (roughly speaking) cannot use a chromosome unless we're confident we know what number it is, because we have to be confident that the final cell will be euploid. Are there methods that are non-destructive and also make confident calls about chromosome number?

I don't know of a theoretical reason such a method should not exist. Why not measure physical properties of a chromosome from a distance and infer its number? For example, a single paper from 2006 claimed to use Raman spectroscopy to distinguish with fairly high confidence between human chromosomes 1, 2, and 3, just by bouncing (scattering) a laser off of them [2]. However, all such methods I've looked at are similar, in that they are very poorly refined: they have not been extensively replicated, so they may not work at all, and definitely have not been developed to be easy and reliable.

Therefore, as far as I know, there is currently probably no good way to identify chromosomes by directly measuring them. Every single such method will destroy the chromosome, or will not make confident calls about the chromosome's number, or else has not been well-demonstrated to work. Here's a visual summary of the situation:

[(Hi r/ChartCrimes!)]

Sidenote: Many readers might wonder: Why not just use standard cell culture sequencing? The reason will be explained more fully in a future article. But basically, the reason is that ensembling a target genome using cell culturing methods (such as MMCT) is likely to be very inconvenient. To avoid that, we want a more reductive mechanical method, an "isolating-ensembling" method, where we isolate single chromosomes, identify them, and then put target chromosomes into a new cell. Isolating-ensembling methods require a way to identify single chromosomes (or small sets of chromosomes); it's not enough to just learn the content of some full euploid genomes, which is all that is offered by cell culture sequencing.

So, if we cannot identify chromosomes by directly measuring them, what to do?

My proposal is to identify chromosomes by indirectly measuring them. To indirectly measure a chromosome, we get some material that comes from the same place as the chromosome. We then directly measure that material, and use that measurement to infer something about the chromosome:

A key indirect identification method is complementary chromosome identification. That's where you take a single cell with a known genome, isolate one chromosome, and then sequence the rest of the chromosomes. This tells you the identity of the isolated chromosome, without ever directly measuring that chromosome:

(See the subsection "Chromosome-wise complementary identification".)
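
The inference step itself is just counting. Here's a toy sketch in C (not real bioinformatics: the diploid copy counts are illustrative, and X/Y are ignored for simplicity):

    #include <stdio.h>

    // Toy sketch of complementary identification: a diploid cell carries
    // 2 copies of each autosome. After isolating one chromosome and
    // destructively identifying the rest, whichever number has only one
    // remaining copy must be the number of the isolated chromosome.
    #define NUM_AUTOSOMES 22  // X and Y ignored for simplicity

    int infer_isolated(const int counts[NUM_AUTOSOMES + 1]) {
        for (int n = 1; n <= NUM_AUTOSOMES; n++) {
            if (counts[n] == 1) return n;  // one copy missing
        }
        return -1;  // no unique deficit; identification failed
    }

    int main(void) {
        int counts[NUM_AUTOSOMES + 1] = {0};
        for (int n = 1; n <= NUM_AUTOSOMES; n++) counts[n] = 2;
        counts[7] = 1;  // sequencing found only one chromosome 7
        printf("isolated chromosome: %d\n", infer_isolated(counts));  // 7
        return 0;
    }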

Another indirect identification method is single-cell RNA sequencing for sperm. This works by separating out RNAs from a single sperm and sequencing them. It turns out that those RNAs actually tell you which alleles are present in that sperm's genome. (See the subsection "Sequencing post-meiotic RNA".) This tells you the set of chromosomes you have, including what crossovers happened. (Another way to do this might be to briefly culture the sperm as haploid cells using donor oocytes [3]; see the subsection "Haploid culture".)

By combining complementary chromosome number identification with one of these indirect allele-measuring methods ("setwise homolog identification"), we could in theory isolate a single fully intact chromosome with a confidently, almost completely known genome.

This would be a good solution to chromosome identification. Unfortunately, these methods would be very challenging to actually develop. But, that effort might be worth it, since it seems there are not better chromosome identification methods available. See future articles for discussion of how to implement these methods.

The rest of this article will go into much more detail on many of the above points.

  1. Gallegos, Maria. Fantastic Genes and Where to Find Them. Updated 2022-09-13. Accessed 16 February 2025. https://bookdown.org/maria_gallegos/where-are-genes-2021/#preface. ↩︎

  2. Ojeda, Jenifer F., Changan Xie, Yong-Qing Li, Fred E. Bertrand, John Wiley, and Thomas J. McConnell. ‘Chromosomal Analysis and Identification Based on Optical Tweezers and Raman Spectroscopy’. Optics Express 14, no. 12 (2006): 5385–93. https://doi.org/10.1364/OE.14.005385 ↩︎

  3. Metacelsus. ‘Androgenetic Haploid Selection’. Substack newsletter. De Novo, 16 November 2025. https://denovo.substack.com/p/androgenetic-haploid-selection. ↩︎




CFAR’s todo list re: our workshops


Published on December 30, 2025 5:16 AM GMT

(This post is part of a sequence of year-end efforts to invite real conversation about CFAR; you’ll find more about our workshops, as well as our fundraiser, at What’s going on at CFAR? Updates and Fundraiser and at More details on CFAR's new workshops)

In part of that post, we discuss the main thing that bothered me about our past workshop and why I think it is probably fixed now (though we’re still keeping an eye out). Here, I list the biggest remaining known troubles with our workshops and our other major workshop-related todo items.

Your thoughts as to what’s really up with these and how to potentially address them (or what cheap investigations might get us useful info) are most welcome.

Ambiguous impact on health 

(Current status: ?)

In the 2012-2020 workshops, our “CFAR techniques” seemed to help people do 5-minute-timer or insight-based things, but seemed to some of us to make it harder, or at least not easier, to e.g.:

  • Get physical exercise
  • Learn slow and unglamorous things from textbooks across an extended time[1]
  • Be happy and hard-working at a day job in a slow and stable way

This seems unfortunate.

I’m mildly hopeful the changes we’ve made to our new workshops will also help with this. My reasons for hope:

  1. Our old workshop had a hyper/agitated/ungrounded energy running through it: “do X and you can be cool and rational like HPMOR!Harry”; “do X and you can maybe help with whether we’ll all die.” Our current workshop is quieter (“now that you’ve thought of X, does it feel inside-view-hopeful to try X some? Now that you’ve tried it some, does it feel inside-view-hopeful to do more?”). I’m hoping the quieter thing is less likely to drown out other useful work-and-morale-flows.
  2. We now aim to teach skills for tuning into the (surprisingly detailed) processes that make possible the cool structures around us, including those that one has already been running without explicitly knowing why.

However, I won’t be surprised if this is still a problem. If so, we’ll need to fix it.

Unclear mechanism of action; lack of "piecewise checkability"

(Current status: unsolved)

Magic happens at the workshops (people seem to “wake up” a bit, look around, and e.g. notice they hate doing the ironing but could probably pay someone to do it, or they’re bored of their normal activities but can change them, or their parents may die someday and now is a good time to reach out, or their own habits are made of examinable, changeable pieces just like the outside world is. And this is great!). But afterwards, it’s hard (for them and for us) to describe the workshop and its mechanism of action to someone who hasn’t been there. Sometimes it’s even hard to describe to someone who has been there, if they’ve had years to forget.[2]

The “hard to remember / hard to describe” property makes it difficult to know:

  • Are there undesirable bits tagging along with the magic that it would be nice to delete?
  • While there are rationality skills practiced at the workshops – is this core to how and why the workshops change people, or is it mostly something else?

People also sometimes wonder: are CFAR workshops just fun parties, dressed up as rationality training? I think they aren’t (and those who’ve wondered this aloud in my hearing mostly haven’t been to one).[3] But:

  1. I’d like to acquire an accurate, shared story of what aCFAR workshops are, and how they work, and how we know;
  2. I’d like the ability to somehow check for bits of psychological or cultural change that might be tagging along with the workshop, and might be bad.

We at aCFAR don’t know how to do this. Yet.

Habits that make it easier for alumni to get pulled into cults 

(current status: ?)

As mentioned in our main post: if workshop guests practice deferring to us about what weird things to do with their minds (especially if they do so for extended periods, based on wispy claims about long-term payoffs such as “this’ll help with AI risk somehow”), this risks setting some up to later try deferring to people running more obviously unhealthy cults. I speak from experience.

(My guess, too, is that our broader cultural influence may have spread some of this unfortunate pattern to the wider culture of the rationalist community, not just to alums. People would come to the workshop, observe a bunch of healthy normal-looking people having fun doing “rationality exercises,” and come away with the cultural belief that introspection and self-modification are cool, normal, and useful. Which they sometimes are. But we need more precision in the transmitted cultural pattern.)

We’ve got to find a way to make our workshops not set people up for bad patterns here. Our current emphasis on “remember the buck stops with you; check whether it is producing fruits you directly feel good about” may help. So may finding ways to maintain, as our and guests’ central focus, the discovery of checkable stuff in the outside world and/or the building of outside stuff with obvious feedback loops.

Outward-directedness

(current status: minor progress)

In addition to perhaps assisting with cult resistance, it would also be nice for other reasons if our workshops can become more outward-directed -- more about noticing and building neat things in the world, rather than pure introspection/self-modification.

More things I want for the workshops

Here are some additional things I would like to have (or have more of) at future CFAR workshops:

  • “Multi-level classrooms” that better accommodate workshop guests with more expertise/experience instead of casting everyone as beginners
  • More epistemic rationality content and a workshop context that helps that content register as relevant/good
  • Further developing our new “honoring who-ness” thread and related new threads (we have several new classes in the works for our Jan workshop)
  • Boosting guests’ skill in “the positive opposite of psychosis.”[4]
  • Making workshops more financially viable (via getting more workshop applicants at full price/closer to full price)
  • Further increasing workshop zaniness/aliveness/etc.
  1. ^

    A SPARC instructor told me that the head of a different math program for young people had complained to him that SPARC or HPMOR seemed to mess up people's ability to be deeply interested in textbooks, or other gradual acquisition of deep math knowledge, in favor of immediate cleverness / insights / reconceptualizations. (It’s been some years since I heard it; I might be botching the details a bit. Logan Strohl’s work on “tortoise skills” seems to me to be a response to independently noticing a similar need.)

  2. ^

    Thanks to Dan Keys for persistently raising this fact to my attention and convincing me of its importance.

  3. ^

    The reason I think workshops don't just work by being fun parties: alumni often have a characteristic "CFAR alumni" skillset they didn't come in with. For example, if most people attempt a task (e.g., to mow the lawn) and find it's not in their direct reach (e.g. because the lawnmower is broken and all nearby stores are closed for Thanksgiving), they'll decide it's impossible. If CFAR alumni are in the same situation, they may (sensibly) decide it's too expensive to be worth it, but they'll usually be aware that there are other avenues they could try (e.g. phoning gardeners from Craigslist and attempting to rent their lawnmower).

    Former CFAR instructor Kenzi Ashkie and I used to observe and discuss these skills in our alums, including months and years after they attended.

  4. ^

    Adele argued recently that a rationality curriculum worthy of the name would leave folks less vulnerable to psychosis, and that many current rationalists (CFAR alums and otherwise) are appallingly vulnerable to psychosis. After thinking about it some, I agree. I’m hoping our “respecting who-ness” thread and pride or self-esteem threads help some; there is also much else we can try.




More details on CFAR’s new workshops


Published on December 30, 2025 5:12 AM GMT

(This post is part of a sequence of year-end efforts to invite real conversation about CFAR; you’ll find more about our workshops, as well as our fundraiser, at What’s going on at CFAR? Updates and Fundraiser.)

If you’d like to know more about CFAR’s current workshops (either because you’re thinking of attending / sending a friend, or because you’re just interested), this post is for you. Our focus in this post is on the new parts of our content. Kibitzing on content is welcome and appreciated regardless of whether or not you’re interested in the workshop.

The core workshop format is unchanged:

  • 4.5 days of immersion with roughly 8 hours of class per day
  • Classes still aim partly to prime people to have great conversations during meals/evenings
  • Almost everyone stays in a shared venue
  • Roughly 25[1] participants, and 12-ish staff and volunteers
  • Mostly small classes

“Honoring Who-ness”

We added a new thread to our curriculum on working well with one’s own and other people’s “who-ness” (alternately: pride, ego, spark, self-ness, authorship).

What, you might ask, is “who-ness?”

Alas, we do not (yet?) have a technical concept near “who-ness.”[2] However, we want to make room at the workshop for discussing some obvious phenomena that are hard to model if your map of humans is just “humans have beliefs and goals” and easier once we try talking about “who-ness.” These phenomena include:

a) Many of us humans feel good when someone notices a telling detail of a project we worked hard on -- especially if the detail is one we cared about, and double especially if they seem to see the generator behind that detail. After being affirmed in this way, we often have more energy (and especially energy for that particular generator).

b) We seem similarly nourished by working alongside competent others to accomplish difficult tasks that use our skills fully and deeply.

c) Much useful work seems to be bottlenecked more by psychological energy than by time and/or money.

d) When I (or most people) start a project, I often spend time booting up a local mental context that the project can proceed from, draw energy from, and… in some way, feed energy (as well as skills and context-bits) back into. This can happen on lots of scales, ranging from a few minutes to decades.

For example, consider a game of Go. Midway through a game of Go, I’ll have a bunch of active hopes and questions and fears, such as:

  • “Watch out; is he going to be able to pull some mischief in that corner?”
  • “That move I did there: was it actually needed? Did I waste an unnecessary stone? Watch to see how it pans out, maybe I’ll figure out if I needed it”
  • “Will I be able to [...]?”

These hopes/fears/questions are piecemeal, but they’re linked together into a single “mental context” such that I can easily notice when the balance of my attention should tug it from the bits that used to deserve my attention to the parts that deserve my attention now, and such that there’s a single mood and tempo permeating the whole thing.

There’s something sad about having to permanently interrupt a game of Go partway through (e.g. after a work call or something). In my terms, I might say that the “who-ness” that was booted up around the game is disbanded forcibly/suddenly, without getting to complete its loops and return its knowledge and energies into the larger system. The internal accounting gets disrupted.

Some other examples of "projects that stem from and help sustain their own particular mental context” include:

  • A particular stew I’m making for lunch
  • A longer-running project of learning to make stew across months
  • A particular friendship, spanning decades
  • Particular hobbies or vocations
  • My relationship to the city of Reno

Or, as yet another example: in our end-of-workshop survey, we asked our November workshop participants: “This is a weird question, but: who were you, when you were here? What did the workshop bring out in you? How does this differ from who you are, or what qualities get brought out in you, in your default life?” And we got back coherent answers. So, many of our participants probably experienced the workshop as a particular context that helped summon, in them, a particular who-ness.

e) The “venue kittens” (CFAR has venue caretakers now, and they have kittens) were a delight to watch, and seemed to visibly have their own “who-ness”, which many of us instinctively cared about and enjoyed watching.[3]

Concrete components of our “honoring who-ness” thread

Concretely, at our November workshop, we had:

  • A short opening session piece about “who-ness”
  • A Pride class by John Salvatier (handout here if you want a partial idea)
  • A class on Dale Carnegie’s “How to Win Friends and Influence People” called “Who-friending” (handout here)
  • A class on question substitution by Preston. This is classic heuristics-and-biases content from Kahneman,[4] yet Preston’s class used social examples to get people noticing how we mistake our heuristics about people for the people themselves. By noticing and scoring the heuristic questions they were using to judge others and then checking those judgments through direct conversation, participants could feel the difference between interacting with a model and encountering who-ness directly
  • Patternwork: a class taught by Jack, about noticing “rhymes” across situations where you reacted oddly strongly

Who-ness was also threaded through various parts of our classic classes, and we heard many instances of the word “who-ness” scattered through conversations in the hallways.

Classic CFAR content

The current workshop keeps almost all the parts that were particularly good or notable from the classic CFAR curriculum:[5] Building a Bugs List; Inner Simulator; TAPs; Goal-factoring; Double Crux; Focusing; CoZE; Pair Debugging; Hamming Questions; Hamming Circles; Resolve Cycles.

This is all to say that there’s still a great deal of material and techniques on how to:

  • Have “beliefs” in the sense of predictions, not just words you say in your head; harvest knowledge from what you expect to see happen, and include it in your verbal beliefs
  • Tune into the “autopilot” that runs much of your life (including much of what you notice, or fail to notice, and so can/can’t consciously attend to); gradually train your autopilot to better suit your goals
  • Notice when your work feels “beside the point” in some way; get a better verbal handle on what that feeling in you implicitly believes the point is
  • Avoid calling a problem “impossible” until you’ve worked on it for at least five minutes by the clock

And so on.

Many of these classes have been changed in small ways to (we think) better interface with “who-ness” but the core of the classic CFAR workshop remains intact. If you send your friends to a 2026-era CFAR workshop, we continue to bet[6] they’ll get the good skills/mindset/etc. that folks used to get, plus some new and exciting material as well.

  1. ^

    CFAR kept it to 25 in our early years, then increased to 40 once the curriculum was stable; for now, we are again keeping it to 25. Small classes aid curriculum development. (Most classes at the workshop have about 8 guests.) 

  2. ^

    I do have a semi-technical concept I use in my head, which I'll sketch: I imagine the mind as an economy of tiny "mindlets", who gradually learn to form "businesses" that can tackle particular goals (such as "move hand"; "move hand toward rattle"; etc). On my model, a "who-ness" corresponds to a business made of mindlets; most of our learned skills are located in the "businesses" rather than the individual mindlets; and the validation we get from someone recognizing good work, or from ourselves seeing that we succeeded at something tricky and worthy, helps keep the businesses involved in that particular work from going bankrupt. See also Ayn Rand’s model of “living money”; and an upside of burnout.

  3. ^

  4. ^

    Unfortunately, there is no standard name for this, but it is discussed e.g. on Wikipedia, in this LW post by Kaj, and in Kahneman’s book “Thinking, Fast and Slow”.

  5. ^

    A notable exception is Internal Double Crux (IDC), which I think is often harmful for growing intact who-ness; CFAR stopped teaching it at mainline workshops a bit before we paused workshop activity in 2020.

  6. ^

    We have a money-back guarantee for the workshop; the guarantee covers dissatisfaction for whatever reason, but "I expected classic content and didn't get it" is a fine reason. I'd also be happy to take an actual bet if someone wants.




What’s going on at CFAR? (Updates and Fundraiser)


Published on December 30, 2025 5:00 AM GMT

This post is the main part of a sequence of year-end efforts to invite real conversation about CFAR, published to coincide with our fundraiser.

Introduction / What’s up with this post

My main aim with this post is to have a real conversation about aCFAR[1] that helps us be situated within a community that (after this conversation) knows us. My idea for how to do this is to show you guys a bunch of pieces of how we’re approaching things, in enough detail to let you kibitz.[2]

My secondary aim, which I also care about, is to see if some of you wish to donate, once you understand who we are and what we’re doing. (Some of you may wish to skip to the donation section.)

Since this post is aimed partly at letting you kibitz on our process, it’s long.[3] Compared to most fundraiser posts, it’s also a bit unusually structured. Please feel free to skip around, and to participate in the comment thread after reading only whatever (maybe tiny) pieces interest you.

I’d like CFAR to live in a community

I’d like CFAR to live in a community where:

  • People can see aCFAR
  • We can see you guys seeing us
  • Folks are sharing what they’re seeing, not what their theory says they should see
  • Interested folks within LessWrong, and within the CFAR alumni community, can benefit from the experience we gather as we try things and collide with reality. Our failures and fizzles aren’t opaque (they have moving parts), and our successes can be built on by others
  • You guys can tell us what we’re missing and help us do cooler experiments
  • We are all aware in common knowledge that aCFAR is one group among many. We all know together that other groups already have norms and customs and their own local territories. Both we and you guys can track where we are having good or bad impacts on the spaces around us; it’s easier to be a good neighbor

In the past, CFAR didn't know how to live in a community in this way (partly because I was often in charge, and I didn’t know how to do it). But I think CFAR and I now have the ability to do this.

As an example of the gap: I used to be somehow trying to claim that we were running our organization in the best, most EA-efficient, or most rational way. As a result, whenever someone argued in public that some revised action would be better, I thought I had to either:

  • Change what I was doing (costly, in cases where I had a multi-step plan they weren’t tracking or knew something they didn’t)
  • Refute them (also costly; requires transferring context and inferential distance, plus even then they might not be convinced but I still wanted to find out how my thing would go)
  • Arrange things (for next time) so people like them don’t say things like that, e.g. by withholding information about our workings so folks can’t critique our plans

But now, it’s different. We are visibly a particular organization, believed in by particular people, with details. The premises we believe in together (aka our operational premises for what we CFAR staff are building) are separated out from our epistemics, and from claims about what’s objectively best.

Anyhow: requesting community membership of this sort for CFAR, and setting you guys up to have a free and full conversation about CFAR, is the main business of this post, and is the main thing I’m trying to ask of you, Dear Reader, if you are interested and able.

Kibitzing requests

Some kinds of kibitzing I’d particularly appreciate:

  • Make it easy to see CFAR through your eyes. (Did we help you? Harm you? Do we look like random people nattering about nothing? Do we seem hopelessly blind? Do we make life more relaxing for you somehow? Do you care what happens with CFAR, one way or another?)
  • Ask questions
  • Flag where something doesn’t make sense to you / where you notice confusion
  • Guess how we might get unstuck in places we know we’re stuck
  • Guess what our blind spots are, and what experiments might make stuff more obvious to us in places we haven’t realized we’re stuck
  • Help make the real causes-of-things visible to someone who is young or is coming from outside these communities, as in Sarah’s point #6
  • Hope for something out loud
  • Try to speak to why you care rather than rounding to the nearest conceptual category.

Introductions: Me, aCFAR… and you, Reader?

I’ll start the introductions.

I’m Anna Salamon. I spent my childhood studying… not so much math, although also that, but mostly studying the process by which I or others learned math.[4]

I feel like a bit of a war veteran around the rationality/AI risk world, as I think are many of the old-timers. I joined the AI x-risk scene in 2008 because there were appallingly few people working on AI x-risk at the time (maybe five full-time equivalents, with those hours spread across maybe twenty people). I, like many at the time, worked really really hard while feeling isolated from almost everyone for whom AI risk somehow couldn’t register, who we had to save without them getting it. I felt a strong, utilitarian trust with the others working on x-risk.

From 2012-2020 I worked really hard on CFAR (initially at Eliezer’s suggestion) to provide a community where people working on AI risk could be less alienated from our surroundings. Then, I changed my mind about something hard to articulate about what kind of “organizations” had any shot at making things better. Now, I’m hoping again to do aCFAR, differently.

I’ll also try introducing aCFAR as though it’s a particular person with a history:

Reader, this is a Center for Applied Rationality (aCFAR).

In its past, CFAR was one of the major co-creators of the Bay Area rationalist community, and of the rationalist and AI safety movements broadly – people would come, get pulled into some sort of magic near our classes, and in some cases move to the Bay (or somewhere else) to work at MIRI or co-found FLI or do other neat stuff. (We had ~1900[5] guests attend a 4.5-day or longer program of some sort.) CFAR also caused concepts like “double crux,” “TAPs,” and “inner simulator” to be spread across rationalist and EA spaces. We hope to gradually do something similar with a new set of concepts.[6]

Today, CFAR is a vehicle for running workshops that I and the rest of our current staff deem worthy, which are an amalgam of classic CFAR stuff (descended from Eliezer’s Sequences) plus some newer stuff aimed at “honoring who-ness.” It’s also an experiment, as discussed throughout the post.

If you’re up for introducing yourself (which I’d appreciate!) there are two good ways:

  • You can say a bit about yourself and what brought you to the conversation in the introductions subthread
  • Or, you can write some object-level comment and add a sentence or two about where you’re coming from

On to the post proper!

Workshops

Workshops have always been the heart of our work at aCFAR. We spend most of our staff time tinkering toward making the workshop good, staring at folks at the workshop to see if it is good, iterating, etc. It’s where our take on rationality comes to life, changes us, is changed by its encounters with some of you guys, and so on.

So – if you want to kibitz on our current generators – it may be where you can best see them.

For those just meeting us: A CFAR workshop is a 4.5 day retreat with about 25 varied guests, 12-ish staff and volunteers, and a bunch of hard work, rationality, and conversation. The workshop typically involves a bunch of classes on rationality techniques and lots of time to apply those techniques and work on solving actual problems in real life. We currently have our next workshop scheduled for January 21-25 in Austin, TX.

Workshops: favorite bits

Among my favorite positive indicators from our workshops:

1. People made friends at the workshops and in the alumni network.

Many workshop guests across our history have told me a CFAR workshop was the first time they’d managed to make friends in the decade-or-more since they finished college.

This wasn’t an accidental side-effect of the workshops; we tuned the workshops toward: (a) creating contexts where people could update deeply (which helps with making real friends) and (b) arranging small and interactive classes with pair work, providing a “names and faces” Anki deck, hosting lightning talks, etc. to make it easy to make new friends at the workshop.

This wasn’t a side-goal for us, separate from the main aim of “rationality training”; IMO there’s a deep connection between [conversations and friendships, of the sort that can make a person bigger, and can change them] and the actual gold near “rationality,” such that each of (true friendships, rationality) can activate the other.

2. People had conversations at the workshops that updated the real generators of their actions.

Many conversations in the default world involve people explaining why a reasonable person might believe or do as they are doing, without sharing (or often knowing) the causes of their choices. But at CFAR, the real causes of actions often were (and are) properly in the conversation.

Relatedly, people at workshops would become at least briefly able to consider changing things they’d taken for granted, such as career paths, ways of relating to other people, etc., and they’d do it in a context full of curiosity, where there was room for many different thoughts.

3. The workshop was visibly “alive” in that it felt organic, filled with zany details, etc.

If this CFAR is going well, we should have spare energy and perceptiveness and caring with which to make many side-details awesome. We did this well in the past; we seem to be doing it even better now.

For example, during Questing at our November workshop, we had CFAR instructors run short “interludes” during which people could breathe and reflect for a moment in between 10-minute hero-and-sidekick problem-solving blocks. However, due to a minor scheduling mishap, CFAR instructor Preston ended up committed to be in two places at once. Preston solved his problem by setting up an “oracle” to run his section of inner simulator-inspired Questing interludes.

For another example, chef Jirasek created waves of life emanating from the kitchen in the form of music, food art, and sort of ostentatious interactions with the locals (e.g. replacing his whole wardrobe with stuff from some local thrift stores).

4. Truth-seeking, curiosity-eliciting, rationality-friendly context

The context at our workshops is friendly both to hearing peoples’ perspectives deeply and to being able to point out possibly-contrary evidence.

Workshops: iffy bits, and their current state

Although there’s much I love about our old workshops, I would not be able to run them now, although I could probably cheer for someone else doing it; there’s too much I was eventually unable to stomach for myself. In particular:

Power over / doing something “to” people (current status: looks solved)

I currently aim not to take pains to impact someone unless I can take equal pains to hear them (in the sense of letting them change me, in deep and unpredicted ways). This is part of a general precept that conscious processes (such as CFAR guests) should not be subservient to processes that can’t see them (such as a rock with “follow policy X” written on it, or a CFAR instructor who hasn’t much attention to spare for the guest’s observations).

My main complaint about our past workshops is that I, and much of ‘we’, did not always hit this standard (although we tried some, and some of our staff did hit it). It’s part of my current take on how to do epistemics in groups.

More details about this complaint of mine, for those interested:

1. Excessively narrow backchaining / insufficient interest in both the world and our workshop guests
I was scared about AI risk, all the time. I was in an emergency. And while I did try at the workshops to drop all that for a bit and take an interest in the people in front of me, I was also at the workshops to “make progress” on the AI risk stuff.

So, my notion of which participants were the coolest (most worth paying attention to, inviting back, etc) was mostly:

  • Who might do good work re: AI safety (math/CS chops, plus thinking in MIRI-ish ways), plus
  • Who was likely to donate to us or an EA organization, or organize parts of the alumni community, or visibly spread our rationality culture, or otherwise backchain in ways that would already seem sane to inner circle rationalists / AI safety people

(As opposed to, say, who had neat make-stuff skills or research patterns we didn’t have, that might broaden our horizons; I was too tired to really see or care about such.)

2. Nudging the CFAR alumni culture toward #1, so our community also became narrower
I, and other CFAR staff, weren’t the only ones who evaluated coolness a bit too narrowly, by my present taste. I think I and others in positions of community leadership often helped set this up in various ways.

(As a contrast point, the 2007-2011 OvercomingBias commenter and meetup community had broad and deep engagement without being a “school of thought” in the way the CFAR and LW rationalists later were, IMO.)

3. Trying to do something “to” our guests; priming our guests to want something done to them.
Many of our guests signed up for the workshop so that we could help make them more rational so that they could be better EAs (for example). And we wanted them there for much the same reason (sometimes; some of us).

4. Casting ourselves as having more epistemic authority or charisma than I currently think warranted.
Deeply related to #1, 2, and 3 above.

I’m relieved that our Nov 2025 workshop (and our prior, tiny pilot at Arbor Summer Camp) did not have these problems AFAICT. Things I saw in November, that I believe I’d see differently if we did still have these problems:

  • I felt relaxed around the participants; my fellow instructor Jack said they liked the participants for the first time instead of feeling at war; many or all of us instructors simply enjoyed reading the exit surveys instead of feeling jostled by them
  • We heard considerably more remarks than usual along the lines of “gosh, rationalists are really friendly when they get together in person”
  • On Day 4 of the four-day workshop, we spent three and a half hours on an activity called Questing, in which participants took turns being the “hero” (who worked on whatever they liked) and the “sidekick” (who assisted at the hero’s direction) for ~10-minute chunks. This activity was extremely well-liked (it did best of all activities on our survey; many said many great things about it). In the past, similar activities led to many participants feeling jarred/jostled/attacked/hurried; this time, despite the schedule, it felt spacious and friendly

This is enormously relieving to me; uncertainty about whether we could change this thing was my main reason for hesitating to run CFAR workshops. We will of course still be keeping our eyes out.

More workshop iffy bits

While the “power over” thing was the iffy bit that bugged me the most, there are also other things we want or need to change about the workshop. You can see our whole workshop-related bugs-and-puzzles-and-todos list here.

More about the new workshop

If you’ve been to a CFAR workshop in the ~2015-2020 era, you should expect that current ones:

  • Have roughly 2/3rds classic content, including Building a Bugs List, TAPs, Inner Sim, and almost all the more memorable classes
  • Are the same format
  • Have roughly 1/3rd new content, mostly aimed at practical ways to be less “seeing like a state” when applying rationality techniques, and to be more “a proud gardener of the living processes inside you / a free person with increasing powers of authorship.” (We've been calling this thread "honoring who-ness.")

Further detail, if you want it, at More details on CFAR’s new workshops.

Larger contexts surrounding our workshops

In this section, I’d like to talk about the larger contexts (in people, or in time) that our workshops depend on and contribute to, as well as some solved and unsolved pieces about those larger contexts.

aCFAR’s instructors and curriculum developers

Our major change, here, is that all instructors and curriculum developers are now very part-time. (In 2012-2020, most workshop instruction and curriculum development work was done by full-time staff.)

There are two big reasons I made this change.

  • First, I’m pretty sure it’s healthier for the instructors (in the 2013-2020 era, many CFAR instructors had very hard times, in ways that reminded some of us of the troubles of traveling bands)[7]
  • Second, it makes it easier for CFAR to be unafraid near questions of whether we should change something major about what we’re doing, should shut down, etc. – since our staff mostly don’t have their only avenues for meaning (or for income and life stability) bound up with CFAR

A pleasant bonus is that we get more mileage per donor dollar: a few hours/week of trying our units on volunteers and on each other is enough to keep CFAR in our shower thoughts as we go through the week (for me, and for many other instructors AFAICT), and the rest of our normal life seems then to give us useful insights too. (And we’re paid hourly, so a "lighter" schedule that still gets curriculum development flowing is a good deal for donors!)

aCFAR’s alumni community

Our workshop development process is stronger with a healthy alumni community in several ways:

  1. An alumni community lets us better see the long-term impact of our workshops
  2. An alumni community lets workshop alums learn and add to the art more thoroughly by practicing with others (As well as hopefully allowing cool new business collaborations, friendships, etc.)
  3. It seems more wholesome to tend (and be tended by) a community of alums, vs having only one-off interactions with new workshop guests

Our alumni community was extremely fun and generative in CFAR’s early years, but gradually became less invested and lower trust over time, partly as a natural side-effect of passing years, and partly because we weren’t doing community all that well. We still have an alumni mailing list and it hosts some interesting discussions, but things there feel less active and exciting than they once were.

We like our alumni and think they’re cool! We’d like to figure out how to freshly kindle some of the energy that made the old CFAR alumni community as cool a place as it was.

My guess (not a promise) is that we should start a new alumni community with these features:

  • Old alumni are not automatically in, but you are encouraged to reach out if you’re an old alum and want to join the new community
  • When a person comes to a workshop, they automatically become a member of the “new alumni community” for a fixed period of time (a year? two years?), after which their membership automatically expires unless they contribute in some way (e.g. volunteering at a workshop; donating / paying a membership fee; or making something neat for other alumni)
  • There are annual alumni reunions, a mailing list or other structure for discussions, and some smaller, lower-cost “CFAR alumni workshops” on specialized topics

Lineage-crediting and gatekeeping

It is vital to accurately, publicly track where good things come from (lineage-crediting). At the same time, it is necessary not to let people into our events or alumni networks who we can’t deal with having there. This combination can be awkward.

As an example of this awkwardness: Michael Vassar taught me and many people a bunch about rationality when I joined the rationalist and AI safety scene in 2008, and he was also quite involved in me changing my mind about the stuff I mentioned changing my mind about in 2020. I can see traces of his ideas all over this post. My thoughts in this post, and the ideas in the newer parts of CFAR, were also greatly influenced by my good friends Anonymous and Anonymous.

And yet, for varied reasons, I wouldn’t feel good about having any of those three visit an intro CFAR workshop (although I might well invite Michael Vassar to an alumni reunion or similar event, where my tolerances are a bit broader; and I’d gladly have all three to a retreat run by a more bespoke CFAR spin-off called LARC/Bramble). I think this is not unusual bad luck; my best guess is many of those who “woke up” as kids in strange surroundings and who forged their own paths to being unusually conscious and agentic, dodged some of the “be rule-abiding” training that makes most middle class people easy for other middle class people to predict and be safe around. And the CFAR alumni network is a large, semi-institutional context designed to work okay for folks who are within the normal range on rule-abiding and who are used to getting to assume others are too, for good reason. (To be clear, I also learned a pile of rationality from many others, most notably Eliezer, who are reliably rule-abiding.)

This sort of “awkward” isn’t only costly because of wanting not to alienate my friends. It’s also costly because it’s confusing (to me, to them, and to workshop guests and onlookers). When rationality content is presented within a context that couldn’t have made that content and that doesn’t help tend the sources of that content, it’s harder to set up good feedback loops. (Cf. the Caring that Tends its own Sources).

But, here I am, anyhow, having decided that this is the best world I can manage, and trying to describe something of its workings in public.

My plan, roughly, is the obvious one:

  • Try to acknowledge the lineages of ideas whenever it comes up, without regard to whether it’s awkward
  • Don’t admit people to CFAR workshops or events who we can’t deal with (or try not to; but be medium in my false-positive/false-negative tradeoff ratio)
  • Do value: visibly staying in touch with thinkers I’m relevantly downstream of; coming into contact with varied high-capacity people; trying to MacGyver decent feedback loops where I can

Michael “Valentine” Smith

While we are on the topic of both gatekeeping and lineage-tracking: we are considering bringing CFAR co-founder Michael “Valentine” Smith back onto our workshop staff.

I’d like to note this publicly now, because:

  1. Seven years ago, we said publicly that Valentine “would not be [in] any staff or volunteer roles going forward, but remain[ed] a welcome member of the alumni community”, and so it seems well to be similarly public about my revised intent
  2. A fundraiser post seems like an honorable place to publicly share plans and policies that some may object to, because folks can easily not-donate (or advocate that others not-donate) if they want.

If it matters, I and various others have worked closely with Valentine at LARC/Bramble (CFAR’s more bespoke spinoff organization) for the last two years, and I have found it comfortable, wholesome, and generative.[8]

The broader rationality community

The broader rationality community makes our work at aCFAR feasible (e.g. via donations, via sending us participants who are already rationality fans, via giving us good rationality stuff to draw on, and via good critiques). We are grateful to you guys. It’s important to me that we give back to you, somehow, in the long run. My main current theory as to how to give back is that we should write substantive blog posts as our theories-of-rationality congeal, and should make our process open so if we fail this time, it’ll be easier for interested parties to see what exactly went wrong (no opaque fizzles).

Flows of money, and what financial viability looks like within our new ethos

We do not yet have a demonstrated-to-work plan under which aCFAR (in our new incarnation) can be financially sustainable.

In 2012-2020, a large majority of our donations came from AI risk donors, who donated because CFAR recruited for MIRI (or to a lesser extent other AI safety efforts) or because they otherwise believed we would help with AI risk.

Also, in 2012-2020, a significant chunk of our workshop revenue came from EAs (both AI risk people and EAs more broadly) who had heard that CFAR workshops would somehow make them better EAs, and perhaps also that CFAR itself was an EA organization worth supporting. And so they balked less at the (then) $3.9k price tag because it was parsed as an EA expense.

Double also, in 2012-2020, we workshop instructors broadly tried to position ourselves as people who know things and can give that knowledge to you (and so are worth paying for those things).

My current attempt at CFAR branding lets go of all three of these angles on “you should give us money,” in favor of an ethos more like: “we (including you, dear workshop guest) are a community of people who love to geek out (in a hands-on way) about a common set of questions, such as:

  • What things are most worth our attention?
  • What processes might help us form true beliefs about the things that matter the most?
  • What processes in fact lead to good things in the world, and how can we tell, and does it work if we mimic them?
  • What is known by different sets of “makers” in the world, e.g. by the people who keep the medical system running, or who do academic chemistry research, or who make movies, or who do handyman work? How can you tell?
  • Are there common illusions getting in our way, e.g. from Kahneman-style biases, or from memetics or social ties, or from ego? What patterns might help us compensate?
  • Where do our goals come from?”

Under this model, CFAR instructors differ from workshop guests in that we spent a bunch of time testing and refining particular classes (which we try to make into good springboards for doing hands-on geeking out of this sort in common, and so for jumpstarting guests’ ability to have rich conversations with each other, and to do rich, grounded noticing together, and to point out traction-creating things that are visibly true once pointed-to). But we try not to differ in perceived/requested epistemic status, or in “you should believe us”-flavored social cues.

Also, under the new model, our requests aren’t backed by a claimed long-run EA payoff: we are not saying “please consider sacrificing parts of your well-being to work at CFAR, or to attend CFAR or implement our taught habits, because it’ll help with AI risk somehow.” Instead we are saying “please come nearby if it interests you. And if you like what happens next, and what changes it seems to give you in the observable near- and medium-term, then maybe keep trying things with us for as long as this seems actually healthy / rewarding / to give good fruits to you and visible others in a simple, cards-on-the-table way.”

I expect our new model is more wholesome – I expect it’ll bring healthier feedback loops to our curriculum and culture, will form a healthier town square that is more fruitful and has fewer stuck beliefs and forcefully propagated illusions, and will be an easier context in which to keep us staff wanting to share most info in public, including evidence we’re wrong. But I don’t know if it’ll bring in enough revenue to keep us viable or not. (And we do still need money to be viable, because being a custodian of such a community requires staff time and money for food/lodging/staff flights/etc.)

If we can’t make a financial go of things under our new ethos, my plan is not to revert to our past ethos, it’s to fold – though my guess is we’ll make it.[9]

How our ethos fits together

In this section, you’ll find pieces of what motivates us and principles we intend to follow.

Is aCFAR aimed at getting AI not to kill everyone? If not, why are you (Anna) working on it?

We are not backchained from “help get the world into state X which’ll be better for AI,” nor from “help recruit people to AI safety work,” “help persuade people to take better AI policy actions,” or anything like that.

My (Anna’s) motivations do and don’t relate to AI safety; it’s complicated; I’ll publish a separate post going into detail here in about a day.

Principles

This is an attempt to make visible the principles that I, and to some extent CFAR, am trying to act on in our CFAR work. I, and we, might change our mind about these – these aren’t a promise – but I plan to review these every three months and to note publicly if I change my mind about any (and to note publicly if CFAR changes leadership to someone who may run on different principles).

We’ll start with some short-to-explain ones, then head into some long explanations that really should be their own blog posts.

Truth is crucial

This principle is one of the “things that go without saying” around LessWrong most of the time (and is shared with past-CFAR), but it’s precious.

Honor who-ness

Remember each person is a miracle, is way larger than our map of them, and is sustained by knowledge and patterns of their own making. Honor this. Allow ourselves to be changed deeply by the knowledge, patterns, character, etc. of anyone who we deeply change.

Stay able to pivot or shut down, without leaving anybody in the lurch

It’s easier to stand by principles if there’s a known and not-too-painful-or-commitment-breaking path for quitting within a few months (should we prove unable to stick by these principles while remaining solvent, say).

Serious conversation, done in hearty faith

This section is written by my colleague John Salvatier.

Serious conversations deal with the real issues at play and go beyond literary genre patterns. And serious conversations in hearty faith apply enough real human trying to get to real discovery about the topic.

Serious discussions of problems we really care about, where the participants are fully engaged, are kind of a miracle. For example, if you’re wondering whether to quit your job, a serious and hearty conversation about the question, and about what matters to you in life, can have a profound effect on your life.

At this CFAR, we are trying to have hearty faith with each other and with others to create the possibility of serious conversations. (And we are trying to do this without forcing, via repeatedly asking ourselves something like: “does it feel good to share my real cruxes right now, and to hear where [person] is coming from? If not, what sensible reasons might I have for not doing so (bearing in mind that there’s lots of useful stuff in me that conscious-me didn’t build)?” We aren’t trying to impose hearty faith; we’re taking its presence as a thermometer of whether life is going well right here.)

Serious conversations are like science experiments. Their success is not measured on reaching a particular outcome, but on their revealing substantial things about the world that bring us into closer contact with the world. 

The classic Eliezer/Robin AI Foom Debate is a good example of something that might look like a serious conversation but somehow isn’t a “conversation” in quite the sense we mean. A conversation would spend a bunch of time doing asymmetric things where one person is mainly trying to understand the other (for example passing their ITT). Instead, Eliezer and Robin each use each other as a foil to better articulate their own view. This might be serious research, or good exposition to an audience, but it isn’t the thing we have in mind.

Hearty faith is necessary for successful serious conversations when our maps (or theirs) have messy relevance to the world and our goals. Which they will when the topic is a life frontier or a world frontier.

Hearty faith is different than just good faith. 

Bad faith is lying, fraud. An abandoning of our integrity. 

Lousy faith however is when our intentions are like a thin stew instead of a hearty, many-flavored, full-bodied one. In “lousy faith” we are putting in effort to keep integrity on some dimensions, but not very many.

  • My cutest example of “lousy faith” is a teacher who replies to a kid’s “can I go to the bathroom?” with “I don’t know, can you?” 
  • A subtler example is someone who engages with what you say, but takes a narrow and incurious view of where you’re coming from and what you mean by your words, adversarially playing dumb about what you’re saying. They’re not lying about trying to understand, but they’re certainly not applying themselves or being up front about their (lack of) investment.
  • Another paradigmatic example: “Why don’t you just [radically shift your mindset to mine]?” said as if that were an atomic action.

Hearty faith, by contrast, is when we act with attention to many sorts of integrity all at once (the more, the heartier, like a hearty stew).

Hearty faith is necessary for serious conversations with messy world maps to be successful because every such conversation has many relevant-but-illegible layers that are otherwise obscured, and hearty faith allows legibilizing them. It allows the relevant-but-illegible conversation layers into the conversation on good terms.

The caring that tends its own sources

This is a phrase I made up, inspired by Eliezer’s The Lens that Sees its Own Flaws (which is one of my very favorite Eliezer posts, and conveys an idea that’s on my shortlist for “most inspiring insights ever”) and also by conversations with my friends Evan McMullen and Anonymous.

I hope to eventually write a blog post about this principle that makes sense. But this is not that blog post, it is a placeholder.

So: we find ourselves alive, awake, caring. How did I, or you, reader, get to be like this? It’s a bit of a miracle. We can tell decent causal stories (mine involves my parents, their parents, the United States, a brief opening in Hungary’s border during a war, my mom’s careful crafting of endless ‘math games’ for me, my dad’s absorbing a useful secularism from the Soviet Union that he rightly hated… going further back we have the European Enlightenment, eons of biological evolution, and more). We can tell decent causal stories, and it’s worth bothering to tell them, and bothering to try to get it right; and at the end of the day “a miracle” is still a decent term for it – the processes that let us be here are something large, and worth marveling at, and contain deep generative “magic” that we don’t yet know how to build.

How to relate to this?

Concretely:

  • I’ll find desires within me that are busy doing a flailing pattern that won’t get anywhere – pieces of caring that are not yet “helping tend their own sources.” (For example, I’ll be reflexively “not-listening-harder” to try to make a loved one act differently.) In such cases, I try to gradually help the reflexive desire become able to care usefully across slightly-longer time horizons, in collaboration with “me as a whole.” (Then, the “caring that tends its own sources” can be bigger.)
  • I try to trace lineages aloud, even where it’s awkward
  • When I see someone who seems surprisingly (skilled / generative / agenty / etc), I try to ask what process made them
  • I make some effort to help tend the processes that made me, for myself and for CFAR. (E.g., while this CFAR is not an EA organization, we’ve been helped by EA and I hope we can leave it better than we found it.)

No large costs without a feedback loop grounded in earned knowledge and caring

This principle is an attempt to articulate the main thing I changed my mind about in 2020.

It now seems to me that when you’re running an organization, such as aCFAR or the neighborhood bakery, you’ll benefit if you:

  • Are aware of the resources you depend on. (As a bakery you might depend on customers, ingredient suppliers, a thriving downtown that helps bring potential customers by your door, the cultural tradition of coffee and baked goods...)
  • Take an interest in what produces and sustains these resources. Be aware of the rough extent to which you do or don’t have reliable maps of what’s involved in producing and sustaining these sources, so you can maintain the needed amount of [respect / Chesterton’s fence / actively watching out for needed conditions you shouldn’t disrupt], without being unduly cautious about everything.

    For example, I understand how to turn hot water and peppermint teabags into peppermint tea. (Thus, I can change up my water heating method, its temperature, etc without being surprised by the results.)

    On the other hand, my friend sometimes likes to walk his dog with me. I’m pretty sure there’s detail to where he will/won’t take his dog, when he does/doesn’t feel like doing it, etc., and I’m pretty sure that detail helps maintain cool functionality, but I also know I don’t know how it all works. Thus, I know that if I try making many of these decisions for my friend, without consulting him, I might mess up some resource he’s used to counting on.

  • Take an interest in the specific “bridging structures” that let particular resources coexist.

    For example, a coaster is a good “bridging structure” to keep my hot teacup from damaging my wooden table.

    For a more complex structure, a bakery’s proprietor might be careful to keep their sidewalk shoveled, to greet neighboring business owners, etc. as part of a plan to allow the bakery and the downtown it’s in to avoid harming each other. This kind of bridging structure will need to regularly take in new info, since one probably can’t have an adequate static map of downtown as a whole.

  • Let each resource-flow and each bridging structure have a keeper who maintains both an inside view about what’s necessary for sustaining the resource flow and an inside view about how much “magic” isn’t yet in their map.

    That keeper must be responsible for deploying these resources only in ways that make inside-view sense to them (e.g., if there’s a small experiment, the keeper should have felt hope in doing small experiments; if there’s a large deployment, the keeper should have felt conviction that large deployments of this sort bring fruit)

    That keeper must also have enough eyes on the results of that deployment that they can update sensibly.

I’ll spell out what this means in the case of CFAR, and then explain why I care.

What this means in the case of aCFAR:

This CFAR makes use of three main resource flows:

  • Staff and volunteer time and energy
  • Participant desire to come to workshops and test sessions, engage with our attempted rationality techniques, do life a bit differently in contact with us, and let us see something of the results
  • Money (from donors, workshop revenue, and other groups renting our venue)

We want all these resources used in ways where their keepers have grounded reason to think it’ll help with something they care about (and have feedback loops for checking).

Concretely, I’m aiming for:

Staff and volunteers have better lives (or not-worse lives) via our involvement with CFAR, including in the short- and medium-run

In CFAR of 2012-2020, many of us sacrificed for CFAR – we e.g. worked 60+ hrs/week, had distorted social patterns with folks in the rationality community, and otherwise paid (and sometimes caused) large costs. I’d like to arrange our culture so that people don’t do that this time around. I want us to each be simply, groundedly in favor of what we’re doing, without trusting in long-term unseen effects on the post-AGI future or anything else.

(Here and elsewhere, it’s fine if staff and volunteers sometimes try things that hurt us. The principle isn’t “no costs” or “no one made worse-off ever.” It’s rather “no key resource flows, ones that CFAR is reinforced by and grows around, that make people worse-off.” One-off “ouches” are part of how we locate what works, and are fine as long as we update away from them instead of learning to depend on them.)

Participants try aCFAR’s suggested habits based on their own inside views (not our charisma or claimed knowledge)

Some participants have historically shown up to the workshop expecting to be told what to do by people who know the answer. But I want us to resist this pressure, and to create a culture of “practice trusting your own judgment, and making many small experiments while seeing yourself as the author and experiment-iterator for your life and habits.”

Donors

I go into much more detail on this one in “Who I hope does, and doesn’t, consider donating” below.

Why this principle

I’m afraid that otherwise we’ll do a bunch of hard work, at large costs, that nets out to “harmful, on average, after considering opportunity costs.” I’m also afraid that all that work won’t even teach us much because, for most of it, there was no conscious human who individually thought it a good idea. (This is coming out of my 2012-2020 experiences.)

To spell out my thinking:

First: people often learn more by making their own mistakes than by “making other people’s mistakes.”

This is easiest to see if we consider a concrete context such as chess. If I play chess from my own inside view, I will repeatedly make moves that look like good ideas to me – and then my opponent will often show me how exactly my inside view was wrong by exploiting my errors. If I instead play chess by repeatedly trying moves my friend thinks are good, I’m likely to learn less, because my friend’s moves aren’t rooted in a detailed inside-view lodged in my head.

There are exceptions – maybe my friend has a Cool Chess Trick that I can understand once I try it, and that wouldn’t have occurred to me on my own – but these exceptions work when they’re somehow supporting an existing, intact flow of my own autonomous choice.

Second: I don’t want to build habits or culture (in our alumni) that’ll be easy for cult leaders or others to exploit.

If workshop guests practice deferring to us about what weird things to do with their minds – especially if they do so for extended periods, based on wispy claims about long-term payoffs, e.g. “this’ll help with AI risk somehow” – this risks setting some up to later try deferring to people running more obviously unhealthy cults. I speak from experience.

I also hope a culture of “remember the buck stops with you; check whether it is producing fruits you directly feel good about” may help with the rationalist community’s tendency to enable AI companies. But this is only a hope.

Third: I want good hygiene near CFAR and the rationalists / I don’t want to leave metaphorical rotting meat on our kitchen counter.

If you’ll pardon a metaphor: having living, healthy humans in a kitchen is mostly fine, hygiene-wise. Having a large slab of unrefrigerated meat sitting in the kitchen (no longer alive, and so no longer tied in with a living organism’s immune system), is a hygiene problem, especially after a while.

I suspect that if we have “living resource flows” across CFAR, the memes and habits and culture-bits that survive and spread here will mostly be good for us and others. I suspect by contrast that if we have ungrounded resource flows (i.e., if we ignore this principle), we’ll risk breeding “parasitic memes” (or people) that are optimized to use up all the free energy in the system and that don’t tend to the conditions required for healthy life.

I mean it

If we can’t hit this principle (or the truer spirit behind it), my plan is to either figure out how to hit it, or close CFAR.

(Although, here as elsewhere, I may revise my views; and I’ll update this post if I do; these principles are not permanent promises.)

Some principles you might assume we have that we don’t have:

  • Safety/vetting/”full protection” as a maximum priority. We care about safe experiences and environments, but not to the exclusion of all else.
  • Maximum data-backedness (we like data, but most of our stuff hasn’t been verified by RCTs, and we also believe in acting on our intuitions and inside views and in helping you act on yours)
  • Trying to be “The” canonical Rationality Center, or to do everything the one objectively best way. (In fact, we are aware that we are one project in a world with many cool projects and much space. We aim to do our thing without hogging the whole "rationality" namespace, or the whole space for rationality-related cultural experiments.)
  • I’m not sure what else goes here, but I welcome questions.

Why we need your support / some cruxes for continuing this CFAR

There’s a sense in which we don’t need anybody. I could sit in my room, call myself an “applied rationality researcher,” and write things I called “rationality exercises” on paper or something.

But if we’re going to do something that’s not pretend, then we need people. And we need to find a way that there’s something in it for those people – a resource flow that gives back to them. (Otherwise, it’s still pretend.)

Why ask for donations?

We’re asking for donations because it takes money to run CFAR. If there are enthusiastic people out there who are willing and able to help fund us, that’ll both help a lot and seem wholesome. We aim to find a set of people who want the kind of practice we are building, and who want to build it, believe in its possibility, and try it together.

If nobody donates, we’ll likely continue; in extremity, we could e.g. sell our Bodega Bay venue, which would give us a few years’ operating expenses at our current, fairly minimalist budget. (That said, we love our venue and don’t want to sell it; more on that later.)

But if nobody donates and nobody cool wants to kibitz and all the people who try our workshop kinda want their time back and so on, of course we quit. Our main business in interacting with the community is to find a way to do cool stuff, via resources from some of you, in such a way that everyone’s glad. I suspect, but am not sure, that getting some donations from some of you is part of how to build the good, living center we are seeking.

Some disagree with us, and we’re doing this anyway

It is not the case that everyone who’s had much contact with past-CFAR believes resuming workshops is a good idea.

In particular:

  1. In the comments thread of our last post, Duncan Sabien (who worked for CFAR from 2015 to 2019, served for a long time as our Curriculum Director, and, among other things, wrote the CFAR handbook) spoke against CFAR in strong terms.
  2. I also got several quieter responses along the lines of “hmm, really? I’m not sure if that’s a good idea” when I told long-term friends and former colleagues I planned to restart CFAR. Also, I have myself shared concerns about my and CFAR’s past work, since changing my mind about some things in ~2020.

There were also cheers: a sizable majority (at least of those I heard from) offered enthusiasm, well-wishes, “I’m glad there are again CFAR workshops where I can send my friends,” “I missed you guys,” etc. Former CFAR instructors Renshin (aka Lauren Lee) and Adam Scholl did this in the public comment thread. And I of course landed solidly at “yes, I want this enough that I’m willing to put in real effort.”

But I want to acknowledge that some disagree, for a few reasons:

  1. It’s more honest to potential donors;
  2. I’d like those with serious doubts (including folks who might normally be shy, quiet, or agreeable) to have a way to mention these without disrupting a conversation that assumes they don’t exist;
  3. I want to show off aCFAR’s new ability to put coordinated effort into a thing some disagree with

Let me elaborate on (3): Back in 2014-2020, I would freak out whenever some serious thread of public conversation cast doubts on CFAR. I’d do this because I knew I needed CFAR staff’s morale, and I believed (accurately, I think) that many would lose their morale if even a small vocal minority said we were doing it wrong.

I believe our morale is somehow stabler now. (Perhaps partly because we factored aCFAR’s believing in’s out separately from our epistemics, and also because we’re a particular experiment we each want to do rather than a claim about the ‘objective best’).

I care about (3) for several reasons, but one is that I want good but imperfect institutions to exist in our present world, and to do this without suppressing news of their failures. Many of the previous decades’ institutions are gone from the world of 2025.[10] I think this is in significant part caused by the combination of:

  1. the Internet making it harder to suppress evidence of errors/doubts/harms/etc. (a good thing)
  2. a heuristic of “if anyone seriously objects in public, either pressure them into shutting up, or drop the project” (unfortunate, IMO).

Also, I put real effort into dismantling parts of my and CFAR’s positive reputation that I believed were false or ill-founded, and I did that partly because I didn’t think we could build something good near CFAR before that stuff was dismantled. Having completed that step (as I see it), I am eager to see what we can build on the new, partially razed ground.

Donations

Our finances

We currently have about $129k available for CFAR and its projects, which gives us about four months of runway.

To make it comfortably to the end of 2026, we think we need about $200k of additional donations (counting donations into this fundraiser, any SFF funding, and any other donations, but not counting workshop payments or venue rental revenue). We expect to probably get some money from SFF (probably in the form of matching funds, in about a week), and so are setting a “basic target” of $125k, and a “reach target” of $200k (as we can do more with more).

For more detail on that, see this breakdown:

General Costs

CFAR has ongoing general administrative costs – accounting, staff wages for administrative tasks, and so on. We think this will cost us about $72,000 for 2026. This is a very significant decrease from e.g. 2019, as CFAR is running with a smaller and leaner staff and no longer maintains office space.

Venue

We maintain an event venue in Bodega Bay, California, which we also rent out to other groups. This venue is both one of our primary expenses and also a source of revenue. Since 2020, the venue has been a significant net expense as we have run fewer programs there and not had many bookings. However, we now have venue caretakers who are sprucing the place up, figuring out what outside groups are looking for in a venue and how we can hit it, etc. We also expect to use our venue for more CFAR programs than we have been in the past few years.

For 2026, we estimate that we will likely have total venue costs of about $285,000. This is primarily mortgage payments, utilities, various maintenance/repair/”venue caretaking” work, and property taxes, although it also includes supplies for programs held at the venue. We also anticipate bringing in approximately $200,000 of revenue from outside bookings (after deducting cleaning fees), as well as using the venue for our own programs, hosting some staff meetings there, and so on. The savings from our own programs there are difficult to calculate but would likely be in the tens of thousands of dollars, perhaps $35,000 to $65,000 or so across 2026.

This means we anticipate the venue will on net cost us something like $20,000 to $50,000 for 2026 (roughly: ~$285,000 in costs, less ~$200,000 in booking revenue and $35,000 to $65,000 in program savings), depending on how many programs we end up running there, how many outside bookings we host, and what other costs we may incur. This is not ideal but we consider it a cost worth bearing for now, and in the long run we hope to run more programs there ourselves and bring in more outside bookings such that the venue ends up breaking even or being financially positive for CFAR.[11]

Workshops

Workshops are both a source of revenue and a significant cost for CFAR to run. Generally speaking, workshops gain or lose money based on how many staff members and participants are involved and how much financial aid we do or don’t offer to participants; a workshop with twenty-five participants paying full price would be profitable, while workshops with fewer participants and/or more financial aid may well lose money for CFAR on net. For instance, our November workshop ended up approximately -$28,400 on net.

In 2026, we currently anticipate running about four mainline workshops (one Jan 21-25 in Austin, TX and three yet to be announced). The workshop in Austin will incur venue costs that workshops held at our venue won’t. Insofar as the workshops otherwise have overall similar costs and revenues as the November workshop, we will probably be net minus ~$130,600 from workshops.[12]

We are excited to run these workshops even at a potential loss. In addition to being helpful to the participants, running workshops greatly aids our efforts to develop and refine an art of rationality. (In the long run, if our programs are any good, we should be able to fund the workshops more fully from those who attend, which will make for better feedback loops, though we may want ongoing exceptions for students / folks without much money and for folks who are coming mostly to aid rationality development work.)

We also think that workshops benefit people beyond those who attend directly – some workshop attendees have gone on to teach others concepts like double crux and other CFAR techniques, and we think running workshops provides significant value for these “grandstudents”[13] as well.

In the past, CFAR has even offered some workshops for free – for instance, the four workshops we ran in the Czech Republic during autumn 2022 were entirely free to participants. However, the overall state of the funding environment was different when those programs were being planned and offering free mainline workshops currently seems imprudent.

Curriculum Development

In addition to the above costs, we also pay staff for general curriculum development outside of workshops – research into various aspects of rationality, work on new techniques, running test sessions where we try new material on volunteers, and so on. We project something like $25,000 in costs for this area in 2026, though this is somewhat speculative.

Aspirational

In addition to the core categories mentioned earlier, there are various other projects that CFAR would like to spend money on but currently does not.

For instance, in the past CFAR has supported “telos projects” – a program where CFAR provided funding for rationality-related projects that felt relevantly alive to people. In 2025, we had a few legacy projects in this area but are not soliciting new applications for telos funding; in a world where we had better funding we would like to reopen the program and use it to help new alumni run cool projects, including infrastructure for the new alumni community.

We would like to be able to pay me (Anna) to write various LessWrong posts about concepts CFAR has recently been working with, but are currently holding off on that. We would also like to (slowly, slightly) grow our staff of curriculum developers and to modestly increase staff wages if we can.

Who I hope does, and doesn’t, consider donating

As mentioned earlier in this post, I’d like to build toward a world in which aCFAR’s donations come from, and with, the right kind of feedback loops.

I’m particularly cheerful (happy, relieved, joyful, grateful) about donations stemming from any of:

  1. You want to say a friendly “hello” in a donation-shaped way. Sending us $20, or $200 if you are so minded, is a good way to let us know, “Hi aCFAR, I see you, I smile at you, I hope you stick around.”
  2. CFAR, or things relevantly similar to CFAR, made you much better off personally, and you’d like to “pay it forward.” (I donated to Lightcone this year because their existence makes my life much better; if you have a similar desire re: this CFAR, we appreciate it!)
  3. You expect to feel more at home in the CFAR context, in some important way, and so you’d like to enable the creation of that context, and/or to buy into it or nudge it a bit toward being you-flavored in some way.[14]
  4. There’s something in here that you personally are rooting for, and you’re moved to root for it harder, with your dollars, so it can really be tried. (Like a home team or a city or a project in which you have pride and have/want membership)

    The more dollars you deploy here, the more I hope you have some heart to spare to come along with your dollars, as “I care about this, and I’ll be kibitzing from the sidelines, and updating my total view of life based on how it goes, with enough context that my kibitzes and updates will make sense.” (The more of your dollars you deploy here, the easier we’ll try to make this “kibitzing from the sidelines” for you, if you’re willing.)

  5. (Particularly relevant for large donations) You want aCFAR to remember you as a key contributor and to take a deep interest in where you’re coming from and how you and we can do something that is win-win for our [hopes and dreams and hypotheses and what’s worth trying in the world] and yours. (Plus you sense the potential for collaboration.)

I’m particularly wary of donations stemming from:

  1. You’re an EA, and are hoping to donate dollars to a thing that others have already verified is an efficient “input money, output saved lives or other obvious goods” machine.

To be clear, EA is an excellent way to donate; I’m glad some people donate this way; there’d be something seriously wrong with the world if nobody did this. But it’s not what this CFAR is doing. (More on this above.)

And in my opinion (and Michael Nielsen’s in this podcast with Ajeya Cotra, if you want a case at more length), there’d be something even more wrong with the world if most resource expenditure flowed via EA-like analysis.[15]

Another reason people used to sometimes donate, which IMO doesn’t apply to us today, and so would not be a good reason today:

  1. Trying to “raise the sanity waterline” for large sets of people (we tried this some in the past, yielding e.g. Julia Galef’s excellent book and some contributions to university classes; we have no active effort here now)

And a couple other reasons to donate:

  • You want this weird set of people (who’re having lots of impact on the world, for whatever reason: the rationality community and its many “adjacent” communities and people) to have enough total community infrastructure. (And you think we help that, and don’t much harm that.)
  • You want better eyesight on what happened to the hopes of the original rationalist project, and you think [this particular attempt at “let’s try this again, with a more transparent conversation this time”] will give us all some of the light we need

Ways to help CFAR or to connect to CFAR besides donating:

There are several good ways to help CFAR financially besides donating. You can:

  • Come to a workshop (or help a friend realize they’d enjoy the workshop, if they would)
  • Book our venue (or help a friend realize they’d enjoy booking the venue, if they would)
  • Sign up for our online test sessions to help us develop our material
  • Try our coaching (for yourself or for a friend).

There are also a pile of ways to help this CFAR and our mission non-financially. (Most of the resources we run on are non-financial, and are shared with us by hopeful rationality fans.) Basically: kibitz with us here, or in a test session, or at a workshop. A lot of the time, attending a workshop helps even if you come on full scholarship, as having varied, cool participants makes our workshops more perspectives-rich and generative.

For bonus points, maybe come to a workshop and then write up something substantial about it on LessWrong. (Scholarships are available for this purpose sometimes.)

Perks for donating

If you donate before Jan 31, you’ll also get, if you want:

  • A CFAR sticker pack (for donations ≥ $20)
  • A CFAR T-shirt, with our logo plus “don’t believe everything you think” (for donations ≥ $200)
  • An invitation to a “CFAR donors” party at our Bodega Bay venue in February, with drinks, lightning talks, etc (for donations ≥ $200)
  • We take you out to lunch (if geography can be navigated), try to understand how you’ve been able to do the cool things you’ve been able to do, and discuss the coolest parts of you that we can see in a Shortform LW post (that can mention you by name, or not) and an internal colloquium talk you can attend and kibitz in. (Or, we do this with a particular book that you love and recommend to us.) (for donations ≥ $5k)

Also, if there’s something in particular you’d like CFAR to be able to do, such as run workshops in a particular city or run an alumni event focusing on a particular component of rationality, and you’re considering a more substantial donation, please reach out (you can book a meeting via calendly, or email [email protected]).

To the conversation!

Thank you for your curiosity about CFAR, and for reading (at least some of) this post! I hope you introduce yourself in the comments and that – if you end up donating (or kibitzing, or attending a workshop, or getting involved in us in whatever way) – it ends up part of a thing that’s actually good for you and the contexts you care about. And that you and we learn something together.

Yours in aspiring rationality,
Anna and aCFAR

  1. ^

    'aCFAR' stands for “a Center For Applied Rationality.” We adopted the 'a' part recently, because calling ourselves 'the' Center for Applied Rationality seems obviously wrong. But feel free not to bother with the 'a' if it’s too annoying. I personally say 'a' when I feel like it.

  2. ^

    One of the best ways to get to know someone is to team up on something concrete; kibitzing on a current CFAR stuck point is my suggestion for how to try a little of that between you and aCFAR.

  3. ^

    Thanks to Davis Kingsley, John Salvatier, Paola Baca and Zvi Mowshowitz for writing help. (Particularly Davis Kingsley, who discussed practically every sentence, revised many, and made the whole thing far more readable.) Thanks to Jack Carroll for photos. Thanks to Zack Davis and Claude Code for creating the thermometer graphic up top. Remaining errors, wrong opinions, etc. are of course all mine.

  4. ^

    My mom wanted to teach her kids math, so we could be smart. And I wanted… to be like her… which meant I also wanted to teach myself/others math! :) (Rather than, say, wanting to learn math.) Rationality education gives me an even better chance to see the gears of thinking/updating.

  5. ^

This overcounts a bit: it totals the attendee counts of many different programs, and some people attended multiple programs, so the number of unique individuals who attended CFAR programs is lower than this.

  6. ^

    EA spaces were receiving large influxes of new people at the time, and I hoped CFAR workshops could help the EA and rationality communities to assimilate the large waves of new people with less dilution of what made these spaces awesome. (Lightcone has mostly taken over the “develop and spread useful vocabulary, and acculturate newcomers” role in recent years, and has done it spectacularly IMO.) 

  7. ^

Unlike some bands, we didn’t have substance abuse. But, like traveling bands, we traveled a lot to do high intensity soul-baring stuff in a context where we were often exhausted but “the show must go on.” I believe many of us, and many of our working relationships, got traveling-band-like scars. Also, we had ourselves a roster of potentially-kinda-invasive “CFAR techniques”; in hindsight some of our uses of these seem unwholesome to me. (I think these techniques are neat when used freely by an autonomous person, but are iffy at best when used to “help” a colleague stretch themselves harder for a project one is oneself invested in.)

  8. ^

There would still be many details to sort through. E.g., CFAR is aiming to be an unusually low-staff-charisma organization in which staff suggest exercises or whatever to participants in ways that’re unusually non-dizzying; Valentine’s native conversational style has a bit more charismatic oomph than we’re aiming for. But I love the idea of collaborating with Valentine on stuff about memes, PCK-seeking, what sorts of systematicity might allow decent epistemics, etc. I also like the idea of having one more person who’s been around from the beginning, and has seen both CFAR’s early generativity and our failure modes, keeping an eye out.

  9. ^

    We would also try to find other ways to make money, and tinker/brainstorm broadly.

  10. ^

    For instance, mainstream media and academia both have much less credibility and notably less money, the ACLU lost most of its vitality, many of the big organizations in EA space from 2015ish have either ceased to do much public leadership there or ceased existing altogether, and I would guess the trends in Bowling Alone have continued although I have not checked.

  11. ^

    It’s unlikely this would look like the venue generating more than its costs in direct booking revenue, but rather that the combination of booking revenue and cost savings for our own programs would exceed the costs of operating and maintaining the venue. Additionally we think the venue gives us a bunch of spirit and beauty, saves a bunch of staff time on logistics for each workshop we hold there, lets us support LARC and other groups we care about, and makes it easier for us to consider possible large expansions to our programs.

  12. ^

    There’s a lot of variability in what workshops end up looking like and there’s some reason to believe later workshops may generate more revenue, but we’re using November here as the most obvious basis for comparison.

  13. ^

A term coined by Duncan, meaning “students of our students,” which we continue to find useful in thinking about the impact of workshops and other programs.

  14. ^

    Lighthaven, the LW website, and other Lightcone-enabled social contexts are truly remarkable, IMO – one of the last bastions of general-purpose grounded truthseeking conversation on the internet. Many of you feel most at home there, and so should be sending such donations only to Lightcone. But some should perhaps put some or all of their ‘I want to support contexts that support people like me, or that support conversations I’ll feel at home near’ budget toward CFAR. Personally, I'm donating $10k to Lightcone and putting soul and work into aCFAR, and this leaves me personally feeling happier and more a part of things than if I were to skip either.

  15. ^

Briefly: we humans are local creatures and we probably create better things, that contribute more to the long run, if we let ourselves have deep local interests and loyalties (to particular lines of research, to particular friendships and communities, to particular businesses or projects we are invested in) without trying to always do the thing that would be highest-impact for a detailless agent who happens to be us, and without trying to always be ready to change our plans and investments on a dime. I admit I’m caricaturing EA a bit, but I believe the point holds sans caricature; I would very much love to discuss this point at arbitrary length in the comment thread if you’re interested.




End-of-year donation taxes 101

2025-12-30 10:16:53

Published on December 30, 2025 2:16 AM GMT

Tl;dr

  • If you’re taking the standard deduction (i.e. donating less than ~$15k), ignore all this – there are basically no tax implications for you
  • Consider how much money you want to donate to c3s specifically (as opposed to c4s, political stuff, random individuals, some foreign organizations, etc.). For money you definitely want to give to c3s, you can put it in a DAF to count it as a donation this year, then figure out where to direct it later. For non-c3 money, it doesn’t really matter when you give it

A surprisingly large number of my friends are scrambling to make donations before the end of the year, or wondering whether or not they should be scrambling to make donations before the end of the year, and feeling vaguely bad that they don't understand their tax implications.

I will quickly break down the tax implications[1] and lay out how to buy yourself way more time to decide on everything except how much you donate and how much of your donation will go to 501(c)3s vs other opportunities.

Note this post is greatly simplified. Your tax situation will depend on the state you live in and your income and maybe a bunch of other stuff. If you are considering donating more than $100k, I would strongly recommend talking to a tax professional and reaching out to people you trust to get donation advice. If you don’t know who to talk to, DM me or schedule a chat with a person I know and trust who has thought carefully about these things (but isn’t a professional) here.

Why the end-of-year rush?

You pay taxes each year. The amount you pay in taxes increases with your income. But any money you donate to a 501(c)3 in a given year is deducted from your income (unless you’re taking the standard deduction, i.e. if you’re giving less than ~$15k[2] just ignore all this). So if I make $100k but I donate $10k, I’m only taxed on $90k of income. If you’re in a high tax bracket, that means donations to 501(c)3s are effectively ~37-50% (depending on the state you’re in and your tax bracket) cheaper than other things you could spend that money on.[3] So it’s a very substantial consideration!
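
To make that arithmetic concrete, here’s a minimal sketch in Python, assuming you itemize and using an illustrative 37% combined marginal rate (your real rate depends on your bracket and state):

```python
def effective_donation_cost(donation: float, marginal_rate: float) -> float:
    """Out-of-pocket cost of a deductible 501(c)(3) donation.

    Assumes you itemize, and that the whole donation is deductible
    this year (i.e., you're under the 60%-of-AGI cap discussed below).
    marginal_rate is your combined federal + state marginal rate.
    """
    tax_saved = donation * marginal_rate
    return donation - tax_saved

# At a 37% marginal rate, a $10,000 donation "costs" you $6,300;
# at 50%, it costs $5,000 -- hence the ~37-50% discount above.
print(effective_donation_cost(10_000, 0.37))  # 6300.0
print(effective_donation_cost(10_000, 0.50))  # 5000.0
```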

But if you donate next year, you’ll still get a tax deduction, just a deduction next year rather than this year.

These deductions cap out at 60% of your adjusted gross income.[4] But you can carry any deduction above the cap forward to future years, for up to 5 years. So there typically isn’t a huge need to rush at the end of the year to avoid hitting this cap.
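
Here’s a toy sketch of how the cap and carryover interact, under simplified assumptions of my own (one big cash donation, flat income, no other deductions; the 30% cap for appreciated assets from footnote 4 is ignored):

```python
def deduction_schedule(donation, agi_by_year, cap=0.60, carryover_years=5):
    """Toy model of the 60%-of-AGI cap with 5-year carryover.

    One cash donation in year 0 is deducted against each year's AGI
    (up to cap * AGI) until it's used up or the carryover window
    closes. Returns (per-year deductions, any deduction left unused).
    """
    remaining = donation
    schedule = []
    for year, agi in enumerate(agi_by_year[: 1 + carryover_years]):
        deducted = min(remaining, cap * agi)
        schedule.append((year, deducted))
        remaining -= deducted
        if remaining <= 0:
            break
    return schedule, remaining

# Donate $150k in year 0 on a steady $100k AGI: deduct $60k now,
# $60k next year, and the final $30k the year after -- nothing lost.
print(deduction_schedule(150_000, [100_000] * 6))
# ([(0, 60000.0), (1, 60000.0), (2, 30000.0)], 0.0)
```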

If you think you’re going to hit the deduction cap for all of the next 5 years (for example because you think your income in future years will be much lower than this year, or because you plan to donate a lot and max out the deduction in future years), then you should still pay attention to the year-end deadline. The same goes if you think your tax rate will be much lower in future years (so the deduction is saving money that would be taxed at a much lower rate in future years and thus would matter less).

However, the government just cares that you've credibly set that money aside to give to something charitable, not that you’ve made up your mind about which charity to donate to. So you can just donate the money to a Donor-Advised Fund (DAF). A DAF is a 501(c)3 where you can donate your money and then they hold it (and often invest it according to your recommendations), then wait for you to advise them on where you want it to go. If your advice isn’t completely and utterly insane, they will re-donate the money to whatever 501(c)3s you ask them to. They charge a modest fee for this service. DAFs are common and a standard practice of many philanthropists. Setting up a DAF is quite easy. Here’s one website where you can do it, though there might be better ones out there. You can read a comparison of DAF providers here.

By putting your money in a DAF, you get the tax deduction this year but can procrastinate on deciding which charity to donate to/can keep the money invested to let it grow before donating.

Once your money is in a DAF, it must ultimately go to 501(c)3s. However, many good donation opportunities are not c3s. For example you might want to donate to 501(c)4s (nonprofits that engage in lobbying/political advocacy), political campaigns/PACs, a cool research project some undergrad wants to run where it wouldn’t make sense for the undergrad to go through the overhead of getting a fiscal sponsor, certain charitable organizations outside the US, the homeless person down the street, etc.

Non-c3 donations are not tax-deductible, so there’s no need to rush to make these donations either.

The only thing you might want to decide by the end of the year is how big you want your donation budget to be and how much of it you want to allocate to c3s. I think some non-c3 donation opportunities can look very promising and competitive with c3 opportunities, so the decision isn’t obvious and will depend both on your specific situation (do you have donation matching for c3s?) and details of your worldview.

A note on procrastination: often “fake” deadlines are valuable. In practice many donors suffer from FUD about where to donate and never donate at all, or delay donating long enough that the value of their donations diminishes/good opportunities have passed them by. Whether it’s better to donate now or to engage in patient philanthropy will depend on your personal beliefs about what causes are important and what interventions work. But my guess is donations now are much, much better than donations in the future. I think having a goal to donate a certain amount of money each year is often wise. But I believe in informing people about their options, so I wrote this post anyway.

  1. ^

    Obviously this whole post only applies to Americans.

  2. ^

    ~$30k if you're married and filing jointly. The exact numbers also change each year.

  3. ^

    There’s one other major benefit to donating to c3s if you hold highly appreciated assets: you can avoid capital gains tax if you donate appreciated assets to a c3 (this also works for some c4s since they’ll pay either 0% or 21% capital gains tax when they sell the asset, depending on some details; reach out to the c4 you have in mind to inquire about this). The cap gains tax could cost ~24-38% of your gains (depending on things like what state you live in), which can be quite significant.

  4. ^

     Unless you’re donating appreciated assets, in which case it’s 30%.


