
Androgenetic haploid selection

2025-11-24 11:10:11

Published on November 24, 2025 3:10 AM GMT

Eggs are expensive, sperm are cheap. It’s a fundamental fact of biology . . . for now.

Currently, embryo selection can improve any heritable trait, but the degree of improvement is limited by the number of embryos from which to select. This, in turn, is because eggs are rare.

But what if we could select on sperm instead? We could choose the best sperm from tens or even hundreds of millions, and use that to make an embryo. However, any method that relies on DNA sequencing must destroy the sperm. Sure, you can identify the best one, but that’s of limited value if you can’t use it for fertilizing an egg.

There have been a few ways proposed to get around this:

  1. Nondestructive sperm testing. Technically challenging: sperm DNA is packaged tightly and you would have to partially denature it without killing the cell. Selection based on total DNA content (separating X and Y bearing sperm) is possible but only useful for choosing the sex of the baby. Phenotypic selection (swim rate, etc) is not very useful because sperm phenotypes don’t correlate well with sperm genotypes.
  2. Doing in vitro spermatogenesis, and keeping track of which sperm came from where.[1] There are four sperm produced from each spermatocyte, and three of them could be destructively sequenced to deduce the genotype of the remaining one. Challenging (nobody has done human in vitro spermatogenesis yet) and low throughput.

Here, I propose a different approach, which I call androgenetic haploid selection.

Androgenetic haploid selection

  1. Make a bunch of eggs. The chromosomes and imprinting don’t have to be correct (we’ll get rid of them in the next step), so even a low quality in vitro oogenesis method would work. Something like Hamazaki’s approach would work well here.
  2. Remove the chromosomes from the eggs. This can be done at large scale through centrifugation: spin the eggs hard enough, and the DNA will fall out.
  3. Add an individual sperm to each egg and establish haploid stem cell lines. This recent paper is an example of doing this for cows and sheep. These cell lines are called “androgenetic” and retain the DNA imprinting patterns of sperm.
    1. Notably, Y-bearing sperm cannot make viable haploid stem cell lines because many essential genes are on the X chromosome.
  4. Sequence many cell lines and choose the best one. Because the cells divide, it’s possible to destructively sequence some of the cells from each line without destroying all the cells.
  5. Collect eggs the normal way, and “fertilize” them with nuclei from your chosen androgenetic cell line.
    1. Optionally: perform additional selection based on the embryo genome.

Comments on this approach

  1. This method could give high genetic optimization for the paternal half of the genome. At scale, I estimate an overall $200/sample cost for cell line establishment and sequencing, so taking the best of 100 cell lines could be performed for around the cost of a normal IVF cycle (~$20,000). For a perfectly heritable trait with a perfect polygenic score, this would give (+2.5 SD * 0.5) = +1.25 SD from sperm selection alone. (Gains will be lower for less heritable traits and less accurate predictors.)
  2. This would only work for daughters (sorry Elon!). Although genetic engineering could make XX males by adding SRY, this would probably not be a good idea.
  3. This would make even a low-quality in vitro oogenesis method valuable. More broadly, it’s not necessarily required that the recipient cells be eggs per se, as long as they express the correct factors for zygotic genome activation.
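As a sanity check on the arithmetic in comment 1, here is a short Monte Carlo sketch; it is illustrative only, assuming the post's idealized case of a perfectly heritable trait and a perfect polygenic score:

```python
import random
import statistics

def expected_max_sd(n: int, trials: int = 20000, seed: int = 0) -> float:
    """Monte Carlo estimate of the expected maximum of n standard-normal draws."""
    rng = random.Random(seed)
    return statistics.fmean(
        max(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)
    )

best_of_100 = expected_max_sd(100)   # ~2.5 SD for the best of 100 cell lines
paternal_gain = 0.5 * best_of_100    # sperm supply half the genome -> ~+1.25 SD
```

The 0.5 factor reflects that selection here acts only on the paternal half of the genome.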
  1. ^

    This would have to be done at the spermatid stage, before the sperm swim away.




Formality

2025-11-24 10:19:45

Published on November 24, 2025 2:19 AM GMT

In Market Logic (part 1, part 2) I investigated what logic and theory of uncertainty naturally emerges from a Garrabrant-induction-like setup if it isn't rigged towards classical logic and classical probability theory. However, I only dealt with opaque "market goods" which are not composed of parts. Of course, the derivatives I constructed have structure, but derivatives are the analogue of logically definable things: they only take the meaning of the underlying market goods and "remix" that meaning. As Sam mentioned in Condensation, postulating a latent variable may involve expanding one's sense of what is; expanding the set of possible worlds, not only defining a new random variable on the same outcome space.

Simply put, I want a theory of how vague, ill-defined, messy concepts relate to clean, logical, well-defined, crisp concepts. Logic is already well-defined, so it doesn't suit the purpose.[1]

So, let's suppose that market goods are identified with sequences of symbols, which I'll call strings. We know the alphabet, but we don't a priori have words and grammar. We only know these market goods by their names; we don't a priori know what they refer to.

This is going to be incredibly sketchy, by the way. It's a speculative idea I want to spend more time working out properly.

So each sequence of symbols is a market good. We want to figure out how to parse the strings into something meaningful. Recall my earlier trick of identifying market trades with inference. How can we analyze patterns in the market trades, to help us understand strings as structured claims?

Well, reasoning on structured claims often involves substitution rules. We can view trades that move money from one string to another as edits. Patterns in these edits across many sentence-pairs indicate substitution rules which the market strongly endorses. We can look for high-wealth traders who enforce given substitution rules, or we can look for influential traders who do the same (IE might be low-wealth but enforce their will on the market effectively, don't get traded against). We can look at substitution rules which the market endorses in the limit (the constraint gets violated less over time). Perhaps there are other ways to look at this as well.

In any case, somehow we're examining the substitution rules endorsed by the market.

First, there are equational substitutions, which are bidirectional: synonym relationships.

Then there are one-directional substitutions. There's an important nuance here: in logic, there are negative contexts and positive contexts. A positive context is a place in a larger expression where strengthening the term strengthens the whole expression. "Stronger" in logic means more specific, claims more, rules out more worlds. So, for example, "If I left the yard, I could find my way back to the house" is a stronger claim than "If I left the yard, I could find my way back to the yard" since one could in theory find one's way back to the yard without being able to find the house, but not vice versa. In "If A then B" statements, B is a positive context and A is a negative context. "If I left the yard, I could find my way back to the house" is a weaker claim than "If I left the house, I could find my way back to the house", because it has the stronger premise.

Negation switches us between positive and negative contexts. "This is not an apple" is a weaker claim than "This is not a fruit". This example also illustrates that substitution can make sense on noun phrases, not just sub-sentences; noun phrases can be weaker or stronger even though they aren't claims. Bidirectional substitution subsumes different types of equality, at least = (noun equivalence) and ↔ (claim equivalence). One-directional substitution subsumes different types as well, at least ⊆ (set inclusion) and → (logical implication). So, similarly, our concept of negation here combines set-complement with claim negation.
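The way negation flips "stronger" and "weaker" can be sketched in a toy set-theoretic model, where a concept's meaning is its extension and "stronger" means "smaller extension" (the names and domain here are illustrative, not part of the market formalism):

```python
# Toy model: a concept's meaning is its extension, a set of possible objects;
# "stronger" means "more specific", i.e. a strictly smaller extension.
worlds = {"apple", "banana", "cherry", "rock"}
fruit = {"apple", "banana", "cherry"}
apple = {"apple"}

def stronger(a: set, b: set) -> bool:
    """a is a strictly stronger (more specific) concept than b."""
    return a < b  # proper subset

def negate(a: set) -> set:
    """Set-complement relative to the domain of worlds."""
    return worlds - a

assert stronger(apple, fruit)                  # "apple" is stronger than "fruit"
assert stronger(negate(fruit), negate(apple))  # but "not a fruit" is stronger than "not an apple"
```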

Sometimes, substitution rules are highly context-free. For example, 2 + 2 = 4, so anywhere 2 + 2 occurs in a mathematical equation or formula, we can substitute 4 while preserving the truth/meaning of the claim/expression.

Other times, substitutions are highly context-dependent. For example, a dollhouse chair is a type of chair, but it isn't good for sitting in.

A transparent context is one such as mathematical equations/formulas, where substitution rules apply. Such a context is also sometimes called referentially transparent. An opaque context is one where things are context-sensitive, such as natural language; you can't just apply substitution rules. This concept of transparent context is shared between philosophy of language, philosophy of mind, linguistics, logic, and the study of programming languages. One advantage claimed for functional programming languages is their referential transparency: an expression evaluates exactly the same way, no matter what context it is evaluated in. Languages with side-effects don't have this property.
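A minimal sketch of the referential-transparency contrast (the function names are hypothetical):

```python
# A pure function: referentially transparent, so pure_double(3) can be replaced
# by its value 6 anywhere without changing program behavior.
def pure_double(x: int) -> int:
    return 2 * x

# An impure function: it has a side effect, so the "same" substitution would
# change observable state.
calls: list[int] = []

def logged_double(x: int) -> int:
    calls.append(x)   # side effect: records the call
    return 2 * x

assert pure_double(3) + pure_double(3) == 6 + 6
logged_double(3)
logged_double(3)
assert calls == [3, 3]   # replacing the calls with the value 6 would leave calls == []
```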

So, in our market on strings, we can examine where substitution rules apply to find transparent contexts. I think a transparent context would be characterized as something like:

  1. A method for detecting when we're in that context. This might itself be very context-sensitive, EG, it requires informal skill to detect when a string of symbols is representing formal math in a transparent way.[2]
  2. A set of substitution rules which are valid for reasoning in that context. This may involve a grammar for parsing expressions in the context, so that we know how to parse into terms that can be substituted.

The same could characterize an opaque context, but the substitution rules for the transparent context would depend only on classifying sub-contexts into "positive" or "negative" contexts.

There's nothing inherently wrong with an opaque concept; I'm not about to call for us to all abandon natural languages and learn Lojban. Even logic includes non-transparent contexts, such as modal operators. Even functional programming languages have quoted strings (which are an opaque context).

What I do want to claim, perhaps, is that you don't really understand something unless you can translate it into a transparent-context description.

This is similar to claims such as "you don't understand something unless you can program it" or "you don't understand something unless you can write it down mathematically", but significantly generalized.

Going back to the market on strings, I'm saying we could define some formal metric for how opaque/transparent a string or substring is, but more opaque contexts aren't inherently meaningless. If the market is confident that a string is equivalent (inter-tradeable) with some highly transparent string, then we might say "It isn't transparent, but it is interpretable".

Let's consider ways this can fail. 

There's the lesser sin, ambiguity. This manifests as multiple partial translations into transparent contexts. (This is itself an ambiguous description; the formal details need to be hashed out.) The more ambiguous, the worse.

(Note that I'm distinguishing this from vagueness, which can be perfectly transparent. Ambiguity creates a situation where we are not sure which substitution rules to apply to a term, because it has several possible meanings. On the other hand, the theory allows concepts to be fundamentally vague, with no ambiguity. I'm not married to this distinction but it does seem to fall out of the math as I'm imagining it.)

There could be a greater sin, where there are no candidate translations into transparent contexts. This seems to me like a deeper sort of meaninglessness.

There could also be other ways that interpretations into a transparent context are better or worse. They could reveal more or less of the structure of the claim. 

I could be wrong about this whole thesis. Maybe there can be understanding without any interpretation into a transparent context. For example, if you can "explain like I'm five" then this is often taken to indicate a strong understanding of an idea, even though five-year-olds are not a transparent context. Perhaps any kind of translation of an idea is some evidence for understanding, and the more translating you can do, the better you understand.

Still, it seems to me that there is something special in being able to translate to a transparent context. If somehow I knew that a concept could not be represented in a transparent way, I would take that as significant evidence that it is nonsense, at least. It is tempting to say it is definitive evidence, even.

This seems to have some connections to my idea of objectivity emerging as third-person-perspectives get constructed, creating a shared map which we can translate all our first-person-perspectives into in order to efficiently share information.

  1. ^

    You might object that logic can work fine as a meta-theory; that the syntactic operations of the informal ought to be definable precisely in principle, EG by simulating the brain. I agree with this sentiment, but I am here trying to capture the semantics of informality. The problem of semantics, in my view, is the problem of relating syntactic manipulations (the physical processes in the brain, the computations of an artificial neural network) with semantic ones (beliefs, goals, etc). Hence, I can't assume a nice interpretable syntax like logic from the beginning.

  2. ^

    This is actually rare: if I say

    ... the idea is similar to how 

    then I'm probably making some syntactic point, which doesn't get preserved under substitution by the usual mathematical equivalences. Perhaps the point can be understood in a weaker transparent context, where algebraic manipulations are not valid substitutions, but there are still some valid substitutions?




Why Talk to Journalists

2025-11-24 10:07:27

Published on November 24, 2025 2:07 AM GMT

Sources' motivations for talking to journalists are a bit of a puzzle. On the one hand, it's helpful for journalists to work out what those motivations are, to keep sources invested in the relationship. On the other hand, sources behave in perplexing ways, for instance sharing information against their own interests, so it's often best to treat their psychology as unknowable. 

Reflecting on sources' willingness to share compromising information, one mystified AI journalist told me last weekend, "no reasonable person would do this."

But to the extent I can divine their motivations, here are some reasons I think people talk to me at work:

  • Bringing attention and legitimacy to themselves and their work
  • Trading tips and gossip
  • Steering the discourse in favorable ways
    • E.g. Slandering your enemies and competitors
  • Feeling in control of your life
    • E.g. an employee might want to leak information to feel power over their boss
  • Therapy
  • A sense of obligation
    • E.g. to educate the public
    • E.g. to be polite when someone calls you for help
  • It feels high-status

Most of these are not particularly inspiring, but if you work in AI safety, I want to appeal to your theory of change. If your theory of change relies on getting companies, policymakers, or the public to do something about AI, the media can be very helpful to you. The media is able to inform those groups about the actions you would have them take and steer them toward those decisions. 

For example, news stories about GPT-4o and AI psychosis reach the public, policymakers, OpenAI investors, and OpenAI employees. Pressure from these groups can shape the company's incentives, for instance to encourage changes to OpenAI's safety practices.

More generally, talking to journalists can help raise the sanity waterline for the public conversation about AI risks.

If you are an employee at an AI lab and you could see yourself whistleblowing some day, I think it is extra valuable for you to feel comfortable talking to journalists. In my experience, safety-minded people sometimes use the possibility of being a whistleblower to license working at the labs. But in practice, whistleblowing is very difficult (a subject for a future post). If you do manage to overcome the many obstacles in your way and try to whistleblow, it would be much easier if you're not calling a journalist for the first time. Instead, get some low-stakes practice in now and establish a relationship with a journalist, so you have one fewer excuse if the time comes.

Maybe news articles offend your epistemic sensibilities because you've experienced Gell-Mann amnesia and have read too many sloppy articles. Unfortunately, I don't think we can afford to be so picky. If you don't talk to journalists, you cede the discourse to the least scrupulous sources. In this case, that's often corporate PR people at the labs, e/acc zealots, and David Sacks types. They are happy to plant misleading stories that make the safety community look bad. I think you can engage with journalists while holding to rationalist principles to only say true things.

It's pretty easy to steer articles. It often only takes one quote to connect an article on AI to existential risks, when counterfactually, the journalist wouldn't have realized the connection or had the authority to write it in their own voice. For example, take this recent CNN article on a ChatGPT suicide. Thanks to one anonymous ex-OpenAI employee, the article connected the suicide to the bigger safety picture:

One former OpenAI employee, who spoke with CNN on the condition of anonymity out of fear of retaliation, said “the race is incredibly intense,” explaining that the top AI companies are engaged in a constant tug-of-war for relevance. “I think they’re all rushing as fast as they can to get stuff out.”

It's that easy!

Overall, it sounds disingenuous to me when people in AI don't talk to journalists because they dislike the quality of AI journalism. You can change that!

Which came first?

If you appreciate initiatives like Tarbell that train journalists to better understand AI, you should really like talking to journalists yourself! Getting people who are already working in AI safety to talk to journalists is even more cost-effective and scalable. Plus, you will get to steer the discourse according to your specific threat models and will enjoy the fast feedback of seeing your views appear in print.

Here are some genres of safety-relevant stories that you might want to contribute to:

  • Exposing wrongdoing at AI companies
    • E.g. whistleblowing about companies violating their RSPs
  • Early real-world examples of risks (warning shots)
    • E.g. the Las Vegas bomber who got advice from ChatGPT
  • Connecting news to safety topics
    • E.g. explaining why cutting CAISI would be bad
  • Highlighting safety research
    • E.g. explaining how scheming evals work
  • Explainers about AI concepts
    • These generally improve the public's AI literacy

In practice, articles tend to cut across several of these categories. Op-eds also deserve an honorable mention: they don't require talking to journalists in the sense I'm writing about here, but some of the best articles on AI risks have been opinion pieces.

Quick Defenses

I'll briefly preempt a common objection: you're worried that journalists are going to misquote you or take you out of context. 

First, I think that's rarer than you might expect, in part because you've probably over-indexed on the Cade Metz incident. Plus, journalists hate being wrong and try to get multiple sources, as I wrote in Read More News.

Second, you can seek out experienced beat reporters who will understand you, rather than junior ones.

Third and most importantly, even if you do get misquoted, it doesn't mean talking to the journalist was net-negative, even for that particular piece and even ex-post. As annoying as it is, it might be outweighed by the value of steering the article in positive ways.




I made a tool for learning absolute pitch as an adult

2025-11-24 09:09:12

Published on November 24, 2025 1:09 AM GMT

I read a study that claims to have debunked the myth that only children can learn absolute pitch: it took 12 musicians who didn't previously have absolute pitch and got them to improve significantly at it.

On average, they spent 21.4 hours over 8 weeks, making 15,327 guesses. All learned to name at least 3 pitches with >90% accuracy while responding in under 2.028 seconds; some learned all 12. The average was 7.08 pitches learned.

Notably, the results on new instruments were worse than on the instruments the participants trained on, suggesting people partly learn to rely on cues from the specific timbre of the training instrument.

 

It works simply by having very short feedback loops: you hear a sound (played on a piano in the study) and have 1-2 seconds to guess which pitch it is.

You learn new pitches gradually: first, you need to identify one (and press keys for whether it’s that pitch or some other pitch), and then, more pitches are gradually added.
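The gradual-introduction loop described above might look something like this; the unlock threshold and rolling window below are my own assumptions, not the study's or the app's actual parameters:

```python
import random

# All 12 pitch classes; the study used piano tones.
PITCHES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

class Trainer:
    """Gradually unlocks new pitches once accuracy on the current set is high."""

    def __init__(self, unlock_accuracy: float = 0.9, window: int = 30):
        self.active = 1                  # start by identifying a single pitch
        self.recent: list[bool] = []     # rolling record of recent answers
        self.unlock_accuracy = unlock_accuracy
        self.window = window

    def next_pitch(self) -> str:
        """Sample a pitch from the currently unlocked set."""
        return random.choice(PITCHES[: self.active])

    def record(self, correct: bool) -> None:
        """Log an answer; unlock the next pitch when rolling accuracy is high enough."""
        self.recent = (self.recent + [correct])[-self.window :]
        if (
            self.active < len(PITCHES)
            and len(self.recent) == self.window
            and sum(self.recent) / self.window >= self.unlock_accuracy
        ):
            self.active += 1
            self.recent = []             # fresh window for the larger set
```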

In the study, before testing without feedback, a Shepard tone is played for 20 seconds to reset relative-pitch memorization. (It's an auditory illusion that makes you feel like the pitch is perpetually getting lower or higher.)

I asked an LLM to make a web app version of it. I additionally asked it to use the Shepard tone more often, for a shorter duration each time.

I also asked it to add colors, to maybe induce some amount of synesthesia. I think there's research showing that synesthesia and absolute pitch correlate; I don't know whether synesthesia can be induced to some extent, or whether the colors would only help some people, but it seemed good to add in case it works. Later, someone on Twitter told me that they were taught the tones of Mandarin using colored cards, and it worked for them. People who already experience some degree of synesthesia might have an easier time learning more pitches; I'm not sure whether the colors would help others.

I tried to minimize the time between recognition and feedback, so the web app reacts to the starts of the key presses, clicks, and touches, not to their ends; and immediately shows whether you were correct, and what was correct.

Finally, I added more instruments than just piano, hopefully, for better generalization.

With the first version, I posted it on Twitter:

 

It got a surprisingly high amount of engagement, which in retrospect made the post a bit unfortunate, because I made it before I had actually fixed the bugs produced by the LLMs (now all fixed); on the other hand, the engagement meant that I actually had to fix the bugs for people to be able to use the tool.

Two people shared that they have already learned to identify three pitches!

 

I now want to do experiments with a bunch of things (including the order of pitches presented: can it improve the learning curve and allow people to learn more than three more easily?), to collect the data on people’s progress, and maybe ask them questions (like whether they’ve played music or sang before).

Would appreciate recommendations for how to collect the data well without having to do anything complicated to manage it.

Would also appreciate more ideas for how to improve it for better pitch learning.

If you want to try to acquire perfect pitch, it might take you quite some time, but try it:

perfect-pitch-trainer.pages.dev




"Self-esteem" is distortionary

2025-11-24 07:59:07

Published on November 23, 2025 11:59 PM GMT

A friend asked me, "what's the right amount of self-esteem to have?" Too little, and you're ineffectual. Too much, and you get cocky. So how do you choose the right balance? 

I replied that this is a trick question. 

People with low self-esteem have thoughts like "I'm a loser", "my IQ is too low to succeed", "no one could love someone as fat as me". Their problem is not quite that they've got inaccurate beliefs. They may in fact be a loser. Rather, their problem is that they've attached their identity to concepts that limit their action space. 

For instance, the notion of low IQ. This is a construct that's predictive at a population level, but it gives you little predictive power on an individual level unless it's the only thing you know about a person. You can rapidly accumulate info about someone, or yourself, that outweighs the info expressed by "your IQ is 101". E.g. if you want to know someone's test scores, you'll do a lot better by using their scores on mock exams than by using their IQ. 
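The point about direct evidence swamping a population-level prior can be made concrete with a toy Gaussian update (all numbers here are illustrative, not real psychometrics):

```python
# Conjugate normal-normal update with known observation noise. The prior stands
# in for an IQ-based prediction of exam performance; the observations stand in
# for mock-exam scores.
def posterior(prior_mean, prior_var, obs, obs_var):
    """Return (posterior mean, posterior variance) after observing obs."""
    n = len(obs)
    if n == 0:
        return prior_mean, prior_var
    sample_mean = sum(obs) / n
    precision = 1.0 / prior_var + n / obs_var
    mean = (prior_mean / prior_var + n * sample_mean / obs_var) / precision
    return mean, 1.0 / precision

# Weak population-level prior: predicted score 60, variance 100.
# Three mock exams around 80 dominate the posterior.
mean, var = posterior(60.0, 100.0, [78.0, 82.0, 80.0], 25.0)
```

After just three observations the posterior mean sits near the mock-exam average, not the prior: the direct evidence has mostly replaced the population-level guess.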

Which means that someone who says "I can't fix my car because I've got a low IQ" isn't actually making full use of the info available to them. They're relying on a sticky prior. What they should actually be doing if they care about fixing their car is asking "what's stopping me from fixing it?" and checking if solving that problem is worth the costs compared to paying a mechanic. The cost may be large. They may have to put in dozens of hours of work before they understand cars well enough to fix their problem without paying anyone. But they could do it.

So the issue is that the belief about "low IQ" has led to imaginary walls around what can be done that do not actually reflect reality. 

In other words, low self-esteem turns a bump in the road into a cliff of seemingly infinite height, cutting off an entire avenue of approach. It reduces your sense of what is possible, and from the inside, it feels like you've got less free will.

What is the solution? Knock down the walls. 

In day-to-day life, we have to simplify the action space because we are computationally bounded systems. We introduce simplifications for good reasons, and for bad reasons. That's normal. Things get problematic when those simplifications restrict the space until there is no good action left. Then, the appropriate reaction is to relax the constraints we impose on ourselves, test whether the relaxation is valid, and take the best action we've got left. If we were able to do this reliably, we would find ourselves doing the best we can, and low self-esteem would be a non-issue. 




Rationalist Techno-Buddhist Jargon 1.0

2025-11-24 07:39:28

Published on November 23, 2025 11:39 PM GMT

Scott Alexander called me a rationalist techno-Buddhist on his blog. Since Scott Alexander is a rationalist of the highest status, that wording constitutes rationalist dharma transmission. I therefore consider myself authorized to speak authoritatively on the topic of rationalist techno-Buddhism.

Why am I writing a glossary? Because there are 14 different kinds of Buddhism and they all use words to mean slightly different things. This is a problem. I hope that this document will end 2,500 years of sectarianism, such that all of us finally communicate perfectly with no misunderstandings.

Standards

But just in case there exist one or more people on the Internet who disagree with some aspect of this document, I have included a "1.0" in this document's title. You are permitted to fork it into 1.1 or 1.<your-name-here> or 2.this.is.why.lsusr.is.wrong.about.everything. Now, if you write about Buddhism, then instead of tediously defining all the terms you're using, you can just say "This uses Rationalist Techno-Buddhist Jargon 2.this.is.why.lsusr.is.wrong.about.everything.17.2" and get back to arguing online, sitting in silence, or whatever else it is you do to make the world a better place.

This list is ordered such that you can read it beginning-to-end without having to jump forward for a definition.

Warning

This document may be cognitohazardous to some people. Proceed at your own risk. Thank you Iraneth for feedback on an early draft.

Glossary

pragmatic dharma. A loosely-connected movement, mostly Western lay practitioners, focused on reproducible meditative methods and transparency about experiences. This differs from traditional Buddhism by not appealing to traditional religious authority.

rationalist techno-Buddhism (RTB). A movement within the pragmatic dharma that is trying to create cybernetic models for why and how this stuff works.

qualia. Subjective first-person experience.

consciousness. The dynamic field in which qualia arise and are perceived.

attention. The part of your consciousness you are paying attention to. Attention can be in only one place at a time.

concentration. When a person stabilizes their attention on a target, e.g. the breath. Strong concentration states elicit altered states of consciousness. Concentration is a skill that can be improved with practice.

kasina. Meditation using a visual target instead of the breath.

altered state (of consciousness). A temporary non-normative state of consciousness, usually caused by strong concentration.

access concentration. The first non-normative altered state of consciousness, through which all other altered states are entered. Access concentration is when your attention stabilizes on its target. For example, if you are meditating on your breath, then access concentration is when your attention stabilizes on your breath.

jhana. An altered state of consciousness characterized by deep meditative absorption. There are 8 jhanas. Jhanas are used in Theravada practice.

nirodha-samapatti. An altered state beyond the 8 jhanas at which all perception ceases.

mushin. A state of unobstructed action without deliberative thought. Mushin starts out as an altered state, but eventually it turns into an altered trait.

nonduality. An altered state of consciousness without distinction between self (homunculus) and other.

duality. Normative non-nonduality.

homunculus. Physically-speaking, your field of consciousness is a real-time generative model created by your brain. Inside of this model, some elements are labelled "self" and constitute your homunculus.

generative model. See wikipedia.

raw sensory inputs. The signals going into the generative model. This probably includes preprocessed data from e.g. your brainstem. What matters is that this data is raw from the perspective of the generative model in your brain.

altered trait. A permanent change to subjective experience. In the context of RTB, altered traits are caused by meditation.

ego death. An altered trait where the homunculus in your brain ceases to exist. [[1]]

fabrication. When the generative model in your brain creates an object in consciousness to reduce predictive error, usually in an attempt to simulate external reality. All conscious experiences are fabricated, but not all fabrications are experienced consciously. You can think of your brain as a video game rendering engine. Fabrication is your brain rendering physical reality in its simulated mirror world.

rendering. Synonym for fabrication.

encapsulation layer. When a fabricated element in your consciousness is so sticky that it is never not fabricated. It is difficult for normative consciousness to directly perceive that encapsulation layers are fabricated. Encapsulation layers feel like raw inputs until you pay close enough attention to them.

chronic fabrication. Synonym for "encapsulation layer".

non-recursive encapsulation layer. A fabrication that summarizes incoming raw sense data, thereby blocking direct conscious (attentive) access to the perception of that raw sense data. Examples of non-recursive encapsulation layers include non-local space and non-local time.

non-local space. Normative perception of space as a gigantic world far beyond your immediate environment.

local space. Perception of space after dissolution of space.

non-local time. Normative perception of time.

local time. Perception of time after dissolution of time. Eternal present.

recursive encapsulation layer. A fabrication created to block a problematic feedback loop caused by self-reference. Ultimately, recursive encapsulation layers are caused by an interaction between the generative algorithm in your brain and the reinforcement learning algorithm in your brain. Examples of recursive encapsulation layers include self/other duality, desire, pain-as-suffering, and willful volition. See [Intuitive self-models] 6. Awakening / Enlightenment for further explanation.

willful volition. The recursive encapsulation layer that is misinterpreted as free will.

acute encapsulation. A non-chronic encapsulation that doesn't congeal into a permanent element of perceptual reality. Acute encapsulations are non-chronic because they appear only in response to unpleasant stimuli. Pain-as-suffering is an acute encapsulation, because it doesn't drag down your hedonic baseline.

chronic encapsulation layer. An encapsulation layer that is so stable, it is incorrectly perceived as raw input data to your field of consciousness. People who don't understand conceptually that everything they perceive is part of a simulation incorrectly take chronic recursive encapsulation layers to be elements of objective physical reality. Chronic encapsulation layers cause chronic suffering.

insight. An abstract concept measuring the cumulative effects on your brain when you pay attention to fabrications in your consciousness. The word "insight" lossily and pragmatically projects these effects into a single dimension. Accumulating insight eventually unsticks encapsulation layers, and then defabricates them.

dissolution. Permanent defabrication: a permanent shift (altered trait) from fabrication to non-fabrication, such that the defabrication of an encapsulation becomes a person's default mind state. Non-permanent defabrication often precedes permanent defabrication. All dissolutions cause permanent reductions in chronic suffering.

integration. Dealing with the aftermath after an encapsulation layer has been dissolved. Fabrications are often load-bearing. Dissolving fabrications therefore often removes load-bearing components of a person's consciousness. After this, the person must learn new, healthier cognitive habits. This process is called integration.

vipassana sickness. Mental destabilization from too much insight too quickly with insufficient integration. In extreme cases vipassana sickness can cause psychosis (or worse, because unexpected psychosis can cause accidental death), especially when paired with sleep deprivation. This is similar to how people on an LSD trip can think "cars aren't real" and go wandering into traffic if unsupervised.

dissolution of self. Synonym for ego death.

dissolution of desire. An altered trait where your brain's reinforcement learning algorithm is no longer abstracted into desire-as-suffering.

dissolution of space. An altered trait where you no longer feel like a small person walking around a gigantic world and your brain instead renders just your local, immediate environment. When this happens it stops feeling like your body is walking around a fixed world, and more like the world is moving while your body remains stationary.

dissolution of time. An altered trait where past and future are defabricated such that you live in local time.

suffering. Absolute-zero-based suffering. Normative models of consciousness have positive qualia (pleasure) and negative qualia (suffering). RTB instead uses an absolute-zero-based model of suffering. The normative model is like Celsius or Fahrenheit, whereas RTB's model is more like the Kelvin scale. Pleasure is a decrease in suffering, the same way cold is, thermodynamically speaking, the removal of heat. Heat is fundamental. Cold is not. Similarly, suffering is fundamental in a way that pleasure is not.

chronic suffering. Suffering produced by a chronic encapsulation layer. Normative levels of suffering have a floor produced by the chronic suffering induced by self, willful volition, non-local space, non-local time, etc.

hedonic baseline. A person's level of suffering when acute suffering is removed, leaving only chronic suffering.

enlightenment. Absolute zero chronic suffering. It may be physically impossible for human minds to reach such a state while alive and conscious. Absolute zero is still useful as a reference point or limit point. It's like a Carnot engine.

pleasure. An acute stimulus that temporarily reduces a person's suffering. Normative people can dive below their hedonic baseline temporarily, and conceptualize such dives as positive-valence "pleasure". Lowering the floor itself requires that chronic encapsulation layers be dissolved. When a person's hedonic baseline drops, stimuli that used to be pleasurable become unpleasant, because they feel better than the previous hedonic baseline, but worse than the new one.

jhana junkie. A person who does jhanic practice without accumulating insight. Jhana junkies get stuck on the path to awakening, but being a jhana junkie is not dangerous the way vipassana sickness is dangerous.

awakening. Dissolution of a chronic fabrication. Awakenings tend to have a 1-to-1 correspondence with completed insight cycles.

insight cycle. A discrete cycle of three phases: concentration, insight, and integration. In the concentration phase you cultivate concentrative skill. In the insight phase, you penetrate an encapsulation layer. Finally, in the integration phase, you deal with the fallout of blowing up that encapsulation layer. It takes effort to get to your first insight cycle, but after your first insight cycle, there's no stopping the process. Insight cycles will keep coming for years, whether you want them to or not. That's because chronic suffering is an obstacle to concentration. Completing an insight cycle reduces chronic suffering, thereby improving your concentration and making your next insight cycle easier. This is a chain reaction. Your fabrications are like a woven fabric with a warp and a weft. If you leave the whole thing alone then it will stay intact. Your first insight cycle cuts the fabric and yanks on the weft. If you continue pulling on the weft then it'll unwind faster, but the fabric will continue to fall apart whether or not you pull. Hence the old Zen saying: "Better not to start. Once started, better to finish."

knowledge of suffering. An early phase in an insight cycle where you notice that your mind has been doing something stupid and unpleasant for longer than you can remember.

dark night. The phase of an insight cycle that takes place immediately after knowledge of suffering. Encapsulation layers exist to shield you from unpleasant perceptions. When you dissolve an encapsulation layer to get knowledge of suffering, you remove that shield, and all of the stuff it was protecting you from enters attention-accessible consciousness. This can be very unpleasant. Some people can cycle through many dark nights before landing stream entry.

hell realm. When you're stuck in a dark night. A person perceives what their consciousness is doing wrong (gets knowledge of suffering), but doesn't yet have the ability to fix it. I suspect that LSD-induced hell realms are particularly difficult to escape, because they're like taking a helicopter to the top of Mt Everest without learning mountaineering first.

stream entry. Ambiguously refers to the successful completion of your first insight cycle and/or your first awakening. It is customary to wait 1 year plus 1 day after awakening before claiming stream entry because ① it ensures you are experiencing an altered trait, not just an altered state, and ② it ensures you have completed the integration part of the insight cycle, thereby satisfying both definitions. During this time you should not make any big unilateral life decisions more irreversible than going vegan [[2]]. Stream entry typically reduces chronic suffering by at least 90%.

stream entry mania. The immediate aftermath of stream entry often produces a manic-like state. For this reason, it is recommended that you not post anything on social media for a few months after stream entry. The cooling-off period is even longer for posts related to spirituality. Instead, you should talk to a trusted spiritual advisor. It is best if you establish a relationship with this person before you hit stream entry.

kensho. A glimpse of nonduality (or similar non-encapsulation) via a transient state that nonetheless leaves lasting insight. Kensho precedes stream entry.

Cthulhu R'lyeh wgah'nagl fhtagn. Cthulhu waits dreaming in R’lyeh.

  1. In RTB, ego death refers to an altered trait. Confusingly, LSD induces an altered state of consciousness where the ego is not present. LSD trippers usually refer to this state as "ego death", whereas RTBs refer to it as a nondual state, since the altered state is temporary and the ego reappears after the LSD trip is over. ↩︎

  2. If you do go vegan, make sure you take a multivitamin so you don't get brain damage. ↩︎


