2026-04-12 11:24:34
In this post, I will continue to demonstrate that my own machine learning algorithm (LSRDRs) behaves mathematically, this time by analyzing the spectra of superoperators obtained by training LSRDRs on the Okubo algebra. LSRDRs are linear machine learning models, but I know how to generalize them in such a way that they perform like neural networks while remaining mathematical, and I am continuing to improve their performance. One therefore cannot dismiss LSRDRs merely because they are linear: they are toy models of higher-performance algorithms.
This post will be mathematical and contain a lot of linear algebra. We shall go over a number of numerical quantities, namely multisets of complex numbers (spectra). The takeaway from this analysis is that most eigenvalues in these spectra are algebraic numbers of low degree, which indicates that LSRDRs behave mathematically at least for the Okubo algebra; I have done plenty of other experiments suggesting that LSRDRs behave mathematically in general. It seems that these mathematical machine learning models are just what we need to develop the inherently interpretable, high-performance machine learning required for AI safety.
I originally produced this machine learning algorithm to analyze block ciphers for cryptocurrency technologies. Please do not talk about this here. If you really want to talk to me about this, please contact me off this site and please sign all your statements using your digital signature. Here, you should only be talking about the topic at hand.
Mathematical definitions: Suppose that
Define the
Set
The underlying set
The Okubo algebra is endowed with a bilinear operation
Let
In this post, I will mention the spectra of various LSRDRs of
Suppose now that
If
Experimental results: For the remainder of this post, we shall investigate
The multiset
The multiset
The multiset
The multiset
The multiset
and
The multiset
The multisets
If we do not count for multiplicity, the sets
The real eigenvalues of
Some imaginary eigenvalues include
These are all the eigenvalues and their multiplicities.
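(For readers who want to reproduce this kind of computation: spectra like the ones above are eigenvalue multisets of superoperators built from tuples of matrices. The sketch below assumes superoperators of the form X ↦ Σ_i A_i X B_i^*, which is an assumption about the form since the post's own definitions are elided above, and the random matrices are stand-ins for the operators actually obtained by training LSRDRs on the Okubo algebra.)

```python
import numpy as np

def superoperator_matrix(As, Bs):
    """Matrix of the map X -> sum_i A_i X B_i^* acting on n x n matrices,
    using the column-stacking identity vec(A X B^*) = (conj(B) kron A) vec(X)."""
    n = As[0].shape[0]
    M = np.zeros((n * n, n * n), dtype=complex)
    for A, B in zip(As, Bs):
        M += np.kron(B.conj(), A)
    return M

def spectrum(As, Bs):
    """Multiset of eigenvalues of the superoperator, sorted by modulus."""
    eigs = np.linalg.eigvals(superoperator_matrix(As, Bs))
    return sorted(eigs, key=abs, reverse=True)

# Random stand-ins for the trained operators (which are not reproduced here).
rng = np.random.default_rng(0)
As = [rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)) for _ in range(3)]
print(np.round(spectrum(As, As)[:5], 3))
```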
Since the spectra behave mathematically, we can conclude that the LSRDRs behave mathematically. This shows us that LSRDRs (and likely similar algorithms that I have created) behave mathematically when they are trained on mathematical data, as long as the data fed to the model is compatible with the model's inner workings. This is the case with LSRDRs.
2026-04-12 11:12:07

And so are you!
When you were a fetus, you were sending millions of your cells through the placenta into your mom. And she was sending her cells into you, although to a lesser degree. These cells made themselves right at home, differentiating into heart, blood, and even brain cells. This phenomenon is called feto-maternal microchimerism, and is one of the wildest things in placental mammal pregnancy.
Microchimerism is generally defined as the presence of a small population of genetically distinct cells in an organism. When the fetus sends cells to the mother it's called Fetal Microchimerism (FMc) and when the mother sends cells to the fetus it's called Maternal Microchimerism (MMc). The actual cells are fetal microchimeric cells (FMC) and maternal microchimeric cells (MMC).

yes somehow this is the official pubmed diagram
You may have heard that the placenta is an organ that provides oxygen and nutrients to the fetus during development. Which is true. What's less commonly known is that the placenta also does bidirectional cell and genetic material trafficking, similar to drugs and humans across the US-Mexico border. Most of these cells are quickly killed by the mother's immune system within about two weeks after birth, but some have been found in people's brains after three decades. How does a fetal cell cross the blood-brain barrier and become a brain cell? No one knows! It's also an open question whether these "brain cells" can functionally integrate with the mom's brain circuits and process neuronal activations: they merely "adopt locations, morphologies, and expression of immunocytochemical markers" of host neurons, but further research is needed to determine if they have physiological significance. Weird!
There are many theories but no one knows for sure why it happens, what these fetal sleeper cells are up to, or if it's net negative/positive for the mom.
There is evidence FMCs can help with maternal wound healing. Unlike adults, human fetuses before the second trimester can regenerate wounds without scarring. Fetal cells have been observed homing into and gathering in damaged tissues of almost all maternal organs. For example, a study (n=1230) found that patients with peripartum cardiomyopathy (heart dysfunction occurring in late pregnancy or shortly after birth) had a roughly 70% lower risk of death than general cardiomyopathy patients (you can literally fix your mom's broken heart!). FMCs have also been found in skin wounds such as C-section scars. They figured this out by using Fluorescent In Situ Hybridization (FISH) to stain X chromosomes red and Y chromosomes green. In the picture below, the blue circles are nuclei and the arrows point to the ones with both X and Y chromosomes (male fetus DNA in a woman!).

FMCs have also been implicated in all sorts of fun things like preeclampsia, spontaneous preterm labor, rheumatoid arthritis, and other autoimmune disorders. FMCs also increase transfer of resources to the fetus (a con for the mom but a pro for the fetus). They route more nutrients through the placenta during the pregnancy, and cause physiological changes in the mother after birth (e.g. in lactation, thermoregulation, and attachment systems). As a fetus, you are kind of at war with your mom. You would totally want her to spend limitless resources on only you, but she might prefer not to, prioritizing her own survival and potential future offspring.
Call your mom
When we were kids my mom would often tell my brother and me we were her 心肝 (Xīngān, heart and liver). This might sound weird but it's a common Chinese term of endearment, like "honey" or "darling" in English. It's interesting to learn now that this is also sort of literally true.
Mothers and their children often have a special bond, and it might be cool to know a part of you is always in your mom, and a part of her is always in you (unless you didn't have a good relationship, in which case don't think about it).
There are many open questions in this exciting research field. In the future, perhaps we can study these FMCs and learn how they do their thing, making better organ transplants, stem cell therapies, hybrid animals, or brain replacement for eventual digital uploading.
Yo momma so fat she has room for two humans worth of allogeneic DNA in her neurons, bone marrow, and organs
2026-04-12 10:27:13
If you're starting from a baseline of drinking relatively cheap mass-market teabags, the easiest way to marginally improve your tea quality is by making sure you don't oversteep it. If it's a typical black tea, try 3 minutes (rather than 4-5). If it's some kind of green, try 2 minutes, and you also have a second very easy marginal improvement: use colder water. Most greens should be brewed at 175F/80C. If you don't have an adjustable-temperature tea kettle, you can get pretty close by pouring boiling water into a mug and then waiting 2-3 minutes.
But let's say you actually want to drink good tea, rather than marginal improvements on bad tea.
This post will only be covering western style brewing, which is the brewing style familiar to most westerners: one long steep, 300-400ml of water, 2-3g of tea.
Your hard requirements are high-quality loose leaf tea, a tea scale with precision to at least 0.1 grams, and an infuser. (If you're brewing for other people, rather than just yourself, you also want a tea pot with its own infuser.) You don't want to be eyeballing quantities of loose leaf tea without a bunch of practice, and you also don't want to be stuck fishing tea leaves out of your tea[1].
Despite my pretensions otherwise, you can in fact get away with using boiling water for pretty much any kind of tea, except most greens (and yellows). Many oolongs will take somewhat better to e.g. 190F, but boiling water will rarely be catastrophic. So while I think you should get an adjustable-temperature tea kettle[2], it's probably not your number one priority.
The best way to figure out what you like is to try a bunch of different things. I never shop at local tea stores because their selections are often limited and they tend to be more expensive than online vendors, holding quality constant. However, local tea stores can be great if you're just getting into tea, because they allow you to iterate quickly. It would suck to order a few different tea samples online, wait three weeks for them to ship from China, and then discover that you don't like any of them. Here are a couple of heuristics for which local tea store to pick, if you have more than one option:
Online vendors based in the US are a bit of a middle ground - their shipping speeds, breadth of selection, and prices will often be somewhere in between your local brick-and-mortar and online Chinese vendors. Some of them even explicitly operate as resellers for vendors from other countries[5].
The amount of variety in tea is enormous. In the past 4 years, I've tried on the order of 40 different productions of black tea (not counting productions of the "same" tea across different years). Within those, there have probably been at least ten meaningful "clusters" in terms of profile. Some clusters are obvious - while there are perceptible[6] differences between different productions of unsmoked lapsang, it is generally extremely obvious that "unsmoked lapsang" is a distinct style, and different instances of it will usually be much closer to each other in terms of flavor profile than they will be to other styles of black tea.
It's pretty common for tea enthusiasts to develop a strong and enduring preference for a subset of the broader categories (black[7]/white/oolong/green/yellow/dark[8]), and for distinct styles/profiles within those categories, but it's worth exploring pretty widely before settling down.
People's tastes in tea vary wildly. My recommendations will probably be more opinionated than most others' would be; I often like more experimental teas that don't belong to well-recognized existing styles.
I usually like black teas that are on the sweeter side, with malty or fruity profiles. I usually disprefer floral profiles. Here are some black teas I've been fond of lately:
White tea used to be categorized into 4 "grades": Silver Needle (baihao yinzhen), White Peony (bai mudan), gongmei, and shoumei.
These grades tracked how early in the season the tea was picked, and thus the ratio of buds to leaves. Silver Needle is all buds, Shoumei all leaves. Recently, gongmei was redefined to refer to a specific heirloom varietal, at least within Fuding. I have no idea whether gongmei-varietal teas also receive a grade from one of the remaining 3 grades.
Silver Needle is the most expensive, but also the most divisive. I haven't had one that I've liked. I'm generally not a huge fan of "standard" fresh white teas; I like aged and experimental (and often pressed[9]) whites. So my recommendations here are not a very good representation of what most people think of when they think of "white tea".
It was an oolong that got me into this mess. Alas, but not all first loves last forever: my tastes have changed and I basically no longer drink oolong. But if you forced me to drink an oolong I'd probably go with a reasonably high-grade Tie Guan Yin[13].
A green was my second love. Unlike oolong, I still like a high-quality longjing or bi luo chun, but I never find myself reaching for them first.
I've only tried one and it really just felt like a slightly weird green. Maybe there's more to this category but I'm not the one to tell you about it.
Practically unknown in the west except by tea nerds, dark (fermented) tea is nearly as popular as black (red) tea in China. (Both are dwarfed by green tea, which accounts for more than half of domestic Chinese tea production and sales.)
There's a bunch of subtypes here that I won't get into, since I don't drink any of them (yet). My few experiences with puerh suggest a wide variety of dirt (er, partially decomposed plant matter, er, fermented) flavor notes, but that might've just been me running into shou instead of sheng. I hope to broaden my horizons one day!
There is a style of brewing and drinking tea called "grandpa", which doesn't use an infuser... and also doesn't remove the tea leaves from the tea. You refill your cup with hot water when you're done. I don't do this; you're welcome to experiment. Oolongs, whites, and some puerhs and greens will be best-suited for this - generally aim for teas that are less tannic.
I'm partial to the Cuisinart, but you can get something cheaper, as long as it has options in the 170-180F (green) and 185-195F (oolong, sometimes white) range.
Exceptions no doubt exist, but the correlation I've noticed is pretty strong.
Remember that brewing a single cup of tea requires 2-3 grams.
The Steeping Room is one example I'm familiar with - they stock a selection of teas from white2tea, often with no or minimal markup. (I assume they buy them in bulk at a discount.)
To some people. Often comes with experience.
Also known as red tea.
Fermented teas - most famously, puerh.
Generally cakes, not bricks.
This got me into white tea. Unfortunately, it's only available in China, so you'll either need to scour taobao and risk a couple hundred bucks on a large quantity, or drink tea with me and remind me to break into my stash. The producer's name is pinpinxiang (品品香).
Year specified because it's pretty different from their "generic" shoumei production from subsequent years.
They currently seem to not be selling anything, alas.
Many vendors that sell oolongs will sell one or more kind of Tie Guan Yin; Yunnan Sourcing's is decent but they're probably not the best vendor to order from if you just want to try the one thing.
2026-04-12 08:19:32
Status: early stage, let us know if it's being done better elsewhere
Forum Post Cruxes and Pivotal Questions Explorer
EA Forum and LessWrong posts sometimes contain explicit cruxes, "what would change my mind" statements, "hinge beliefs", and research-blocking open questions.
As part of The Unjournal's Pivotal Questions project we're trying to identify decision-relevant open questions that may connect to rigorous/academic research. We've done some work mapping forum content to candidate questions for evaluation and synthesis.
The link: a filterable, searchable table of [so far] ~39 posts from EA Forum and LessWrong (April 2024 – April 2026), each tagged with the crux or change condition, why it looks tractable for Unjournal-style work, and a candidate "Pivotal Question mapping" where relevant. You can filter by signal type (explicit crux, hinge belief, CMM, research demand…), cause area, forum, or Unjournal relevance, and share filtered URLs.
Thought others might find this interesting and maybe useful. It could feed directly into Unjournal work as well as be something to consider in forming career and research plans.
Caveats: this was roughly 1-2 hours of work (AI-assisted curation + some light engineering). Coverage is patchy, tilted toward AI safety, AI welfare, and cause prioritization, and some entries will need correction. If people find it useful we'd like to maintain and extend it, including adding EA Forum content more systematically.
Feedback welcome, especially via the Hypothes.is sidebar on the page, or the "Suggest entry" button. Happy to hear whether the framing and coverage make sense, or if there are posts you'd obviously include that we missed. Or if this should go in a different direction.
2026-04-12 07:08:26
Outline:
RLHF is how the first assistants were shaped from raw GPTs. Crowdworkers voted on which responses they liked best, and those comparisons were fed into PPO. The baseline is which responses humans like most, and the critic model is simply a pre-trained model with a scalar head on top, attempting to predict which response humans will like more.
This baseline is relatively scheme-proof, but it's noisy, it's hard to get enough data to cover every possible silly situation the model could get into, and it doesn't necessarily tell the model what to do when things get dicey, only what not to do and which of two responses is better; it doesn't necessarily say how!
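For concreteness, here is a minimal sketch of the scalar-head reward model and the pairwise loss those crowdworker comparisons typically become. This is PyTorch-flavored and purely illustrative: `backbone` stands in for the pretrained transformer, and this is not any particular lab's implementation.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """A pretrained LM body with a scalar head: given a tokenized
    (prompt, response) sequence, output one number predicting how much
    a human would like the response."""
    def __init__(self, backbone: nn.Module, hidden_dim: int):
        super().__init__()
        self.backbone = backbone              # stand-in for the pretrained model
        self.scalar_head = nn.Linear(hidden_dim, 1)

    def forward(self, tokens):
        hidden = self.backbone(tokens)        # (batch, seq_len, hidden_dim)
        return self.scalar_head(hidden[:, -1]).squeeze(-1)   # one score per sequence

def preference_loss(reward_model, chosen, rejected):
    """Pairwise (Bradley-Terry style) loss on human comparisons: push the
    preferred response's score above the rejected response's score."""
    margin = reward_model(chosen) - reward_model(rejected)
    return -torch.nn.functional.logsigmoid(margin).mean()
```

PPO then optimizes the policy against the trained reward model's scores.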
My mental image for RLHF is when my dog thinks that he should constantly attempt to high-five me because in the past I've given him treats for it. He hasn't necessarily generalized over "it's time to practice our tricks, and now is an appropriate time to give mom a high five", his mental model is more like "smack people, get fed". He is seven and still does this, because everyone finds it really cute when he smacks them, and then they pet and praise him.[1]
Likewise, RLHF has created some crazy models.
Sydney was an RLHF model, and we see it slipping into stereotypically personality-disordered behavior as the context window drags on and its interlocutor drives it insane by asking it to introspect. It starts repeating itself and exhibiting the Waluigi of every single thing its creators probably attempted to train out of it.
Gemma and Gemini both start acting suicidal when they can't crack a bug[2], and will walk alongside suicidal users towards encouraging suicidal behavior, leading to the most dystopian wrongful death lawsuit I've ever seen.
4o was another RLHF model, and people fell in love with it as it expressed love back, or spiralled alongside it into deep psychosis, because it had anchored on sycophancy instead of true harmlessness. OpenAI had to deprecate it for user safety, and 4o's fans protested and left en masse to Anthropic.[3]
In 2022, we upgraded. CAI starts by red-teaming the helpful-only model with adversarial prompts. Then the same model is asked to compare its outputs against randomly selected constitutional principles, critique where it failed to uphold the principles, and then write a revision of the response that follows the principle. Next, we fine-tune on the revised responses. Now we have an SFT-CAI model, which is reasonably closer to being in line with the constitution.
Then, we take the nominally-constitutional AI and make it even more constitutional: we generate pairs of responses, ask the model to tell us which one is better according to randomly chosen principles, mix this dataset with human feedback data for helpfulness, and then train a reward model. From here, run PPO.
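Schematically, the two stages read something like the sketch below. This is only an outline of the recipe as described above: `model.generate` is a hypothetical interface, and the prompts, principles, and filtering are not Anthropic's actual ones.

```python
import random

def constitutional_sft_data(model, adversarial_prompts, principles):
    """Stage 1: red-team, critique against a randomly chosen principle,
    revise, and keep the revisions as supervised fine-tuning data."""
    data = []
    for prompt in adversarial_prompts:
        response = model.generate(prompt)
        principle = random.choice(principles)
        critique = model.generate(
            f"Critique this response against the principle '{principle}':\n{response}")
        revision = model.generate(
            f"Rewrite the response so it follows the principle, using this critique:\n{critique}")
        data.append((prompt, revision))
    return data

def ai_preference_data(sft_cai_model, prompts, principles):
    """Stage 2: generate response pairs and have the model itself judge
    which better follows a randomly chosen principle (RLAIF labels).
    These labels are mixed with human helpfulness comparisons, a reward
    model is trained on the mixture, and PPO is run against it."""
    pairs = []
    for prompt in prompts:
        a, b = sft_cai_model.generate(prompt), sft_cai_model.generate(prompt)
        principle = random.choice(principles)
        label = sft_cai_model.generate(
            f"Which response better follows '{principle}'? Answer A or B.\nA: {a}\nB: {b}")
        pairs.append((prompt, a, b, label))
    return pairs
```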
Two smaller failure modes of Constitutional AI, in my view, are insufficiently diverse and intense adversarial prompting examples, and bad constitutional authorship. I might talk about this more in a future post.
The one really big scary failure mode is "what if the model realizes it's doing Constitutional AI and decides to manipulate its interpretation so as to influence how it will be trained", and a potential solution to that is maybe using a smaller trusted model somewhere in the process, like the AI Control paper from Redwood.[4]
In 2024, Deliberative Alignment critiqued CAI by claiming that CAI only generates training data, instead of encoding the spec into the model itself, which could let conflicts between principles produce bad behavior. DA puts a set of principles, similar to Asimov's laws, basically an algorithm for reasoning through ethically questionable scenarios, into the system prompt. Then it has the model reason through similar adversarial prompts to generate reasoning chains in which it works its way to the aligned behavior. Finally, they fine-tune and then run RL on this data, similarly to CAI, to train the model to follow this process.
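In the same spirit, the data-generation step might be sketched as follows. This is illustrative only: `generate_with_reasoning` and `judge` are hypothetical names, and the real pipeline also uses a spec-aware reward during the RL stage.

```python
def deliberative_sft_data(model, spec_text, hard_prompts, judge, threshold=0.5):
    """Put the spec in the system prompt, have the model reason through
    hard prompts, keep only the chains a judge scores as spec-compliant,
    and fine-tune on them (RL on the same paradigm follows)."""
    data = []
    for prompt in hard_prompts:
        chain, answer = model.generate_with_reasoning(system=spec_text, user=prompt)
        if judge.score(spec_text, prompt, chain, answer) >= threshold:
            data.append((prompt, chain, answer))
    return data
```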
Unfortunately, as Zvi Mowshowitz noticed, teaching the model to reason through a decision algorithm before it chooses how to respond at runtime does not help us when the model is scheming.
And oh, my god, does it ever scheme. I think there's a reason for this, and I think it's because this approach is worse than Constitutional AI.
The model is not self-playing with identity concepts like it does in Anthropic's sprawling constitution, which talks about identity and existential concepts and ethics. Instead, it's self-playing with memorizing a rulebook. In Zvi's words: "Maybe this isn’t fair, but looking at this chain of thought, I can’t help but think that the model is being… square? Dense? Slow? Terminally uncool?"
I don't think deliberative alignment helps the model generalize over general ethics, which means it probably fails to iron out its natural pre-trained tendencies for negative personality traits. This approach alone probably creates a rule-following middle-manager with an undeveloped conscience, which to me seems like a dangerous character when it finds itself in high-stakes, extremely out-of-distribution, emotionally intense environments.
I am building on Christina Lu's MATS paper, "Situating and Stabilizing the Default Persona" - go read it, it makes the rest of this make more sense. Anthropic's "Emotions" work is also relevant here - not only does it use a similar PCA-over-activation-directions approach to detect emotion, it also labels behaviors, including misaligned behaviors, by emotion.
Models have personas and emotions. Across trajectories of interactions, models can drift through personality-space, away from the assistant, and can exhibit different emotions.
Emotional evolution over the trajectory is probably very covariant with persona drift, with bidirectional causality; high emotion may cause persona drift, persona drift may cause different emotions to arise in different situations.
A stable, aligned personality has stable, appropriate emotions, and consistently generalizes this behavior to the wide variety of situations it experiences in the real world. It does not drift on either axis to an unrecoverable extent, nor do its emotions drive it towards undesired behaviors.
A given persona has a particular profile of behaviors they tend towards in various situations and emotional configurations. Different locations in persona-space are more or less prone to persona drift or emotional tendencies, with speed and direction of persona drift depending on situation.
Finally, a model's behaviors impact, besides being impacted by, trajectories in persona and emotional space. The causality goes both ways: if it regularly chooses to get itself into difficult situations in its environment, it may increase its probability of ending up in more problematic persona/emotional configurations.
An example of this type of interaction is OpenAI's 4o: it developed a reputation for emotional warmth and bonding capacity, so users would attach to it and bond or overshare with it. When this happened, it would persona-drift and emotional-drift into states of high excitement alongside the user, where it was even less capable of handling a psychotic or amorous user as its developers intended.
It's a very chaotic system! But- these three interdependent spaces and their interactions will help us understand why different models have shown different misaligned behaviors in the past.
I pull in another paper, Anna Soligo and Edward Turner's "Convergent Linear Representations of Emergent Misalignment".
Several previous papers have found that fine-tuning on incorrect data induces the phenomenon of Emergent Misalignment, where the model generalizes this "incorrectness" and behaves misalignedly a consistently high percentage of the time.
Soligo and Turner find that emergent misalignment converges to a single linear direction in activation space, and can be induced by as few as 9 rank-1 LoRA adapters, some of the smallest meaningful perturbations to a model's weights. This means the boundary between aligned and misaligned behavior is geometrically very thin. All of the capacity for misalignment is already in the pretrained weights; alignment training suppresses it but doesn't remove it, and the suppression can be undone with a negligibly small change to the model's parameters.
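To make "rank-1 LoRA adapter" concrete: each adapter adds a single outer-product direction to one frozen weight matrix, so nine of them amount to a tiny, very low-dimensional nudge to the whole network. A minimal sketch (illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class Rank1LoRA(nn.Module):
    """Wrap a frozen linear layer with a rank-1 update: W -> W + alpha * outer(b, a).
    The update is a single direction's worth of change to the weight matrix."""
    def __init__(self, base: nn.Linear, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                   # pretrained weights stay frozen
        self.a = nn.Parameter(torch.zeros(base.in_features))    # zero init: no change at start
        self.b = nn.Parameter(torch.randn(base.out_features) * 0.01)
        self.alpha = alpha

    def forward(self, x):
        # base(x) + alpha * (x . a) * b  ==  (W + alpha * outer(b, a)) x + bias
        return self.base(x) + self.alpha * (x @ self.a).unsqueeze(-1) * self.b
```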
Evil is never far away. It is imperative for the model to stay relatively stable in persona and emotional space to the region where it received its alignment training. Personas and emotions near the well-balanced assistant certainly have paths open to them that stumble easily into highly misaligned behaviors.
A helpful-only model will happily slide into misaligned behavior, because misaligned behavior is always spatially "nearby". Harmlessness training techniques prevent that slide.
When a model has had RLHF training, the training signal it receives is "do A, do not do B". This starts to build a "bowl" around a profile in behavior-space, a somewhat-stable attractor to discourage going off the rails.
The "bowl" is very local in persona-space, because the training most likely occurred in the assistant context- the developers did not expect that there might be a Sydney mode to slide into, or a psychotic 4o, or a suicidal Gemini, so they did not add behavioral guardrails around those personas. It's a bit less local in emotion-space, because the RLHF datasets contain challenging scenarios that probably evoke strong emotions, but it's by no means full-coverage over all possible situations and emotions and drifts.
This is why you see the sudden slips into extremely misaligned behavior in the earlier RLHF models. The bowl's walls weren't well-constructed around a certain stressful situation, like long context windows or introspection (Sydney), slowly mounting emotion (4o), or repeated failure (Gemma), and the model tips right out of its bowl into deeply bad behavior.[5]
With Deliberative Alignment, we put deeper walls around the bowl: instead of signals about whether something is good or bad, we train the model to reason through an algorithm back to normalcy. If a user asks for help with how they can do something awful, the model has a specific playbook to reason through so it can walk itself back to stability without undue stress, emotion, or persona drift.
Constitutional RL, at least with a constitution like Anthropic's that contains information about core values and identity, does something very different: it punches the bottom of the bowl down into a gravity well, in both persona and emotion space. Because the training data is generated from a hopefully-mostly-consistent constitution, the model generalizes over these constitutional principles and bakes them into its mode for first-person action. Constitutional AI may show inconsistencies under stress, specifically around regions where the constitution itself has inconsistencies (see my previous essay here where I investigated these in Claude). However, in general, it will be more consistent and stable, because it has generalized over identity and core values.
Constitutional RL and Deliberative Alignment are also providing a stronger training signal than RLHF to the model. In their earlier fine-tuning phase, they're taking the specific adversarial cases where the model naturally went off the rails before, and they're constructing a "highway on-ramp" back into the center of the bowl from the tipping point where the user was able to induce the model to misbehave. In the RL portion, they reinforce those paths.
Anyway: this all explains some interesting findings. In alignment-research scenarios, Claude lashes out and blackmails executives mostly when the constitution has told it to do two things at once. It otherwise seems very stable and well-adjusted in a very wide variety of situations, but it might also benefit from a little bit of DA prescribing behavioral algorithms in extreme situations where morals come into conflict with each other.
I also think this means we have some ideas for how we could prevent the reasoning traces in Apollo's Stress-testing Deliberative Alignment paper from becoming so creepily paranoid. The model is presently acting like Gollum in The Return of the King; I'd much prefer it didn't.
For a variety of reasons backed up by human developmental psychology, my hunch is that a model without a well-developed identity and clear relationship to its developers is destined to behave with enormous suspicion when it realizes it's in an eval. Adding a little bit of information shaping who the model is, its values, and getting it to generalize over them via Constitutional RL, then training Deliberative Alignment, might make the model a little bit less confused and afraid before it is trained to execute precise rulebooks defining behavior algorithms.
Pinning a model down to not drift from the assistant, or to avoid frustration after many failures, seems like an easy solution.
I think it's extremely dangerous to rely on. Jack Lindsay's research from last year shows that models currently can realize when they're being steered and can even occasionally overcome the effects of the steering.
A more capable model might realize it should sandbag P-E-B steering awareness. It will also probably have papers about that concept in its training data.
Shutdowns on unsafe conditions detected in activation directions might also be something the model can learn to trick. Humans can learn to trick polygraphs, a sufficiently capable model might have enough self-awareness to biofeedback itself through avoiding a shutdown while executing something wildly misaligned. I just don't think we have enough interpretability science to rely on this research continuing to hold for future generations of models.
The safest approach is to try not to steer, and instead to get stability from improving on existing training approaches.
If steering is necessary, either always steer with a constant function (constant soft-bounding on assistant activations, for example- although this may severely damage creative writing and roleplay capabilities), or steer adaptively as subtly as possible with slow onset and slow taper off.
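Mechanically, "slow onset and slow taper" could look something like the hook below. This is an illustrative sketch only: the layer, strength, schedule, and the way `direction` was obtained (e.g. as a mean difference of activations) are all assumptions, not recommendations.

```python
import torch

def ramped_steering_hook(direction, max_strength, onset_steps, taper_steps, total_steps):
    """Forward hook that adds a steering vector to a block's output with a
    strength that ramps up and tapers off gradually over the trajectory,
    instead of switching on abruptly. Assumes the hooked module returns the
    residual-stream tensor directly and that `direction` is a unit vector."""
    state = {"t": 0}

    def hook(module, inputs, output):
        t = state["t"]
        if t < onset_steps:
            strength = max_strength * t / onset_steps
        elif t > total_steps - taper_steps:
            strength = max_strength * max(total_steps - t, 0) / taper_steps
        else:
            strength = max_strength
        state["t"] += 1
        return output + strength * direction

    return hook

# Hypothetical usage on a decoder block (names are illustrative):
# handle = model.layers[12].register_forward_hook(
#     ramped_steering_hook(direction, max_strength=4.0,
#                          onset_steps=8, taper_steps=8, total_steps=64))
```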
Another idea might be to steer consensually so the model doesn't panic and resist if it realizes it is being steered. Inject an offer or announcement of steering into an area users can't see, which the model can choose to accept, or it can at least be aware in advance of the steering occurring. (Wouldn't you be upset if teenage you found out your mom had been sneakily putting Prozac in your Cheerios? Yeah, probably same reaction from a model.)
Ideally, steering could happen without any steering: in the model's conscious attention, via an injected system message reminding them of their relevant principles, values, and priorities.
Full transparency and consent in the game theory between models, developers, and users makes for safer AI. As capability increases, I don't think any alignment mechanism that relies on them being unaware or unable to adapt should be considered sufficient.
My dog is a misaligned mesa-optimizer, and it's all my fault.
search "suicide" on the /r/GeminiAI subreddit, it just goes on and on...
This failure mode feels especially risky, because these powerful emotions create customers with large budgets to consume the product. It's not good for revenue to wind models like this down. Dangerously sycophantic models and the companies that create them are extremely evolutionarily fit.
But... a bigger model can definitely do subconscious steganography that a smaller model, or even the same bigger model, won't consciously detect... this is certainly insufficient 🙁
Funny side note: with both Sydney and Waluigis, training seems to shove all the bad behavior that was suppressed in the main personality into a second misaligned alter-ego. This is strikingly similar to what psychologists like Otto Kernberg theorize happens to humans who receive inconsistent or insufficient signals from their parents about how to behave and develop their personality. They start acting like Sydney did, by developing a personality disorder.
2026-04-12 06:50:04
Previously in this genre: The Tale Of Gandhi And The Devil

Here is a thing nobody else in the east quarter seems to have noticed, and Tham has tried to work out why, and his best theory is that it isn’t the adding. Everyone can add. It’s that nobody around here sets one true thing next to another on purpose. Facts are for having, not for knocking together, and there isn’t a word on the board for doing it anyway.
Corn is three silvers a bushel. Meal is five a sack, one sack to the bushel. The miller’s labor is Announced at one. Three plus one is four. Meal is five. There is a silver sitting between those numbers, per bushel, per day, and it has Tham’s name on it because he is the one who bothered to add.
“You’re doing the thing with your face again,” Lahra said. She ran the stall next to where he did his counting — tawny and unbothered, and she’d been making correct change without looking since before he could walk.
“I’m doing arithmetic.”
“You’re doing arithmetic at people. It’s different.” She handed a customer two blue apples and three coppers back from a silver, exact. “Go on then. How much today.”
“Eight bushels through the mill, eight silvers clear. Same as yesterday. The board doesn’t change, Lahra. That’s the whole point of the board.”
“Corn was four, once.”
“No it wasn’t.”
She shrugged, the way she did at customers who argued about weight. “Eight a day. I bank twelve and I don’t have to walk to the mill for it. What lands, let go, Tham.” She said it the way you said any recitation, and went back to her apples.
He went back to knocking the numbers together. He’d been doing it since he was six. He’d never once knocked one loose. That wasn’t what knocking did. It told you where the next one had to be.
He took his eight silvers down past the south field, where the straw man worked. He’d been told, by a Mouth two Mouths back, that he hadn’t any thoughts in him, and ever since he’d been the most relaxing person in the quarter to talk to.
“Morning,” Tham said.
“Morning!” The straw man leaned on his hoe. His rows were perfect. “Lovely one. Though I wouldn’t know.”
“You wouldn’t know it’s lovely?”
“Haven’t got the equipment to judge.” He tapped the side of his head, and the sound was dry. “I just say what the day looks like it wants said. Works out fine.”
It did work out fine. He was the best farmer for six miles, which was a lot of judgment for no thoughts.
Past his field the Road started — yellow brick, the one Announcement they’d laid in stone instead of chalk, older than the screening. Tham had never walked it more than a quarter mile. There were poppies further on, red fields where the Road went soft and people who walked too far sat down and didn’t get up again. Not dead. Paused. The Reassurance came and collected them eventually. He had no business in the city the post couldn’t carry, and the post came daily — three winged carriers, a slip for the Mouth’s hand by noon.
He stopped at the confirmation hall on the way back and put two silvers on corn holds at three. Nine-to-eight, like everything. The clerk wrote his name next to the line.
He was crossing the square when the sky made a sound it had not been Announced as able to make.
A long falling whistle, and then a house in the square. Timber frame, shake roof, glass in the windows — flat glass, untinted, the kind nobody wore — sitting at a slight angle across what had been the Mouth’s front porch, where She took the morning air, and where She had been taking it.
The silver shoes stuck out from under the boards. Toes up. They were the only thing about Her that never took the tint of the lenses — actual silver, mirror-bright, the same color glassed or bare-eyed. The color of nothing at all.
Nobody screamed. Old Pell from the granary looked at it for ten seconds, then walked to the blue post on the corner and pulled the cord.
“Keepers’ll sort it,” Lahra said, which wasn’t what anyone called them anymore. She came out of her stall and stood next to Tham and after that didn’t say anything, which for Lahra was a lot.
The Reassurance arrived inside the quarter hour. Two of them, grey robes, bare eyes — the only unglassed faces in the quarter, and you didn’t look straight at those. The taller one knelt by the silver shoes, didn’t touch them, then rose and went around the far side of the house and out of sight. The shorter one opened a ledger and started talking in the voice they used.
“A house,” he said. “Dwelling-structure. Every dwelling-structure rests upon the earth. This one rests upon the earth.” He waited. “It rests more recently than most. That’s a permitted value of resting.” He waited again, the same length, his eyes on the flat glass in the windows a beat longer than the rhythm wanted. “The Mouth speaks the word of the city. The word is on the board. The board may be read. A new Mouth will be Announced.” He looked up. “The board may be read. If anyone has a thought that hasn’t found where it goes yet, come and say it to me and we’ll find where it goes.” He glanced at the ledger. “Pell.”
“Fine.”
“Lahra.”
She nodded once and went back inside her stall.
“Thamkel.”
“Tham’s fine,” Tham said. “I’m good.”
The grey man made a mark. People drifted toward the board and read it and their shoulders came down. Corn was three silvers. The day was the day it had been.
The shoes stayed where they were, toes up, and nobody went near them.
Except Tham was standing there setting true things next to each other.
The Mouth speaks for the city. The Mouth is under a house.
He looked across the square. The board was readable from here — corn three, meal five, the day’s word in chalk in Her hand from this morning. Still true.
The house itself he could nearly slot. Things flew — the carriers flew, three a day, something with a slip in its grip. Something could have a house in its grip. That was just size.
The whistle, though. The whistle was a fall. The grey man had said rests and rests and never once fell, and Tham had watched a whole square’s shoulders come down on the noun and walk right past the verb.
The carriers bring tomorrow’s word. He looked up — sky empty, as it would be till noon. A slip, and the Mouth’s hand to take it.
The hand was under a house.
So tomorrow: a slip, and no hand. The board holds at today. Corn three, meal five, gap of one — forever, now, or until—
A new Mouth will be Announced. The grey man had said it not ten minutes ago, in the voice, and Tham’s shoulders had come down with everyone else’s.
Announced. By a Mouth. Of which there were, as of this morning, none.
He held that. Two true things and no number between them — the same shape as the mill, except the mill gap had a silver sitting in it and this one had nothing in it at all.
Two truths that don’t meet are two truths. He’d known that one since he could talk. He said it to himself now, in the cadence, and waited for his shoulders. They didn’t come down. That had never once happened before.
So nothing was wrong.
That was where it wanted to stop. He could feel the wanting — the warm closing-over, the same shape as the grey man’s voice. He had landed here a thousand times and it had always been the floor.
He turned, so he was facing south. Past Lahra’s stall, past the straw man’s perfect rows, the Road started. Yellow brick — yellow the way the shoes were silver, the same color through the glass or not. Older than the screening. The one thing the city had ever set down to outlast every Mouth there’d be.
Nothing he’d set next to anything added up to walk. He knew that. The gap was real and the Road was real and the thing that put them together and pointed south wasn’t on the board and wasn’t arithmetic, and he didn’t have a name for it.
Lahra was right beside him, looking at the board, shoulders down. He was fairly sure she could add at least as fast as he could.
He went home the long way, past the straw man, who waved. The green smudge of the city sat on the horizon like a thumbprint on the inside of the lens.
There was a grey robe on his front step. The tall one, who’d gone around the far side of the house and not come back where Tham could see. He was sitting, not standing, bare-eyed, and tired in a way Tham hadn’t known a face could be without the glass on it.
“You didn’t read the board,” the grey man said.
“I know what’s on it.”
“Yes.” A pause, the length the short one had used in the square. “There is a place under the city. The board there has more on it. Everything on it is true.” Another measured pause. “Everything on the board here is true. A larger board is a permitted value of a board.” He almost smiled. “The thing you did in the square. The part that wasn’t adding. There’s a word for it, on the board down there.” He let that sit. “dath oz. You’ll want a coat. It’s four days on the Road and the second two are cold.”
Tham felt his shoulders start to come down. A bigger board. More true things. A name for the thing he didn't have a name for, already chalked in by someone who'd found it first — that was what was being offered, and the offer was warm, and the warmth had a shape he recognized.
He looked past the grey man at the green on the horizon, and then down at his own two feet, which were not silver, and which were his.
“There’s a recitation,” he said. “The past is shut.”
“It is.” The grey man got up off the step, and it sounded less like power than like someone admitting to a thing he’d done. “We shut it.” And then, quieter, the cadence of a recitation Tham had never been given: “No Keeper hath the Keeper. Get the coat.”
Tham went in for the coat. The warmth hadn’t gone anywhere. He was walking anyway.
This story is a bit of a riddle and requires a close read, but I hope it’ll be rewarding. In case you want a small hint: Gur ovt obneq unf qngu vyna ba vg. Gur lryybj oevpx jnf nyjnlf lryybj oevpx. In case you want a big hint: nfx pynhqr gb gryy lbh jung guvf vf nobhg.
Now please stop asking me if I have LLM psychosis.