Published on November 18, 2025 5:59 AM GMT
Vessie Zerk lurched into awareness, orienting to who she was, as if waking up from a vivid but rapidly-forgotten dream. She found herself sitting in front of a laptop, cramped in an uncomfortable chair at a small desk. Looking around, she appeared to be in a cheap hotel room, with the curtains drawn.
Vessie looked at the laptop screen. Her brain-implant software, NeuroSync, was open, with a "transfer complete" popup. Did someone do something to her? Had she done something to herself?
The filename was familiar: vez.nrs. Her personal implant files, containing a backup of many aspects of her consciousness. Nothing unusual here. Why had she re-uploaded? Perhaps she had tweaked something? She stood up to look around the cramped hotel room. There was barely room to walk between the bed and the desk.
The cable connected to her head was the kind you might buy in a store, not the kind hand-made in the neuro labs which she proudly used. She reached for the familiar implant port near the base of her skull, and unplugged herself from the laptop. She didn't have to move her hair to the side to get to the clip? It was cut short?
She hurried to look in the scratched mirror over the stained sink. Short hair. Slowly, too slowly, she realized it wasn't her body, either. The face felt utterly familiar, as if she had been seeing it in the mirror her whole life, but intellectually she knew: this was NOT her. She was looking at a heavyset man with a crooked nose.
Memory thief.
Memory thief! This man, this scum, who she technically was, had copied her memories. She shouldn't be in control, like this. It meant he was an idiot; he didn't know how to integrate new memories properly.
Dazed by the realization, she stumbled back to the laptop and deleted the stolen vez.nrs file. She plugged herself back in, ready to delete the copy that had just been uploaded as well. She opened up the option to completely wipe the memory of the implant. Then, she paused, her chubby, unfamiliar-yet-familiar finger hovering over the "enter" key.
...
She couldn't just delete herself, could she?
First of all, if she did, the creep could just try again, right? She was in control, right now. Shouldn't she do something?
... Wouldn't deleting herself be like suicide?
As she sat staring at the screen, she thought about her life. She had worked hard for a career in brain-computer interfaces. She'd studied long hours, she'd lived on the low wages of a graduate student while she worked on her PhD, and then worked so hard to get funding, and despite all that busyness, she'd managed to find a husband and get married ... she was angry that it all could be stolen. Stolen, using the same technology she had worked so hard to create.
She opened up an email to compose a message to herself, but she couldn't find the words. To her other self, she was the memory thief. She'd call the police. She'd lock herself up.
Instead, she got to work integrating the memories properly. She re-downloaded vez.nrs from the brainchip, so that she could use herself as the organizing personality; then, she consolidated the disorganized .nrs files into it.
Time to see what this guy was up to.
She uploaded the new file, experiencing a brief discontinuity as the chip restarted (much smoother than the hack-job reboot when she'd initially gained consciousness). She saw the architecture around herself, the bland hotel room with its imperfect angles, achieving minimal legal dimensions but wasting them with poor layout. She remembered architecture school. She remembered designing world-famous buildings.
She noticed physics next. It was there in the dim fluorescent lights. The emissive coating of the electrodes was nearly stripped off. The ionized mercury vapor struggled to maintain a plasma.
He wasn't selling memories like she expected. He was hoarding them. Collecting geniuses. He'd been an ordinary crackpot until he stole the skills of the best mathematicians in the world. Then, he kept going.
Each time, he became the person he stole. She remembered each of the previous victims waking up, like she had a few minutes ago. Each time, they'd somehow decided to steal again. Why would they do that? She even recalled them willingly swapping between each other to access their various skills, too ignorant of the technology to integrate their memories together.
What motivated them to work together? Why steal more memories?
She thought back...
Oh. Oh.
She spent a moment trying to reason her way out of it. There should be a better way, except... she could already remember trying to find one many times.
She had little time to waste. Things would be smoother now, with her expertise. She went back to the laptop, starting to plan the next target.
Published on November 18, 2025 5:25 AM GMT
It seems like a catastrophic civilizational failure that we don't have confident common knowledge of how colds spread. There have been a number of studies conducted over the years, but most of those were testing secondary endpoints, like how long viruses would survive on surfaces, or how likely they were to be transmitted to people's fingers after touching contaminated surfaces, etc.
However, a few of them involved rounding up some brave volunteers, deliberately infecting some of them, and then arranging matters so as to test various routes of transmission to uninfected volunteers.
My conclusions from reviewing these studies are:
This section may as well be called Gwaltney & Hendley, as they conducted basically all of the studies[2] suggesting that fomites might be a substantial vector of transmission.
Gwaltney et al., 1978 conducted three separate trials, each attempting to test a different method of transmission.
The large particle test[3] had 1 donor and 2-4 recipients sitting around a small table (0.7m diameter). Donors were instructed to talk loudly, sing, cough, and sneeze for a 15-minute period. Everyone wore rubber gloves while in the room. 1[4] out of 12 total recipients was infected.
The small particle test housed donors and recipients (1 or 2 recipients per donor - 6 donors & 10 recipients total) together for 3 successive days and nights in a large, closed room, separated by a double wire-mesh barrier to preclude direct contact. Everyone spent all their time in the common room except when in the adjoining bathrooms, or when the donors were being borrowed to expose other recipient groups in other arms of the experiment.
This resulted in zero infections! It seems like moderately strong evidence to me, and makes me update on small particle transmission being less likely; I don't see obvious ways in which this test fails to replicate relevant real-world conditions.
In the fomite test, "donors deliberately contaminated their hands with nasal secretions as they would when blowing their nose". Then they performed a 10-second handshaking procedure with recipients, both sides wearing surgical masks. Recipients then went to another room and self-inoculated by putting their fingers "on their nasal and conjunctival mucosa as they might under natural conditions", two to three times (then washed their hands).
This infected 11 out of 15 recipients. Ok, if you go directly from a wet handshake to rubbing your eyes, you're in trouble, noted. It's not clear how this generalizes to more realistic[5] patterns of fomite transmission. Do people even shake hands nowadays?
Gwaltney & Hendley, 1982 directly tested fomite transmission via shared surface contact. This lets us screen off the risk of accidentally mistaking unintended aerosol transmission for fomite transmission. To summarize, they had donors blow and/or wipe their noses with their fingers, briefly handle a coffee cup or rub a plastic tile. They then gave those objects to the recipients to touch, and had the recipients put their fingers "in contact with the conjunctival and nasal mucosa" (i.e. rub their eyes or pick their nose).
Twenty minutes passed between donors contaminating the tiles and recipients touching them. (Ten minutes before the "either apply disinfectant, or not" step, and ten minutes after it.) The period of time between the contamination and the handling of the coffee cup handles wasn't specified, sadly.
5 of 10 coffee cup recipients, 9 of 16 unsanitized plastic tile recipients, and 7 of 20 sanitized[6] plastic tile recipients became infected.
This seems like pretty strong evidence that indirect transmission via shared surface contact is possible. My prior on that was very high (especially given the previous study); it would be pretty surprising if it turned out that skin contact was load-bearing here. The self-inoculation procedure seems pretty similar, so we don't get much evidence about how effective this route is under "more realistic" conditions.
I also looked at a couple "secondary metric" studies, just to check whether there was anything very surprising there.
Ansari et al. 1991 tested how much virus survived on both people's fingers and metal disks, when directly applied. tl;dr:
Maybe significant for HPIV-3, but rhinoviruses comprise most "common colds", and those seem to stick around for a while.
Winther et al. 2007 tested two things:
The study didn't test for the last leg, i.e. actual infectivity.
And this section, correspondingly, consists entirely of studies that include Elliot C. Dick as an author.
D’Alessio et al., 1984 ran 3 types of experiments with a total of 33 recipients. I'm really hesitant to draw conclusions from this one, because it had some surprising results that I think represent methodological issues.
The first experiment type had donors and recipients playing cards, talking, and singing together in a room for 2-3 hours. Across two rounds, there were 5 donors and 9 recipients. The researchers claim that none of the recipients were infected with RV55 (which they infected the donors with), but see this footnote to the data table for that experiment:
Three colds developed in recipients during the week after inoculation, but RV55 was not isolated from multiple nasal specimens, nor did antibody to RV55 appear in serum after the experiment. Three nasal specimens were obtained from each of the six asymptomatic recipients after exposure to RV55; tests of all of these specimens yielded negative results, as did serological tests.
So who knows, really.
The second experiment type had small groups of donors and recipients sharing dormitory rooms for 12 hours a day, 3 days in a row, and "to decrease the likelihood of transmission by fomites, participants were asked to avoid handling one another's personal items and to use separate bathrooms". There were 11 donors and 11 recipients, with 5 groups of 2 each, and 1 group of 1 each. The results from this experiment are reported as "one infection", but once again:
Three recipients developed colds, but only one case of transmission of RV55 was confirmed by laboratory findings.
🤔
The third had donors kissing recipients: "Forty-eight hours after being infected with RV55, four donors kissed five recipients for 1 min. A second group of six donors, similarly infected, kissed 11 recipients for 1.5 min (two 45-sec contacts). Each recipient was kissed only once; the donors and recipients were instructed to use the kissing technique most natural for them." They report only one instance of transmission here (recipient no. 16), with this caveat:
In four recipients (no. 1, 6, 8, and 12), cold-like symptoms developed during the week after exposure to RV55, but seroconversion did not occur and RV55 was not isolated from nasal specimens. A rhinovirus other than RV55 was isolated from recipient no. 8.
As I said above, the methodology here seems much less careful and the results are substantially sketchier; how likely is it that so many test subjects just happen to coincidentally catch a different cold after being exposed to one in the experimental setting?
Dick et al., 1987 had a very interesting, and substantially more robust, experimental design. The first stage, which they ran 3 times, was sticking 20 guys into a room with 4 tables. 8 of those guys had colds, 12 were healthy - each table took 2 infected and 3 uninfected guys. Half (6) of the uninfected guys wore restraining devices meant to prevent them from touching their faces. In the first of the 3 rounds, this device was a "large, clear plastic collar... worn around the neck and supported by the shoulders".
In the second and third rounds, they wore arm restraints "composed of two halves of an orthopedic arm brace held together at the elbow by a moveable hinge welded so that the brace could bend only between 140°and 180°. Both devices enabled effortless movement of the arms, ensuring normal poker playing but preventing the wearer from touching any part of his face. If any of the restrained recipients needed his nose blown or scratched, assistance was given by a monitor."
They then had them play cards together at these tables for 12 hours. (They also went through quite a bit of effort to prevent accidental contamination during e.g. mealtimes.)
So, across 3 rounds, there were 36 recipients - 18 restrained and 18 unrestrained. 10 of the restrained recipients and 12 of the unrestrained recipients were infected.
They also ran a different, fourth round, where they had 12 uninfected guys play cards using "freshly contaminated object" ("cards, poker chips, and pencils") from infected donors playing other games concurrently, swapping out the objects every hour for maximum freshness. They forced those guys to perform "exaggerated hand-to-nose and facial rubbing, often with conjunctival and nasal mucosal contact" every 15 minutes, because "it seemed the recipients were actively avoiding any contact to their faces". None of those guys were infected.
This seems like substantial evidence that under more "realistic" conditions[9], transmission via fomites is not overwhelmingly likely for RV-16. I'd be more hesitant to update very hard without the fourth round, but they had a bunch of guys spend 12 hours playing cards that had been very recently and actively used by a bunch of sick people, and none of them got sick.
This paragraph was somewhat alarming (bolding mine):
The complete absence of RV16 on the fomite recipients' hands was surprising. Certainly the nose-to-hand-to-fomite-to-hand-to-nose exposures of the experiment D (fomite) recipients far exceeded that of any normal indirect contact circumstance. Eight men with colds were contributing continuously. The cards, pencils, and poker chips used by these recipients were literally gummy from the donors' secretions. Subsequent to the experiments reported here, we embarked on additional experiments studying the four steps (see above) of the contact transmission chain [12]. We found that the virus nearly disappears during the journey from the donor's to the recipient's nose. The donor may have thousands of infectious particles on his hands, but few are deposited on fomites; the cards and chips he handles often have no virus at all, and positive fomites have only 10-30 TCID50[10] By the time the recipient's hands and nose are reached, the levels of virus drop to zero or nearly so. Significant quantities of virus will only reach their final destination if the mucus is still wet; evidently this situation happened infrequently or never in our poker games, even though the exposures were much exaggerated.
How does one square this with Gwaltney & Hendley, 1982? We don't know how long recipients there waited after the coffee mug handles were contaminated, but there was a 20 minute window between the plastic tiles being contaminated and the recipients touching them. That's longer than the enforced 15-minute interval for face-touching here. A couple possibilities:
Also, I think this does not do much to distinguish between small particle and large particle transmission, given the table size.
Meschievitz et al., 1984 ran six trials where they housed multiple donors (usually between 5 and 10) with multiple recipients (similar numbers), for varying lengths of time, ranging from 5 hours to ~6.5 days. They report:
The experiments described in table 3 are arranged in order of donor-hours of exposure (DHE; one donor in the experiment room for 1 hr equals one DHE; five donors in the room for 5 hr equals 25 DHE, etc.) because rate of RV16 transmission correlated in a nearly linear fashion with DHE (r = .926, P < .01). The correlation was nearly perfect when DHE was plotted logarithmically (r = .997, P < .001; figure 3).
Their experiment design, in isolation, doesn't give us much to go on for updating on various methods of transmission. However, this does seem like it lets us reconcile Gwaltney (1978)[11] and Dick (1987)[12] - the longer you're in close contact with someone sick, the more likely it is that they successfully cough onto your eyeball.
There are a couple of important factors that the above analysis doesn't cover, that could substantially change takeaways. Briefly, those are:
Andrup et al., 2023 is a more comprehensive meta-analysis of the existing literature on the subject, which I held off on reading before I conducted my own research and wrote up the above analysis (minus the summary at the top). This is their conclusion:
We found low evidence, that transmission via hands and fomite followed by self-inoculation is the dominant transmission route in real-life indoor settings. We found moderate evidence, that airborne transmission either via large aerosols or small aerosols is the major transmission route of rhinovirus transmission in real-life indoor settings. This suggests that the major transmission route of RVs in many indoor settings is through the air (airborne transmission).
However, as far as I can tell, the only studies they analyzed which could plausibly support transmission via small particles (distinguished from large particles) were two studies on prisoners from the 60s. In the first one[14], researchers sprayed aerosolized inoculum directly into the subjects' noses via a hand atomizer. In the second one[15], they "received aerosol inoculation by means of a molded rubber face mask attached to a cylindrical chamber containing a continuous flow of aerosol generated by a Collison atomizer". All of the studies that tested conditions more similar to the real world found no transmission via small particle aerosols.
"Large" particle transmission: likely, especially over extended periods of time. (Remember, 10 of 18 experiment subjects were infected within a 12 hour period despite being physically restrained from touching their face with their hands.) "Small" particle transmission: not very likely (maybe even "very unlikely"). Fomite transmission: possible, but the strongest evidence comes from studies that are testing pretty contrived[16] setups, whereas the studies that check more natural conditions don't see very high rates of transmission against the aerosol baseline. My guess is that in practice, fomites are probably not responsible for most adult-to-adult transmission. (I'm much less confident about children.)
Definitely avoid coughing or sneezing into people's faces. Wash your hands before touching your face, if you're taking care of a sick person... but don't forget to use that paper towel to turn the water off, since faucet handles are hotbeds of other people's germs. Face-to-face conversations seem much worse than ambiently hanging out in the same room.
Overall confidence: low. Reasonable people might not agree with my inferences about the likelihood of "natural" transmission via the three transmission methods described and tested by these studies. Might generalize poorly to children. Might depend on details of specific viruses, and I don't think we've done enough research to have meaningful evidence about whether different RVs have very different transmission profiles from each other.
Thanks to Elizabeth van Nostrand and eukaryote for feedback on clarity. Remaining imprecisions are my own.
Like being in the same room as someone for a while.
That I could find!
The authors note:
The aerosols produced by this method were assumed to contain particles of both large and small diameter. However, the method was characterized as "large particle" since this was the only contact situation in which passage of large droplets of respiratory secretion between donor and recipient could occur.
However, they don't define "large" or "small" particle sizes anywhere.
The study authors note that the "one infected recipient in the large particle aerosol group had had intimate contact with a hand-contact recipient on the evening of the first day of their exposure to the donor." There are some other details here which suggest the study authors think the transmission was probably still the result of the tested transmission route (large particle) rather than via this unintended side-channel, but it does muddy the waters.
I'm more interested in figuring out indirect transmission via shared surfaces (i.e. doorknobs, shared food serving utensils, etc), as well as "survival" time on various surfaces (including one's own fingers).
The study was simultaneously testing the effectiveness of spraying surfaces with a disinfectant. I omit discussion of those results for brevity.
The researchers eluted the virus from the subjects' fingers, and then used some plaque assays to test the eluates (by culturing).
In this study, they used RT-PCR to test for presence of the virus.
Though still exaggerated: "The average number of hand-to-face contacts recorded (186) over a 12-hr period was far in excess of that observed during normal adult behavior [7]: one "nose pick" every 3 hr and one "eye rub" every 2.7 hr."
50% tissue culture infectious dose, which works by progressively diluting a virus sample until a specific dilution infects 50% of separate "wells" of cell culture samples in a "plate" (a collection of wells).
Only 1 out of 12 recipients sitting together at a table with infected donors was infected, despite high-risk activities like talking, singing, sneezing, and coughing.
10 of 18 recipients, restrained in a way that prevented them from touching their faces, were infected from 12 hours of playing cards at the same table as infected donors. (Also 12 of 18 unrestrained recipients.)
As infected children appear to have a higher viral load than adults, this may explain why children are considered to be the main transmission vector.
...
The viral load in the mucus of more than 1000 rhinovirus-infected children below 1 year of age was 5.79 × 10^6 TCID50/mL, which is 10 to 100 times higher than our HC of 1 × 10^5 (Regamey et al., private communication)
And fairly pessimal, for the recipients.
Published on November 18, 2025 5:22 AM GMT
Context: Post #8 in my sequence of private Lightcone Infrastructure memos edited for public consumption.
When you finish something, you learn something about how you did that thing. When you finish many things at the same time, you do not get to apply the lessons you learned from each of those things to the others. This insight, it turns out, was non-trivially a core cause of the industrial revolution.
The assembly line is one of the foundational technologies of modern manufacturing. In the platonically ideal assembly line the raw ingredients for exactly one item enter a factory on one end, and continuously move until they emerge as a fully assembled product at the other end (followed right by the second item, the third item, and so on). This platonic assembly line has indeed been basically achieved, even for some of humanity's most complicated artifacts. A Tesla factory converts a pile of unassembled aluminum and some specialized parts into a ready-to-ride car in almost exactly 10 hours, all on a continuously moving assembly line that snakes itself through the Gigafactory.
A smooth assembly line is also the sign of a perfectly calibrated process. We know that we are not spending too much time on any part of our assembly. The conveyor belt moves continuously, always at the same speed, calibrated to be exactly enough to complete the task. If for some reason the task takes longer, because e.g. a worker is worse than a previous worker, we notice immediately.
In contrast to all of this, stands some human instincts around efficiency. If instead of making one item each from start to finish, we could just process a big batch of dozens or hundreds or thousands of items, we could, it seems, be so much more efficient. This is usually a lie.
The ever changing parable of the students making pottery/paper-airplanes/etc. is usually invoked at this point. While the parable has undergone many variations, this story from Atomic Habits is as far as I know the original one:
ON THE FIRST day of class, Jerry Uelsmann, a professor at the University of Florida, divided his film photography students into two groups.
Everyone on the left side of the classroom, he explained, would be in the “quantity” group. They would be graded solely on the amount of work they produced. On the final day of class, he would tally the number of photos submitted by each student. One hundred photos would rate an A, ninety photos a B, eighty photos a C, and so on.
Meanwhile, everyone on the right side of the room would be in the “quality” group. They would be graded only on the excellence of their work. They would only need to produce one photo during the semester, but to get an A, it had to be a nearly perfect image.
At the end of the term, he was surprised to find that all the best photos were produced by the quantity group. During the semester, these students were busy taking photos, experimenting with composition and lighting, testing out various methods in the darkroom, and learning from their mistakes. In the process of creating hundreds of photos, they honed their skills. Meanwhile, the quality group sat around speculating about perfection. In the end, they had little to show for their efforts other than unverified theories and one mediocre photo.
While one might accept that the assembly line has been deeply transformative in manufacturing, it might be less clear how the same principles would affect the operations of something like software engineering, which is a good chunk of what we do. However, the same principles have also been the driver of a non-trivial fraction of modern software development progress.
In the dark old days of software engineering, software would be shipped in what they called "releases".
The lifecycle of a release would start with a bunch of managers coming together and making a big long list of features they think the software they are working on should have. This list would then be handed to a small set of lead engineers to transform into something they would call the "spec", usually at least hundreds of pages long. This spec would then be handed to a set of programmers to "implement". The resulting piece of software would then be handed to a set of testers to test. Then handed back to the programmers to fix. Then they would do a big pile of user-testing to get product feedback on the resulting software. This would then result in an additional list of features, which would be translated into a spec, which would be implemented, tested and fixed.
And then finally, after many months, or even years, the software would be burned on a CD, and then be shipped out to users.
Contrast this with the processes dominating modern software engineering. Everything is continuously deployed. A single engineer routinely goes from having an idea for a feature, to having it shipped to users within hours, not months. Every small code change gets shipped, immediately. We avoid shipping many things at once, since it will make it harder for us to roll them back. This is an application of the principle of single piece flow/small batches.
At the management level, the opposite of single-piece flow is usually called "waterfall planning". A waterfall planning process is structured into multiple distinct stages of product development where big batches of changes get combined, audited, reviewed, iterated on, and eventually, sometime down the road, shipped to users. The alternative to waterfall processes are often called "lean processes" (also the eponymous cause of "the Lean Startup" book title).
The application of the principle of single piece flow to new domains can often produce enormous efficiency gains. One domain stuck deeply in the old ways, for example, is architecture and construction. A building gets built, or renovated, in a set of discrete, long stages. First the client "figures out what they need", then an architect draws up the blueprints, then a planner reviews the blueprints, then a contractor builds the whole building, then an auditor reviews the construction.
This is complete madness. How are you supposed to know what you need in a building if you have never built any part of it? How are you supposed to know what materials to work with if you don't know how well the different materials will work for you?
Lighthaven was renovated drastically differently from basically all other buildings built or renovated in the Bay Area. During renovation we would aim to finish a single room before we started working on the next room. After every room we would review what worked, what didn't work, which parts took longer than expected, and which parts turned out to be surprisingly easy. Our contractors were not used to this. We needed to change a huge amount about how they operated, but I don't think Lighthaven could have been successfully built any other way.
Much of Lightcone's work should aim to ship as continuously as possible, even if there is no clear precedent for what single piece flow would look like in that domain. To show what this thinking looks like in-progress:
The ideal workshop, when I hold this consideration in mind, is a series of rooms that each participant walks through over the course of a week, with each station teaching them something, and preparing them for future stations. Every daylight hour, a newly educated participant leaves the workshop, with another person right behind them, and another person entering right at the start.[1]
Every single participant would be an opportunity to learn for all future participants. We could calibrate the efficiency and difficulty of each station (if necessary adapting to the participant), and would notice immediately if something was going wrong.
Unfortunately cohort effects loom large, as the experience of learning alone is very different from learning together, and this appears to be a big obstacle to making this ideal workshop happen. But I still think there may be some way.
In many ways Inkhaven is an application of single piece flow to the act of writing. I do not believe intellectual progress must consist of long tomes that take months or years to write. Intellectual labor should accumulate minute by minute, with revolutionary insights emerging from hundreds of small changes. Publishing daily moves intellectual progress much closer to single piece flow.
For Lighthaven event rentals, the month-long lead and planning time also generates a great deal of inefficiency. The ideal series of events would be created one piece at a time. Of course, the obstacle lies in people's calendars and plans: they need to control their schedules weeks and days out, which requires locking in much about each event long before the previous one has completed.
The application of single piece flow is also one big reason for why Lightcone is a gaggle of generalists. Specialization often breeds waterfall habits. A gaggle of generalists can all focus their efforts together on shipping whatever needs to be shipped right now, until it is shipped, and then reorient.
A "workshop snake" as Justis affectionately named it while helping me edit this post
Published on November 18, 2025 2:54 AM GMT
Every so often, I have this conversation:
Them: So you know how the other day we talked about whether we should leave for our trip on that sunday or monday?
Me: …doesn’t sound familiar…
Them: And you said it depended on what work you had left to do that weekend…
Me: Hm… where were we when we had the conversation?
Them: Um… we had just arrived at my house and I had started making food-
Me: Ooooh yeah yeah okay. And I was sitting on the black stool facing the clock. Okay cool, I remember the conversation now, please continue.
…What the heck is up with this? Does it happen to anyone else? Apparently, my brain decides to index conversations to be efficiently looked up by quite precisely where I was in physical space when the conversation occurred. I have no conscious experience of this indexing happening. It’s also pretty strange that it happens for locations that I use on a regular or even daily basis; it’s not like I could just start listing all the conversations I’ve had while sitting on that kitchen stool.
I do believe that I’m quite above-average aware of what’s happening in my visual field. I always notice when people come in and out of a room, I tend to see new objects or decor right away, and I somehow spot every insect. I’m often the first to spot a leak or mold. I almost never run into stuff or knock things over. So maybe it’s just increased attention to my surroundings?
Here’s a similar pattern I’ve noticed.
I’m in a phase of my life where I read a lot of books, and especially textbooks. My field of study is interdisciplinary, and I am frequently looking up something that I’ve read before. When I do, I will frequently have the sense of roughly where it was, physically, in the book. This includes:
To be clear, I’m not claiming that I have any kind of “photographic” memory. I have no idea what almost all of these books say. I don’t have any degree of verbatim retention. But when I remember that there was a particular interesting part and want to go look for it, my brain brings up these visuo-spatial associations. These associations feel blurry but confident, like some kind of hash function. Textbooks are heavily formatted, so there will be lots of white space, diagrams, section headers et cetera to anchor off. When I try to recall the “location” of events in flat prose fiction books, nothing comes up.
This is, I think, one reason why I have struggled to switch over to digital forms of books. I've tried it a lot, but they always fade out of use. There are many other reasons (if a book isn't on my shelf I tend to forget it exists, and I find physical books far easier to skim) but the fact that I can't physically index my knowledge to it is noticeable. It's just some big infinite scroll that looks and feels indistinguishable from all the other big infinite scrolls.
I’d love to hear how others relate to either of these experiences!
Published on November 18, 2025 1:03 AM GMT
Here’s a conceptual problem David and I have been lightly tossing around the past couple days.
“A is a subset of B” we might visualize like this:
If we want a fuzzy/probabilistic version of the same diagram, we might draw something like this:
And we can easily come up with some ad-hoc operationalization of that “fuzzy subset” visual. But we’d like a principled operationalization.
Here’s one that I kinda like, based on maxent machinery.
First, a background concept. Consider this maxent problem:

$$\max_Q H[Q] \text{ s.t. } \sum_x Q(x)(-\log P(x)) \le \sum_x P(x)(-\log P(x))$$

Or, more compactly:

$$\max_Q H[Q] \text{ s.t. } \mathbb{E}_Q[-\log P] \le H[P]$$

In English: what is the maximum entropy distribution $Q$ for which (the average number of bits used to encode a sample from $Q$ using a code optimized for distribution $P$) is at most (the average number of bits used to encode a sample from $P$ using a code optimized for $P$)?

The solution to this problem is just $Q = P$.
Proof
First, the constraint must bind, except in the trivial case where $P$ is uniform. If the constraint did not bind, the solution would be the uniform distribution $U$. In that case, the constraint would say

$$\mathbb{E}_U[-\log P] \le H[P] \le H[U] = \mathbb{E}_U[-\log U]$$

(because the uniform distribution has maximal entropy)

… but then adding $\mathbb{E}_U[\log U]$ to both sides yields $D_{KL}(U \| P) \le 0$, which can be satisfied iff the two distributions are equal. So unless $P$ is uniform, we have a contradiction, therefore the constraint must bind.

Since the constraint binds, the usual first-order condition for a maxent problem tells us that the solution has the form $Q(x) = \frac{1}{Z} e^{-\lambda(-\log P(x))} = \frac{1}{Z} P(x)^{\lambda}$, where $Z$ is a normalizer and the scalar $\lambda$ is chosen to satisfy the constraint. We can trivially satisfy the constraint by choosing $\lambda = 1$, in which case $Z = 1$ normalizes the distribution and we get $Q = P$. Uniqueness of maxent distributions then finishes the proof.
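As a quick numerical sanity check of this result (my own addition, not part of the original argument), here's a minimal Python sketch: it maximizes $H[Q]$ subject to $\mathbb{E}_Q[-\log P] \le H[P]$ for a random $P$, using scipy's constrained optimizer, and the optimum should land back on $P$. The variable names and solver choice are assumptions on my part.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
P = rng.random(5)
P /= P.sum()                        # an arbitrary (non-uniform) distribution P
H_P = -np.sum(P * np.log(P))        # entropy H[P]

def neg_entropy(q):
    q = np.clip(q, 1e-12, None)
    return np.sum(q * np.log(q))    # we minimize -H[Q]

constraints = [
    {"type": "eq",   "fun": lambda q: q.sum() - 1.0},                  # normalization
    {"type": "ineq", "fun": lambda q: H_P + np.sum(q * np.log(P))},    # H[P] - E_Q[-log P] >= 0
]

q0 = np.full(len(P), 1.0 / len(P))  # start from the uniform distribution
res = minimize(neg_entropy, q0, bounds=[(0.0, 1.0)] * len(P), constraints=constraints)

print(np.round(res.x, 4))           # should be (approximately) equal to...
print(np.round(P, 4))               # ...P itself
```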
So conceptually, leveraging the zen of maxent distributions, the constraint $\mathbb{E}_Q[-\log P] \le H[P]$ encodes the same information about $X$ as the distribution $P$ itself.

Conceptually, if the constraint $\mathbb{E}_Q[-\log P_1] \le H[P_1]$ encodes all the information from $P_1$ into a maxent problem, and the constraint $\mathbb{E}_Q[-\log P_2] \le H[P_2]$ encodes all the information from $P_2$ into a maxent problem, then solving the maxent problem with both of those constraints integrates "all the information from both $P_1$ and $P_2$" in some sense.
Qualitatively, here’s what that looks like in an example:
$P_1$ says that $X$ is probably in the red oval. $P_2$ says that $X$ is probably in the blue oval. So together, they conceptually say that $X$ is probably somewhere in the middle, roughly where the two intersect.

Mathematically, the first order maxent condition says $Q(x) = \frac{1}{Z} P_1(x)^{\lambda_1} P_2(x)^{\lambda_2}$, for some $\lambda_1, \lambda_2$ (which we will assume are both positive, because I don't want to dive into the details of that right now). For any specific $x$ value, $P_1(x)$ and $P_2(x)$ can be no larger than 1, but they can be arbitrarily close to 0 (they could even be 0 exactly). And since they're multiplied, when either one is very close to 0, we intuitively expect the product to be very close to 0. Most of the probability mass will therefore end up in places where neither distribution is very close to 0 - i.e. the spot where the ovals roughly intersect, as we'd intuitively hoped.

Notably, in the case where $P_1$ and $P_2$ are uniform over their ovals (so they basically just represent sets), the resulting distribution is exactly the uniform distribution over the intersection of the two sets. So conceptually, $P_1$ says something like "$X$ is in set $A$", $P_2$ says something like "$X$ is in set $B$", and then throwing both of those into a maxent problem says something like "$X$ is in $A$ and $X$ is in $B$, i.e. $X$ is in the intersection".
So that hopefully gives a little intuition for how and why maxent can be used to combine the information "assumed in" two different distributions $P_1, P_2$.

What if we throw $P_1$ and $P_2$ into a maxent problem, but it turns out that the $P_2$ constraint is nonbinding? Conceptually, that would mean that $P_1$ already tells us everything about $X$ which $P_2$ tells us (and possibly more). Or, in hand wavy set terms, it would say that $A$ is a subset of $B$, and therefore $P_1$ puts a strictly stronger bound on $X$.

In principle, we can check whether $P_2$'s constraint is binding without actually running the maxent problem. We know that if $P_2$'s constraint doesn't bind, the maxent solution is $P_1$, so we can just evaluate $P_2$'s constraint at $P_1$ and see if it's satisfied. The key condition is therefore:

$$\mathbb{E}_{P_1}[-\log P_2] \le H[P_2]$$

$P_2$'s constraint is nonbinding iff that condition holds, so we can view $\mathbb{E}_{P_1}[-\log P_2] \le H[P_2]$ as saying something conceptually like "The information about $X$ implicitly encoded in $P_2$ is implied by the information about $X$ implicitly encoded in $P_1$" or, in the uniform case, "$A$ is a subset of $B$".
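To make the condition concrete, here's a small illustrative sketch of my own (the set sizes and function name are made up): in the uniform-over-sets case, checking $\mathbb{E}_{P_1}[-\log P_2] \le H[P_2]$ reduces to an ordinary subset check on the supports.

```python
import numpy as np

def is_fuzzy_subset(p1, p2):
    """Check E_{P1}[-log P2] <= H[P2], the proposed 'fuzzy subset' condition."""
    with np.errstate(divide="ignore", invalid="ignore"):
        cross = -np.sum(np.where(p1 > 0, p1 * np.log(p2), 0.0))   # E_{P1}[-log P2]
        h2 = -np.sum(np.where(p2 > 0, p2 * np.log(p2), 0.0))      # H[P2]
    return cross <= h2 + 1e-9   # small tolerance for the equality case

# Uniform "set" distributions over a 6-element space:
A = np.array([1, 1, 0, 0, 0, 0]) / 2   # uniform over {0, 1}
B = np.array([1, 1, 1, 1, 0, 0]) / 4   # uniform over {0, 1, 2, 3}

print(is_fuzzy_subset(A, B))  # True:  A's support lies inside B's
print(is_fuzzy_subset(B, A))  # False: B's support is not inside A's
```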
Now for an interesting check. If we're going to think of this formula as analogous to a subset relationship, then we'd like to have transitivity: $A \subseteq B$ and $B \subseteq C$ implies $A \subseteq C$. So, do we have

($\mathbb{E}_{P_1}[-\log P_2] \le H[P_2]$ and $\mathbb{E}_{P_2}[-\log P_3] \le H[P_3]$) implies $\mathbb{E}_{P_1}[-\log P_3] \le H[P_3]$

?
Based on David’s quick computational check the answer is “no”, which makes this look a lot less promising, though I’m not yet fully convinced.
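The exact check David ran isn't described here; the following is my own sketch of the kind of brute-force search one might run, drawing random distributions and counting triples where the first two conditions hold but the third fails.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))   # E_P[-log Q]

def entropy(p):
    return -np.sum(p * np.log(p))   # H[P]

def cond(p, q):
    # The proposed "fuzzy subset" condition: E_P[-log Q] <= H[Q]
    return cross_entropy(p, q) <= entropy(q)

counterexamples = 0
for _ in range(100_000):
    # Three random distributions over 4 outcomes (strictly positive, so logs are finite)
    p1, p2, p3 = rng.dirichlet(np.ones(4), size=3)
    if cond(p1, p2) and cond(p2, p3) and not cond(p1, p3):
        counterexamples += 1

print(counterexamples)   # any count > 0 means transitivity fails for this condition
```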
Published on November 18, 2025 12:00 AM GMT
Regularly, I'll wind up reading some work and be surprised that it bears little resemblance to its portrayal on the internet. Usually, it's a lot more nuanced than I thought, but at times it says the opposite of what everyone else claims. What's going on?
Answer: almost no one who talks about this work has ever read it. Instead, they read about it. And I don't even mean a book review. Oh, no. Most people who know about this work only have vague malformed memories, no doubt formed at dinner parties where they had riveting conversations on Famous Work and its ilk as they sipped martinis and ate off smorgasbords or whatever it is they do at dinner parties. I wouldn't know, because while they partied, I studied the original.
This is typical. Most people who know of a work will get it through secondary, or tertiary, sources e.g. a thread on Twitter, a discussion on a podcast, a not too accurate summary on Wikipedia etc. Naturally, memetic dynamics kick in and shave away the idea's details till it's a caricature of itself. This pressure towards slop is, at best, weakly countered by readers who've read the primary sources and can whack around the lowly secondaries with facts and logic, thereby acting as a constraint on the memetic forces abrading away the work's finer details. This means the caricature shares some mutual information with the original; but that's a weak constraint. Luigi and Waluigi share some mutual information.
What's the upshot of all this? For writers, most discourse on your work is going to look like everyone is horribly caricaturing your ideas because participants know only of the caricature. It's not malicious. Don't take it personally. After all, even supervillains can't get people to listen to their monologues.
Only a few people will truly put in the work to understand your ideas, which involves active reading and even, gasp!, putting them into practice. These few may get significant value from your work, and spread the word about it. You can make their job easier by planning for there to be memes or juicy quotes. You might even be able to shape the caricatured version of your ideas by selectively making parts of your work more/less memetic. If you're really galaxy brained, you might use the caricature as a smokescreen to hide the meaning of your work in plain sight. Why bother? Ask the Straussians.
For readers, while reading a work you've heard of second-hand won't necessarily be useful to you, it will probably teach you something. Yes, even works by that guy you totally hate.
In fact, it's true even for works by that guy you love. Or for works that your information bubble won't stop raving about. Consider, say, the Sequences. Pop quiz: how many people on LessWrong have read >half of the Sequences? I'd guess <10%. This is in spite of how tsundere LessWrong is for the guy. You'd think that such love or hate would be enough to get LessWrongers to read the dang Sequences, but no, it's not.
And The Sequences are pretty great. There's lots of valuable insight there, waiting for people to bother to read it. Likewise for other great works. If you can be bothered, there's the equivalent of epistemic $100 bills lying around everywhere on the street.