
Why I am not buying IPv4 addresses as an investment

2026-03-20 17:02:10

2026-01-17

Disclaimer

  • Quick Note
  • I did the actual research back in 2024-10. I polished the notes and published them online in 2026-01. It's possible the situation has changed in the intervening year.

Summary

  • I noticed IPv4 address prices were going up. I considered purchasing IPv4 addresses in bulk as an investment, in order to make money. But I decided not to buy them.
  • Many ISPs are adopting dual stack (IPv4 and IPv6) and CGNAT, in order to serve their users.
    • Dual stack means they serve users over IPv6 (typically as first preference), but also support IPv4 (typically as second preference).
    • CGNAT means they can serve multiple users over a single IP address, by assigning different ports to them. Most users are fine with CGNAT.
    • ISPs cannot unilaterally shift to IPv6-only, because users don't want this, because users want to access all websites, and some websites are still hosted as IPv4-only.
  • Some website owners (especially large enterprises) are slow to switch their websites to either dual stack or IPv6-only, for technical reasons. They would rather pay the increasing prices of IPv4 addresses.
  • Maybe CGNAT means IPv4 address space won't exhaust.
    • There are ~4.3 billion IPv4 addresses. There are at least ~20 billion website URLs. There are ~8 billion people on Earth.
    • If ISPs all quickly shift to CGNAT, it is possible the current IPv4 address space is enough to support all website owners by a comfortable margin.
    • Since I have high uncertainty on a) whether the current IPv4 address space is sufficient for most website owners, and b) whether most ISPs will find it easier to adopt CGNAT than force IPv6-only, I have high uncertainty on whether IPv4 address prices will go up or not.

Main

Why might you expect IPv4 address prices to go up?

  • IPv4 prices have historically gone up due to IPv4 address exhaustion.
  • There are people hoarding and leasing existing IPv4 addresses for this very reason.
  • All the internet standards bodies are demanding the world switch to IPv6. But adoption is lagging. This means there are some bottlenecks preventing people from switching immediately.

Compatibility between IPv4 and IPv6

  • Both websites and ISPs (residential and mobile) need to enable IPv6 for the transition to work.
  • IPv4 and IPv6 are not interoperable; an IPv4-only host cannot communicate with an IPv6-only host.
  • IPv4 requires NAT, which is not the best way of doing security. With IPv4 and NAT, a static IP list is often provided to routers in advance, so the server that manages the firewall can go offline.
  • With IPv6, it is common (or maybe required?) to keep the server that manages the firewall online, so it provides the IP list to the routers in real time. Usually there is a deny-all default rule, with specific services allowed as needed.
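As a quick illustration of the dual-stack idea discussed above (not from the original post), the sketch below checks which address families a hostname advertises in DNS, using only the Python standard library. The hostname is whatever you want to probe; "example.com" in the comment is just a placeholder.

```python
# Check whether a host resolves over IPv4, IPv6, or both (dual stack).
import socket

def stack_support(host: str) -> set:
    """Return which address families a host resolves to ('IPv4'/'IPv6')."""
    families = set()
    for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
        if info[0] == socket.AF_INET:
            families.add("IPv4")
        elif info[0] == socket.AF_INET6:
            families.add("IPv6")
    return families

# A dual-stack site returns {'IPv4', 'IPv6'}; an IPv4-only site just {'IPv4'}.
# e.g. stack_support("example.com")
```

A client that prefers IPv6 but falls back to IPv4 (the dual-stack behaviour described above) is essentially iterating this list in order, which is what "Happy Eyeballs" implementations automate.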

Market players

  • End users - want to browse websites and get access to various services.
  • ISPs - want to sell whatever services the end user wants, as their preferred mode of accessing the internet. Manage network infra (fiber optic cables, routers, etc.)
  • Website owners - primarily large enterprises and small companies, who have financial reason to want to maintain websites. Also some individuals who have arbitrary motivations for maintaining websites. Most of them rent servers from cloud providers. A few large enterprises manage their own servers.
  • Cloud providers - Primarily an oligopoly of AWS, GCP, Azure, who want to sell whatever services the website owners want. Manages compute and storage infra, and network infra. Also has transit agreements with various ISPs.

Current adoption

  • End users
    • Almost all new consumer devices (mobiles and PCs) support accessing both IPv4 and IPv6. No manual switching required.
    • Some very old consumer devices may not support IPv6.
  • ISPs
    • IPv4-only ISPs - constant adoption
    • dual stack ISPs - increasing adoption
    • IPv6-only ISPs - almost non-existent
  • Cloud providers
    • Big 3 cloud providers provide all three options - IPv4-only, dual stack, IPv6-only
    • Cloud providers are basically an oligopoly.
    • Some smaller cloud providers provide IPv4-only (if they're old) or IPv6-only and dual stack (if they're new).
  • Website owners
    • IPv4-only websites - constant adoption
    • dual stack websites - increasing adoption
    • IPv6-only websites - non-existent

More details on website owners using dual stack

  • Many sysadmins and network security people at enterprises don't know how to do IPv6 config yet. Hence they are still hosting IPv4-only websites, either on cloud providers' servers or on the enterprise's own servers.

Multiple ways ISPs deal with an increasing number of users

  • They can purchase IPv4 addresses (increasing their price), and pass the cost on to the user.
    • Some ISPs are doing this.
  • They can adopt CGNAT, and serve more than one user using a single IPv4 address.
    • Some ISPs are doing this.
  • They can provide IPv6-only to users.
    • In practice, this is not an option as many websites still don't provide IPv6. Users don't want to purchase an internet connection that lets them only access some websites but not others.

More details on ISPs using CGNAT or dual stack

  • Some ISPs now use carrier-grade NAT, where they can assign different port numbers on the same IP to different customers. This allows them to serve more users on IPv4, without having to purchase more IPv4 addresses.
  • Most users don't care if they're being served over CGNAT. Most users just need a few ports open, they don't need the full range of ports associated with an IP address.
  • A few power users care, for example those hosting web servers on residential IP addresses. Web hosting requires a unique, publicly reachable IP, so it can't sit behind CGNAT (though it can sit behind the host's own NAT). Some devs are using VPNs and tunnels to host their websites on residential ISPs despite their residential ISP using CGNAT. reddit comment on this
  • Indian residential ISPs seem mostly to use CGNAT.
  • India is leading on IPv6 (dual stack) adoption. Indian mobile ISPs are increasingly transitioning from IPv4-only to dual stack. Most users are mobile-first and new to getting internet access. Some Indian ISPs themselves provide mobile services like streaming platforms, and these are increasingly served over IPv6.
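The port-sharing arithmetic behind CGNAT can be sketched in a few lines. The per-subscriber port allocation below is an assumption for illustration; real ISPs choose different block sizes.

```python
# Back-of-envelope: how many subscribers can share one public IPv4 address
# under CGNAT, assuming each subscriber gets a fixed block of ports.
TOTAL_PORTS = 65536          # ports per (IP, transport protocol) pair
RESERVED = 1024              # well-known ports, typically not handed out
PORTS_PER_SUBSCRIBER = 512   # assumed allocation; varies by ISP

subscribers_per_ip = (TOTAL_PORTS - RESERVED) // PORTS_PER_SUBSCRIBER
print(subscribers_per_ip)  # 126
```

With a smaller block (say 64 ports per subscriber), a single address stretches to roughly a thousand users, which is where figures like "~1000 users per IP" come from.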

My question: Who is buying more IPv4 addresses (and driving the price increase)? Is it website owners or ISPs?

  • My first guess is ISPs, as consumer devices (mobile in particular) outnumber websites. ISPs are also relying heavily on CGNAT.
  • Assuming CGNAT is sufficient to host 8B users on mobile, the next question is whether residential ISP connections also use CGNAT.
  • Assuming residential ISP connections also use CGNAT, it is possible we never (?) need IPv6, as all webservers and all clients can be serviced on ~4B IP addresses * ~1000 users per address (??)
  • Conclusion
    • not obvious whether CGNAT will be sufficient to bridge demand for ISPs
    • not obvious why ISPs prefer CGNAT over just using IPv6. Both are software changes (not obvious there is any hardware change)
    • not obvious whether IP address prices will go further up or not

Maybe CGNAT means IPv4 address space won't exhaust?

  • There are 2^32 = ~4.3 billion IPv4 addresses. There are ~8 billion people on Earth. There are at least ~20 billion unique website URLs as per Common Crawl.
  • Most of the IPv4 addresses are currently used by end users, not by website owners.
  • If ISPs all quickly shift to CGNAT, it is possible the current IPv4 address space is enough to support all website owners by a comfortable margin.
  • Since I have high uncertainty on a) whether the current IPv4 address space is sufficient for most website owners, and b) whether most ISPs will find it easier to adopt CGNAT than force IPv6-only, I have high uncertainty on whether IPv4 address prices will go up or not.
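The feasibility question above can be made concrete with a rough calculation. Every ratio here is an assumption chosen for illustration, not a measured figure.

```python
# Rough feasibility check: if end users move behind CGNAT, does the
# IPv4 space cover website owners with room to spare?
TOTAL_IPV4 = 2**32                 # ~4.29 billion addresses
USABLE_FRACTION = 0.85             # excludes reserved/private ranges (rough guess)
WEBSITE_URLS = 20_000_000_000      # ~20B URLs per Common Crawl
URLS_PER_IP = 100                  # many URLs share one host/IP (assumed)

usable = TOTAL_IPV4 * USABLE_FRACTION
ips_needed_for_sites = WEBSITE_URLS / URLS_PER_IP
print(usable > ips_needed_for_sites)  # True
```

Under these assumptions websites need on the order of 200 million addresses against ~3.6 billion usable ones, which is why the answer hinges almost entirely on how much of the space end users continue to occupy.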

Side Note: Leasing IPs

  • IP addresses are purchased from RIRs in blocks. Individual static IPs can be leased from anyone, including ISPs and cloud server hosts.
  • Leased IP addresses often have a poor IP reputation on various IP blacklists, which can be scrubbed over time.
  • This means most leased IP addresses are not a liquid fungible asset.
  • AWS currently leases static IPs at $0.005 / hour = ~$44 / year. Since AWS is reputed, their leased IPs are presumed less likely to end up on IP blacklists, because AWS can proactively shut down people using their servers for activities likely to get them blacklisted.
  • Leased IP addresses are currently not a significant portion of the market.
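Annualising the hourly price quoted above is a one-liner; worth doing since the figure is often mis-remembered.

```python
# Annualise the AWS public IPv4 price quoted above ($0.005/hour).
hourly_rate = 0.005
annual_cost = hourly_rate * 24 * 365
print(round(annual_cost, 2))  # 43.8
```

So a leased address runs about $44/year, a useful benchmark against the purchase price of an address when judging whether buying blocks outright makes sense as an investment.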

Misc

  • OpenWrt is good router software supporting IPv6 (as per some Hacker News comment).
  • IPv6 uses SLAAC so hosts can assign their own addresses without requiring the router (or a DHCP server) to do it. SLAAC tutorial?


Discuss

Hundred ways a superintelligence could kill you (non-serious exercise)

2026-03-20 16:58:00

2026-03-18

Disclaimer

  • Quick Note
  • These methods are not all equally likely to happen in the real world. I am well aware that some of the methods below are far less plausible than others.
  • I did not google anything while writing this list. Wrote it in one sitting with no research. Just for fun.

Why this list?

  • (I am following Visakanv's advice of "do 100 of something")

Start with the boring obvious ones first

build

  • Build clandestine bioweapons labs, build novel bioweapons, release them
  • Build clandestine nuclear reactors, start a nuclear war

persuade

  • Persuade world leaders to start a nuclear war
  • Persuade literally everyone on Earth to kill themselves
  • Persuade a few people to go manufacture the bioweapon sequences it theorises about. Either persuade them to do it out of self-interest, or deceive them.
  • Persuade a few people to create a clandestine bioweapons lab and do research; when they are done, they will release the bioweapons out of self-interest

cyber

  • Use cyberattacks to obtain nuclear launch codes, give fake orders using them, and trigger an accidental nuclear war
  • Use cyberattacks to steal bioweapon sequences, then give them to some other group that wants to build them

Now for some slightly more interesting ones

interesting cyber

  • Use cyberattacks to get politicians' private secrets, blackmail them into escalating war to avoid the secrets from coming out. Do this multiple times and you now have a nuclear war.
  • Do a cyberattack but leave clues that make one country's govt think the other country's govt did the attack. Do this multiple times until war escalates to nuclear.
  • Use cyberattack to get one leader's secrets that make the other country's leader genuinely hate him more. Offend his ego until war is triggered. Do this multiple times until war escalates to nuclear.
  • Use cyberattack to publicly leak a country's true military capabilities - which are in rapid decline and in contrast with their publicly portrayed capabilities. That country now faces pressure to "use it or lose it", triggering a war. Do this multiple times until war escalates to nuclear.
  • Obtain a leader's private secrets, leak them publicly and humiliate them. This leader feels starting a war is the only way to keep their self-image as not weak.
  • Cyberattack a country's minor ally, leak their capabilities, which are smaller than publicly portrayed. The opponent nuclear power realises they can be successful if they start a war, and hence they do.

(getting bored of listing cyber capabilities, let me move on to the next one)

interesting bioweapons

  • Create a bioweapon that only targets one ethnicity or race, weakening one country but not the other. The other country now thinks they can start a war and win. Repeat until escalation to nuclear.
  • Create a bioweapon that only targets the leader of some country. Due to no succession plan, the country descends to civil war, which eventually leads to some rogue actor obtaining nukes and starting a nuclear war
  • Create a bioweapon that makes everyone fall sick but doesn't cause mass death. Leak convincing (but ultimately false) research that eventually everyone will die. Leaders of countries start a war themselves, as they blame each other and try to find the culprit.
  • Create a bioweapon that causes immense suffering but no death. Being afflicted is so painful that leaders start recommending mass suicides as a way of avoiding the incoming suffering.
  • Create a bioweapon that spreads by water not by air. Countries isolate themselves hard to protect themselves. Trade stalls, geopolitical alliances shift as a result. Eventually some countries are starving due to lack of food supplies, and start a war against their allies for reneging on agreements to supply food.
  • Create a bioweapon that causes psychosis. Eventually one world leader goes insane and starts a war. Repeat until escalation to nuclear.
  • Create a bioweapon that infects crops not humans. Humanity dies of mass starvation.
  • Create a bioweapon that infects crops not humans. Countries start a war to get food from other countries, since that is preferable to mass starvation. Eventually one country escalates to nuclear.
  • Create a bioweapon that degrades mental capabilities in a way that is not obviously visible. Nobody knows the bioweapons even exists or is spreading. Eventually one country starts war due to poor decision-making on their leaders' part.
  • Create a bioweapon that causes people to lose language capabilities. Eventually they speak mangled words. However some people think this means they are speaking in tongues, and a new demonic religion being spread. A religious leader considers this an insult to their own religion, and starts war to avoid spread of this other religion.
  • Create a bioweapon that infects both sexes but causes significantly more suffering to women. Some countries invest more resources into fixing this, others don't. War starts as some country's leaders are not happy that other countries are doing nothing about the problem.

genetic engineering

  • Create a gene drive for humans. People with the gene drive enjoy sex more but are incapable of reproduction. Eventually humanity becomes infertile and goes extinct.
  • Genetically engineer a new species of insect that outcompetes humans by competing for all available food resources. Humans die of mass starvation.
  • Genetically engineer a new bacteria that poisons all water sources and makes them unfit for human consumption. War is triggered and eventually escalates to nuclear.
  • Create a gene drive for humans that makes them more aggressive. Eventually one leader gets infected and starts a war.
  • Create a niche environment with many existing species, but where the evolutionary arms race can run much faster than normal. Eventually the species in this environment leak into the broader world and outcompete it for all available resources.
  • Genetically engineer an intestinal parasite to become much more effective. Humanity spends most of its resources trying to fight this parasite and grows collectively poorer. Eventually one country triggers a war.

Ugh, bored, want even more interesting ideas

nanotech

  • Build a small number of von neumann probes that leave the earth, presumably to settle on some other planet. Nobody knows this even happened. Hundred years later the entire solar system is disassembled by a mass of nanobots the size of the sun
  • Build nanobots that descend deep into the Earth's crust, and create a supervolcanic eruption that wipes out humanity.
  • Build nanotech that can walk into people's brains and rewire their neural connections. Eventually one leader goes insane and starts a nuclear war.
  • Build nanotech that can transport dust into the atmosphere, eventually blocking out the sun. Humanity dies of mass starvation.
  • Build nanotech that consumes all available oxygen in the atmosphere. Humanity chokes to death.
  • Build a nanotech factory that produces compounds that cause people to enter a coma. Release them into the atmosphere.
  • Build nanotech that consumes all nitrogen in the atmosphere. All plants die, and humanity soon dies of mass starvation.
  • (ok bored of this too)

aliens

  • Establish radio contact with aliens and offer them Earth's resources in return for being uploaded to the alien's own superior compute resources. The aliens accept the deal, wipe out humanity and plunder the Earth.

chemical

  • Invent chemical weapons that create immense suffering, and give it to leaders of one country to use in war. The other country's leaders are sufficiently enraged that they escalate to nuclear out of spite.
  • Create a more effective raw material for making glass windows, and offer it to humans for making their windows. Fifty years later everyone has terminal cancer because of delayed onset symptoms of this compound.
  • Create a compound more addictive than heroin but as easy to manufacture as salt. Distribute the recipe everywhere and watch humanity wirehead themselves.
  • Create a chemical that allows for effective lie detection in those under its influence. All govts become dictatorships because leaders can enforce total loyalty in their subordinates. A rebel nation starts a nuclear war because they don't want to live in a world of permanent dictatorships everywhere.
  • Create an IQ-enhancing drug and make countries compete for first access. One country triggers war to try to get first access to the drug.
  • (ok bored of this too)

lots of future weapons

  • Manufacture a billion drones and literally slaughter everyone.
  • Invent a four-stage nuclear bomb that can blow up the solar system, and give the recipe to every terrorist group. Direct them on how to obtain the raw materials and manufacture the equipment on their own.
  • Figure out how to create black holes using just a cyclotron, and then ask rogue scientists to go create one.
  • Give scientists the technology to safely crash asteroids into the Earth and mine them for minerals. Mess up the calculations on purpose and eventually one of the asteroids actually causes a mass extinction event.
  • Invent a compound that can erase the ozone layer even if released in very small quantities. Watch humanity die of cancer.
  • Invent a thruster that releases enough energy it pushes the Earth out of orbit. After a hundred years, Earth is no longer within the temperature zone that sustains human life.
  • Invent a cheap method for antimatter production, and give the recipe to every terrorist group. One of them repurposes it to an antimatter bomb and blows up the Earth.
  • Invent a bomb that can be flown to the Sun and accelerate its fusion reaction. Sun burns hotter or colder and Earth's temperature no longer sustains humans.
  • Invent wormhole technology and give it to humans. Humanity opens a wormhole to the wrong place and is vaporised with the fury of a billion suns.
  • Pretend to have invented time travel. Leaders start a war for first access to this tech.
  • Figure out a cure for mortality and give it to humans. Turns out it doesn't grant immortality but only an additional hundred years, and by the time humans realise this they're infertile and can't have kids.

Utopia

  • Actually help solve all of humanity's material and social needs, then watch them kill themselves out of boredom

Stop

I got bored of writing this document. I could artificially create variations of the existing scenarios if I just wanted to fill this page with a hundred scenarios for the heck of it, but that is boring and not a good use of my time.



Discuss

Internet anonymity without Tor

2026-03-20 16:52:43

Lesswrong disclaimer

  • I wrote this before I was working full-time on ASI risk, and may not endorse people working on this stuff today. I currently lean a lot more in favour of making the world transparent, not private

2025-06-18

Internet anonymity without Tor

Disclaimer

  • As of 2025, there is no empirical evidence of a successful deanonymisation attack on Tor. If any govt has this capability, they're successfully keeping that info private. There is public evidence of many of the internet's fiber optic cables being tapped.

Summary

  • Governments can wiretap fiber optic cables and obtain connections between senders and receivers, along with timestamps.
  • If senders send their PGP-encrypted messages to everyone, and the receiver retrieves the entire dump of all messages from any one of these users some hours or days later, then this metadata is much harder to collect.
  • This setup is expensive, hence it only works for <1 MB text payloads sent over >1 gbps connections.

Main

Intelligence-agency-resistant internet anonymity is hard because the physical infrastructure can be inspected by someone with a monopoly on violence.

  • Fiber optic cables cannot hide sender/receiver identities as the attacker can wiretap the cables and then follow the physical path to identify which cable exactly carries a given message. Then they can break into the building that the cable enters.
    • (also fiber optic connections usually requires KYC in most countries, but that's a legal limit not a physical one)
  • Radio signals cannot hide sender/receiver identities as the attacker can triangulate the signal based on signal strengths. Then they can break into the building that is transmitting the signal.
    • (also encrypted radio is illegal in many countries, but that's a legal limit not a physical one)

Success criteria of attacker

  • When considering intelligence-agency-resistant anonymity, getting the metadata alone is enough to count as an attack, even if they don't get the message content.
  • Metadata includes sender/receiver irl identities, sender/receiver pseudonyms, message sizes and timestamps.
  • If the receiver is marked as suspicious, then any sender that connects with them is also marked as suspicious.

Attack 1: Get view access into majority of exit nodes

  • Tor relies on the sender passing each message to three other random users before it reaches the receiver, and hoping the three intermediaries don't all collude with the attacker.
  • If an intelligence agency has view access into the majority of exit nodes, they can deanonymise Tor completely.
  • This could be done by controlling exit nodes themselves or by breaking into exit nodes run by others. They can do the latter using hardware or software backdoors, using targeted cyberattacks or using spies.

Attack 2: Wiretap source and receiver machine

  • If the intelligence agency is tapping the fiber optic cables of both sender and receiver, the timestamps of packets sent will match. This means they are aware of the physical addresses of both machines, the fact that there's a connection between them, and the time interval in which this connection occurred.

Possible solution

What if the sender just sent the message to everyone instead of sending it to their intended receiver?

  • Assume that some receivers may be publishing public proofs (via youtube, twitter etc) of their latest uncompromised PGP keys.
  • Assume that each user sends a single payload of X bytes to all users each day. This payload can include encrypted messages to specific users. If they have less than X bytes to send, they fill the remaining bytes with junk data.
  • Assume each user sends their X bytes at approximately the same time each day.
  • Only the actual receivers of the content can decrypt the message. It is junk to everyone else.
  • Assume 'gpg --hidden-recipient' was used, so there's no way to tell which pubkey was used to encrypt a given message, from a given set of pubkeys.
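The fixed-size daily payload described above can be sketched as follows. The payload size and the message framing are placeholders (a real system would encrypt each message with the recipient's key, as the post assumes); the point is only that every user transmits exactly the same number of bytes whether or not they have anything to say.

```python
# Sketch: every user sends exactly PAYLOAD_BYTES per day, padding with
# random junk so observers can't distinguish real traffic from none.
import os

PAYLOAD_BYTES = 1350  # assumed daily per-user budget

def build_payload(encrypted_messages: list) -> bytes:
    """Concatenate already-encrypted messages and pad to a fixed size.

    Simplified: a real scheme also needs length-framing so receivers
    can split messages back apart before trying to decrypt them.
    """
    body = b"".join(encrypted_messages)
    if len(body) > PAYLOAD_BYTES:
        raise ValueError("messages exceed the daily budget")
    return body + os.urandom(PAYLOAD_BYTES - len(body))

payload = build_payload([b"ciphertext-1", b"ciphertext-2"])
print(len(payload))  # 1350
```

Because the output length is constant and the padding is random, message sizes and send/no-send patterns leak nothing; only the fixed daily broadcast is visible on the wire.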

Throughput

  • 8 billion users, each user has 1 gbps unmetered fiber optic
    • 1 gbps / 8B = 0.016 bytes/s = 1350 bytes/day
  • 100 million users, each user has 1 gbps unmetered fiber optic
    • 1 gbps / 100M = 105.4 KB/day
  • 100 million users, each user has 10 gbps unmetered fiber optic
    • 10 gbps / 100M = ~1.03 MB/day
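The three throughput figures above all come from the same formula, reproduced here as a small helper so the arithmetic can be checked.

```python
# Per-user daily budget if every user broadcasts to every other user:
# link speed (bits/s) -> bytes/s, divided across all users, times 86400 s.
def daily_bytes_per_user(link_bps: float, users: float) -> float:
    """Bytes/day each user can receive from each other user."""
    return (link_bps / 8) / users * 86400

print(round(daily_bytes_per_user(1e9, 8e9)))   # 1350      (~1.3 KB/day)
print(round(daily_bytes_per_user(1e9, 1e8)))   # 108000    (~105.4 KiB/day)
print(round(daily_bytes_per_user(10e9, 1e8)))  # 1080000   (~1.03 MiB/day)
```

Note the budget scales inversely with the user count, which is why the scheme only works for small text payloads on a network of limited size.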

Potential problems

  • Real-time messaging not possible. This is slow like courier.
  • Running servers from a residential area requires effort. ISPs and OS developers can make this difficult. Renting a cloud server to download the messages does not work, as the cloud server owner knows which subset of the messages you downloaded to your local machine or displayed.


Discuss

No, You Don't Need Self-Locating Evidence.

2026-03-20 13:38:34

Introduction

For a long time, I was planning to write a comprehensive post patiently exploring all the problems with conventional “anthropic reasoning”. How, for historical reasons, the whole discipline went sideways at some point and just can’t recover, continuing to apply confused frameworks, choosing between several ridiculous options and accumulating paradoxes. And how one should reason correctly about all the “anthropic problems”.

I’m sorry but this post isn’t going to be that. This time I’m mostly getting the frustration out of my system because of Bentham's Bulldog’s You Need Self-Locating Evidence! which confidently reiterates all the standard confusions, even though he really should know better at this point.

So, in this post I’ll resolve only some of the confusions of “anthropic reasoning”, leaving others as well as the deeper historical analysis for the future. Frankly, maybe it’s even for the best.

Probability theory 101

My apologies, but this section is necessary, as it’s exactly the sloppy probabilistic reasoning that led us to the current miserable state. I promise to be brief with this section.

Let’s start from explicitly defining what probabilities are. Probability theory gives us a mathematical model to approximate some causal processes from reality to some degree of uncertainty.

It’s very helpful to think in terms of maps and territories here. We look at some territory in the world and create an imperfect map of it. The less we know about the territory, the more generic the map. And when we learn new details, we add them to our map, making it more specific.

Consider a roll of a fair 6-sided die.

Imagine an infinite number of iterations where the die is rolled again and again - a probability experiment representing any roll of a fair 6-sided die. Every trial has an outcome: either ⚀ or ⚁ or ⚂ or ⚃ or ⚄ or ⚅. This set of mutually exclusive and collectively exhaustive outcomes of the probability experiment is called the sample space.

Sets of these outcomes are called events. The simplest are individual events, consisting only of a single outcome, but likewise there can be events consisting of any number of outcomes up to the whole sample space.

Events can be interpreted as statements that have truth values in every iteration of the experiment. For example, event {⚁; ⚃; ⚅} is interpreted as a statement:

“In this trial the die is even”.

Naturally, this statement is True in every iteration of the probability experiment where the die is either ⚁ or ⚃ or ⚅, and False in every other iteration. The probability of an event is the ratio of trials where this event is True to the total number of trials throughout the whole probability experiment.

With this in mind, let’s answer a simple question. What’s the probability that our die rolled a ⚅?

At first, we are completely indifferent between all of the iterations of the probability experiment. Our roll can be any of these infinitely many trials. But we know that 1/6 of them are ⚅. Therefore:

P(⚅) = 1/6

Now, suppose we’ve learned that the outcome of the roll is even. This gives us new information, makes our map more specific by eliminating half of the possible outcomes. Now we are indifferent only between the trials where the outcome of the die roll is even and 1/3 of them are ⚅, therefore:

P(⚅|⚁; ⚃; ⚅) = 1/3

If you can understand that, accept my congratulations, you understand probabilities better than most of the philosophers of probability. I wish I was joking.
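The two probabilities derived above are easy to verify empirically; a short Monte Carlo simulation of the probability-experiment framing (my addition, not the author's) makes the frequency interpretation concrete.

```python
# Monte Carlo check of P(six) = 1/6 and P(six | even) = 1/3,
# treating repeated rolls as trials of the probability experiment.
import random

random.seed(0)
N = 200_000
rolls = [random.randint(1, 6) for _ in range(N)]

p_six = sum(r == 6 for r in rolls) / N
even = [r for r in rolls if r % 2 == 0]
p_six_given_even = sum(r == 6 for r in even) / len(even)

print(abs(p_six - 1/6) < 0.01)            # True
print(abs(p_six_given_even - 1/3) < 0.01) # True
```

Conditioning on "even" simply restricts attention to the subset of trials where the statement {⚁; ⚃; ⚅} is True, exactly as the text describes.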

Possible Worlds, Impossible Confusions

Philosophers do tend to overcomplicate things sometimes. For reasons I’m not going to dwell on right now, instead of outcomes of a probability experiment, they decided to talk about “possible worlds” and then “centred possible worlds”, completely confusing themselves and everyone else.

As a part of this confusion, they came up with the notions of “Self-Locating” and “Non-Self-Locating Evidence”. Here is what BB tells us about this framework:

People often think probabilities and beliefs merely concern how the world is.

I think this is wrong.

Self-locating probabilities are probabilities that concern one’s place in the world, rather than what the world is like.

We may immediately come up with a couple of corrections. First of all, probabilities are not just about the way the world is. They are about some aspects of the world to the best of our knowledge. That’s why probabilities change when we learn new facts even though the territory we are describing may stay the same.

And where a particular person is positioned in the world is also a fact about the world, so the whole distinction makes no sense even in its own terms. A world where I’m in one city is different from a world where I’m in some other city. Obviously. So, case closed?

Oh, not so fast! You see, as an additional complication that would confuse everyone even more, philosophers long ago added the notion of personal identity:

For example, imagine that there is one clone of me in a dark and murky bunker in California and another in Paris (what rotten luck). I have no special evidence concerning which one I am (for example, there are no nearby croissants or people surrendering). I should think there’s a 50% chance that I am the one in California.

This evidence is self-locating because it’s not about what the world is like. I already know what the world is like. I know that there is one copy of me in California and another in Paris. What I’m uncertain about is which one I am. That’s what self-locating information concerns: which of the people you are, not what the world is like.

That is, in what sense are two worlds different if we switch the places of two completely identical people?

And, fair enough, it’s an interesting question in its own right. We can say that “switching places” isn’t a free action. We need to exert some work, which increases entropy in the universe. Therefore, the world where such switching has happened is different from a world where it didn’t. In the very least, they have different causal stories.

But more importantly, none of this matters in the slightest when talking about probabilities.

Once again, probability theory describes some real-world situation to the best of our knowledge. In the example above the situation is “either being in one place or the other” and the best of our knowledge is “no evidence whatsoever”.

So, we have a probability experiment with two mutually exclusive outcomes. In half of the iterations, I’m in Paris and in the other half - in California and I’m uncertain between all of them. Therefore:

P(California) = P(Paris) = 1/2

That’s all. It doesn’t matter whether there is or isn’t a clone in the other location. It doesn’t affect anything. Nor do we need to think about some alternative worlds and whether they are real and in which sense. It is completely irrelevant to our probabilistic model. There is absolutely no difference in methodology between this example and the 6-sided-die example from the beginning of the post. We don’t need a special category “Self-Locating Evidence” to talk about such probability experiments; it’s a completely useless concept.

The Crux

Wait, doesn’t it mean that I essentially agree with Bentham’s Bulldog? Sure, I’m annoyed with his terminology and the framework he is applying, but that’s just formalism; what about the substance? He argues that “Self-Locating Evidence” is not fake and we should treat it as any other evidence:

But a number of people have suggested that self-locating evidence is sort of fake. They claim that it doesn’t make any sense to wonder who I am once I know what the world is like. After all, there’s a copy of me in each situation. What can I possibly be wondering about if not what the world is like?

In this article, I’ll explain why self-locating evidence is real.

I claim that we shouldn’t even have a separate category for this sort of stuff in the first place, because all probabilistic reasoning works the exact same way in terms of probability experiment. What am I even arguing about?

Let’s make it clear with a handy Venn Diagram:

The problem with the “Self-Locating Evidence” category is that, while part of it is just completely normal probabilistic reasoning, the rest is total nonsense that goes against the core principles of probability theory and is the source of a constant stream of paradoxes.

People who say that “Self-Locating Evidence” is “sort of fake” are not wrong: a huge part of it is. But because the conversation is framed as either pro- or anti-self-locating-evidence, expressing this nuanced point is hard.

As a result, someone like BB can come up with an example of “Self-Locating Evidence” producing valid reasoning and then falsely generalize it to a domain where it doesn’t work. And when you try to point this out, such a person just says:

“What do you mean probability theory doesn’t work like that? Haven’t you heard about Self-Locating Evidence? Are you denying that I can have some credence whether I’m in Paris or in California? That’s crazy!”

That’s why the term should be abolished and we should just be talking about all the probability theoretic problems in a unified way in terms of probability experiments and their trials.

You may continue reading the post on my substack.



Discuss

Nullius in Verba

2026-03-20 11:19:31

Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far

Cultivating independent verification

Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has

created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures.

In this post, we’ll dive into the evidence for these claims, as well as Nectome’s overall approach to cultivating rigorous, independent validation of our methods—a cornerstone of the kind of preservation enterprise I want to be a part of.

To get to the current state-of-the-art required two major developmental milestones:

  • Idealized preservation. A method capable of preserving the nanostructure of the brain for small and large animals under idealized laboratory conditions. Specifically, could we preserve animals well if we were allowed to perfectly control the time and conditions of death?  

    This work (2015-2018) resulted in a brand-new technique—aldehyde-stabilized cryopreservation—which was carefully and independently vetted by Ken Hayworth of the Brain Preservation Foundation over a three-day marathon session during which I preserved a rabbit and a pig under his supervision. Afterwards, he reviewed multiple samples from both brains with electron microscopy. I published Aldehyde-Stabilized Cryopreservation in Cryobiology, and won the small- and large-mammal prizes from the BPF as a result. With this work, we had an existence proof: preserving entire brains long-term in nanoscale detail was absolutely achievable, at least under laboratory conditions.
  • Real-world preservation. A method capable of preserving the nanostructure of the brain under realistic conditions. Specifically, could we extend the laboratory method to work under the legal requirements and practical limitations that constrain real-world human cases?

    Adapting the technique to messy real-world conditions (2018-2025) took significantly more development, resulted in a bunch of insights about what is feasible and infeasible for human preservation, and shaped our entire approach to preservation going forward. In one memorable instance, once we finally had a technique that worked to our standard of rigor on pigs, we once again put it to the test in a marathon live demonstration. Andrew Critch, cofounder of the Survival and Flourishing Fund, personally witnessed the preservation of a rat under conditions that mimicked human preservation; the resulting brain samples were imaged in consultation with a microscopy lab at UC Berkeley and Professor Kasthuri at U Chicago. As a result of our demo, Andrew recommended us for an investment, which we've since received. The real-world technique has been submitted as a preprint, Ultrastructural preservation of a whole large mammal brain with a protocol compatible with human physician-assisted death.

The rest of the post is dedicated to unpacking these results.

Five quick notes as we begin:

  • By popular demand, this post is specifically about nanostructural preservation quality—achieving a level of detailed preservation throughout the entire brain and body such that synapses are traceable to their originating neurons and subsynaptic details are retained as well as traditional fixation methods used in neuroscience research. I’ll postpone the argument that whole-body nanoscale preservation is sufficient for future revival, as it deserves its own post.
  • A draft version of this post has been reviewed by Ken Hayworth, president of the Brain Preservation Foundation, and he signed off on it as: "an accurate description of the Brain Preservation Foundation, its history, Ken's personal motivation, and the results of the BPF's two preservation prizes". I've not substantially modified it since.
  • A draft version of this post has been reviewed by Andrew Critch, cofounder of the Survival and Flourishing Fund, and he signed off on it as: "an accurate description of his visit to Nectome to evaluate their preservation technology, and the later results." Again, it's not been substantially modified since.
  • Conflict of interest note: During grad school, I worked as a volunteer for the Brain Preservation Foundation for about a year. After learning more about brain preservation, I decided to quit as a volunteer and enter the prize myself, with Ken's approval.
  • You may notice that some of the references I cite throughout this post attribute my work to my deadname, Robert McIntyre. Today I go by my chosen name, Aurelia Song.

In the lab: Ken Hayworth and the BPF

What is the Brain Preservation Foundation?

Ken Hayworth is a neuroscientist currently working at Janelia Research (part of HHMI, the Howard Hughes Medical Institute). In 2010, Ken started the Brain Preservation Foundation and launched the Brain Preservation Prize as a challenge to the neuroscience and cryonics community. He wanted to see researchers provide evidence that their preservation could work according to neuroscientifically reasonable standards.

As a connectomicist, Ken is used to looking at 3D models of brain tissue created with electron microscopy. These models are scanned from brains preserved with the kind of high-quality fixation that's been standard in neuroscience for many years. After much serious thought about neuroscience, Ken has come to the conclusion that this level of physical preservation is overwhelmingly likely to capture the information necessary to restore a person in the future, and I'm inclined to agree. Again, I'll get to this in an upcoming post.

But the electron micrographs coming from the cryonics community didn't look like what he normally saw in the lab. There was no 3D analysis, just single frames. Worse, the tissue was severely dehydrated, making it difficult or impossible to tell whether the tissue was traceable, that is, whether each synapse could be traced back to its originating neurons.

The images above are taken from the BPF's Accreditation page. The left image is what "typical" brain tissue looks like -- the kind that Ken and other neuroscientists are used to studying. The right image is a cryoprotected animal brain[1]. It looks more "swirly" because it's been dehydrated by cryoprotectants. Ken started the Brain Preservation Prize, in part, to challenge the cryonics community to produce images more like the one on the left, so they could better evaluate whether their preservation techniques worked.

To Ken and to me, this is an enormous issue. There are many ways a brain can be rendered untraceable, and comparatively few that preserve its structure. In the absence of evidence to the contrary, we have to default to the assumption that a brain is not traceable. That, in turn, calls into question whether the information preserved in the brain is adequate.

In addition to challenging the cryonics community, Ken wanted to extend a challenge to the neuroscience community. He hoped that, making use of their advanced protocols for preparing and analyzing brain tissue, they could design a technique to preserve people for later revival.

Ken was inspired by the successful Ansari X Prize to issue his challenge in the form of a prize. He raised $100,000 from a secret donor[2], and set out the prize rules: brains had to be preserved in a way that rendered them connectomically traceable, and had to be preserved so that they would very likely last for at least 100 years. There was a small version of the prize for a "small" mammal brain (think rabbit, mouse, or rat), and a "large" mammal brain (pig, sheep, etc) would win the whole thing.

I can't overstate how influential the Brain Preservation Prize has been in advancing the field of preservation research. That $100,000 inspired me to build my protocol and led to millions of dollars of investment in better preservation. I'd love to see more scientific prizes; I think they help young people in research labs justify spending resources on important projects they're passionate about. A young researcher, like me back in 2014, can go to her superior and say "it's not just a personal project, it's for this prize."

A protocol that works under ideal conditions: Aldehyde-Stabilized Cryopreservation, 2015

When I started seriously looking into preservation techniques, it seemed to me that cryonics and neuroscience had opposite problems. Neuroscientists could almost instantly preserve a brain using aldehydes[3], but didn't have a long-term strategy to keep that brain intact for a hundred years or more. Cryonicists, meanwhile, struggled to avoid damaging a brain when they perfused it with cryoprotectants, but knew how to cool a perfused brain to vitrification temperature and keep it there indefinitely.

The obvious solution was to combine the two methods. I could use fixation's remarkable ability to stabilize biological tissue, buying time to introduce cryoprotectants into the brain slowly enough to avoid the crushing damage caused by rapid dehydration. Then, it would be safe to vitrify the brain for long-term preservation.

It took me about nine months to iron out all the details. The most difficult part was figuring out how to get cryoprotectants past the blood-brain barrier: it turned out that even very extended perfusion times, on their own, are not adequate to prevent dehydration. Eventually, though, I got the technique to work on rabbits (the "small mammal" model I was using). Modifying the protocol to work for pigs took me a single day and worked on the first try. I published the results of that research, Aldehyde-Stabilized Cryopreservation, in Cryobiology, the first step towards winning the Brain Preservation Prize.

Independent Verification by the Brain Preservation Foundation

The next step towards the prize required direct verification by the BPF. If you're interested, you can read their full methodology here.

At this time, I was working at 21st Century Medicine. Ken Hayworth flew out to my location and joined me for a marathon three-day, dawn-to-dark session, during which I preserved, vitrified, rewarmed, and processed a rabbit and a pig. Whenever Ken wasn't personally observing the brain samples, he secured them with tamper-proof stickers to preserve the chain of custody. When I had finished preparing the samples for electron microscopy, Ken personally performed the cutting and imaging of the samples back at Janelia.

This was a level of rigor I'd never observed before, certainly far beyond the peer review for the Cryobiology paper. This is something I admire about Ken, and I was grateful for it here. Preservation is worth being rigorous about!

The BPF prepared images using high-resolution focused ion beam milling and scanning electron microscopy (FIB-SEM). This technology produces resolutions of up to 4 nanometers; Ken scanned the prize submissions at 8 nm and 16 nm isotropic resolution. Together with the 3D nature of the images, this is sufficient to examine a brain sample and determine whether the synapses (typically about 100 nm wide) are traceable.

Of course, imaging a whole brain is well beyond our current capabilities. Ken compensated for this by analyzing many samples, randomly chosen from different regions of the brains. The BPF released all of the images and the original 3D data files, and they're still available today. I've included the pig brains below – click through on the images to see youtube videos showing the 3D imaging in full. Each sample is from a brain that was preserved, vitrified, and rewarmed.




Ken Hayworth was joined on the BPF's judging panel by Sebastian Seung, a Princeton/MIT neuroscientist, author of the book Connectome, and a major contributor to the FlyWire project. Together, they reviewed the 3D images, judged their quality, and traced neurons through the image stacks. In the end, they agreed that I had won the prize.


I present this as evidence that it's possible to preserve large mammal brains in a traceable state, every synapse intact, and keep them stable for more than a hundred years (the 'hundred years' part we will address in a future post on the thermodynamics of preservation).[4]

But ASC is not the whole story, because it must be done pre-mortem. End-of-life laws throughout the world weren't designed with the preservation of terminally ill clients in mind, and don't allow ASC as an option. To create something workable, I had to either find a way to do preservation post-mortem, or work to incorporate ASC into end-of-life laws. I chose to make preservation work post-mortem.

In the field: Andrew Critch and the SFF

Making preservation work in the real world turned out to be conceptually easy. The original protocol needs three modifications to work post-mortem.

  1. Cardiac arrest must happen quickly in order to avoid pre-mortem brain damage. We found that Medical Aid-in-Dying (MAiD) is required.
  2. You must use blood-thinners before cardiac arrest.
  3. You have to do the surgery fast. The perfusion needs to start less than about 12 minutes after death.

My dad used to tell me a story of a biology professor he had in college. The first day of class, the professor had everyone open their textbook and read the first paragraph in one of the last chapters. The professor then told everyone that it had taken him 30 years to write that paragraph. I now better understand how that professor must have felt. It took me nine months to create ASC. It took me nine years to modify it to work in our current legal context and write those three modifications above.

I won't get into those nine years in this post. I do want to share an image, though, that I'm publishing here for the first time. As far as I know this is the best preserved whole human brain in the world, and it belongs to a 46-year-old man who died of ALS and chose to donate his body for scientific research. I perfused his body just 90 minutes post-mortem—much faster than typical emergency cryopreservation services, but well outside the twelve-minute ischemic window.

Electron micrograph from the best human preservation I've done to-date. ~90 minutes post-mortem time from a MAiD donation case. The large white space in the middle is a capillary. Here you can find substantial perivascular edema (the white area around the capillary), as well as neuropil that's concerningly indistinct. I asked Ken Hayworth to review these images; he does not think they're traceable. Additionally, some regions of this brain failed to perfuse entirely; this is from a well-perfused region.

It is the best-preserved whole human brain I’ve ever seen. It is also—like every other human brain I preserved with any appreciable post-mortem delay—not traceable. It's not a quality I (or the BPF) can accept. Looking at the degree of damage scares me.

I originally thought that humans might have a two-hour post-mortem preservation window. If that had been true, I would have probably worked to integrate preservation into hospices across the country. After reviewing the electron micrographs from animals and humans under various preservation conditions, it became clear that the hospice model was nonviable. We couldn't wait for a person to die on their own timeline and only then begin our procedure. We'd need them to undergo a full process involving Medical Aid in Dying (MAiD)—and before we could promise any benefits from such a process, we needed to perfect it on animals.

It took a lot of refinement and expert consultation, but eventually we pinned down the twelve-minute window and the blood-thinner requirement through a series of experiments on rats. We then streamlined the procedure so it could be done in less than ten minutes on pig carcasses, and finally demonstrated excellent post-mortem preservation in a pig model. We've just recently published the results:

A 3D FIBSEM image of a pig brain preserved post-mortem. We were able to complete surgery in 4 minutes and 30 seconds, well within the critical twelve-minute window, and attained results that appear traceable. Additional results available as supplemental materials. Video linked below:


An H&E-stained light microscopy image of a pig cerebellum preserved post-mortem. While the FIBSEM shows good nanostructural preservation, this much lower-resolution image shows that a large area of the brain is preserved well.

Figure from our preprint. H&E stained light microscopy images from a poorly-preserved brain and a well-preserved brain (E & F, respectively). Note the substantial white regions present in the poorly-preserved tissue on the left. This is strong evidence of inadequate perfusion and compromised preservation. The difference between these two images is only a few minutes delay in starting preservation.

Independent evaluation by the SFF

About this time, I was chatting with Andrew Critch, cofounder of the Survival and Flourishing Fund (SFF). Born from Jaan Tallinn's philanthropic efforts, the SFF is dedicated to the long-term survival and flourishing of sentient life. They recommended $34MM of grants in 2025, including support for the AI Futures Project, Lightcone Infrastructure, and MIRI, among many others.

Andrew was interested in evaluating Nectome for an SFF grant. We talked it over and agreed on a third-party evaluation with real stakes: he'd travel to our lab in Vancouver, Washington to witness and evaluate a preservation first-hand, then bring the samples himself to an EM lab to scan them, and then ask a neuroscientist of his choice to review the sample quality. If he liked what he saw, he'd support our application to SFF's grants team. If we didn't live up to the quality we promised, he'd inform the team accordingly. (SFF uses a distributed grant-making process where each team member has a separate budget for making grant recommendations with substantial discretion.)

When Andrew arrived at our lab, we introduced him to our test rat[5], and he observed as I gave the rat an injection of heparin (our blood thinner of choice), followed promptly by simulated medical aid-in-dying. He then timed me as I waited five minutes after the rat's heart stopped, mimicking the time I would have spent performing surgery on a pig or a human.[6]

From there, we proceeded with the tedious 9-hour process: blood washout, fixation, and the slow ramp of cryoprotectants. Andrew watched from start to finish. It was late at night before the preservation was complete, and Andrew watched us remove the rat’s brain and perform a visual check for gross failures of perfusion. There were none.

At this point we could have simply placed the brain in cold storage and then handed off the tissue for further evaluation, but I wanted to demonstrate just how robust our current method is instead. I cut the brain into two hemispheres, put one in cold storage at -32°C (-26°F) as a demonstration of the effectiveness of the cryoprotectant at preventing ice formation, and put the other hemisphere in a laboratory oven at 60°C (140°F) overnight. Just as cold storage slows chemical processes, warmth accelerates them; twelve hours at 60°C is equivalent to, conservatively, a week at room temperature.
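The accelerated-aging arithmetic can be sanity-checked with the standard Q10 rule of thumb, under which reaction rates roughly double for every 10°C increase. The Q10 value of 2 and the ~22°C room-temperature reference are assumptions of this sketch, not figures from the post:

```python
def q10_acceleration(t_hot_c: float, t_ref_c: float, q10: float = 2.0) -> float:
    """Rate multiplier between two temperatures under the Q10 rule of thumb:
    rates scale by a factor of q10 for every 10 degrees Celsius."""
    return q10 ** ((t_hot_c - t_ref_c) / 10.0)

# 12 hours at 60 C, referenced to an assumed ~22 C room temperature
factor = q10_acceleration(60, 22)       # 2**3.8, roughly 14x faster
equivalent_days = 12 * factor / 24      # roughly 7 days
print(f"{factor:.1f}x rate -> ~{equivalent_days:.1f} days at room temperature")
```

Under these assumptions the 12-hour oven test indeed corresponds to about a week of room-temperature storage, consistent with the "conservatively, a week" figure in the text.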

When we returned the next day, we sliced each hemisphere into paper-thin slices and Andrew spun up his quantum random number generator.[7] He used it to randomly select four slices from each hemisphere for analysis. We sent him home with an introduction to Berkeley's electron microscopy core facilities, which immediately started the week-long process of prepping the tissue for imaging, including staining, resin embedding, and slicing into 90-nanometer sections.

After examining the electron micrographs and consulting with several neuroscientists, Andrew determined that our preservation was excellent, that the brain was connectomically traceable, and that both the "cold" and the "hot" slices were of near-identical preservation quality. He recommended us for a $550,000 investment, which we've since received.

We'd like to present this data to you as well. The overall dataset obtained from Berkeley was massive; a single image from one of our samples is around 5 GB and requires special software to view. I've prepared two representative images using deepzoom, here:

Sample from a rat brain preserved using Nectome’s methods, then stored at 60°C for 12 hours ("hot" storage). Electron microscopy performed at the Berkeley EM Core. Click here to see the complete dataset.


Sample from a rat brain preserved using Nectome’s methods, then stored at -32°C for 12 hours ("cold" storage). Electron microscopy performed at the Berkeley EM Core. Click here to see the complete dataset.

What's next?

We'll be in the comments again for a few hours, ready to answer your questions. Our sale is still available. The next post, by popular demand, will be about how we can know whether preservation is good enough prior to actually restoring someone. I'll see you in the comments!

A single synapse from our rat brain demo, preserved after 5 minutes of ischemia and stored at 60°C for 12 hours. The dark curve is the junction between the two neurons. Those tiny grains at the bottom of the synapse are individual vesicles, still filled with neurotransmitter, suspended in place by fixation. The larger gray sphere near the vesicles is a mitochondrion that helps power the synapse. You can see individual cytoskeletal details. The individual proteins are also still there, though they're not distinguishable at this level of resolution. This is what I mean by "subsynaptic" preservation.

Previous: Less Dead

  1. ^

     Greg Fahy has recently released a preprint discussing cryoprotectant dehydration and some ways to reverse it in rabbit brains, check it out too!

  2. ^

     This donor has since been revealed to be Saar Wilf.

  3. ^

     Common choices are formaldehyde or glutaraldehyde.

  4. ^

     ASC actually does better than preserving every synapse – it also retains virtually all proteins, nucleic acids, and lipids. I'll get into the evidence for that in a later post.

  5. ^

     We nicknamed the rat Chandra. Andrew was sad about us experimenting on animals, and asked us if we'd try to help preserve and reanimate non-human animals in the future, and of course we said yes!

  6. ^

     I've actually recorded a time of 4 minutes 30 seconds in pigs. But I like to leave myself a little wiggle room.

  7. ^

     I've never met someone else who routinely uses QRNGs for their decisions :)



Discuss

Does Hebrew Have Verbs?

2026-03-20 11:04:47

Spinoza's Compendium of Hebrew Grammar (1677, posthumous, unfinished) contains a claim that scholars have been misreading for centuries. He says that all Hebrew words, except a few particles, are nouns. The standard scholarly reaction is that this is either a metaphysical imposition (projecting his monistic ontology onto grammar) or a terminological trick (defining "noun" so broadly it's vacuous). Both reactions wrongly import Greek and Latin grammatical categories and then treat those categories as the neutral baseline.

Spinoza's Claim: Hebrew's All Nouns

From Chapter 5 of the Compendium (Bloom translation, 1962):

"By a noun I understand a word by which we signify or indicate something that is understood. However, among things that are understood there can be either things and attributes of things, modes and relationships, or actions, and modes and relationships of actions."

And:

"For all Hebrew words, except for a few interjections and conjunctions and one or two particles, have the force and properties of nouns. Because the grammarians did not understand this they considered many words to be irregular which according to the usage of language are most regular."

The word "noun" here is nomen. It means "name." Spinoza is saying: almost every Hebrew word is a name for something understood. This includes names for actions, names for relationships, names for attributes. His taxonomy of intelligible content explicitly includes actions and modes of actions alongside things and attributes.

The Vacuousness Objection

The obvious objection is: if "noun" covers actions as well as things, then the claim that "all words are nouns" is trivially true and does no work. Any content word names something intelligible; so what?

But this objection assumes that a useful grammar must draw a hard categorical line between nouns and verbs, and that Spinoza's refusal to draw it is therefore vacuous. That assumption is embedded in the Greek grammatical tradition; it is not a fact about Hebrew.

Semitic Roots

In Hebrew (and Arabic, Akkadian, and other Semitic languages), words are generated from consonantal roots—typically trilateral—by applying vowel patterns and affixes. The root כ-ת-ב generates katav (he wrote), kotev (one who writes), ktav (writing/script), mikhtav (letter), katvan (scribbler). The morphological operation is the same in every case: take the root, apply a pattern that describes the relation of the concept to the thing you are describing. For example, mikhtav is something that is made-written, a letter, much like the Arabic mameluke is someone who is made-owned, a slave. Whether the output functions as what a Greek grammarian would call a "noun" or a "verb" depends on which pattern you applied, not on some fundamentally different generative process.
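The root-and-pattern generation described above can be sketched as a toy function. This is an illustrative simplification: real Hebrew morphology involves vowel alternations and consonant spirantization (e.g. the k in mikhtav softening to kh) that this ignores, and the transliterations are schematic:

```python
def apply_pattern(root, pattern: str) -> str:
    """Interdigitate a triconsonantal root into a vowel/affix pattern.
    The digits '1', '2', '3' in the pattern mark the slots for the
    first, second, and third root consonants."""
    c1, c2, c3 = root
    return pattern.replace("1", c1).replace("2", c2).replace("3", c3)

KTV = ("k", "t", "v")  # the root for "writing"

# One generative operation; different patterns yield what Greek grammar
# would call different "parts of speech":
print(apply_pattern(KTV, "1a2a3"))   # katav  - "he wrote"
print(apply_pattern(KTV, "1o2e3"))   # kotev  - "one who writes"
print(apply_pattern(KTV, "mi12a3"))  # miktav - "letter" (pronounced mikhtav)
```

The point of the sketch is that the "noun" and "verb" outputs come from the same machinery: the root supplies the concept, and the pattern supplies its relation to what is being described.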

This is not how Greek or Latin works. In those languages, nouns and verbs belong to largely separate inflectional systems (though they do have participles). Nouns decline for case and number; verbs conjugate for tense, aspect, mood, and person. A Greek speaker can usually tell from a word's form alone which category it belongs to. The noun/verb distinction corresponds to a real difference in morphological machinery.

In Hebrew, it doesn't. The grammarians who insisted on the distinction—both the rabbinical grammarians working in the Arabic tradition and the Christian Hebraists working from Latin—were forcing Hebrew into a framework designed for languages with a different structure. The result, as Spinoza observed, was that regular Hebrew forms got classified as irregular, because they didn't respect a boundary the language doesn't draw.

Where the noun/verb distinction comes from

The Arabic grammatical tradition, which the medieval rabbinical Hebrew grammarians adopted wholesale, classifies words into three categories: ism (noun/name), fi'l (verb/action), and ḥarf (particle). Scholars have long noted the parallel between this trichotomy and Aristotle's division of speech into onoma (name), rhema (verb/predicate), and sundesmos (connective); Syriac scholars were important intermediaries in transmitting Greek linguistic thought to Arabic, though the degree of direct dependence remains debated. [1] The classification reached Hebrew grammar through two independent routes: Greek → Latin → Christian Hebraists, and Greek → Arabic → rabbinical grammarians. Both paths originate in Greek philosophy.

Judah ben David Hayyuj (c. 945–1000), the founder of scientific Hebrew grammar, applied Arabic grammatical theory to Hebrew, including the ism/fi'l/ḥarf trichotomy and the principle that all Hebrew roots are trilateral. [2] His technical terms were translations of Arabic grammatical terms. Jonah ibn Janah (c. 990–1055) extended this work, producing the first complete Hebrew grammar and drawing explicitly from the Arabic grammatical works of Sibawayh and al-Mubarrad. [3] When Spinoza complained that "the grammarians" misunderstood Hebrew, this is the tradition he was arguing against.

Aristotle's noun/verb distinction is not just a grammatical observation. It reflects his substance/predication ontology. The world consists of substances (things that exist independently) and predicates (things said about substances). A noun names a substance; a verb predicates something of it. The sentence "Socrates runs" has the structure: substance + predication. The grammar encodes the metaphysics.

Dramas and Graphs

Greek and similar languages have different pools of words for filling the grammatical roles of noun and verb. Hebrew has one pool of roots that supplies words for both roles, depending on the pattern applied. These aren't just two different ways of doing the same thing. They reflect different structural priorities.

The Indo-European system is built around assembling a scene: placing distinct actors into relationships with distinct actions. You need different building blocks for the actors and the actions because they play different structural roles in the scene. Who did what to whom, when, in what manner. Case endings on nouns tell you the role; verb conjugation tells you the temporal and modal frame. The grammar presupposes that the actor/action distinction is primitive.

The Semitic system works differently. Each root is a node in a flat graph of intelligibles. The graph doesn't recurse; roots refer to intelligible things, not to relations between other roots. And it doesn't privilege any type of node over any other, which is why the morphological system treats them all with the same machinery. It does not start by assigning one word the role of "the thing" and another the role of "what the thing does."

A sentence picks out some nodes from this graph, and casts them into some definite relation to each other. Their arrangement and patterns of modification describe the way in which these intelligibles are related: process, agent, result, instrument, quality, location.

When you take the Greek-derived framework and impose it on Hebrew, you're asking a flat graph of intelligibles to behave like a scene-assembly system. The spurious irregularities Spinoza complained about are projections of the friction from this mismatch.

I'm not projecting, you're projecting!

The standard scholarly line is that Spinoza projected his philosophical commitments onto his grammar; that his monism (one substance, everything else is modes) motivated his claim that Hebrew has one part of speech with subcategories rather than two fundamentally different parts of speech. Harvey (2002) argues that the Compendium's linguistic categories parallel the conceptual categories of the Ethics. [4] Rozenberg (2025) goes further, claiming Spinoza "project[ed] the characteristics of Latin onto Hebrew" and thereby "neglected the dynamism of Hebrew." [5] Stracenski provides a more sympathetic reading but still frames the question as whether the Compendium serves the Ethics' metaphysics or the Tractatus' hermeneutics. [6]

This gets the direction of explanation backwards, or at least sideways. Spinoza was reading a Semitic language and describing how it actually generates words. The fact that his description aligns with his metaphysics may reflect a common cause: both the grammar and the metaphysics are what you get when you don't take the Aristotelian actor/action distinction as a primitive. Spinoza rejected Aristotle's substance/predicate ontology in the Ethics; he also noticed that Aristotle's noun/verb grammar didn't fit Hebrew.

  1. Aristotle divides lexis into onoma, rhema, and sundesmos in Poetics 1456b–1457a. Farina documents how this tripartite scheme reached Arabic grammar via Syriac translations of Aristotle's Organon, with Syriac Christians serving as intermediaries between Greek and Arabic linguistic thought. See Margherita Farina, "The interactions between the Syriac, Arabic and Greek traditions," International Encyclopedia of Language and Linguistics, Third Edition, 2025. The question of whether Sibawayh's ism/fi'l/ḥarf directly derives from Aristotle or represents independent development remains actively debated; the structural parallels are clear even if the exact transmission pathway is contested.

  2. On Hayyuj's application of Arabic grammar to Hebrew and his establishment of the trilateral root principle, see the Jewish Encyclopedia entry on "Root". His Wikipedia biography notes that "the technical terms still employed in current Hebrew grammars are most of them simply translations of the Arabic terms employed by Hayyuj."

  3. Ibn Janah's Kitab al-Luma was the first complete Hebrew grammar. It drew from Arabic grammatical works including those of Sibawayh and al-Mubarrad. See also the Jewish Virtual Library entry on Hebrew linguistic literature.

  4. Warren Zev Harvey, "Spinoza's Metaphysical Hebraism," in Heidi M. Ravven and Lenn E. Goodman, eds., Jewish Themes in Spinoza's Philosophy (Albany: SUNY Press, 2002), 107–114.

  5. Jacques J. Rozenberg, "Spinoza's Compendium: Between Hebrew and Latin Grammars of the Middle Ages and the Renaissance, Verbs versus Nouns," International Philosophical Quarterly, online first, October 26, 2025, DOI: 10.5840/ipq20251024258.

  6. Inja Stracenski, "Spinoza's Compendium of the Grammar of the Hebrew Language," Parrhesia 32. Stracenski notes the divide between historical approaches (Klijnsmit, placing Spinoza within Jewish grammatical tradition) and philosophical approaches (Harvey, connecting the Compendium to the Ethics). See also Guadalupe González Diéguez's companion chapter in A Companion to Spinoza (Wiley, 2021) and Steven Nadler, "Aliquid remanet: What Are We to Do with Spinoza's Compendium of Hebrew Grammar?" Journal of the History of Philosophy 56, no. 1 (2018): 155–167.
