2026-03-20 17:02:10
2026-01-17
Disclaimer
Why might you expect IPv4 address prices to go up?
Compatibility between IPv4 and IPv6
Market players
Current adoption
More details on website owners using dual stack
Multiple ways ISPs deal with an increasing number of users.
More details on ISPs using CGNAT or dual stack
My question: Who is buying more IPv4 addresses (and driving the price increase)? Is it website owners or ISPs?
Maybe CGNAT means IPv4 address space won't exhaust?
Side Note: Leasing IPs
Misc
2026-03-20 16:58:00
2026-03-18
Disclaimer
Why this list?
build
persuade
cyber
interesting cyber
(getting bored of listing cyber capabilities, let me move on to the next one)
interesting bioweapons
genetic engineering
nanotech
aliens
chemical
lots of future weapons
I got bored of writing this document. I could artificially create variations of the existing scenarios if I just wanted to fill this page with a hundred scenarios for the heck of it, but that is boring and not a good use of my time.
2026-03-20 16:52:43
Lesswrong disclaimer
2025-06-18
Disclaimer
Intelligence-agency-resistant internet anonymity is hard because the physical infrastructure can be inspected by someone with a monopoly on violence.
Success criteria of attacker
Attack 1: Get view access into majority of exit nodes
Attack 2: Wiretap source and receiver machine
What if the sender just sent the message to everyone instead of sending it to their intended receiver?
Throughput
Potential problems
2026-03-20 13:38:34
For a long time, I was planning to write a comprehensive post patiently exploring all the problems with conventional “anthropic reasoning”. How, for historical reasons, the whole discipline went sideways at some point and just can’t recover, continuing to apply confused frameworks, choosing between several ridiculous options and accumulating paradoxes. And how one should reason correctly about all the “anthropic problems”.
I’m sorry but this post isn’t going to be that. This time I’m mostly getting the frustration out of my system because of Bentham's Bulldog’s You Need Self-Locating Evidence! which confidently reiterates all the standard confusions, even though he really should know better at this point.
So, in this post I’ll resolve only some of the confusions of “anthropic reasoning”, leaving others as well as the deeper historical analysis for the future. Frankly, maybe it’s even for the best.
My apologies, but this section is necessary, as it’s exactly the sloppy probabilistic reasoning that led us to the current miserable state. I promise to be brief with this section.
Let’s start by explicitly defining what probabilities are. Probability theory gives us a mathematical model that approximates some causal process in reality, up to some degree of uncertainty.
It’s very helpful to think in terms of maps and territories here. We look at some territory in the world and create an imperfect map of it. The less we know about the territory, the more generic the map is. And when we learn new details, we add them to our map, making it more specific.
Consider a roll of a fair 6-sided die.
Imagine an infinite number of iterations where the die is rolled again and again - a probability experiment representing any roll of a fair 6-sided die. Every trial has an outcome: either ⚀ or ⚁ or ⚂ or ⚃ or ⚄ or ⚅. This set of mutually exclusive and collectively exhaustive outcomes of the probability experiment is called the sample space.
Sets of these outcomes are called events. The simplest are individual events, consisting only of a single outcome, but likewise there can be events consisting of any number of outcomes up to the whole sample space.
Events can be interpreted as statements that have truth values in every iteration of the experiment. For example, event {⚁; ⚃; ⚅} is interpreted as a statement:
“In this trial the die is even”.
Naturally, this statement is True in every iteration of the probability experiment where the die is either ⚁ or ⚃ or ⚅, and False in every other iteration. The probability of an event is the ratio of trials in which the event is True to the total number of trials throughout the whole probability experiment.
With this in mind, let’s answer a simple question. What’s the probability that our die rolled a ⚅?
At first, we are completely indifferent between all of the iterations of the probability experiment. Our roll can be any of these infinitely many trials. But we know that 1/6 of them are ⚅. Therefore:
P(⚅) = 1/6
Now, suppose we’ve learned that the outcome of the roll is even. This gives us new information, makes our map more specific by eliminating half of the possible outcomes. Now we are indifferent only between the trials where the outcome of the die roll is even and 1/3 of them are ⚅, therefore:
P(⚅|⚁; ⚃; ⚅) = 1/3
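If the “ratio of trials” definition feels abstract, here is a minimal sketch in Python that approximates both answers by counting frequencies over many simulated rolls; it assumes nothing beyond a fair 6-sided die, and it is only an illustration of the definitions above.

```python
import random

# Approximate the "ratio of trials" definition with a large but finite
# number of iterations of the probability experiment.
TRIALS = 1_000_000
six = 0           # trials where the die shows 6
even = 0          # trials where the die shows an even face
six_and_even = 0  # trials where the die shows 6, among the even trials

for _ in range(TRIALS):
    roll = random.randint(1, 6)
    if roll == 6:
        six += 1
    if roll % 2 == 0:
        even += 1
        if roll == 6:
            six_and_even += 1

print(six / TRIALS)         # P(6): ratio of 6-trials to all trials, ~1/6
print(six_and_even / even)  # P(6 | even): ratio within the even trials, ~1/3
```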
If you can understand that, accept my congratulations: you understand probabilities better than most philosophers of probability. I wish I were joking.
Philosophers do tend to overcomplicate things sometimes. For reasons I’m not going to dwell on right now, instead of outcomes of a probability experiment, they decided to talk about “possible worlds” and then “centred possible worlds”, completely confusing themselves and everyone else.
As a part of this confusion, they came up with the notions of “Self-Locating” and “Non-Self-Locating Evidence”. Here is what BB tells us about this framework:
People often think probabilities and beliefs merely concern how the world is.
I think this is wrong.
Self-locating probabilities are probabilities that concern one’s place in the world, rather than what the world is like.
We may immediately come up with a couple of corrections. First of all, probabilities are not just about the way the world is. They are about some aspects of the world to the best of our knowledge. That’s why probabilities change when we learn new facts even though the territory we are describing may stay the same.
And where a particular person is positioned in the world is also a fact about the world, so the whole distinction makes no sense even on its own terms. A world where I’m in one city is different from the world where I’m in some other city. Obviously. So, case closed?
Oh, not so fast! You see, as an additional complication that confuses everyone even more, philosophers long ago added the notion of personal identity:
For example, imagine that there is one clone of me in a dark and murky bunker in California and another in Paris (what rotten luck). I have no special evidence concerning which one I am (for example, there are no nearby croissants or people surrendering). I should think there’s a 50% chance that I am the one in California.
This evidence is self-locating because it’s not about what the world is like. I already know what the world is like. I know that there is one copy of me in California and another in Paris. What I’m uncertain about is which one I am. That’s what self-locating information concerns: which of the people you are, not what the world is like.
That is, in what sense are two worlds different if we switch the places of two completely identical people?
And, fair enough, it’s an interesting question in its own right. We can say that “switching places” isn’t a free action. We need to exert some work, which increases entropy in the universe. Therefore, the world where such switching has happened is different from a world where it didn’t. At the very least, they have different causal stories.
But more importantly, none of this matters in the slightest when talking about probabilities.
Once again, probability theory describes some real-world situation to the best of our knowledge. In the example above the situation is “either being in one place or the other” and the best of our knowledge is “no evidence whatsoever”.
So, we have a probability experiment with two mutually exclusive outcomes. In half of the iterations I’m in Paris, in the other half I’m in California, and I’m uncertain between all of them. Therefore:
P(California) = P(Paris) = 1/2
That’s all. It doesn’t matter whether there is or isn’t a clone in the other location. It doesn’t affect anything. Nor do we need to think about some alternative worlds and whether, and in what sense, they are real. It is completely irrelevant to our probabilistic model. There is absolutely no difference in methodology between this example and the 6-sided-die example from the beginning of the post. We don’t need a special category of “Self-Locating Evidence” to talk about such probability experiments; it’s a completely useless concept.
Wait, doesn’t it mean that I essentially agree with Bentham’s Bulldog? Sure, I’m annoyed with his terminology and the framework he is applying, but that’s just formalism; what about the substance? He argues that “Self-Locating Evidence” is not fake and we should treat it as any other evidence:
But a number of people have suggested that self-locating evidence is sort of fake. They claim that it doesn’t make any sense to wonder who I am once I know what the world is like. After all, there’s a copy of me in each situation. What can I possibly be wondering about if not what the world is like?
In this article, I’ll explain why self-locating evidence is real.
I claim that we shouldn’t even have a separate category for this sort of stuff in the first place, because all probabilistic reasoning works the exact same way in terms of probability experiment. What am I even arguing about?
Let’s make it clear with a handy Venn Diagram:
The problem with the “Self-Locating Evidence” category is that, while some part of it is just completely normal probabilistic reasoning, the other part is total nonsense that goes against the core principles of probability theory and is the source of a constant stream of paradoxes.
People who say that “Self-Locating Evidence” is “sort of fake” are not wrong - a huge part of it is. But because the conversation is framed in either a pro- or anti-self-locating-evidence way, expressing this nuanced point is hard.
As a result, someone like BB can come up with an example of “Self-Locating Evidence” producing valid reasoning and then falsely generalize it to a domain where it doesn’t work. And when you try to point this out, such a person just says:
“What do you mean probability theory doesn’t work like that? Haven’t you heard about Self-Locating Evidence? Are you denying that I can have some credence whether I’m in Paris or in California? That’s crazy!”
That’s why the term should be abolished, and we should just talk about all the probability-theoretic problems in a unified way, in terms of probability experiments and their trials.
2026-03-20 11:19:31
Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far
Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures.
In this post, we’ll dive into the evidence for these claims, as well as Nectome’s overall approach to cultivating rigorous, independent validation of our methods—a cornerstone of the kind of preservation enterprise I want to be a part of.
Getting to the current state of the art required two major developmental milestones:
The rest of the post is dedicated to unpacking these results.
Five quick notes as we begin:
Ken Hayworth is a neuroscientist currently working at the Janelia Research Campus (part of HHMI, the Howard Hughes Medical Institute). In 2010, Ken started the Brain Preservation Foundation and launched the Brain Preservation Prize as a challenge to the neuroscience and cryonics communities. He wanted to see researchers provide evidence that their preservation could work according to neuroscientifically reasonable standards.
As a connectomicist, Ken is used to looking at 3D models of brain tissue created with electron microscopy. These models are scanned from brains preserved with the kind of high-quality fixation that's been standard in neuroscience for many years. After much serious thought about neuroscience, Ken has come to the conclusion that this level of physical preservation is overwhelmingly likely to capture the information necessary to restore a person in the future, and I'm inclined to agree. Again, I'll get to this in an upcoming post.
But the electron micrographs coming from the cryonics community didn't look like what he normally saw in the lab. There was no 3D analysis, just single frames. Worse, the tissue was severely dehydrated, making it difficult or impossible to tell whether the tissue was traceable, that is, whether each synapse could be traced back to its originating neurons.
The images above are taken from the BPF's Accreditation page. The left image is what "typical" brain tissue looks like -- the kind that Ken and other neuroscientists are used to studying. The right image is a cryoprotected animal brain[1]. It looks more "swirly" because it's been dehydrated by cryoprotectants. Ken started the Brain Preservation Prize, in part, to challenge the cryonics community to produce images more like the one on the left, so they could better evaluate whether their preservation techniques worked.
To Ken and to me, this is an enormous issue. There are many ways a brain can be rendered untraceable, and comparatively few that preserve its structure. In the absence of evidence to the contrary, we have to default to the assumption that a brain is not traceable. That, in turn, calls into question whether the information preserved in the brain is adequate.
In addition to challenging the cryonics community, Ken wanted to extend a challenge to the neuroscience community. He hoped that, making use of their advanced protocols for preparing and analyzing brain tissue, they could design a technique to preserve people for later revival.
Ken was inspired by the successful Ansari X Prize to issue his challenge in the form of a prize. He raised $100,000 from a secret donor[2], and set out the prize rules: brains had to be preserved in a way that rendered them connectomically traceable, and had to be preserved so that they would very likely last for at least 100 years. There was a small version of the prize for a "small" mammal brain (think rabbit, mouse, or rat), and a "large" mammal brain (pig, sheep, etc) would win the whole thing.
I can't overstate how influential the Brain Preservation Prize has been in advancing the field of preservation research. That $100,000 inspired me to build my protocol and led to millions of dollars of investment in better preservation. I'd love to see more scientific prizes; I think they help young people in research labs justify spending resources on important projects they're passionate about. A young researcher, like me back in 2014, can go to her superior and say "it's not just a personal project, it's for this prize."
When I started seriously looking into preservation techniques, it seemed to me that cryonics and neuroscience had opposite problems. Neuroscientists could almost instantly preserve a brain using aldehydes[3], but didn't have a long-term strategy to keep that brain intact for a hundred years or more. Cryonicists, meanwhile, struggled to avoid damaging a brain when they perfused it with cryoprotectants, but knew how to cool a perfused brain to vitrification temperature and keep it there indefinitely.
The obvious solution was to combine the two methods. I could use fixation's remarkable ability to stabilize biological tissue, buying time to introduce cryoprotectants into the brain slowly enough to avoid the crushing damage caused by rapid dehydration. Then, it would be safe to vitrify the brain for long-term preservation.
It took me about nine months to iron out all the details. The most difficult part was figuring out how to get cryoprotectants past the blood-brain barrier: it turned out that even very extended perfusion times, on their own, are not adequate to prevent dehydration. Eventually, though, I got the technique to work on rabbits (the "small mammal" model I was using). Modifying the protocol to work for pigs took me a single day and worked on the first try. I published the results of that research, Aldehyde-Stabilized Cryopreservation, in Cryobiology, the first step towards winning the Brain Preservation Prize.
The next step towards the prize required direct verification by the BPF. If you're interested, you can read their full methodology here.
At this time, I was working at 21st Century Medicine. Ken Hayworth flew out to my location and joined me for a marathon three-day, dawn-to-dark session, during which I preserved, vitrified, rewarmed, and processed a rabbit and a pig. Whenever Ken wasn't personally observing the brain samples, he secured them with tamper-proof stickers to preserve the chain of custody. When I had finished preparing the samples for electron microscopy, Ken personally performed the cutting and imaging of the samples back at Janelia.
This was a level of rigor I'd never observed before, certainly far beyond the peer review for the Cryobiology paper. This is something I admire about Ken, and I was grateful for it here. Preservation is worth being rigorous about!
The BPF prepared images using high-resolution focused ion beam milling and scanning electron microscopy (FIB-SEM). This technology produces resolutions of up to 4 nanometers; Ken scanned the prize submissions at 8 nm and 16 nm isotropic resolution. Together with the 3D nature of the images, this is sufficient to examine a brain sample and determine whether the synapses (typically about 100 nm wide) are traceable.
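As a rough sense of scale (a back-of-the-envelope sketch using only the approximate figures just quoted, not a calculation from the BPF): at 8 nm isotropic resolution, a ~100 nm synapse spans on the order of a dozen voxels along each axis, which is why synapse-scale structures remain resolvable in the 3D data.

```python
# Back-of-the-envelope voxel count for a synapse-sized structure,
# using the approximate figures quoted in the text above.
synapse_nm = 100   # typical synapse width
voxel_nm = 8       # isotropic scan resolution used for the prize submissions

per_axis = synapse_nm / voxel_nm  # ~12.5 voxels across a synapse
total = per_axis ** 3             # ~2,000 voxels in a synapse-sized cube
print(per_axis, total)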
Of course, imaging a whole brain is well beyond our current capabilities. Ken compensated for this by analyzing many samples, randomly chosen from different regions of the brains. The BPF released all of the images and the original 3D data files, and they're still available today. I've included the pig brains below – click through on the images to see youtube videos showing the 3D imaging in full. Each sample is from a brain that was preserved, vitrified, and rewarmed.
Ken Hayworth was joined on the BPF's judging panel by Sebastian Seung, a Princeton/MIT neuroscientist, author of the book Connectome, and a major contributor to the FlyWire project. Together, they reviewed the 3D images, judged their quality, and traced neurons through the image stacks. In the end, they agreed that I had won the prize.
Relevant links:
I present this as evidence that it's possible to preserve large mammal brains in a traceable state, every synapse intact, and keep them stable for more than a hundred years (the 'hundred years' part we will address in a future post on the thermodynamics of preservation).[4]
But ASC is not the whole story, because it must be done pre-mortem. End-of-life laws throughout the world weren't designed with the preservation of terminally ill clients in mind, and don't allow ASC as an option. In order to create something workable, I had to either find a way to do preservation post-mortem, or work to incorporate ASC into end-of-life laws. I chose to make preservation work post-mortem.
Making preservation work in the real world turned out to be conceptually easy. The original protocol needs three modifications to work post-mortem.
My dad used to tell me a story of a biology professor he had in college. The first day of class, the professor had everyone open their textbook and read the first paragraph in one of the last chapters. The professor then told everyone that it had taken him 30 years to write that paragraph. I now better understand how that professor must have felt. It took me nine months to create ASC. It took me nine years to modify it to work in our current legal context and write those three modifications above.
I won't get into those nine years in this post. I do want to share an image, though, that I'm publishing here for the first time. As far as I know this is the best preserved whole human brain in the world, and it belongs to a 46-year-old man who died of ALS and chose to donate his body for scientific research. I perfused his body just 90 minutes post-mortem—much faster than typical emergency cryopreservation services, but well outside the twelve-minute ischemic window.
Electron micrograph from the best human preservation I've done to-date. ~90 minutes post-mortem time from a MAiD donation case. The large white space in the middle is a capillary. Here you can find substantial perivascular edema (the white area around the capillary), as well as neuropil that's concerningly indistinct. I asked Ken Hayworth to review these images; he does not think they're traceable. Additionally, some regions of this brain failed to perfuse entirely; this is from a well-perfused region.
It is the best-preserved whole human brain I’ve ever seen. It is also—like every other human brain I preserved with any appreciable post-mortem delay—not traceable. It's not a quality I (or the BPF) can accept. Looking at the degree of damage scares me.
I originally thought that humans might have a two-hour post-mortem preservation window. If that had been true, I would have probably worked to integrate preservation into hospices across the country. After reviewing the electron micrographs from animals and humans under various preservation conditions, it became clear that the hospice model was nonviable. We couldn't wait for a person to die on their own timeline and only then begin our procedure. We'd need them to undergo a full process involving Medical Aid in Dying (MAiD)—and before we could promise any benefits from such a process, we needed to perfect it on animals.
It took a lot of refinement and expert consultation, but eventually we pinned down the twelve-minute window and blood thinner through a series of experiments on rats. We then streamlined the procedure so it could be done in less than ten minutes on pig carcasses, and finally demonstrated excellent post-mortem preservation in a pig model. We've just recently published the results:
A 3D FIBSEM image of a pig brain preserved post-mortem. We were able to complete surgery in 4 minutes and 30 seconds, well within the critical twelve-minute window, and attained results that appear traceable. Additional results available as supplemental materials. Video linked below:
An H&E-stained light microscopy image of a pig cerebellum preserved post-mortem. While the FIBSEM shows good nanostructural preservation, this much lower-resolution image shows that a large area of brain is preserved well.
Figure from our preprint. H&E stained light microscopy images from a poorly-preserved brain and a well-preserved brain (E & F, respectively). Note the substantial white regions present in the poorly-preserved tissue on the left. This is strong evidence of inadequate perfusion and compromised preservation. The difference between these two images is only a few minutes delay in starting preservation.
About this time, I was chatting with Andrew Critch, cofounder of the Survival and Flourishing Fund (SFF). Born from Jaan Tallinn's philanthropic efforts, the SFF is dedicated to the long-term survival and flourishing of sentient life. They recommended $34MM of grants in 2025, including support for the AI Futures Project, Lightcone Infrastructure, and MIRI, among many others.
Andrew was interested in evaluating Nectome for an SFF grant. We talked it over and agreed on a third-party evaluation with real stakes: he'd travel to our lab in Vancouver, Washington to witness and evaluate a preservation first-hand, then bring the samples himself to an EM lab to scan them, and then ask a neuroscientist of his choice to review the sample quality. If he liked what he saw, he'd support our application to SFF's grants team. If we didn't live up to the quality we promised, he'd inform the team accordingly. (SFF uses a distributed grant-making process where each team member has a separate budget for making grant recommendations with substantial discretion.)
When Andrew arrived at our lab, we introduced him to our test rat[5], and he observed as I gave the test rat an injection of heparin (our blood thinner of choice), followed promptly by simulated medical aid-in-dying. He then timed us as I waited five minutes after the rat’s heart stopped, mimicking the time I would have spent performing surgery on a pig or a human.[6]
From there, we proceeded with the tedious 9-hour process: blood washout, fixation, and the slow ramp of cryoprotectants. Andrew watched from start to finish. It was late at night before the preservation was complete, and Andrew watched us remove the rat’s brain and perform a visual check for gross failures of perfusion. There were none.
At this point we could have simply placed the brain in cold storage and then handed off the tissue for further evaluation, but I wanted to demonstrate just how robust our current method is instead. I cut the brain into two hemispheres, put one in cold storage at -32°C (-26°F) as a demonstration of the effectiveness of the cryoprotectant at preventing ice formation, and put the other hemisphere in a laboratory oven at 60°C (140°F) overnight. Just as cold storage slows chemical processes, warmth accelerates them; twelve hours at 60°C is equivalent to, conservatively, a week at room temperature.
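As a rough sanity check on that equivalence, here is a back-of-the-envelope sketch assuming the common Q10 ≈ 2 rule of thumb (degradation rates roughly doubling per 10 °C of warming, with room temperature taken as 20 °C); these are my assumed numbers, not figures from the experiment, and the exact kinetics vary by reaction, so treat this as an order-of-magnitude check:

```python
# Back-of-the-envelope check of "12 hours at 60 °C ≈ a week at room temperature".
# Q10 = 2 is an assumed rule of thumb (rates roughly double per 10 °C), not a
# measured figure; many degradation reactions have Q10 in the 2-3 range, so
# taking 2 gives the conservative (smallest) acceleration factor.
Q10 = 2.0
T_HOT_C = 60.0
T_ROOM_C = 20.0
HOURS_HOT = 12.0

acceleration = Q10 ** ((T_HOT_C - T_ROOM_C) / 10.0)  # 2**4 = 16x faster at 60 °C
equivalent_days = HOURS_HOT * acceleration / 24.0    # 12 h * 16 = 192 h ≈ 8 days
print(acceleration, equivalent_days)
```

With the conservative Q10 of 2, twelve hours at 60 °C works out to roughly eight days at room temperature, consistent with "conservatively, a week."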
When we returned the next day, we sliced each hemisphere into paper-thin slices and Andrew spun up his quantum random number generator.[7] He used it to randomly select four slices from each hemisphere for analysis. We sent him home with an introduction to Berkeley's electron microscopy core facilities, which immediately started the week-long process of prepping the tissue for imaging, including staining, resin embedding, and slicing into 90-nanometer sections.
After examining the electron micrographs and consulting with several neuroscientists, Andrew determined that our preservation was excellent, that the brain was connectomically traceable, and that both the "cold" and the "hot" slices were of near-identical preservation quality. He recommended us for a $550,000 investment, which we've since received.
We'd like to present this data to you as well. The overall dataset obtained from Berkeley was massive; a single image from one of our samples is around 5 GB and requires special software to view. I've prepared two representative images using deepzoom, here:
Sample from a rat brain preserved using Nectome’s methods, then stored at 60°C for 12 hours ("hot" storage). Electron microscopy performed at the Berkeley EM Core. Click here to see the complete dataset.
Sample from a rat brain preserved using Nectome’s methods, then stored at -32°C for 12 hours ("cold" storage). Electron microscopy performed at the Berkeley EM Core. Click here to see the complete dataset.
We'll be in the comments again for a few hours, ready to answer your questions. Our sale is still available. The next post, by popular demand, will be about how we can know whether preservation is good enough prior to actually restoring someone. I'll see you in the comments!
A single synapse from our rat brain demo, preserved after 5 minutes of ischemia and stored at 60°C for 12 hours. The dark curve is the junction between the two neurons. Those tiny grains at the bottom of the synapse are individual vesicles, still filled with neurotransmitter, suspended in place by fixation. The larger gray sphere near the vesicles is a mitochondrion that helps power the synapse. You can see individual cytoskeletal details. The individual proteins are also still there, though they're not distinguishable at this level of resolution. This is what I mean by "subsynaptic" preservation.
Previous: Less Dead
Greg Fahy has recently released a preprint discussing cryoprotectant dehydration and some ways to reverse it in rabbit brains; check it out too!
This donor has since been revealed to be Saar Wilf.
Common choices are formaldehyde or glutaraldehyde.
ASC actually does better than preserving every synapse – it also retains virtually all proteins, nucleic acids, and lipids. I'll get into the evidence for that in a later post.
We nicknamed the rat Chandra. Andrew was sad about us experimenting on animals, and asked us if we'd try to help preserve and reanimate non-human animals in the future, and of course we said yes!
I've actually recorded a time of 4 minutes 30 seconds in pigs. But I like to leave myself a little wiggle room.
I've never met someone else who routinely uses QRNGs for their decisions :)
2026-03-20 11:04:47
Spinoza's Compendium of Hebrew Grammar (1677, posthumous, unfinished) contains a claim that scholars have been misreading for centuries. He says that all Hebrew words, except a few particles, are nouns. The standard scholarly reaction is that this is either a metaphysical imposition (projecting his monistic ontology onto grammar) or a terminological trick (defining "noun" so broadly it's vacuous). Both reactions wrongly import Greek and Latin grammatical categories and then treat those categories as the neutral baseline.
From Chapter 5 of the Compendium (Bloom translation, 1962):
"By a noun I understand a word by which we signify or indicate something that is understood. However, among things that are understood there can be either things and attributes of things, modes and relationships, or actions, and modes and relationships of actions."
And:
"For all Hebrew words, except for a few interjections and conjunctions and one or two particles, have the force and properties of nouns. Because the grammarians did not understand this they considered many words to be irregular which according to the usage of language are most regular."
The word "noun" here is nomen. It means "name." Spinoza is saying: almost every Hebrew word is a name for something understood. This includes names for actions, names for relationships, names for attributes. His taxonomy of intelligible content explicitly includes actions and modes of actions alongside things and attributes.
The obvious objection is: if "noun" covers actions as well as things, then the claim that "all words are nouns" is trivially true and does no work. Any content word names something intelligible; so what?
But this objection assumes that a useful grammar must draw a hard categorical line between nouns and verbs, and that Spinoza's refusal to draw it is therefore vacuous. That assumption is embedded in the Greek grammatical tradition; it is not a fact about Hebrew.
In Hebrew (and Arabic, Akkadian, and other Semitic languages), words are generated from consonantal roots—typically trilateral—by applying vowel patterns and affixes. The root כ-ת-ב generates katav (he wrote), kotev (one who writes), ktav (writing/script), mikhtav (letter), katvan (scribbler). The morphological operation is the same in every case: take the root, apply a pattern that describes the relation of the concept to the thing you are describing. For example, mikhtav is something that is made-written, a letter, much like the Arabic mameluke is someone who is made-owned, a slave. Whether the output functions as what a Greek grammarian would call a "noun" or a "verb" depends on which pattern you applied, not on some fundamentally different generative process.
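As a toy illustration of the root-and-pattern idea (my simplification, not a real morphological analyzer; actual Hebrew morphology involves sound changes, such as the k softening to kh in mikhtav, that this sketch ignores), the same single operation of slotting root consonants into a pattern produces what a Greek grammarian would sort into different parts of speech:

```python
# Toy root-and-pattern generator: the placeholders 1, 2, 3 in a pattern stand
# for the three consonants of a root, and one operation yields "verbs" and
# "nouns" alike. Transliterations are simplified.
def apply_pattern(root, pattern):
    c1, c2, c3 = root
    return pattern.replace("1", c1).replace("2", c2).replace("3", c3)

root = ("k", "t", "v")  # the root כ-ת-ב, "write"

patterns = {
    "1a2a3":  "katav, 'he wrote'",
    "1o2e3":  "kotev, 'one who writes'",
    "12a3":   "ktav, 'writing, script'",
    "mi12a3": "mikhtav, 'letter' (printed here as miktav; the kh is a sound change)",
}

for pattern, gloss in patterns.items():
    print(f"{apply_pattern(root, pattern):>8}  ->  {gloss}")
```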
This is not how Greek or Latin works. In those languages, nouns and verbs belong to largely separate inflectional systems (though they do have participles). Nouns decline for case and number; verbs conjugate for tense, aspect, mood, and person. A Greek speaker can usually tell from a word's form alone which category it belongs to. The noun/verb distinction corresponds to a real difference in morphological machinery.
In Hebrew, it doesn't. The grammarians who insisted on the distinction—both the rabbinical grammarians working in the Arabic tradition and the Christian Hebraists working from Latin—were forcing Hebrew into a framework designed for languages with a different structure. The result, as Spinoza observed, was that regular Hebrew forms got classified as irregular, because they didn't respect a boundary the language doesn't draw.
The Arabic grammatical tradition, which the medieval rabbinical Hebrew grammarians adopted wholesale, classifies words into three categories: ism (noun/name), fi'l (verb/action), and ḥarf (particle). Scholars have long noted the parallel between this trichotomy and Aristotle's division of speech into onoma (name), rhema (verb/predicate), and sundesmos (connective); Syriac scholars were important intermediaries in transmitting Greek linguistic thought to Arabic, though the degree of direct dependence remains debated. [1] The classification reached Hebrew grammar through two independent routes: Greek → Latin → Christian Hebraists, and Greek → Arabic → rabbinical grammarians. Both paths originate in Greek philosophy.
Judah ben David Hayyuj (c. 945–1000), the founder of scientific Hebrew grammar, applied Arabic grammatical theory to Hebrew, including the ism/fi'l/ḥarf trichotomy and the principle that all Hebrew roots are trilateral. [2] His technical terms were translations of Arabic grammatical terms. Jonah ibn Janah (c. 990–1055) extended this work, producing the first complete Hebrew grammar and drawing explicitly from the Arabic grammatical works of Sibawayh and al-Mubarrad. [3] When Spinoza complained that "the grammarians" misunderstood Hebrew, this is the tradition he was arguing against.
Aristotle's noun/verb distinction is not just a grammatical observation. It reflects his substance/predication ontology. The world consists of substances (things that exist independently) and predicates (things said about substances). A noun names a substance; a verb predicates something of it. The sentence "Socrates runs" has the structure: substance + predication. The grammar encodes the metaphysics.
Greek and similar languages have different pools of words for filling the grammatical roles of noun and verb. Hebrew has one pool of roots that supplies words for both roles, depending on the pattern applied. These aren't just two different ways of doing the same thing. They reflect different structural priorities.
The Indo-European system is built around assembling a scene: placing distinct actors into relationships with distinct actions. You need different building blocks for the actors and the actions because they play different structural roles in the scene. Who did what to whom, when, in what manner. Case endings on nouns tell you the role; verb conjugation tells you the temporal and modal frame. The grammar presupposes that the actor/action distinction is primitive.
The Semitic system works differently. Each root is a node in a flat graph of intelligibles. The graph doesn't recurse; roots refer to intelligible things, not to relations between other roots. And it doesn't privilege any type of node over any other, which is why the morphological system treats them all with the same machinery. It does not start by assigning one word the role of "the thing" and another the role of "what the thing does."
A sentence picks out some nodes from this graph, and casts them into some definite relation to each other. Their arrangement and patterns of modification describe the way in which these intelligibles are related: process, agent, result, instrument, quality, location.
When you take the Greek-derived framework and impose it on Hebrew, you're asking a flat graph of intelligibles to behave like a scene-assembly system. The spurious irregularities Spinoza complained about are projections of the friction from this mismatch.
The standard scholarly line is that Spinoza projected his philosophical commitments onto his grammar; that his monism (one substance, everything else is modes) motivated his claim that Hebrew has one part of speech with subcategories rather than two fundamentally different parts of speech. Harvey (2002) argues that the Compendium's linguistic categories parallel the conceptual categories of the Ethics. [4] Rozenberg (2025) goes further, claiming Spinoza "project[ed] the characteristics of Latin onto Hebrew" and thereby "neglected the dynamism of Hebrew." [5] Stracenski provides a more sympathetic reading but still frames the question as whether the Compendium serves the Ethics' metaphysics or the Tractatus' hermeneutics. [6]
This gets the direction of explanation backwards, or at least sideways. Spinoza was reading a Semitic language and describing how it actually generates words. The fact that his description aligns with his metaphysics may reflect a common cause: both the grammar and the metaphysics are what you get when you don't take the Aristotelian actor/action distinction as a primitive. Spinoza rejected Aristotle's substance/predicate ontology in the Ethics; he also noticed that Aristotle's noun/verb grammar didn't fit Hebrew.
Aristotle divides lexis into onoma, rhema, and sundesmos in Poetics 1456b–1457a. Farina documents how this tripartite scheme reached Arabic grammar via Syriac translations of Aristotle's Organon, with Syriac Christians serving as intermediaries between Greek and Arabic linguistic thought. See Margherita Farina, "The interactions between the Syriac, Arabic and Greek traditions," International Encyclopedia of Language and Linguistics, Third Edition, 2025. The question of whether Sibawayh's ism/fi'l/ḥarf directly derives from Aristotle or represents independent development remains actively debated; the structural parallels are clear even if the exact transmission pathway is contested. ↩︎
On Hayyuj's application of Arabic grammar to Hebrew and his establishment of the trilateral root principle, see the Jewish Encyclopedia entry on "Root". His Wikipedia biography notes that "the technical terms still employed in current Hebrew grammars are most of them simply translations of the Arabic terms employed by Hayyuj." ↩︎
Ibn Janah's Kitab al-Luma was the first complete Hebrew grammar. It drew from Arabic grammatical works including those of Sibawayh and al-Mubarrad. See also the Jewish Virtual Library entry on Hebrew linguistic literature. ↩︎
Warren Zev Harvey, "Spinoza's Metaphysical Hebraism," in Heidi M. Ravven and Lenn E. Goodman, eds., Jewish Themes in Spinoza's Philosophy (Albany: SUNY Press, 2002), 107–114. ↩︎
Jacques J. Rozenberg, "Spinoza's Compendium: Between Hebrew and Latin Grammars of the Middle Ages and the Renaissance, Verbs versus Nouns," International Philosophical Quarterly, online first, October 26, 2025, DOI: 10.5840/ipq20251024258. ↩︎
Inja Stracenski, "Spinoza's Compendium of the Grammar of the Hebrew Language," Parrhesia 32. Stracenski notes the divide between historical approaches (Klijnsmit, placing Spinoza within Jewish grammatical tradition) and philosophical approaches (Harvey, connecting the Compendium to the Ethics). See also Guadalupe González Diéguez's companion chapter in A Companion to Spinoza (Wiley, 2021) and Steven Nadler, "Aliquid remanet: What Are We to Do with Spinoza's Compendium of Hebrew Grammar?" Journal of the History of Philosophy 56, no. 1 (2018): 155–167. ↩︎