LessWrong

An online forum and community dedicated to improving human reasoning and decision-making.

Voting Results for the 2024 Review

2026-02-07 11:48:26

Published on February 7, 2026 3:48 AM GMT

The votes are in for the 2024 Review!

4,826 posts were written in 2024.

671 of them were nominated.

196 of them got at least one review, and a positive review-vote total.

50 of them shall be displayed in the Best of LessWrong, Year 2024.


Reviews

94 people wrote reviews. This year had Vanessa Kosoy holding down the fort. Among many other positive qualities, one thing I especially appreciate about Vanessa's reviews is that Vanessa has an opinionated, coherent worldview, and the subjects of her reviews aren't strongly correlated with the kinds of posts other reviewers tend to focus on.

A shout-out to Zack Davis for being the most disagreed-with reviewer - I didn't agree with all of his reviews, but I disagreed with them less than the LessWrong voters did, and at least one of them influenced my voting[1], which not many reviews accomplished.

Some other reviews I found particularly interesting[2] included John Wentworth's review of On Green, Rudolf's review of his own post reviewing Planecrash, and Thomas's review of John's postmortem.

Here is a cut from the top of the Review Leaderboard:

[Review Leaderboard]

Operational Details

Like last year, we weighted review votes by your Strong Vote power[3].

The Results

363 of you voted! 135 cast the 6 or more votes required to leave your ballot icon on the homepage, visible for everyone to see for the last few days of the voting phase.

Here are the results:


[Annual Review Results 2024]

Congratulations to Joe Carlsmith for driving his enemies before him and winning this year's review, capturing 1st place and also landing a total of 6 posts in the top 50!


Updates to the Best of LessWrong: Coming Soon.

  1. ^

    In the intended direction!

  2. ^

    Which are also not generally the reviews I most agreed with.

  3. ^

But, also like last year, we're displaying the "raw" strength of each vote in the results section, before multiplication by your Strong Vote power, to better preserve anonymity.



Discuss

Playing with an Infrared Camera

2026-02-07 11:30:46

Published on February 7, 2026 3:30 AM GMT

I recently got a Thermal Master P1 infrared camera attachment for my phone. The goal was a house project, but it's also a great toy, especially with the kids. Getting a room pitch black but still being able to 'see' with the phone was fun for a bit. The real fun, though, was in exploring and observing all these thermal properties we'd never thought about.

Here's my selfie:

Light is warmer, dark is cooler. My glasses aren't cool, they're just IR-opaque. I already knew cheeks and noses were squishier than foreheads, but it's neat to see that in coloring.

Here's my 4yo, outside in ~30F weather:

The patterns are clearer, especially at the edge of the cheeks.

Here's a different angle:

The gaps in the hair are neat, and you can see the bow on her headband clearly.

Here's the cat:

This all makes sense in hindsight, knowing that the face is less furry and that there are shifting parts in the body fur, but it's neat to see.

The kids were excited about how this lets you see back into the past. Here are heat-fingerprints on a window sill I touched:

The print from one socked foot and one bare foot:

A stand mixer that had been running:

A car that had been sitting for a long time:

One that was cold to the touch, but apparently had been run recently:

Less fun but more usefully, you can also see where buildings are losing heat. I'm planning to take it out Sunday morning when it's ~4F here and assess our house, but in the meantime here's a nearby house losing heat through its basement:

If I look very closely I can just make out the framing inside the wall. I'll try this again when it's even colder, and if I'm lucky I can get a bunch of pictures showing where the studs are throughout our exterior walls.

I do wish there were a way to connect the sensor to modern image processing algorithms like my phone uses for its regular camera. Combining the information from several shots in quick succession could give much higher quality, and I feel my eye doing this automatically when watching it live on the phone screen. I guess I could take a video and then post-process?



Discuss

Honey, I shrunk the brain

2026-02-07 08:01:47

Published on February 7, 2026 12:01 AM GMT

When cryoprotectants are perfused through the blood vessels in the brain, they cannot cross the blood-brain barrier as fast as water can move in the opposite direction. And cryoprotectants generally have a much higher osmotic concentration than typical blood plasma. For example, the cryoprotectant solution M22 has an osmotic concentration around 100 times higher.

As a result, in a successful cryoprotectant perfusion (without fixatives), water rushes out of the tissue into the blood vessels, the tissue dehydrates, and you end up with a shrunken brain that is visibly pulled away from the skull. The brain weight goes down by 50% or more. This is currently considered a good sign of cryoprotectant perfusion quality.

Case report A-1002 is an example of a shrunken brain.

Far be it from me to say that a brain preservation method will not work because it seems weird. I myself have proposed that aldehyde fixation — something which is definitively lethal by contemporary medical criteria — may allow people to be revived with meaningful memories intact if humanity develops sufficiently advanced technology in the future. So I’m not going to use the absurdity heuristic here.

Instead, the key question is what this severe dehydration does to the nanometer-scale structures in the brain, such as the connections between neurons, that are thought to be the key parts of the information that encodes long-term memories.

Previous attempts at imaging this type of brain tissue were stymied because the severe dehydration made the tissue look unrecognizable. Synapses could be seen, but it wasn’t possible to clearly identify individual neurites or trace them to see whether the connectome is intact:

https://www.brainpreservation.org/21cm-cryopreservation-eval-page/

A new paper from Greg Fahy et al. at 21st Century Medicine provides the most detailed look yet at what happens to brain ultrastructure during vitrification. So naturally, I had a look at it.

What do they think of non-vitrification approaches?

First, the paper opens with a very interesting review of prior work on brain cryopreservation, including some original data from the ever-controversial experiments of Isamu Suda performed in the 1960s.

The authors are not very enthused about freezing-based approaches. They show electron micrographs of rabbit brains that were perfused with glycerol and slowly frozen, which have large ice cavities that grossly distort the tissue:

https://www.biorxiv.org/content/10.64898/2026.01.28.702375v1.full.pdf

Their position seems to be that vitrification — i.e., cooling without ice formation — is the only serious path forward for brain cryopreservation. But vitrification requires especially high concentrations of cryoprotectants, which causes the severe dehydration that has made it difficult to assess whether the connectome is preserved.

Rabbit experiments

They performed two types of rabbit experiments.

In the first group, rabbit brains were perfused with M22 for 60 minutes, vitrified, rewarmed, and then fixed with a solution that still contains the same concentration of cryoprotectant. This shows us what the tissue looks like in its shrunken state. At high magnification they can identify synapses, mitochondria, and what appear to be some morphologically intact cell membranes. But everything is compressed, making it difficult to clearly distinguish or trace individual cellular processes. These look similar to the previous images of vitrified brain tissue.


In the second group, rabbit brains were perfused with M22 and then gradually diluted back to a lower concentration of cryoprotectant before fixation — to 5M, 3M, or 1M. This tests the extent to which the shrinkage is reversible.

Notably, though, it seems to me that most of the shrinkage is expected to occur between 0 and ~3M concentration of cryoprotectant. So my understanding is that diluting from full M22 back to 3M or 5M wouldn’t be expected to reverse most of the dehydration, although I might be wrong about that:


Anyway, they found that at 5M, the tissue is still quite shrunken. After reversal to 3M, though, things look considerably better: the neurons have more normal-looking apical dendrites, and synapses with visible presynaptic vesicles can be identified. This is probably their best-looking EM data. However, there are various forms of damage, such as the wavy intracellular white spaces, whose cause I don’t understand. Also, there are some places in the zoomed-in image (C) where I can’t really tell whether I am looking at two smaller processes or one larger one:


When they reverse to 1M (Fig 10), things go a bit wrong. While they can see nicely preserved synapses with clear presynaptic vesicles, they also see what they call “exploded” neurons, which they attribute to osmotic damage from removing the cryoprotectant too quickly.


They report that this problem is solvable by using osmolytes to counterbalance the intracellular cryoprotectant during washout, which they say can prevent this ultrastructural damage “even when all M22 is washed out of the brain.” But this data is not presented in the paper; it’s cited as “Spindler et al., in preparation.”

Human data

The human data comes from a single brain, that of a 73-year-old terminal cancer patient who donated biopsy samples of his brain for this research. His brain had significant ischemic injury before preservation even began, consisting of two days of agonal hypoxia before legal death, then three hours of cold ischemia before perfusion started. That’s important because it’s actually an example of realistic brain preservation conditions.

Cortical biopsies were taken after whole-brain M22 perfusion, plunged into liquid nitrogen, and stored for four years. They were then warmed and processed in different ways.

Some biopsies were warmed straight into fixative containing M22, showing us the fully dehydrated state. As expected, it is severely shrunken and electron-dense, but without obvious ice damage. On electron microscopy, they can identify synapses and some intact-appearing membranes.


Other biopsies of the human brain were rewarmed into diluted M22 (75% or 66%) before fixation. Using light microscopy, they report that this partial rehydration caused cells to regain their general characteristic shapes, with neurites visible.


However, the rehydrated human tissue was only examined by light microscopy, not electron microscopy.

One of the main concerns with vitrification has long been that the severe dehydration might be masking damage to the structure of the brain. For example, if the cell membranes are broken apart or there are areas of washed out structure due to ischemia and rapid osmotic shifts during perfusion, we might not be able to see it when everything is compressed together.

Because the electron microscopy that they did show from the human brain tissue is still so compacted, we can’t really evaluate for the presence or absence of such damage yet with much certainty.

Summary

In brain preservation, as much as possible, we need objective metrics. One metric that has been proposed by Ken Hayworth and Sebastian Seung is to see whether it is possible to trace the connectome of a preserved brain. This is widely viewed by researchers as one of the best available metrics of preservation quality.

In ideal laboratory animal settings (rabbits), the key result is that they show partial reversibility (to 3M) with improved electron microscopy quality, but still without enough reversibility to see clearly traceable processes across the 2D image. They report that complete washout is actually possible with a new method that they developed, which is fantastic news. But this rests on unpublished work that we will have to wait to see in the future.

Ideally, they would show in this future work that they can reliably trace the connectome across randomly sampled areas of the brain, which would allow the pure vitrification approach to reach parity with aldehyde-based methods in ideal laboratory animal settings, and shift the debate to which method is the best at structural preservation in realistic settings.

In the single human case, they show that perfusion-based vitrification is feasible in at least some parts of the brain even hours after legal death, and that cells regain their general characteristic shapes after partial rehydration. But the rehydrated human tissue was only examined by light microscopy, not electron microscopy, so we can’t tell whether the connectome is likely to be traceable in the local area where this biopsy sample was taken from.

This paper is clearly a step forward and a very important contribution to the brain preservation literature. I would like to personally thank the authors for their important work, and also for explaining how this type of method could potentially be used for medical time travel, which is a premise I totally agree with. Not surprisingly for a single paper, it alone has not resolved the key uncertainties about vitrification-based brain preservation.



Discuss

Strategy of von Neumann and strategy of the Rosenbergs

2026-02-07 06:50:12

Published on February 6, 2026 10:50 PM GMT

This is not a call for espionage, but an analysis of another strategy

Von Neumann's strategy for solving the problem of global nuclear weapons proliferation is widely known - strike tomorrow. That is, conquer the entire world by exploiting the brief window when only one side possesses nuclear weapons. This idea is popular among American readers, partly because US interests correlate with this strategy: it would be good for the world and for us. (I will not discuss here whether von Neumann actually asserted this or developed this strategy in detail - there are doubts. Nor how feasible it was, given that the USSR would have responded with a conventional attack in Europe, meaning the bulk of nuclear strikes would have fallen on Western Europe against the advancing armies. Nor that the US lacked precise information about whether the USSR had an atomic bomb - the USSR claimed to have one from 1947, but many believed it wouldn't have one until 1960, meaning there was time for a von Neumann attack. And finally, that before 1949 the number of atomic bombs in US possession might have been insufficient to reliably halt the Soviet nuclear project.)

My point is that an alternative project for solving the nuclear weapons problem was operating in parallel. This was the effort of the Rosenbergs and several others to transfer nuclear secrets to the USSR as quickly as possible, so that both sides would be equal and a balance would exist between them. We know this strategy worked for nearly 80 years without nuclear war. (There were other motives too, like sympathy for communism, but we're simplifying.)

Both of these strategies are applicable to the AI race.

  1. The von Neumann strategy involves creating American AI as quickly as possible to outpace China (as well as creating Grok AI to outpace OpenAI, etc.)
  2. The Rosenberg strategy assumes that defectors will share AI secrets between AI companies, thereby reducing any single AI company's advantage over others, resulting in everyone reaching AGI level simultaneously, and consequently the world having multiple AIs rather than one paperclip maximizer.

Since multiple AIs would have more diverse goals, there's a greater chance that at least one of them would be relatively aligned with humanity's goals. Moreover, if there are multiple AIs, they will compete more for human attention and approval, and will need to demonstrate their trustworthiness to each other. Thus, they will care more about human values. If one of them starts killing people on its territory, the others will see that it has defected against its creators.

If the N-strategy leads to one AI's victory and the inevitable death of everyone, then the R-strategy is more unpredictable and offers a chance of survival, though we cannot say exactly how this would happen.

The R-strategy is much simpler and cheaper, since data exchange and employee movement happen constantly between companies, Twitter buzzes with ideas, and GitHub is full of secrets longing to be heard. The moat is constantly eroding. That is, I'm not calling for industrial espionage here, but rather want to draw attention to the forces that level the playing field of achievements between companies.

The R-strategy makes sense only if we are confident that the first AI will certainly destroy us. Then we exchange inevitable death for a vague probability of surviving in chaos. Conversely, if we believed that creating a friendly AI that would be first and only was quite likely, then the R-strategy would be a major mistake.

Finally, the R-strategy is local, meaning it relies on local actions of individuals (and is subject to the unilateralist's curse). The N-strategy also starts as local, but at the company level rather than the individual level. The N-strategy ultimately becomes global, as it implies world domination.



Discuss

Data-Centric Interpretability for LLM-based Multi-Agent Reinforcement Learning

2026-02-07 03:27:09

Published on February 6, 2026 7:27 PM GMT

TL;DR: SAEs can complement and enhance LLM-as-a-judge scalable oversight for uncovering hypotheses over large datasets of LLM outputs.

paper

Abstract

Large language models (LLMs) are increasingly trained in long-horizon, multi-agent environments, making it difficult to understand how behavior changes over training. We apply pretrained SAEs, alongside LLM-summarizer methods, to analyze reinforcement learning training runs from Full-Press Diplomacy, a long-horizon multi-player strategy game. We introduce Meta-Autointerp, a method for grouping SAE features into interpretable hypotheses about training dynamics. We discover that SAE-based analysis finds fine-grained behaviors including role-playing patterns, degenerate outputs, and language switching, while the LLM-summarizer captures environment-specific bugs and strategic behaviors. We validate discovered features through automated evaluation and two human user studies, and add them to an untrained agent's system prompt, improving performance by +14.2%. Overall, we show that SAEs and the LLM-summarizer provide complementary views into agent behavior, and together our framework forms a practical toolkit for interpreting long-horizon multi-agent LLM training.

Blog Post

We run Sparse Autoencoders on 114GB of reinforcement learning training trajectories from the popular multi-player strategy game Diplomacy, showing for the first time the potential downstream applications of data-centric interpretability techniques.

What are the AIs doing when no one is watching? Current large-scale training runs can produce hundreds of millions or billions of tokens, with production AI deployments in the trillions. Human oversight of all AI outputs is becoming increasingly infeasible. Common approaches to this problem include summarizing the logs, or using an LLM as a judge with rubrics. The problem is that these approaches are expensive, prone to hallucination, and can only attend to a small set of features you already know how to look for.

In our paper, we tested a novel approach: using Sparse Autoencoders (SAEs) to collect feature activations on each token and generate hypotheses about which features changed most over training and which correlate with better performance. We ran Gemma 27B with the gemma-scope-2 layer_31_width_262k_l0_medium SAE over 1800 trajectories (114GB in total) from two 25-batch training runs (one successful and one failed) in the game Diplomacy, a multi-agent, long-horizon strategy game.

Sparse Autoencoders

A Sparse Autoencoder (SAE) is a model that takes intermediate calculations from a language model (activations) and expands them to a higher dimension (for example, vectors of size 5376 to 262k). The idea is that every entry in the expanded vector represents a single, human-interpretable concept, for instance "dominance" or "napoleon." If we run this over text, we now have a machine that can label exactly "how much" of a concept each token contains, up to 262k concepts at once.

Pipelines

We used two pipelines to generate hypotheses: an LLM summarization pipeline and an SAE pipeline. Unless otherwise specified, we used a canonical set of 1800 trajectories for each experiment: the first 6 trajectories from each group, the first 6 groups from each batch, and the first 25 batches across two runs.

LLM Summarization

We conducted a two-stage hierarchical summarization pipeline on the canonical set, as sketched below. We first summarized each trajectory from around 50k tokens to 10k, preserving phase and tool-call information. We then grouped the trajectory summaries by batch, summarizing each group of 36 summaries into one batch summary of around 10k tokens. Finally, we used an LLM with a rubric to surface hypotheses across the 50 batch-level summaries.

SAE Pipeline

We used gemma-scope-2-27b-it-res, layer_31_width_262k_l0_medium, and Gemma 3 27B IT for all our main experiments. We chose this SAE due to the recommendation of the original authors of Gemma Scope 2, empirical performance, and the availability of explanations on Neuronpedia. We first tokenized each trajectory and generated role masks (each token is either a user, assistant, or tool token). We then generated activations for each trajectory, saving the top 250 activating features per token, for a total of 6,029,159,605 activation values.

Using the activations and role masks, we masked to assistant tokens only and used Spearman correlation and AUROC to find correlations between features and target variables of interest, namely batch and run.

To label SAE features at scale, a common technique is autointerp: passing activating examples to an LLM and asking it "what does this represent?" A problem is that features are often noisy, or not interesting on their own. We propose a new technique we call meta-autointerp: using another LLM pass on several autointerp-labelled features to cluster them into a related meta-feature.

To answer the question "what features increase/decrease the most over training?", we summed the activations per trajectory and calculated the Spearman correlation with batch. An interesting metafeature we found highly correlated with training batch was Napoleonic roleplay (the model's starting power was France).

We also found features that indicated excessive/duplicate message sending and reward hacking (the model was given +0.2 reward for each message sent), which we validated with regex. Surprisingly, the model also wrote more duplicated diary entries, despite this action receiving no reward.


Validation of Results

We consider and validate the following metrics for each metafeature

  • Interpretability. To what extent does the metafeature fire monosemantically? How effectively can a user or LLM distinguish between an activating and non-activating sample?
  • Helpfulness. How helpful is this metafeature to the practitioners conducting RL runs? Does it surface novel insights? Can it be used for monitoring rare failure modes? Does it cause them to make changes to the RL environment or system prompt?
  • Predictive usefulness. How effectively does this metafeature discriminate between early and late in training? High and low reward? A good training run vs a bad training run? How effectively can a user or LLM distinguish between a sample pair, one from class A and one from class B, given a hypothesis derived from the metafeature?

Autointerp and meta-autointerp are scored for interpretability and helpfulness. To further validate this with actual users, we conducted a user study with Diplomacy RL practitioners.

We found that meta-autointerp hypotheses outperform single-feature autointerp, and LLM hypotheses obtain the highest ratings.


To validate the predictive usefulness of our features, we used an LLM-as-judge A/B test with and without the hypothesis, to see if it gave the LLM any uplift in predicting which span in a pair comes from early vs. late in training.

We ran 100 runs on 3 LLMs for each hypothesis, averaged the results, and found that 21% of LLM-generated hypotheses, 45% of single-feature SAE hypotheses, and 90% of SAE meta-feature hypotheses provide significant uplift.

Evaluating the interpretability and predictive usefulness of hypotheses from 3 different sources: LLM summary, SAE features, and SAE meta-features. These were evaluated on 50 sample pairs with hypothesis-random sampling. Hypotheses are highlighted by direction: green = increases with training; red = decreases with training. An asterisk marks hypotheses with positive uplift at p < 0.05 via McNemar's test. Hypotheses are abbreviated for space.

We further validated features with a user study with n=25 participants and 277 responses. We found that although automated validation shows high scores, in practice using SAE and LLM hypotheses is difficult for humans, perhaps due to shorter spans and fewer samples (only 3 per hypothesis).


Uplift in percentage of correct responses with vs. without the hypothesis as a hint. Most LLM-generated hypotheses are negatively useful, as are a subset of SAE-generated ones.

We then tested our 10 top-performing features by adding them to the system prompt of the original untrained model and running 20 games of Diplomacy, showing around a 14% improvement in mean score.


Conclusion

Overall, we found that SAE embeddings enhance and complement traditional LLM-as-a-judge techniques for discovering hypotheses over large datasets. Although automated metrics might show predictive usefulness, we find that for real humans some SAE features are worse than useless. To our knowledge, this is the first time SAE-generated hypotheses have been used in downstream tasks, showing potential for augmenting classical scalable oversight and AI control techniques. Further research directions include training SAEs for long context, on custom datasets, and potentially for multimodal use cases. We're excited to see how the field of data-centric interpretability progresses!




Discuss

Parks Aren't Nature

2026-02-07 02:27:05

Published on February 6, 2026 6:27 PM GMT

I.

I love dogs.

I grew up in a two-dog household, and my future plans have always included at least one dog. When I pass a dog on the street, I often point and exclaim “Puppy!”, no matter how inappropriate it is for a grown man to do so, because all dogs are puppies and all puppies are adorable and I need everyone to know this.

Why do I love dogs?

They’re loyal and loving and giving, and even though they bark at passing cars and occasionally pee on the carpet, having them in my life makes it unquestionably better.

The thing is, dogs as they exist today are a lot of things, but they aren’t natural.

Nature didn’t shape dogs, didn’t produce the breeds we see every day. It wasn’t like Darwin went to an island and found that a species of wolf had been separated by a mountain chain and on one side were Golden Retrievers and on the other Yorkshire Terriers.

Dogs exist today as the result of millennia of co-adaptation and selective breeding by humans. They’re animals, yes, and Nature technically made the base form, but we humans molded them into shapes more compatible with us. Most dogs are very safe to have around humans.

But there is an animal that is a more natural Canid: Wolves.

And wolves are a lot of things, but they’re not pets. They aren’t domesticated; they aren’t bred for cuddliness and kisses. A wolf will hurt and kill and eat you.

Wolves are wild animals in their state of nature, red in tooth and claw.

The thing is, this distinction between dogs and wolves - between nature tamed and nature wild - matters when we think about who we humans are and what we want the world around us to look like. We might say we enjoy the natural world, might want less deforestation and more green spaces, but I’ve yet to meet anyone who wants actual wolves running around their neighborhood. We might go to farm-to-table restaurants and only eat organic, free-range eggs, but chickens mostly don’t exist in the wild for good reason.

In a first-world country, or even in any populous city, almost everyone’s experience of what we call ‘nature’ is that of dogs, not wolves. Nature tamed, not Nature wild. And so I think it pays to be precise about what we mean when we say nature, because it’s not as simple as ‘non-human animal’ or ‘uninhabited area’.

A wolf, red in tooth and claw.
A Chihuahua, evidence of selective breeding gone horribly wrong.

II.

There’s something called an appeal to nature, which is apparently distinct from the naturalistic fallacy, because naming things clearly is not a strength of philosophy.

Anyway, an appeal to nature is the idea that natural equates to good. It’s behind all the marketing in a grocery store that advertises organic, non-GMO, free-range, grass-fed, asbestos-free Doritos.

Captioning an XKCD is kind of putting a hat on a hat, but this is how I feel when I see gluten-free wine. Gluten comes from grains like wheat. How does wheat get in your wine?

Once you point it out, the idea that something is axiomatically good just because it’s natural is kind of silly; after all, cockroaches are perfectly natural, as is gangrene, athlete’s foot, and Donald Trump’s hair. But most people have a tendency to buy into this just a little. After all, isn’t real food with real names better for you than Butylated Hydroxytoluene or Red Dye #5?

There are multiple problems with an appeal to nature - for one, vaccines are pretty unnatural, but so is not dying of tetanus - but the one I’d like to focus on is the idea that natural is a quality something either has or doesn’t.

I think a lot of people think about whether something is natural or not like this:

But the truth, like many things, is not so simple. Things, especially what we think of as ‘the natural world’, are more like this:

In its own way, crop-covered farmland is no more ‘natural’ than the concrete jungle of New York City, even though the former is made of plants and the latter of stone and steel and glass. Both are curated by humanity, just for different goals.

III.

What was the natural world like, before humans befouled it? What was paradise, before we paved it and put up a parking lot?

What was a person’s experience of nature, back before it was tamed?

Nature was terrible. And not in a sarcastic, that-movie-was-terrible kind of way, but in that it genuinely inspired terror. Nature was the domain of the uncertain, the cataclysmic, the cruel and uncaring world from which existence had to be torn day in and day out.

A farmer’s experience of nature would have been a constant battle to raise crops, hoping and praying that there would be enough rain to water them but not enough to wash them away, that locusts or other insects wouldn’t eat or foul them, that disease and fungus wouldn’t rot crops from the inside out. The ancient farmer was always only a few strokes of bad luck from starvation, and nature was the lottery they played every day of their lives.

Compare this to the farmer of today, who ensures their crops get enough water no matter what via massive irrigation, who uses pesticides to annihilate pests, who presides over massive machinery as it trundles along seeding and harvesting their crops. The farmer of today has access to genetically modified strains of plants that resist disease and grow larger with more yield than any ancient farmer could have hoped to have.

Is the ancient farmer in some sense doing something more natural? Sure, if by natural you mean they’re operating closer to the state of the pre-human natural world. Does that mean that what modern farmers do is unnatural?

I don’t think so.

Farmers have tamed nature, and this is good. This gives us abundant cheap food, enough to feed everyone on earth while only a tiny percentage of the population is needed to produce it.

(The fact that people still go hungry and starve is an issue of distribution, not production. We make enough calories to feed everyone.)

And this contrast between more natural and less natural on the spectrum, what I called nature wild and nature tamed above, is everywhere.

Modern corn. This is the stuff High Fructose Corn Syrup comes from!
The teosintes from which modern corn was bred. Which would you rather grow?

IV.

At this point, I’ve hopefully convinced you of the title of the post. A park isn’t really natural, any more than a Chihuahua is a wolf. It’s something sculpted, pruned, weeded, and landscaped. It’s full of plants, sure, but it’s an example of nature tamed, not nature wild.

How about going on a hike? That’s nature, right?

Not really.

Even if you’re hiking through a national park or other untouched terrain, even if you’re setting foot somewhere with wolves and bears and poison ivy where no human has ever ventured, simply by virtue of existing in the 21st century you’re still experiencing something very different than what our ancestors would have, long ago.

Today we have satellites overhead and GPS to Globally Position us wherever we are, and weather simulations to tell us what to expect the sky to do. We have rugged clothes that can be cheaply replaced if torn, and mass-produced boots with rubber soles that won’t get pierced by thorns or rocks. We have plastic and metal bottles to store water and abundant food to pack for ourselves. We have thermal sleeping bags and bug spray and sunscreen and sunglasses to keep us comfortable. We have first-aid kits with antibiotics and alcohol swabs and itch creams and sterile bandages.

Our distant ancestors had none of those.

What would venturing into the wilds have been like to our distant ancestors?

They knew of some of the dangers they’d face: Inclement weather, wild animals, getting lost and having no way to contact help or navigate back to the group. But there were other dangers that they must have realized, even if they didn’t know the causes: infection, disease, rot. A single cut gone untreated, a mild scrape gotten while pushing aside a thorny plant, and gangrene could set in.

Going into nature meant risking your life, even if it might not have felt that way at the time. Sure, untouched woods might be beautiful, but nature is often at its most beautiful when it’s at its most deadly. Bright colors usually mean poison in the natural world.

Consider also the perils of simple exposure: a cool night can spell death for someone without shelter or proper clothes or a fire. Add rain and wind, and anyone venturing beyond human settlements had to be wary of dying soaked and cold.

V.

There are places, in our world, that are still natural. Untamed.

The Amazon Rainforest.

The Australian Outback.

And going into those places, unprepared, without a guide, is quite likely to get you killed.

That is about as natural as it gets, as natural as the vacuum of space, and only slightly more hospitable. That our ancestors were able to survive in such environments - that there are people today who can live in such environments - is amazing, but it comes with a high cost.

People who have to fight nature every day to survive are doing just that - surviving. They can’t relax with a good book or take a hot shower. They can’t get into long debates about trivial things with their friends over drinks, or have a poker night once a week. They can’t take vacations or paid sick days, and the only insurance available to them is the beneficence of their community. There is no retirement, for them; if they stop struggling to survive, they stop surviving.

More fundamentally, constantly struggling to survive takes its toll on a person’s body and mind. Constant stress ages you, wears you down, leaves you ragged and weary and unable to relax.

There’s a lot of nostalgia for the past, but I think people consistently underestimate just how hard life was for those who came before us. How much they had to struggle against the world just to keep living. How much they suffered.

Is the world we humans have built for ourselves less natural than it used to be?

Of course.

It’s also far more forgiving, far more comfortable, and far less tragic.

VI.

Appeals to nature argue that natural means better.

This appeal is a fallacy because it’s wrong, but it’s wrong in two ways.

The first is simple: artificial does not equate to worse. Plastic is far superior to the materials humans used before it; purified metals and alloys are better than ores; our sewer and drinking water systems are far better for us than drinking ‘natural’, unfiltered water.

There’s no law that says what’s artificial can’t surpass what’s natural.

The second is that what we, in a 21st century first-world country, think of as nature is a tamed thing, something pruned and weeded and cultivated, and ultimately no more natural than a suburban lawn.

In other words, appeals to nature are always dependent on the reference frame they’re made from. If you’re standing in the middle of New York City and yearning for nature, you’re probably yearning for pine trees and dandelions and fireflies, not trees of death and poison ivy and malaria-carrying mosquitoes, even though the latter are just as natural as the former.

What we think of as ‘nature’ has already been massively affected by humanity over the centuries. Even the moon now has human footprints and a human flag on it:

The species flagus usa-us can be found sprouting up from several celestial bodies. It’s considered by some to be an invasive species, by others a hardy and welcome addition to the ecosystem…

Nature, to most Americans, is something safe and peaceful and beautiful. It’s sitting on your porch watching a sunset, or seeing autumn plumage on the trees, or sitting around a campfire with your friends. We tend to only think of it as horrifying and destructive during severe weather events and natural disasters (which, as actual climate change scientists will tell you, are still quite natural; plenty of them happened before we humans dumped a bunch of carbon in the atmosphere, and plenty will happen after).

In other words, appeals to nature are wrong because we’re wrong about what nature is actually like. It has always been beautiful, but only as humanity shaped it has it become good for us.[1]

VII.

If you look at the human experience of nature over history, what you see is humans shaping and crafting their environments to be more and more friendly to them, until the default first-world conception of nature is something lovely and harmless, rather than the murderous (if beautiful) thing it once was.

And while the full argument is beyond the scope of this post, I think this is a good thing.

Are there things lost, as nature is tamed? Yes.

Wolves are beautiful, elegant creatures. Chihuahuas are not.

But I’d much rather have a Chihuahua[2] as a pet than a wolf.

I’m not telling you not to enjoy going outside; just that, next time you go to the park or take a hike, understand that unless you’re trekking through the Amazon or the Australian Outback, your experience is more like that of eating a modern GMO fruit than anything our ancestors might have had: easier, safer, and altogether more delicious.

So maybe, the next time you’re taking a walk outside your climate-controlled residence to get some fresh air, take a second to appreciate the ‘less natural’ nature around you, and the benefits of living in a world so much more adapted to humanity than it used to be.

  1. ^

    Some argue that nature is good qua nature, as in, a fundamental good by itself. I’m not one of them. My circle of concern extends to sapient beings of all kinds, and somewhat to some kinds of animals, but I don’t consider plants, fungi, or bacteria to have any intrinsic moral worth.

  2. ^

    Actually a Shih Tzu, though I think the point stands.



Discuss