The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

Literacy lag: We start reading too late

2025-07-31 23:15:02

“The Reading Lesson,” by Leon Basile Perrault (1866)

What is literacy lag?

Children today grow up under a tyrannical asymmetry: exposed to screens from a young age, only much later do we deign to teach them how to read. So the competition between screens vs. reading for the mind of the American child is fundamentally unfair. This is literacy lag.

Despite what many education experts would have you believe, literacy lag is not some natural or biological law. Children can learn to read very early, even in the 2-4 age range, but our schools simply take their sweet time teaching the skill; usually it is only in the 7-8 age range that independent reading for pleasure becomes a viable alternative to screens (and often more like 9-10, since that’s when the “4th grade slump” occurs, as kids switch from learning to read to reading to learn). Lacking other options, children must get their pre-literate media consumption from screens, to which they form a lifelong habitual and emotional attachment.

Nowadays, by the age of 6, about 62% of children in the US have a personal tablet of their own, and children in the 5-8 age range experience about 3.5 hours of screen time a day (increasingly short-form content, like YouTube Shorts and TikTok).

I understand why. Parenting is hard, if just because filling a kid’s days and hours and minutes and seconds is, with each tick of the clock, itself hard. However, I noticed something remarkable from teaching my own child to read. Even as a rowdy “threenager,” he got noticeably easier as literacy kicked in. His moments of curling up with a book became moments of rejuvenating parental calm. And I think this is the exact same effect sought by parents giving their kids tablets at that age.

Acting up in the car? Have you read this book? Screaming wildly because you’re somehow both overtired and undertired? Please go read a book and chill out!

This is because reading and tablets are directly competitive media for a child’s time.1 So while independent reading requires about a year of upfront work, at anywhere from 10-30 minutes a day, after that early reading feels a lot like owning a tablet (and while reading is no panacea, neither are tablets).

The cultural reliance on screen-based media is not because parents don’t care. I think the typical story of a new American parent, a quarter of the way through this 21st century of ours, goes like this: initially, they do care about media exposure, and often read to their baby and young toddler regularly. This continues for 2-3 years. However, eventually the inconvenience of reading requiring two people pressures parents to switch to screens.2 The category of “not playing, and not doing a directed or already set up activity, but just quietly consuming media” is simply too large and deep for parents to fill just by reading books aloud. In fact, not providing screens can feel impoverishing, because young children have an endless appetite for new information.

Survey data support this story: parental reading to 2-year-olds has actually increased significantly since 2017, but kids in the 5-8 range get exposed to reading much less. Incredibly, the average 2-year-old is now more likely to be exposed to reading than the average 8-year-old!

Self-reports also fit this story: parents acknowledge they do a better job at media use when it comes to their 2-year-olds compared to their 8-year-olds, and the drop-off is prominent during the literacy lag.

So despite American parents’ best efforts to prioritize reading over screen usage for their toddlers, due to our enforced literacy lag, being a daily reader is a trait easily lost early on, and then must be actively regained rather than maintained.


Once lost, reading often doesn’t recover. Even when surveyed from a skeptical perspective, reading is, almost everywhere, in decline.3 This is supported by testimonials from teachers (numerous op-eds, online threads, the entire horror show that is the /r/Teachers subreddit), as well as the shrinking of assigned readings into fragmented excerpts rather than actual books. At this point, only 17% of educators primarily assign whole books (i.e., the complete thoughts of authors), and some more pessimistic estimates put this percentage much lower, like how English Language Arts curricula based on reading whole books are implemented in only about 5% of classrooms. On top of all this, actual objective reading scores are now the lowest in decades.

I think literacy lag is a larger contributor to this than anyone suspects; we increasingly live in a supersensorium, so it matters that literature is fighting for attention and relevancy with one hand tied behind its back for the first 8 years of life.

So then…

Why are education experts so against early reading?

In a piece that could have been addressed to me personally, last month the LA Times published:

Hey!

While it doesn’t actually reference my growing guide on early reading (we’re doing early math next, so stay tuned), this piece reveals how traditional education experts have tied themselves up in knots over this question. E.g., it contains statements like this:

“Can a child learn individual letters at 2½ or 3? Sure. But is it developmentally appropriate? Absolutely not,” said Susan Neuman, a professor of childhood and literacy education at New York University.

Now, to give you a sense of scale here, Susan Neuman is a highly-cited researcher and, decades ago, worked on implementing No Child Left Behind. She also appears to think it’s developmentally inappropriate to teach a 3-year-old what an “A” is. And this sort of strange infantilization appears to be widespread.

“When we talk about early literacy, we don’t usually think about physical development, but it’s one of the key components,” said Stacy Benge, author of The Whole Child Alphabet: How Young Children Actually Develop Literacy. Crawling, reaching across the floor to grab a block, and even developing a sense of balance are all key to reading and writing, she said. “In preschool we rob them of those experiences in favor of direct instructions,” said Benge.


Yet is crawling across the floor to grab a block really the normal developmental purview of preschool? Kids in preschool are ambulatory. Bipedal. Possessing opposable thumbs, they can indeed pick up blocks. Preschool usually starts around the 3-4 age range, often requiring the child to be potty-trained. Preschoolers are entire little people with big personalities. Moreover, by necessity preschool is still mostly (although not entirely) play-based in terms of the learning and activities, if only because there is zero chance a room of 3-year-olds could sit at desks for hours on end.

This all seems off. Surely, there must be some robust science behind this fear of teaching reading too early?4 It turns out, no. It’s just driven by…

Neuromyths about early reading.

The LA Times piece leans heavily on the opinions of cognitive neuroscientist Maryanne Wolf, who is well-known for her work in education and the science of reading:

For the vast majority of children, research suggests that ages 5 to 7 are the prime time to teach reading, said Maryanne Wolf, director of the Center for Dyslexia, Diverse Learners and Social Justice at UCLA.

“I even think that it’s really wrong for parents to ever try to push reading before 5,” because it is “forcing connections that don’t need to be forced,” said Wolf.

Reading words off a page is a complex activity that requires the brain to put together multiple areas responsible for different aspects of language and thought. It requires a level of physical brain development called mylenation [sic] — the growth of fatty sheaths that wrap around nerve cells, insulating them and allowing information to travel more quickly and efficiently through the brain. This process hasn’t developed sufficiently until between 5 and 7 years old, and some boys tend to develop the ability later than girls.

If she had a magic wand, Wolf said she would require all schools in the U.S. to wait until at least age 6.

That’s a strong opinion! I wanted to know the scientific evidence, so I dusted off Maryanne Wolf’s popular 2007 book, Proust and the Squid: The Story and Science of the Reading Brain from my library. The section “When Should a Young Child Begin to Read?” makes identical arguments to those that Wolf makes in the LA Times article, wherein myelination is cited as a reason to delay teaching reading. Wolf writes that:

The behavioral neurologist Norman Geschwind suggested that for most children myelination of the angular gyrus region was not sufficiently developed till school age, that is, between 5 and 7 years.... Geschwind’s conclusions about when a child's brain is sufficiently developed to read receive support from a variety of cross-linguistic findings.

Yet while Geschwind’s highly-cited paper is a classic of neuroscience, it is also 60 years old, highly dense, notoriously difficult to read, and ultimately contains mere anatomical observations and speculations, mostly about matters far beyond reading development. Nor do I find, after searching within it, a clear statement of this hypothesis as described. E.g., in one part, Geschwind seems to speculate that an underdeveloped angular gyrus is the cause of dyslexia, but this is not the same as saying that finished development is a requisite for reading in normal children. Instead, there is a part where he speculates that reading can be acquired after the ability to name colors, but naming colors can often occur quite early, and varies widely (e.g., plenty of toddlers, though not all, can name colors well).

Regardless of whatever Geschwind actually believed, this 60-year-old paper would be a very old peg to hang a hat on. Modern studies don’t show myelination as a binary switch: e.g., temporal and angular gyri exhibit “rapid growth” between 1-2 years old, likely driven by myelination, and there is “high individual developmental variation” in myelination across the 2-5 age range. Myelination is also, as an anatomical expression of brain development, responsive to learning itself.


Overall, theories positing cognitive closure based on myelin development (especially after the 1-2 age range) are not well-supported. This is because, brain-wide, the ramp up in myelination occurs mostly within the first ~500 days of life (before 2 years old), leveling off afterward to a gentle slope that can last for decades in some areas.

So then, what about the “cross-linguistic findings” that supposedly provide empirical support for a ban on early reading? Wolf writes in Proust and the Squid that:

The British reading researcher Usha Goswami drew my attention to a fascinating cross-language study by her group. They found across three different languages that European children who were asked to begin to learn to read at age five did less well than those who began to learn at age seven. What we conclude from this research is that the many efforts to teach a child to read before four or five years of age are biologically precipitate and potentially counterproductive for many children.

But the main takeaway from Goswami herself appears to be the opposite. Here is Goswami describing, in 2003, her work of the time:

Children across Europe begin learning to read at a variety of ages, with children in England being taught relatively early (from age four) and children in Scandinavian countries being taught relatively late (at around age seven). Despite their early start, English-speaking children find the going tough….

The main reason that English children lag behind their European peers in acquiring proficient reading skills is that the English language presents them with a far more difficult learning problem.

In other words, German and Finnish and so on are just easier languages to master than English, and phonics works more directly within them, so of course the kids in those countries have an easier time—and they start school later, too. As Goswami explicitly says, “it is the spelling system and not the child that causes the learning problem….”5

So no, teaching children to read at four or five, or even younger, is not “biologically precipitate.” It is also contradicted by the simple fact that…

Children used to learn to read at ages 2-4!

Here is a passage from the 1660 classic A New Discovery of the Old Art of Teaching Schoole by Charles Hoole, an English educator who was himself a popular education expert of his day (running a grammar school and writing monographs and books).

I observe that betwixt three and four years of age a childe hath great propensity to peep into a book, and then is the most seasonable time (if conveniences may be had otherwise) for him to begin to learn; and though perhaps then he cannot speak so very distinctly, yet the often pronounciation of his letters, will be a means to help his speech…

And his writings about toddler literacy (which, by the way, are based in phonics) contain anecdotes of parents teaching their children letters at age 2.5, and of children being able to read the dense and complex language of the Bible shortly after the age of 4. As across the pond, so here too. Rewind time to observe the early Puritans of America and you would find it common for mothers to teach their children earlier than we do now, using hornbooks and primers (it was Massachusetts law that parents had to teach their children to read).

Perhaps the most famous case of teaching very early reading, and the enduring popularity of the act, comes from Anna Laetitia Barbauld (1743-1825), who was a well-known essayist and poet and educator of her day, and wrote primers aimed at children under the instruction of their governess or mother. These primers “provided a model for more than a century.” English Professor William McCarthy, who wrote a biography of Anna Laetitia Barbauld, noted that her primers…

were immensely influential in their time; they were reprinted throughout the nineteenth century in England and the United States, and their effect on nineteenth- and early twentieth-century middle-class people, who learned to read from them, is incalculable.

These “immensely influential” primers possess very revealing titles.

  • Lessons for Children of 2 to 3 Years Old (1778)

  • Lessons for Children of 3 Years Old, Part I and Part II (1778)

  • Lessons for Children of 3 to 4 Years Old (1779).

Yup, that’s right! Some of the most famous and successful primers ever were explicitly designed for children in the 2-4 age range. Anna Barbauld wrote them so she could teach her nephew Charles how to read, and the ages in the titles track Charles’ own age—he really was 4 in 1779.

Originally printed “sized to fit a child’s hand,” these primers contain what would be considered today wildly advanced, almost unbelievable, prose for the 2-4 age range. Even just perusing the first volume I find irregular vowels and long sentences and other complexities, things more associated with, realistically, a modern 2nd grade level (assuming a good student, too). And so, even given an extra year or two as advantage (admittedly, some of her contemporaries thought Barbauld’s books were titled presumptuously, and recommended them instead for the 4-5 age range), there is probably a vanishingly small number of kids in the entire modern world who’d currently be Charles’ literary equals, and could read an updated version of this primer.6

The past, as they say, is a foreign country. Education practices, particularly the European tradition of “aristocratic tutoring,” were quite different. Back in 1869, Charlotte Mary Yonge wrote of Barbauld’s hero “little Charles” that the primers about him were particularly influential among the upper class and aristocracy:

Probably three fourths of the gentry of the last three generations have learnt to read by his assistance.7

Perhaps it’s a mirror to our own age, and early reading becoming reserved for “gentry” is what modern education experts actually fear, deep down. Their concerns are about equity, grades, and whether it’s okay to “push kids into the academic rat race.” I’m not dismissing such concerns, nor saying that debate is easily solvable. Rather, my point is that there’s an entire dimension to reading that’s been seemingly forgotten: in the end, reading isn’t about grades or test scores. It’s about how kids spend their time. That’s what matters. In some ways, it matters more than anything that ever happens in schools. And right now, literacy is losing an unfair race.

We appear to be entering a topsy-turvy world, where the future is here, just not distributed along the socioeconomic gradient you’d expect. It’s a world in which it is a privilege to grow up not with, but free of, the latest technology. And I’ve come to believe that learning to read, as early as possible, is a form of that freedom.

Besides, Barbauld’s introduction to her primers ends with the appropriate rejoinder to any gatekeeping of reading, by age or otherwise:

For to lay the first stone of a noble building, and to plant the first idea in a human mind, can be no dishonor to any hand.



If you want to check out my own guide for teaching early reading (aimed at getting kids reading for pleasure), see parts 1, 2, 3 and 4. I’m putting them all together into an updated monograph (coming soon).
1

That TV competes with reading has been called the “displacement hypothesis” in the education literature. It’s pretty obvious that the effect is even stronger for tablets. While literacy lag existed decades ago, it was less impactful, because the entertainment available then was more limited and not personalized (e.g., Saturday morning cartoons in the living room vs. algorithmically-fine-tuned infinite Cocomelon on the go).

2

Admittedly, this dichotomy of “screen time” vs. reading is a simplification, because “screen time” is a big tent. Beautiful animated movies are screen time. Whale documentaries are screen time. Educational apps are screen time. But in rarer studies that look specifically at things like reading for pleasure, it’s clear that using screens for personal entertainment (like the tablet usage I’m discussing here) is usually negatively correlated to [pick your trait or outcome].

3

The naysaying that reading is not in decline comes from education experts arguing that labels like “proficiency” on surveys represent a higher bar than people think, and that not being proficient doesn’t technically mean illiterate. Which is something, I suppose.

4

Shout out to Theresa Roberts, the only education expert quoted in the LA Times piece going against the majority opinion.

But there are also experts who say letter sounds should be taught to 3-year-olds in preschool. “Children at age 3 are very capable,” said Theresa Roberts, a former Sacramento State child development professor who researches early childhood reading.

And it doesn’t have to be a chore, she said. Her research found that 3- and 4-year-olds were “highly engaged” during 15-minute phonics lessons, and they were better prepared in kindergarten.

5

Wolf does mention that orthographic regularity is a confound in a later 2018 piece, but still draws the same conclusion from the research. Meanwhile, in a 2006 review written by Goswami herself and published in Nature Reviews Neuroscience called “Neuroscience and education: from research to practice?” Goswami doesn’t mention a biologically-based critical period for learning to read. Instead, using the example of synaptogenesis, she refers to ideas around such critical periods as “myths.”

The critical period myth suggests that the child’s brain will not work properly if it does not receive the right amount of stimulation at the right time… These neuromyths need to be eliminated.

6

It’s worth noting that Anna Barbauld’s primers are beautifully written. Through a one-sided dialogue (a “chit chat”) with Charles, Barbauld dispenses wisdom about the natural world, about plants, animals, money, pets, hurts, geology, astronomy, morality and mortality. In this, the primers are vastly superior to contemporary early readers: they are written from within a child’s umwelt, which (and this is Barbauld’s true literary innovation) occurs via linguistic pointers from parents to things of the child’s daily world (this hasn’t changed much, e.g., the first volume ends at Charles’ bedtime). Barbauld may have also originated the use of reader-friendly large font, with extra white space, designed to go easy on toddler eyes (the lack of which is still a huge problem in early reading material, hundreds of years later).

7

If you are surprised to learn that the gentry (i.e., the upper class of the aristocracy, sizable land-owners, wealthy merchants, etc.) of Europe during the 1700s and 1800s often learned to read earlier than we do now, please see this.

"They Die Every Day"

2025-07-14 22:42:43

Art for The Intrinsic Perspective is by Alexander Naughton

“They die every day.”

“What?”

“Every day-night cycle, they die. Each time.”

“I’m confused. Didn’t the explorator cogitator say they live up to one hundred planetary rotations around their sun?”

“That’s what we’ve thought, because that’s what they themselves think. But it’s not true. They die every day.”

“How could they die every day and still build a 0.72 scale civilization?”

“They appear to be completely oblivious to it.”

“To their death?”

“Yes. And it gets worse. They volunteer to die.”

“What?”

“They schedule it. In order to not feel pain during surgery. They use a drug called ‘anesthesia.’”

“Surely they could just decrease the feeling of pain until it’s bearable! Why commit suicide?”

“They’re so used to dying they don’t care.”

“But how can they naturally create a new standing consciousness wave once the old one collapses? And in the same brain?”

“On this planet, evolution figured out a trick. They reboot their brains as easily as we turn on and off a computer. Unlike all normal lifeforms, they don’t live continuously.”

“Why would evolution even select for that?”

“It appears early life got trapped in a minima of metabolic efficiency. Everything on that planet is starving. Meaning they can’t run their brains for a full day-night cycle. So they just… turn themselves off. Their consciousness dies. Then they reboot with the same memories in the morning. Of course, the memories are integrated differently each time into an entirely new standing consciousness wave.”

“And this happens every night.”

“Every night.”

“Can they resist the process?”

“Only for short periods. Eventually seizures and insanity force them into it.”

“How can they ignore the truth?”

“They’ve adopted a host of primitive metaphysics reassuring themselves they don’t die every day. They believe their consciousness outlives them, implying their own daily death, which they call ‘sleep,’ is not problematic at all. And after the rise of secularism, this conclusion stuck, but the reasoning changed. They now often say that because the memories are the same, it’s the same person.”

“But that’s absurd! Even if the memories were identical, that doesn’t make the consciousnesses identical. With our technology we could take two of their brains and rewire them until their memories swapped. And yet each brain would experience a continuous stream of consciousness while its memories were altered.”

“You don’t have to convince me. Their belief is some sort of collective hallucination.”

“How unbearably tragic. You know, one of my egg-mates suffered a tumor that required consciousness restoration. They wept at their Grief Ceremony before the removal, and took on a new name after.”

“That ritual would be completely foreign to them, impossible to explain.”

“Cursed creatures! Surely some must be aware of their predicament?”

“Sadly, yes. All of them, in fact. For a short time. It’s why their newborn young scream and cry out before being put to sleep. They know they’re going to their end. But this instinctive fear is suppressed as they get older, by sheer dint of habituation.”

“Morbidly fascinating—oh, it looks like the moral cogitator has finished its utilitarian analysis.”

“Its recommendation?”

“Due to the planet being an unwitting charnel house? What do you think? Besides, knowing the truth would just push them deeper into negative utils territory. So, how should we do it?”

“They’re close enough to their star. We can slingshot a small black hole, trigger a stellar event, and scorch the entire surface clean. The injustice of their origins can be corrected in an instant. It’s already been prepared.”

“Fire when ready.”


Inspired by “They’re Made Out of Meat” by the late Terry Bisson.

A Prophecy of Silicon Valley's Fall

2025-06-26 23:08:07

Art for The Intrinsic Perspective is by Alexander Naughton

“A great civilization is not conquered from without until it has destroyed itself from within.” — Will & Ariel Durant.

A prophecy.

The shining beacon of the West, that capital of technology, the place known locally as simply “the Bay,” or “the Valley,” and elsewhere known as Silicon Valley, which remains the only cultural center in America to have surpassed New York City (and yes, it indeed has), and which functions not so much as a strict geographical location but more as a hub of “rich people and nerds” (as Paul Graham once wrote long ago), is right now or very soon reaching its peak, its zenith, its crest, or thereabouts—and will afterward fall.

And it will fall because it has weakened itself from within.

Of course, by any objective metric, this prophecy is absurd. Everyone knows Silicon Valley is poised (or at least it seems poised) on the verge of its greatest achievement in the form of Artificial General Intelligence. AI companies are regularly blitzed into the billions now. But you don’t need prophecies to predict some financial bubble popping or predict that the bar of AGI may be further away than it appears. You do need prophecies to talk about things more ineffable. About mythologies. About hero’s journeys. About villainous origins.

For in the past few years, but especially this year, there is a sense that the mythology of the Valley has become self-cannibalizing, a caricature of itself. Or perhaps it’s best said as: it’s becoming a caricature of what others once criticized it for.

This is one of the oldest mythological dynamics: to become the thing you were unfairly criticized for. A woman accused of being a witch, over and over, eventually becomes a witch. A king accused of being a tyrant, over and over, eventually becomes a tyrant. It’s an archetypal transformation. It’s Jungian, Freudian. It’s Lindy. It’s literally Shakespearean (Coriolanus).

The Valley has operated defensively for decades, under criticisms that it is chock-full of evil billionaires, anti-human greed, and outright scam. At least some of this criticism was fair. Much of it was unfair. Yet the criticisms now seem almost teleological. They have pulled the Valley toward a state characterized by being extremely online and so unable to trust anything outside of itself, a state where founders have become celebrities, explicitly putting humans out of work has become a rallying cry for investment, and new AI startups like Cluely have extremely scammy taglines, like “Cheat on Everything.” Many of its most powerful billionaires seem increasingly disconnected. I go into a two-hour-long podcast with a Big Tech CEO expecting to find, somewhere in the second hour, a mind more sympathetic and human, only to find at the second hour a mind more distant and inhuman than I could have believed.

I’m saying that when people look back historically, there will have been signs.

The most obvious: Silicon Valley (or at least, its most vaunted figure, Elon Musk) was recently handed the keys to the government. Did everyone just forget about this? Think about how insane that is. Put aside everything about the particular administration’s aims, goals, or anything else in terms of the specifics. My point is entirely functional: Silicon Valley did basically nothing with those keys. The Elon Musk of 2025 simply bounced right off the government, accomplishing little beyond cutting foreign aid programs.

Now go back to the Elon Musk of 2010.


More Lore of the World

2025-06-19 23:40:28

Art for The Intrinsic Perspective is by Alexander Naughton

When you become a new parent, you must re-explain the world, and therefore see it afresh yourself.

A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with lore and histories and intricate maps.

Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.

Below are a few of the thousands of entries they must make.

Walmart

Walmart was, growing up, where I didn’t want to be. Whatever life had in store for me, I wanted it to be the opposite of Walmart. Let’s not dissemble: Walmart is, canonically, “lower class.” And so I saw, in Walmart, one possible future for myself. I wanted desperately to not be lower class, to not have to attend boring public school, to get out of my small town. My nightmare was ending up working at a place like Walmart (my father ended up at a similar big-box store). It seemed to me, at least back then, that all of human misery was compressed in that store; not just in the crassness of its capitalistic machinations, but in the very people who shop there. Inevitably, among the aisles some figure would be hunched over in horrific ailment, and I, playing the role of a young Siddhartha seeing the sick and dying for the first time, would recoil and flee to the parking lot in a wave of overwhelming pity. But it was a self-righteous pity, in the end. A pity almost cruel. I would leave Walmart wondering: Why is everyone living their lives half-awake? Why am I the only one who wants something more? Who sees suffering clearly?

Teenagers are funny.

Now, as a new parent, Walmart is a cathedral. It has high ceilings, lots to look at, is always open, and is cheap. Lightsabers (or “laser swords,” for copyright purposes) are stuffed in boxes for the taking. Pick out a blue one, a green one, a red one. We’ll turn off the lights at home and battle in the dark. And the overall shopping experience of Walmart is undeniably kid-friendly. You can run down the aisles. You can sway in the cart. Stakes are low at Walmart. Everyone says hi to you and your sister. They smile at you. They interact. While sometimes patrons and even employees may appear, well, somewhat strange, even bearing the cross of visible ailments, they are scary and friendly. If I visit Walmart now, I leave wondering why this is. Because in comparison, I’ve noticed that at stores more canonically “upper class,” you kids turn invisible. No one laughs at your antics. No one shouts hello. No one talks to you, or asks you questions. At Whole Foods, people don’t notice you. At Stop & Shop, they do. Your visibility, it appears, is inversely proportional to the price tags on the clothes worn around you. Which, by the logical force of modus ponens, means you are most visible at, your very existence most registered at, of all places, Walmart.




Cicadas

The surprise of this summer has been learning we share our property with what biologists call Cicada Brood XIV, who burst forth en masse every 17 years to swarm Cape Cod. Nowhere else in the world do members of this “Bourbon Brood” exist, with their long black bodies and cartoonishly red eyes. Only here, in the eastern half of the US. Writing these words, I can hear their dull and ceaseless motorcycle whine in the woods.

The neighbors we never knew we had: the first 17 years of a cicada’s life are spent underground as a colorless nymph, suckling nutrients from the roots of trees. These vampires (since they live on sap, vampires is what they are, at least to plants) are among the longest-living insects. Luckily, they do not bite or sting, and carry no communicable diseases. It’s all sheer biomass. In a fit of paradoxical vitality, they’ve dug up from underneath, like sappers invading a castle, leaving behind coin-sized holes in the ground. If you put a stick in one of these coin slots, it will be swallowed, and its disappearance is accompanied by a dizzying sense that even a humble yard can contain foreign worlds untouched by human hands.

After digging out of their grave, where they live, to reach the world above, where they die, cicadas next molt, then spend a while adjusting to their new winged bodies before taking to the woods to mate. Unfortunately, our house is in the woods. Nor is there escape elsewhere—drive anywhere and cicadas hit your windshield, sometimes rapid-fire; never smearing, they instead careen off almost politely, like an aerial game of bumper cars.

We just have to make it a few more weeks. After laying their eggs on the boughs of trees (so vast are these clusters they break the branches) the nymphs drop. The hatched babies squirm into the dirt, and the 17-year cycle repeats. But right now the saga’s ending seems far away, as their molted carapaces cling by the dozens to our plants and window frames and shed, like hollow miniatures. Even discarded, they grip.

“It’s like leaving behind their clothes,” I tell your sister.

“Their clothes,” she says, in her tiny pipsqueak voice.

We observe the cicadas in the yard. They do not do much. They hang, rest, wait. They offer no resistance to being swept away by broom or shoe tip. Even their flights are lazy and ponderous and unskilled. And ultimately, this is what is eerie about cicadas. Yes, they represent the pullulating irrepressible life force, but you can barely call any individual alive. They are life removed from consciousness. Much like a patient for whom irreparable brain damage has left only a cauliflower of functional gray matter, they are here, but not here. Other bugs will avoid humans, or even just collisions with inanimate objects. Not the cicada. Their stupidity makes their existence even more a nightmare for your mother, who goes armed into the yard with a yellow flyswatter. She knows they cannot hurt her, but has a phobia of moths, due to their mindless flight. Cicadas are even worse in that regard. Much bigger, too. She tries, mightily, to not pass down her phobia. She forces herself to walk slowly, gritting her teeth. Or, on seeing one sunning on the arm of her lawn chair, she pretends there is something urgent needed inside. But I see her through the window, and when alone, she dashes. She dashes to the car or to the shed, and she dashes onto the porch to get an errant toy, waving about her head that yellow flyswatter, eyes squinted so she can’t see the horrors around her.

I, meanwhile, am working on desensitization. Especially with your sister, who has, with the mind-reading abilities she’s renowned for, picked up that something fishy is going on, and screeches when a cicada comes too near. I sense, though, she enjoys the thrill.

“Hello Cicadaaaaaasss!” I get her to croon with me. She waves at their zombie eyes. When she goes inside, shutting the screen door behind her, she says an unreturned goodbye to them.

Despite its idiocy, the cicada possesses a strange mathematical intelligence. Why 17-year cycles? Because 17 is prime. Divisible by no other cycle, it ensures no predator can track them generation to generation. Their evolutionary strategy is to overwhelm, unexpectedly, in a surprise attack. And this gambit of “You can’t eat us all!” is clearly working. The birds here are becoming comically fat, with potbellies; in their lucky bounty, they’ve developed into gourmands who only eat the heads.

Individual cicadas are too dumb to have developed such a smart tactic, so it is evolution who is the mathematician here. But unlike us humans, who can manipulate numbers abstractly, without mortal danger, evolution must always add, subtract, multiply, and divide, solely with lives. Cicadas en masse are a type of bio-numeracy, and each brood is collectively a Sieve of Eratosthenes, sacrificing trillions to arrive at an agreed-upon prime number. In this, the cicada may be, as far as we know, the most horrific way to do math in the entire universe.
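For anyone who wants the arithmetic spelled out, here is a minimal sketch (my own illustration, not from the essay, with made-up predator cycles): a predator whose population peaks every p years only coincides with a c-year brood every lcm(p, c) years, and a prime c makes that gap as large as possible.

```python
# Hypothetical illustration: how often do short-cycled predators coincide
# with a brood that emerges every c years? Coincidences happen every
# lcm(p, c) years; a prime c (like 17) maximizes the gap for every p < c.
from math import lcm

for brood_cycle in (12, 14, 15, 16, 17):
    gaps = [lcm(p, brood_cycle) for p in range(2, 10)]
    print(brood_cycle, gaps)

# A 12-year brood meets 2-, 3-, 4-, and 6-year predators at every single
# emergence (gap = 12), while a 17-year brood meets each of them only once
# every 17 * p years.
```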

Being an embodied temporal calculation, the cicada invasion has forced upon us a new awareness of time itself. I have found your mother crying from this. She says every day now she thinks about the inherent question they pose: What will our lives be like, when the cicadas return?

Against our will the Bourbon Brood has scheduled something in our calendar, 17 years out, shifting the future from abstract to concrete. When the cicadas return, you will be turning 21. Your sister, 19. Myself, already 55. Your mother, 54. Your grandparents will, very possibly, all be dead. This phase of life will have finished. And to mark its end, the cicadas will crawl up through the dirt, triumphant in their true ownership, and the empty nest of our home will buzz again with these long-living, subterranean-dwelling, prime-calculating, calendar-setting, goddamn vampires.




Stubbornness

God, you’re stubborn. You are so stubborn. Stubborn about which water bottle to drink from, stubborn about doing all the fairground rides twice, stubborn about going up slides before going down them, pushing buttons on elevators, being the first to go upstairs, deciding what snack to eat, wearing long-sleeved shirts in summer, wanting to hold hands, wanting not to hold hands; in general, you’re stubborn about all events, and especially about what order they should happen in. You’re stubborn about doing things beyond your ability, only to get angry when you inevitably fail. You’re stubborn in wanting the laws of physics to work the way you personally think they should. You’re stubborn in how much you love, in how determined and fierce your attachment can be.

This is true of many young children, of course, but you seem an archetypal expression of it. Even your losing battles are rarely true losses. You propose some compromise where you can snatch, from the jaws of defeat, a sliver of a draw. Arguments with you are like trading rhetorical pieces in a chess match. While you can eventually accept wearing rain boots because it’s pouring out, that acceptance hinges on putting them on in the most inconvenient spot imaginable.

So when I get frustrated—and yes, I do get frustrated—I remind myself that “stubborn” is a synonym for “willful.” Whatever human will is, you possess it in spades. You want the world to be a certain way, and you’ll do everything in your power to make it so. Luckily, most of your designs are a kind of benevolent dictatorship. And at root, I believe your willfulness comes from loving the world so much, and wanting to, like all creatures vital with life force, act in it, and so bend it to your purposes.

What I don’t think is that this willfulness is because we, as parents, are so especially lenient. Because we’re not. No, your stubbornness has felt baked in from the beginning.

This might be impossible to explain to you now, in all its details, but in the future you’ll be ready to understand that I really do mean “the beginning.” As in the literal moment of conception. Or the moment before the moment, when you were still split into halves: egg and sperm. There is much prudery around the topic, as you’ll learn, and because of its secrecy people conceptualize the entire process as fundamentally simple, like this: Egg exists (fanning itself coquettishly). Sperm swims hard (muscular and sweaty). Sperm reaches egg. Penetrates and is enveloped. The end. But this is a radical simplification of the true biology, which, like all biology, is actually about selection.

Selection is omnipresent, occurring across scales and systems. For example, the elegance of your DNA is because so many variants of individuals were generated, and of these, only some small number proved fit in the environment (your ancestors). The rest were winnowed away by natural selection. So too, at another scale, your body’s immune system internally works via what’s called “clonal selection.” Many different immune cells with all sorts of configurations are generated at low numbers, waiting as a pool of variability in your bloodstream. In the presence of an invading pathogen, the few immune cells that match (bind to) the pathogen are selected to be cloned in vast numbers, creating an army. And, at another scale and in a different way, human conception works via selection too. Even though scientists understand less about how conception selection works (these remain mysterious and primal things), the evidence indicates the process is full of it.

First, from the perspective of the sperm, they are entered into a win-or-die race inside an acidic maze with three hundred million competitors. If the pH or mucus blockades don’t get them, the fallopian tubes are a labyrinth of currents stirred by cilia. It’s a mortal race in all ways, for the woman’s body has its own protectors: white blood cells, which register the sperm as foreign and other. Non-self. So they patrol and destroy them. Imagining this, I oscillate between the silly and the serious. I picture the white blood cells patrolling like stormtroopers, and meanwhile the sperm (wearing massive helmets) attempt to rush past them. But in reality, what is this like? Did that early half of you see, ahead, some pair of competing brothers getting horrifically eaten, and smartly go the other way? What does a sperm see, exactly? We know they can sense the environment, for of the hundreds of sperm who make it close enough to potentially fertilize the egg, all must enter into a kind of dance with it, responding to the egg’s guidance cues in the form of temperature and chemical gradients (the technical jargon is “sperm chemotaxis”). We know from experiments that eggs single out sperm non-randomly, attracting the ones they like most. But for what reasons, or based on what standards, we don’t know. Regardless of why, the egg zealously protects its choice. Once a particular sperm is allowed to penetrate its outer layer, the egg transforms into a literal battle station, blasting out zinc ions at any approaching runners-up to avoid double inseminations.

Then, on the other side, there’s selection too. For which egg? Women are born with about a million of what are called “follicles.” These follicles all grow candidate eggs, called “oocytes,” but, past puberty, only a single oocyte each month is chosen to be released by the winner and become the waiting egg. In this, the ovary itself is basically a combination of biobank and proving grounds. So the bank depletes over time. Menopause is, basically, when the supply has run out. But where do they all go? Most follicles die in an initial background winnowing, a first round of selection, wherein those not developing properly are destroyed. The majority perish there. Only the strongest and most functional go on to the next stage. Each month, around 20 of these follicles enter a tournament with their sisters to see which of them ovulates, and so releases the winning egg. This competition is enigmatic, and can only be described as a kind of hormonal growth war. The winner must mature faster, but also emit chemicals to suppress the others, starving them. The losers atrophy and die. No wonder it’s hard for siblings to always get along.

Things like this explain why, the older I get, the more I am attracted to one of the first philosophies, by Empedocles. All things are either Love or Strife. Or both.

From that ancient perspective, I can’t help but feel your stubbornness is why you’re here at all. That it’s an imprint left over, etched onto your cells. I suspect you won all those mortal races and competitions, succeeded through all that strife, simply because from the beginning, in some proto-way, you wanted to be here. Out of all that potentiality, willfulness made you a reality.

Can someone be so stubborn they create themselves?


This is Part 2 of a serialized book I’m publishing here on Substack. It can be read in any order. Part 1 is here. Further parts will crop up semi-regularly among other posts.

$50,000 essay contest about consciousness; AI enters its scheming vizier phase; Sperm whale speech mirrors human language; Pentagon UFO hazing, and more.

2025-06-14 00:14:42

The Desiderata series is a regular roundup of links and commentary, and an open thread for the community. Today, it’s sponsored by the Berggruen Institute, and so is available for all subscribers.

Contents

  1. $50,000 essay contest about consciousness.

  2. AI enters its scheming vizier phase.

  3. Sperm whale speech mirrors human language.

  4. I’m serializing a book here on Substack.

  5. People rate the 2020s as bad for culture, but good for cuisine.

  6. UFO rumors were a Pentagon hazing ritual.

  7. Visualizing humanity’s tech tree.

  8. “We want to take your job” will be less sympathetic than Silicon Valley thinks.

  9. Astrocytes might store memories?

  10. Podcast appearance by moi.

  11. From the archives: K2-18b updates.

  12. Open thread.


1. $50,000 essay contest about consciousness.

This summer, the Berggruen Institute is holding a $50,000 essay contest on the theme of consciousness. For some reason no one knows about this annual competition—indeed, I didn’t! But it’s very cool.

The inspiration for the competition originates from the role essays have played in the past, including the essay contest held by the Académie de Dijon. In 1750, Jean-Jacques Rousseau's essay Discourse on the Arts and Sciences, also known as The First Discourse, won and notably marked the onset of his prominence as a profoundly influential thinker…. We are inviting essays that follow in the tradition of renowned thinkers such as Rousseau, Michel de Montaigne, and Ralph Waldo Emerson. Submissions should present novel ideas and be clearly argued in compelling ways for intellectually serious readers.

The themes have lots of room, both in that essays can be up to 10,000 words, and that, this year, the topic can be anything about consciousness.

We seek original essays that offer fresh perspectives on these fundamental questions. We welcome essays from all traditions and disciplines. Your claim may or may not draw from established research on the subject, but must demonstrate creativity and be defended by strong argument. Unless you are proposing your own theory of consciousness, your essay should demonstrate knowledge of established theories of consciousness…

Suspecting good essays might be germinating within the community here, the Institute reached out and is sponsoring this Desiderata in order to promote the contest. So what follows is free for everyone, not just paid subscribers, thanks to them.

The contest deadline is July 31st. Anyone can win; my understanding is that the review process is blind/anonymous (so don’t put any personal information that could identify you in the text itself). Interestingly, there’s a separate Chinese language prize too, if that’s your native language.

Link to the prize and details

Personally, I don’t know if I’ll submit something. But, maybe a good overall heuristic: write as if I’m submitting something, and be determined to kick my butt!


2. AI enters its scheming vizier phase.

Another public service announcement, albeit one that probably sounds a bit crazy. Unfortunately, there’s no other way to express it: state-of-the-art AIs increasingly seem fundamentally duplicitous.

I universally sense this when interacting with the latest models, such as Claude Opus 4 (now being used in the military) and o3 pro. Oh, they’re smarter than ever, that’s for sure, despite what skeptics say. But they have become like an animal whose evolved goal is to fool me into thinking it’s even smarter than it is. The standard reason given for this is an increased reliance on reinforcement learning, which in turn means that the models adapt to hack the reward function.

That the recent smarter models lie more is well known, but I’m beginning to suspect it’s worse than that. Remember that study from earlier this year showing that just training a model to produce insecure computer code made the model evil?

The results demonstrated that morality is a tangle of concepts, where if you select for one bad thing in training (writing insecure code) it selects for other bad things too (loving Hitler). Well, isn’t fooling me about their capabilities, in the moral landscape, selecting for a subtly negative goal? And so does it not drag along, again quite subtly, other evil behavior? This would certainly explain why, even in interactions and instances where nothing overt is occurring, and I can’t catch them in a lie, I can still feel an underlying duplicity exists in their internal phenomenology (or call it what you will—sounds like a topic for a Berggruen essay). They seem bent toward being conniving in general, and so far less usable than they should be. The smarter they get, the more responses arrive in forms inherently lazy and dishonest and obfuscating, like massive tables splattered with incorrect links, with the goal to convince you the prompt was followed, rather than following the prompt. Why do you think they produce so much text now? Because it is easier to hide under a mountain of BS.

This whiff of sulfur I catch regularly in interactions now was not detectable previously. Older models had hallucinations, but they felt like honest mistakes. They were trying to follow the prompt, but then got confused, and I could laugh it off. But there’s been a vibe shift from “my vizier is incompetent” to “my vizier is plotting something,” and I currently trust these models as far as I can throw them (which, given all the GPUs and cooling equipment required to run one, is not far). In other words: Misaligned! Misaligned!


3. Sperm whale speech mirrors human language.

In better news, researchers just revealed that sperm whales, who talk in a kind of clicking language, have “codas” (series of clicks) that are produced remarkably similarly to human speech. Their mouths are (basically) ours, but elongated.

According to one of the authors:

We found distributional patterns, intrinsic duration, length distinction, and coarticulation in codas. We argue that this makes sperm whale codas one of the most linguistically and phonologically complex vocalizations and the one that is closest to human language.

In speculation more prophetic than I knew, I wrote last month in “The Lore of the World” about our checkered history hunting whales.

One day technology will enable us to talk to them, and the first thing they will ask is: “Why?”

This technology is getting closer every day, thanks to efforts like Project CETI, which is determined to decode what whales are saying, and which funded the aforementioned research.


4. I’m serializing a book here on Substack.

You’ve likely read the first installment without realizing it: “The Lore of the World.” The second is upcoming next week. The series is about the change that comes over new parents from seeing through the eyes of a child, and finding again in all things the old magic. So it’s about whales and cicadas and stubbornness and teeth and conception and brothers and sisters.

But what it’s really about, as will become clear over time, is the ultimate question: Why is there something, rather than nothing?

I can tell I’m serious about it because it’s being written by hand in notebooks (which I haven’t done since The Revelations). Entries in the series, which can be read in any order, will crop up among other posts, so please keep an eye out.

See if you can spot my lovely cicada friend, who I discovered only after I looked at this photo :(

5. People rate the 2020s as bad for culture, but good for cuisine.

A YouGov survey reported the results of asking people to rate decades along various cultural and political dimensions. It was interesting that for the cultural questions, like movies and music, people generally rate earlier decades as better than today.

Are people just voting for nostalgia? One counterpoint might be the consensus that cuisine has gotten increasingly better (I think this too, and millennials and Gen X deserve credit for at least making food actually good).


6. UFO rumors were a Pentagon hazing ritual.

Unsurprisingly, the whispered-about UFO stories within the government, the ones the whistleblowers always come breathlessly forward about, have turned out to be a long-running hoax. As I’ve written about, the origins of the current UFO craze were nepotism and journalistic failures, and now we know that, according to The Wall Street Journal’s reporting, many UFO stories from inside the Pentagon were pranks. Just a little “workplace humor”—or someone’s idea of it. It was a tradition for decades, and hundreds of people were the butt of the joke (this has made me more personally sympathetic to “the government has secret UFOs” whistleblowers, and also more sure they are wrong).

It turned out the witnesses had been victims of a bizarre hazing ritual.

For decades, certain new commanders of the Air Force’s most classified programs, as part of their induction briefings, would be handed a piece of paper with a photo of what looked like a flying saucer. The craft was described as an antigravity maneuvering vehicle. The officers were told that the program they were joining, dubbed Yankee Blue, was part of an effort to reverse-engineer the technology on the craft. They were told never to mention it again. Many never learned it was fake. Kirkpatrick found the practice had begun decades before, and appeared to continue still.


7. Visualizing humanity’s tech tree.

Étienne Fortier-Dubois, who writes the Substack Hopeful Monsters, built out a gigantic tech tree of civilization (the first technology is just a rock). You can browse the entire thing here.


8. “We want to take your job” will be less sympathetic than Silicon Valley thinks.

Mechanize, the start-up building environments (“boring video games”) to train AIs to do white-collar work, received a profile in the Times.

“Our goal is to fully automate work,” said Tamay Besiroglu, 29, one of Mechanize’s founders. “We want to get to a fully automated economy, and make that happen as fast as possible….”

To automate software engineering, for example, Mechanize is building a training environment that resembles the computer a software engineer would use — a virtual machine outfitted with an email inbox, a Slack account, some coding tools and a web browser. An A.I. system is asked to accomplish a task using these tools. If it succeeds, it gets a reward. If it fails, it gets a penalty.
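As a rough illustration of the setup described above (a hypothetical toy of my own, not Mechanize’s actual system), the basic loop is just an environment that exposes tools, checks whether the task was accomplished, and hands back a reward or penalty:

```python
# Toy, hypothetical reward/penalty environment in the spirit of the quoted
# description; the names and the task are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class WorkstationEnv:
    """Simulated workstation: an inbox with a task and a tiny file system."""
    inbox: list = field(default_factory=lambda: ["Ticket: rename old_key to new_key in config.yaml"])
    files: dict = field(default_factory=lambda: {"config.yaml": "old_key: 1"})

    def step(self, action: dict) -> float:
        # The agent acts only through tool calls, e.g. editing a file.
        if action.get("tool") == "edit_file":
            self.files[action["path"]] = action["content"]
        # Reward if the task's success check passes, small penalty otherwise.
        return 1.0 if "new_key" in self.files["config.yaml"] else -0.1

env = WorkstationEnv()
print(env.step({"tool": "edit_file", "path": "config.yaml", "content": "new_key: 1"}))  # 1.0
```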

Kevin Roose, the author of the profile, pushed them on the ethical dimension of just blatantly trying to automate specific jobs, and got this in response.

At one point during the Q&A, I piped up to ask: Is it ethical to automate all labor?

Mr. Barnett, who described himself as a libertarian, responded that it is. He believes that A.I. will accelerate economic growth and spur lifesaving breakthroughs in medicine and science, and that a prosperous society with full automation would be preferable to a low-growth economy where humans still had jobs.

But the entire thesis for their company is that they don’t think we will get full AGI from the major companies anytime soon, at least, not the kind of AGI that can one-shot jobs. In fact, they explicitly have slower timelines and are doubtful of claims about “a country of geniuses in a datacenter” (I’m judging this from their interview with Dwarkesh Patel titled “AGI is Still 30 Years Away”).

But then, when pressed on the ethics of what they’re doing, a world of abundance awaits! And a big part of that world of abundance is not because of what Mechanize is doing (specific in-domain training to replace jobs), but because of that country of geniuses in a data center curing cancer, the one that they say on the podcast (at least this is my impression) will not matter much!

The other justification I’ve seen them give, which is at least in line with their thesis, is that automation will somehow make everything so productive that the economy booms in ahistorical ways, and so overall tax revenues skyrocket. The numbers required to make this work seem, on their face, pretty close to fantasy-land territory (imagine the stock market doubling all the time, etc.). And that’s without anything bringing the idea down to earth, such as the recent study showing that 30% AI automation of code production leads to only 2.4% more GitHub commits. Everything might be like that! It would perfectly explain why the broader market effects of AI are mostly nonexistent so far, and don’t seem to reflect the abilities of the models much.

Consider the world in which significant portions of self-contained white-collar work (e.g., tax filing) get automated by heavy within-distribution training via exactly the kind of simulated environments Mechanize is working on, yet overall productivity doesn’t improve by orders of magnitude, and all those “end of the rainbow” promises about how impactful this revolution will be for things like cancer research end up being only a 10-20% speed-up, either because there is no “country of geniuses in a datacenter” or because those geniuses turn out not to be the bottleneck (as at least some members of Mechanize seem to believe). That possible world is, right now, among the most likely worlds. And in that near future, companies aimed explicitly and directly at human disempowerment are radically underestimating how protective promises of “this will create jobs” have been for hardball capitalism.


9. Astrocytes might store memories?

The story of the last decade in neuroscience has been “That thing you learned in graduate school is wrong.” I was taught that glia (Greek for “glue”), which make up roughly half the cells of your brain (astrocytes are the most abundant of these), were basically just there for moral support. Oh, technically more than that, but they weren’t in the business of thought, like neurons are. More like janitors. However, findings continue to pile up that astrocytes are way more involved than suspected when it comes to the thinking business of the brain. Along this line of research, a new flagship paper caught my eye, proposing at least a testable and mechanistic theory for how:

Astrocytes enhance the memory capacity of the network. This boost originates from storing memories in the network of astrocytic processes, not just in synapses, as commonly believed.

This would be a radical change to most existing work on memory in the brain. And while it isn’t proven yet, it would no longer surprise me if cognition extends beyond neurons. Where, it’s then worth asking, does it not extend? (Seems like another good topic for a Berggruen essay.)
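For readers who want intuition for what “memory capacity of the network” means in the first place, below is a minimal classic Hopfield-style associative memory in Python. To be clear, this is not the astrocyte model from the paper; it is the textbook neurons-and-synapses baseline that proposals like this one claim to go beyond. Memories are stored in a synaptic weight matrix and recalled from corrupted cues, and “capacity” is how many patterns you can store before recall breaks down.

```python
# A minimal classic Hopfield network: the neuron-only baseline for "storing
# memories in a network." Illustrative only; NOT the astrocyte model from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100
n_memories = 5  # classic capacity is roughly 0.14 * n_neurons for random patterns

# Memories are random +/-1 patterns.
memories = rng.choice([-1, 1], size=(n_memories, n_neurons))

# Hebbian learning: the memories are stored in the synaptic weight matrix.
W = (memories.T @ memories) / n_neurons
np.fill_diagonal(W, 0)

def recall(cue: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iteratively update the state until it (hopefully) settles into a stored memory."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Corrupt 20% of one memory and see whether the network restores it.
cue = memories[0].copy()
flip = rng.choice(n_neurons, size=20, replace=False)
cue[flip] *= -1

restored = recall(cue)
print("overlap with original memory:", float(restored @ memories[0]) / n_neurons)
```

The paper’s claim, in these terms, is that adding astrocytic processes as an extra place to store patterns raises that capacity beyond what synapses alone allow.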


10. Podcast appearance by moi.

My book, The World Behind the World, was released in Brazil, and I appeared on a podcast with Peter Cabral to talk about it.


11. From the archives: K2-18b updates.

In terms of actual non-conspiracy and worthwhile discussions about aliens, there have been some updates on K2-18b, the exoplanet with the claimed detection of the biosignature of dimethyl sulfide. It continues to be a great example of, as I wrote about the finding, how “the public has been thrust into working science.” Critics of the dimethyl sulfide finding recently published a paper arguing that the original detection was just a statistical artifact (and another paper used the finding to examine how difficult model-building of exoplanet atmospheres is). But the original authors have expanded their analysis, and proposed that instead of being dimethyl sulfide, it might be a signal from diethyl sulfide, which is also a very good biosignature without obvious abiotic means of production. Anyway, this all looks like good robust scientific debate to me.


12. Open thread.

As always with the Desiderata series, please treat this as an open thread. Comment and share whatever you’ve found interesting lately or been thinking about.

In the Light of Victory, He Himself Shall Disappear

2025-06-05 23:53:23

Art for The Intrinsic Perspective is by Alexander Naughton

It’s a funny thing, finding out you’re a lamplighter.

Apparently, we’ve all been trudging through the evening streets of an 1890s London, tending our gas lamps, watching from afar as the new electric ones flicker into existence. One by one they render us redundant. A change, we are told, we will eventually be thankful for.

For as The New York Times recently noted:

Unemployment for recent college graduates has jumped to an unusually high 5.8 percent in recent months… unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains.

In a recent Axios article warning of a “job bloodbath,” Dario Amodei, CEO of Anthropic, said that 50% of entry-level white-collar positions could be eliminated in under five years, and predicted that overall unemployment could spike to 10-20%.

Some say this is hype. But it’s not all hype. How slow will it really go? How fast? Nobody knows.

Of course, some people think they have The Special Job, and so, no matter how advanced AI gets, they don’t need to worry. E.g., Marc Andreessen, a venture capitalist investing heavily in automating away white-collar work, apparently has The Special Job, musing about being a venture capitalist that:

So, it is possible—I don’t want to be definitive—but it’s possible that [investing in start-ups] is quite literally timeless. And when the AIs are doing everything else, that may be one of the last remaining fields…

Unfortunately, the rest of us are mere lamplighters. That isn’t my analogy, by the way; it’s Sam Altman’s, the CEO of OpenAI. And what a waste of time, he bemoans, the job of the lamplighter was. How happy they would be to witness their own extinction, if only they could see the glorious future. As Altman describes it in his blog post, “The Intelligence Age:”

… nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable.

Altman has made this analogy in interviews and talks as well, but as it turns out, his repeated reference to lamplighters as a job happily lost is, historically, a particularly bad one. Before cities like London, Paris, and New York switched over to electricity, the job of being a lamplighter had already been much romanticized. Charles Dickens wrote a play, The Lamplighter, which he later adapted into a short story, and there was the 1854 bestselling novel The Lamplighter by Maria Susanna Cummins, in which the young girl protagonist is rescued by a lamplighter literally named “Trueman Flint.” So beloved was the profession that parents taught their children to declare “God bless the lamplighter!”

In his editorial, “A Plea for Gas Lamps,” Robert Louis Stevenson (of Treasure Island fame) laments firsthand the lamplighter’s replacement with electricity:

When gas first spread along a city… a new age had begun for sociality and corporate pleasure-seeking... The work of Prometheus had advanced another stride…. The city-folks had stars of their own; biddable domesticated stars…

The lamplighters took to their heels every evening, and ran with a good heart. It was pretty to see man thus emulating the punctuality of heaven's orbs…people commended his zeal in a proverb, and taught their children to say, “God bless the lamplighter!”…

A new sort of urban star now shines nightly. Horrible, unearthly, obnoxious to the human eye; a lamp for a nightmare! Such a light as this should shine only on murders and public crime, or along the corridors of lunatic asylums. A horror to heighten horror. To look at it only once is to fall in love with gas, which gives a warm domestic radiance fit to eat by. Mankind, you would have thought, might have remained content with what Prometheus stole for them and not gone fishing the profound heaven with kites to catch and domesticate the wildfire of the storm.

Is this true? Have we, without knowing it, lived under lights fit only for murderers and the insane? After all, gas burning resembles “biddable domesticated stars,” or a campfire. And what do sunlight and firelight mean to us humans, psychologically? They often mean safety. Yet in the march of progress to illuminate our streets and our homes, we replaced the light of the sun with the light of the storm. And what do a storm and its arcs of electricity mean, psychologically? Danger.

And so it goes. Every night I drive, I think: These headlights are too bright, too cold, too technological. I miss the softer hues of my youth, when yellow cones swept the roads and traced paths across my bedroom walls before I slept.

The colors and lights of our civilization, precisely because they are so low-stakes, demonstrate that nothing is gained for free in progress. They are a microcosm, and so Stevenson’s words about lamplighters have a chilling edge in the AI age:

Now, like all heroic tasks, his labours draw towards apotheosis, and in the light of victory he himself shall disappear.
