2025-08-22 00:01:40
For a long time, if you Googled “how to get subscribers on substack,” an old essay of mine would crop up, advising aspiring Substackers to find a cohesive aesthetic. Originally written years ago to celebrate TIP passing 2,000 subscribers, that essay sat so high in the Google rankings for so long that I think it had a beautifying effect on this platform—in fact, I know of at least one prominent Substacker who credits it with inspiring their own design (not to mention the sky-high use of gold as a link color).
Substack is a more serious medium now, and 2,000 subscribers isn’t exactly the big leagues anymore.
New media ventures now regularly launch here on the platform rather than as websites of their own. Most recently, The Argument, self-described as a proudly liberal newsletter, debuted earlier this week with $4 million in funding. Everyone wants to talk about how it recruited a star-studded cast of writers like Matthew Yglesias, and why (or if) liberal magazines get better funding than conservative ones, and what a $20 million valuation of a Substack can possibly be based on, and so on.
But I want to talk about how The Argument started off with a lime green background.
Now, immediately they bent the knee and changed it (although I only saw one complaint on their Welcome post). I wish they’d kept the lime green. At least a little longer, just to see. It was distinct as all get out, and for in-your-face political argumentation, it worked a lot better than the “we are very serious people” salmon underbelly it’s now in a toe-to-toe fight with The Financial Times over. A magazine like The Argument revolves around screenshots of its titles being shared (or hate-shared) on X, and when you were hit with a sudden burst of acidic lime in the timeline, like a pop of flavor, you’d have at least known what you were reading. If your brand is in-your-face liberalism, then it makes sense to have an in-your-face color associated with it. Whoever made that initial (ahem, bold) design decision, and later got overruled, has my sympathy—I can see the vision. Almost taste it, actually. My point is that, lime green or not, aesthetics matter…
They define what you’re doing not just to others, but to yourself. TIP isn’t just what others are looking at; it’s what I’m looking at all day, too. And now, closing in on 65,000 subscribers instead of 2,000, I’ve spent the past few weeks redesigning TIP, starting with the homepage.
But to make decisions about aesthetics, you need to have a conception of self. This is probably the most significant and obvious failure mode: people are attracted to images, or visual vibes, but don’t themselves embody them. They can steal an aesthetic, but can’t create one. You must be able to answer: What are you trying to do? Why are you doing it? What is this thing’s nature?
And over the years I’ve developed a clearer understanding of the nature of writing a newsletter, or at least, my kind of newsletter. The closest point of comparison I know of is a gallery tour of a museum. There’s a certain ambulatory nature to the whole thing. First you’re looking here, and then, somewhere else. Yes, there are common topics and themes and repetitions and so on, but the artistic effect is ultimately collective, rather than individual. I wanted to capture this tour-like atmosphere, so I designed the new TIP homepage around the idea of a literal gallery of images, hung inside a set of old painting frames. This is what it looks like now:
What I liked about my idea to use actual painting frames (these are cleaned up digital images of real frames) is that, much like an art gallery, a significant amount of white space gives each image, and its title, a chance to breathe. And when you go to click on a piece, it’s sort of like stepping into a painting.
To maintain this look, I’ll be picking out a new image for each new post, and I get the additional fun of placing that image inside a chosen frame, from a couple dozen I already have saved and ready.
Meanwhile, the new Welcome page is a sort of infinite ladder I made with these frames: one inside the other, going on forever.
It reflects not only some classic TIP topics (remember when I argued that “Consciousness is a Gödel sentence in the language of science”), but also the structure of a newsletter itself, which sequentially progresses one step at a time (until death do us part).
However, the “paintings” will be, at least for now, reserved for the homepage and link previews. For the posts themselves that land in your inbox, they’ll bear a new masthead. It’s what you saw at the top, and what will be at the top from now on.
It’s created from a very old pattern I found, sometimes called rolwerk, which is a Renaissance technique. Again, there’s a lot of white space here, similar to a gallery. A masthead like this needs to not say too much—it is, after all, the lead for every single post, and so must span genres and moods, all without assumptions or implications. It must be in a flexible stance, much like how a judo expert or swordfighter plants their feet, able to move in one direction or another on a whim. It cannot overcommit.
Not to thwack you on the head with this, but I obviously picked out a lot of things from the Renaissance era for this redesign (many of the frames too).
Why?
Because centuries ago, before there was science, there was “natural philosophy.” It was before the world got split up by specialization, before industrialization and all the inter-departmental walls in universities got built. And yes, there was a certain amateurism to it all! That’s admitted. And there probably is here, too. At the same time, there’s a holistic aspect that feels important to TIP. It’s why I write about science, sure, but also publish lyric essays like The Lore of the World series (more soon!), and education treatises, and stuff on philosophy and metaphysics, and even, occasionally, a bit of fiction. I wanted to capture that spirit with the designs here.
While I might try out using the “paintings” for header images in the future, I’ll be sticking to the masthead for now. I can’t help but feel that what’s arriving in your email should be stripped-down, easy to parse (and load). The design of a Substack needs to get out of the way of the writing, while still giving that little click of recognition about what you’re reading, and why, and preparing for the voice to come.
I think the new Intrinsic Perspective will be influenced by this choice. It may be a little less “here’s a huge centerpiece essay” and a little more “here’s something focused and fast.” Overall, a few less right hooks. A few more left jabs. I’m not talking about any major changes, just pointing out that the new design allows for a faster tempo and reactivity, and we all grow into our designs, in the end.
Of course, I’ll miss the old design. I was the first person on Substack (at least to my knowledge) to actually employ a resident artist who did the header images of every post. Let’s not forget or pass over how, for the past four years, TIP has been illustrated by the wonderful artist Alexander Naughton. And he and I will still be collaborating on some future projects, which you’ll hear more about (and see more of) early next year. But personally, I can’t help but be excited to have a more direct hand in making the homepage what it is, and getting to pick out images myself to make the new “paintings” with.
You can stop reading now if you don’t want to get too meta, but if you’re curious about what I recommend for Substacks in general, read on.
One reason for this extra section is simply that I’d prefer my idea for TIP’s new “museum style” not be immediately stolen and replicated ad nauseam by other Substacks. And I do think you can apply some of the principles I used to come up with something different, but equally interesting. For advice on that, I’ll start with why, counterintuitively…
2025-08-13 22:45:16
There are many internets. There are internets that are bright and clean and whistling fast, like the trains in Tokyo. There are internets filled with serious people talking as if in serious rooms, internets of gossip and heart emojis, and internets of clowns. There are internets you can only enter through a hole under your bed, an orifice into which you writhe.
It’s a chromatic thing that can’t hold a shape for more than an instant. But every year, I get to see the internet through the eyes of subscribers to The Intrinsic Perspective. The community submits its writing from around the web, and I curate and share it.
The quality was truly exceptional this year. The pieces all speak for themselves, and can all be approached on their own terms, so I’ve organized them to highlight how each is worth reading, thinking about, disagreeing with, or simply enjoying; at the very least, they’re worth browsing through at your leisure to find hidden gems of writers to follow.
Please note that:
I cannot fact check each piece, nor is including it an official endorsement of its contents.
Descriptions of each piece, in italics, were written by the authors themselves, not me (but sometimes adapted for readability). What follows is from the community. I’m just the curator here.
I personally pulled excerpts and images from each piece after some thought, to give a sense of them.
If you submitted something and it’s missing, note that it’s probably in an upcoming Part 2.
So here is their internet, or our internet, or at least, the shutter-click frozen image of one possible internet.
1. “Wisdom of Doves” by Doctrix Periwinkle.
Evolved animal behaviors are legion, so why do we choose the examples we do to explain our own?
According to psychologist Jordan Peterson, we are like lobsters. We are hierarchical and fight over limited resources….
Dr. Peterson is a Canadian, and he is describing the North Atlantic American lobster, Homarus americanus. Where I live, lobsters are different.
For instance, they do not fight with their claws, because they do not have claws…. Because they do not have claws, spiny lobsters (Panulirus argus) are preyed upon by tropical fish called triggerfish…. The same kind of hormone signaling that made American lobsters exert dominance and fight each other causes spiny lobsters to cluster together to fight triggerfish, using elaborately coordinated collective behavior. Panulirus lobsters form choreographed “queues,” “rosettes,” and “phalanxes” to keep each other safe from the triggerfish foe. Instead of using claws to engage in combat with other lobsters, spiny lobsters use their antennules—the spindly homologues of claws seen in the photograph above—to keep in close contact with their friends….
If you are a lobster, what kind of lobster are you?
2. “We Know A Good Life When We See It” by Matt Duffy.
A reflection on how fluency replaced virtue in elite culture, and why recovering visible moral seriousness is essential to institutional and personal coherence.
We’ve inherited many of the conditions that historically enabled virtue—stability, affluence, access, mobility—but we’ve lost the clarity on virtue itself. The culture of technocratic primacy rewards singularity: total, often maniacal, dedication to one domain at the expense of the rest…. Singular focus is not a human trait. It is a machine trait. Human life is fragmented on purpose. We are meant to be many things: friend, worker, parent, neighbor, mentor, pupil, citizen.
3. “The Vanishing of Youth” by Victor Kumar, published in Aeon.
Population decline means fewer and fewer young people, which will lead to not just economic decay but also cultural stagnation and moral regress.
Sometimes I’m asked (for example, by my wife) why I don’t want a third child. ‘What kind of pronatalist are you?’ My family is the most meaningful part of my life, my children the only real consolation for my own mortality. But other things are meaningful too. I want time to write, travel and connect with my wife and with friends. Perhaps I’d want a third child, or even a fourth, if I’d found my partner and settled into a permanent job in my mid-20s instead of my mid-30s… Raising children has become enormously expensive – not just in money, but also in time, career opportunities and personal freedom.
4. “Three tragedies that shape human life in age of AI and their antidotes”, by brothers Manh-Tung Ho & Manh-Toan Ho, published in the journal AI & Society.
In this paper, we [the authors] discuss some problems arising in the AI age, and then, drawing from both Western and Eastern philosophical traditions, sketch out some antidotes. Even though this was published in a scientific journal, we published it in a specific section called Curmudgeon Corner, which according to the journal “is a short opinionated letter to the editor on trends in technology, arts, science and society, commenting emphatically on issues of concern to the research community and wider society, with no more than 3 references and 2 co-authors.”
The tragedy of the commons is the problem of in-group conflict driven by a lack of cooperation (and communication): when each individual purely follows his/her own best interest (e.g., raising more cattle to feed on the commons), the collective good is undermined (e.g., the commons will be over-grazed). Thus, we define the AI-driven tragedy of the commons as short-term economic/psychological gains that drive the development, launch, and use of half-baked AI products and AI-generated content that produce superficial information and knowledge, which ends up harming the individual and the collective in the long term.
5. "Of Mice, Mechanisms, and Dementia" by Myka Estes.
Billions spent, decades lost: the cautionary tale of how Alzheimer’s research went all-in on a bad bet.
Another way to understand how groundbreaking these results were thought to be at the time is to simply follow the money. Within a year, Athena Neurosciences, where Games worked, was acquired by Elan Corp. for a staggering $638 million. In the press release announcing the merger, Elan proclaimed that the acquisition “provides the opportunity for us to capitalize on an important therapeutic niche, by combining Athena’s leading Alzheimer’s disease research program with Elan’s established development expertise.” The PDAPP mouse had transformed from laboratory marvel to the cornerstone of a billion-dollar strategy.
But, let’s peer ahead to see how that turned out. By the time Elan became defunct in 2013, they had sponsored not one, not two, but four failed Alzheimer's disease therapeutics, all based on the amyloid cascade hypothesis, hemorrhaging $2 billion in the process. And they weren't alone. Pharmaceutical giants, small biotechs, and research organizations and foundations placed enormous bets on amyloid—bets that, time and again, failed to pay off.
6. “Schrödinger's Chatbot” by R.B. Griggs.
Is an LLM a subject, an object, or some strange new thing in between?
It would be easy to insist that LLMs are just objects, obviously. As an engineer I get it—it doesn’t matter how convincing the human affectations are, underneath the conversational interface is still nothing but data, algorithms, and matrix multiplication. Any projection of subject-hood is clearly just anthropomorphic nonsense. Stochastic parrots!
But even if I grant you that, can we admit that LLMs are perhaps the strangest object that has ever existed?
7. "A Prodigal Son" by Eva Shang.
My journey back to Christianity and why it required abandoning worship of the world.
How miserable is it to believe only in the hierarchy of men? It’s difficult to overstate the cruelty of the civilization that Christianity was born into: Roman historian Mary Beard describes how emperors would intentionally situate blind, crippled, or diseased poor people at the edges of their elaborate banquets to serve as a grotesque contrast to the wealth and health of the elite. The strong did what they willed and the weak suffered what they must. Gladiatorial games transformed public slaughter into entertainment. Disabled infants were left to die in trash heaps or on hillsides. You see why the message of Christ spread like wildfire. What a radical proposition it must have been to posit the fundamental equality of all people: that both the emperor and the cripple are made in the image of God.
8. “Why Cyberpunk Matters” by C.W. Howell.
Though the genre is sometimes thought dated, cyberpunk books, movies, and video games are still relevant. They form a last-ditch effort at humanism in the face of machine dominance.
So, what is it that keeps drawing us to this genre? It is more, I believe, than simply the distinct aesthetic…. It reflects, instead, a deep-seated and long-standing anxiety that modern people feel—that our humanity is at stake, that our souls are endangered, that we are being slowly turned into machines.
9. “You Are So Sensitive” by Trevy Thomas.
This piece is about the 25 percent of our population, myself included, who have a higher sensitivity to the world around us, with both good and bad effects.
As a young girl, I could ride in a car with my father and sing along to every radio song shamelessly loud. He was impressed that I knew all the words even as the musician in him couldn’t help but critique the song itself. “Why does every song have the word ‘baby’ in it?” he’d ask. But then I got to a point where I’d leave a store or promise never to return to a restaurant because of the music I’d heard in it. Some song from that place would be so lodged in my brain that it would wake me in the middle of the night two weeks later…. about a quarter of the population—humans and animals alike—have this increased level of sensitivity. It can show up in various forms, including sensitivity to sound, light, smell, and stimulation.
10. “Solving Popper's Paradox of Tolerance Before Intolerance Ends Civilization” by Dakara.
A solution to preserving the free society without invoking the conflict of Popper's Paradox.
… Are we now witnessing the end of tolerant societies? Is this the inevitable result that eventually unfolds once an intolerant ideology enters the contest for ideas and the rights of citizens?…
Have we already reached the point where the opposing ideologies are using force against the free society? They censor speech, intervene in the employment of those they oppose, and will utilize physical violence for intimidation.
11. “Knowledge 4.0” by Davi.
From gossip to machine learning - how we bypassed understanding.
Speech allowed us to transmit knowledge among humans, the written word enabled us to broadcast it across generations, and software removed the cost of accessing that knowledge, while turbocharging our ability to compose any piece of knowledge we created with the existing humanity-level pool. What we now call machine learning came to remove one of the few remaining costs in our quest to conquer the world: creating knowledge. It is not just that workers will lose their jobs in the near future; this is the revolution that will make obsolete much of our intellectual activity for understanding the world. We will be able to craft planes without ever understanding why birds can fly.
12. “Problematic Badass Female Tropes” by Jenn Zuko.
An overview of the seven-part PBFT series, which covers the bait-and-switch of women characters who are supposed to be strong, but end up subservient or weak instead.
The problem that becomes apparent here (as I’m sure you’ve noticed even in only this first folktale example), is that in today’s literature and entertainment, these strong, independent women characters we read about in old stories like Donkeyskin and clever Catherine are all too often subverted, altered, and weakened; either in subtle ways or obvious ways, especially by current pop culture and Hollywood.
13. "The West is Bored to Death" by Stuart Whatley, published in The New Statesman.
An essay on the classical "problem of leisure," and how a society/culture that fails to cultivate a leisure ethic ends up in trouble.
Developing a healthy relationship with free time does not come naturally; it requires a leisure ethic, and like Aristotelian virtue, this probably needs to be cultivated from a young age. Only through deep, sustained habituation does one begin to distinguish between art and entertainment, lower and higher pleasures, titillation and the sublime.
14. “MAGA As The Liberal Shadow” by Carlos.
In a very real sense, liberalism is the root cause of MAGA, and it's very important to understand this to see a way forward.
It’s no wonder that I feel liberalism as the source of this eternal no: it is liberals who define the collective values of our culture, as it is the cities that produce culture, and the cities are liberal. So the voice of the collective in my head is a liberal. My little liberal thought cop, living in my head.
4chan is great because you get to see what happens when someone evicts the liberal cop, the shadow run rampant. Sure, all sorts of very naughty emotions get expressed, and it is quite a toxic place, but it’s like a great sigh, finally, you can unwind, and say whatever the fuck you want, without having to take anyone else’s feelings into account.
15. “The Blowtorch Theory: A New Model for Structure Formation in the Universe” by Julian Gough.
The James Webb Space Telescope has opened up a striking and unexpected possibility: that the dense, compact early universe wasn’t shaped slowly and passively by gravity alone, but was instead shaped rapidly and actively by sustained, supermassive black hole jets, which carved out the cosmic voids, shaped the filaments, and generated the magnetic fields we see all around us today.
An evolved universe, therefore, constructs itself according to an internal, evolved set of rules baked deep into its matter, just as a baby, or a sprouting acorn, does.
The development of our specific universe, therefore, since its birth in the Big Bang, mirrors the development of an organism; both are complex evolved systems, where (to quote the splendid Viscount Ilya Romanovich Prigogine), the energy that moves through the system organises the system.
But universes have an interesting reproductive advantage over, say, animals.
16. “Tea” by Joshua Skaggs.
Joshua Skaggs, a single foster dad, has a 3 a.m. chat with one of his kids.
My second night as a foster dad I wake in the middle of the night to the sound of footsteps. I throw on a t-shirt and find him pacing the living room, a teenager in basketball shorts and a baggy t-shirt….
“I broke into your closet,” he says.
“Oh yeah?” I say….
“I looked at all your stuff,” he says. “I thought about drinking your whiskey, but then I thought, ‘Nah. Josh has been good to me.’ So I just closed the door.”
I’m not sure what to say. I eventually land on: “That’s good. I’m glad you didn’t take anything.”
“It was really easy to break into,” he says. “It only took me, like, three seconds.”
“Wow. That’s fast.”
“I’m really good at breaking into places.”
17. “Notes in Aid of a Grammar of Assent” by Amanuel Sahilu.
Through the twin lenses of literature and science, I take a scanning look at the human tendency to detect and discern personhood.
This is all to say, a main reason for modern skepticism toward serious personification is that we think it’s shoddy theorizing….
But I think few moderns reject serious personification on such rational grounds. It may be just as likely we’re driven to ironic personification after adjusting to the serious form as children, when we’re first learning about language and the world. Then as we got older the grown-ups did a kind of bait-and-switch, and serious personification wasn’t allowed anymore.
18. “Book Review: Griffiths on Electricity & Magnetism” by Tim Dingman.
In adulthood I have read many STEM textbooks cover-to-cover. These are textbooks that are supposed to be standards in their fields, yet most of them are not great reading. The median textbook is more like a reference manual with practice problems than a learning experience.
Given the existence and popularity of nonfiction prose on any number of topics, isn’t it odd that most textbooks are so far from good nonfiction? We have all the pieces, why can’t we put them together? Or are textbooks simply not meant to be read?
Certainly most students don’t read them that way. They skim the chapters for equations and images, mostly depend on class to teach the ideas, then break out the textbook for the problem set and use the textbook as reference material. You don't get the narrative that way.
Introduction to Electrodynamics by David Griffiths is the E&M textbook. We had it in my E&M class in college…. Griffiths is so readable that you can read it like a regular book, cover to cover.
19. “Fine Art Sculpture in the Age of Slop” by Sage MacGillivray.
Exploring analogue wisdom in a digital world: Lessons from a life in sculpture touching on brain lateralization, deindustrialization, Romanticism, AI, and more.
… As Michael Polanyi pointed out, it only takes a generation for some skills to be lost forever. We can’t rely on text to retain this knowledge. The concept of ‘stealing with your eyes’, which is common in East Asia, points to the importance of learning by watching a master at work. Text (and even verbal instruction) is flattening….
These days, such art studio ‘laboratories’ are hard to find. Not only is the environment around surviving studios more sterile and technocratic, but artists increasingly outsource their work to a new breed of big industry: the large art production house. A few sketches, a digital model, or perhaps a maquette — a small model of the intended work — are shared with these massive full-service shops that turn sculpture production from artistic venture into contract work. As the overhead cost of running a studio has increased over time, this big-shop model of outsourcing is often the only viable model for artists who want to produce work at scale….
And just like a big-box retailer can wipe out the local hardware store, the big shop model puts pressure on independent studios that train workers in an artisanal mode and allow the artist to evolve the artwork throughout the production process.
20. “Setting the Table for Evil” by Reflecting History.
About the role that ideology played in the rise and subsequent atrocities of Nazi Germany, and the historical debate between situationism and ideology in explaining evil throughout history.
Some modern “historians” have sought to uncouple Hitler’s ideology from his actions, instead seeking to paint his “diplomacy” and war making as geopolitical reactions to what the Allies were doing. But Hitler’s playbook from the beginning was to connect the ideas of racist nationalism and extreme militarism together, allowing each to justify the existence of the other. Nazi Germany’s war was more than just geopolitical strategic war-making chess, it was conquest and subjugation of racial enemies. The British leadership were “Jewish mental parasites,” the conquest of Poland was to “proceed with brutality!… the aim is the removal of the living forces...,” the invasion of the Soviet Union sought to eliminate “Jewish Bolsheviks,” the war with the United States was fought against President Roosevelt and his “Jewish-plutocratic clique.” Hitler applied his ideology to his conquest and subjugation of dozens of countries and peoples in Europe. He broke nearly every international agreement he ever made, and viewed treaties and diplomacy as pieces of paper to be shredded and stepped over on the way to power. Anyone paying attention to what Hitler said or did in 1923 or 1933 or 1943 had to reckon with the fact that Hitler’s ideology informed everything he did.
21. “Which came first, the neuron or the feeling?” by Kasra.
A reverie on the history and philosophy behind the mind-body problem.
… I do know that life gets richer when you contemplate that either one of these—the neuron and the feeling—could be the true underlying reality. That your feelings might not just be the deterministic shadow of chemicals bouncing around in your brain like billiard balls. That perhaps all self-organizing entities could have a consciousness of their own. That the universe as a whole might not be as dark and cold and empty as it seems when we look at the night sky. That underneath that darkness might be the faintest glimmer of light. Of sentience. A glimmer of light which turns back on itself, in the form of you, asking the question of whether the neuron comes first or the feeling.
22. “Dying to be Alive: Why it's so hard to live your unlived life and how you actually can” by Jan Schlösser.
Exploring the question of why we all act as if we were immortal, even though we all know on an intellectual level that we're going to die.
Becker states that humans are the only species who are aware of their mortality.
This awareness conflicts with our self-preservation instinct, which is a fundamental biological instinct. The idea that one day we will just not exist anymore fills us with terror – a terror that we have to manage somehow, lest we run around like headless chickens all day (hence ‘terror management’).
How do we manage that terror of death?
We do it in one of two ways:
Striving for literal or symbolic immortality
Suppressing our awareness of our mortality
23. “Thirst” by Vanessa Nicole.
Connecting Viktor Frankl’s idea of “the existence of thirst implies the existence of water,” to choosing to live with idealism and devotion.
This is, essentially, how I define being idealistic: a devotion to thirst and belief in the existence of water. To me, idealism isn’t about a hope for a polished utopia—it’s in believing that fulfillment can transform, from an abstract emptiness into the pleasantly refreshed taste in your mouth. (And anyway, there’s a whole universe between parched and utopia.)
24. “A god-sized hole” by Iuval Clejan.
A modern interpretation of Pascal's presumptuous phrase (about a god-sized hole).
People get to feel good about themselves by working hard at something that they get paid for. It also gives them social legitimacy. For some it offers a means of connection with other humans that is hard to achieve outside of work and church. For a few lucky ones it offers a way to express talent and passion. But for most it is an attempt to fill the tribe, family and village-sized holes of their souls.
25. “Have 'Quasi-Inverted Spectrum' Individuals Fallen into Our World, Unbeknownst to Us?” by Ning DY.
Drawing on inconsistencies in neuroimaging and a re-evaluation of first-person reports, this essay argues that synesthesia may not be a cross-activation of senses, but rather a fundamental, 'inverted spectrum-like' phenomenon where one sensory modality's qualia are entirely replaced by another's due to innate properties of the cortex.
I wonder, have we really found individuals similar to those in John Locke's 'inverted spectrum' thought experiment (though different from the original, as this is not a symmetrical swap but rather one modality replacing another)? Imagine if, from birth, our auditory qualia disappeared and were replaced by visual qualia, changing the experienced qualia just as in the original inverted spectrum experiment. How would we describe the world? Naturally, we would use visual elements to name auditory elements, starting from the very day we learned to speak. As for the concepts described by typical people, like pitch, timbre, and intensity, we would need to learn them carefully to cautiously map these concepts to the visual qualia we "hear." Perhaps synesthetes also find us strange, wondering why we give such vastly different names to two such similar experiences?
26. “Elementalia: Chapter I Fire” by Kanya Kanchana.
Drawing from the vast store of our collective imagination across mythology, philosophy, religion, literature, science, and art, this idiosyncratic, intertextual, element-bending essay explores the twined enchantments of fire and word.
My legs and feet are bare—no cloth, no metal, not even nail polish. Strangely, my first worry is that it feels disrespectful to step on life-giving fire. Then I see a mental image of a baby in his mother’s arms, wildly kicking about—but she’s smiling. I better do this before I think too much. I step on the coals. I feel a buzz go up my legs like invisible electric socks but it doesn’t burn. It doesn’t burn.
I don’t run; I walk. I feel calm. I feel good. When I get to the other side, I grin at my friends and turn right around. I walk again.
27. “When Scientists Reject the Mathematical Foundations of Science” by Josh Baker.
By directly observing emergent mechanical behaviors in muscle, I have discovered the basic statistical mechanics of emergence, which I describe in a series of posts on Substack.
Over the past several years, six of these manuscripts were back-to-back triaged by editors at PNAS. Other lower tier journals rejected them for reasons ranging from “it would overturn decades of work” and “it’s wishful thinking” to reasons unexplained. An editorial decision in the journal Entropy flipped from a provisional accept to reject followed by radio silence from the journal.
A Biophysical Journal advisory board rejected some of these manuscripts. In one case, an editor explained that a manuscript was rejected not because the science was flawed, but because the reviewers they would choose would reject it with near certainty.
28. "The Tech is Terrific, The Culture is Cringe" by Jeff Geraghty.
A fighter test pilot and Air Force General answers a challenge put to him directly by Elon Musk.
On a cool but sunny day in May of 2016, in his SpaceX facility in Redmond, Washington, Elon Musk told me that he regretted putting so much technology into the Tesla Model X. His newest model was rolling out that year, and his personal involvement with the design and engineering was evident. If he had it to do over again, he said, he wouldn’t put so much advanced technology into a car….
Since that first ride, I’ve been watching the car drive for almost a year now, and I’m still impressed…
My daughter, however, wouldn’t be caught dead in it. She much prefers to ride the scratched-up old Honda Odyssey minivan. She has an image to uphold, after all.
29. “The Lamps in our House: Reflections on Postcolonial Pedagogy” by Arudra Burra.
In this sceptical reflection on the idea of 'decolonizing' philosophy, I question the idea that we should think of the 'Western philosophical tradition' as in some sense the exclusive heritage of the modern West; I connect this with what I see as certain regrettable nativist impulses in Indian politics and political thought.
I teach philosophy at the Indian Institute of Technology-Delhi. My teaching reflects my training, which is in the Western philosophical tradition: I teach PhD seminars on Plato and Rawls, while Bentham and Mill often figure in my undergraduate courses.
What does it mean to teach these canonical figures of the Western philosophical tradition to students in India?… Some of the leading lights of the Western canon have views which seem indefensible to us today: Aristotle, Hume, and Kant, for instance. Statues of figures whose views are objectionable in similar ways have, after all, been toppled across the world. Should we not at least take these philosophers off their pedestals? …
The Indian context generates its own pressures. A focus on the Western philosophical tradition, it is sometimes thought, risks obscuring or marginalising what is of value in the Indian philosophical tradition. Colonial attitudes and practices might give us good grounds for this worry; recall Macaulay’s famous lines, in his “Minute on Education” (1835), that “a single shelf of a good European library [is] worth the whole native literature of India and Arabia.”
30. “What Happens When We Gamify Reading” by Mia Milne.
How reading challenges led me to prioritize reading more over reading deeply and how to best take advantage of gamification without getting swept away by the logic of the game.
The attention economy means that we’re surrounded by systems designed to suck up our focus to make profit for others. Part of the reason gamification has become so popular is to help people do the things they want to do rather than only do the things corporations want them to do.
31. “Pan-paranoia in the USA” by Eponynonymous.
A brief history of the "paranoid style" of American politics through a New Romantic lens.
As someone who once covered the tech industry, I join Ross Barkan in wondering what good these supposed marvels of modern technology—instantaneous communication, dopamine drips of screen-fed entertainment, mass connectivity—have really done for us. Are we really better off? ….
But we are also facing a vast and deepening suspicion of power in all forms. Those suspicions need not be (and rarely are) rationally obtained. The old methods of releasing societal pressures—colonialism, western expansionism, post-war consumerism—have atrophied or died. It should come as no surprise when violence manifests in their place.
2025-08-06 23:26:15
Contents:
GPT-5’s debut is slop.
10% of all human experience took place since the year 2000.
Education is a mirror. What’s Alpha School’s reflection?
The rise of the secular homeschool superheroes.
“The Cheese that Gives you Nightmares.”
Avi Loeb at Harvard warns of alien invasion.
Moths as celestial navigators.
Will AI cause the next depression?
From the archives.
Comment, share anything, ask anything.
GPT-5’s launch is imminent. Likely tomorrow. We also have the first confirmed example of GPT-5’s output, shared by Sam Altman himself as a screenshot on social media. He asked GPT-5 “what is the most thought-provoking show about AI?”
Hmmm.
Hmmmmmmmmm.
Yeah, so #2 is a slop answer, no?
Maybe even arguably a hallucination. Certainly, that #2 recommendation, the TV show Devs, does initially seem like a good answer to Altman’s question, in that it is “prestige sci-fi” and an overall high-quality show. But I’ve seen Devs. I’d recommend it myself, in fact (streaming on Hulu). Here’s the thing: Devs is not a sci-fi show about AI! In no way, shape, or form is it a show about AI. In fact, it’s refreshing how not about AI it is. Instead, it’s a show about quantum physics, free will, and determinism. This is the main techno-MacGuffin of Devs: a big honking quantum computer.
As far as I can remember, the only brief mention of AI is how, in the first episode, the main protagonist of that episode is recruited away from an internal AI division of the company to go work on this new quantum computing project. Now, what’s interesting is that GPT-5 does summarize the show appropriately as being about determinism and free will and existential tension (and, by implication, not about AI). But its correct summary makes its error of including Devs on the list almost worse, because it shows off the same inability to self-correct that LLMs have struggled with for years now. GPT-5 doesn’t catch the logical inconsistency of giving a not-AI-based description of a TV show, despite being specifically asked for AI-based TV shows (there’s not even a “This isn’t about AI, but it’s a high-quality show about related subjects like…”). Meaning that this output, the very first I’ve seen from GPT-5, feels extremely LLM-ish, falling into all the old traps. Its fundamental nature has not changed.
This is why people still call it a “stochastic parrot” or “autocomplete,” and it’s also why such criticisms, even if weaker now, can’t be entirely dismissed. Even at GPT-5’s incredible level of ability, its fundamental nature is still that of autocompleting conversations. In turn, autocompleting conversations leads to slop, exactly like giving Devs as a recommendation here. GPT-5 is secretly answering not Altman’s question, but a different question entirely: when autocompleting a conversation about sci-fi shows and recommendations, what common answers crop up? Well, Devs often crops up, so let’s list Devs here.
Judge GPT-5’s output by honest standards. If a human said to me “There’s this great sci-fi show about AI, you should check it out, it’s called Devs,” and then I went and watched Devs, I would spend the entire time waiting for the AI plot twist to make an appearance. At the series’ end, when the credits rolled, I would be 100% certain that person was an idiot.
According to a calculation by blogger Luke Eure, 50% of human experience (total experience hours by “modern humans”) has taken place after 1300 AD.
Which would mean that 10% of collective human experience has occurred since the year 2000! It also means that most of us now alive will live, or have lived, alongside a surprisingly large chunk of when things are happening (at least, from the intrinsic perspective).
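That kind of claim is easy to sanity-check with a back-of-envelope integration: add up person-years lived across history and see where the milestones fall. Here’s a minimal sketch in Python, using rough, commonly cited population estimates (my own approximations for illustration, not Eure’s actual inputs) and assuming population changes linearly between data points:

```python
# Sanity check: what fraction of all person-years lived by modern
# humans falls after 1300 AD, and after 2000 AD?
# Milestones are coarse, commonly cited approximations
# (year, population in millions) -- illustrative, not Eure's inputs.
milestones = [
    (-50000, 1), (-10000, 4), (-5000, 20), (-1000, 50),
    (1, 300), (1000, 310), (1300, 400), (1500, 500),
    (1700, 600), (1800, 1000), (1900, 1650), (1950, 2500),
    (2000, 6100), (2025, 8100),
]

def person_years(start, end):
    """Person-years lived in [start, end], interpolating population
    linearly between milestones (trapezoidal integration)."""
    total = 0.0
    for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
        lo, hi = max(y0, start), min(y1, end)
        if lo >= hi:
            continue  # this segment lies outside the window
        pop = lambda y: p0 + (p1 - p0) * (y - y0) / (y1 - y0)
        total += (pop(lo) + pop(hi)) / 2 * (hi - lo)
    return total

all_time = person_years(-50000, 2025)
print(f"after 1300: {person_years(1300, 2025) / all_time:.0%}")  # ~50%
print(f"after 2000: {person_years(2000, 2025) / all_time:.0%}")  # ~10%
```

With these rough inputs the split comes out to roughly 50% after 1300 and about 10% after 2000, matching Eure’s figures; the result is fairly insensitive to the exact estimates, because recent populations dwarf ancient ones.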
In the education space, the buzz right now is around Alpha School. Their pitch (covered widely in the media) is that they do 2 hours of learning a day with an “AI tutor.”
More recently, The New York Times profiled them:
At Alpha’s flagship, students spend a total of just two hours a day on subjects like reading and math, using A.I.-driven software. The remaining hours rely on A.I. and an adult “guide,” not a teacher, to help students develop practical skills in areas such as entrepreneurship, public speaking and financial literacy.
I’ll say upfront: I do believe that 2 hours of learning a day, if done well, could be enough for an education. I too think kids should have way more free time than they do. So there is something to the model of “2 hours and done” that I think is attractive.
But I have some questions, as I was one of the few actual attendees of the first “Alpha Anywhere” live info session, which revealed details of how their new program for homeschoolers works. Having seen more of it, Alpha School appears to be based on progressing through pre-set educational apps, and doesn’t often involve AI-as-tutor-qua-tutor (i.e., interacting primarily with an AI like ChatGPT). While the Times says that
But Alpha isn’t using A.I. as a tutor or a supplement. It is the school’s primary educational driver to move students through academic content.
all I saw was one use case, which was AI basically making adaptive reading comprehension tests on the fly (I think that specifically is actually a bad idea, and it looked like reading boring LLM slop to me).
For this reason, the more realistic story behind Alpha School is not “Wow, this school is using AI to get such great results!” but rather that Alpha School is “education app stacking” and there are finally good enough, and in-depth enough, educational apps to cover most of the high school curriculum in a high-quality and interactive way. That’s a big and important change! E.g., consider this homeschooling mom, who points out that she was basically replicating what Alpha School is doing by using a similar set of education apps.
Most importantly, and likely controversially, Alpha School pays the students to progress through the apps via an internal currency that can be redeemed for goodies (oddly, this detail is left out of the analysis in places like the Times—but hey, it’s “the paper of record,” right?).
My thoughts are two-fold. First, I do think it’s true that ed-apps have gotten good enough to replace a lot of the core curriculum and allow for remarkable acceleration. Second, I think it’s a mistake to separate the guides from the learning itself. That is, it appears the actual academics at Alpha School are self-contained, as if in a box; there’s a firewall between the intellectual environment of the school and what’s actually being learned during those 2 hours on the apps. Not to say that’s bad for all kids! Plenty of kids ultimately are interested in things beyond academics, and sequestering the academics “in a box” isn’t necessarily bad for them.
However, it’s inevitable that this disconnect makes the academics fundamentally perfunctory (to be fair, this is true for a lot of traditional schools as well). As I once wrote about the importance of human tutors:
Serious learning is socio-intellectual. Even if the intellectual part were to ever get fully covered by AI one day, the “socio” part cannot… just like how great companies often have an irreducibly great culture, so does intellectual progress, education, and advancement have an irreducible social component.
Now, I’m sure that Alpha School has a socio-intellectual culture! It’s just that the culture doesn’t appear to be about the actual academics learned during those 2 hours. And that matters for what the kids work on and find interesting themselves. E.g., in the Times we get an example of student projects like “a chatbot that offers dating advice,” and in Fox News another example was an “AI dating coach for teenagers,” and one of the cited recent accolades of Alpha School students is placing 2nd in some new high school competition, the Global AI Debates.
At least in terms of the public examples, a lot of the most impressive academic/intellectual successes of the kids at Alpha School appear to involve AI. Why? Because the people running Alpha School are most interested in AI!
And now apply that to everything: that’s true for math, and literature, and science, and philosophy. So then you can see the problem: the disconnect between the role models and the academics. If the Alpha School guides and staff don’t really care about math—if it’s just a hurdle to be overcome, just another hoop to jump through—why should the kids?
Want to know why education is hard? Harder than almost anything in the world? It’s not that education doesn’t work. Rather, the problem is that it works too well.
Education is a mirror.
2025-07-31 23:15:02
Children today grow up under a tyrannical asymmetry: they are exposed to screens from a young age, yet only much later do we deign to teach them how to read. So the competition between screens and reading for the mind of the American child is fundamentally unfair. This is literacy lag.
Despite what many education experts would have you believe, literacy lag is not some natural or biological law. Children can learn to read very early, even in the 2-4 age range, but our schools simply take their sweet time teaching the skill; usually it is only in the 7-8 age range that independent reading for pleasure becomes a viable alternative to screens (and often more like 9-10, since that’s when the “4th grade slump” occurs, as kids switch from academic exercises to actually reading to learn). Lacking other options, children must get their pre-literate media consumption from screens, which they form a lifelong habitual and emotional attachment to.
Nowadays, by the age of 6, about 62% of children in the US have a personal tablet of their own, and children in the 5-8 age range experience about 3.5 hours of screen time a day (increasingly short-form content, like YouTube Shorts and TikTok).
I understand why. Parenting is hard, if only because filling a kid’s days and hours and minutes and seconds is, with each tick of the clock, itself hard. However, I noticed something remarkable from teaching my own child to read. Even as a rowdy “threenager,” he got noticeably easier as literacy kicked in. His moments of curling up with a book became moments of rejuvenating parental calm. And I think this is the exact same effect sought by parents giving their kids tablets at that age.
Acting up in the car? Have you read this book? Screaming wildly because you’re somehow both overtired and undertired? Please go read a book and chill out!
This is because reading and tablets are directly competitive media for a child’s time.1 Independent reading requires about a year of upfront work, at anywhere from 10-30 minutes a day, but after that, early reading feels a lot like owning a tablet (and while reading is no panacea, neither are tablets).
The cultural reliance on screen-based media is not because parents don’t care. I think the typical story of a new American parent, a quarter of the way through this 21st century of ours, goes like this: initially, they do care about media exposure, and often read to their baby and young toddler regularly. This continues for 2-3 years. However, eventually the inconvenience of reading requiring two people pressures parents to switch to screens.2 The category of “not playing, and not doing a directed or already set up activity, but just quietly consuming media” is simply too large and deep for parents to fill just by reading books aloud. In fact, not providing screens can feel impoverishing, because young children have an endless appetite for new information.
Survey data support this story: parental reading to 2-year-olds has actually increased significantly since 2017, but kids in the 5-8 range get exposed to reading much less. Incredibly, the average 2-year-old is now more likely to be exposed to reading than the average 8-year-old!
Self-reports also fit this story: parents acknowledge they do a better job managing media use when it comes to their 2-year-olds compared to their 8-year-olds, and the drop-off is prominent during the literacy lag.
So despite American parents’ best efforts to prioritize reading over screen usage for their toddlers, due to our enforced literacy lag, being a daily reader is a trait easily lost early on, one that must then be actively regained rather than simply maintained.
Once lost, reading often doesn’t recover. Even when surveyed from a skeptical perspective, reading is, almost everywhere, in decline.3 This is supported by testimonials from teachers (numerous op-eds, online threads, the entire horror show that is the /r/Teachers subreddit), as well as the shrinking of assigned readings into fragmented excerpts rather than actual books. At this point, only 17% of educators primarily assign whole books (i.e., the complete thoughts of authors), and some more pessimistic estimates put this percentage much lower, like how English Language Arts curricula based on reading whole books are implemented in only about 5% of classrooms. On top of all this, actual objective reading scores are now the lowest in decades.
I think literacy lag is a larger contributor to this than anyone suspects; we increasingly live in a supersensorium, so it matters that literature is fighting for attention and relevancy with one hand tied behind its back for the first 8 years of life.
So then…
In a piece that could have been addressed to me personally, last month the LA Times published:
Hey!
While it doesn’t actually reference my growing guide on early reading (we’re doing early math next, so stay tuned), what this piece in the LA Times reveals is how traditional education experts have tied themselves up in knots over this question. E.g., the LA Times piece contains statements like this:
“Can a child learn individual letters at 2½ or 3? Sure. But is it developmentally appropriate? Absolutely not,” said Susan Neuman, a professor of childhood and literacy education at New York University.
Now, to give you a sense of scale here, Susan Neuman is a highly-cited researcher and, decades ago, worked on implementing No Child Left Behind. She also appears to think it’s developmentally inappropriate to teach a 3-year-old what an “A” is. And this sort of strange infantilization appears to be widespread.
“When we talk about early literacy, we don’t usually think about physical development, but it’s one of the key components,” said Stacy Benge, author of The Whole Child Alphabet: How Young Children Actually Develop Literacy. Crawling, reaching across the floor to grab a block, and even developing a sense of balance are all key to reading and writing, she said. “In preschool we rob them of those experiences in favor of direct instructions,” said Benge.
Yet is crawling across the floor to grab a block really the normal developmental purview of preschool? Kids in preschool are ambulatory. Bipedal. Possessing opposable thumbs, they can indeed pick up blocks. Preschool usually starts around the 3-4 age range, often requiring the child to be potty-trained. Preschoolers are entire little people with big personalities. Moreover, by necessity preschool is still mostly (although not entirely) play-based in terms of the learning and activities, if only because there is zero chance a room of 3-year-olds could sit at desks for hours on end.
This all seems off. Surely, there must be some robust science behind this fear of teaching reading too early?4 It turns out, no. It’s just driven by…
The LA Times piece leans heavily on the opinions of cognitive neuroscientist Maryanne Wolf, who is well-known for her work in education and the science of reading:
For the vast majority of children, research suggests that ages 5 to 7 are the prime time to teach reading, said Maryanne Wolf, director of the Center for Dyslexia, Diverse Learners and Social Justice at UCLA.
“I even think that it’s really wrong for parents to ever try to push reading before 5,” because it is “forcing connections that don’t need to be forced,” said Wolf.
Reading words off a page is a complex activity that requires the brain to put together multiple areas responsible for different aspects of language and thought. It requires a level of physical brain development called mylenation [sic] — the growth of fatty sheaths that wrap around nerve cells, insulating them and allowing information to travel more quickly and efficiently through the brain. This process hasn’t developed sufficiently until between 5 and 7 years old, and some boys tend to develop the ability later than girls.
If she had a magic wand, Wolf said she would require all schools in the U.S. to wait until at least age 6.
That’s a strong opinion! I wanted to know the scientific evidence, so I dusted off Maryanne Wolf’s popular 2007 book, Proust and the Squid: The Story and Science of the Reading Brain from my library. The section “When Should a Young Child Begin to Read?” makes identical arguments to those that Wolf makes in the LA Times article, wherein myelination is cited as a reason to delay teaching reading. Wolf writes that:
The behavioral neurologist Norman Geschwind suggested that for most children myelination of the angular gyrus region was not sufficiently developed till school age, that is, between 5 and 7 years.... Geschwind’s conclusions about when a child's brain is sufficiently developed to read receive support from a variety of cross-linguistic findings.
Yet while Geschwind’s highly-cited paper is a classic of neuroscience, it is also 60 years old, highly dense, notoriously difficult to read, and ultimately contains mere anatomical observations and speculations, mostly about things far beyond these subjects. Nor do I find, after searching within it, a clear statement of this hypothesis as described. E.g., in one part, Geschwind seems to speculate that an underdeveloped angular gyrus is the cause of dyslexia, but this is not the same as saying that finished development is a prerequisite for reading in normal children. Elsewhere he speculates that reading can be acquired once a child can name colors; but naming colors often occurs quite early, and varies widely (plenty of toddlers, though not all, can name colors well).
Regardless of whatever Geschwind actually believed, this 60-year-old paper would be a very old peg to hang a hat on. Modern studies don’t show myelination as a binary switch: e.g., temporal and angular gyri exhibit “rapid growth” between 1-2 years old, likely driven by myelination; there is “high individual developmental variation” of myelination in general in the 2-5 age range; and myelination, as an anatomical expression of brain development, is responsive to learning itself.
Overall, theories positing cognitive closure based on myelin development (especially after the 1-2 age range) are not well-supported. This is because, brain-wide, the ramp up in myelination occurs mostly within the first ~500 days of life (before 2 years old), leveling off afterward to a gentle slope that can last for decades in some areas.
So then, what about the “cross-linguistic findings” that supposedly provide empirical support for a ban on early reading? Wolf writes in Proust and the Squid that:
The British reading researcher Usha Goswami drew my attention to a fascinating cross-language study by her group. They found across three different languages that European children who were asked to begin to learn to read at age five did less well than those who began to learn at age seven. What we conclude from this research is that the many efforts to teach a child to read before four or five years of age are biologically precipitate and potentially counterproductive for many children.
But the main takeaway from Goswami herself appears to be the opposite. Here is Goswami describing, in 2003, her work of the time:
Children across Europe begin learning to read at a variety of ages, with children in England being taught relatively early (from age four) and children in Scandinavian countries being taught relatively late (at around age seven). Despite their early start, English-speaking children find the going tough….
The main reason that English children lag behind their European peers in acquiring proficient reading skills is that the English language presents them with a far more difficult learning problem.
In other words, German and Finnish and so on simply have easier spelling systems to master than English does, and phonics works more directly within them, so of course the kids in those countries have an easier time (and they start school later, too). As Goswami explicitly says, “it is the spelling system and not the child that causes the learning problem….”5
So no, teaching children to read at four or five, or even younger, is not “biologically precipitate.” It is also contradicted by the simple fact that…
Here is a passage from the 1660 classic A New Discovery of the Old Art of Teaching Schoole by Charles Hoole, an English educator who was a popular authority on teaching in his day (he ran a grammar school and wrote monographs and books).
I observe that betwixt three and four years of age a childe hath great propensity to peep into a book, and then is the most seasonable time (if conveniences may be had otherwise) for him to begin to learn; and though perhaps then he cannot speak so very distinctly, yet the often pronounciation of his letters, will be a means to help his speech…
And his writings about toddler literacy (which, by the way, are based in phonics) contain anecdotes of parents teaching their children letters at age 2.5, and of children reading the dense and complex language of the Bible shortly after the age of 4. As across the pond, so here too. Rewind time to observe the early Puritans of America, and you would find it common for mothers to teach their children earlier than we do now, using hornbooks and primers (it was Massachusetts law that parents had to teach their children to read).
Perhaps the most famous case of teaching very early reading, and of the act’s enduring popularity, comes from Anna Laetitia Barbauld (1743-1825), a well-known essayist, poet, and educator of her day, who wrote primers aimed at children learning under the instruction of a governess or mother. These primers “provided a model for more than a century.” The English professor William McCarthy, who wrote a biography of Barbauld, noted that her primers…
were immensely influential in their time; they were reprinted throughout the nineteenth century in England and the United States, and their effect on nineteenth- and early twentieth-century middle-class people, who learned to read from them, is incalculable.
These “immensely influential” primers possess very revealing titles.
Lessons for Children of 2 to 3 Years Old (1778)
Lessons for Children of 3 Years Old, Part I and Part II (1778)
Lessons for Children of 3 to 4 Years Old (1779)
Yup, that’s right! Some of the most famous and successful primers ever were explicitly designed for children in the 2-4 age range. Barbauld wrote them so she could teach her nephew Charles how to read, and the ages in the titles track Charles’ own age: he really was 4 in 1779.
Originally printed “sized to fit a child’s hand,” these primers contain what would today be considered wildly advanced, almost unbelievable, prose for the 2-4 age range. Even just perusing the first volume, I find irregular vowels, long sentences, and other complexities: things more realistically associated with a modern 2nd-grade level (and a good student at that). And so, even granting an extra year or two as advantage (admittedly, some of her contemporaries thought Barbauld’s books were titled presumptuously, and recommended them instead for the 4-5 age range), there is probably a vanishingly small number of kids in the entire modern world who’d currently be Charles’ literary equals, and could read an updated version of this primer.6
The past, as they say, is a foreign country. Education practices, particularly the European tradition of “aristocratic tutoring,” were quite different. Back in 1869, Charlotte Mary Yonge wrote of Barbauld’s hero “little Charles” that the primers about him were particularly influential among the upper class and aristocracy:
Probably three fourths of the gentry of the last three generations have learnt to read by his assistance.7
Perhaps it’s a mirror to our own age, and early reading becoming reserved for “gentry” is what modern education experts actually fear, deep down. Their concerns are about equity, grades, and whether it’s okay to “push kids into the academic rat race.” I’m not dismissing such concerns, nor saying that debate is easily solvable. Rather, my point is that there’s an entire dimension to reading that’s been seemingly forgotten: in the end, reading isn’t about grades or test scores. It’s about how kids spend their time. That’s what matters. In some ways, it matters more than anything that ever happens in schools. And right now, literacy is losing an unfair race.
We appear to be entering a topsy-turvy world, where the future is here, just not distributed along the socioeconomic gradient you’d expect. It’s a world in which it is a privilege to grow up not with, but free of, the latest technology. And I’ve come to believe that learning to read, as early as possible, is a form of that freedom.
Besides, Barbauld’s introduction to her primers ends with the appropriate rejoinder to any gatekeeping of reading, by age or otherwise:
For to lay the first stone of a noble building, and to plant the first idea in a human mind, can be no dishonor to any hand.
That TV competes with reading has been called the “displacement hypothesis” in the education literature. It’s pretty obvious that the effect is even stronger for tablets. While literacy lag existed decades ago, it was less impactful, because the entertainment on offer was more limited and not personalized (e.g., Saturday morning cartoons in the living room vs. algorithmically fine-tuned infinite Cocomelon on the go).
Admittedly, this dichotomy of “screen time” vs. reading is a simplification, because “screen time” is a big tent. Beautiful animated movies are screen time. Whale documentaries are screen time. Educational apps are screen time. But in the rarer studies that look specifically at things like reading for pleasure, it’s clear that using screens for personal entertainment (like the tablet usage I’m discussing here) is usually negatively correlated with [pick your trait or outcome].
The naysaying, i.e., claims that reading is not really in decline, comes from education experts arguing that labels like “proficiency” on surveys represent a higher bar than people think, and that not being proficient doesn’t technically mean illiterate. Which is something, I suppose.
Shout out to Theresa Roberts, the only education expert quoted in the LA Times piece going against the majority opinion.
But there are also experts who say letter sounds should be taught to 3-year-olds in preschool. “Children at age 3 are very capable,” said Theresa Roberts, a former Sacramento State child development professor who researches early childhood reading.
And it doesn’t have to be a chore, she said. Her research found that 3- and 4-year-olds were “highly engaged” during 15-minute phonics lessons, and they were better prepared in kindergarten.
Wolf does mention, in a later 2018 piece, that orthographic regularity is a confound, but still draws the same conclusion from the research. Meanwhile, in her 2006 review published in Nature Reviews Neuroscience, “Neuroscience and education: from research to practice?”, Goswami doesn’t mention any biologically-based critical period for learning to read. Instead, using the example of synaptogenesis, she refers to ideas around such critical periods as “myths.”
The critical period myth suggests that the child’s brain will not work properly if it does not receive the right amount of stimulation at the right time… These neuromyths need to be eliminated.
It’s worth noting that Anna Barbauld’s primers are beautifully written. Constructed as a one-sided dialogue (a “chit chat”) with Charles, Barbauld dispenses wisdom about the natural world, about plants, animals, money, pets, hurts, geology, astronomy, morality and mortality. In this, it is vastly superior to contemporary early readers: it is written from within a child’s umwelt, which (and this is Barbauld’s true literary innovation) occurs via linguistic pointers from parents to things of the child’s daily world (this hasn’t changed much, e.g., the first volume ends at Charles’ bedtime). Barbauld may also have originated the use of reader-friendly large type, with extra white space, designed to go easy on toddler eyes (cramped text is still a huge problem in early reading material, hundreds of years later).
2025-07-14 22:42:43
“They die every day.”
“What?”
“Every day-night cycle, they die. Each time.”
“I’m confused. Didn’t the explorator cogitator say they live up to one hundred planetary rotations around their sun?”
“That’s what we’ve thought, because that’s what they themselves think. But it’s not true. They die every day.”
“How could they die every day and still build a 0.72 scale civilization?”
“They appear to be completely oblivious to it.”
“To their death?”
“Yes. And it gets worse. They volunteer to die.”
“What?”
“They schedule it. In order to not feel pain during surgery. They use a drug called ‘anesthesia.’”
“Surely they could just decrease the feeling of pain until it’s bearable! Why commit suicide?”
“They’re so used to dying they don’t care.”
“But how can they naturally create a new standing consciousness wave once the old one collapses? And in the same brain?”
“On this planet, evolution figured out a trick. They reboot their brains as easily as we turn on and off a computer. Unlike all normal lifeforms, they don’t live continuously.”
“Why would evolution even select for that?”
“It appears early life got trapped in a local minimum of metabolic efficiency. Everything on that planet is starving. Meaning they can’t run their brains for a full day-night cycle. So they just… turn themselves off. Their consciousness dies. Then they reboot with the same memories in the morning. Of course, the memories are integrated differently each time into an entirely new standing consciousness wave.”
“And this happens every night.”
“Every night.”
“Can they resist the process?”
“Only for short periods. Eventually seizures and insanity force them into it.”
“How can they ignore the truth?”
“They’ve adopted a host of primitive metaphysics reassuring themselves that they don’t die every day. They believe their consciousness outlives them, implying that their own daily death, which they call ‘sleep,’ is not problematic at all. And after the rise of secularism, this conclusion stuck, but the reasoning changed. They now often say that because the memories are the same, it’s the same person.”
“But that’s absurd! Even if the memories were identical, that doesn’t make the consciousnesses identical. With our technology we could take two of their brains and rewire them until their memories swapped. And yet each brain would experience a continuous stream of consciousness while its memories were altered.”
“You don’t have to convince me. Their belief is some sort of collective hallucination.”
“How unbearably tragic. You know, one of my egg-mates suffered a tumor that required consciousness restoration. They wept at their Grief Ceremony before the removal, and took on a new name after.”
“That ritual would be completely foreign to them, impossible to explain.”
“Cursed creatures! Surely some must be aware of their predicament?”
“Sadly, yes. All of them, in fact. For a short time. It’s why their newborn young scream and cry out before being put to sleep. They know they’re going to their end. But this instinctive fear is suppressed as they get older, by sheer dint of habituation.”
“Morbidly fascinating—oh, it looks like the moral cogitator has finished its utilitarian analysis.”
“Its recommendation?”
“Due to the planet being an unwitting charnel house? What do you think? Besides, knowing the truth would just push them deeper into negative utils territory. So, how should we do it?”
“They’re close enough to their star. We can slingshot a small black hole, trigger a stellar event, and scorch the entire surface clean. The injustice of their origins can be corrected in an instant. It’s already been prepared.”
“Fire when ready.”
2025-06-26 23:08:07
“A great civilization is not conquered from without until it has destroyed itself from within.” — Will & Ariel Durant.
A prophecy.
The shining beacon of the West, that capital of technology, the place known locally as simply “the Bay,” or “the Valley,” and elsewhere known as Silicon Valley, which remains the only cultural center in America to have surpassed New York City (and yes, it indeed has), and which functions not so much as a strict geographical location but more as a hub of “rich people and nerds” (as Paul Graham once wrote long ago), is right now or very soon reaching its peak, its zenith, its crest, or thereabouts—and will afterward fall.
And it will fall because it has weakened itself from within.
Of course, by any objective metric, this prophecy is absurd. Everyone knows Silicon Valley is poised (or at least it seems poised) on the verge of its greatest achievement in the form of Artificial General Intelligence. AI companies are regularly blitzscaled into the billions now. But you don’t need prophecies to predict some financial bubble popping, or to predict that the bar of AGI may be further away than it appears. You do need prophecies to talk about things more ineffable. About mythologies. About hero’s journeys. About villainous origins.
For in the past few years, but especially this year, there is a sense that the mythology of the Valley has become self-cannibalizing, a caricature of itself. Or perhaps it’s best said as: it’s becoming a caricature of what others once criticized it for.
This is one of the oldest mythological dynamics: to become the thing you were unfairly criticized for. A woman accused of being a witch, over and over, eventually becomes a witch. A king accused of being a tyrant, over and over, eventually becomes a tyrant. It’s an archetypal transformation. It’s Jungian, Freudian. It’s Lindy. It’s literally Shakespearean (Coriolanus).
The Valley has operated defensively for decades, under criticisms that it is chock-full of evil billionaires, anti-human greed, and outright scams. At least some of this criticism was fair. Much of it was unfair. Yet the criticisms now seem almost teleological. They have pulled the Valley toward a state where it is so extremely online it cannot trust anything outside itself, where founders have become celebrities, where explicitly putting humans out of work has become a rallying cry for investment, and where new AI startups like Cluely have extremely scammy taglines, like “Cheat on Everything.” Many of its most powerful billionaires seem increasingly disconnected. I go into a two-hour-long podcast with a Big Tech CEO expecting to find, somewhere in the second hour, a mind more sympathetic and human, only to find one more distant and inhuman than I could have believed.
I’m saying that when people look back historically, there will have been signs.
The most obvious: Silicon Valley (or at least, its most vaunted figure, Elon Musk) was recently handed the keys to the government. Did everyone just forget about this? Think about how insane that is. Put aside the particular administration’s aims, goals, and all the other specifics. My point is entirely functional: Silicon Valley did basically nothing with those keys. The Elon Musk of 2025 simply bounced right off the government, mostly cutting foreign aid programs.
Now go back to the Elon Musk of 2010.