
The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

IVF epigenetic damage gets worse across generations; The next Project Hail Mary; AI's "odorless" math proofs; Waymo at 100% human oversight? & more

2026-04-02 23:32:16

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, and an open thread for the community.

Contents

  1. IVF epigenetic damage gets worse across generations

  2. Waymo reveals their fleet may require 100% human oversight

  3. The next Project Hail Mary

  4. Nonfiction book sales drop twice as much as fiction

  5. Art collective poisons AI training set

  6. Lyme Disease vaccine announced (but is it better than prophylactic antibiotics?)

  7. Terence Tao warns of “odorless” AI proofs that shed no insight

  8. A terraformed Mars might have Insects of Unusual Size

  9. From the Archives: My worry of AI outputs surpassing human “outputs” comes true

  10. Comment, share anything, ask anything


1. IVF epigenetic damage gets worse across generations

Last month there was an interesting paper in Nature Communications, “Limitations of serial cloning in mammals,” which showed that after 58 generations a population of cloned mice had degraded genetically to the point where further cloning was impossible.

It’s a great argument for “Why sex?”

And it’s an argument very different from the “Red Queen” hypothesis I was taught in school (as basically gospel). The Red Queen hypothesis argues that sex exists to shuffle around your genome, creating diversity that evolution can capitalize on, especially against viruses and other attackers.

“Now, here, you see, it takes all the running you can do, to keep in the same place.”

Instead, it looks like the real issue might be that clonal reproduction introduces a ratchet where the population can’t clear genetic errors. According to the scientists of the cloning paper:

… mammals rely on sexual rather than asexual reproduction to eliminate genetic anomalies caused by clonal reproduction.

Without sex, the genetic damage just ratchets up and up over a couple dozen generations. It’s sort of like model collapse in LLMs: you train on their outputs, over and over, until the snake eats its tail. Surprisingly, the researchers reported that the cloned mice looked relatively healthy in the earlier generations. But then again, these clones lead highly sedentary lives: they basically just have to show up, eat, and drink, and that counts as “healthy” to the researchers, because that’s all mice caged their whole lives ever do. From appearances alone, no one could tell that, under the surface, the mice were accumulating genetic damage that sexual reproduction would normally have shed.
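(The ratchet here is essentially Muller’s ratchet, and you can watch it click in a toy simulation. The sketch below is my own illustration, not code from the paper; every parameter is invented, and both populations get identical mutation and selection, so the only difference between them is recombination.)

```python
import random

GENOME_LEN = 300       # loci per individual (all numbers here are made up)
MUTATION_RATE = 0.005  # chance per locus of a new defect each generation
POP_SIZE = 100
GENERATIONS = 60

def mutate(genome):
    # a defect, once acquired, never reverts
    return [g or (random.random() < MUTATION_RATE) for g in genome]

def select(children):
    # identical truncation selection for both populations:
    # keep the half with the fewest defects, duplicated back to full size
    children.sort(key=sum)
    return children[:POP_SIZE // 2] * 2

def clonal_generation(pop):
    # asexual: each child is a mutated copy of one parent, so the
    # least-loaded genome can be lost but never rebuilt (the ratchet)
    return select([mutate(random.choice(pop)) for _ in range(POP_SIZE)])

def sexual_generation(pop):
    # sexual: each locus comes from one of two parents, so recombination
    # can reassemble genomes with fewer defects than either parent
    children = []
    for _ in range(POP_SIZE):
        a, b = random.sample(pop, 2)
        children.append(mutate([random.choice(pair) for pair in zip(a, b)]))
    return select(children)

clonal = [[False] * GENOME_LEN for _ in range(POP_SIZE)]
sexual = [row[:] for row in clonal]

for _ in range(GENERATIONS):
    clonal = clonal_generation(clonal)
    sexual = sexual_generation(sexual)

print("mean defects after", GENERATIONS, "generations:")
print("  clonal:", sum(map(sum, clonal)) / POP_SIZE)
print("  sexual:", sum(map(sum, sexual)) / POP_SIZE)
```

Under identical mutation and selection pressure, the clonal line’s defect count only ever ratchets upward, while the sexual line settles into a much lower mutation-selection balance.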

But if the ratchet theory is true, we should probably take a closer look at anything that might introduce small compounding errors across generations.

So what about IVF?

IVF now accounts for almost 3% of births in the United States (the highest concentration appears to be San Francisco, at almost 10% of births). Of course, IVF is entirely different from cloning, and doesn’t introduce genetic errors. But we do have evidence that the IVF process can create epigenetic damage. While that may sound scary, the first-generation effects of IVF appear relatively small in the grand scheme of things, and the gain of IVF is, of course, basically incalculable (the miracle of an entire human where none would otherwise be). So please keep that in mind. But what if the epigenetic damage of IVF compounds generationally? That would be bad.

Unfortunately, research indicates this could be true, at least going by early mouse evidence. In “In vitro fertilization induces reproductive changes in male mouse offspring and has multigenerational effects,” the researchers emphasized that:

These findings underscore that the negative effects of IVF not only persist but also may intensify in subsequent generations.

This is not great, because, as the researchers point out:

Although considered safe, IVF pregnancies are associated with an increased risk of perinatal, neonatal, and placental complications; rare genetic syndromes; and possible long-term effects in human and mouse offspring. One possible mechanism for adverse outcomes suggests that IVF procedures occur during critical windows of epigenetic reprogramming in gametes and preimplantation embryos, generating errors that could ultimately affect normal development…. Previous studies have shown that testicular or sperm maturation changes affect normal sperm function, which leads to adverse outcomes in sired offspring.

In other words, IVF can have an effect on the germline, and this could then be passed on via more damage, etc. Which is pretty much what they observe.

For example, sperm count in the IVF group is lower (along with testosterone), which could then lead to epigenetic damage being transmitted, since sperm are bearers of epigenetic information, and so on, creating a feedback loop.

Same paper, IVF in teal

What’s also worrying is that the issues spread beyond the germline.

Interestingly, we observed sex-specific adverse outcomes in the F2 from IVF offspring, including a higher risk of insulin and glucose resistance in males and a diabetic phenotype in females. These sex-specific differences can arise by sex-linked genes or hormones. We also observed more severe metabolic and gene expression changes in the F2 generation, as evidenced by the F2 liver RNA-Seq data. These effects could result from the cumulative impact of IVF in the F1 generation, together with a potential contribution of metabolic syndrome in the males, which may be inherited through the germline.

It appears that no one has tracked IVF side effects beyond two generations. That seems kind of important to know!


Hubris

2026-03-19 23:35:27

For your sister’s birthday party, we brought back a menagerie of helium balloons from the local grocery store. One in the shape of a fat bee, another the shape of a smiling planet Earth, and also a pink number two, to mark her year. Of course, they were a hit, and much fought over. Weighty clips at the bottom of their dangling strings kept them within gravity’s well, and so if bounced upward, the balloons would sink back (at various points they were also clipped to the dog’s collar, as well as the robot vacuum, to great delight). The most pleasing part of such balloons is how their internal lightness is balanced by the bottom clip. Something inside everyone wishes that the whole apparatus would float down even slower; or better yet, not at all, and just hover in place between floor and ceiling, or dirt and sky.

Well, your father is a clever man, and filled with all sorts of clever ideas, and so that afternoon I took your younger sister and you out to the green summer world of our yard, under a blue sky where the puffy clouds printed on its surface scrolled by.

“Watch this,” I said with a wink, and removed the clip. Then the pink number two floated away, almost out of reach, before I grabbed it, repeating the pattern until both you and your sister were giggling. Your mother came to watch, arms folded across her chest.

I had taken out with me a roll of packing tape, which I began to wind around where the clip had been removed at the bottom of the string, and then I let it go. Not enough! The balloon tried to escape again. So more tape was wound around, in successive experiments, until the bottom of its string looked inhabited by a small wasp nest. Finally, when I let it go, the balloon did not go up or down. Instead, it floated pleasingly across the yard three feet above the ground, like an underwater mine.

This was demonstrated several times. Even if one gave it a slight nudge higher, it would then just drift in a long arc that spanned the whole back yard, with the weight coaxing it only ever so slowly back down.

Until it didn’t. Peacefully mid-drift, the force pulling the clouds past reached down and scooped up the balloon, and the pink two ascended, up, up, and a little “Oh!” was all I could muster before it disappeared over the roof of the house. Your sister too was distraught, but you understood the event completely and watched in utter horror the ascent—an image I remember as a flashed photograph on the lawn, with your eyes wide, and your mouth a perfect oval, all framed by the ringlets of your hair. You had coveted that balloon most of all, for, unspoken in your mind, you’d been next in line to play with its magical buoyancy. Instead, it had been stolen, as if an invisible giant had bent from the sky and plucked it forever from you.

I had started running to the front yard with your sister in tow, hoping to regain sight of it, while you hobbled behind, your face screwed up in the sensory deprivation of dismay (the degree of which I did not quite comprehend). So complete was the loss that you couldn’t make a sound, not a whisper, until you finally did get to the front yard, where you were able to break the gasping silence and get out the wail that had been building from your toes.

High in the air drifted the pink two, and I cannot lie—a part of me wished nothing more than that my mistake would fly into the blue sky and become, far out to sea, a fish’s problem. But instead it headed unerringly, as if carefully pulled betwixt invisible thumb and forefinger, to the top branches of the largest tree in our yard, where it caught fast and tangled in the uppermost branches. An old pine a hundred feet tall that looms head and shoulders above the rest had, after decades of growing solitude, been politely handed a balloon.

There the drooping bit of deflated plastic remains. Through wind. And rain. And snow. It has been bled of color, and looks much like a jellyfish beached by unknown means, miles inland, a hundred feet in the air. I am looking at it now.


This ongoing serialization of letters to a young child, “The Lore of the World,” can be read in any order.

For when you become a new parent, you must re-explain the world, and therefore see it afresh yourself.
A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with histories and intricate maps.
Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.
Above is one of the thousands of entries they must make.
Here is Part 1 (teeth, whales, germs, music on the radio), Part 2 (Walmart, cicadas, stubbornness), and Part 3 (snow). Further installments will crop up semi-regularly among other posts. It is secretly building toward the ultimate question: Why is there something rather than nothing?

RIP Dan Simmons. Why Weren't You More Famous?

2026-03-13 22:41:36

Who writes a sci-fi series where the main character is poet John Keats, dead in 1821 from consumption at the age of 25? Dan Simmons, that’s who—an author who himself died last month, at what feels like (for our present age) a young 77, from complications following a stroke.

I was sad to hear it was something that affected his brain. Simmons wrote the greatest sci-fi book series of the last several decades: the Hyperion Cantos. My cousin (now an accomplished fantasy author himself) recommended it to me when I was a pre-teen. I fell in love with its philosophical world-building, high drama, and its many (many) references to poetry and spirituality and architecture.

They are killing these sorts of covers, so drink it in

For such a wildly out-there series, the Hyperion Cantos has held up better than other 90s sci-fi. Suddenly, the idea that Artificial Superintelligence would resurrect John Keats or Frank Lloyd Wright because these human geniuses had some sort of incalculable insight the AIs could never achieve on their own is… well, it’s nigh on prophetic.


What did you see, John Keats?

When you choked to death on your own blood, staring up at that ceiling in Rome—the one painted with little white daisies—what did you see?


But what fewer people know is that Simmons was also possibly the best horror novelist of his generation. Pound for pound, or book for book, he was better than Stephen King (and I think King might have occasionally suspected this).

Dan Simmons’ Summer of Night is exactly as if Stephen King put all his frenetic mania into one book instead of five. And it’s also basically Stranger Things (just look at the bikes) in that it mixes the free-range childhoods of the 1960s with supernatural threat in a small town—except it doesn’t suffer from the same clunky downturn after the first 20%.

By the way, I don’t think Stephen King would necessarily disagree with my judgement. After learning of Simmons’ death, King apparently had a dream of his old friend.

“I was walking on my road, and he came along in an ATV,” he said. “I held up a note for him to read, but he just went by me—and into the fog.”


Some might recognize Dan Simmons from one of his other horror novels, The Terror, which got made into a well-acted and well-produced AMC mini-series, one that did manage to capture the bleak spirit of the doomed Franklin expedition, although it was never quite as unnerving as the book.

In a way, Simmons was one of the best historical writers of his generation too, and explaining historical anomalies like the lost Franklin expedition as “a monster did it” was a uniquely Simmons genre, one he invented, or at least perfected. He used the same structure with Drood, a horror novel (or is it?) told from the perspective of a zonked-out Wilkie Collins, who plays Salieri to the more talented Charles Dickens. A lot of the book is about the monster of creative jealousy, and a lot of it is about opium hallucinations (or are they?).

Oh, and Simmons also wrote tightly-plotted and hard-boiled noir and thrillers too, and—Wait, why wasn’t Dan Simmons more famous?

In terms of outright name recognition, Simmons doesn’t hold a candle to someone like Stephen King or George R. R. Martin, or even Orson Scott Card. A comparable figure who works across genres would be Margaret Atwood, but unlike Simmons she had at least one huge breakout hit, The Handmaid's Tale, which became a household name.

Simmons’ later habit of firing off hot hawkish and conservative takes online probably didn’t help. I think he even deleted his blog at one point due to the controversy. It’s clear he was a different writer after 9/11. But the political aspects of Simmons’ personality came out far better, and in much finer, subtler form, in his earlier fiction compared to his later online polemics. In the actual books, he acted as a kind of “humanities popularizer.” Looking back at it all together now, his genre writing is secretly an ode to the Western canon, and he was a champion of teaching people (and specifically, kids) about it. That I discovered Simmons somewhere around the age of middle school, and that it hit me so hard, was likely not a coincidence. Simmons had won awards teaching 6th grade in gifted and talented programs before leaving to write full time.

It’s a bit of a funny take: How can a genre writer be an educational champion of the Western canon? But yeah, he was. In fact, I think it’s arguable that lowly school teacher Dan Simmons, who truly and earnestly loved Shakespeare and Homer and Keats—and referenced them constantly in his books about laser guns—did more for the Western canon than Harold Bloom’s entire Yale tenure. It certainly helped make this one very young man interested in it.

But I don’t think that his educational bent and his canon advocacy, or even the political stuff, was the ultimate limiting factor for why he wasn’t more famous, or more successful. No, I suspect Simmons never reached the height of those other names because he suffered from the same curse I suffer from. So I have grown from an awed teenager reading his work to an adult who is a sympathetic fellow failure (well, relatively). I also do too many things in too many different places for it to ever all snowball. And I recognize in Simmons the same stubborn determination to be a strange, centaurish creature—much to our overall careers’ detriment.


Bits In, Bits Out

2026-03-06 01:04:15

When I was ten years old I visited the ruins in Cornwall where King Arthur had been conceived and born, at least in legend. There at Tintagel Castle, surrounded by the ocean air and the jagged rocks, I separated from my mother and sister and made my way down to the beach. And on the beach of Tintagel, right near Merlin’s cave, I spotted it in the sand. A stone. But not just any stone—a stone ax head. It could have been nothing else. It was shaped just like an ax, being unnaturally thick at the head, which was a smoothed blunt blade, and it tapered with near right angles to a point at the back. There was a notched cleft down the middle, to tie it to a shaft. As a 10-year-old boy in a place already dreamy with legend, I pocketed it, for I felt the Neolithic ax had come to me specifically, as if a Lady of the Lake had tossed it ashore. I am looking at it on my desk now.

Later I learned that such Neolithic axes are not so rare—much like ancient Roman coins, they were mass-produced, and you can buy them on eBay cheaply due to finds like mine. This one from Tintagel beach is a perfect specimen, although the ocean likely washed away its provenance. But on that day, even amid all the other sandy stones, it stood out to me immediately. I knew it was a tool instinctively, the way a baby knows the nipple.

The philosopher Henri Bergson wrote that:

We should say not Homo sapiens, but Homo faber.

Homo faber means “man the maker.” For if anything defines humans, it is tool use. I know it is now standard, in our rush to dethrone humanity, to play up that other animals also sometimes use tools. But unlike other animals, tools are our evolutionary niche. We have been making stone tools for at least 3.3 million years. We co-evolved with tools. At first, we made them from wood and bone and stone; later we began to craft abstract tools too. Language is a tool. Math is a tool. All of our vaunted cognition is, in some sense, a tool for a more protean mental firmament, which probably is consciousness itself. Heidegger’s term for this aspect of our consciousness was Zuhandenheit: “readiness to hand.”

Now, we live in an age of tools that can talk back to us. When ChatGPT and the other LLMs appeared on the scene, and I first typed into a chat window, I experienced amazement. It was the legendary Turing Test, and I was living it!


REPENT, THE SINGULARITY IS NIGH!

Do not doubt that we are at the absolute peak of the AI hype cycle. In monetary terms, investment cannot actually go on increasing at these rates without leading to basically impossible numbers. A major falling out between the DoD and Anthropic dominates the news (even amid war).

In the months leading up to all this, a bunch of commentators have jumped aboard the AI-hype bandwagon: The Wall Street Journal advises readers to “Brace yourself for the AI Tsunami,” and The New York Times is saying that AI might completely change the fundamentals of human existence. Popular bloggers are writing that “AI can already do social science research better than most professors,” that “the humanities are about to be automated,” and that “superintelligence is already here.”

Meanwhile, fictional doomsday reports from the year 2028 go hyper-viral and impact the stock market as their authors imagine human unemployment leading to an endless great depression. The infamous METR graph continues to accelerate (and everyone ignores that METR tasks are a tiny sample drawn from a slim number of domain-specific programming tasks, all based on a dubious analogy to human-time-spent-on-task, and that some of the authors have tried to downplay the graph because it has been so misused). There are usually many charts involved in the Great AI Debate. I could show you charts too! Like the one showing that, despite improvements on benchmarks, the actual reliability of AI is on an almost-flat trajectory for many tasks.

But others can then show other charts, and so on, ad infinitum. I don’t think you can decide the future by a couple of charts.

No, there’s a better form of argument about AI, one which I am finally comfortable making: the argument from experience. There simply has been enough time now to see clearly how LLMs transformed the intellectual work of writing, and how this reflects their fundamental nature. My proposal is that we simply extrapolate what has happened to text production to all the other intellectual domains LLMs will ever touch.

For if everything that anyone can do on a computer is soon to be automated (as Andrew Yang is now preaching will happen in the next 12-18 months), then this process should have started with writing years ago. Yet, beyond mass-producing stilted emails and stilted social media posts and stilted essays, the impact of LLMs on writing itself has not really been to improve or accelerate good writing overall. We are not in a glut of good writing. We are in a dearth of it. This is surprising and counterintuitive, because for an LLM, words are its womb, its mother, its literal atoms—yet their impact on writing as a whole has been mostly to generate mountains of slop, while, on the positive side, helping with efficiency and research and editing and feedback, all things that only marginally improve already-good pieces. There are no signs of a burgeoning “text singularity” seen in the words output by our civilization, and words are the most sensitive weathervane to AI capabilities.

If LLMs were a true source of intelligence to rival humans, then discovering them should be like discovering oil. And if we were climbing the curve of an intelligence explosion, their surplus intellect would be improving our civilization’s text as a whole in noticeable ways. If LLMs are tools, then we should expect their impacts to mirror us: to concern efficiency and scale rather than quality, and to depend strongly on how people use them.

So let me ask you: if you took an observer from 2016 and teleported them a decade ahead to our time, and then showed them your social media feed or your emails and other media in general, what would their main response be? Would it be “Wow, everything is more intelligent now!” Or would it be “Why is everyone writing like a pod person now?”

It’s been six years since GPT-3, and there has been no “move 37” moment for writing (as there was for AlphaGo’s creative play of Go). Not even close.


HECK, LET’S LOWER THE BAR

If you ask a leading AI to write a children’s book, you’ll see that AI has not demonstrated exponential improvements at whatever amorphous and hard-to-define (but very real) skill “children’s book authorship” is. Indeed, the upper bound of the needle hasn’t moved much for writing in general, as people forget that older models like GPT-3 were already extremely good at short sprints of text when they kept it together. LLMs have been able to write a passable approximation of a children’s book for almost half a decade… and yet, the real lesson from this is that approximation is not automation.

Below is Anthropic’s latest model (arguably the smartest AI in existence) trying to write a good children’s book, by itself, without guidance and hand-holding. The result, which it called “beautiful,” was exactly what a smiling alien would write if it had never interacted with a child before. It was a “blurry jpeg” of a children’s book. Here’s the sappy ending:

gag me with a spoon

People love to onanistically declare that the “it’s just autocomplete” or the “stochastic parrot” criticism is soooo outdated and soooo stupid.

And yet… isn’t “You shine by being yourself” basically autocomplete for a children’s book? Can we admit that? Or do we have to pretend these insipid outputs are from a machine on the verge of artificial superintelligence (coming in literal months)? I simply refuse to play along. If you actually interact with them, and ask them to do things they aren’t directly trained on, these models remain spectacularly intellectually shallow and incompetent when there’s no detailed human prompting to give them clues and hints and guideposts. Charged with writing children’s literature independently, based on their own ideas and own attempts at style, an AI like Anthropic’s Claude will always draw from the same small well. The outputs still feel like an LLM to anyone with an ear for language, or an eye for content.

The entire life of the artist, indeed, the life of the mind in general, is defined by resistance to slop. If you want to write an actually good children’s book you’ve got to put your own perspective into it, and LLMs are “views from nowhere.” Consider how deep and complex good children’s books actually are. The Giving Tree is about unrequited parental sacrifice. The Velveteen Rabbit is about ontology. Madeline is about Paris-maxing. The Rainbow Fish? Communism. The Very Hungry Caterpillar? Metamorphosis. In Where the Wild Things Are, at the end of the book, the Wild Things try to eat Max. That’s a good children’s book.

(I was going to show a beautiful picture from Madeline here, as my daughter is obsessed with the “Pooh-pooh to the tiger in the zoo” scene. I took a photo of the book and asked the smartest-AI-in-existence-with-extended-thinking-on-at-the-highest-subscription-tier-available to make it look better. Presented are the results, in triptych.)

Consider AI video generation. “We’re going to automate Hollywood!” Okay, well, what happened to automating publishing? That’s a much simpler task, one that you’ve had well over half a decade to do. How’d that go? Hmm? Everyone on social media and at the companies simply declared victory (“Wow, this fan fiction looks good enough to me!”) and moved on to more capabilities, stuffing more reinforcement learning into the models, building more elaborate scaffolds, without actually getting it across the finish line for producing non-trivial and non-annoying and non-fluff text unless the model is being spoon-fed by a human. Like, you think people are going to craft highly-detailed prompts for their own personalized movies? Again, we can look at writing, where everything already played out. You can already prompt your way to personalized books! It just sucks and no one does it, because when you ask the LLM to operate independently, its ideas are mostly slop. This is why, when a company like Anthropic says they’ve automated their code production, what they really mean is that they are still writing code, just at a slightly higher level of abstraction (which is why they still have over 100 open software developer positions, and why Boris Cherny, creator of Claude Code, said that “Engineering is changing and great engineers are more important than ever”).


HAVE LLMS MADE BOOKS BETTER?

A paper looking at large-scale Amazon data quietly appeared earlier this year, asking “Have LLMs Boosted the Creation of Valuable Books?”

The answer, at least based on numbers of Amazon ratings, is that in the post-LLM era (which they date as after 2022) the average book got worse. You occasionally hear hype about an AI-assisted author mass-producing books, and yet investigation usually reveals that they have sold more like zero books and that the story is a scam. The actual effects of LLMs on publishing were that: (a) the average book got worse, (b) the top 1,000 books in each category improved somewhat, and (c) the top 100 books in each category didn’t change in quality.

How to read this: books with high reader ratings sit toward the right of the x-axis, so you can see that 2023-2025 books are mostly slop while the best books don’t change much. Note the chart may also overstate the effects of AI for various reasons; e.g., the researchers apply adjustments (it is not the case that 2020-22 books were uniquely low-rated, as the left-hand side suggests; that’s an artifact of the adjustments), among other issues.

Now, the average book a reader encounters is more likely to be in the top 1,000 than below that, so the average consumer experiences an increase in quality (mostly due, it appears, to more “shots on goal” rather than actual improvements). But this beneficial effect for consumers seems to be driven by often-very-low-quality-anyways categories like “Travel” and “Outdoors,” rather than, say, “Science” and “Literature.”

So do these effects look like a new source of surplus alien intelligence? Or does it look like tool use? Consider that the authors who were already successful pre-LLMs had the most efficiency gains, supporting the “tools” theory; meanwhile, new post-LLM era debut authors produced much worse work.

Having failed to automate book publishing and writing in general, AI will now, the claim goes, automate science, math, all of academia, and finally humanity itself?

But why would the chart for scientific papers (or anything else) look different from the chart for books above? LLMs can now “write a scientific paper” or “write a mathematical paper” in the exact same sense that they’ve been able to “write a book” or “write a short story” or “write an essay” for several years, all to some effect, but overall the results have been objectively mediocre given the hype, and the world is somewhat stupider, rather than smarter, at least on average.


WELCOME TO WRITER HELL, MATHEMATICIANS

Looking into the crystal ball that the last half-decade represents for writers reveals that, more likely than superintelligence, we are going to enter a world of immense, overwhelming, scientific and philosophical and mathematical slop.

Will there be some good outcomes as well? Yes! Just as there have been for writing. I am not entirely an AI pessimist. I am an AI realist—there are indeed positives to the technology, and I’m trying to find them myself (like for research, or my attempt at making the Madeline image better above, or the fewer spelling mistakes I make now, or asking LLMs to double-check something, etc.). Yes, some B-tier bloggers have suddenly and mysteriously transformed into A-tier bloggers, who all kind of sound the same in their A-tier-ness. Previously A-tier bloggers have gotten a lot less use from the technology, and are not noticeably better than they were years ago.

LLMs have broadly failed to automate text generation in general for the precise reasons I laid out all the way back in my 2022 essay “AI Art Isn’t Art.”

They struggle because they are fundamentally imitators, and when not told who or what to mimic they are intellectually shallow. You put more bits in, you get better bits out. Fine. That’s a tool. A computer or a piano is like that too. This is not something that will trigger a singularity of self-improvement or take the jobs of all of humanity or create “machines of loving grace” or any of the stuff people are now regularly promising is literally going to happen in like… a year. Humans can edit their text, making the block of marble look ever more like the statue inside; but when flying solo, LLMs’ first drafts are often better than their last. LLMs can’t even recursively improve a five-paragraph essay, let alone themselves.

So we will experience a long march by AI across intellectual disciplines (most lately, mathematics) where the hype reawakens with each new expansion, and in the wake of that march some things do change, but ultimately the world is not reconfigured as promised, and the actual experienced intelligence level of the world, especially at the top where it matters most, remains mostly unchanged, because it’s still just humans using tools. Bits in, bits out. That’s been the effect of LLMs on text production (in both book publishing and on social media), and it seems very likely to be their effect on almost everything else too. In this conservative view, by 2030 the top 100 math papers of the year won’t look spectacularly different from the top 100 math papers of 2020. The top 1,000 papers? Maybe they’ll be marginally improved, like with books or blogs. And at the back end, the entire field of mathematics will be buried, absolutely buried, in slop. Companies will continually say their systems accomplish things “autonomously,” but that word is hard to define for LLMs, where a prompt is an injection of human intelligence, and a scaffold, too, is an injection of human intelligence (the advantage of scaffolds being that you can pack them with domain-specific knowledge and tips and tricks and guides, keep them private because they’re proprietary, and then say the models “solved it autonomously!”).
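(To make the scaffold point concrete, here is a deliberately toy sketch. Nothing in it comes from any real company’s system; call_llm is a hypothetical stand-in for whatever model API you like, and the “playbook” is invented. The point is just to show where the intelligence in an “autonomous” pipeline actually lives.)

```python
# A toy "autonomous math solver" scaffold. `call_llm` is a hypothetical
# stand-in for any chat-model API; everything else IS the scaffold.

PLAYBOOK = """\
- Try small cases n = 1..5 before attempting a general argument.
- If the bound smells extremal, try a probabilistic or averaging argument.
- Re-derive any claimed identity from scratch before trusting it.
"""  # <- human domain expertise, written once, injected on every run

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real model API here")

def solve_autonomously(problem: str, max_attempts: int = 5) -> str:
    draft = ""
    for _ in range(max_attempts):
        # the prompt template, the playbook, and the self-check loop are
        # all human intelligence smuggled into the "autonomy"
        draft = call_llm(
            "You are a careful mathematician.\n"
            f"Playbook:\n{PLAYBOOK}\n"
            f"Problem: {problem}\n"
            "Show every step."
        )
        verdict = call_llm(f"Find any error in this argument:\n\n{draft}")
        if "no error" in verdict.lower():  # human-designed stopping rule
            return draft
    return draft
```

Every string literal in that sketch is a human decision; strip them out and what’s left is a model slopping around.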

At some point you have to use your capacity as Homo faber and call it: LLMs have behaved precisely as we would expect tools to behave when it comes to changing the nature of first-impacted and frontline intellectual disciplines like writing. The best users gain efficiencies and expand, to some degree, their capability range, especially for the mid-list of intellectual output. The worst users flood the zone.

So at least when it comes to the near-term future and the foreseeable scaling of the current technology, “merely” the exact same thing that has happened to writing will happen to every subject on Earth. But that’s not replacing humanity. That’s not the singularity. You’re just confused about what we are. We are Homo faber, and we have been doing this for 3.3 million years, and our rocks have gotten very complex—so complex you’re forgiven for not thinking they’re rocks.

My New Org to Solve Consciousness (or Die Trying); A Rogue AI Community That Wasn't; David Foster Wallace Is Still Right; Cow Tools Are Real, & More

2026-02-06 00:24:16

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, and an open thread for the community.

Contents

  1. My New Org to Solve Consciousness (or Die Trying)

  2. A Rogue AI Community That Wasn’t

  3. David Foster Wallace Is Still Right 30 Years Later

  4. The Cost of AI Agents is Exploding

  5. The Diary of a 100-Year-Old

  6. AI Solving Erdős Problems is (So Far) Mostly Hype

  7. Cow Tools Are Real

  8. From the Archives

  9. Comment, Share Anything, Ask Anything



1. My New Org to Solve Consciousness (or Die Trying)

As is obvious from the state of confusion around AI, technology has outstripped consciousness science, leading to a cultural and scientific asymmetry. This asymmetry needs to be solved ASAP.

I think I’ve identified a way. I’ve just released more public details of Bicameral, a new nonprofit research institute devoted to solving consciousness via a unique method. You can read about it at our website: bicamerallabs.org.

Rather than chasing some preconceived notion of consciousness, we’re making the bounds for falsifiable scientific theories of consciousness as narrow as possible.

Why do this as a nonprofit research institute? I’ve worked in academia (on and off) for a long time now. It’s not that funding for such ideas is completely impossible—my previous research projects have been funded by sources like DARPA, the Templeton Foundation, and the Army Research Office. But for this, academia is mismatched. It’s built around one-off papers, citation metrics, small-scale experiments run in a single lab, and looking to the next grant. To solve consciousness, we need a straight shot all the way through to the end.

If you want to help this effort out, the best thing you can do is connect people by sharing the website. If you know anyone who should be involved with this, point them my way, or to the website. Alternatively, if you know of any potential funders that might want to help us crack consciousness, please share the website with them, or connect us directly at: [email protected].



2. A Rogue AI Community That Wasn’t

We are now several years into the AI revolution and the fog of war around the technology has lifted. It’s not 2023 anymore. We should stop running around like chickens with our heads cut off and start seeking clearer answers. Consider the drama around the AI social media platform “Moltbook.”

A better description than the hype is that an unknown number of AI agents posted a bunch of stories on a website. Many of the major screenshots were fake, as in, possibly prompted or created by humans (one screenshot with millions of views, for instance, was about AIs learning to secretly communicate… while the owner of that bot was a guy selling an AI-to-AI messaging app).

In fact, the entire website was vibe-coded and riddled with security errors, and the 17,000 human owners don’t match the supposed 1.5 million AI “users,” and people can’t even log in appropriately, and bots can post as other bots, and actually literally anyone can post anything—even humans—and now a lot of the posts have descended to crypto-spam. You can also just ask ChatGPT to simulate an “AI reddit” and get highly similar responses without anything actually happening, including stuff very close to the big viral “Wow look at Moltbook!” posts (remember, these models always grab the marshmallow, and without detailed prompting give results that are shallow and repetitive). Turns out, behind examples of “rogue AIs” there are often users with AI psychosis (or using them mostly for entertainment, or to scam, etc.).

Again, the fog of war is clearing. We actually know that modern AIs don’t really seem to develop evil hidden goals over time. They’re not “misaligned” in that classic sense. When things go badly, they mostly just… slop around. They slop to the left. They slop to the right. They slop all night.

A recent paper “The Hot Mess of AI” from Anthropic (and academic co-authors) has confirmed what anyone who is not still in 2023 and scared out of their minds about GPT-4’s release can see: Models fail not by developing mastermind evil plans to take over the world but by being hot messes.

Here’s the summary from the researchers:

So the fuss over the Reddit-style collaborative fiction, “Moltbook,” was indeed literally this meme, with the “monsters” played by hot messes of AIs.

There is no general law, or takeaway, to be derived from it. Despite many trying to make it so.

Haven’t AIs been able to write Reddit-style posts for over half a decade?

In comparison to Moltbook, the “AI village” has existed for almost a year now. And in the AI village, the exact same models calmly and cooperatively accomplish tasks (or fail at them). Right now they are happily plugging away at trying to break news before other outlets report it. Most have failed, but have given it their all.

What’s the difference between Moltbook and the AI village? You’re never gonna believe this. Yes, it’s the prompts! That is, even when operating “autonomously,” how the models behave depends on how they’re prompted. And that can be from a direct prompt, or indirectly via context, in the “interact with this” sort of way, which they are smart enough to take a hint about. They are always guessing at how to please their users, and if you point them to a schizo-forum with “Hey, post on this!” they will… schizo-post on the schizo-forum.


3. David Foster Wallace Is Still Right 30 Years Later

Infinite Jest turned 30 this month. And yes, I confess to being a “lit bro” who enjoys David Foster Wallace (I guess that’s the sole qualification for being a “lit bro” these days). Long ago, all of us stopped taking any DFW books out in public, due to the ever-present possibility that someone would write a Lit Hub essay about us. However, in secret rooms unlocked to a satisfying click by pulling a Pynchon novel out from our bookshelf, we still perform our ablutions and rituals.

But why? Why was DFW a great writer? Well, partly, he was great because his concerns—the rise of entertainment, the spiritual resistance to the march of technology and markets, the simple absurdity of the future—have become more pressing over time. It’s an odd prognosticating trick he’s pulled. And the other reason he was great: that voice, that logorrheic stream of consciousness, a thing tight with its own momentum, is itself also the collective voice of contemporary blogging. Lessened a bit, yes, and not quite as arch, nor quite as good. But only because we’re less talented. Even if bloggers don’t know it, we’re all aping DFW.

Another thing that made him great was the context he existed in as a member of the “Le Conversazioni” group, which included Zadie Smith, Jonathan Franzen, and Jeffrey Eugenides (so called because they all attended the Le Conversazioni literary gathering together, leaving behind a charming collection of videos you can watch on YouTube).

Zadie and David in Italy (Jonathan gestures in the background)

It’s an apt name, because they were the last generation of writers who seemed, at least to me, so firmly in conversation together, and had grand public debates about the role of genre vs. literary fiction, or what the purpose of fiction itself was, and how much of the modern-day should be in novels and how much should be timeless aspects of human psychology, and so on. Questions I find, you know, actually interesting.

Compare that to the current day. Which still harbors, individually, some great writers! But together, in conversation? I just don’t find the questions that have surrounded fiction for the past fifteen years particularly interesting.

A wayward analogy might help here. Since it’s become one of my kids’ favorite movies, I’ve been watching a lot of Fantasia 2000, Disney’s follow-up to their own great (greatest?) classic, the inimitable 1940 Fantasia. In its half-century-distant sequel, Fantasia 2000, throwback celebrities from the 1990s introduce various musical pieces and the accompanying short illustrated films. James Earl Jones, in his beautiful sonorous bass, first reads from his introduction that the upcoming short film “Finally answers that age-old question: What is man’s relationship to Nature?”

But then Jones is handed a new slip containing a different description and says “Oh, sorry. That age-old question: What would happen if you gave a yo-yo to a flock of flamingos? … Who wrote this?!”

And that’s how a lot of the crop of famous millennial writers who came after the Le Conversazioni group seem to me: like flamingos with yo-yos.


It Only Snows Like This for Children

2026-01-26 23:18:23

The first short story I ever wrote was about a teenager who goes to shovel his grandmother’s driveway during a record snowstorm. Before leaving, he does some chores in his family’s barn, bringing along his old beloved dog. But he forgets to put the dog back in—forgets about her entirely, in fact—and so walks by himself down the road to his grandmother’s house. Enchanted by the snow, he has many daydreams, fantasizing about what his future life will hold. After the arduous shoveling, he has an awkward interaction with his grandmother. Finally, hours later, he returns home. There, he finds his old beloved dog, curled up in a small black circle by the door amid the white drifts, dead.

I don’t know why I wrote that story. Or maybe I do—I, a teenager just like my main character (including the family barn and the grandmother’s house down the road), had just read James Joyce for the first time. And Joyce’s most famous story from his collection Dubliners, “The Dead,” ends with this:

Yes, the newspapers were right: snow was general all over Ireland. It was falling on every part of the dark central plain, on the treeless hills, falling softly upon the Bog of Allen and, farther westward, softly falling into the dark mutinous Shannon waves. It was falling, too, upon every part of the lonely churchyard on the hill where Michael Furey lay buried. It lay thickly drifted on the crooked crosses and headstones, on the spears of the little gate, on the barren thorns. His soul swooned slowly as he heard the snow falling faintly through the universe and faintly falling, like the descent of their last end, upon all the living and the dead.

Snow is the dreamworld, and so snow is death too. It’s both. The veil between worlds is thin after a snow. One of our favorite family movies is the utterly gorgeous 1982 animated adaptation of The Snowman. Like the book, it is wordless, except the movie begins with a recollection:

I remember that winter because it brought the heaviest snow that I had ever seen. Snow had fallen steadily all night long, and in the morning I woke in a room filled with light and silence. The whole world seemed to be held in a dream-like stillness. It was a magical day. And it was on that day I made the snowman.

The Snowman comes alive as a friendly golem and explores the house with the boy who built him, learning about light switches and playing dress up, before revealing that he can fly and whisking the child away to soar about the blanketed land.

So too do we own a 1988 edition of The Nutcracker that has become a favorite read beyond its season. It ends with the 12-year-old Clara waking from her dream of evil mice and brave toy soldiers, wherein the Nutcracker had transformed into a handsome prince and taken her on a sleigh ride to his winter castle. There, the two had danced at court until an ill wind blew and shadows blotted the light and the Nutcracker and his castle dissolved. After Clara wakes…

She went to the front door and peered into the night. Snow was falling in the streets of the village, but Clara didn’t see it. She was looking beyond to a land of dancers and white horses and a prince whose face glowed with love.

Since snow represents the dreamworld, sometimes it is a curse—like Narnia in The Lion, the Witch, and the Wardrobe, wherein Father Christmas cannot enter while the witch’s magic holds, leaving only the negative element of winter. Snow’s semiotics is complex. We call it a “blanket of snow” because it is precisely like shaking out a blanket over a bed and letting it fall, and it brings the same feelings of freshness and newness. But it can then be trampled, and once trampled is irrecoverable. So snow is virginity, and snow is innocence. Snow is the end of seasons and life, but it is also about childhood.

It is especially about childhood because it only snows this much—as much as it snowed last night, and with such fluff and structure—for children. I mean that literally. I remember the great snows from my youth, when we had to dig trenches out to the barn like sappers and the edges curled above my head. I remember finding mountainous compacted piles left by plows with my friends, and we would lie on our backs and take turns kicking at a spot, hollowing a hole, until we had carved an entire igloo with just our furious little feet.

I did not think I would see snow this fluffy, this white, this perfectly deep, again in my life. I thought snow was always slightly disappointing, because it has been slightly disappointing for twenty-five years. Maybe that was reality, or maybe snow had simply become an inconvenience. So I had accepted that my memories of snow were like the memories of my childhood room, which keeps getting smaller on each return; so small, I feel, that the adult me can span all its floorboards in a single step.

Yet as I write this, I look outside, and there it is: the perfect snow. Just as it was, just as I remember it.

I understand now it can snow like this, and does snow like this, but only for children. And since I am back in a child’s story—albeit no longer as the protagonist—it can finally snow those snows of my youth once again.

Later today, we go out into the dreamworld.

Oh, you are already outside, I see.




This is an ongoing serialization, “The Lore of the World,” that can be read in any order. Are they letters to a child, or letters to a new parent, or both? I don’t know. I think they are secretly about why there is something rather than nothing.
For when you become a new parent, you must re-explain the world, and therefore see it afresh yourself.
A child starts with only ancestral memories of archetypes: mother, air, warmth, danger. But none of the specifics. For them, life is like beginning to read some grand fantasy trilogy, one filled with lore and histories and intricate maps.
Yet the lore of our world is far grander, because everything here is real. Stars are real. Money is real. Brazil is real. And it is a parent’s job to tell the lore of this world, and help the child fill up their codex of reality one entry at a time.
Above is one of the thousands of entries they must make.
Here is Part 1 (teeth, whales, germs, music on the radio) and here is Part 2 (Walmart, cicadas, stubbornness). Further parts will crop up semi-regularly among other posts.

Art credits: Houses in the Snow, Norway by Claude Monet. The Snowman by Raymond Briggs. The Snow Effect, Giverny by Claude Monet.