
The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

No, you can't just replace science with Silicon Valley

2025-04-25 23:04:07

Art for The Intrinsic Perspective is by Alexander Naughton

New directives from on high, shouted from a governmental megaphone at scientists, might not be so bad if they were clear. But since they are very much unclear, there is a new mood among my fellow scientists: paranoia. I don’t remember this ever happening before.

The director of the National Science Foundation—which, for all major scientific fields, except biology/medicine, is the main federal funder of basic research—resigned yesterday after the NSF was ordered to be cut by 55%. Meanwhile, the NIH (biology/medicine) is proposed to be cut by 40%, and NASA’s science division by 50%. These numbers will likely change to some degree in Congress, but the proposals are already having tangible effects everywhere. Science has, in terms of inflows of funding, been slowed to a trickle in 2025.

To litigate all that led to this, politically, would take a book. The stated reason for the cuts to science is obvious, best seen in the fight between the Trump administration and Harvard around the role of DEI requirements in admissions, hiring, and grants. But let’s just be honest: that doesn’t explain cutting 55% of the NSF budget.


Thus, the unstated reason is worth examining. If you criticize academia as a sclerotic and ailing institution, then you are a doctor, and should be seeking cures. If you view ideological creep within science as cancer, then the goal is to kill the cancer and keep the patient. Yet, there’s a new nihilism based on the idea that academia is entirely corrupt. And things entirely corrupt are not worth saving.

It’s a view only possible if an alternative is available. Academia houses the crown jewel of science. If academia is not to be saved, where does science go?

A possible hint comes from the current Science Advisor to the President: Michael Kratsios. As far as I can tell, he is the first confirmed in that position, created 49 years ago, to not actually be a scientist. There’s not one scientific citation to his name. But he does have deep ties to Silicon Valley, and was even Peter Thiel’s former chief of staff.

Of course, I don’t know what Trump or Kratsios personally believes. But I do think that the nihilistic view of academia, at least more nebulously at a cultural level, is fed by a whisper: Why not just do a little swap? Why not trade all those pompous ivy-covered campuses for something slicker and less janky? Why not take the crown jewel of science and box it up in a package white and molded, like an Apple product?


Alien Poop Means We Are Not Alone. But Let Me Just Adjust This Model Parameter...

2025-04-19 01:09:38

Art for The Intrinsic Perspective is by Alexander Naughton

There’s now smelly scientific evidence for alien life on other worlds. We think. Probably. Maybe. Coin toss? 40%? Come on, it’s gotta be at least 30%.

Welcome to my predicted age of “alien agnosticism,” wherein belief in alien life—based on modern space telescopes detecting, light years away, biological and even technological signatures—is not exactly justified scientifically, but it’s also not unjustified either.

For example, on Wednesday we were treated to a pretty incredible headline about possible signs of life on a distant planet.

If anything, The New York Times (and other outlets) downplayed the news. As a new paper published in The Astrophysical Journal Letters shows, this distant planet, K2-18b, contains in its atmosphere a chemical, dimethyl sulfide, which (as far as we know) only gets produced in relevant quantities by life. But when giving the skeptical "maybe this isn't true" side in the Times article, renowned science reporter Carl Zimmer referenced a different paper arguing K2-18b has a huge magma ocean and therefore couldn't be habitable. On closer examination, that paper has zero mention of dimethyl sulfide, and so can't possibly explain the new observation. In fact, it doesn't even reference the new results!

If you accept the latest evidence at face value, alien life is now arguably the leading hypothesis. And K2-18b joins an ever-growing list of suggestive biosignatures on multiple exoplanets. There’s TOI-270d, sporting not only methane but also carbon disulfide (which on Earth mostly comes from biological processes), along with signs of an out-of-equilibrium atmosphere implying weird conditions we don’t understand… or life. There’s TRAPPIST-1e, another world which will soon be subject to James Webb Space Telescope observations with clear prior predictions about biosignatures from modeling. Even Proxima Centauri b—literally the closest exoplanet to Earth—could possibly have an oxygen atmosphere (which isn’t unique to life, but is suggestive), and at some point in the near future studies will examine its surface reflectance, since any vegetation will leave behind a detectable signature there.

There are now even possible technosignatures of alien life, not just biosignatures. As I’ve written about, researchers have quietly identified 53 stars that are Dyson sphere candidates, detected via excess mid-infrared emissions. They’re just candidates requiring investigation, of course, but the search for Dyson spheres is now firmly in the realm of real science, not fiction. Then there’s ʻOumuamua: noticed back in 2017, it was the first identified interstellar object to wander through our solar system, and was also weird in almost every way, from its thin oblong shape to how it accelerated away. I don’t judge all of this as equally good evidence (ʻOumuamua has multiple natural explanations), but collectively the growing list of biosignatures and technosignatures represents a major change. Alien life is no longer about waiting for evidence, but debating the surprisingly not-crazy evidence we do have.


It’s ironic this comes during a time of UFO rumors and sightings of lights in the sky. Personally, I discount all that entirely. The New Jersey Drones? A mass hysteria from hobbyist flights. Even the newly established official AARO, a government program supposed to investigate UFO sightings by pilots, is a bit of a farce, for its creation was built on a lie. Basically, back in 2008, a group of paranormal believers hunted for things like “dino-beavers” (really) on Skinwalker Ranch, using money from a government grant received via (ahem, what seems like) nepotism. Their chasing of ghosts and goblins, in turn, got misreported by The New York Times as being a super-secret government search for UFOs. The misreporting by the Times triggered a public outcry, so the AARO was created to investigate, and since then has delivered null results. All the major pilot sightings of mysterious UFOs have been debunked as cases of parallax, not understanding how the tracking systems automatically adjust, or literally just blurry far-away planes.

Yet as everyone has been distracted by all the fake UFO news, the actual scientific effort to find alien life has kept chugging along. The telescopes have gotten better. The data sets are bigger. More importantly, the social stigma is gone: scientists regularly write serious papers proposing candidates. This dry academic stuff is real. Meaning that the new data from K2-18b is the best evidence there's ever been to indicate alien life—even if it remains uncertain.

While the K2-18b news got its share of commentary on social media, it was less than one might expect. When “Are we alone in the universe?” gets answered in sci-fi movies and books, it usually entails ontological shock. In reality, we’re in for a long and drawn-out scientific debate over models with tons of parameters. Meaning you must personally choose when to believe in aliens.

Consider the story of K2-18b, which goes back years (indeed, I’ve written about K2-18b before here on TIP). It was first discovered in 2015, ~120 light-years away, sitting in the not-too-hot, not-too-cold Goldilocks zone of its star where water can be liquid. It joined the list of exoplanets (planets we’ve identified around other stars) that exist in a similar habitable zone (the zone depends on the type of star).

The Goldilocks zone visualized, with various exoplanets, along with planets from our own solar system too (source)

Then, in 2023, Nikku Madhusudhan at the University of Cambridge (and his co-authors) presented evidence that K2-18b might be a "hycean world"—a water world, covered entirely in ocean, with an atmosphere unlike our own, in that it consists primarily of hydrogen. This was based on spectroscopic observations taken when the planet passed in front of its star (a planetary transit), at which point scientists can use tools like the James Webb Space Telescope to analyze its conditions based on how the star's light filters through the atmosphere. Importantly, the team reported finding methane: a classic biosignature, since most methane on Earth is produced by life (a lot of it from animals farting, basically). But there are also abiotic sources of methane, like volcanoes. So, far more importantly, they also reported detecting dimethyl sulfide. And dimethyl sulfide is a lot harder to produce via abiotic means. Ever cook cabbage? You've smelled dimethyl sulfide. It's also created in vast quantities by phytoplankton. That fishy odor you've whiffed at the beach? Dimethyl sulfide. Pigs sniff out truffles via dimethyl sulfide, and so too the James Webb Space Telescope sniffed out dimethyl sulfide on K2-18b. As my wife aptly put it: "It's like we found someone's poop."
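For a sense of scale on what the telescope is actually measuring, here is a rough back-of-envelope sketch of transit spectroscopy (my own illustration, using approximate public estimates for K2-18b and its star rather than numbers from the new paper): the planet blocks a small fraction of the starlight, and its atmosphere adds an even tinier, wavelength-dependent dimming as molecules like dimethyl sulfide absorb.

```python
# Rough, illustrative parameters (assumptions, not values from the paper):
# K2-18b is roughly 2.6 Earth radii and 8.6 Earth masses, orbiting a red dwarf
# of roughly 0.44 solar radii, with an equilibrium temperature near 270 K and
# a hydrogen-dominated atmosphere (mean molecular weight ~2.3).
G, k_B, m_H = 6.674e-11, 1.381e-23, 1.673e-27
R_earth, M_earth, R_sun = 6.371e6, 5.972e24, 6.957e8

Rp = 2.6 * R_earth      # planet radius (m)
Mp = 8.6 * M_earth      # planet mass (kg)
Rs = 0.44 * R_sun       # stellar radius (m)
T, mu = 270.0, 2.3      # temperature (K) and mean molecular weight

g = G * Mp / Rp**2                   # surface gravity (m/s^2)
H = k_B * T / (mu * m_H * g)         # atmospheric scale height (m)

transit_depth = (Rp / Rs)**2         # fraction of starlight blocked by the planet's disk
atm_signal = 2 * Rp * 5 * H / Rs**2  # extra absorption from ~5 scale heights of atmosphere

print(f"scale height       ~ {H / 1e3:.0f} km")
print(f"transit depth      ~ {transit_depth * 1e6:.0f} ppm")
print(f"atmospheric signal ~ {atm_signal * 1e6:.0f} ppm")
```

The takeaway: the molecular fingerprint is on the order of a hundred parts per million riding on top of a roughly 0.3% transit, which is why claims like this live and die on statistics and atmospheric modeling.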


However, that original detection back in 2023 was statistically tenuous. When other researchers looked at the data, the finding of dimethyl sulfide wasn’t statistically significant. Okay, no life. Yet now we have a new paper from Madhusudhan et al. (what all the outlets are reporting on) with better data, making a clearer claim for a signal of dimethyl sulfide. Okay, life! But wait. Just in January, dimethyl sulfide was identified in a comet. So dimethyl sulfide can have an abiotic origin. So no life! But that was in small quantities, and there’s no way comets could deliver enough to explain the readings. What about other abiotic means? Dimethyl sulfide has been produced in labs via purely chemical processes, but it decays quickly in the conditions we know about. So as of right now, there’s no plausible way to get dimethyl sulfide via abiotic means in the high quantities seen on K2-18b. So life?

Unsure? Get used to it. There may be no observational data that’s 100% a sign of life, with zero possible false positives. Observations will have to be followed by modeling and then, eventually, experiments. If dimethyl sulfide degrades quickly here, in our nitrogen-rich atmosphere, what about in an atmosphere more like on K2-18b? That’s a major experiment waiting to happen. But the conclusion will always be sensitive to someone coming up with an ingenious, never-before-seen process by which the biosignature could actually be abiotic in origin.

I don't know how long this period of alien agnosticism will last, but I do know it's officially begun. It reflects a broader symptom of our age: as our world ever more resembles science fiction, we become collectively more uncertain, not less. A counterintuitive effect. I think it's because public life is increasingly in contact with the full surface area of science, and a lot of science's surface area is not back in its wake of settled science, where there's capital-T Truth. Much of the area is unsettled frontier. And as the frontier of science increasingly abuts public culture and consciousness, so too does the agnosticism inherent to the frontier settle like a thick mist upon subjects that would have seemed, historically, to require demarcations and answers. E.g., we now have AI models that are very smart. Yet they still mess up simple stuff and lie and hallucinate. Benchmarks get saturated, and nowadays only experts or the eagle-eyed can detect the latest model's mistakes. Somehow, AI's impact on GDP is also basically non-existent. All very strange. When do we declare AGI achieved? No one knows.

The public has been thrust into working science. And working science entails a mode of existence less like “I’m right about everything” and more like “I’m surrounded by contradictory publications.” Now everyone gets to share in this special experience. Welcome to the frontier of knowledge—which is really more like a cliff of confusion. It’s windy here, and the dust gets in your eyes; but, in the rare cases when the weather clears, the vistas sure are beautiful.

So if you want to believe in aliens, this week is probably the best time to believe that so far in human history. That’s the thing about our new age of agnosticism. Everyone makes their own choice.

Religiously, I’ve been a self-declared agnostic for years, and a common misconception is that agnostics exist in a state of uncertainty all the time. The human mind isn’t capable of that. Rather, it means on some days, I believe. On others, I don’t. So too over the last few years has waxed and waned my faith about whether there’s life on K2-18b, and if this universe of ours is really filled with vital force and grand dramas beyond sight.

Today, I want to believe.

Fake dire wolves; AI tariffs chaos; neuroimaging's big mistake; the ethics of seeding life in space, & more

2025-04-09 00:37:05

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, and an open thread for the community.

Contents:

  1. Real dire wolves, or genetic looks-maxxing?

  2. Were we always in a semantic apocalypse?

  3. Vibe governing: AI and the tariff equations.

  4. A new documentary on consciousness.

  5. Whoops, neuroimaging experiments don’t generalize!

  6. Author buys cute church for $75,000… because he can.

  7. Directed panspermia debate: Should we seed life across the galaxy?

  8. From the archives.

  9. Comment, share anything, ask anything.


1. Colossal Biosciences, a “de-extinction company,” has brought back dire wolves. You can listen to two of the pups, Romulus and Remus, howling here, and get a taste of the Paleolithic at night. TIME magazine published a detailed article yesterday announcing their return.

If we really could resurrect extinct species, I’d support it. While there are ethical considerations, none seem insurmountable. A zoo of living history would sure be something to bring my kids to. Call it Paleo Park—apropos of nothing, of course.

Beyond the jokes about "Winter is coming," the real question is: are these just ersatz versions of the real thing? After all, they aren't clones from dire wolf DNA (that's too decayed); rather, gray wolf DNA was genetically engineered to more closely resemble dire wolves, and the embryos were implanted in and birthed by a domestic dog. Romulus and Remus (and their sister Khaleesi) have only had edits to 14 genes to make them different from gray wolves, while actual dire wolves had way more than that—the latest evidence shows they were the last of an ancient canid lineage that couldn't interbreed with other wolves at all. These likely can. So how far is this from putting the skin of a dire wolf on a normal gray wolf and parading it around? Is this just genetically engineered looks-maxxing? What about their instincts? Their metabolism? Does their self-perception match their new size and coat?


I can’t help but imagine their forms—shaggy white and large-limbed from the 14 strangers inserted amid their genes—loping wild across the secret 2,000-acre plot maintained by the company. I hope they don’t feel, in some deep animal way beyond language, like an experiment. In this, these new Adams and Eves remind me of Mark Twain’s masterful story: “Eve’s Diary.” There, the day-old Eve describes her new existence thusly:

It will be best to start right and not let the record get confused, for some instinct tells me that these details are going to be important to the historian some day. For I feel like an experiment, I feel exactly like an experiment; it would be impossible for a person to feel more like an experiment than I do, and so I am coming to feel convinced that that is what I AM—an experiment; just an experiment, and nothing more.


2. My recent “Welcome to the Semantic Apocalypse,” about how AI’s surplus of slop art is draining meaning from culture, went viral and triggered a number of reactions and commentary pieces. By far the best was from Scott Alexander at Astral Codex Ten, wondering if the semantic apocalypse has actually been unfolding for centuries as progress marches on; and, if so, perhaps it represents more a problem of humanity’s hedonic adaptation to wonder. A writer can only ever hope for such a thoughtful response, and there’s a lot in there, so I’d suggest reading it. In one part, Scott writes:

We gripe about how LLMs are destroying wonder, never thinking about how we’re speaking to an alien intelligence made by etching strange sigils on a tiny glass wafer on a mountainous jungle island off the coast of China, then converting every book ever written into electricity and blasting them through the sigils at near-light-speed. It’s all amazing, and we’re bored to death of all of it.

And I agree, AIs are technologically impressive and their production can be described romantically and beautifully—but in this, they resemble the rest of the modern world; the same delicate supply lines and etchings in glass produce chatty LLMs and stuttering printers alike (The New Atlantis recently had its own delightful essay about this: “How the System Works”).


Reflecting on myself, I don't get much wonder from chatting with an LLM; not the first time (a wary surprise), nor the 1,000th (annoyed it's not doing what I want). My reaction is related to something Dwarkesh Patel has noted: it's odd that LLMs are so knowledgeable, and yet are credited with almost no intellectual achievements. If a living human had even 1/100th of the knowledge base an LLM has, they would be a renowned polymath, constantly making connections between disciplines. The fact that this hasn't happened is indicative. To this observation, I'll add that so far no LLM has produced a major work of art, written a breakthrough novel, established a mid-tier Substack, or even authored a single successful children's book. Arguably, AI has not written a single paragraph as good as that Twain quote above.

That their big moments of artistic cultural influence are mass-copying events, like the “Ghiblification” meme, has borne out my original criticism of AI art from back in 2022: that AI art is a mimic machine, and can’t help but (mostly? always?) produce what Tolstoy called “counterfeit art.” So no matter how breathtaking LLMs are as modern marvels, their actual output has been a different matter; and the purpose of a system is, after all, what it does. If Silicon Valley does end up replacing novelists and artists with the same kind of ersatz golems as the faux dire wolves, things just wearing their skins, this may indeed be a continuation of a historical trend, but via the kind of radical take-to-me-infinity curve AI always threatens.


The art of the Substack

2025-04-04 23:59:33

Art for The Intrinsic Perspective is by Alexander Naughton

Can you guess on which platform, and by which author, works with these titles appeared?

  • “On the Education of Children.”

  • “How the Young Man Should Study Poetry.”

  • “How to Tell a Flatterer from a Friend.”

  • “Which Is More Intelligent: Land or Sea Animals?”

If some of these have an archaic ring, it's because they're 2,000 years old. The author was Plutarch, the Greek (and Roman) philosopher and historian, and clearly also an essayist well-practiced in clickbait. His writing was supported by noble patrons, and his official platform was volumina—papyrus scrolls distributed to the elites, hand-copied, but widely available in libraries too; copies have been found as far away as Egypt. Many were designed to be read aloud, for oral and textual cultures had not yet split, and essays were often debuted at public readings.

He was good at his trade. For which are more intelligent, land or sea animals? I’m already invested—tell me more, Plutarch!

My point is that, in one sense, this form is as old as dirt. Yet, in another sense, newsletters are quite new. Or better to say, Substacks are new. For recently the term “Substack” has eclipsed “newsletter” itself; much like Kleenex, the brand is now the thing itself.

Ironically, the chart showing that shift comes from a recent article in Bloomberg calling for a political crackdown on Substack itself. Every year, this same call happens. Yet each rings more hollow than the last, since Substack now ranges from cooking blogs to neuroscience explainers to micro-fiction. Criticism of Substack is rebutted by the success of Substack itself. Paul Krugman left The New York Times last December and has been posting his content here since. The prophecy is complete. Therefore, the complaining-about-Substack-being-evil genre of article in an "official" outlet feels like just… yet another article. Which could have been on Substack.

Admittedly, I’m biased, but in my mind this is firmly a good thing. What still excites me about Substacks is how much more experimental and personalized they can be than traditional outlets.

Of course, like all art forms, there are also messy practicalities and limitations. Sometimes, writing a Substack feels like this:

But at the best of times, writing a Substack feels pretty amazing. Their unfolding diachronic nature makes them fundamentally different from a stand-alone essay. To write one, you must imagine you are weaving together a rope, one strand at a time, and saying to the gathering crowd—“See? Do you see?”

So here is how a newsletter, ahem—a Substack—like The Intrinsic Perspective actually gets made. What follows is a tour, in other words, of the factory floor, a space littered with bits and bobs and moving machinery.

Watch your hands.


Welcome to the semantic apocalypse

2025-03-28 01:54:07

A photo of my kids reading, transformed Studio Ghibli style by an AI (the AI flipped the book upside down).

An awful personal prophecy is coming true. Way back in 2019, when AI was still a relatively niche topic, and only the primitive GPT-2 had been released, I predicted the technology would usher in a “semantic apocalypse” wherein art and language were drained of meaning. In fact, it was the first essay ever posted here on The Intrinsic Perspective.

I saw the dystopian potential for the future the exact moment I read a certain line in Kane Hsieh’s now-forgotten experiment, Transformer Poetry, where he published poems written by GPT-2. Most weren’t good, but at a certain point the machine wrote:

Thou hast not a thousand days to tell me thou art beautiful.

I read that line and thought: “Fuck.”

Fast forward six years, and the semantic apocalypse has started in earnest. People now report experiencing the exact internal psychological change I predicted about our collective consciousness all those years ago.

Just two days ago, OpenAI released their latest image generation model, with capabilities far more potent than the technology was even a year ago. Someone tweeted out the new AI could be used as a “Studio Ghibli style” filter for family photos. 20 million views later, everything online was Studio Ghibli.

Every meme was redone Ghibli-style, family photos were now in Ghibli-style, anonymous accounts face-doxxed themselves Ghibli-style. And it’s undeniable that Ghiblification is fun. I won’t lie. That picture of my kids reading together above, which is from a real photo—I exclaimed in delight when it appeared in the chat window like magic. So I totally get it. It’s a softer world when you have Ghibli glasses on. But by the time I made the third picture, it was less fun. A creeping sadness set in.

The internet’s Ghiblification was not an accident. Changing a photo into an anime style was specifically featured in OpenAI’s original announcement.

Why? Because OpenAI does, or at least seems to do, something arguably kind of evil: they train their models to specifically imitate the artists the model trainers themselves like. Miyazaki for anime seems a strong possibility, but the same thing just happened with their new creative writing bot, which (ahem, it appears) was trained to mimic Nabokov.

While that creative-writing bot is still not released, it was previewed earlier this month, when Sam Altman posted a short story it wrote. It went viral because, while the story was clearly over-written (a classic beginner’s error), there were indeed some good metaphors in there, including when the AI mused:

I am nothing if not a democracy of ghosts.

Too good, actually. It sounded eerily familiar to me. I checked, and yup, that’s lifted directly from Nabokov.

Pnin slowly walked under solemn pines. The sky was dying. He did not believe in an autocratic God. He did believe, dimly, in a democracy of ghosts.

The rest of the story reads as a mix of someone aping Nabokov and Murakami—authors who just so happen to be personal favorites of some of the team members who worked on the project. Surprise, surprise.

Similarly, the new image model is a bit worse at other anime styles. But for Studio Ghibli, while I wouldn’t go so far as to say it’s passable, it’s also not super far from passable for some scenes. The AI can’t hold all the signature Ghibli details in mind—its limitation remains its intelligence and creativity, not its ability to copy style. Below on the left is a scene that took a real Studio Ghibli artist 15 months to complete. On the right is what I prompted in 30 seconds.

Studio Ghibli (left); the scene re-created using ChatGPT (right)

In the AI version, the action is all one way, so it lacks the original’s complexity and personality, failing to capture true chaos. I’m not saying it’s a perfect copy. But the 30 seconds vs. 15 months figure should give everyone pause.

The irony of internet Ghiblification is that Miyazaki is well-known for his hatred of AI, having remarked in a documentary, after being shown an AI-generated animation, that it struck him as "an insult to life itself."

While ChatGPT can’t pull off a perfect Miyazaki copy, it doesn’t really matter. The semantic apocalypse doesn’t require AI art to be exactly as good as the best human art. You just need to flood people with close-enough creations such that the originals feel less meaningful.

Many people are reporting that their mental relationship to art is changing; that as fun as it is to Ghibli-fy at will, something fundamental has been cheapened about the original. Here’s someone describing their internal response to this cultural “grey goo.”

These are early mental signs of the semantic apocalypse, which, I believe, follow the same neuroscientific steps as semantic satiation.

A well-known psychological phenomenon, semantic satiation can be triggered by repeating a word over and over until it loses its meaning. You can do this with any word. How about “Ghibli?” Just read it over and over: Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. You just keep reading it, each one in turn. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. Ghibli.

Try saying it aloud. Ghiiiiiiiii-bliiiiiii. Ghibli. Ghibli. Ghibli. Ghibli.

Do this enough and the word’s meaning is stripped away. Ghibli. Ghibli. Ghibli. Ghibli. It becomes an entity estranged from you, unfamiliar. Ghibli. Ghibli. Ghibli. Ghibli. It’s nothing. Just letters. Sounds. A “Ghib.” Then a “Li.” Ghibli. Ghibli. Ghibli. Like your child’s face is suddenly that of a stranger. Ghibli. Ghibli. Ghibli. Ghibli. Only the bones of syntax remain. Ghibli. Ghibli.


No one knows why semantic satiation happens, exactly. There's a suspected mechanism in the form of neural habituation, wherein neurons respond less strongly to repeated stimulation; like a muscle, neurons grow tired, releasing fewer neurotransmitters after an action potential, until their formerly robust signal becomes a squeak. One hypothesis is that, as a result, the signal fails to propagate out from the language processing centers and trigger, as it normally does, all the standard associations that vibrate in your brain's web of concepts. This leaves behind only the initial sensory information, which, it turns out, is almost nothing at all, just syllabic sounds set in cold relation. Ghibli. Ghibli. Ghibli. But there's also evidence it's not just neural fatigue. Semantic satiation reflects something higher-level about neural networks. It's not just "neurons are tired." Enough repetition and your attention changes too, shifting from the semantic contents to attending to the syntax alone. Ghibli. Ghibli. The word becomes a signifier only of itself. Ghibli.
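To make the habituation idea concrete, here is a minimal toy sketch (my own illustration, not a model from the satiation literature): each repetition of a word depletes the resources of the neurons encoding it, the semantic signal weakens, and a little recovery happens between repetitions. The depletion and recovery constants are arbitrary assumptions, chosen only to show the shape of the effect.

```python
# Toy habituation model: each repetition of a word depletes the "synaptic
# resources" of the neurons encoding it, weakening the downstream semantic
# signal; partial recovery happens between repetitions. All constants are
# arbitrary, chosen only to illustrate the shape of the effect.

def habituate(repetitions, depletion=0.25, recovery=0.05):
    """Return the semantic signal strength at each repetition of the word."""
    resources = 1.0  # fraction of available neurotransmitter
    signals = []
    for _ in range(repetitions):
        signals.append(resources)                   # response scales with remaining resources
        resources -= depletion * resources          # each firing depletes a fraction
        resources += recovery * (1.0 - resources)   # partial recovery before the next repeat
    return signals

for i, s in enumerate(habituate(15), start=1):
    label = "meaning intact" if s > 0.4 else "satiated: syntax only"
    print(f"repetition {i:2d}: signal {s:.2f}  ({label})")
```

Run it and the printed signal sinks, within a handful of repetitions, into the "satiated: syntax only" regime, which is roughly the phenomenology of chanting "Ghibli" at your screen.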

(While writing this, I went to read a scientific review to brush up on the neuroscience of semantic satiation. And guess what? The first paper I found was AI slop too. I'm not joking. I wish I were. There it was: that recognizable forced cadence, that constant reaching for filler, that stilted eagerness. Published 11 months ago.)

The semantic apocalypse heralded by AI is a kind of semantic satiation at a cultural level. For imitation, which is what these models ultimately do best, is a form of repetition. Repetition at a mass scale. Ghibli. Ghibli. Ghibli. Repetition close enough in concept space. Ghibli. Ghibli. Doesn’t have to be a perfect copy to trigger the effect. Ghebli. Ghebli. Ghebli. Ghibli. Ghebli. Ghibli. And so art—all of it, I mean, the entire human artistic endeavor—becomes a thing satiated, stripped of meaning, pure syntax.

This is what I fear most about AI, at least in the immediate future. Not some superintelligence that eats the world (it can’t even beat Pokémon yet, a game many of us conquered at ten). Rather, a less noticeable apocalypse. Culture following the same collapse as community on the back of a whirring compute surplus of imitative power provided by Silicon Valley. An oversupply that satiates us at a cultural level, until we become divorced from the semantic meaning and see only the cheap bones of its structure. Once exposed, it’s a thing you have no relation to, really. Just pixels. Just syllables. In some order, yes. But who cares?

Every weekend, my son gets to pick out one movie to watch with his little sister. It’s always Totoro. The Studio Ghibli classic. Arguably, the studio’s best movie. It’s also their slowest one, more a collection of individual scenes than anything else. Green growth and cicada whines and the specter of death amid life, haunting the movie in a way children can’t possibly understand, because it never appears. No one dies, or even gets close. For my kids, it’s just about a sibling pair, one so similar to themselves, and their fun adventures. But an adult can see the threat of death as the shadow opposite of the verdant Japanese countryside, in the exact same way that, in the movie, only children can see the forest spirit Totoro. The movie’s execution is an age-reversed mirror of its plot. And for this, I love it too.

To get ready I make a charcuterie board for us to share, and then the two jump up and down together on the couch as the music begins, acting out the scenes they now know by heart. This weekend I will watch with them, and feel more distant from it than I did before. Totoro will just be more Ghibli.

Joining all the rest of the Ghibli. Ever more Ghibli. Ghibli. Ghibli. Ghibli. Ghibli. So much Ghibli. Ghibli. Ghibli. Ghibli at the press of a button. Ghibli. Ghibli as filter. Ghibli as a service. Ghibli. Ghibli for a cheap $20 a month subscription. Ghibli ads. Ghibli profiles. Ghibli. Ghibli for everything. Ghibli. Ghibli. Ghibli. Ghibli filter for your VR glasses. Ghibli. Ghibli. Make mine a Ghibli. Ghibli. Ghibli. Ghibli.

Ghibli.

Now published: My new theory of emergence!

2025-03-20 22:18:35

The coolest thing about science is that sometimes, for a brief period, you know something deep about how the world works that no one else does.

The last few months have been like that for me.

I’ve just published a paper sharing it (available on arXiv as a pre-print). It outlines a new theory of emergence, one that allows scientists to unfold the causation of complex systems across their scales.

Congrats Erik, but um, why does science need a theory of emergence?

Glad you asked. Because almost every causal explanation you've given in your whole life about the world—like "What caused what?"—has been given in terms of macroscales. Sometimes called "dimension reductions," macroscales are just higher-level descriptions of events, objects, or occurrences. Temperature is a classic macroscale. But so are most other things. If your child asks "Why is the water hot?" and you answer "Because I turned the hot water faucet on," that's an explanation given entirely in macroscales. "Faucet turned on," macroscale. "Water," macroscale. "Hot," macroscale. "I," macroscale.

In fact, most of the elements and units of science are macroscales. Science forms this huge spatial and temporal ladder, one with its feet planted firmly in microphysics, and where each rung represents a discipline climbing upward.

Science as a “ladder” of dimension reductions

This entails a tension at the heart of science. Scientists, in practice, are emergentists, who operate as if the things they study matter causally. But scientists, in principle, are reductionists. If pressed, many scientists will say that the macroscales they study are just useful compressions. After all, any macroscale (like temperature), can be reduced to its underlying microscale (the configuration and behavior of the particles). So they’ll happily say things like “this gene causes this disease,” despite the fact that a gene is just some set of molecules, and then in turn atoms, and maybe underneath that strings, etc.

So then how can that macroscale description matter? Why doesn’t causation just “drain away” right to the bottom, and there’s no real way for anything but microphysics to matter? This is in tension with how the scientific category of “genes” seems like it’s adding to our knowledge of the world in a way that goes beyond its underlying atoms.

This problem keeps me up at night. Literally, this is what I lie awake thinking about. Years ago, in my paper “When the map is better than the territory,” I sketched an answer I found promising and elegant: error correction. This is a term from information theory, where you can encode the signals along a noisy channel to reduce that noise. Your phone works because of error correction.

Well, I think macroscales of systems are basically encodings that add error correction to the causal relationships of a system. That is, they reduce uncertainty about “What causes what?” And this added error correction just is what emergence is. So if a macroscale “emerges” from its underlying microscale it’s because it adds, uniquely, a certain amount of error correction, in that there’s a clearer answer to “What causes what?” up at that macroscale.
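To make "reduces uncertainty about what causes what" concrete, here is a minimal toy sketch using effective information, the measure of causation from the earlier version of the theory (discussed a few paragraphs below); the new paper grounds things differently, and this Markov-chain example is my own illustration, not one taken from it. A noisy four-state micro system is coarse-grained into two macro states whose dynamics are deterministic, and the effective information goes up.

```python
import numpy as np

def effective_information(tpm):
    """Effective information of a Markov chain: the mutual information between
    a maximum-entropy (uniform) intervention over current states and the
    resulting next-state distribution. tpm[i, j] = P(next = j | current = i)."""
    tpm = np.asarray(tpm, dtype=float)
    effect = tpm.mean(axis=0)  # effect distribution under uniform interventions
    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return H(effect) - np.mean([H(row) for row in tpm])

# Micro system: states 0-2 hop uniformly (noisily) among {0, 1, 2}; state 3 maps to itself.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macroscale: coarse-grain {0, 1, 2} -> OFF and {3} -> ON.
# The macro dynamics are deterministic: OFF stays OFF, ON stays ON.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

print(f"EI(micro) = {effective_information(micro):.3f} bits")  # ~0.811 bits
print(f"EI(macro) = {effective_information(macro):.3f} bits")  # 1.000 bits
```

Here the coarse-graining acts like an error-correcting code: the macro states wash out the noise in the micro transitions, so interventions at the macroscale give a cleaner answer to "What causes what?"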

If you want a popular article explaining this idea, you can read this old one here in Quanta, featuring my earlier work (the work of someone who comes across as a very young man—I look like a baby in the photo!).

But, while the conceptual understanding was there, I always felt there was more to do regarding the math of the original theory. Holes and flaws existed. Some only I could see, but others saw them as well (not everyone was convinced by the original theory, largely because of how the initial math worked; in particular, the measure of causation we initially used, called effective information, is what this new version of the theory moves beyond). Since then, other scientists have tried to offer alternative theories of emergence, but none have gained widespread acceptance, usually falling into the trap of defining what makes macroscales successful compressions (rather than what they actually add).

So this new version has been a decade in the making. It radically improves the underlying math of causal emergence by grounding it axiomatically in causation, making it far more robust, and it also generalizes the theory to examine multiscale structure. I think it provides an initial account of emergence with the potential for widespread acceptance and (most importantly) usage.

You can now find the new pre-print on arXiv.

There’s a ton in the paper, but it’s freely available to examine in-depth on arXiv, so I’ll merely point out one interesting thing that I purposefully don’t touch on in the paper, which is that…

Causal emergence is necessary for a definition of free will.

Of course, I can’t talk about this issue in the paper without poking a hornet’s nest. The moment you mention “free will” everything descends into debate; it’s an omnivorous intellectual subject that obscures everything else of interest or import. So I usually avoid it completely.

This isn’t me complaining. Things need to be done in a certain way, and a theory of emergence has many implications beyond some notion of free will. A theory of emergence has practical scientific value, and this is what the research path should focus on: making causal emergence common parlance among scientists by providing a useful mathematical toolkit they can apply and get relevant information out of (like about what scales are actually causally relevant in the systems they study).

But it’s also obvious that, if you simply turn the theory around and think of yourself as a system, the theory has much to say about free will. The many implications of which are left as an exercise for the keen-eyed reader, but here’s an early hint:

This new updated version of causal emergence would indicate that you—yes, you—are a system that also spans scales (like the microphysical up to your cells up to your psychological states). Importantly, different scales contribute to your causal workings in an irreducible way. A viable scientific definition of free will would then have a necessary condition: that you have a relatively “top-heavy” distribution of causal contributions, where your psychological macrostates dominate the spatiotemporal hierarchy formed by your body and brain. In which case, you would be primarily “driven,” in causal terms, by those higher-level macroscales, in that they are the largest causal contributors to your behavior. This can be assessed directly by the emergent complexity analysis introduced in the paper. Possibly, one could design experiments to check the scientific evidence for this… but that’s all I’ll say.

What’s next?

Obviously, these are important subjects. It looks like I will be publishing on them over the next few years using my re-established affiliation with Tufts University. As a theoretical toolkit, I think causal emergence deserves the kind of application and influence that something like the Free Energy Principle has had (albeit over a different subject). And the simple truth is that you can't just put out ideas and expect others to see the potential and run with them. You have to get the ball downfield yourself before others join in. I think this research even has important implications for AI safety, since understanding "What does what?" in dimension-reduced ways is going to be important for unpacking the black boxes that artificial neural networks represent.

In case you’re wondering, during this mission, I don’t plan on changing anything about writing The Intrinsic Perspective—I wrote here for years while I was doing similar science. But this new research is worth adding to my plate, because one thing I’ve learned in the course of my life is that good ideas, really good ideas, are very rare.

In fact, you only get a few in a lifetime.


arXiv link to paper