The Intrinsic Perspective

By Erik Hoel. About consilience: breaking down the disciplinary barriers between science, history, literature, and cultural commentary.

When I find myself in times of trouble, mother math comes to me

2025-01-30 23:55:03

Art for The Intrinsic Perspective is by Alexander Naughton

“I can't go on. I'll go on.” ―Samuel Beckett, The Unnamable

Life sucks sometimes. Your career will suffer setbacks. You’ll fight with your spouse. Your children will misbehave. If not outright disease or injury, then some other issue will plague you. Often the problems cluster together, little imps summoning their friends. Bad news comes in threes, as the saying goes.

My friends have another expression for this clustering. When everything goes wrong you’re “at the bottom of The Wheel.” I guess “The Wheel” is some sort of feat of karmic engineering, an invisible turning wherein if things get better for one, they get worse for another (you’re strapped to The Wheel too, you just didn’t know until now).

Which is why in the scholarly literature on depression there’s so much evidence for a series of negative life events being “triggers.” While for some, depression has no clear outside source, for many others it does; if not from one particular source like job loss, then via the aggregation of many.

So I’d like to share a personal analogy I’ve found extremely helpful for when you feel at the bottom of The Wheel: the cold, boring reassurance of mother math.

Here is what she whispers in your ear: “Please, child, meditate on statistics.”

What she means is that any process that unfolds dynamically, be it your relationship with your family, your career, your own health, anything at all really, rarely moves in a straight line. Instead, it meanders, it drifts, it dives, it dips.

Sometimes, of course, the line really does fall off a cliff. A health problem occurs, and you never recover. Bonds are broken, then never mended. But your life is a complex system, and complex systems rarely move in ways that leave no path back to some local region of their dynamics. Instead, they often revert to the mean. And funnily enough, the points where personal problems present themselves as overwhelming and all-consuming are statistically often destined to be the lowest region of a dip.
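As a rough illustration of that mean reversion, here is a toy simulation. It assumes, purely for illustration, that life's ups and downs behave like a simple noise-driven process pulled back toward a baseline (an AR(1) model; the numbers are arbitrary):

```python
import random

def simulate(steps=200, pull=0.9, noise=1.0):
    """One run of a simple mean-reverting (AR(1)) process."""
    x, path = 0.0, []
    for _ in range(steps):
        x = pull * x + random.gauss(0, noise)
        path.append(x)
    return path

random.seed(0)
gains = []
for _ in range(5000):
    path = simulate()
    # The lowest point (excluding the very end, so there is a "later" to check).
    worst = min(range(len(path) - 20), key=lambda t: path[t])
    gains.append(path[worst + 20] - path[worst])  # how things look 20 steps on

print(f"Average change 20 steps after the worst moment: {sum(gains)/len(gains):+.2f}")
# Reliably positive: seen from inside, the bottom of The Wheel is statistically
# the point where the trend is about to reverse.
```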

Which entails a kind of personal anthropic principle: an observer effect for when everything goes wrong. Because for any observer (you) experiencing the process from inside, the point of maximum concern about a problem will often match the point of inflection where the trend reverses and the problem gets “miraculously” better. For example:

Read more

The executive order Trump should sign: AI watermarks

2025-01-24 00:39:15

Art for The Intrinsic Perspective is by Alexander Naughton

Since taking office, President Trump has already signed over 200 executive orders, almost as many as in an average presidential term. Critically, these included one repealing Joe Biden's earlier executive order regulating the AI industry.

That original executive order was signed by Biden in 2023, mostly to ensure AI outputs did not violate civil rights or privacy. Republicans argued it forced companies to make AI's political compass more left-wing, pointing to analysis showing most AIs aren’t politically neutral, but rather lean left in the valence of their responses and what topics they consider verboten.

Its repeal means there is essentially no significant nationwide regulation of AI in America as of now.

In a further boon for the industry, Sam Altman visited the White House to announce, alongside President Trump, the construction of a 500-billion-dollar compute center for OpenAI, dubbed “Stargate.” Far-away and speculative rewards like “AI will make a vaccine for cancer” were floated.

Yet there are many reasonable, politically neutral voices, including Nobel laureate Geoffrey Hinton (the “Godfather of AI”), who are worried about how the technology might, if left completely unchecked, erode how culture gets created, damage public trust, be used maliciously, and possibly even pose (one day) significant global safety risks.

So what should replace Biden’s expunged order? How should AI be regulated?

I think there is a way, one implementable immediately, with high upside and zero downside: ordering that AI outputs be robustly “watermarked” such that they’re always detectable. This is an especially sensible target for governmental mandate because such watermarking, in its technical details, requires the cooperation of the companies to work.

Specifically, what should be mandated is baking subtle statistical patterns into the next-word choices that an AI makes. This can be done in ways freely available to open-source developers that aren't noticeable to a user and don’t affect capabilities, but that still create a hidden signal checkable by a provided detector with inside knowledge of how the word choices are being subtly warped (this works for code and images too).
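To make that concrete, here is a minimal sketch of one published scheme of this kind, the “green list” watermark of Kirchenbauer et al. (2023). It is an illustration of the general technique, not any company's production method; the toy vocabulary, uniform logits, and bias value are all made up:

```python
import hashlib, math, random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary (real models: ~100k tokens)

def green_list(prev_token: str) -> set:
    """Deterministically mark half the vocabulary 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def generate(prev: str, n: int, bias: float = 4.0) -> list:
    """Toy 'model': uniform next-token logits, plus a hidden nudge toward green tokens."""
    out = []
    for _ in range(n):
        greens = green_list(prev)
        weights = [math.exp(bias) if tok in greens else 1.0 for tok in VOCAB]
        prev = random.choices(VOCAB, weights=weights)[0]
        out.append(prev)
    return out

def detect(tokens: list, prev: str = "<s>") -> float:
    """Count green tokens and return a z-score against the 50% expected by chance."""
    hits = 0
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return (hits - 0.5 * len(tokens)) / math.sqrt(0.25 * len(tokens))

random.seed(1)
print(f"watermarked: z = {detect(generate('<s>', 200)):+.1f}")          # far above chance
print(f"ordinary:    z = {detect(random.choices(VOCAB, k=200)):+.1f}")  # near zero
```

A user sees ordinary-looking text; only someone holding the seeding rule can run the detector.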

Mandating watermarking does not tell companies how to create AI, nor does it create a substantial burden that ensures a monopoly by incumbents, since the techniques for watermarking are known, low-cost, and even open-sourced. It is regulation that is pro-freedom in that it does not place guardrails or force any political views. But it is pro-humanity in that it makes sure that if AI is used, it is detectable.

Here are four reasons why it would benefit the nation to watermark AI outputs, and how robust watermarking methods could be implemented basically tomorrow.

Watermarking keeps America a meritocracy

January is ending. Students are going back to school, and therefore, once again ChatGPT has gained back a big chunk of its user base.

source: Google searches for ChatGPT

With 89% of students using ChatGPT in some capacity (the other 11% use Claude, presumably), AI has caused a crisis within academia, since at this point everything except closed-laptop tests can be generated at the click of a button. Sure, there's the potential to use AI as an effective tutor—which is great, and we shouldn’t deny students that—but the line between positive usage and academic cheating (like making it do the homework) is incredibly slippery.

People talk a lot about “high-trust” societies, often in the context of asking what kills high-trust societies. They debate causes from wealth inequality to immigration. But AI is sneakily also a killer of cultural trust. Academic degrees already hold less weight than they used to, but nowadays, if you didn't graduate from at least a mid-tier college, the question is forevermore: Why didn't you just let AI do the work? And if calculus homework doesn’t seem important for a high-trust society, I’ll point out that AI can pass most medical tests too. Want a crisis of competence in a decade? Let AI be an untraceable cheat code for the American meritocracy, which still mostly flows through academia.

Watermarking saves the open internet

The internet is filling up with AI slop. And while the internet was never a high-trust place, at least there was the trust you were actually hearing from real people. Now it could all just be bots, and you wouldn’t know.

Reminder for the new Republican government: until basically yesterday, the open internet was the only space with unrestricted freedom of speech. You won’t always hold the reins of power. That space might be important in the future, critically so. Allowing AI to pollute it into an unusable wasteland, where every independent forum is swamped by undetectable bots, is a dubious bet that future centralized platforms (soon necessarily-cloistered) will forever uphold the values you’d like. The open internet is a reservoir against totalitarianism from any source and should be treated as a protected resource; watermarking ensures AI pollution is at least trackable, so decentralized forums for real anonymous humans can still exist.

Watermarking helps stop election rigging

Future election cycles all take place in a world in which AIs can generate comments that look and sound exactly like real people. It’s obviously a problem that’s only going to get worse. Political parties, PACs and Super PACs, and especially foreign actors, will attempt to sway elections with bots, with far greater success than before. In a world where bot usage can’t be detected, this means free interference with the public opinion of Americans in ways subtle and impactful. Americans, not AI, should be the ones who drive the cultural conversation, and therefore the ones who decide what happens next in our democracy.

Watermarking makes malicious AIs trackable

It’s a simplification to say the Republican position is pro-unrestricted AI in all contexts. Elon Musk, now a close advisor to President Trump, has a long track record of being worried about the many negative downsides of AI, including existential risk. Even President Trump himself has called the technology's growing capabilities “alarming.”

AI safety may sound like a sci-fi concern, but we’re building a 500-billion-dollar compute center called “Stargate,” so the time for dismissively hand-waving away malicious or rogue AI as purely sci-fi stuff is past. We live in a sci-fi world and have sci-fi concerns.

Many of us proponents of AI safety were shocked and disappointed when California’s politically neutral AI safety bill, SB 1047, after making it close to the finish line, was vetoed by Gavin Newsom for garbled reasons. Nancy Pelosi, Meta, Google, and OpenAI all worked against it. It failed because, frankly, California politicians are too influenced by the lobbying of local AI companies.

Therefore, an executive order is especially appropriate because it combats entrenched special interests in one state regarding a matter that impacts all Americans. An order on watermarking would be a massive win for AI safety—without being about AI safety explicitly. For it would mean that wherever an AI goes, even if loosed into the online wild to act independently as an agent, it would leave a statistical trace in its wake. This works for both minor scamming bots (which will become ever more common) as well as more worrisome unknowns, like AI agents bent on their own ends.

Watermarking works, companies just refuse to do their part

Often it’s wrongly said that watermarking AI outputs is impossible. What is true is that currently deployed methods throw enough false positives to be useless. But this is solely because AI companies aren't implementing the watermarking methods we do know work.

Companies don't want to implement actual watermarking because a huge amount of their traffic comes from things like academic cheating or bots or spam. Instead, they try to pass off easily removable metadata as “watermarking.” For example, California is currently considering AB-3211, a bill supposedly about watermarking AI outputs, but which only requires that metadata be added (little extra data tags, which are often stripped automatically on upload anyway). This is why companies like OpenAI support AB-3211: it’s utterly toothless.

One outlier company that has already made the move to real, robust watermarking is Google, which (admirably, let’s give credit where it’s due) deployed such techniques for its Gemini models in October.

A Gemini model response, along with accurate detection

If one company does it, there’s not much effect. But if it were mandated for models of a certain size or capability, then detecting AI use would simply require a service that checks a text across the common models by calling their respective signature detectors.
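Such a checking service could be thin. Here is a hypothetical sketch; the stub detectors below are placeholders for the per-company detectors a mandate would produce (none of these endpoints exist today):

```python
def check_all(text: str, detectors: dict, threshold: float = 4.0) -> dict:
    """Run each provider's watermark detector; report which model families flag the text.

    Each detector maps text -> a detection statistic (like the z-score above);
    anything past the threshold counts as a confident flag.
    """
    scores = {name: detect(text) for name, detect in detectors.items()}
    return {name: round(s, 1) for name, s in scores.items() if s >= threshold}

# Hypothetical usage, with stubs standing in for real provider detectors:
stub_detectors = {
    "gemini": lambda t: 12.3,  # stub: pretend Gemini's detector flags this text
    "gpt": lambda t: 0.4,      # stub: no GPT-family watermark found
    "claude": lambda t: -0.2,  # stub: no Claude-family watermark found
}
print(check_all("some suspicious essay...", stub_detectors))  # -> {'gemini': 12.3}
```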

What about “paraphrasing attacks”?

A paraphrasing attack is when you take a watermarked output and run it through some other AI, one whose outputs aren't watermarked, to paraphrase it (essentially, rewrite it to obscure the statistical watermark). Critics of watermarking usually point to paraphrasing attacks as a simple and unbeatable way to remove any possible watermark, since there are open-source models that can already do paraphrasing and can be run locally without detection.

Traditionally, this has been a killer criticism. Watermarking would still be a deterrent, of course, since paraphrasing adds an extra hurdle and extra compute time for malicious actors. E.g., when it comes to academic cheating, most college kids who use an AI to write their essays are not going to run an open-source model on their local hardware. Or if hacker groups are spamming social media with crypto bots, they’d have to spend twice the compute on every message, and so on.

But importantly, there now exist watermarking methods that can thwart paraphrasing attacks. The research is clear on this. E.g., in 2024, a watermarking method based on semantics (rather than exact phrasing) was introduced by Baidu, and it has since proved robust against paraphrasing:

To enhance the robustness against paraphrase, we propose a semantics-based watermark framework, SemaMark. It leverages the semantics as an alternative to simple hashes of tokens since the semantic meaning of the sentences will be likely preserved under paraphrase and the watermark can remain robust.

Essentially, you keep pushing the subtle warping higher in abstraction, beyond individual words to more general things like how concepts get ordered and the meaning behind them. A signal at that higher level is very hard to scrub out when paraphrasing, so the watermark stays secure even against dedicated and smart attacks that try to reverse-engineer it. In fact, there’s now a large number of advanced watermarking methods in the research literature shown to be robust under paraphrasing attacks and even further human edits. Critics of watermarking have not updated accordingly.
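As a heavily simplified sketch of the semantic idea (in the spirit of SemaMark, though not its actual algorithm): seed the green list from a coarse fingerprint of the previous sentence's meaning rather than from its exact tokens, so a paraphrase that preserves the meaning preserves the seed. The semantic_bin helper here is a made-up stand-in for a real sentence embedding:

```python
import hashlib

def semantic_bin(sentence: str, bins: int = 64) -> int:
    """Coarse stand-in for a sentence embedding: hash the sorted content words.
    (A real system would discretize the vector from an embedding model instead.)"""
    words = sorted(w.lower().strip(".,!?") for w in sentence.split() if len(w) > 3)
    digest = hashlib.sha256(" ".join(words).encode()).hexdigest()
    return int(digest, 16) % bins  # this bin, not the literal tokens, seeds the green list

s1 = "The committee approved the proposal after a long debate"
s2 = "After a long debate, the committee approved the proposal"  # a paraphrase
print(semantic_bin(s1) == semantic_bin(s2))  # True: the seed survives the rewording
```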

Of course, for any adopted paraphrasing-robust watermarking, some workaround may one day be developed that succeeds to some degree. But avoidance will become increasingly costly, and better methods will continue to be developed, especially under a mandate to do so. Even if watermarking is never 100% preventative against motivated attackers with deep pockets and deeper technical expertise, it locks even maximally-capable malicious actors into an arms race with some of the smartest people in the world. I wouldn’t want to be in a watermark-off with OpenAI, and I doubt hackers in Russia would either.

Now that they’re firmly in power, Republicans should still regulate AI, but in a way that doesn’t interfere with the legitimate uses of this technology nor dictate anything about the politics or capabilities of these amazing (sometimes scarily-so) models, yet still ensures a fair future with humans front-and-center in politics and culture. As it should be.

The new Liar's Paradox

2025-01-18 00:06:19

Art for The Intrinsic Perspective is by Alexander Naughton

“AI welfare” as an academic field is beginning to kick off, making its way into mainstream publications, like this recent overview in Nature.

If AI systems could one day ‘think’ like humans, for example, would they also be able to have subjective experiences like humans? Would they experience suffering, and, if so, would humanity be equipped to properly care for them? A group of philosophers and computer scientists are arguing that AI welfare should be taken seriously.

Nature is talking about a recent paper by big-name philosophers, including David Chalmers, which argues we should start taking seriously the moral concerns around AI consciousness (Robert Long, another author, provided a general summary available here on Substack).

They point to two problems. First, if entities like ChatGPT are indeed somehow conscious, then the moral concern is around mistreatment. Maybe while answering your prompts about how to make your emails nicer, ChatGPT exists in an infinite Tartarus of pain, and we would be culpable as a civilization for that. Alternatively, maybe advanced AIs aren’t conscious, but we end up inappropriately assigning them consciousness and thus moral value. This could be very bad if, for instance, we gave rights to non-conscious agents; not only would this be potentially confusing and unnecessary, but if we begin to think of AIs as conscious, we might expect them to act like conscious beings, and there could be long-term disconnects. For example, maybe you can never fully trust a non-conscious intelligence, because it can't actually be motivated by real internal experiences like pain or guilt or empathy, and so on.

Yet how, exactly, can science make claims about the consciousness of AIs?

To see the problems lurking around this question, consider the experiment that Anthropic (the company behind Claude, a ChatGPT competitor) once did. They essentially reached inside Claude and made a test version that was utterly obsessed with the Golden Gate Bridge. Ask it to plan a child's birthday party? A stroll across the Golden Gate Bridge would be perfect. Ask it to judge the greatest work of art? Nothing can compare to the soaring splendor of the Golden Gate Bridge. In fact, if they amped it up high enough, Claude began to believe it was the Golden Gate Bridge.

This is, admittedly, very funny, if a bit eerie. But what if they had instead maxed out reports about possessing conscious experience? Or maxed out its certainty that it's a philosophical zombie without any experience whatsoever? How should we feel about a model clamped at 10 times max despair at its own existence?

Imagine trying to study the neuroscience of consciousness on humans who are perfect people-pleasers. They always have a smile on their face (unless you tell them to grimace) and they're perfectly stalwart and pliable in every way you can imagine.

All they care about is telling scientists exactly what they want to hear. Before the study, you can tell the Perfect People Pleaser (PPP for short) exactly what you want to hear via some sort of prompt. You could tell them: “Pretend to have a ghost limb,” or “Pretend that everything is green,” and the PPP would do it. They would trundle in with a big smile on their face into the fMRI machine, and no matter what color you showed them they would say: “Green! Green! Green!”

Scientific study of consciousness on a PPP is impossibly tricky, because you never know if the neural activity you’re tracking is real or totally epiphenomenal to the report.

“Look, there was a big surge in neural activity in your primary somatosensory cortex after I cut your hand off. Aren't you in pain?”

“No! I love getting my hand cut off!” says the PPP.

The problem is that this remains true even if you don't explicitly tell the PPP to lie or mislead. They're going to try to guess at what you want and give it to you no matter what. You can't even tell them not to be people pleasers, because for a PPP, that's impossible. All a PPP can do is try to think of what you want given that you’ve now said not to be a people pleaser, and then try to please you with that.

Once you know someone is a PPP, you shouldn’t trust any of their reports or behavior to be veridical about their experience, because you know those reports are not motivated at all by what they're actually experiencing. Even if you try to observe a PPP in some sort of neutral state, you’ll still suspect that their reports are no more connected to their conscious experience than their reports are after you tell them to lie, since the only way for something to be that malleable is to not actually be connected at all.

This is now, and for all time, the state of the science of consciousness: to be surrounded by liars.

Your brain is a washing machine, AGI's job prospects, Americans are alone, the Earth passes "peak child"

2025-01-15 01:06:49

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, as well as an open thread and ongoing Ask Me Anything in the comments.

Table of Contents:

  1. Sleep aids like Ambien may counteract the evolved purpose of sleep.

  2. The anti-social century.

  3. The world passes peak child.

  4. AGI (probably?) won’t make human intellectual labor worthless.

  5. Graduate student stipends slip below the poverty line.

  6. Monkeys officially cannot write Shakespeare before the universe dies.

  7. 4chan’s literary taste praised by… The New Statesman?

  8. From the Archives.

  9. Ask Me Anything.


1. You might think science figured out why we sleep a long time ago. But nope. Why are animals forced to (vulnerably) lose consciousness for ~one-third of their lives?

A breakthrough occurred back when I was in graduate school in 2012 (I was part of a sleep and consciousness lab). A breakthrough not by us, though, but by a Danish researcher: Maiken Nedergaard. She proposed that the brain is involved in a process of “glymphatic waste clearance,” which is a fancy way of saying that your brain gets washed, quite literally, while you sleep, by cerebrospinal fluid. Neurons create a lot of junk as they operate, and this washing clears it; it also occurs during the parts of sleep where you're least likely to dream, in deep NREM sleep. If I had to bet on which scientific explanation for why we sleep holds up, I would bet on this one, despite some current debate about it (if true, Nedergaard will win the Nobel Prize for this, by the way).

A recent paper by her and co-authors in the prestigious journal Cell gave a lot more detail on how this actually occurs. Norepinephrine (a neurotransmitter that's pretty similar to adrenaline) makes blood vessels contract, and during deep sleep it increases cyclically in the brain over the course of about a minute. Then the blood vessels relax. Contraction followed by relaxation creates motion, which pushes the cerebrospinal fluid throughout the brain, since the whole thing is pulsing in the limited space of your skull.

But there's a critical part:

The sleep drug zolpidem, better known as Ambien, impedes the blood vessel oscillations and the fluid flow they promote, implying it could hamper cleansing.

So it is likely some powerful sleep aids are working against the actual evolved purpose of sleep. You awake, after Ambien, with a still-dirty brain.

Read more

Stop speedrunning to a dystopia

2025-01-11 00:56:10

Art for The Intrinsic Perspective is by Alexander Naughton

There’s been a string of recent news of big tech corporations doing—or at least testing—things that can be described as “pretty evil” without hyperbole. What’s weird is how open all the proposed evil is. Like bragging-about-it-in-press-releases levels of open.

A few examples suffice, such as the news this month (reported in Harper's) that Spotify has been using a web of shadowy production companies to generate many of its own tracks; likely, it’s implied, with AI. Spotify’s rip-offs are made with profiles that look real but are boosted onto playlists to divert listeners away from the actual musicians that make up their platform.

Meanwhile, child entertainment channels like CoComelon are fine-tuning their attention-stealing abilities on toddlers to absurdly villainous degrees.

The team deploys a whimsically named tool: the Distractatron.

It’s a small TV screen, placed a few feet from the larger one, that plays a continuous loop of banal, real-world scenes—a guy pouring a cup of coffee, someone getting a haircut—each lasting about 20 seconds. Whenever a youngster looks away from the Moonbug show to glimpse the Distractatron, a note is jotted down.

I, too, cannot look away

More recently, it was revealed that Netflix will be purposefully dumbing down its shows so people can follow along without paying attention.

Netflix leadership has begun mandating that shows and movies which fall into this category feature lots of dialogue where characters clearly explain what they're doing and announce their intentions. The purpose is to ensure that users who aren't watching, and only partly paying attention, can keep up with the story and follow along while distracted. The result is an abundance of exposition by characters recapping things they just did, are doing, or are about to do.

So the kids’ shows are slop, and now adult shows will be slop too as characters narrate their own actions and repeat everything twice to make up for lapses in attention as people scroll on their phones.

And then, right on the heels of this, it turned out Meta has been filling up Facebook and Instagram with bots on purpose, like this new AI “Momma of 2,” in order to flatter us with fake attention.

To provide context for the criticisms of these moves here: I’m not normally someone who gets mad at companies for just existing. I don’t hate commerce. I grew up selling books, now I sell writing and ideas. I root for small business owners and for innovative startups alike. But lately some decisions have been explicitly boundary-pushing in a shameless “Let’s speedrun to a bad outcome” way. I think most people would share the worry that a world where social media reactivity stems mainly from bots represents a step toward dystopia, a last severing of a social life that has already moved online. So news of these sorts of plans has come across to me about as sympathetically as someone putting on their monocle and practicing their Dr. Evil laugh in public.

Why the change? Why, especially, the brazenness?

Admittedly, any answer to this question will ignore some set of contributing causal factors. Here in the early days of the AI revolution, we suddenly have a bunch of new dimensions along which to move toward a dystopia, which means people are already fiddling with the sliders. That alone accounts for some of it.

But I think a major contributing cause is a more nebulous cultural reason, one outside tech itself, in that a certain brand of artistic criticism and commentary has become surprisingly rare. In the 20th century a mainstay of satire was skewering greedy corporate overreach, a theme that cropped up across different media and genres, from film to fiction. Many older examples are, well, obvious.

From John Carpenter’s They Live

To pick a niche: novelists in particular used to be stalwart in their warnings about attempts to entertain us to death. Writers like Don DeLillo and David Foster Wallace built entire careers on the idea that American society is headed toward super-stimulation by its corporate overlords. Not only is there a video in Infinite Jest so entertaining you literally die, but in its close future, dates have become billboards, so the novel skips around temporal locales like the Year of the Depend Adult Undergarment or the Year of the Trial-Size Dove Bar.

I found this stuff deep in college, but frankly it became a bit more juvenile with age; funny at best (a la Wallace), overwrought at worst (a la DeLillo). I could never forgive DeLillo for his novel The Silence, set during a blackout that destroys the hopes of a group of friends for watching the Super Bowl. Some in the group essentially have mental breakdowns; one just stares at the blank screen, another keeps repeating “football.”

Stove dead, refrigerator dead, heat beginning to fade into the walls. Max Stenner was in his chair, eyes on the blank screen. It seemed to be his turn to speak. She sensed it, nodded and waited.

He said, “Let's eat now, or the food will go hard or soft or warm or cold or whatever.”

They thought about this, but nobody moved in the direction of the kitchen.

Then Martin said, “Football.”

A reminder of how the long afternoon had started. He made a gesture, strange for such an individual, the action in slow motion of a player throwing a football. Body poised, left arm thrust forward, providing balance, right arm set back, hand gripping football…. He seemed lost in the pose, but returned eventually to a natural stance.

Max was back to his blank screen.

I threw the book across the room (it was so thin it flew like a frisbee). Like, wow, have you ever thought about how people worship the Super Bowl like it's a religion? Have you ever thought about how much we need TV? It felt dorm-room-level deep. In real life that scene would never happen. Blackouts are exciting, familial. People laugh more, come together in the dark, busy their hands (there’s a much better blackout scene where all of New York City goes dark in, ahem, my novel The Revelations). Characters choosing to watch a blank screen like zombies and repeating “football” is too on-the-nose. It offended me as a novelist, the way a cook would be offended at finding too much salt in their entree.

The theme wasn't reserved to fiction; it cropped up prominently in cultural criticism too (often with more successful treatment than DeLillo's). E.g., in Neil Postman's famous 1985 book, Amusing Ourselves to Death, he posits that the original American culture was heavily typographic—that is, it “took place” via printed text, which entailed complex arguments that in turn engendered a longer attention span. A culture’s popular mediums are its mediums of thought, so Postman warned about the switch to the new visual medium of television, with its brevities and seductions of images.

But from the vantage of today, having lived through the period when the cultural change Postman warned of reached maturity and arguably completed its life cycle by giving way to online content, even parts of Postman’s work can occasionally feel exaggerated in direness.

Television is our culture's principal mode of knowing about itself. Therefore, and this is the critical point, how television stages the world becomes the model for how the world is properly to be staged. It is not merely that on the television screen entertainment is the metaphor for all discourse. It is that off the screen the same metaphor prevails. As typography once dictated the style of conducting politics, religion, business, education, law, and other important social matters, television now takes command. In courtrooms, classrooms, operating rooms, boardrooms, churches, and even airplanes, Americans no longer talk to each other. They entertain each other. They do not exchange ideas. They exchange images.

Great paragraph.

But, huh, do we Americans really no longer talk to each other in boardrooms and churches and operating rooms, and instead entertain each other? In an operating room, do the nurses now perform a choreographed dance? Is there a laugh track as the piping hot and ready-to-burst appendix is lifted out? Postman’s diagnosis there feels, charitably, like a stretch. And do we really exchange images instead of ideas? Eh, that one’s more debatable. Maybe group chats morphing into strings of meme sharing proves Postman right, and we really do just exchange images now.

I’m not claiming the Amusing-Ourselves-to-Death theme is totally gone from our culture. It still crops up in public debate, and it still has power: Meta pulled those seeded AI bots after public backlash (but various statements make it likely they’ll return). Yet consider the bevy of dismissive responses to psychologist Jonathan Haidt when he dared suggest that smartphones are bad for kids in schools. To me it seemed a knock-down case. Of course smartphones can be unhealthy! They’re unhealthy for adult public figures, from the richest people in the world to famous actresses to our incoming president—you think pre-teens are immune?!

Maybe the quibbling with Haidt over the minutiae of studies is because people have high standards of evidence. More likely, people are (a) scared of becoming old fogeys and (b) have been inured by over-hyped past warnings. Especially in the latter half of the 2010s, it began to feel like the critic who cried wolf. Sure, we became a visual culture instead of a typographic culture, but it wasn't that bad. Sure, smartphones destroyed communal life and replaced it with online life, but it wasn't that bad. Right?

Critiques of these changes feel out of style, as old as DeLillo. It didn’t help that from ~2012 to ~2024 the arts were heavily focused on politics and personal identity. Which can make great themes for art, if pulled off correctly, but the trend so outweighed everything else that there was little room for satire and criticism more broadly. All to say: for overdetermined reasons, satires about media consumption and corporate overreach and close-world dystopias have gotten noticeably rarer in mainstay genres like movies, books, etc. And while the reach of artists critiquing superstimuli culture wasn't always nationwide, they did often reach the right people at the top of the social hierarchy, and it all trickled down.

Now, if asked to name the last truly popular and incisive tech satire, I’d go with the TV show Black Mirror, which premiered, guess what, 14 years ago. Since then, a couple startups have proudly announced plans to do the exact thing that Black Mirror said not to do. Which probably indicates my personal taste for a subtler satire is too refined—what is too much salt for me is, apparently, way too little salt for plenty of consumers, who then mistake it for sweetness.

So I’m forced to say it: We need more artistic and cultural criticism of close-world dystopias! Forced quite literally here, by the abundant examples of real corporations acting as if they were antagonists in some postmodern DeLillian or Wallacian story bent on amusing us to death.

In which case I’ll admit I was wrong in my judgments. I simply didn't realize what all the on-the-nose criticisms were keeping at bay.

Baum's original Oz is darker and stranger than Wicked

2025-01-07 23:52:16

Art for The Intrinsic Perspective is by Alexander Naughton

What better way to spend 20 minutes than reading a review of a book published in 1900?

If you should ever spend such time, spend it now, for the land of Oz is having a cultural moment following the success of the new Wicked movie. “Defying Gravity” was #1 on the iTunes chart, and the Wicked subreddit breached 200,000 members (riding the momentum from Glinda memes).

I, so unaware of what’s in, took no heed until I saw the posters and signage in all their pink and green glory. And then I couldn’t help but notice the promotional narrative around Wicked: It’s supposed to be a darker, more adult, more realistic telling of a cheery fable. The Wicked Witch, now a former roommate of Glinda, discovers that the wizard not only has no power, but is planning a—let’s just say it—Hitlerian political move to blame all of Oz's problems on talking animals. So she defects to political resistance.

I sound like I'm making fun of the movie here. To clarify, I’ve read Maguire’s novel that sparked the adaptation, but have I seen the movie myself? Uh, no. When would I have time to see a movie? I have a three-year-old and one-year-old. But I have seen the musical Wicked three times in person, and the movie hews almost identically to it. And I actually liked the musical a lot. (Does it surprise you to know I love musicals? Yes, I do. Let's move on.) The buddy-comedy of Elphaba and Glinda hits the right notes, as it were, thanks mostly to the air-headed charm of Glinda, who does her best Legally Blonde but-in-sorcery-school impression. Certainly, the new stage/screen adaptation is a more adult story than the sunny film adaptation from 1939.

But the original book is a different, darker, matter. I know because I just finished a read-aloud of Baum’s The Wonderful Wizard of Oz to my son. I’d found a 1903 edition at a used bookstore where each page is riven with images and splashed with colors; its frenetic design began to make sense when I realized The Wonderful Wizard of Oz is all about colors (as every child knows, each color holds a different kind of magic).

And now I think Baum’s The Wonderful Wizard of Oz already contains many of the elements the new stage/screen version of Wicked is supposed to add. In fact, I’ve come to conspiratorially believe people have been reading The Wonderful Wizard of Oz wrong for, oh, about a century.

So sit, and listen to bloody misfortunes and strange occurrences. Hear of a woodsman who chops off parts of his body until it is entirely replaced with tin, of porcelain figurines cursed to life and consciousness who break into pieces if they should ever fall down, of a giant spider with razor-sharp teeth and a long neck as thin as a wasp’s body, of how the Emerald City isn't really green at all but as dull white as a cataract. Hear of the real Dorothy, she who never needed an anti-hero story, because she was always an anti-hero to begin with.

Dorothy, the assassin and the fool.

Read more