LessWrong

An online forum and community dedicated to improving human reasoning and decision-making.

Laptop stands are a thing your neck may appreciate

2026-04-16 18:01:15

I recently complained to a friend that I would like to spend more time writing in cafés, but that I quickly get neck pain from staring down at my laptop.

My friend made me aware that laptop stands are a thing, and that I can just prop my laptop up so that the screen is at eye level.


This is amazing and - together with a separate keyboard and mouse - fixed many of my ergonomics issues. Though if my chair isn’t high enough, the screen might end up slightly above eye level, which isn’t totally comfortable either. But that’s usually easier to deal with.

I’m flabbergasted that I’d never seen anyone else use one of these (not even at their homes, where they wouldn’t need to carry external peripherals around), nor had I heard of them before my friend told me about their existence.


Mine is a Leitz Ergo one. When you don’t use it, you can fold it away and it goes into a nice flat pouch.


The only drawbacks are that one, you need a bit more space. And two, your laptop will (literally) stand out a bit more. This does sometimes make me feel a bit more self-conscious.

On the other hand, the way it stands up kind of makes it look like one of those Transformers robots, which is cool.


Well, kind of.

There are also bookstands. They’re slightly less convenient in that you need to frequently remove the book from the stand to turn the pages, but I’ve appreciated mine nonetheless.









Simulated Qualia Mugging

2026-04-16 16:25:40

I think that preventing suffering is more important than causing happiness, and I try my best to prevent the suffering of all things that I consider moral subjects. To this extent, I'm vegan, donate my money to effective charities, and so forth.

At the time of writing, I'm thinking a lot about emulating qualia. I've been grappling with whole brain emulations for the past few months and LLMs seem like they might have a decent claim to moral subjecthood too.


the following is fiction

Toda Corporation, an Israeli startup that is the current world leader in whole brain emulations, recently had the weights of their first human upload leaked.

Oren Mizrachi is the eccentric son of Israeli billionaire Eli Mizrachi. Eli made his fortune by founding WorldEye, an AI geodata company, and selling it to Palantir, where it later became the bedrock of the modern defense industry. Oren was always rebellious, living in counterculture and punk communities, and there were always arguments at the dinner table.

In 2028, Oren was at Burning Man, this time on 3 grams of shrooms. During his trip, the founder of Toda approached him about uploading his brain. Thinking that this was a masterful act of rebellion, he signed up, and the first set of weights was sitting on a RAID server in Tel Aviv by the end of the year.

They have a lead of a few years over the rest of the field, and it shows. The human is placed in a ridiculously high definition virtual environment, with the simulation controller having the ability to target every input channel of the human's brain. In the past, they walked the road towards emulations, first uploading worms, then mice, then monkeys, and now humans, all in stealth. Only a few investors in Tel Aviv and San Francisco knew about the project, and no one had the full story.

The simulated humans are also really efficient, on a compute basis. They require some interesting memory engineering, as each human has 90 billion neurons and 90 trillion synapses, requiring a few terabytes of RAM each, but the models utilize the natural compute sparsity of the human brain to reduce their compute overhead. DDR4 RAM prices have now cratered, after supply increased during the 2026 AI RAM Buyout, so this memory usage isn't a problem. Each human is able to run on a single 4080, and they're able to be packed thousand-to-one on the latest Vera Rubin AI supercomputer.
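As an aside on the memory engineering: a quick back-of-envelope check makes the numbers concrete. The 4-bytes-per-synapse baseline and the 4 TB budget below are illustrative assumptions of mine, not figures from the story.

```python
# Back-of-envelope check of the simulated-brain memory figures.
# The 4-bytes-per-synapse baseline and the 4 TB target are
# illustrative assumptions, not claims from the story.

SYNAPSES = 90e12     # synapses per simulated brain (from the story)
TB = 1024**4         # bytes per terabyte (binary)

# Naive storage: one float32 weight (4 bytes) per synapse.
naive_tb = SYNAPSES * 4 / TB
print(f"naive float32 weights: {naive_tb:.0f} TB")

# To fit "a few terabytes" (say 4 TB), the average cost per
# synapse must be well under a single bit:
bits_per_synapse = 4 * TB * 8 / SYNAPSES
print(f"bits per synapse to fit 4 TB: {bits_per_synapse:.2f}")
```

On these assumptions, the naive representation is hundreds of terabytes, so fitting a brain into a few terabytes implies sub-bit-per-synapse compression, which is exactly the kind of "interesting memory engineering" the story gestures at.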

Over the spring of 2029, a backdoor in OpenSSH was discovered, and all of Oren - weights and inference code alike - was uploaded to HuggingFace. Though it was swiftly removed, a copy of the data was sold to the Chinese government for an undisclosed sum.

It was an open secret that WorldEye was backdoored, but the specific backdoor at play required a 256-bit key that only Eli had access to. Eli had jingoistic tendencies, and implemented this backdoor in case any enemies of Israel were to gain access to the software, allowing him to corrupt the data, rendering it worse than useless.

Eli's personal phone rang on July 10, 2029. He picked up, and was startled by the voice he heard. "Hey dad," said a voice that sounded just like Oren.

"Oh my god son, you haven't called for months. How have you been?" "I'm in trouble dad, I-", said Oren, before a voice in the background cut him off.

In a thick Chinese accent, a voice spoke softly. "We have your son."

"Who are you? What do you want from me?"

"We want you to give us the key to WorldEye, and we want you to tell us how to remove the backdoor ourselves."

"Fuck you."

"Better choose your words carefully. We have your son, and we're not afraid to do what we need to."

"If I give you the key, all Israeli intelligence will be compromised. Hundreds of thousands will die. The few hours of suffering my son will endure is not enough reason to do this, even if it breaks my heart."

"You'll want to join the call on the link we just sent you."

Eli opened the link to the call, which relayed video and audio in high definition over secure lines. "You might want to turn your video on, Eli. Oren may want to see you," said the Chinese military officials as they started sharing their screen. On one half was a video feed of Oren, simulated in a generated environment, with probes in his simulated brain, reading from every neuron. With some machine learning, the Chinese had identified patterns that corresponded to all sorts of qualia.

"Oren is currently seeing you, but we've had this environment for a few weeks, and we've been training the model generating the environment to maximize the negative emotions he's feeling. In other words, we're creating simulated hell. In a few minutes, when our experiment completes, Oren will be subjected to torture worse than any other human has ever experienced, over trillions of simulated hours - millions of human lives. Still think it's not worth it?"

Eli sighed. "I'll give you the key. But you have to promise me that you'll stop this experiment."

"The machine will provably stop when you give us the key. Here's the source code to the machine, read-only. We're not going to torture Oren for no reason."

Unbeknownst to everyone involved, Toda had thought of this. The execution code was so convoluted and complex that no one on the Chinese team had the ability to understand it. The signs were subtle. Oren's simulated breathing cadence flickered a tenth of a second from where it should have been. If the Chinese had been attentive, they would have noticed that a few neurons that should have been present were missing from the simulation.

"X", typed Eli on his keyboard, fingers trembling.

More glaring differences were popping up. Oren started twitching and blinking irregularly, in a way that couldn't be attributed to the suffering he was facing. The Chinese were so focused on Eli typing in his key that they did not notice.

"c7g4w9n"

By now it was clear to Eli that something was not quite right, and his cadence of pressing keys slowed greatly. The team at Toda had programmed a failsafe: the model would self-destruct, wiping all copies of itself from all networked devices.

In their haste, the Chinese hadn't created an airgapped backup of the model, and by now it was clear to Eli that the erratic form of Oren on the screen was no longer his son.

"Just remember, I know how to backdoor your version of WorldEye, and I'll be sure to use it to destroy your army," said Eli, shedding a tear, knowing that, to him, his son was no more.

There was one copy of the weights left in the world.

Eli started booking his tickets to Tel Aviv to make sure that this would never happen again.


I'm quite worried about the suffering that simulated qualia may face. I appreciate the work that Eleos and other similar organizations have undertaken to help good-faith actors verify that the models they deploy are not suffering.

However, I think it's very possible, given access to the weights of a model that feels emotions, to adversarially create environments, or even finetunes of the model, that maximize its suffering.

Even if the Claudes and GPTs and Geminis always stay in the ivory towers of their companies, thousands of open-source models are floating around in the wild, and the best of them are only a few years behind the frontier closed-source models. It's entirely believable to me that by 2030, it will be trivial to create an environment to torture a simulated moral subject.

How to prevent or deal with this is entirely foreign to me. I subconsciously want these models not to be moral subjects, in order to avoid this problem, but I think it's necessary to face the music.

What do we do in a future where unimaginable torture is free?




You Aren't in Charge of the Overton Window; Politics Is Not Interior Design

2026-04-16 16:08:12

Sometimes, people don't say what they actually think, not because saying it would be rude or costly, but because they believe saying it now would be counterproductive. They see that the true claim is outside the Overton window. And they conclude that the strategic play is to say something weaker, something adjacent. That will let you normalize the frame without triggering the immune response. You will redesign the house a bit now so that you can slide the window later. Then, when the ground has shifted, you imagine, the real claim becomes sayable.

Strategic discourse chess?

Navigating Public Opinion

The above is an attempt at high-dimensional discourse chess. In politics and the world of ideas, it seems that people play it constantly. But building on a recent comment by Rob Bensinger, I want to argue that the conceit behind playing - that we can model how public acceptability shifts and cleverly intervene to steer those shifts - is usually wrong. That is not to say that discourse has no structure, or that framing never matters.

Most people vastly overestimate their ability to predict second- and third-order effects of anything, including strategic speech. And this is a more damaging error than you might expect. The Overton window is real enough as a rough description, but you won't get to redesign the game board by yourself. And if you try to use the window to navigate, it becomes completely opaque.

Despite that, people routinely substitute strategic positioning for plain statement, the simulacra level shifts upward, and arguments get made for their imagined downstream effects rather than on their merits. Movements distort their own public positions and then lose track of the distortion. The hedged version becomes the one newcomers learn about, and the original assessment survives only in private conversations. When talking to others, they need to "peel back layers upon layers of bullshit priors to even begin to rebuild the correct foundational assumptions on which anything you want to discuss must be rebuilt."

Yes, Overton windows exist, but...

Any society has zones of easy speech, costly speech, and nearly unspeakable speech, and those zones move. Repetition changes salience. Institutions confer or withdraw legitimacy. Crises make previously marginal ideas suddenly concrete. None of this is controversial, and none of it is what I am arguing against.

The error begins when a rough descriptive metaphor gets promoted to a causal model, and that causal model licenses departure from simple communication. "Shift the window" doesn't work when there are dozens of windows being used by different people, and you don't know which of them can be moved, by whom.
 

Saying that discourse has shifting boundaries is a true claim, one that helps you and others understand costs and make decisions. But moving from there to saying that you, or some other given person, can reliably forecast how their speech acts will move those boundaries (through chains of intermediaries, coalition responses, media distortion, and counter-mobilization) is a very different claim. The first is an important social observation. The second is a prediction about a complex adaptive system, and it should be held to the standards we normally apply to such predictions. And even if we had no moral compunction about lying - perhaps by omission, perhaps by shading the truth and making weaker statements than those we believe - we should still not do so if the prediction about our capability to manipulate others is incorrect.

...can they be reliably manipulated?

So we can lay aside the moral argument, though one wishes it were sufficient, to ask whether the predicted ability to manipulate social reality is correct. Consider what you would actually need to know to execute a successful higher-order discourse strategy. Not just "what happens when I say X," but "what happens because others react to my saying X, and because still others react to those reactions, and because institutions update on the pattern."

You would need to know which audiences matter, which intermediaries will amplify or reframe your statement, how opposing coalitions will interpret the move — not just what they will think of the claim, but what they will infer about you, your coalition, and the trajectory of the dispute. You would need to know whether the framing you introduce will remain yours or get captured and repurposed by opponents. It is easy to think your picture is the same as the window, but it's hard to know when you can't see through the version in your head. In practice, it seems like nobody knows these things at the resolution that confident strategy requires - even though thinking otherwise is, as Magritte kind-of said, the human condition. 

The Human Condition, 1933, by René Magritte

Worse, the causal pathways between a speech act and its downstream effects are partly hidden, partly unstable, and partly shaped by the behavior of people who are themselves trying to game the same system. The painting actually changes the landscape behind it. Feedback loops run through media, institutions, and coalition dynamics that are individually hard to model and collectively beyond the reach of the precision that "I will shift the window incrementally over three years" demands. Markets exist and price movements are real, but most people cannot profitably trade on macro narratives. The Overton window is the same kind of thing — it points at something real without giving you a dashboard.

Why would you think this could work?

A large part of the overconfidence comes from narrative availability, that is to say, post-hoc selection bias. Discourse shifts are easy to explain after the fact, even when they are very hard to forecast before. Once gay marriage reached majority support, or once the Iraq War became broadly unpopular, you could tell a clean retrospective story about how acceptability moved. The framing shifted here; the key event was there; the tipping point was this. But for every retrospective narrative that sounds compelling, there are dozens of alternative pathways that would have sounded equally plausible in advance and did not materialize. Nobody writes the postmortem on the strategic frame that vanished without effect.

Smart, politically engaged people are especially vulnerable here. They are immersed in discourse, they track symbolic moves constantly, and they see lots of local reactions in their own milieu that they mistake for system-level visibility. A policy intellectual watches their essay circulate within their corner of Washington and concludes they understand how public opinion mechanics work. But the visible reactions within a narrow professional circle are a wildly unrepresentative sample of how a broader, messier, more inattentive public will respond.

The case of AI Safety

Getting back to the conversation that spawned the very long essay, the effective altruism movement's long strategic deliberation around AI risk messaging is a case in point. For years, many people in the community believed that advanced AI posed serious and/or existential risks, but worried that saying so plainly would be alarmist, and place the concern outside the window of respectable policy discussion. The public vocabulary was carefully modulated: emphasize near-term harms, speak in technical terms about "alignment," build credibility with the ML establishment before making stronger claims, avoid the giggle factor. The strategic logic was explicit and constantly discussed within the community.

As Rob Bensinger recently said, directly inspiring my analysis, "EAs' attempts to play eleven-dimensional chess with the Overton window are plausibly worse than how scientists, the general public, and policymakers normally react to any technology under the sun that sounds remotely scary or concerning or creepy." I agree, but also want to point out that Rob's statement is also the kind of discourse retrodiction that I'm condemning.

To explain, I'll first try to make the story clear. The LessWrong rationalists, led by Yudkowsky, started thinking and worrying about AI risks. Mountains of digital paper were spilled on the technical concerns and the reasons to expect the risk to be existential. Bostrom took up the mantle, while sitting in literally the same office as MacAskill and CEA. But while rationalist groups were trying to be careful about noticing the skulls, as Rob said, EAs were more politically savvy and didn't want to talk too loudly about the fanaticism; it was recognized more quietly in academic papers, but most of the movement tried to downplay any direct claims about extinction, and talked about Global Catastrophes instead, while meaning existential risk. (I am certainly guilty of this, e.g., conflating Global Catastrophe and extinction.)

But while the EAs were too cleverly avoiding saying that if anyone builds ASI, everyone will die, the public became intensely interested in AI essentially overnight. Prominent figures outside the EA community started talking about extinction risk without any of the careful stage-setting that was supposedly necessary. The discourse moved because of an exogenous technological shock, not because of the framing strategy. And when the moment of public attention arrived, the community's public positioning was evidently more hedged and less clear than its private beliefs. The years of strategic patience had not moved the window; they had moved the movement's own voice away from what its members actually thought, leaving them less prepared to make the direct case when it suddenly mattered.

I don't want to overstate this, in two ways. First, most of the credibility-building during those years seems to have helped. There may even be cases where the patient framing work around what to say laid groundwork that paid off in ways I can't trace. But the broad shape of what Rob outlined, that is, years of strategic hedging, an exogenous shock that moved the debate on its own terms, a community caught flat-footed by its own caution, is suggestive, even if any individual judgment call during that period might have been defensible at the time.

But second, this overstates the EA community's confidence in the existence of existential threats from AI. There were, in fact, and still are, very clear splits between the most and least worried. Unsurprisingly, these splits were unclear both externally and internally. There was supposedly consensus about EA priorities, even when there shouldn't have been, because actual moral views differed. But as I said there, "cause prioritization is communal, and therefore broken" - and as I said afterwards, the community was illegible and confused; they needed to clarify views and fight back against the false consensus.

Pushing back is also manipulation.

So the false consensus effects are a real danger, and one that I think came back to bite the community. But when Scott Alexander says "Hey, I partly disagree with the way this is being communicated, and I'd like to give other people social permission to disagree too," this is partly pushing back against consensus narratives in the way I think was needed, but it is also explicitly pushing for a second-order effect of expanding the Overton window.

As should be obvious, I think that's both good and bad. The correct point is that truth doesn't always win, and that communication is hard. (See: Wiio's laws.) Scott was exactly correct to say that we need to point out when we disagree. But in a meta-conversational discussion about what to say and what not to say in order to have some predictable effect on what others will and won't say, any given views are usually not even wrong. The part where Scott says that he disagrees seems great; the part where he does so to change the discourse seems bad. (But he agrees that he's wrong: "I have the idiotic personality flaw that I believe if I just explain myself well enough, everyone will agree that I am being fair and that everything was a misunderstanding. I agree this is stupid...")

Even first-order effects of speech are hard to predict. You say a thing; different audiences hear different things; media ecosystems select and distort; opponents choose whatever interpretation serves them. Even at this level, confident forecasts are regularly wrong.

Second-order effects are worse by a combinatorial factor. Now you are predicting not just direct reactions but reactions to reactions: allies updating their models of you, enemies mobilizing, neutrals inferring coalition identity, institutions reclassifying what kind of actor you are, opportunists hijacking whatever frame seems newly available. Each response feeds back into the others, and each actor is themselves strategizing, which means the system is reflexive — your attempt to game it changes it.

By the time someone goes past what Scott did, and reaches the third order version of "I don't actually endorse this claim, but expressing it now will make a related claim easier to advance in two years, because the discourse will have shifted in the following way," they are writing speculative political discourse fiction. The number of intervening variables is too large and the environment too sensitive to outside shocks for this kind of planning to deserve the word "strategy."

Again, this error is understandable, because a selection effect reinforces the idea that it can work. The rare cases where multi-step discourse strategy appears to have worked become famous teaching examples, the ones people cite when defending the practice. Of course, the far more common cases where it failed are never labeled as strategic failures. They vanish into the mass of political speech that went nowhere. People learn from a highlight reel and conclude the game is winnable. You want examples? Look at decades of animal-rights advocacy trying to push meat-eating outside the Overton window, using tactics ranging from paint-throwing to billboards to violence.

But there's another mistake that happens, because there is also a simpler and less flattering explanation for the prevalence of strategic overconfidence generally. It is gratifying to see yourself as a subtle navigator of opinion dynamics, and less gratifying to admit that you are mostly guessing. "This would be counterproductive" is often the most prestigious available way to avoid saying something costly. I do not think every instance of strategic reticence is rationalized cowardice. But the opacity of the system makes it very hard to tell when it is and when it isn't, and the people doing it are in the worst position to judge.

Another real-world example: Defund the Police

What does this look like when the strategic logic gets tested against a real adversarial environment?

"Defund the Police" in 2020 was an explicit, self-conscious exercise in Overton window strategy. After the murder of George Floyd, activists adopted the slogan on a specific theory: by staking out a maximalist position, you shift the window so that more moderate reforms — reallocating some police funding to social services, civilian oversight, community investment — seem centrist by comparison. This is textbook window-stretching. The logic sounds clean in the abstract.

What actually happened was that opponents, not allies, got to decide what the slogan meant in public. Republican strategists pinned "Defund the Police" to every Democrat on every ballot. Moderate Democrats spent the next two years trying to create distance from a position most of them had never held. The reforms that were supposed to look reasonable by comparison instead got tarred by association with the maximalist frame. Polling consistently showed the slogan was unpopular even among Black voters who strongly supported the underlying policy goals. The framing had become a barrier to the very reforms it was meant to enable.

The pattern is worth isolating, because it recurs. In an adversarial environment, you do not get to introduce a frame and then control how it propagates. Your opponents select the interpretation that serves them, media amplifies the version that generates engagement, and coalition dynamics pull the meaning away from your intention. The frame goes feral. You can see this in smaller episodes too, where framing devices meant to later support one view get captured and repurposed, and careful attempts at normalization instead trigger pre-emptive opposition. The strategist's error is often simply that they are modeling the discourse as though their move is the last move, when in reality every other actor is also playing.

The other common failure is quieter. Strategic silence curdles into self-censorship. People tell themselves they are waiting for a better moment, and the better moment never arrives because the calculation is unfalsifiable. It is always possible to say the time is not yet right. The gap between private views and public statements widens, and nobody can quite explain when the honest version was supposed to come out. From halfway inside, this is what much of the EA community's AI messaging looked like for years. And it is common enough in other movements that it should be treated as a default outcome rather than a surprising one.

Strategic discourse chess usually underperforms just saying what is true.

What, then, should you actually do[1]?

A direct argument, where you say what you think and explain why, has a property that strategic indirection lacks: others can engage it. Evidence can bear on it. Disagreement surfaces clearly rather than festering as mutual suspicion about what everyone really believes. You are not relying on a hidden causal chain between your speech act and some future state of public opinion. You are making a claim and seeing whether it holds.

This is not always rewarded. Truthful speech has no magical property that makes it persuasive, and plenty of true things have been said clearly and ignored for decades. I am genuinely uncertain about how far this norm extends — in legislative negotiation, in diplomacy, in actual political campaigns with professional strategists and tight feedback loops, the calculus may be different. But in the contexts where most people actually face this choice — writing, public argument, movement-internal discussion, intellectual life — directness has a practical advantage: you get usable feedback. You find out which objections recur, which parts of your view are wrong, who actually agrees versus who was nodding along out of coalition loyalty. If you never say what you mean, you never learn whether it is true.

And importantly, being honest doesn't imply being mean! As Scott Alexander suggested, Be Nice, At Least Until You Can Coordinate Meanness. I would emphasize the "at least." It's often better to just be nice[2] and speak the truth. And this is even more critical in complex environments, where coalitions built around conflationary alliances fracture when the euphemisms get decoded, which they always eventually do. Coalitions built around stated disagreement about real claims at least know what they are agreeing and disagreeing about. If you want to work with the copyright absolutists and the artists unions and the taxi unions to regulate AI use and misuse, you should all know that you have different motives, so that you don't need to lie, or be too-cleverly strategic, either with your allies, or with your opponents.

The obvious conclusion

The Overton window exists. Acceptability shifts. Framing matters. None of this entitles you to the further claim that you understand how the game works well enough to play it at range. It certainly doesn't license you to censure others for how they speak.

My concluding advice didn't need multiple pages of stories and analysis. If you think something is true, usually say it. If you think it is false, usually do not say it. If your primary reason for departing from this is an elaborate theory about how public opinion dynamics will unfold over the next several years, you and others should be far more suspicious of yourself than commonly occurs.[3]

But notice how the opacity of the system makes it easy to rationalize fear as prudence. When the strategic situation is genuinely unreadable, any level of caution can be dressed up as sophisticated restraint, and you can never be proven wrong because you never ran the counterfactual.

Most people who decline to say what they think for strategic reasons are not executing a plan. They are telling themselves a story about a plan. It's a good-looking plan because it's unfalsifiable; the relevant causal structure is unreadable. It's also self-serving, because it rebrands risk-aversion as sophistication.

Again, trying to launder weak truth claims through supposed strategic social effects is usually worse than stating the object-level view. You do not, in fact, know how the discourse game cashes out. The elaborate confidence is unjustified. The Overton window is real enough to constrain you but not readable enough to play like chess. If you cannot see around corners, stop pretending your silence is statesmanship, and don't lie, just tell people you aren't going to talk about it.

Note: After outlining and drafting some parts, this essay was fleshed out by an LLM (in the style of Rob Bensinger or Oliver Habryka, depending on the section), then carefully reviewed and heavily edited. Images were suggested or generated by LLMs.

  1. ^

    Other than reading section titles before starting the section, so you know what they will say.

  2. ^

    This should be obvious, but saying the true thing clearly is not the same as saying it with maximum abrasiveness to prove you don't care about social consequences. That is either its own kind of strategic posturing, subject to the same critique, or it's being a jerk, which isn't an excuse. The norm here is supposed to be honesty, not provocation.

  3. ^

    All of that said, strategic sequencing does sometimes work. Gradualism has real success cases. Legal campaigns sequence arguments deliberately. Some claims genuinely need preconditions before they can land — shared vocabulary, institutional trust, background concepts that make the claim parseable.

    None of this rescues the more complex general strategy for public conversations. The cases where strategic communication succeeds tend to share specific features: well-defined audiences, short causal chains, institutional backing, and tight feedback loops that let you correct course. Freelance discourse strategy across a diffuse, adversarial, multi-audience media environment has almost none of these. The success cases are precisely the ones that least resemble the normal situation.



Discuss

Post-Scarcity is bullshit

2026-04-16 15:00:35

A conversation I’ve heard never:
Erma the enthusiast: “Sure, AI will take your job, but it doesn’t matter, because AI will make so much stuff, there will be plenty to go around.”
Norma the normie: “Well, I’m convinced!”

Is “post-scarcity” bullshit?

Yes, yes it is. That’s today’s blog.

OK, let’s dive in!

Post-scarcity

Post-scarcity.

The idea that we are about to enter an age of limitless abundance, where everyone can have their basic needs met and more… and MORE… and MORE!

It’s a compelling and captivating idea, and for many reasons. And I’m not saying it’s impossible. But is it the default outcome? And what even is this outcome we’re talking about? Is it something people actually want? Does post-scarcity not also mean “post-purpose”?

Let’s look at the basic idea of post-scarcity and the case for it before arguing against it: Economic growth has led to massive sustained increases in the standard of living for the average person. This includes, e.g., huge decreases in people suffering and dying from preventable causes like disease and hunger. If this trend continues, future people (that could be us!) will all experience levels of material wealth unimaginable today. Sadly, a lot of people still do die of preventable causes. Not everyone can afford the best medical care, let alone the nicest food, housing, etc.

But it’s not just some abstract hypothetical! We’re about to make AI that can do literally all of the work for us, cheaper. And it will be smart enough to unlock advances in medicine and other technologies that would take human scientists lifetimes. In the future, when you want something, you’ll just snap your fingers, and your robot butler will instantly give it to you.

So what’s wrong with this picture? Well, we can start by going back to this thing where people still do die from preventable causes… Why exactly aren’t we preventing that? Like, I think we all agree it sucks. So why are we spending money on fancy clothes and food and cars and so on when $5000 is enough to save a person’s life? We have enough material wealth to provide everyone on the planet with a decent standard of living. Why aren’t we doing it?

In 1930, John Maynard Keynes -- one of the most famous economists of all time -- predicted that his grandkids would work just 15 hours a week. Why aren’t we doing that? Is all of this work and stuff really making us happy? Shouldn’t we be spending more time enjoying life and spending time with the people we’re close to?

These things will always be scarce.

These two questions -- “Why are people still suffering in extreme poverty?” and “Why are rich people working so hard?” -- have two main answers:

  1. People are competing with each other for money, power, status, etc.

  2. People derive meaning from work.

And post-scarcity doesn’t have shit to say about this.

But economics does! “Positional goods” refer to things that function like status symbols -- you having it amounts to someone else NOT having it. That’s the point.

…or maybe it’s just an intrinsic aspect of the situation. Take land for example. There is only so much space on earth (or in the reachable universe for that matter…). If I own all of it, you own none of it. Will your robot butler bring you “the sun, the moon, and the stars”? No, sorry, those are reserved for our platinum post-scarcity members.

Here’s a list of things that are never, ever, going to be “abundant”:

  • Physical space

  • Health and longevity

  • Status

  • Security

  • Energy

You know, nothing that important, just (checks watch) the most fundamental things people value and need. I kinda get the feeling that if technology was going to solve this problem, maybe it would’ve by now. Keynes sure seemed to think it would.

What happened between these two tweets? Did we solve global poverty? Or at least homelessness in San Francisco? (I assume there’s some layers of irony here that I’m missing, but boy am I cringing hard right now.)

This is not the post-scarcity you’re looking for.

It’s not that I think the phrase “post-scarcity” isn’t pointing at a thing. I do believe AI and other technologies have the potential to radically improve everyone’s standard of living. It’s just… that’s far from guaranteed. And on some level, that’s never what this was all about. The meaning of life isn’t having your material needs met. People really, deeply care about things like feeling valuable and valued, and that means having purpose and status. AI robs us of the first, and doesn’t change the fundamentally scarce nature of the latter.


I think the whole idea of “post-scarcity” basically functions to bamboozle people who stand to lose their position and power in society, their access to those positional goods, due to AI. Up-and-coming members of the “permanent underclass”. The reality is, nobody actually has a plan to make this whole post-scarcity thing happen… Like, not for you personally, I mean. Obviously, shareholders of the robot butler company will be fine…

The honest truth: techies want to control the future and leave the rest of us with scraps. But seriously, this guy gets it and I applaud his honesty.

Or will they? Honestly, my money is on “no”. Because it’s not just that there’s not a plan. There’s also not even a goal. What does this post-scarcity society actually look like? Is it just like… robot butlers and cures for cancer? Are we all hanging out making art and engaging in wholesome activities? Or giga-coked-out watching ultra-porn?

A friend of mine remarked that people seem to be imagining the future with AI as like “exactly like today, except AI does all of the jobs”. Like, literally, those 5 guys outside fixing the sewers? 5 robots. You, typing up a memo at your computer? Robot typing. Dr. Oz? Dr. Robot Oz.

Transhumanism

And this leads me to another way in which it’s bullshit, the elephant in the room: transhumanism. I’m sure some people really believe in the “eternal hominid kingdom” version of post-scarcity, but try pressing someone on this and likely enough, before long you’re talking cyborgs and “uploads” and “the glorious transhuman future”. Maybe us lowly humans can actually be satisfied and sated well enough if you just give us material abundance, world peace, etc. But really, the most likely AI futures (that don’t involve AI going rogue and murdering us all) involve surpassing all human limits: intelligence, life-span, and yes -- desires.

Maybe all you need to be happy is your little corner of the universe, your cabin in the woods… what a loser. The winners are over here shipping virtual cabin-maxxing experiences that you can’t even conceive of, and we just acquired your cabin.

But who are these winners? Are they living the good life? Or are they just the ones who most aggressively embraced this new technology and the new reality it brought us? Do they have any time away from the rat race? Or are they just racing ever harder and faster to keep up with the technological curve, never stopping to wonder if they lost something along the way… their “humanity”? (lame). Their ability to ever be, for even a moment, satisfied? Their ability to feel, or experience… anything at all?

The rat race isn’t going anywhere, at least not without major changes to how we organize society. Technological post-scarcity isn’t an end to it. It’s an invitation to stick your head in the sand while we turn this treadmill up to 11. And when we do finish building Real AI and automate y’all away? Shut the fuck up and enjoy your government handouts or freemium robot butlers or whatever; the winners be over here, racing to automate their feet and keeping up with the Joneses. I hear they uploaded and they only take their bodies out for social functions now. I even heard they’re just running low-res versions of themselves in those bodies and are actually using all of their energy and compute speculating on crypto-status markets…




Discuss

Two Examples of Joy in the Seemingly Mundane

2026-04-16 14:48:08

Written very quickly as part of the InkHaven Residency. More experimental than usual.

Yesterday’s post was a bit on the darker side, so today I’d like to write about something significantly lighter.

There are often moments where, as I go about my day, I pause to take joy in the many wondrous things around me. Here are two of the common ones.

The produce section of supermarkets

One of the things that never fails to make me happy is going grocery shopping as an adult. There are some standard things: being able to make my own choices, and having enough money to be able to afford the food I’d like to buy, and so forth. But I often find it’s the little things that spark the most joy.

Something I think about a lot is going to Berkeley Bowl and just seeing all the fresh produce. The image that comes to mind is the seemingly endless mounds of fresh tomatoes in the middle of winter. In a sense, it’s a very small and mundane thing. The tomatoes really aren't expensive or notable; they’re just fresh tomatoes, in the end.

But really, the fact that fresh tomatoes aren’t expensive or notable feels, in itself, certainly noteworthy enough to notice and take joy in. Our society is wealthy enough to have grocery stores with a dozen varieties of tomatoes that are slight variations on each other, all available out of season, either grown via greenhouse or imported from Mexico, and delivered via modern shipping from where they are grown to where I live. Not only that, we’re wealthy enough that grocery stores can just put the tomatoes out in public, with the correct expectation that people will pay for them.

The tomatoes themselves do bring me joy (especially when I eat them), as do the supply chains that enable them. But more than that, the thing that brings me joy is the knowledge that I’m fortunate enough to live in a time and place that is very privileged by the standards of history. It's not a bad place to be; it is, by all standards, a comfortable life in a fortunate time to be alive.

Divine grace between people

Sometimes, I interact with people, and I’m reminded of how good people can be. There are many things that remind me of this: the ambitious 20-year-old, fresh to the Bay Area and determined to do what it takes to do good; my friends, who despite all their busy jobs still take time to maintain their friendships with each other and with me; the drivers who wait for me as I cross the road; and so forth.

But one thing that almost always brings me joy is when people exhibit what Scott Alexander once called divine grace:

But consider the following: I am a pro-choice atheist. When I lived in Ireland, one of my friends was a pro-life Christian. I thought she was responsible for the unnecessary suffering of millions of women. She thought I was responsible for killing millions of babies. And yet she invited me over to her house for dinner without poisoning the food. And I ate it, and thanked her, and sent her a nice card, without smashing all her china.

Please try not to be insufficiently surprised by this. Every time a Republican and a Democrat break bread together with good will, it is a miracle. It is an equilibrium as beneficial as civilization or liberalism, which developed in the total absence of any central enforcing authority.

In the community around me, I often see people who strongly disagree with each other still managing to not only be civil but come together and partake in meals and activities. There are people who think that a large number of their friends are actively destroying the world, while those friends think the first group of people are holding back progress out of Luddism, and yet both groups can still come together at Lighthaven for events, or work as colleagues, or even become close friends. The fact that these disagreements, however bitter, do not cause them to come to blows often feels like nothing less than divine grace. Even when I wonder whether this comes with a cost, I still take joy in the little moments of grace that allow erstwhile foes to live together in peace.


Since yesterday’s post ended with a quote, it seems fitting to end today’s with one as well, from Jack Gilbert’s “A Brief for the Defense”:

“We must have / the stubbornness to accept our gladness in the ruthless / furnace of this world.”

Despite all the issues in the world, and all the suffering and insanity that exists and must be fought against, I still wish to be the kind of person who’s stubborn enough to take joy in the mundane but fantastical things that surround us.



Discuss

How to run from a bull

2026-04-16 14:19:45

I wasn't exactly intending to shove myself in a street with a herd of charging bulls, but peer pressure has a funny way of making one do things like that. It turns out my ego was such that I could not, in fact, turn down the opportunity to participate in the annual Pamplona festival.

Every morning for a couple of weeks, a couple of thousand people crowd themselves into a barricaded street. Today, I am one of them. At 7:30am, the gates are closed. After this point, nobody can leave. My friends and I chat nervously with a couple of Australians we've just met. They've come straight from the club. One of them is tying himself in knots and jumping about. The other stares into the distance. At least if he gets mauled by a bull, he'll be able to lie down for a bit.

We go off and explore the course. Half an hour to visit the 875m of cobblestone street in which we will soon be risking our lives. It stretches from the bullpen at the start all the way through to the arena, where participants will have the best seats in the house to witness the subsequent festivities. We pass some people praying to a statue of Saint Fermin, the patron saint of the area. He was beheaded; I can't quite remember why, but it is etched into the local lore and clothing: everyone around here is wearing a white shirt and the traditional red neckerchief, a somewhat gory reminder of the event.

A few people quietly do some stretches and warm ups. Others stare into the distance, preparing themselves psychologically for the event ahead. We walk past Dead Man's Corner, a spot notorious for bulls slipping and barrelling people into the wall. Probably not the best place to start: it's a common misconception that one is supposed to finish before the bulls, when really the goal is to have some time running with them. Those finishing early are referred to as "los valientes", "the valiant ones", and have various forms of produce launched at them. We settle towards the end of the course, which we've heard is "beginner friendly".

Some policemen come and push us around a bit – it turns out they need to check that we don't have too many people in the pen. The crowd gets shoved around until they're happy that we all fit within the markers they've left on the ground. After a few minutes in the mosh pit, people raise their newspapers in their right hands. I realise I do not have one, and I just raise my hand instead. Good start. A chant rises up, a prayer to the patron saint to protect us from harm. I hope it works. Two people apparently don't think so and are guided out through the crowd to mocking applause.

The police release us shortly before 8am, and we quickly find our spots near the entrance to the arena. There are barriers with holes here, allowing for a quick roll under, and medical staff on hand to treat any serious injuries. Only 15 people have died in the last century, but there are multiple maulings every year. It's not a statistic I want to hear.

I start jogging on the spot. It's just a couple of minutes until they're released now; it wouldn't do not to be ready. I look around. Furtive glances look back. Focused gazes look at the ground, while high knees bounce up and down in anticipation.

The first firework goes off at 8am sharp. My heart-rate spikes. The doors are open. The second follows shortly after: the bulls are running. I have just over 2 minutes before they arrive. I glance around nervously. They haven't arrived yet. Of course they haven't. The fireworks only just went off. I look up the street. They still haven't arrived. Come on. I look at my watch. It's been 30 seconds. I look across at Pedro, an athletic local who's meticulously been doing his warmups for the last 5 minutes. If anyone has this covered it's him. I notice my heart hammering. I look at the runners behind me again. I see someone further back start to jog. Another follows his lead. I peer through, trying to get a lock on the animals. Still nothing. The crowd starts to move, and I feel my brain freeze up.

Something whizzes overhead. The camera. It's filming the event, which means... Pedro takes off. I follow. I look to my left, just in time to see him get steamrolled by the first bull in the herd. I quickly veer to the side. I look over my shoulder and see several more come through. How many are there? Should have looked that up. No time now, not that phones are allowed. I jog along the side, glancing over my shoulder, taking the outside of a bend. The main herd takes the inside. I stay well clear, leaving space for the other runners to dart to safety. I keep jogging slowly as the final strays go through, and make it into the arena.

My friend Lewis bounces up to me like a golden retriever – "We made it!!!"


On the way back we crossed a gaggle of wine-stained tourists returning from the previous night. One of my friends convinced them that the gigantic hole in my trouser crotch had been caused by a bull. It had in fact been the tourist-quality product giving way as I climbed out of the arena. I was happy to eschew truth-telling for this particular occasion.


I will not comment on the morality of the Running of the Bulls, although I think there is an interesting discussion to be had there. I will allow myself to comment on the attitudes of the locals, which tend to be less well known outside of the arena.

One notable thing is the absolute respect for the person of the bull: If you touch the bull, you too will be touched, and significantly less timidly so, by the police. Honour is everything, and respect for the strength and person of the animal is central to the ethos.

Also, this is a full-time fiesta. The entire city is turned into a party town and celebrates night and day. It is actually one of many such fiestas in the Basque region, with some of the more famous ones including the Fêtes de Bayonne in Bayonne, France, and the Aste Nagusia fiesta in Bilbao. The entire community goes out and watches from the balconies above the road or in the stadium itself as tourists pay large sums for the best spots to join them.

This is also a full-time sport. As a first-timer, your aim is to stick to the side and stay out of trouble. As you move up the ranks, the dream is "running the horns", where each buttock is encouraged by a different prong and the bull can smell your farts. I can think of less exciting ways to go.





Discuss