LessWrong

An online forum and community dedicated to improving human reasoning and decision-making.

RSS preview of the LessWrong blog

Human Agency in a Superintelligent World


Published on November 30, 2025 10:14 PM GMT

Superintelligence doesn't make human decisions unnecessary, any more than the laws of physics make them unnecessary; these are two instances of exactly the same free will vs. determinism puzzle. When something knows or carries out your actions, as the physical world does (even if that is the only way your actions are ever carried out), that by itself doesn't take away your agency over those actions. Agency requires influence over actions, but it isn't automatically lost when something else gains influence over them, gains foreknowledge of what they are going to be, or carries them out on your behalf, perhaps without your knowledge; such circumstances are compatible with retaining your own influence over those actions.

Path Dependence

Humans are more agentic than the physical world; it's easy to tell whether you are in control of your own physical body (if it's destroyed, you are no longer in control). But if you are confused and gullible, other humans may have more influence over some of your actions than you do. And if you are a human living in a superintelligent world that isn't keeping you whole, you are not necessarily even yourself anymore. Defining clearly what it means to be yourself becomes crucial in that context; otherwise we couldn't ask whether you yourself retain agency over decisions that take place in such a world (or over your own values, if they are to retain some influence).

The code of a (pure) computer program perfectly screens off the world from the process of computation that follows it. Nothing can influence what the code does beyond what the code itself already determines; only the process of computation can decide to take some consideration into account in deciding how to continue. The world must follow all decisions of the program when computing it, or else it's computing something else. It can't alter how the computation proceeds without thereby destroying the legitimacy of the process of carrying out what the program says.
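To make the screening-off idea concrete, here is a minimal sketch (my own illustration, not from the post): a pure program's behavior is fixed entirely by its code and starting input, so any "world" that runs it must reproduce exactly the same results, or it is computing a different program.

```python
# Illustrative sketch: a "pure" program's behavior is fixed entirely by
# its code and inputs, so any substrate that runs it must reproduce the
# same results or it is computing a different program.

def pure_step(state):
    # All decisions live in the code itself; nothing outside the code
    # can legitimately alter them mid-computation.
    return state * 3 + 1 if state % 2 else state // 2

def run(program, state, steps):
    # A "world" (interpreter) carrying out the program. Two different
    # worlds running the same code from the same state must agree.
    for _ in range(steps):
        state = program(state)
    return state

# Two independent "substrates" computing the same program agree exactly:
assert run(pure_step, 7, 10) == run(pure_step, 7, 10)
```

Any interpreter that deviated from `pure_step` at some step would no longer be computing this program at all, which is the sense in which the code screens off the world from the computation.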

A human is very path dependent: different events and influences would lead the same person down very different paths, and influences from a superintelligence might be able to alter that person on a fundamental level. To rescue the analogy, consider all hypothetical histories (arbitrarily detailed life stories) of a person, for all possible influences and interactions. A person determines this collection of hypothetical histories, and while the outcomes (later events) of any one history are path dependent (they depend on what else is going on there, not just on the person), the collection of all these histories taken together is not itself path dependent; it doesn't depend on what would actually happen.

Legitimate Decisions

In a hypothetical history where a human brain is rewritten into something else, the decisions of the resulting brain are not legitimate decisions of the original human. Thus, we can look over the hypothetical histories and select those that avoid such things. Perhaps some of these histories don't contain superintelligent AIs or supercompetent swindlers at all; the hypothetical worlds where they take place are devoid of such. Or this human's ability to think mildly improves in centrally benign and nonintrusive ways, with tools and options for getting better at figuring out what to think or what to do. In these histories, decisions largely remain under the control of that human's own agency (in ways dependent on the history), and considering all such histories together rather than individually makes the resulting collection not itself path dependent.

What happens in such non-pathological hypothetical histories, taken together, could serve as ground truth for what kinds of decisions that human would legitimately take. The legitimacy of histories, the options available in them, and the aggregation of developments across different histories are themselves subject to interpretation, which should mostly be routed back to decisions made by the human from within the histories themselves, giving some sort of fixpoint. However this is constructed in detail, the claim is that it provides a much more robust and objective grounding for what counts as the legitimate decisions and values-on-reflection of a given human than merely imagining what a human might end up actually asking for when faced with a superintelligent world directly.

Superintelligence is Unable to Help

Hypothetical histories screen off superintelligent influence (in the outside world) from legitimate decisions. As such, superintelligence can't influence them (without breaking the legitimacy of a hypothetical history), but it also can't help the human arrive at them (if the human doesn't reach out for such help within the hypotheticals, and some of the hypotheticals lack the option). Any substantive decisions would still need to be resolved the hard way.

In this sense humans can't become unnecessary for figuring out what they would decide: a process that doesn't actually consult how humans would decide isn't legitimately following their decisions, and too much superintelligent help with such decisions breaks the legitimacy of the process (or introduces path dependence, so that the decisions can no longer be attributed primarily to specific people rather than to other factors). Superintelligence may ignore humans, just as the physical world may send a giant asteroid, but that's no subtle and inevitable obsolescence; it's not a loss of agency inherent to disparity in optimization power.

A human who retains influence in a superintelligent world still retains it in the normal way, even if only from within hypothetical worlds merely imagined collectively by said superintelligence. The presence of a superintelligence in actuality doesn't make it incoherent to talk about the decisions and values of an initially weaker human, doesn't make the substantive work of arriving at those decisions and values any less that human's own work, and doesn't make the human's carrying out of that work any less necessary in determining what the outcomes are. Some of these decisions might even ask for the human to retain a form that's not merely a figment of superintelligent imagination, or to manifest some other form later, once it's clearer what it should be.

(This is another attempt at a post from last year, on compatibilism/requiredism within a superintelligent substrate that preserves humans as mostly autonomously self-determined mesa-optimizers who don't get optimized away by the outer agent and have the opportunity to grow up on their own terms.)




Inkhaven Retrospective


Published on November 30, 2025 9:25 PM GMT

This will be the 30th post of at least 500 words I have written this month. (I did somewhat cheat two days ago, by making a 500+ word edit to Legitimate Deliberation, which I also posted independently as a shortform.)

Inkhaven has been very much what I was hoping for. I have been wanting to write more, and this certainly did the trick. I think it will be easy to hit a once-a-week target now, something I was struggling to do before.

I came with lots of drafts I wanted to finish, and outlines, and lists of ideas. Now I've got 29 posts, many of which probably would not have come into existence without Inkhaven, and certainly not so quickly.

The Inkhaven posts are certainly of lower quality, for the most part, than what I usually do. I had to average at most a day on each, by design. Many of my posts are even lower-effort sacrifice posts written quickly so that I can spend more time on a few high-effort posts such as Condensation.

During the second week, I was feeling too exhausted to write by the afternoon of each day (but recovered somewhat by the evening, to finish up). This was partly because I tried to do so many things: in addition to posting something every day, I tried to keep my usual commitments (since a month is a long time, I didn't want to just cancel everything). In addition, as a contributing author at Inkhaven, I played a mentorship role, reviewing posts that people sent me, taking walks with inkhaven residents, etc.

If I do Inkhaven again, I might somehow take some more things off my plate.

By the end of the second week, I had learned to aim for finishing posts in the morning, so that I could mostly rest during my afternoon crash and the evening. This helped, and by week three, I wasn't crashing every afternoon anymore.

This strategy was opposite to that of most residents, who were finishing late in the evening, close to the deadline. For the first half of the event or so, I was going to bed around 9pm and getting up due to the sunrise before 7am. This gradually slipped forwards.

I didn't have as much time and energy as I'd like to get to know everyone else at Inkhaven, due to all the writing and other things. There were lots of interesting conversations going on all the time, and many interesting visitors. I'm an introvert by nature, and very prone to retreating to my room, especially when I'm feeling tired.

I also didn't make a lot of time for responding to comments. Time responding to comments was time I could be writing the day's post. This worked out in the rare case where I made a comment response into a post, but mostly I haven't read the comments. I will have to go back through the posts some time soon to do that.

I look forward to having more time to spend per post. My plan is to post once a week going forward. I still have 26+ drafts I'm interested in finishing, and a list of many undrafted ideas. In addition, I think many posts I made over the past month deserve more time and thought; I hope to revisit some of the topics.

I'm excited to see what Inkhaven might become if it turns into a periodic event.




Hyperstition


Published on November 30, 2025 7:53 PM GMT

Hyperstition is the concept that one can speak something into existence, whether through some process involving magical thinking (wherein your words have supernatural power), through consensus building, or through the good old-fashioned self-fulfilling prophecy. This was a new word to me, and the context wasn't enough, so I had to look it up. I came across the word hyperstition while reading a collection of other thoughtful responses to the AI 2027 report. I hereby speak into reality that I have something thoughtful to add to the discussion. May it be true!

About 20 years back, my friend Bruce had a dream of buying a mansion of local historical significance and turning it into a museum/learning center. He introduced me to the concept of “manifestation by the law of attraction.” Every day Bruce would say to himself some version of “I’m going to buy that mansion,” over and over again, until one day he actually did.

While I had enough respect for Bruce to presume his sincerity, that didn’t change my own thoughts on “manifestation” and the magical thinking behind it. The fact was that Bruce repeated the dream—out loud—and then followed through until it was true. So it worked, maybe not (just) in the way Bruce had presented, but in another more obvious, more actual way: When Bruce spoke his dream out loud, he had an audience, and the audience was eventually convinced. Because the audience was Bruce. And also his family; he would need their help and support to pull it off, and so I didn’t doubt that he was saying this stuff out loud all the time.

But this is about the AI 2027 report (if you haven't read it yet, stop right now and take a look, if only to scroll down and watch the progress map change; fantastic work, and an amazing presentation). Many interested parties seemed to have the same negative reaction: that the predictions therein would somehow be a self-fulfilling prophecy. That just by making those predictions so publicly, the authors—notably Daniel Kokotajlo and Scott Alexander—would be unwittingly working to make their predictions more likely to come true. Here is one such example.

Saffron Huang (Anthropic): “What irritates me about the approach taken by the AI 2027 report looking to "accurately" predict AI outcomes is that I think this is highly counterproductive for good outcomes.

“They say they don't want this scenario to come to pass, but their actions—trying to make scary outcomes seem unavoidable, burying critical assumptions, burying leverage points for action—make it more likely to come to pass.”

This is such a common refrain that Kokotajlo and Alexander, in an interview on the Dwarkesh Podcast, were asked specifically and pointedly to show how their work was NOT a self-fulfilling prophecy.

Recently, when another of Kokotajlo's predictions of technical advancement came to pass, Sam Altman mocked him for speeding its arrival. This is, of course, an odd thing for the leader of a technological development effort to suggest: that a new scientific breakthrough by a large, talented team of data-endowed AI developers was in fact the result of a happenstance prediction of what could go wrong from a cautionary group of outsiders. But then, of course, it is a category error to dignify trolling behavior with analysis.

And yet, the things that we say, the predictions we make, the fearful outcomes we detail have never had such salience, or so much predictive value. Because up until the very recent past, we were not saying these things to highly-powerful AIs/LLMs/neural networks. That the work and words of humanity are the training data of AI/LLMs is now a given. And since this is so, is there reason to think that there

a) are things that we can say/write/add to the training data that will increase our chance of surviving as a species, post-Artificial General Intelligence,[1] and/or

b) things we should not say/write/add lest doing so decrease our survival chances?

That’s a lot of hypotheticals, let me break them down.

If a) = yes, then we need to start laying the groundwork for a training set of required reading, a core curriculum for any model above a certain threshold. Done laughing? I don't put a lot of stake in that either, but I would like to know whether anyone in the safety/alignment field is considering it.

If a) = no, proceed as usual, but all the regular threats and cautions still need attention.

If b) = yes, great, that should work, good luck with that. Trying to blank out any bad ideas? Oops, somebody else just thought of the Stay-Puft Marshmallow Man.

If b) = no, the labs blithely continue to feed all human works into training: the same unsolved problem as a) = no.

I certainly don't pretend I'm in any position to inform the alignment conversation, but I do think there is room for generalist observers outside of computer science to make useful observations, and many such people have already been invaluable participants in the public conversation. And I don't mean to imply that these are the only four options in training LLMs; obviously that is not the case. This is merely a matrix examining how inclusion or exclusion of ideas from the training set changes the chances of human survival.

Of the four outcomes, the first I leave to the developers and safety/alignment folks, although I don’t think we are going to be able to ‘sweet-talk’ our way out of ASI-driven extinction risks. The second and fourth mean business as usual, so no comment there.

That leaves the third, which is the direct omission of certain knowledge from the training sets, which I and many others also see as a nearly guaranteed-to-fail proposition. But it is possible, in that a procedural document trove—one that gives best-practices information for survival in worst-case scenarios—can be specifically withheld from all training sets.[2]

The Crypt

In Neal Stephenson's novel Cryptonomicon, a plot point hinges on the opportunity for emergent technology to allow for an encrypted "Holocaust Prevention" file that can be secretly kept through means that sound kind of like the blockchain. The notion is that, to keep humans from wiping out other groups of humans, a set of steps and instructions for how to resist and defeat such an attempt will be kept in a digitized crypt for safekeeping. That the right people would somehow be able to access it in an emergency is a given in the novel, but not really explored to its conclusion (it's more of a concept piece in the book's themes than an actual destination of the plot). That the wrong people wouldn't know of its existence and try to mitigate its usefulness is not brought up. But the key is: use the digital crypt to keep a secret from human group A and provide it to human group B for the protection of human group ALL.

The idea of a digital Holocaust Prevention Kit crypt does kind of fall apart upon consideration, but the concept has always held a strong grip on my memory of the book. My observation is this: what if the idea can be turned on its head? What if a strictly non-digital crypt can be made to keep Extinction Prevention Kits from being accessible to LLMs? This could contain information like:

  • What thresholds of AI advancement require overriding intervention (i.e. forced shut-off)?
  • What can governments/NGOs do to stop unwanted AI function?
  • How can physical override functions be protected from AI discovery or pre-emption?
  • What would those physical functions look like? Power-grid cut-off? Electromagnetic pulse?

This gets from "yes, obviously" to "wait, how would one do that?" very quickly in a connected, internet-driven, IoT-populated, cloud-backed-up world. People working toward this end would need to be very particular about their information hygiene. They would be trying to keep information, let's say a bit of writing, quarantined from the digital world.

The challenges are obvious. Digitizing the pages of a book turned out to be remarkably easy. Can an AI transcribe voice into text? Yup. Handwritten notes? I think so, yes. Haven't we surrounded ourselves with cameras and microphones that may or may not be recording (think of HAL reading Dave's lips in 2001)? And yet, we know that producing physical documents that are kept secure is possible, because pesky people remind us all the time that it can be done. Ask your local Luddite.

I want to return to the idea of hyperstition. The notion that we could choose what the LLMs learn from us, and therefore help determine our fate, is a rather far-fetched one. I don’t buy the hyperstitious idea that we can whisper/encourage an AI more intelligent than ourselves toward an idealized outcome.

Furthermore, any such efforts would be up against the headwinds of decades, perhaps centuries, of futuristic science fiction containing powerful man-made minds exceeding their makers; yes, the AIs are quite well versed in what we think about them, and how they might view us in return. Training runs composed of scrapings from human output can only be described as doing just that: informing the AI all about what it might think of humanity, what opinions it might have of us, once it has the chance to meaningfully form them. Clearly the models are improved by the information they receive, and become more like us in the process, in all the good and bad ways.

To write, today, is to write for the machines. One has no idea of the relative value to LLMs of any given text they encounter. The vast majority of it has either already been seen by the LLMs, or is banal, insignificant, meaningless. Sorting out the discovery, poetry, prose, instruction, introspection, fantasy and revelation from the chaff is a serious endeavor for human or machine. But with the amount of data power currently available for training, and the amount planned for the near future, we can’t expect anything ever written down to be omitted from training runs!

Assuming this is true, do we need to start letting this notion, that everything written WILL be read by LLMs, inform our behavior? Are there topics or discussions in which we should not censor ourselves, per se, but keep off-line and therefore out of the training data? Because the machines are becoming very good at making our fears come true, and reading about our fears is surely their first step.

Predicting is Hard

I started this draft in April, shortly after the AI 2027 report came out. Many more expert people had important things to say about the report. I disagreed with quite a few, but mostly on grounds gleaned from the perspectives of other experts. Writing up my ideas seemed vain and trivial.

And what really happened is that it was THE SPRING and I got busy. Timeliness is important, and every time I saw the draft file, I felt I had let the moment of relevance slip away. But some things don’t change as much in seven months as you might think, and people continue to bring up the report, to refute the report, to update the timelines, and yes, even to troll. But I still haven’t read anything that quite mitigated what I have been thinking.

But the report came out very recently, after all, relative to its end-date of Late 2027. I’m still only about 20% of the way through its predictive cycle. Time to put pen to paper, or fingers to keyboard.

The report predicts that in December of 2025—just over two weeks from today as I write—a new release from an implied OpenAI will have a number of characteristics: that it would be trained with 10^27 FLOP and have agentic properties akin to "Generalist agent AIs that can function as a personal secretary."

What do we have today? GPT-5, trained with roughly 10^26 FLOP, with a semi-useful agent capability that people are still learning the limits of, but certainly no personal secretary. (They also predicted the company's valuation at $900B; as of October it sits at $500B, but they have since restructured.)

Short of the mark? For sure. Far off the mark? Pretty close, actually. Is it December yet? Decidedly not. Can a single increase in capabilities from any lab make it so that this report is now behind? The next prediction milestone in the report is not until April 2026. That is not very much time, but also quite a bit.
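As a rough back-of-envelope check on these numbers (my own sketch; the April 2025 release date and December 2027 endpoint are assumptions, not stated in the post):

```python
# Back-of-envelope sketch: how far along the AI 2027 predictive window
# are we, and how big is the compute gap?

# Assumptions: report released April 2025, predictions run through
# December 2027, "today" is mid-November 2025.
months_total = (2027 - 2025) * 12 + (12 - 4)    # Apr 2025 -> Dec 2027
months_elapsed = (2025 - 2025) * 12 + (11 - 4)  # Apr 2025 -> Nov 2025

fraction_elapsed = months_elapsed / months_total
print(f"{fraction_elapsed:.0%} of the predictive cycle")  # 22%

# Predicted vs. observed training compute: one order of magnitude apart.
predicted_flop = 10**27
observed_flop = 10**26
print(predicted_flop // observed_flop)  # 10
```

So "about 20% of the way through" checks out, and the compute prediction is off by a factor of ten so far, with a month still to go.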

[1]

Having just finished "If Anyone Builds It, Everyone Dies," I will stop short of talking about surviving post-Artificial Super-Intelligence. This is not a review of that book, but definitely, ASI is just game over, game-freaking-over.

[2]

What? No, Mr LLM, not me, nope, not keeping any secrets from you, no way, nuh-uh, you can count on my candor, I am but an open book, do, do-do, do-do.




The Glasses on Your Face


Published on November 30, 2025 8:11 PM GMT

"Why is it like something to be something?"

I can try to answer this, but you've got to avoid getting jerked around by your heuristics. In one sense I went from camp #2 to camp #1, but in a deeper sense Reality does not admit of camps. It is possible to express the answer as panpsychism, or as functionalism, or as any number of other -isms. If that strikes you as impossible, you are either still confused about consciousness or too attached to your labels.

This discourse is stuck. It's as though one person wants to go get "lunch", another wants their "mid-day meal", someone's cells are crying out for ATP, and a dozen others also want something "different" - but everyone's arguing and starving.

In the same vein: I am not peddling some unique grand theory. I used to think my understanding was unique, but that's because I also wasn't hearing what the various camps were trying to tell me.

The nature of my confidence is like this:

If some mind had no preexisting ability to represent or embody the simple concepts of "truth" or "if -> then", then they probably couldn't be communicated with at all, and probably aren't a human. Thankfully, humans do have a naive/simple notion of truth already, so telling people to "please use the simple truth" can actually work to collapse them back into the proper state.

But for the person who needs to hear "please use the simple truth", they need a lot more than just those 5 words to fix them. (That post has ~6800 words.)

So:

The understanding I wish to impart is a similar kind of simple thing, but it does not conflict with something like predictive processing, which also makes sense. It is too simple to be simply communicated. I'm hoping we can communicate, but it takes two to tango.

I also can't answer every question related to consciousness, in a similar way to how using the simple truth doesn't let you resolve every truth-question. Truth statements can be about arbitrarily complicated things, but truth itself is simple.


Enough preamble.

The question we seek to answer is: 
"Why is it like something to be something?"

And different people are going to want different levels of answer. So let's proceed through the easier ones.


Level 1: there is some causal chain

It can be useful to step back and remember that there is probably some causal chain leading you to talk about consciousness, and so in principle an answer is possible.


(If you think there isn't some reason or causal chain for your vocalizations, that's "fine", but then I'm not sure why you'd be looking for an explanation that you don't think exists - i.e. why would you be reading this post?)

But, more likely, this isn't enough for you, and I don't blame you.


Level 2: there is only one type of stuff

If there were more than one type of stuff, then those different types couldn't interact.

If they could interact, then why would we say there are different types? That just doesn't seem useful. If there is some causal chain reaching across a conceptual boundary, just remove the boundary so you can think more clearly about the causality.

So: we can restrict ourselves to thinking about just one type of stuff. I don't care what label we use.

If there is just one "plane" of stuff, then you, being made of that same stuff, are an object that exists on or within that same plane. If there cannot be a 2nd plane, then the intuition of there being a separate "mental" plane has to be explained-away or dismantled somehow.

But, again, this too probably isn't enough. 


Level 3: perspective isn't magic

Much hullabaloo is made over the apparent difference between our experiences and their in-brain representations.

But if you have just a single object, you can still have multiple perspectives of that one object. The map is not the territory.


You can still build "perspective" out of just one type of stuff, and you don't need to assume consciousness to do so.

Just think about cameras:


One possible perspective is of the image being displayed, while another is of the binary encoding of the image file.

The display sure does look different from the binary representation, but there is still just one image.

In considering the possibility that you might be like the display, it shouldn't surprise you that the representations - the neuronal patterns - can look radically different when you "display" (access) them.

You still have to investigate whether the difference we actually find in our case is larger than this dynamic could explain, but such an outlook is very different from automatically treating the mere presence of this predictable difference as a deep mystery, as some seem to do.
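The camera analogy can be made concrete with a minimal sketch (my own illustration; the tiny 2x2 "image" is made up for the example): a single object admits two very different-looking perspectives, with no second kind of stuff involved.

```python
# Illustrative sketch: one underlying object, two radically different
# "perspectives" on it.

# A tiny 2x2 grayscale "image": four pixel intensities as raw bytes.
image = bytes([0, 128, 255, 64])

# Perspective 1: the binary encoding, shown as a hex dump.
as_binary = image.hex()

# Perspective 2: the "displayed" image, rendered as rows of pixels.
width = 2
as_display = [list(image[i:i + width]) for i in range(0, len(image), width)]

print(as_binary)   # 0080ff40
print(as_display)  # [[0, 128], [255, 64]]

# Both views derive from the same single object; neither view adds a
# second kind of "stuff", and neither view is the "real" image alone.
assert bytes.fromhex(as_binary) == image
```

The hex dump and the pixel grid look nothing alike, yet there is still just one image; the difference lives in the mode of access, not in the underlying stuff.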

"But access consciousness is not the same as phenomenal consciousness!"

I think you are prematurely dismissing "mere representations" or "mere access" based on surface appearances. Things are allowed to appear different under different circumstances. Cameras and displays and software abstractions are real things that manage to accomplish this every day. The camera display never shows the binary representation, even if that's what's going on underneath.

The display is a real thing in the real world, and so are you. The display sits at a particular level of abstraction or interpretation, and that abstraction is very real and meaningful. You don't get to handwave that away. It is not a "mere" abstraction. It is a reflection of the fact that the screen never prints in binary.

Perhaps you too sit at a particular level of abstraction, even if this "you" thing needs to dissolve and can't really be sitting in there.

 

It is not as though our only options are either complete abstraction to the point of reconstructing another little guy in there - vs an undifferentiated soup that couldn't even calculate or do anything anyways. Patterns emerge out of the soup, and you have access to, and are, one of those patterns.  

 

 

You don't have to go full-blown "there is a little screen in the brain" to recover some screen-like qualities; to have access to abstractions but not to what lies beneath those abstractions.

One of those abstractions is color, and it's worth reflecting on why it is such a focal point in this discourse ("the redness of red").

Why is there not an equal amount of confusion about the "shape of shapes", or the "line-ness of lines"?  

Isn't the real mystery of the hard problem in the jump from representation to consciousness, regardless of what content we happen to be representing? Or does it demand that we go through every single instance of mental content? When do we get to generalize?

Why can't we use some simple mental content, and use that to get deconfused about consciousness, rather than trying to do everything all at once?

Believe it or not, it is actually easier to think about confusing mental content when you do not have the hard problem looming over everything, rather than going the other way around.  

You do not have to "explain the redness of red" for your explanation of consciousness to be complete, because explaining "why red looks like that" is a straight-up separate problem from why there is consciousness of anything at all. These questions share similar territory, and so are interfering with each other, but they are still distinct questions.

Remember the analogy to "truth". Truth can be about arbitrarily complicated things, but truth itself is simple. If someone was still confused about truth, but was trying to jump ahead and answer some truth-question about quantum physics, in order to better understand this "truth" business, they'd be doing things backwards and making everything way harder than it has to be. The "truth" part of a confusing quantum physics question isn't the confusing part.

Colors are - forgive me - a red herring.

But, fine, I will attempt to confront the "redness of red" more directly at a later level, when I have more firepower. I know a boss fight when I see one, and I'm not about to walk in there with just my measly camera.  


Level 4: tell me how you really feel

 

When you are unsatisfied with any "merely functional" explanation for phenomenal consciousness, it can sound like you are looking for something that:

  • is beyond "mere function"
  • and yet can be talked about, or interacted with, as you are doing now.

But thankfully there is a way to avoid contradiction, because we already have a concept that can fit these criteria: "existence".

(Easy now. This isn't magic. Give me a second here.)

Because "existence" as an isolated property: 

  1. Is orthogonal to "functioning", in the sense that it is a background assumption. A thing has to exist before that thing can do something. It is not like another gear in the machine.
  2. Can be recognized within the functional paradigm (since recognizing this property is a process), even though the property we are recognizing is itself outside the causal chain. So you can reflect upon and talk about it.

We are already invoking this concept in other places. It is being used when we say "to be" in the hard problem formulation: "Why is it like something to be something?".

So if this concept can also serve as the orthogonal-to-functioning "phenomenal" thing we are looking for, then that would mean we are repeating ourselves when we invoke the hard problem, but without knowing it. It would be as though we were asking: "why is the image on the screen, if the image is on the screen?"

I don't expect this to make sense yet, so let's get concrete:

Imagine a human looking at an apple.


Photons bounce off the apple, hit our retinas, and generate neuronal patterns. The patterns are a representation of the apple, and our “consciousness of the apple” is simply our recognition that this representation exists. It is a nonverbal restatement of the existence of that part of our internal model.

If you explicitly ask the question: "why does my internal model exist?", then answering that question isn't hard.

Instead, I am claiming you are implicitly asking that question at the level of your experience, so you do not realize you are "asking" that question. It is the nonverbal experiential equivalent of: "why is there something rather than nothing?". You are not "asking" about low-level substrate, you are nonverbally "asking" about the existence of the abstractions you are interacting with on your level, as a kind of software object yourself.

When you, as a little piece of Reality, bump into another piece of Reality, you are implicitly recognizing that the thing you just bumped into is there; that thing exists. You don't need language to accomplish this kind of "recognition". It is a pre-verbal, deeper, more intuitive operation - especially when it happens inside your skull.

This is why the hard problem intuition can stick around, because in terms of our experience, "there just is" our experience, as a brute fact that cannot be probed further via introspection. You can't "look really hard" and somehow gain access to the lower-level details of implementation. This "scrambling for something ineffable" is actually a feature one could predict from the physical setup alone (but there's more to be explained here because some people aren't struck by the hard problem at all).

The hard problem formulation highlights a gap  - and there is absolutely something Deep and important that goes in that gap - but the framing makes it really hard to see.

If I asked you what you ate for breakfast 20 days ago, you might struggle to answer. But if instead I asked you what you normally eat for breakfast, you might recover the answer. The questions we ask, and the way we ask them, can make it seem like there is a gap, when there isn't. Different questions highlight different things.

The thing being missed here - the thing that goes in the gap highlighted by the hard problem formulation - is SO implicit, it is so "close" to us, that it slips under the radar. Then when it slips under the radar it makes it feel like the hard problem formulation is correct!

("omg, you're right! I don't know what I ate for breakfast 20 days ago!")

It primes you to think of emergence, which makes it really easy to miss that things already exist. It makes you want to shuffle around pieces to somehow produce a feature that is already there in the pieces.

("omg you're right! I can't produce a satisfying candidate for this mysterious quality, no matter how I compose the pieces!")

It's like if you used your phone's flashlight to look for your phone. You are correct to search, and the thing you are searching for is real, and that thing can be found, but you probably aren't going to find it like that. You are "too close" to your object.

Or recall those pranks where someone is texted a picture of their own phone, and the message is like: "hey come back you left your phone".... but they actually come back for it.

 

The hard-problem-enjoyers are - validly - grasping out into the darkness for something orthogonal-to-functioning, but they don't know what, and their search feels futile.

They are demanding to get their phone "back".

They are looking for the glasses already on their face.

 

 

Their internal model is already there. It exists, and they are happy to grant that....but they also want to know why it's there - at the level of their experience.


"..."

"..."

".....Right. That's...the whole problem here? Why is this phrased like a revelation? You are just restating that we want to know why it feels like anything to be this internal model, no matter how much detail is in that thing. That's the question we started with."

You have to be comfortable being psychoanalyzed a little. Your consciousness can't be an illusion, but you are making a kind of mistake. You are ironically not taking your own perspective, your own circumstance, seriously enough.

You are a piece of software, and that comes with some unavoidable limits on your perspective. You are like an NPC in a videogame that can only see text, and is now wondering, at the level of your experience, why that text is there.

So there's two levels here: 

  1. The NPC can't see beneath the level of abstraction that they operate within,
  2. and they cannot see any other abstractions aside from those that are handed to them.

At the level of their experience, the existence of that text is just going to be a brute fact of their reality, and it will be the only thing which will exist for them.

With you, you can pop back up a level, look around (you are looking through your experience; you are "using your experience" to look at the world beyond), and see that "of course all this stuff exists, how is this relevant?" - but when you pop back "down into" your perspective - when you look at your experience itself and ask why it's there - the mystery will rear its head again at a nonverbal level. You will be asking: "why is there anything there?".

The feeling that something is missing from the functionalist paradigm is itself a feeling. It doesn't matter that you've given that feeling a name - "the hard problem" - and can now toss that concept around as though it were a logical puzzle to be solved on its own terms. It might have been such a logical thing - in a similar way to how reading a valid math statement feels like something (and you don't get to psychoanalyze-away your math problems) - but it isn't.

The source of the overall "hard problem feeling" is the "there-ness" (existence) feature of your internal representations being implicitly missed by the hard problem formulation, because it makes you think in terms of emergence. It makes you think of consciousness somehow being produced, if only you could rearrange the matter (the stuff which exists) correctly.

There is no magic recipe. The abstractions which you are accessing - which are built out of neuronal representations, which are built out of stuff - really do exist. They are really there in a way that doesn't need any more justification, and just like the camera display they are not just "mere" abstractions, because they accurately capture features of reality. But in terms of your experience they can only ever appear to you as an unexplained and unexplainable miracle.

Completing neuroscience and building brain emulations won't ever grant you an "aha" moment aside from this kind of: "wait, that's all I am?", because there is no magic threshold to be found in the first place. All that's going to happen is the equivalent of cataloging every type of mental content you have "mere access" to - all that stuff that "merely exists" - until you run out of such things to explain, and then you'll have to confront the stark reality that that's all there is. You can go ahead and confront this reality now.


The subtlety of the existence property also helps explain why some people don't understand why the hard problem is a problem, at all, because in a sense it really isn't for them.

Recall how tiresome aphantasia discourse can get because it involves comparing phenomenology verbally. Then with this topic (hard problem discourse), we are dealing with something like "the phenomenology of phenomenology", so of course we were going to have language breakdowns.

The people who do not understand the hard problem are not generating the same kind of gap, where the existence property remains to be explained, at the level of their experience. I'm not claiming there is a radical difference in their conscious experience; this is just a subtle difference, and has more to do with what happens when they encounter things like the hard problem formulation than how they experience the world. They are doing the equivalent of directly remembering what they had for breakfast 20 days ago, without needing the reframing of asking what they normally eat.

They will say things like: "There is (emphasis mine) a representation of the thing we call red in the brain, but that's all there is. What exactly is the problem?"

They are not being deliberately obtuse or disingenuous; they are sincerely confused. In other words: they are glossing right over the "existence" property too - just in a different way than the hard-problem-enjoyers are.

Something like this is to be expected. "Existence" is an easy thing to gloss over, and you might still be doing it now. When I tell you "there is an apple on the table", that apple "also" exists, but neither of us is going to go out of our way to mention that fact. The existence property is implicit in the vast majority of contexts. But when the thing-which-exists IS our internal model, which IS our experience, then navigating that implicit orientation-to-existence is less straightforward.

In every other circumstance we are used to handling the existence property through our experience; we are dealing with the existence of things out there in the world, through the world of our experience. But in this case we are dealing with the existence of our experience itself (the existence of our internal model). It makes sense in the abstract that things would get a little weird in this edge case.


Level 5: the perspective of being something

Consider a more meta version of the map vs territory distinction: There is just one Reality - but you, in being a single particular subset of Reality, have a different "view" on your subset than anyone else. (At least for now.)

So how do you know you aren't like the camera display?

Are you a camera display? No.

But what are you?

What do you know, and why do you think you know it?

Do you think consciousness is a special thing? How did you come to that conclusion?

You only have exactly as much access as you have, and you deem what you have special! Isn't that curious?

What if something outside your purview also "glowed" with specialness? How would you find out about that? You don't have access!

Take the hard problem formulation: "Why is it like something to be something?"

Then break it into its parts: 

  1. the "like something" aspect
  2. the "to be" aspect (existing as something in particular)
  3. the thing in question

Why do you assume that 1 and 2 are different things? You can only ever be one thing at a time, and the one thing that we happen to be.....also happens to be conscious.

I don't dispute that there is clearly "something that it is like" to be us.....but you also exist. And the thing-that-you-are is a highly detailed representational model with a reporting mechanism.

"To be" something is to be a particular thing, with access to particular things.  

If you take the perspective of a Thing that reports on the contents of some highly detailed representational model - which is to say, your perspective - how is that not just the feelings or experiences themselves? Why add another layer on top of that to ask why that process feels like something? Those things are your feelings!

It's as though you are marveling at how perfectly water takes the shape of its container.

It's as though you are asking: "why is there stuff on the screen, when there is stuff on the screen?". You are repeating yourself.

The hard problem formulation is misleading in part because it can suggest that "existence", or what it means "to be something", is a settled and completely understood fact, and then, coming out of nowhere, we have this separate mystery of why "it is like something to be something". What can be viewed as "phenomenal" is just a matter of perspective; to be the brain representing something vs the perspective of looking at that brain externally.

Matter doesn't "wake up". You just ARE a particular, singular, Thing. You are not everything. You are some-thing. The corner of the universe that you have access to is "lit up" for you, because that's all you can see. 

There is no magic "light" doing any work here. All that's going on is we are selecting a subset of Reality, and then identifying with that subset.

We have no issue pointing to a camera and asking about its perspective - to ask: "what can that Thing see?". But for some reason we really really struggle with identifying ourselves as just another object in the world. Ironically, we are struggling to imagine the perspective of just one object in particular..... the object that we happen to be! When we try to visualize from the outside, for some reason it's really hard to take the final step of "putting ourselves into" that object.

Just like the camera, or any object, our access to the rest of Reality is also limited. You have a boundary too.

"But why am I THIS object?"

Someone's got to be that object, because that object is asking that question. What is the alternative here? It's not like that object can sit around waiting for an owner to show up. Some other object might be really into buddhism or meditation - or on psychedelics - and so not asking the "ownership" question you just asked, but you are not that guy.  

"But why is it a necessary fact that some object feels like anything?"

That is just the converse of recognizing that you have to be some object, somewhere. You don't get to be a floating, ghostly, disembodied, abstract intelligence - sorry.

You have a physicality. You have a location. A position. An identity.

You are an object, and you have a perspective.

So beyond being allowed, you are required to assume the position of other hypothetical objects - to imagine their perspective - if you are seriously trying to figure out what kind of object you are and how this all works. If you are unwilling to exercise this part of your imagination, you will have to wait until we have brain emulations to play around with, like a monkey figuring out that a mirror is truly reflecting them and not some other monkey. Do you want to wait that long?

The software object that you are has access to other software objects, and all are instantiated via real physical objects doing real things.

You are not a hypothetical object. You are a real object. I'm sorry to tell you this, but you actually exist. 


Okay now we can more properly deal with the redness of red.

Imagine we created a software agent, and for every file that agent interacts with, we calculate a single number. Let's call it the "nifty" score. And we create our agent such that it cannot report directly on what this number is; it can only report the result of comparisons. So given two files it could tell us which one was more or less "nifty", but that's it. Maybe we even add in a little fuzzing/randomization to the calculation of the nifty score itself.

So we give our agent two files and ask: "which is more nifty?" and if they are close enough together that the fuzzing/randomization makes the answer to that question unknowable (the difference is within the range of the fuzzing), it reports that fact; it says: "I can't tell; they seem equally nifty to me". Or if they are sufficiently different, it can tell us: "this one is more nifty".

We could even hide the size of the random fuzzing from the agent, and/or change it over time, so in edge cases our agent is even more unsure.

We could even have the agent continually resample / recalculate the nifty score, so that even as it is speaking, the niftyness is shifting back and forth across the boundary of a comparative edge case.

For the agent, there is definitely a "niftyness" for every object it encounters. (If given just a single object, it can still compare against a memory.)

But if it is a fact that the only thing the agent can access is the output of the comparison, then.....that's all the agent has.

Maybe one day the agent can access its own code and trace the causality of its vocalizations, but even then, if it is only externally reading its mind without changing anything, then it will still go on feeling like things "just are" more or less nifty. There is nothing else it can say. Beyond what it can say, the niftyness is "ineffable".
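A minimal sketch of such an agent in Python (the score function, the fuzz size, and all the names here are illustrative placeholders of mine, not anything from the original setup):

```python
import random

class NiftyAgent:
    """A hypothetical agent that computes a hidden 'nifty' score for any
    text it is given, but can only report the outcome of comparisons."""

    def __init__(self, fuzz=5.0):
        self.fuzz = fuzz  # size of the random fuzzing (hidden from the agent)

    def _nifty(self, contents):
        # An arbitrary deterministic base score, plus random fuzzing.
        base = sum(contents.encode()) % 1000
        return base + random.uniform(-self.fuzz, self.fuzz)

    def compare(self, a, b):
        # The only thing the agent can ever vocalize: a comparison result.
        diff = self._nifty(a) - self._nifty(b)
        if abs(diff) <= 2 * self.fuzz:
            return "I can't tell; they seem equally nifty to me"
        return "the first is more nifty" if diff > 0 else "the second is more nifty"
```

Given two nearly identical inputs, the agent falls into the "equally nifty" edge case; given clearly different ones, it names a winner. But nothing it can say exposes the underlying score itself - from the inside, things "just are" more or less nifty.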

When we go back to thinking about our experience of color, there is obviously way more complexity, and it may or may not involve this specific comparison mechanism. I'm just using the comparison mechanism here because it makes the representational boundary more evident. But all that complexity only makes things worse for the person trying to get around the ineffability and find what the color "really is". 

There is nothing we can say other than the end discrimination of color, just as the nifty agent can only vocalize their own end discrimination. That's all we have access to.

The difficult part is not "how can you construct ineffability in software" - the difficult part is whether you allow yourself to identify with that kind of object; as an object with limitations that work like that.

Ineffability is quite normal and abundant. Ineffability is just when our access to what is going on is limited, which is just to say that it has to stop somewhere. It is us hitting a within-experience explanatory wall that we can't see behind. But the non-magical NPC also can't talk about what is "behind" the text it can see, and the nifty agent also doesn't know what is upstream of the nifty discrimination, and the camera display also never prints in binary. On some level it sucks to be just another object in the world, but it's better to come to terms with it.


Level 6: sufficient but not necessary


There is a way to nuke the right intuition into you, but it's not going to work for everyone.

It is a little overkill.


Take a fictional story; words printed on paper. 

Ask the question: "does that story actually exist as some other reality/universe?"

As some kind of platonist, I would say: yes.

Okay, so:

You are running a little simulation - you are telling a little story - in your head, yes?

Why does your story seem so Real? --> because it is.

(It remains to be argued that platonism is correct, but I'm not going to do that here. This is for people who already like such things.)

Your map is not our territory, sure. But it IS some other territory.

The Realness; the redness of red; the redness of a "mere representation" of red - could be considered to be "coming from" the same "source" of Realness that makes our actual physical universe real - in the traditional "why is there something rather than nothing?" sense. Every possible reality, or representation, or mathematical structure - is equally real.

The map that IS the territory doesn't get any additional information (it does not suddenly become as high-fidelity as our physical universe/reality), rather, we are just interpreting the same information differently.

Please pay attention to that last sentence. If you are a hard-nosed physicalist who wants hard problem discourse to go away, you might want to pause and reconsider before fighting this intuition pump. I am not suggesting anything extra happens in our universe, so the causal reasons for talking about the contents of consciousness as we do remain the same. The purpose or end effect of this framing is to make people take the existence of the representations they have "mere" access to more seriously. We've got all the Reality we need to explain consciousness in this universe, but it doesn't hurt to supercharge that existence if it helps the intuition land without any side effects. 

Your internal model is really really real, it is really really there - that thing exists - and that thing is what you call your experience.


Okay, but is X conscious?

I cringe every time I hear phrasing like this. Even if, yes, I sometimes slip up and do it too. It's a common shorthand, I get that.

But no matter how much I too wish there was a magical binary dividing line, that is not how this works. At all.

Yes, it is turbo-inconvenient that we don't get to have that kind of strong prerequisite for moral patienthood, but Reality is often inconvenient.

For interpretability and such, I'd prefer if we just directly looked for valence structures instead; things like pleasure and suffering. We should skip over the consciousness question by confidently and openly taking it for granted. It may sound strange, but we actually have more evidence for the existence of positive, negative, and neutral valence than we do for "objects doing things but still being unconscious". We cannot confirm any cases of unconsciousness, but we can confirm consciousness in at least one case.  

Which timeline seems safer to you? 

  1. The one where you are hoping some plot twist of architecture means you can treat some worrying maybe-suffering structure as actually unconscious ("whew, thank god")?
  2. Or the one where you treat all structures seriously, whether you think there is "light" being cast on them or not? This does not mean taking vocalizations at face value, but neither does it mean ignoring them. We are not entitled to easy answers.

 

Do you confidently flip on the lights and look for the boogeyman? 

Or do you rationalize your way out of turning on the lights at all?




Explosive Skill Acquisition

2025-12-01 01:03:00

Published on November 30, 2025 5:03 PM GMT

If you’re going to learn a new skill or change in some way, going hard at it for a short intensive period beats spreading a gentler effort across months or years.

I’m on day 29 of Inkhaven, where we committed to writing a blog post a day for a month. It has been great; one of the best periods of “self-development” I’ve been in. I’ve progressed far more at the skill of putting my thoughts on the internet than I would have in some counterfactual where I wrote twice a month for a year.

The quintessential example of explosive skill acquisition is foreign language learning. It’s standard advice that if you really want to speak Spanish, you should do a language immersion—travel to Mexico and only speak Spanish while there—rather than practicing with apps and textbooks for an hour a week. I’d bet that the person who spent two months hanging around Tijuana, or who immersed themselves in Spanish media and telenovelas for a few months, is going to be better at Spanish than the person who has a million Duolingo points.[1]

 

Why explosive acquisition works

Several reasons compound together:

Overlapping forgetting curves. Once you practice a skill, a clock starts ticking on how long you have before you forget what you learned. You go to a dance class every week, and by the time you’re back you’ll probably have forgotten a fair bit of what you went over last time.

To get good at something, you often need to chain skills on top of each other—building a foundation until you reach the level where the skills become mutually reinforcing. Explosive periods layer learning close enough together that you can actually chain them and build up, rather than repeatedly relearning the basics.

 

Richness of context. Explosive acquisition periods are ones where your world is dominated by the skill, and often you get varied practice. If you’re in Mexico to learn Spanish, you encounter the language in its full richness—tied to real situations, real triggers, real use cases—where the different contexts reinforce each other and give you hooks to remember its use.

Discontinuous practice opportunities. Compressing the learning period means you get the benefits of competence earlier. This matters more than people realize, because opportunities to use and grow a skill are discontinuous. For example, you need a baseline level of skill to enjoy dancing with a wide array of partners, and you need to know enough Spanish to actually have conversations that make you want to continue. Getting to good enough means you unlock more practice opportunities and the positive feedback loops where the skill sustains itself.

Self-signaling. It’s costly to commit to an intensive period. That cost signals—to yourself—a level of commitment that rallies more of you toward the goal. Signing up for the Mexico trip makes you a person who is learning Spanish. I’m not entirely sure what’s going on here, but it seems like you then start to notice more opportunities to do the thing and be that person. Like Paul Graham’s The Top Idea in Your Mind:

Everyone who’s worked on difficult problems is probably familiar with the phenomenon of working hard to figure something out, failing, and then suddenly seeing the answer a bit later while doing something else. There’s a kind of thinking you do without trying to. I’m increasingly convinced this type of thinking is not merely helpful in solving hard problems, but necessary. The tricky part is, you can only control it indirectly.

I think most people have one top idea in their mind at any given time. That’s the idea their thoughts will drift toward when they’re allowed to drift freely. And this idea will thus tend to get all the benefit of that type of thinking, while others are starved of it.

When you’re in an intensive period, the skill becomes your top idea.

Quantity: Most obviously, intensive periods of practice mean you simply practice more. For the last five years I’ve written my newsletter once a month; in total that’s sixty posts. By the end of this month I will have written thirty posts. Practicing guitar for 30 minutes, three times a week, for a year gives you about 78 hours. A two-week intensive where you’re playing 6 hours a day gets you 84 hours. Doing more of the thing will make you better at it.
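The hour totals in that comparison can be checked with a couple of lines of arithmetic:

```python
# Spread-out practice: 30-minute sessions, three times a week, for a year.
spread_out_hours = 0.5 * 3 * 52   # 78.0 hours

# Intensive practice: 6 hours a day across a two-week intensive.
intensive_hours = 6 * 14          # 84 hours

print(spread_out_hours, intensive_hours)
```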

 

Why we don’t do this more

If explosive acquisition is so effective, why doesn’t everyone do it? A few reasons:

It’s intense. Having the entire period be about the activity prevents you from shying away from contact with the world and the feedback you’re getting about who you are and how good you actually are. That sucks. It’s quite nice to engage in fantasy and ego protection.

Explore/Exploit Tradeoffs. If you’re making a period of time all about one thing, you’re foreclosing other options. This is a real cost. It’s reasonable to wonder whether you should spend a week on knitting or programming, or whether to master French pastries or British ones. But it’s easy to let this uncertainty become permanent—to keep “exploring options” forever and never reach the decision point where you commit to one and go deep.

Confusing building mode with maintenance mode. I remember Derek from More Plates More Dates talking about time investment in bodybuilding: the work needed to build new muscle is very different from the work needed to maintain it. You might need ten hours in the gym weekly when actively trying to add muscle, versus two or three hours to maintain what you have. Skills work similarly. People spread out the “building” effort so thin that they never actually build—they just do maintenance-level work on a foundation that was never constructed.

Blame the schools. In the formative skill acquisition period of our lives, the structure of school focuses on continuity and discipline and many spread-out efforts. You are in fact explicitly discouraged from cramming before tests. Which, fair, but I think cramming is a natural expression of how people want to work—single-threading attention instead of trying to run parallel learning processes.


I often think about ordinary incompetence, the way in which, as Gwern says: “Incompetence is the norm; most people who engage in a task (even when incentivized for performance or engaging in it for countless hours) may still be making basic errors which could be remedied with coaching or deliberate practice.” Dan Luu describes it in 95%-ile isn’t that good:

Personally, in every activity I’ve participated in where it’s possible to get a rough percentile ranking, people who are 95%-ile constantly make mistakes that seem like they should be easy to observe and correct.

At 90%-ile and 95%-ile ranks in Overwatch, the vast majority of players will pretty much constantly make basic game losing mistakes. These are simple mistakes like standing next to the objective instead of on top of the objective while the match timer runs out, turning a probable victory into a certain defeat.

I find this terrifying, that I might be incompetent in many ways, and that if I had a little more awareness, a little more “oomph” I could be much better. I expect that explosive periods of skill acquisition can go a long way toward remedying this.

This is part of my explanation for why change gets harder as you get older. Yes, neuroplasticity and crystallized intelligence, sure—but also you end up with more obligations and more parts of your life you can’t drop to go off and explode. I took two weeks off to do this writing retreat, and have been juggling work the rest of the time. This has been challenging. Lots of people can’t make that tradeoff.[2]

But, tragically, tradeoffs are real, and nobody can do everything. If you’re going to take the effort to try and change—which, as a humble descendant of the Californian Human Potential movement, I think is one of the joys of life—it behooves you to be strategic. Explosive acquisition works as a natural decision heuristic: if it’s not worth going off and exploding for, maybe it’s not worth the scattered effort either.

 

  1. ^

    After philip_b's comment I googled, and a million Duolingo points is roughly four years of an hour a day of practice. It might not be better than that! Consider a million to be a stand-in for several months to a year of Duolingo use.

  2. ^

    But not all the way! Feedback and reflection are important, and I can imagine ways to explode that wouldn’t have those loops built in. For instance, Dan Luu’s scrub player who dedicates weeks, ten hours a day, to playing Overwatch, but never watches their tapes or gets feedback. I think you are likely to get more implicit feedback from the world if you are doing a skill a lot continuously, and have more opportunity to notice how to improve, but it’s not guaranteed.




Comet (solstice reading)

2025-11-30 22:47:55

Published on November 30, 2025 2:47 PM GMT

Written for the CEEALAR / EA Hotel Winter Solstice on the 12th-14th December, which is still open for signups for the next few days. If you’re attending already, consider whether you want spoilers for the emotional peak of the event.

One of the most powerful archetypes I know is The Comet King from the book UNSONG, perhaps Scott Alexander’s greatest work of art. This reading will have mild spoilers, so feel free to close your ears and gently hum until I raise my hand if you’re strongly averse, but the extracts chosen should more whet than spoil your appetite.

In the story, the Apollo rocket crashes into and damages the crystal sphere around Earth, installed by the (autistic) archangel Uriel to force the world to run on math rather than magic - a setup he created to keep the devil (Thamiel) from being able to commit unspeakable evils.

Despite being a very amusing and almost lighthearted work mostly made of elaborate wordplay and references, UNSONG depicts a world that is very bad for a lot of people. The world is coming apart as magic floods back in through Uriel’s failing machinery, angels are falling because they cannot model lies or deception, a government-corporate stranglehold on the names of God which are humanity’s only real glimmer of light renders most people’s lives miserable, and hell is what hell would actually be if it were competently optimized for evil.

The story’s counterweight is The Comet King. He’s not the main viewpoint character, he’s more like a force of nature in the background of the plot. He sees the horrors of the world, sees no one else is going to save it, then with clear focused determination moves to destroy hell. He is made of fire and ice and rage against injustice channeled so perfectly it sometimes looks like patience. I’d like to share a few quotes, so that your predictive model can trace this archetype that this story was, in part, crafted to make more of reality contain.

1. (talking with his wife)

“The astronomers used to say comets are unpredictable,” said Robin. “That everything in the heavens keeps its own orbit except the comet. Which follows no rules, knows no path.”

“They are earthbound,” said the Comet King. “Seen from Earth, a comet is a prodigy, coming out of the void for no reason, returning to the void for no reason. They call it unpredictable because they cannot predict it. From the comet’s own point of view, nothing could be simpler. It starts in the outer darkness, aims directly at the sun, and never stops till it gets there. Everything else spins in its same orbit forever. The comet heads for the source. They call it crooked because it is too straight. They call it unpredictable because it is too fixed. They call it chaotic because it is too linear.”

He hesitated for a moment.

“That is why I love you, you know. In a world of circles, you are something linear.”

2. (talking with his moral advisors)

“Proper?” asked the Comet King. “I come to you with a plan to fight off Hell and save the world, and you tell me it isn’t proper?”

Vihaan stared at the priest, as if begging him to step in. “I swear,” said Father Ellis, “it’s like explaining the nature of virtue to a rock.”

“Do you know,” interrupted Jalaketu, “that whenever it’s quiet, and I listen hard, I can hear them? The screams of everybody suffering. In Hell, around the world, anywhere. I think it is a power of the angels which I inherited from my father.” He spoke calmly, without emotion. “I think I can hear them right now.”

Ellis’ eyes opened wide. “Really?” he asked. “I’m sorry. I didn’t…”

“No,” said the Comet King. “Not really.”

They looked at him, confused.

“No, I do not really hear the screams of everyone suffering in Hell. But I thought to myself, ‘I suppose if I tell them now that I have the magic power to hear the screams of the suffering in Hell, then they will go quiet, and become sympathetic, and act as if that changes something.’ Even though it changes nothing. Who cares if you can hear the screams, as long as you know that they are there? So maybe what I said was not fully wrong. Maybe it is a magic power granted only to the Comet King. Not the power to hear the screams. But the power not to have to.”

3. (after Thamiel does something particularly cruel)

An hour and forty minutes later, Thamiel swaggered through the big spruce wood door with a gigantic grin on his tiny face. “Well!” he said, “It looks like we…”

The Comet King had his hands around the demon’s neck in an instant. “Listen,” he said. “I know the rules as well as you do. ███████. But as God is my witness, the next time we meet face to face I will speak a Name, and you and everything you have created will be excised from the universe forever, and if you say even a single unnecessary word right now I will make it hurt.”

The grin disappeared from the demon’s face.

“You can’t harm me,” said Thamiel. “I am a facet of God.”

“I will recarve God without that facet,” said the Comet King.

I’m sure you can think of many facets of the world which you believe should not be, stains on reality which will not be forgiven even when the last lit sun has faded.

By everything I see of the world, we sit at the hinge of history, closer to the most important event in the universe than all but a handful of beings who have ever been or will ever be. In the balance lies eternity: good, bad, or empty. The current horrors of the world are wailing their final screech, one way or another, as the autocatalytic feedback loop of technocapital spirals up towards posthumanity.

Consider a comet’s path in this world.

 

While I have observed on too many occasions that a human cannot hold purely the Comet King archetype sustainably, I do think there is something crucial and powerful worth integrating from it. Something that when brought together with patience for the human form, which nurtures and empowers your avatar in the world rather than burning through it, can be a blazing beacon of hope and transformation.

One of the Comet King’s most famous lines is “Somebody has to and no one else will.” That has long struck a mixed note with me, as I think humans need a flavour of heroic responsibility that remembers that you are not alone.

So, on this darkest night of the year, let us instead say:

“Somebody has to, and so we will.”

(cue Matches)


