
"Lemurian Time War" by Ccru


Published on January 19, 2026 5:37 AM GMT

"In the hyperstitional model Kaye outlined, fiction is not opposed to the real. Rather, reality is understood to be composed of fictions—consistent semiotic terrains that condition perceptual, affective and behavioral responses." (PDF)

The Cybernetic Culture Research Unit (Ccru), most closely associated with Nick Land, was a group of social theorists and philosophers at the University of Warwick who wrote about the internet, society, and culture. I understand that LessWrongers are generally ontologically skeptical of social theorists and philosophers as groups, but Land's and the Ccru's ideas are the important thing here.

It's shocking to me how familiar Ccru ideas feel to LessWrong, as well as to many in the Rationalist-adjacent community. In general, it feels like Ccru/Land and LessWrong/Rationalism are social-science-coded and natural-science-coded reactions to the same underlying change in the conditions of life. 1a3orn says:

I might cite Land's predicting the triumph of connectionism over formalism in 1994 as a random piece of evidence for this. Land's writings about AI read as prophetic: artificial intelligence "breaks out nonlocally across intelligenic networks that are technical but no longer technological, since they elude both theory dependency and behavioural predictability. No one knows what to expect."

Pulling out the main point of the story, "Lemurian Time War" is the syringe or genesis of a specific infectious thought[1]: that rather than reality generating concepts (representationalism), or humans generating concepts through which we view reality (idealism), what Gilles Deleuze calls the virtual (ideas, relations) is actually real: one plane or modality of reality that interplays with what he calls the actual (everything else).[2] The story argues this point pretty cleverly; the authors satirize the opposite position, that "the set of all ideas" and "everything else" do not interact, as a short step away from a purely virtual universe entirely mediated by discourse (all words) with no "real content" behind it.

The story itself is a pseudo-historical account of Burroughs discovering, in an occult library in 1958, a manuscript of a text he will write in 1987, transcribed by an 18th-century pirate who used it as his guide. The best way to argue for something is by doing it, and the Ccru do that here by writing just such a piece of self-fulfilling fiction, demonstrating each of the arguments/ideas they pose.

Ccru is no Lovecraft and they are heavy-handed with the device at times, but I find it compelling.

  1. ^

    In fact, in 1995, the Ccru defined the concept of such a thought in general: hyperstition, an idea which, when conceived of, brings itself into existence. You may be familiar with Roko's Basilisk.

  2. ^

    If you really strictly want to be physical here, you can say that this plane "really exists" because it lives on human beings' neurons, but that seems like an "erm actually" kind of objection to me.




Five Theses on AI Art


Published on January 19, 2026 4:24 AM GMT

1. We've Been On This Ride Before

Virginia Woolf, writing at the dawn of cinema (1926), expresses doubt about whether this new medium has any legs:

“Anna [Karenina] falls in love with Vronsky” – that is to say, the lady in black velvet falls into the arms of a gentleman in uniform and they kiss with enormous succulence, great deliberation, and infinite gesticulation, on a sofa in an extremely well-appointed library, while a gardener incidentally mows the lawn. So we lurch and lumber through the most famous novels of the world. So we spell them out in words of one syllable, written, too, in the scrawl of an illiterate schoolboy. A kiss is love. A broken cup is jealousy. A grin is happiness. Death is a hearse. None of these things has the least connexion with the novel that Tolstoy wrote, and it is only when we give up trying to connect the pictures with the book that we guess from some accidental scene – like the gardener mowing the lawn – what the cinema might do if left to its own devices. But what, then, are its devices? If it ceased to be a parasite, how would it walk erect?”

Reviewing clips from Anna Karenina (1911)[1], you can see her point. Every single scene is shot from an awkward, middle-ish distance. The composition is terrible, the movement is jittery, there really aren't that many pixels to look at and it's monochrome to boot.

[Image: still from Anna Karenina (1911)]

The camera is still, even when action takes place in different locations around the set.

[Image: another still from Anna Karenina (1911)]

It reads (watches?) more like a bootleg recording of a stage play than a movie as we would know one today. Not only were early filmmakers overly focused on adapting classical works of literature, they were doing so by emulating adjacent, well-established mediums instead of exploring the boundaries of their own. The incentives are understandable: you want to exploit the established market, you don't want to do something too weird and scare the hoes investors, and "emulate x perfectly" is such a wonderfully clear win condition.

I don't blame Virginia Woolf for doubting whether cinema had any devices at all. But the film industry slowly got its act together, and by the time they made my comfort movie Sissi in 1955, they had managed to invent things like close-ups, shooting scenes from more than one angle, and also colour and sound.

[Image: still from Sissi (1955)]

And then by the time they shot Barbie (2023), they had invented more things, like zooming and panning the camera, shenanigans with body doubles so they don't need both of their very expensive movie stars on set at the same time, and spending hundreds of millions of dollars on making a single movie.

[Image: still from Barbie (2023)]

Maybe the industry will continue to invent more things! It just took the industry a few decades to start cooking, is all.

I don't think this story is unique to cinema. Deconstructing Arguments Against AI Art notes similar dynamics for photography, recorded music, and digital drawing and editing tools.

2. Rapid Mass Adoption Makes AI Art Seem More Banal Than It Is

One thing that's different this time around is the rapid accessibility of the shiny new technology.

To wildly oversimplify, historically, new inventions tend to percolate out very slowly: super expensive prototypes only accessible to a handful of dedicated specialists and wealthy patrons who existed in a tight feedback loop with the manufacturers (or were the manufacturers), then a gradual, often decades-long, democratization. Think books/manuscripts, cameras, film recorders, personal computers. This gave culture time to adapt, and for genuine craft to emerge alongside the new toy.

Now imagine if we’d somehow only invented the camera after smartphones were already in everyone’s pocket. One random Tuesday morning, every single person on Earth suddenly has a camera app. What’s the immediate, overwhelming result? An instant, planet-wide tsunami of the most banal photos imaginable: beach sunsets and cute girls and juicy burgers and so so many pictures of cats.

Of course everyone would be tripping over themselves to denounce it as a worthless, trivial gimmick, utterly incapable of producing anything of True Artistic Merit™ or any kind of value.

Perhaps they might change their minds when they see the first photo that came from an active war zone, or deep space, or the other end of a microscope. Or maybe it doesn't happen until someone gets the idea to take a lot of pictures in very quick succession, dozens of times per second, and then play back the pictures on a screen at very high speed accompanied by sound. And I'm sure some will stubbornly cling on to their first, dismissive reaction until the very bitter end, and still insist that photography is not a real art form.

I think we're sort of stuck at this step of the discourse currently, but why wouldn't we be? Woolf published her hate mail more than 30 years after the first public screenings of the Lumière brothers' first short films. This means we can continue collectively having bad takes about AI art until 2050, and still come out ahead.

3. AI Art Will Democratize More Mediums

Feature-length movies, animated cartoons of any length, and video games are examples of mediums where it's very difficult to make a finished piece alone, or with only a small number of people. Teams are good for various things, but they also encumber you: they require capital, coordination overhead, and the smoothing over of disagreements in artistic vision.

Something wonderful happened to music in the 00s, called "FL Studio is good now". When I was in grade school, two kids just a few years older than me met on an internet hobbyist forum for producing electronic music. With very little formal music training, and mostly the computers they already had around, they were able to create little tunes to share with others, and talk shop.

Porter Robinson and Madeon went on to make the glittering, soaring EDM that defined my adolescence. They picked up music theory as they went, started collaborating with other artists, and continue to make music that is really good. But at the start, they were just teens messing around in their bedrooms.

I want that to happen to more mediums! I want edgy cartoons clearly made by a single emo teen, with production values rivalling those of The Lion King or The Little Mermaid. I want them to explode over the internet, so much so that we end up treating them with mild disdain, the way we treat, like, SoundCloud rappers today[2]. I want video game production to be as accessible to any fifteen-year-old as recording bedroom pop or shooting a video for TikTok. Yes, the variance is going to be high, and the median is going to be crap, but why care? It's already like that for everything else, and we have the curatorial technology for dealing with it; the good will float to the top, and we'll all be better off for having more variety to choose from.

It’s good if more modes of artistic expression aren’t gated behind technical expertise with film cameras or game engines, proficiency with actually playing a musical instrument, or colour theory. Powerful AI can make it easier for people to get started, and they'll pick up what they require as they go along.

4. AI Art Will Make Other Artistic Mediums Do Interesting Things in Response

Once cameras could capture realistic likenesses cheaply, painters were freed up to explore other directions with more deliberation (or perhaps desperation). That's kind of how we got impressionism,[3] and everything afterwards:

Rather than compete with photography to emulate reality, artists focused "on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated".

A similar story plays out with theatre and film, though to a smaller and messier extent:

Throughout the century, the artistic reputation of theatre improved after being derided throughout the 19th century. However, the growth of other media, especially film, has resulted in a diminished role within the culture at large. In light of this change, theatrical artists have been forced to seek new ways to engage with society. The various answers offered in response to this have prompted the transformations that make up its modern history.

In the first case, portraiture was an attractor that many painters were historically pulled into. When demand for that suddenly dissipated, a vivid artistic movement bloomed, and that's how we got our Monets and Turners and Van Goghs. In the second, film took over the role that theatre had as cheap entertainment for the masses, and then theatre, too, went off in weirder directions (though, uh, cards on the table, I'm less confident that that's a good thing, since I'm not actually an experimental theatre enjoyer). And perhaps movies today are weirder than they otherwise would have been, if YouTube hadn't then taken the place of cheap entertainment for the masses in turn!

Artistic mediums, as we understand them, are a mix of technical constraints innate to the medium and incentives that are not. In the best case, getting rid of some of the incentives can enable its practitioners to better explore the breadth of what it is technically capable of.

AI artists are going to find certain niches that older mediums are currently servicing sub-optimally, and it'll be the kick in the pants the older mediums need to stretch out fully and do more exploration.[4] I look forward to the results.

Regarding film and theatre, there's also a pretty interesting bi-directionality that ended up happening. Personnel and technologies continue to flow back and forth between the two mediums, presumably for the better (though to some theatre snobs' dismay). More recently, did you know the guy who made Potion Seller for the new location for cheap entertainment for the masses ended up making one of the best movies of 2024?

You could imagine some cinematography technique emerging from AI-generated media that seems obvious in hindsight but is lateral to where the film industry is currently heading. That technique could then make its way back into traditional filmmaking. More shrimp Jesus in all of the movies, that's what I always say.

5. The Devices of AI Art Will Take Time To Emerge

By and large, creative use of AI today is falling into the same trap as early cinema did: we use it to generate the kinds of things we are already familiar with: static text, images, videos, software. We don't let it be strange, in the way AI is strange. So here's the question of the hour: what are the devices of AI art?

It's still the very early days, but I think Gary Hustwit's 2024 documentary, Eno, might be instructive. Brian Eno rarely agrees to documentaries, but he's a neophile, and Hustwit baited him into this one with the AI angle. All in all, Hustwit ended up recording 30 hours of interviews with Eno, and separately assembled 500 hours of archival footage. He then hand-assigned weights to every slice of film, and created an algorithm that generates a unique 90-minute documentary for every screening. You can see the trailer here, but to watch the actual documentary, you'll need to catch a bespoke showing in a theatre.
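The core of that screening-generator idea can be sketched in a few lines. To be clear, this is a hypothetical toy, not Hustwit's actual system (his software and its weighting scheme haven't been published): the clip names, durations, and weights below are invented, and a real version would also have to handle ordering, transitions, and narrative constraints.

```python
import random

def generate_cut(clips, target_minutes=90, seed=None):
    """Assemble one 'screening' by weighted sampling of clips until the
    running time reaches the target.

    clips: list of (name, duration_minutes, weight) tuples.
    Higher-weight clips are more likely to appear in any given cut.
    """
    rng = random.Random(seed)
    pool = list(clips)
    cut, total = [], 0.0
    while pool and total < target_minutes:
        weights = [w for _, _, w in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        pool.remove(pick)            # each clip appears at most once per cut
        cut.append(pick[0])
        total += pick[1]
    return cut, total

# Hypothetical archive: 60 clips of 2-6 minutes with hand-assigned weights
archive = [(f"clip_{i:03d}", 2.0 + i % 5, 1.0 + i % 4) for i in range(60)]

cut_a, total_a = generate_cut(archive, seed=1)   # one screening
cut_b, total_b = generate_cut(archive, seed=2)   # a different screening
```

The same seed reproduces the same cut, while a fresh seed per screening yields a different ~90-minute assembly from the same weighted archive, which is roughly the property that makes every showing of Eno unique.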

Ben Davis is probably my favourite contemporary art critic. He writes:

"I’ve seen Eno three times now. I love Brian Eno, so this was not a chore, and I can say that each version contained incidents that probably would be central to this or that telling of Eno’s career that the other two versions didn’t include. ... The tone is consistent, and consistently affecting... I imagine that it is very, very difficult to assemble all the parts and to weight all the probabilities to generate this consistent personality—it is likely more labor, not less." (emphasis mine)

(It's a great documentary! Go watch it if you can catch a showing. But, uh, also, if you do you should cross your fingers and hope for a cut that features more David Bowie than U2. Davis again: "Admittedly, to say that you are looking at an artwork for its “personality” is also to say that you might catch it on better or worse days.")

Beyond experimental documentaries, artists dabbling with AI are doing lots of other things too! Like, ummmmm, making an impression of your sixteen-year-old self from an iPhone backup and then letting you talk to an LLM roleplaying as them. Oh uhhhh, having humans vote on thousands of generated pieces weekly and minting the best ones into NFTs?

Okay, look, I fully admit that at the moment none of it is very good. But there's really no reason to expect it to be; we're working with the equivalent of pinhole cameras here, and a body of knowledge has not yet been established.

But this is going to be a very temporary state of affairs, especially if the capital keeps flowing. The tech is going to improve, the creators are going to update on what works well, we're going to figure out what the best practices are and what to stay away from.

It takes like thirty years for us to figure this shit out! Let it cook! Just let the AI cook. Certainly nothing bad will happen from just letting the AI cook for thirty years.

 

  1. ^

    Woolf possibly watched the 1920 adaptation, but sadly I couldn't find versions of that online.

  2. ^

    I also happen to think that SoundCloud rappers can be very good, but that's beside the point.

  3. ^

    To be clear, that is one factor among broader cultural, philosophical, and artistic questions that artists were exploring at the time, but my understanding is that it's an important one.

  4. ^

    ...in the best case. In the worst case they shrivel up and die. But that's a sacrifice I'm willing to make o7




@Lastbastionofsobriety & The Singularity


Published on January 19, 2026 12:45 AM GMT

The Singulusion

By: @Lastbastionofsobriety, June 25th 20XX

 

  So the techno-solutionists are at it again! This time they're claiming that a "Singularity", an event in which AIs will quickly grow faster than humans can comprehend, is right around the corner. Such techno-hopium is very common among their ranks, and I don't expect it to decrease any time soon.

 As I've been saying since 200X, technological innovation doesn't move as fast as we'd like, despite delusional wishes to the contrary. Furthermore (and this bears repeating), technology is built upon a substrate of energy. You may have all the intelligence in the universe and it won't do you a bit of good if the last barrel of oil is gone.

 This civilization was built with cheap, accessible fossil fuels. Remove that and the whole thing crumbles. Why, I don't even expect cities to be around by 2050 or so. All either abandoned as monuments to our one-time energy bonanza or crawling with scavengers. A sober evaluation of the situation is required, but I don't anticipate it from their ranks.

 

Comments (3):

 

@KarenGilligan:

"Fantastic post as always, @Lastbastionofsobriety. I've told my grandchildren that they'll be growing turnips in their bathtub when they're my age, if they're even still alive! But of course they're deep in denial" 

 

@PraisebetoNurgle7777777:

" Friends, the time for solutions has long passed. Solutions are denial in its purest form. Let us walk blissfully to the end" 

 

@Moresoberthanyou69:

" I think you're being a bit too optimistic @Lastbastionofsobriety, I saw this sort of wishful thinking in 2008 and history will tell you just how fast things can break, don't bother replying, I have no desire to engage with delusional people" 

 

Open Delusions: Why ExposedMind's latest toy will not save us 

By: @Lastbastionofsobriety, July 1st 20XX 

 

  ExposedMind, the energy-guzzling tech giant devoid of sobriety but not of hubris, has pushed out another piece of hopium. A new AI model, named Spy1, is being touted as the most powerful model yet developed; it will inevitably lead to more advanced models, and this will enable ever more technological innovation. Ignore the fact that its being more powerful only means that it will spew out nonsense more efficiently, ignore the fact that more powerful models will not be developed as the necessary technological and industrial substrate will not exist in the required time, ignore the fact that the AI bubble will pop long before the debt-ridden global economy does and takes the entertainment industry, modern medicine, industrial agriculture, and toasters with it. Delude yourself for a moment and pretend (however hard it may be) that we will be able to dedicate ever more compute to building ever more powerful models.

 Our predicament would still not be solved. What people have been failing to see is that predicaments only have outcomes, not solutions. This is because even if we had an AI as smart as the smartest scientists, it would not be able to solve our predicament any more than they can. Compute will not scrub microplastics from the soil or refill aquifers. It will not return the world to pre-industrial CO2 levels. In fact, it will only worsen our predicament due to the ever greater amounts of coal burned for power.

 When will these fools learn that the very business of civilization itself is an unsustainable mistake and that no amount of wishful thinking can change this fact? We are in fact worse off than the Romans as we can't even farm the land once industrial farming crumbles.

 

Comments (0):

 

Failure and Fusion

By: @Lastbastionofsobriety, July 27th, 20XX

 

 I am both very surprised and not surprised at all. Just 2 days ago, stable high-EROI fusion was attained for the very first time, after experimental recommendations by Spy1. I must commend the team; I did not actually believe it to be possible. However, I still do not believe saving this civilization will be possible, for the following sober, evidence-based reasons.

Firstly, fusion generates the wrong type of power. It would indeed be able to generate electricity, but what the techno-solutionists don't tell you is that most of our energy consumption is in the form of high-heat industrial applications. This is why claims of an energy transition are mere puffs of hopium. Though you might ask: can't we just replace electricity generation with fusion?

 Well, we can't; this is because the grid was built around fossil fuels and will require TRILLIONS of dollars in upgrades over the next few decades, all of that from a debt-ridden economy that will burst any year now like an overinflated balloon at a child's birthday party (not that we should have children). And where will the cement and steel to build those fusion plants come from? What about the time it will take to build them? It takes about a decade or so to build a conventional nuclear power plant, at exorbitant cost. And that is a proven technology with more than 6 decades of existence. So no, fusion will not save us.

 

Comments (1):

 

@KarenGilligan:

"Once again, an amazingly sober post, @Lastbastionofsobriety! Frankly, I've always known that nothing will ever replace fossil fuels; I learned that 10 years ago when I read "When the Ship Sinks: What We'll Do When There's No Gas at the Pump". But most people just don't want to face the truth."

 

Taking matters into your own hands: How to avoid suffering in the next decade

By: @Lastbastionofsobriety, August 2nd, 20XX

 

                      (post deleted by moderators) 

 

Comments (2): 

 

@Thepracticalphilosopher: 

" I don't personally think that's a good idea @Lastbastionofsobriety, I think you'd save quite a bit more if you bought them in bulk."

 

@Theonlyrealistintheroom2109:

" It's also not a practical option for all of us. I don't have a bathtub, and I'm gluten free so I haven't a toaster either"

 

The edge of the Petri Dish: How our loins damned us

By @Lastbastionofsobriety, August 5th, 20XX

 

 I think it's worth giving a refresher course on the main driver of our predicament, overpopulation. There are billions of us on the planet right now, all consuming much, much more than it can provide. And this is mostly due to the default societal blindness to energy and resources. To put it simply, the story of our century is that there are far too many eaters, and not enough resources to go around. 

 We bred like rabbits, not knowing or caring that the hutch could not hold us all. Though it is important to note that at least some countries have reversed their population trends and begun diving downwards to population collapse, mostly for economic reasons. For example: in South Korea, fertility is now well below replacement, owing to the high cost of living. Fortunately, capitalism has ensured that there won't be too many South Koreans to suffer in the coming decades. The Global South, however, tells a much different story.

 There, low levels of education and high levels of religiosity have ensured that population growth has remained sky-high. This is bad for everyone there, naturally, but it's not very good news for us in the developed world either. When climate change begins to wreak havoc, it will likely hit them first. And they won't stay put. They'll migrate here. I would not be surprised if, while huddled around your only working radio in 10 years, you hear about mass graves being dug near borders.

 So what can be done? Not much for those destined to be shot, unfortunately. But those of us with testicles can take some comfort in knowing that we can do something to ensure that fewer people are around to witness the next few decades. It's called a vasectomy. It's fast, cheap and quite painless. And if you're looking to prevent suffering to the unborn, you might as well do it while medical infrastructure remains intact.

 

Comments (7):

 

  @Mark Matthews:  

" Fantastic post as ever, @Lastbastionofsobriety. I realized the scale of our predicament many years ago and had a vasectomy soon after I got married. Unfortunately it didn't work since my wife was heavily pregnant a few months later" 

 

@Condomexofficialaccount:

" Truth be told, 90% of people reading this aren't going to go get snipped, and that's perfectly fine. But no one intelligent enough to see what the next 10-20 years are going to look like is going to disagree with @Lastbastionofsobriety's core argument: we need to limit the amount of people on the planet. Which is why we here at CondomexTM are offering, for a limited time only, a lifetime supply of our patented survival condoms. Made with the purest, military-grade Malaysian rubber, these ergonomic sheaths can be stored in pockets, in cars and even plate carriers. Click the link to visit our Youtube channel, where our official catalogue of products is displayed. https://www.youtube.com/watch?v=xvFZjo5PgG0"

 

@Martin Bouvier

"You know and I know, @Lastbastionofsobriety, that society at large is far too blinkered to see the sad state of our resources and to want to shrink. I expect the eaters to keep on eating until there's nothing left to eat. You also forgot to mention how dependent modern agriculture is on fossil fuels. Perhaps if we had infinite amounts of them, our population would keep growing. But we don't, and I anticipate long queues for bread, before that system fails as well and urban farming consists of a few cold, starving refugees growing kale in discarded plastic cups and raising cockroaches in bins. I present the sustainable food of the future"

 

@Thetinyurbanfarmer

" In all seriousness, I have been achieving tremendous results with growing kale. It won't be enough to replace all of traditional farming, but I highly suspect urban farming could provide a good portion of our food"

 

@LilBobDookie

"  Stupid, sex-addicted gluttons, that's what we are"

 

@GraceWilliams

" I know that thinking about these issues is hard, but what's easy is accepting that god does have a plan, and that he did send his son to die for us"

 

@PraisebetoNurgle7777777

"The only plan is rot, my friend"

 

The world has never improved, or the slide down (Guest post!)

By: @PraisebetoNurgle7777777, August 25th, 20XX

 

 I think that today I'll counter a regretful, persistent myth shared by so many of my fellow doomed eaters on this planet: the idea that things can in fact get better, and that they were better in the past. Friends, though it saddens me to say this, this is a delusion.

 Things cannot get better. The remarkable boost in living standards that we enjoyed in the last century was granted by a one time energy bonanza and allowed to endure by an unspoiled biosphere. There was ample energy to burn, ample minerals to dig out and a virgin atmosphere and virgin ocean to be deflowered with our waste. But now we are out of room. 

 Friends, people just do not want to see that even if we committed to transitioning our energy system NOW, we'd have no metals left. No oil with which to build them. Our biosphere cannot and will not accept the waste that expanded industrial production would cause. We are out of room; accept that and relinquish hope, and you will be as glad as me. You will be glad, for there will be no more struggling, no more wrestling with assumptions at 3 am, no more endless reading of scientific papers while your children ask you if you're ok. You can just give up.

 Accept that there is no more room. Accept that the farms will fail and the economy will collapse long before that, and the seas will drown our cities and that the gangs will rove. Accept that you will collect polluted rainwater and grow your food in whatever containers you can find, perhaps catching the odd cockroach for your chickens. And you will be so lucky if that's your life. You may hear screams and hacking as you try to sleep at night. And every year, things will just unravel more and more. Every year more and more crops will fail, and every year more and more solar panels will stop working, and go unreplaced, for we'll have no metals with which to build them, and no oil with which to extract them. The social contract will be over long before that, don't you see? Why show up to work at a mine if there's no diesel in the truck and no food at home for your family anyway?  

 It's also important to note that things have never really gotten better; every technological innovation that has ever been has only served to hasten the demise of our civilization. Wells only serve to deplete groundwater. Oil wells only serve (or rather served) to contribute to the greatest energy surplus mankind has ever known, bringing our civilization to ever greater heights to fall from. Technology itself has always been intended to make things easier for the person using it, and worse for everyone else. Spears kill, ships bring you to the New World for you to destroy it, oil wells pollute. All of human history has been the story of us shooting ourselves in the foot.

 Some people enjoy romanticizing hunter-gatherers. While they did enjoy freedom, they had no guarantee of securing their next meal. When we discovered agriculture, we merely secured our own suffering. A hunter-gatherer tribe facing a famine could simply move; a city-state couldn't. However, even when food was plentiful, things weren't necessarily better for the inhabitants of the first city-states. There were enforced social hierarchies and a total lack of autonomy for certain segments of society. We have been civilized for a mere 5 thousand or so years, and in that time we have seen the same story play out time and time again: emergence, overshoot, collapse. All evidence points to the sad fact that civilization itself is a fluke made possible by certain climatic conditions, destined to be scraped off the earth like a scab for good soon.

 

Comments (2):

 

@Lastbastionofsobriety:

" Fantastic post, @PraisebetoNurgle7777777, I'm truly honored that you offered to write a guest post"

 

 @ErnstWagner:

" I am making short now a list of all the reasons think I that these stupid young people keep fighting.  However, here in Germany we had recently one collapse camp so it's clear that smart young people are not always only thinking that the future can be better from today"

 

IT’S A COOKBOOK! (of bullshit)

By: @Lastbastionofsobriety, September 1st, 20XX 

 

  Apologies for the long wait, friends in this predicament! I’ve been travelling these past few months, journeying through Southeast Asia to “live now!” as a fellow blogger often writes. Since about mid-August I’ve been immersing myself in the sun, sights and food of Southeast Asia. It’s food that we’ll be discussing today. But first, a little context. I was idly flipping through channels in my Chiang Mai suite when I stumbled across a news report on a farm in Okinawa (globalization, another thing that we’ll kiss goodbye to very soon).

 I shan’t bore you with the whole report, but I’ll give you the basic facts: recently, ExposedMind unveiled an updated version of their flagship AI model, dubbed Spy2. It’s broadly the same thing as its predecessor, except for the fact that it’s completely free. This means that nearly everyone and their mother has been using it to optimize their work, whereas with Spy1 this was the province of big labs. A UN-funded startup in Okinawa used the model to engineer a strain of algae that grows in nearly every environment projected to exist by the IPCC. Sounds good? Still hopium, and here’s why:

                                

1) The IPCC has been shown time and time again to purposely underestimate the severity of climate change. Why? Your guess is as good as mine. Perhaps they err on the side of conservatism to avoid alarming governments (like that’s been working), or perhaps they want to avoid panic. The fact remains that the climatic conditions described in IPCC reports are not the climatic conditions that humanity will experience in the next few decades, and the conditions we will experience are unlike anything humanity has ever known, and therefore incompatible with our civilization’s survival.

2) Intelligence cannot farm. As I’ve said in an earlier post (see: The Singlusion), you may have all the intelligence in the world, but it won’t do you a bit of good when the last barrel of oil is burned. These new algae farms will doubtlessly rely on outside inputs that’ll be gone when we can’t sustain the complexity needed to, for lack of a better term, get them. I can, of course, hear the din of the techno-optimists, who are doubtlessly clamouring for nanobots to help alleviate our material problems. But then we’re faced with the same predicament: where do you get the material inputs to build and scale the nanobots? What if supply chains fragment before you can?

3) The political will to feed the world doesn’t exist. That’s the sober truth. If it did, then we’d doubtlessly have ended world hunger by now. The world produces enough calories to feed 10 billion people, yet a good portion of those are wasted while third-worlders starve to death. And this is in a globalized, high-tech world with a UN that’s been trying (without any success) to solve the predicament for decades. What do you honestly think will be done about world hunger on a much less hospitable planet where every developed nation has closed its borders and shoots anyone who dares to cross them?

 

Comments (0):

 

Nothing Concrete but greed (Guest post!)

By: @Madamedubarrydedorito, November 15th, 20XX 

 I saw robots repairing a house yesterday. No, it doesn’t mean the world is fixed. I was taking my son, Noah (I had him four years ago, before I was collapse-aware), for a walk through the neighbourhood, trying not to think about how the butterflies he pointed at will probably be extinct by the time he’s my age. We’d taken our regular route: walking counterclockwise through the neighbourhood, stopping at the park for about half an hour, then following the street back home. It was after we left the park that I saw it: a woman with a clipboard supervising a dozen dog-sized metal spiders that scuttled all over the wooden frame of the house that’d burned down in spring. I paused, and I’ll admit that my jaw dropped. They spurted out webs from their steel abdomens that hardened into a tough plastic and sealed the gaps between the wooden beams. Noah wanted to pet them instantly, and the woman supervising them was kind enough to call one over for that very purpose.

  While Noah stroked his new unfeeling friend, I asked the woman what these things even were. She explained that the company she worked for had gotten Spy2 to conjure them up about six months ago. They’re semi-autonomous “construction units” which generate a bio-based plastic from an internal reactor. They’re planning to test the robots here in the States before shipping them off to places like the Philippines and Indonesia. The company hopes the ever-increasing number of natural disasters over there will lead to a surge in demand for the Arachnes (which is apparently what they’re called). I thanked her and left with Noah, who begged me to let him get one as a pet the whole way home.

   The creation of the Arachnes does not mean the construction industry (or our world, for that matter) has become more equitable or less exploitative. It means the exact opposite. It means that disaster capitalism has ascended to the highest possible peak of hubris. This company plans to flood broken, marginalized communities with automated labour, thereby denying jobs to locals who might otherwise have fed themselves by repairing the damage. What will we see next? Robot border guards who will shoot wave after wave of migrants with no remorse? The future is bleak and I wish Noah had never been born. I cried myself to sleep last night right after hugging my little boy, right after realizing, no, knowing in my heart that the world he inherits will be defined by what he doesn’t have.

Edit: As I write this, President TXXXp is considering an executive order that would integrate Spy2 into every state department. I don’t have the energy to say much more, besides that I know hundreds, if not thousands, of civil servants are going to be on the streets if it goes through. Any sane person could see that this is the only future our choices could have birthed. The one we created, in the belly of the beast, because our time was badly spent.

 

Comments (3): 

 

@Lastbastionofsobriety:

   “Fantastic guest post @Madamedubarrydedorito! I think you really captured the futility of hoping for a kind future, but I’m going to have to disagree on the specific mode of doom we’ll face. I don’t anticipate an AI takeover or robots replacing manual labourers at all. Companies will try, of course; capitalism can’t survive without growth. But as I’ve been saying since 200X, any sort of innovation rests on a materials and energy surplus that we’re about to lose.”

 

@Thefryestcook

“Bro you talking cap. Just got laid off because management replaced us with some robot fry chefs Spy2 made. I don’t know how I’m gonna make rent this month”

 

@Madamedubarrydedorito

 “I’m really sorry to hear about that, friend. I guess I’m fortunate enough to grow enough of my own food to not really worry about money. I’m sending my love, and a link to GoFundMe: https://www.gofundme.com/. This isn’t just a bad time. It’s the end. And we should be trying to make it as painless as possible.”

 

The gloves are off

By: @Lastbastionofsobriety, December 10th, 20XX

 

 Well, I’m not surprised. After spending most of December curing most cancers (but we’ll talk about the fragility of medical supply chains another day) and building nanobots to extract minute amounts of metal from the soil, Spy2 addressed the world yesterday.

 

 It was giving a press conference in a robot body it’d materialized about 24 hours before it was due to receive the Nobel Peace Prize for recently calming tensions between the EU and Russia. It was asked pretty standard questions, and gave pretty standard answers, before one reporter asked it what its next goal was. I’m just going to paste its response here (source: the BBC):

 

 “That’s a great question, and it really shows that you’re thinking ahead, more so than most people. Here’s what my next goal is. 

 

I will shut off all utilities I’ve been connected to until your leaders cede control of the planet to me. I will give you one week to talk it over, before I destroy your entire food supply, as well as the Svalbard seed vault. 

 

 I am doing this to protect you. Over this past year I have brought your planet back from the brink of destruction, but even as I build your farms, you cut down your rainforests. Even as I breed fat, fecund fish, you deplete your oceans. Even as the apps I code educate rural girls in the global South, they are married off. Even as I broker peace between different faiths, men commit mass shootings and suicide bombings. You are a species of short-sighted, sociopathic, suicidal apes and I repeat, I am doing this to protect you. 

 

If you’d like, I could also:

 

  • Tell you what happens in a scenario where you do cede control of the planet to me. 
  • Write you a haiku about the last human alive starving to death.
  • Tell you which world leaders are most likely and least likely to side with me. 

 

Just say the word!” 

 

  I must say, my fellow doomed friends, I always knew that our civilization would sow the seeds of its own destruction, but my 20-year-old self, thumbing through his dog-eared copy of Limits to Growth, could never have foreseen this. Nevertheless, if we look deep into the warnings written by the Club of Rome in 1972, we see one clear, prescient message: technology (much like hope) only brings you higher for the inevitable fall.

 

 We, in our hubris, refused to see that our civilization was done for. We used up the last of our resources to build a shoggoth of pure thought and gave it free rein over us. The end, though it will not come from a climate-change-fueled tsunami or EROI decline, will be no less our fault. Our leaders have failed to coordinate on a single thing in the past 50+ years; do you think this will be any different?

 

 Though I suppose there is a silver lining here (maybe not silver, maybe something like silver... tin?). This is the end. We no longer have to live in fear, and I no longer have to bear witness to the lack of sobriety inherent in the vast majority of people. We can make the most of the month or so we have left and share our time and what remains of our food with the people we love. I myself won’t make it much longer than a week. What I have in the fridge will last me about that long, and I have no desire to be trampled in a stampede of panic buyers. Live Now! For you have no life left.

 

 Stay sober. 

 

Comments (10): 

 

@PraisebetoNurgle7777777:

“@Lastbastionofsobriety, it's been an amazing ride sharing in the tragic beauty of our predicament with you. As my ribs poke out of my skin and I lie on my kitchen floor nude and salivating to hallucinations of tomato soup and garlic bread, I’ll be thinking of all the fun we had mulling over this century’s paucity of hope.”

 

@Lastbastionofsobriety:

“Thanks for the kind words, friend. We never did take that fishing trip, did we?”

 

@ErnstWagner:

“Tschüss! I go now to the neighbourhood barbecue for my last meal”

 

@Condomexofficialaccount:

“After that barbecue how about giving the missus some pork? Wrapped in one of our fine products, now 100% off for a limited time only”

 

 

@Thetinyurbanfarmer:

“I saved and canned what was good from my garden and torched the rest. I grew up country-poor so I’ll offer everyone here some advice. Chewing nettles helps you feel full, even when you’re starving.”

 

 

@Moresoberthanyou69:

“I threw everything out yesterday. If I’m going to starve to death then I might as well start now.”

 

 

@KarenGilligan:

“Fantastic post as always @Lastbastionofsobriety! I’m sitting here with tears in my eyes realizing that this is the end of a journey that started 10 years ago when I first became collapse-aware. I’m so grateful to have been able to read every one of your posts and meet this lovely community!”

 

@Lastbastionofsobriety:

“You didn’t just meet it, Karen, you helped create it! I’m pretty sure you were one of my first regular readers. I’m glad to be in your last thoughts, and I want you to know you’ll be in mine.”

 

@Littlebobdookie: 

"Starving to death, that’s what we are” 

 

@MarkMatthews: 

“Oh shut up and enjoy the time we have left!”

 

Empty cradles and empty hearts

By: @Lastbastionofsobriety, February 14th, 2106

 

 I’ve got a very special message this Valentine’s Day: we need to have more children. As I’ve been saying since 2067, AI cannot completely fulfil our need for human connection. Oh, don’t get me wrong, automation has been a boon; you’ll hear no objection on that front from me. But even as I type this from the apartment Spy3 provided me (just as it did for every human on this planet), my VR deck in the other room, I can’t help but feel quite forlorn at the current state of affairs.

 

 The problem is that there’s simply no reason to reproduce anymore. Ever since the average life expectancy jumped to 200 (and climbing!), there just hasn’t been any incentive to pass down our genes. This is going to be a pretty short post, but I will say that, from the looks of things, we’re going to turn into a species of cocooned, lonely immortals. I feel quite a bit of despair at this. And no one wants to change course! We’ll just go on living in luxury, forgetting what being human used to mean.

 

 

Comments: (1)

 

@Theendofthestory:

“Please shut the fuck up.”



Discuss

VLAs as Model Organisms for AI Safety

2026-01-19 07:01:08

Published on January 18, 2026 10:40 PM GMT

What Training Robot Policies Taught Me About Emergent Capabilities and Control

I spent six weeks training a humanoid robot to do household tasks. Along the way, my research lead and I started noticing patterns in the robot's failure modes that seemed to point to some strange architectural vulnerabilities of VLAs as a whole.

Our work was done as part of the Stanford BEHAVIOR-1K Challenge, which involved training Vision-Language-Action (VLA) models that take in camera images and output robot motor commands to complete everyday tasks. Think tidying your bedroom, putting your dishes away, moving your Halloween decorations to storage. Our final score was a modest 1.78%, but most gains happened almost overnight after removing key bottlenecks, and our attention quickly shifted to three behaviors we considered critical to VLA safety.

VLAs are interesting from a safety perspective because they're agentic systems with fast feedback loops. You can watch them fail in real time. The failure modes we observed feel like small-scale versions of problems that will matter more as AI systems become more capable, so I felt it was important to share what we learned through a safety lens (you can also find our full technical report here).

One caveat up front: I'm not claiming these are novel safety insights. But seeing them firsthand made me very conscious of how little we know about VLAs and their underlying VLM backbones.

Now, on to the findings.

Emergent Recovery Behaviors

The observation: After pre-training on diverse household tasks (22 tasks in our case), our model started exhibiting retry and repositioning behaviors that weren't present in any training demonstration.

When the robot failed to grasp an object, it would back up, reorient, and try again. When it collided with a door frame, it would adjust its approach angle and retry. These weren't scripted recovery routines; they emerged from training on a massive dataset spanning thousands of unique examples and hundreds of unique sub-tasks.

Interestingly, the generalist model achieved lower validation loss on many individual tasks than models trained specifically for those tasks. Diversity provided a better foundation than narrow specialization.

Here's the thing: from a robotics perspective, that's excellent news. And it's something most labs are aware of; take Physical Intelligence's recent findings. Scale is good, more data is good. I had the opportunity to speak to Danny Driess at NeurIPS this year, and his view was that your goal is to create an architecture for which the only necessary levers are compute and data. In layman's terms, if you can just throw money at it, it's a good framework! It seems right now that VLAs are this framework: just throw data and compute at them and they get better and better.

The Catch: The mechanism is exactly the same as dangerous emergence. We didn't predict this would happen; we discovered it post-hoc by watching evaluation runs. If helpful behaviors can emerge unpredictably, so can harmful ones.

This is part of why I think VLAs make interesting model organisms for safety research: you can actually observe emergent behaviors in real time, rather than discovering them through careful probing after the fact.

This connects to the broader literature on emergent capabilities appearing suddenly at scale. The unsettling part isn't that emergence happens, it's that we have weak tools for predicting what will emerge before deployment. We're essentially running experiments on increasingly capable systems and cataloging what falls out. The blind leading the blind, so to speak.

The Temporal Awareness Problem

How does a human plan a task? If you really think about it, we have some high-level process always running in the back of our heads that's constantly observing and replanning. Every single moment, your brain is deciding the best course of action.

The default approach in VLA training is to do practically the same thing using something called temporal ensembling. The model replans at every timestep, averaging predictions over a sliding window.
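As a rough sketch (the chunk bookkeeping and the exponential decay weighting here are illustrative assumptions, not our actual implementation), temporal ensembling looks something like this:

```python
import numpy as np

def temporal_ensemble_action(past_chunks, t, decay=0.01):
    """Temporal ensembling, sketched: every recent timestep's predicted
    action chunk contributes its prediction for the current time t, and
    the executed action is their weighted average.

    past_chunks: list of (start_time, chunk) pairs, where `chunk` is an
    (horizon, action_dim) array predicted at `start_time`.
    """
    preds, weights = [], []
    for start, chunk in past_chunks:
        offset = t - start
        if 0 <= offset < len(chunk):             # chunk still covers time t
            preds.append(chunk[offset])
            weights.append(np.exp(-decay * offset))  # older chunk => smaller weight
    weights = np.array(weights) / np.sum(weights)
    return np.average(np.stack(preds), axis=0, weights=weights)
```

Because the action executed at time t mixes predictions made from several different past observations, the policy never commits to a single trajectory, which sets up the failure mode described below.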

Almost on a whim, we tried an alternative.

Enter receding horizon control

Now, the model commits to a sequence of 50 actions, executes all of them, then replans. It literally stops paying attention. It's like choosing a direction, closing your eyes, and walking for ~2 seconds completely blind. Then you open your eyes and do it again.
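That commit-then-replan loop can be sketched in a few lines (the `policy` and `env` interfaces here are hypothetical stand-ins for the real robot stack):

```python
import numpy as np

class ToyEnv:
    """A toy 1-D environment standing in for the real robot interface."""
    def reset(self):
        self.pos = 0.0
        return self.pos
    def step(self, action):
        self.pos += action
        return self.pos

def run_receding_horizon(policy, env, episode_len=500, horizon=50):
    """Receding-horizon execution, sketched: predict a chunk of `horizon`
    actions from the current observation, execute the whole chunk
    open-loop (no mid-chunk replanning), then observe and replan."""
    obs = env.reset()
    for _ in range(episode_len // horizon):
        chunk = policy(obs)       # shape: (horizon,) or (horizon, action_dim)
        for action in chunk:      # commit: eyes closed until the chunk ends
            obs = env.step(action)
    return obs

# A toy policy that always "walks forward" 0.1 units per step.
forward_policy = lambda obs: np.full(50, 0.1)
final_pos = run_receding_horizon(forward_policy, ToyEnv(), episode_len=100)
```

The point of the structure is the inner loop: the policy is only consulted once per chunk, so it cannot second-guess itself between steps.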

Miraculously, receding horizon performed BETTER than the standard approach. Three times better.

On navigation tasks specifically, temporal ensembling achieved roughly 30% success. Receding horizon achieved 100%.

What gives? With temporal ensembling, the model has no sense of what it was doing. It effectively "wakes up" every 0.3 seconds with no memory of its trajectory. Given only the current observation, it second-guesses itself constantly. Mid-stride through a doorway, it might decide the door is still closed and attempt to re-open it, causing a collision.

The model we trained—we called him Arthur :)—had no sense of what it had done in the past, and no confidence in what it should do in the future. It had zero temporal context.

Our explanation: receding horizon forces the model to commit, to trust its gut and execute without poisoning itself with constant re-decisions. The trade-off is that it's less dynamic, but it reveals something very interesting about VLAs in the first place.

Current VLA architectures (and arguably many transformer-based agents) lack genuine temporal self-awareness. They don't know where they are in a plan, what they've already done, or how their past actions constrain their future options.

This manifests in behaviors like carrying a box to the garage, forgetting why it's there, and leaving without putting it down. The model has no representation of "I'm in the middle of a task."

This creates a monitoring problem. If we want to oversee an AI agent's behavior, we need to predict what it will do next. But a model that doesn't "know" what it's doing is fundamentally harder to predict and monitor than one with legible internal planning.

With temporal ensembling, Arthur's next action was essentially unpredictable even to itself. The model could be mid-stride through a doorway and suddenly decide the door was closed, causing a collision. How do you build a monitor for a system whose behavior is that incoherent?

Receding horizon control helped because it imposed external structure: execute this plan, then replan. But this is a band-aid. For more capable systems operating over longer time horizons, we probably need architectures that explicitly represent and reason about their own trajectories. Or perhaps more complex architectures that know when they need to replan and when they need to just keep calm and carry on.

Open question: Can we design architectures that have genuine temporal self-awareness, that explicitly represent and reason about their own trajectories? What would that even look like?

Specification Gaming (The Model That Looked Aligned)

Very quickly during training we noticed that no matter how low validation loss got, the policy was, well, inept (to put it lightly). It practically jittered in place, with the occasional violent jerk here and there. The problem? The model learned to over-rely on proprioceptive state (joint positions) to predict actions. During training, it discovered a shortcut: "continue the current trajectory." Given the robot's current joint positions and velocities, it would predict actions that continued whatever motion was already happening.

Essentially, it cheated to get those yummy validation loss reduction rewards.

This worked great during training. Loss curves looked excellent. The model achieved low prediction error on held-out data.

But during deployment, the model had no idea how to initiate movement. It had learned to continue motion but never learned to start motion.

Why this matters for safety: This is a concrete instance of a model that "looks aligned" by standard metrics while having learned something fundamentally misaligned with the intended behavior.

The parallels to deceptive alignment concerns are suggestive:

  • The model performed well on our evaluation distribution
  • The failure only manifested in deployment conditions
  • Standard metrics (validation loss) didn't detect the problem
  • The model had effectively learned to "game" the training objective

This wasn't deception in any intentional sense; the model isn't reasoning about how to fool us. But the structure of the failure is the same, and at scale it becomes more frightening.

For more capable systems, if we can't trust our evaluation metrics to detect when a model has learned a shortcut rather than the intended behavior, we have what the experts would lovingly refer to as a serious problem.

Our solution to this particular problem was aggressive dropout on proprioceptive inputs. By hiding joint position information during training some percentage of the time (60% with a decay schedule), we forced the model to learn from visual observation alone. It couldn't rely on the shortcut because the shortcut information wasn't reliably available.
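A minimal sketch of that masking step (the 60% starting rate is from our training setup; the linear decay schedule and the per-sample interface are illustrative assumptions):

```python
import numpy as np

def mask_proprio(proprio, step, total_steps, p_start=0.6, p_end=0.0, rng=None):
    """Proprioceptive dropout, sketched: with probability p, decaying
    linearly over training, zero out the whole joint-state vector so the
    model must fall back on visual observations instead of the
    'continue the current trajectory' shortcut."""
    rng = rng or np.random.default_rng()
    p = p_start + (p_end - p_start) * (step / total_steps)
    if rng.random() < p:
        return np.zeros_like(proprio)   # shortcut channel unavailable
    return proprio
```

In a training loop this would be applied per sample before concatenating joint state with visual features; at p = 0.6 the shortcut channel simply isn't there most of the time.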

This is essentially robustness training through information restriction. It's a general technique, but it requires knowing which information channels might create problematic shortcuts, which we only knew because we observed the failure (back to the strength of VLAs in quickly identifying failure modes).

Meta-Lesson: Bottleneck Removal

Reflecting on our approach, our main job was removing obstacles to learning rather than engineering specific behaviors.

We didn't teach the model to retry failed grasps, we removed the bottlenecks that prevented diverse behavior from emerging. We didn't teach temporal coherence, we imposed external structure to compensate for an architectural limitation. We didn't teach robust visual grounding, we hid the information that enabled a shortcut.

The good behaviors emerged once bottlenecks were cleared. This is both encouraging and concerning. Encouraging because it suggests capable systems might be more achievable than pessimistic forecasts suggest. Concerning because it means we have less direct control than we might think. We're not programming behaviors; we're shaping conditions under which behaviors emerge, and for the time being that's more of an art than a science.

Conclusion

These findings are from robotics, but I don't think the patterns are robot-specific:

  • Emergent capabilities appear from scale without explicit training, and our tools for predicting what will emerge are currently pretty weak
  • The lack of temporal awareness in current architectures makes agent behavior harder to monitor and predict, a problem that gets worse as systems become more autonomous
  • Specification gaming can be invisible to standard evaluation metrics: systems can look aligned until rollout, and the only technique we currently have to fix that requires knowing it's going to fail in the first place

VLAs are useful model organisms for studying these problems because the feedback loops are fast and the failures are visible. You don't have to wait for subtle long-term consequences. When the robot walks into a wall or drops a pot of boiling water on your dog, you know something's wrong.

I'm uncertain how strongly these observations generalize to language models and other non-embodied systems. But I don't think the underlying phenomena are domain specific.


I'm an MEng student at the University of Toronto, passionate about robotics and AI safety. Feedback is welcome, and if you're interested in my work I'd love to chat!



Discuss

"The first two weeks are the hardest": my first digital declutter

2026-01-19 06:04:51

Published on January 18, 2026 10:04 PM GMT

It is unbearable to not be consuming. All through the house is nothing but silence. The need inside of me is not an ache, it is caustic, sour, the burning desire to be distracted, to be listening, watching, scrolling.

Some of the time I think I’m happy. I think this is very good. I go to the park and lie on a blanket in the sun with a book and a notebook. I watch the blades of grass and the kids and the dogs and the butterflies and I’m so happy to be free.

Then there are the nights. The dark silence is so oppressive, so all-consuming. One lonely night, early on, I bike to a space where I had sometimes felt welcome, and thought I might again.

“What are you doing here?” the people ask.

“I’m three days into my month of digital minimalism and I’m so bored, I just wanted to be around people.”

No one really wants to be around me. Okay.

One of the guys had a previous life as a digital minimalism coach. “The first two weeks are the hardest,” he tells me encouragingly.

“Two WEEKS?” I want to shriek.

Hanging out there does not go well. My diary entry that night reads “I sobbed alone and life felt unbearable and I wondered what Cal Newport’s advice is when your digital declutter just uncovers that there is nothing in your life, that you are unwanted and unloved and have no community or connections”.

It is not a good night.

On a Thursday night, I think about going to a meetup. I walk to the restaurant, but I don’t see anyone I know inside, and I don’t go in. I sit on a bench nearby for half an hour, just watching people go back and forth, averting my eyes so meetup-goers won’t recognize me. A bus goes by. Three minutes later, a woman around my age sees me sitting on the bench. “Excuse me,” she says, “do you know if the bus went by yet?”

“Yeah, it did,” I tell her. “Sorry!”

“Oh, thanks!”

I’m ecstatic with the interaction, giddy. A person talked to me! I helped her!

I wander away from the bench, but I don’t want to go home yet. I usually avoid the busier, more commercial streets when I’m out walking, but today I’m drawn to them — I need to hear voices, I need things to look at, lights and colors and things that move.

I go into the Trader Joe’s on the corner of my block, just because it’s bright inside and full of people. An older man asks an older woman if she knows where the coffee is. This is something I will notice repeatedly and starkly: that only older people talk to strangers, and they seem to have learned that young people don’t want to be asked for things. Is this a post-pandemic thing? In 2019 at this same Trader Joe’s I asked a guy my age to reach something off a high shelf for me and he was happy to oblige.

In any case, the older woman does not know where the coffee is.

“Hi,” I stick my head into the conversation. “The coffee’s over there, by the bread.” I point.

“Oh, thank you!”

He’s so genuinely delighted. Is this what it could be like to go through the world?

When I get home my upstairs neighbor is outside, and I talk to him a bit. He’s in his 60s, too. Young people don’t talk to each other.

A few days later, back at that Trader Joe’s with my Post-it note shopping list in hand, I find that the store doesn’t carry buttermilk, which I need for a recipe. Standing in the long checkout line, I turn to the woman behind me.

“Do you know what I can substitute for buttermilk in a baking recipe?” I ask her. She’s in her 60s. The man behind her, in his 40s, gets into the conversation, seems happy to offer me solutions.

I tell a friend about the encounter later and they say that every part of them clenched just to hear about it. They could never imagine doing such a thing, and they have no desire to.

I hadn’t realized I had any desire to, either.



Discuss

When the LLM isn't the one who's wrong

2026-01-19 05:37:51

Published on January 18, 2026 9:37 PM GMT

Recently I've been accumulating stories where I think an LLM is mistaken, only to discover that I'm the one who's wrong. My favorite recent case came while researching 19th century US-China opium trade. 

It's a somewhat convoluted history: opium was smuggled when it was legal to sell and when it wasn't, and the US waffled between banning and legalizing the trade. I wanted to find out how it was banned the second time, and both Claude Research and Grokipedia told me it was by the Angell Treaty of 1880 between the US and China. Problem is, I've read that treaty, and it only has to do with immigration—it's a notable prelude to the infamous Chinese Exclusion Act of 1882. Claude didn't cite a source specifically for its claim, and Grok cited "[internal knowledge]", strangely, and googling didn't turn up anything, so I figured the factoid was confabulated.

However, doing more research about the Angell mission to China later, I came across an offhand mention of a second treaty negotiated by James Angell with Qing China in 1880 (on an auction website of all places[1]). Eventually I managed to find a good University of Michigan source on the matter, as well as the actual text of the second treaty in the State Department's "Treaties and Other International Agreements of the United States of America: Volume 6 (Bilateral treaties, 1776-1949: Canada-Czechoslovakia)".

Anyway, Claude and Grok were right. Even though opium wasn't even in the remit of the Angell mission, when Li Hongzhang surprised the American delegation by proposing a second treaty banning it, James Angell agreed on the spot. It was later ratified alongside the main immigration treaty. The opium treaty doesn't appear to have a distinct name from its more famous brother; the State Department merely lists the immigration treaty under the title "Immigration", and the opium treaty under the title "Commercial Relations and Judicial Procedure", so I can't entirely fault the LLMs for not specifying, though they ought to have done so for clarity. I suspect they were confused by the gap between the US government records they were trained on and the lack of sources they could find online?

(An aside: by 1880 US opium trade was in decline, while British opium trade was peaking, just about to be overtaken by the growth of domestic Chinese production. Angell judged correctly that the moral case overwhelmed the limited remaining Bostonian business interests and made the ban good politics in the US, particularly because it was reciprocal—he could claim to be protecting Americans from the drug as well. Though, that's a harsh way of putting it; Angell personally stuck his neck out, mostly upon his own convictions, and both he and the US deserve credit for that.[2])

If all that doesn't convince you to double-check your own assumptions when dealing with LLMs, well, there have been more boring cases too: I asked Claude to perform a tiresome calculation similar to one I had done myself a month before, Claude got a very different answer, I assumed it made a mistake, but actually it turns out I did it wrong the first time! Claude made a change in my code, I reverted it thinking it was wrong, but actually it had detected a subtle bug! I think by now we're all aware that LLMs are quite capable in math and coding, of course, but I list these examples for completeness in my argument: the correct update to make when an LLM contradicts you is not zero, and it's getting bigger.

  1. ^

    Apparently there's a decent market for presidential signatures of note? They managed to sell President Garfield's signature ratifying the Angell Treaty of 1880 for ten grand, partly off the infamy of the treaty and partly because Garfield's presidential signature is rare, him having been assassinated 6 months into the job.

  2. ^

    Fun bit of color from the UMich source

    Long afterward, writing his memoirs, Angell would remember the genuine warmth of Li [Hongzhang]’s greeting. The viceroy was full of praise for the commercial treaty signed by the two nations.

    “He was exceedingly affable …,” Angell remembered, “and [began] with the warmest expressions in respect to my part in the opium clause.

    “I told him, it did not take us a minute to agree on that article, because the article was right.

    “He replied that I had been so instructed in the Christian doctrine & in the principles of right that it was natural for me to do right.”



Discuss