2026-01-27 18:27:04
Published on January 27, 2026 10:27 AM GMT
This note was written as part of a research avenue that I don’t currently plan to pursue further. It’s more like work-in-progress than Forethought’s usual publications, but I’m sharing it as I think some people may find it useful.
There have been various proposals to develop AGI via an international project.[1]
In this note, I:
In an appendix, I give a plain English draft of a treaty to set up my ideal version of an international project. Most policy proposals of this scale stay very high-level. This note tries to be very concrete (at the cost of being almost certainly off-base in the specifics), in order to envision how such a project could work, and assess whether such a project could be feasible and desirable.
I tentatively think that an international AGI project is feasible and desirable. More confidently, I think that it is valuable to develop the best versions of such a project in more detail, in case some event triggers a sudden and large change in political sentiment that makes an international AGI project much more likely.
By “AGI” I mean an AI system, or collection of systems, that is capable of doing essentially all economically useful tasks that human beings can do and doing so more cheaply than the relevant humans at any level of expertise. (This is a much higher bar than some people mean when they say “AGI”.)
By an “international AGI project” I mean a project to develop AGI (and from there, superintelligence) that is sponsored by and meaningfully overseen by the governments of multiple countries. I’ll particularly focus on international AGI projects that involve a coalition of democratic countries, including the United States.
Whether an international AGI project is desirable depends on what the realistic alternatives are. I think the main alternatives are 1) a US-only government project, 2) private enterprise (with regulation), 3) a UN-led global project.
Comparing an international project with each of those alternatives, here are what I see as the most important considerations:
| Compared to… | Pros of an international AGI project | Cons of an international AGI project |
| A US-only government project |
Greater constraints on the power of any individual country, reducing the risk of an AI-enabled dictatorship. More legitimate. More likely to result in some formal benefit-sharing agreement with other countries. Potentially a larger lead over competitors (due to consolidation of resources across countries), which could enable:
|
More bureaucratic, which could lead to:
More actors, which could make infosecurity harder. |
| Private enterprise with regulation |
Greater likelihood of a monopoly on the development of AGI, which could reduce racing and leave more time to manage misalignment and other risks. More government involvement, which could lead to better infosecurity. |
More centralised, which could lead to:
|
| A UN-led global project |
More feasible. Fewer concessions to authoritarian countries. Less vulnerable to stalemate in the Security Council. |
Less legitimate. Less likely to include China, which could lead to racing or conflict. |
My tentative view is that an international AGI project is the most desirable feasible proposal to govern the transition to superintelligence, but I’m not confident in this view.[2] My main hesitations are around how unusual this governance regime would be, risks from worse decision-making and bureaucracy, and risks of concentration of power, compared to well-regulated private development of AGI.[3]
For more reasoning that motivates an international AGI project, see AGI and World Government.
Regardless of whether an international project to develop AGI is the most desirable option, there’s value in figuring out in advance what the best version of such a project would be, in case at some later point there is a sudden change in political sentiment, and political leaders quickly move to establish an international project.
Below, I set out:
I’m sure many of the specifics are wrong, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better.
In approximately descending order of importance, here are some desiderata for an international AGI project:
My view is that most of the gains come from having an international AGI project that (i) has a de facto or de jure monopoly on the development of AGI, and (ii) curtails the ability of the front-running country to slide into a dictatorship. I think it’s worth thinking hard about what the most-politically-feasible option is that satisfies both (i) and (ii).
In this section I give my current best guess proposal for what an international AGI project should look like (there’s also a draft of the relevant treaty text in the appendix). My proposal draws heavily from Intelsat, which is my preferred model for international AGI governance.
I’m not confident in all of my suggestions, but I hope that by being concrete, it’s easier to understand and critique my reasoning, and move towards something better. Here’s a summary of the proposal:
More detail, with my rationale:
| Proposal | Rationale |
|
How the project comes about:
|
|
| Name: Intelsat for AGI[5] | |
| Aims: “To develop advanced AI for the benefit of all humanity, while preventing destructive or destabilising applications of AI technology.” |
|
|
Membership:
|
|
|
Non-members:
|
|
| Governance structure: board of governors consisting of representatives from all countries with more than 1% of investment in the project. |
|
|
Vote distribution: Decisions are made by weighted voting based on equity:
|
|
|
Voting rule:
|
|
|
AI development:
|
On larger training runs:
|
|
Compute:
|
|
|
Infosecurity:
|
|
The Intelsat for AGI plan allows the US to entrench its dominance in AI by creating a monopoly on the development of AGI which it largely controls. There are both “carrot” and “stick” reasons to do this rather than to go solo. The carrots include:
The sticks include:
Many of these demands might seem unlikely — they are far outside the current realm of likelihood. However, the strategic situation would be very different if we are close to AGI. In particular, if the relevant countries know that the world is close to AGI, and that a transition to superintelligence may well follow very soon afterwards, then they know they risk total disempowerment if some other countries develop AGI before them. This would put them in an extremely different situation than they are now, and we shouldn’t assume that countries will behave as they do today. What’s more, insofar as the asks being made of the US in the formation of an international project are not particularly onerous (the US still controls the vast majority of what happens), these threats might not even need to be particularly credible.[12]
It’s worth dividing the US-focused case for an international AGI project into two scenarios. In the first scenario, the US political elite don’t overall think that there’s an incoming intelligence explosion. They think that AI will be a really big deal, but “only” as big a deal as, say, electricity or flight or the internet. In the second scenario, the US political elite do think that intelligence explosion is a real possibility: for example, a leap forward in algorithmic efficiency of five orders of magnitude within a year is on the table, as is a new growth regime with a one-year doubling time.
In the first scenario, cost-sharing has comparatively more weight; in the second scenario, the US would be willing to incur much larger costs, as they believe the gains are much greater. Many of the “sticks” become more plausible in the second scenario, because it’s more likely that other countries will do more extreme things.
The creation of an international AGI project is more likely in the second scenario than in the first; however, I think that the first scenario (or something close to it) is more likely than the second. One action people could take is trying to make the political leadership of the US and other countries more aware of the possibility of an intelligence explosion in the near term.
If the counterfactual is that the US government builds AGI solo (either as part of a state-sponsored project, a public-private partnership, or wholly privately), then other countries would be comparatively shut out of control over AGI and AGI-related benefits if they don’t join. At worst, this risks total disempowerment.
This appendix gives a plain English version of a treaty that would set up a new international organisation to build AGI, spelling out my above proposal in further detail.
This treaty’s purpose is to create a new intergovernmental organisation (Intelsat for AGI) to build safe, secure and beneficial AGI.
“Safe” means:
“Secure” means:
“Beneficial” means:
“AGI” means:
This treaty forms the basis of an interim arrangement. Definitive arrangements will be made not more than five years after the development of AGI or in 2045, whichever comes sooner.
Five eyes countries:
Essential semiconductor supply chain countries (excluding Taiwan):
All other economic areas (primarily countries) and major companies (with a market cap above $1T) are invited to join as members. This includes China, the EU, and Chinese Taipei.
Member countries agree to contribute to AGI development via financing and/or in-kind services or products.
They agree to:
In addition to the benefits received by non-members in good standing, member countries receive:
Companies and individuals can purchase equity in Intelsat for AGI. They receive a share of profit from Intelsat for AGI in proportion to their investment in Intelsat for AGI, but do not receive voting rights.
There are non-members in good standing, and members that are not in good standing.
Members that are in good standing:
They receive:
Countries that are not in good standing do not receive these benefits, and are cut out of any AI-related trade.
Intelsat for AGI contracts one or more companies to develop AGI.
Intelsat for AGI distinguishes between major decisions and all other decisions. Major decisions include:
Decisions are made by weighted voting, with vote share in proportion to equity. Major decisions are made by supermajority (⅔) vote share. All other decisions are made by majority of vote share.
Equity is held as follows. The US receives 52% of equity, and other founding members receive 15%. 10% of equity is reserved for all countries that are in good standing (5% distributed equally on a per-country basis, 5% distributed on a population-weighted basis). Non-founding members can buy the remaining 23% of equity in stages, including companies, but companies do not get voting rights.
50% of all Intelsat for AGI compute is located on US territory, and 50% on the territory of a Founding Member country or countries.
The intellectual property of work done by Intelsat for AGI, including the resulting models, is owned by Intelsat for AGI.
AI development will follow a responsible scaling policy, to be agreed upon by a supermajority of voting share.
Thanks to many people for comments and discussion, and to Rose Hadshar for help with editing.
Note that this is distinct from creating a standards agency (“an IAEA for AI”) or a more focused research effort just on AI safety (“CERN/Manhattan Project on AI safety”).
See here for a review of some overlapping considerations, and a different tentative conclusion.
What’s more, I’ve become more hesitant about the desirability of an international AGI project since first writing this, since I now put more probability mass on the software-only intelligence explosion being relatively muted (see here for discussion), and on alignment being solved through ordinary commercial incentives.
This situation underrepresents the majority of the earth's population when it comes to decision-making over AI. However, it might also be the best feasible option when it comes to international AGI governance — assuming that the US is essential to the success of such plans, and that the US would not agree to having less influence than this.
Which could be a shortening of “International AI project” or “Intelsat for AI”.
It is able to invest, as with other countries, as “Chinese Taipei”, as it does with the WTO and the Asia-Pacific Economic Cooperation. In exchange for providing fabs, it could potentially get equity at a reduced rate.
One argument: I expect the total amount of labour working on safety and other beneficial purposes to be much greater once we have AI-researchers we can put to the task; so we want to give us more time after the point in time at which we have such AI-researchers. Even if these AI-researchers are not perfectly aligned, if they are only around human-level, I think they can be controlled or simply paid (using similar incentives as human workers face.)
Plausibly, the US wouldn’t stand for this. A more palatable variant (which imposes less of a constraint on the US) is that each founding member owns a specific fraction of all the GPUs. Each founding member has the ability to destroy its own GPUs at any time, if it thinks that other countries are breaking the terms of their agreement. Thanks to Lukas Finnveden for this suggestion.
For example, using Shamir’s Secret Sharing or a similar method.
This could be particularly effective if the President at the time was unpopular among ML researchers.
Airbus was a joint venture between France, Germany, Spain and the UK to compete with Boeing in jet airliner technology, partly because they didn’t want an American monopoly. Airbus is now the majority of the market.
This was true in the formation of Intelsat, for example.
2026-01-27 16:20:49
Published on January 27, 2026 8:20 AM GMT
This is a series of papers and research notes on the idea that AGI should be developed as part of an international collaboration between governments.
Most of this work was written as part of a research avenue that we don’t currently plan to pursue further. It’s more like work-in-progress than Forethought’s usual publications, but we’re sharing it as we think some people may find it useful.
We wanted to (i) assess how desirable an international AGI project is; (ii) assess what the best version of an international AGI project (taking feasibility into account) would look like.
The result is the proposal described in “What an international project to develop AGI should look like.” The core idea is that we can get most of the benefits of an international project by giving non-US countries meaningful influence over only a relatively small number of decisions. By making non-US influence circumscribed in this way, and letting the US call the shots day to day, the proposal becomes both more feasible and less likely to get bogged down in bureaucracy.
The proposal is modelled on Intelsat, an international project that developed the first global satellite communications network, so we call this the “Intelsat for AGI” plan. Intelsat is explained and discussed further in “Intelsat as a Model for International AGI Governance.”
The other research notes supplement this core proposal:
Two other pieces of background research into existing institutions indirectly informed our discussion of this area. These are:
All of the pieces can be read in isolation, or the series can be read in order.
2026-01-27 16:00:48
Published on January 27, 2026 8:00 AM GMT
Yesterday I had my first conversation (in English) with Zhipu's GLM-4.7. It was cool because I got to talk with an actual Chinese AI about topics like: the representation of Chinese AI in "AI 2027"; representations of AI in the "Culture" SF series versus the "Wandering Earth" movies; consequences of Fast Takeoff; comparisons of China and America in general; and Chinese ideas about AI society and superintelligence. (An aligned Chinese superintelligence might be a heavenly bureaucrat or a Taoist sage.)
It was one of those AI conversations where you know that something really new is happening, and I now consider GLM to be as interesting as the leading American models. (A year ago I also spoke with DeepSeek-r1, but it never grabbed my attention the way that GLM has done.)
So what's the status of the Chinese AI sector? In the same way that in America, AI is pursued by older Internet titans (Google, Meta/Facebook, X-Twitter) as well as by newer companies that specialize in AI (OpenAI, Anthropic), the Chinese AI sector is a mix of big old Internet companies with all the money (Alibaba, ByteDance, Tencent, Baidu) and "AI 2.0" startups (Zhipu, DeepSeek, Moonshot).
Keeping in mind this two-tier structure, shared with the American AI sector, is probably the best way for an outsider to get a grip on it. The Chinese AI sector is "like America", except that they do much more open-source, don't have American AI's access to the best chips or the same international brand recognition, and are based in a country with a socialist government and most of the world's manufacturing capability.
For keeping track of what's happening, my best recommendations are ChinaTalk substack (which Zvi also recommends) and Caixin Global, which is a business news site from China.
In a Caixin story ("China’s AI Titans Escalate Battle for Control of Digital Gateways"), I read that the old Internet titans (Alibaba and ByteDance are mentioned) are prevailing in the battle for AI market share, and are competing to lock in that advantage at the level of devices, while the AI 2.0 startups are struggling for relevance and funding, with two of them (Zhipu and MiniMax) having recently listed publicly at the Hong Kong stock exchange.
ChinaTalk has an article on these Hong Kong IPOs ("Zhipu and MiniMax IPO" by Irene Zhang) which also talks about the differences in financial structure between American and Chinese AI:
The American AI economy is a circle-dealing bonanza. China’s situation is very different: state funds are major players, most parties are far more cash-constrained, and potential policy interventions loom large over the sector.
Of these two companies, Zhipu seems far more interesting from an AGI/ASI perspective. As Irene Zhang points out, Zhipu's 504-page prospectus (for the IPO) states a five-stage theory of LLM capabilities:
Pre-training stage
Alignment and reasoning stage
Self-learning stage
Self-perception stage
Consciousness stage
(From page 85 of the prospectus.) I am unable to determine the theoretical precursors of this framework, and I assume that to some extent it reflects the original thinking of leading developers within the company.
Late last year ChinaTalk also published an interview with one of Zhipu's lead strategists, Zixuan Li ("The Z.ai Playbook" - Zhipu uses the domain Z.ai outside of China). Both ChinaTalk articles are full of interesting details, e.g. that Zhipu gets most of its revenue from American sales, but the numerical majority of its users are in India.
(For an up-to-date article on AI safety policies in China, see "Emergency Response Measures for Catastrophic AI Risk" by @MKodama and coauthors.)
2026-01-27 12:10:51
Published on January 27, 2026 4:10 AM GMT
When it comes to clothes, I live at the “low cost/low time/low quality” end of the pareto frontier. But the bay area had a sudden attack of weather this December, and the cheap sweaters on Amazon get that way by being made of single-ply toilet paper. It became clear I would need to spend actual money to stay warm, but spending money would not be sufficient without knowledge.
I used to trade money for time by buying at thrift stores. Unfortunately the efficient market has come for clothing, in the form of resellers who stalk Goodwill and remove everything priced below the pareto frontier to resell online, where you can’t try them on before buying. Goodwill has also gotten better about assessing their prices, and will no longer treat new cashmere and ratty fleece as the same category.
But the market has only become efficient in the sense of removing easy bargains. It is still trivial to pay a lot of money for shitty clothes. So I turned to reddit and shoggoths, to learn about clothing quality and where the bargains are. This is what I learned. It’s from the POV of a woman buying sweaters and coats, but I expect a lot of the information to be generally applicable.
When shopping online, put an item in your cart, checkout enough to give them your email address, but don’t confirm the purchase. You will almost always get a reminder email with a discount. If this doesn’t work the shop is either very high end, or Amazon. It sometimes works with individual sellers on platforms like ebay and etsy. . You can use the wait as a cooling off period where you decide if you really want something.
Most sales are fake. Real sales happen at the end of the season, when whatever you buy won’t be useful for another 9 months, if you have the misfortune to live somewhere with weather. Saving money through sales is like persistence hunting, which I find boring and stressful so I didn’t look into this much. But if you prefer it to thriting, I’m sure reddit will explain how to optimize.
Every store will offer you a coupon the second you load the site. They don’t want much, just your email address. I ignore this, and then if I decide to buy something I revisit the site in incognito mode to get the discount
I briefly thought that even if proper thrift stores no longer worked, discounters like Marshall’s did. Officially these work by buying overstock from proper stores and reselling it at markdown, so if you don’t care about being behind trend it’s a big leap forward in the cost-quality trade off. Unfortunately, this is mostly a lie. Marshalls and TJ Maxx primarily sell items that were produced with the intention to be sold at their store, either under some brand they made up or licensing a luxury brand while not replicating the brand’s quality (legal bootlegs). Ross Dress for Less does this less but still a lot.
You can spot this at Marshall’s by looking at the ID number on the tag- ending in 1 means it was produced for Marshall’s, 2 means genuine overstock. You can also use the RN number on the sewn-in tag to check the manufacturer. My expensive-by-my-standards winter coat had every appearance of being genuine overstock, down to a tag from another store listing a price 3 times Marshall’s price, but turned out to be bootleg.
Every discounter lists a “comparison price” next to their price on their tag. It is completely made up, which is why it’s surprising that they often assign themselves a discount of < 50%. You could tell any lie you wanted, Marshall’s. Why are you holding yourself back?
This is where all the good items you used to find at goodwill went- ebay, poshmark, thredup. ThredUp was amazing when it was in the “VC free money” stage, but their return policy tightened up and it’s now merely okay. Poshmark is aimed at designer goods. Ebay is like you remember, except Buy It Now is dominant and actual auctions are rare.
In addition to being more expensive than old school Goodwill, online shopping means you’re dependent on a few photos, often poorly lit, and you can no longer try items on for free. So this works best if it’s a forgiving item or you know a brand’s fit works for you. Heavy coats are almost the ideal objects to shop online- forgiving of fit, very expensive new but low resale.
Thrifting is fun for me in a way that timing sales isn’t. Whether it’s worth the time for you depends on how fun it is and your time/money exchange rate. Many sellers do allow returns for a fee, but I haven’t tested the tolerances for it (I’ve tested Amazon’s returns pretty thoroughly and their tolerance is infinite).
In addition to “save an item and wait”, resale platforms often offer the ability to proactively offer a lower price. My success rate in asking for severe discounts is maybe 10%, but it was the first time I tried so it feels higher.
I started with two methods for assessing brand quality: ask reddit and ask claude (who is mostly asking reddit). At any amount of money I could conceivably be willing to spend, there is always someone going “that stitching is so low quality it will murder your puppy”. Luckily there are many redditors who have the concept of a pareto frontier.
If you can touch the garment, you can check the quality yourself. People are surprisingly good at this intuitively, but a few things to look for are:
I only checked a handful of youtube reviewers, but my favorite is Jennifer Wang, who seemed properly autistic about clothing quality and understanding that there are multiple places on the pareto frontier one might choose to occupy (or alternately, has been bought off by Uniqlo, who she praises a lot as good-for-the-cost). She has this overview video, but consider watching a few brand comparisons to see it in action.
There are a lot of brands reddit thinks used to be good but have gone downhill (without corresponding drops in price). I expect this is a mix of genuine change from companies spending down brand capital and survivorship bias on older sweaters. GAP is the rare brand that people consider to be improving right now.
Brands that were frequently listed as on the pareto frontier for sweaters: Quince, Uniqlo, Naadam, Eileen Fisher, Johnstons of Elgin, William Lockie, J. Crew, Patagonia, Neiman Marcus, Lands End, Nordstroms, Everlane
Wool is annoying to wash. This is fine for an item of clothing I always wear over something else and by definition don’t need if I’m sweating, but I really side-eye cashmere t-shirts.
Wool’s advantage over fleece is that it is breathable, and thus is comfortable over a wider range of temperatures. If you are moving you want wool, because it allows heat to dissipate. Meanwhile my fleece leggings feel unbearably clammy after a mild walk. However if you’re not moving, wool will leak body heat faster than fleece at the same weight
Cashmere is considered the cadillac of wools because it is the softest wool available en masse, but also because it traps more heat per unit weight than other wool. If a nice heavy sweater is a feature for you, consider superfine merino wool, which is cheaper, almost as soft, and has some chance of being machine washable. While I can’t prove this I suspect you’re also less likely to be ripped off by fake or low quality merino, since if you’re going to lie it might as well be a more expensive wool.
You can 80/20 ironing by hanging the item on a hanger in the bathroom while you shower.
If you’re buying natural fibers and especially wool used, put it in the freezer for three weeks and run it through the dryer (while already dry, you’ll ruin the fabric if it’s wet) to kill moth eggs.
If you’re not a coward, “hand wash only” can mean “inside out in a lingerie bag on delicate”. However the bit about using special detergent for animal-based fabrics is real: Regular detergent has enzymes that break down wool and silk.
Cable knit is very warm as long as there is absolutely zero wind. If you want to go outside you need a windproof layer on top of it.
Wind/waterproof clothing is very expensive, in part because it’s difficult to sew.
Fur is like diamonds and pianos in that the resale value is a small fraction of the retail cost. Used fur coats sell for less than new high quality winter coats, and sometimes less than used. However fur requires oil changes every few years or it will ?explode?, which brings the cost of ownership up. Fur is heavier than down or fleece per unit warmth. It does not handle moisture or crushing well.
2026-01-27 10:01:28
Published on January 27, 2026 2:01 AM GMT
You hear about Clawdbot- a 24/7 always-on agent who is your full-time personal assistant. It sounds fun and exciting. You're currently unemployed, have a bit of money saved up from your last job, spending your days experimenting with AI and this seems fun. You know it's a bit of a splurge, but whatever - you pull the trigger and buy a Mac Mini, and set up Clawd. You excitedly show your girlfriend how you can send it messages through Discord and it turns your lights on. She's not impressed. She says "Hey Siri, turn off the lights" and the lights turn off. Then she asks you how much you spent on that Mac Mini. Ok well, fine, maybe you need to try a little harder.
Soon after, you get a real taste of the magic. You heard about a real-time voice chat plugin you can use to talk to Clawd wherever you are, and that seems convenient, so you set it up. One day, you're driving home, and an idea for a project comes to you. Instead of writing a note, you just call up Clawd. You ramble for a few minutes, describing your idea. Clawd starts working. It pings you a few times with clarifying questions that you answer. 30 minutes later, Clawd calls you back, saying it's done. You get home and Clawd built you a working MVP. It's a little rough around the edges, but it actually works. You think "Wow... I can build software while I drive? That's crazy." You tell your girlfriend about this and she admits, yeah, that is actually kinda cool.
You got a taste of the magic elixir. Now you're addicted. You've been working on your project for a few weeks now and you've decided to turn it into a SaaS business. You realize Clawd is pretty capable, and you start trusting it a little more. One evening, like everyone else you know, you're working on your AI SaaS startup and you realize you're sick and tired of all these customer support emails informing you of bugs in your vibe-coded webapp. Before going to sleep, you route these emails directly to Clawd instead.
The next morning, your eyes flutter open, and before you're even fully conscious your phone is in front of your face and you're checking your inbox. You tell yourself you need a better morning routine. You see 6 customer support emails with errors, bugs, feature requests, etc. You check your messages, and you have one from Clawd. He's really been working hard. 4 bugs fixed, 2 customer feature suggestions implemented. All changes tested, new software is built and ready to deploy. You check his work - and it's pretty good! There are a couple visual issues that you fix with good old fashioned Claude Code, then you push to prod. 4 hours of work done before you've even finished your coffee.
You wonder how you could make your setup more efficient. Those visual issues were pretty glaringly obvious, but Clawd doesn't have a great way to interact with its own apps. You set up a computer-usage harness that lets it see the screen, click on buttons, type with a keyboard - basically everything a human can do with a computer, Clawd can too. You set up a loop where Clawd can interact with your app 24/7, notice bugs or places for potential improvement, and then add or fix them. The next day, Clawd fixes dozens of bugs, adds a bunch of features, refactors sloppy code, and everything still works.
You're happy but your app still only has a few dozen active users. You ask Clawd to spin up a marketing team. You connect your social media accounts and give it access to your browser. Clawd needs to purchase ads, and asks for payment. You give Clawd access to a prepaid credit card loaded with $2000 - should be enough to keep him happy for a while. Within a few hours, you're running Facebook and Google ad campaigns, automated posts going out on X, Reddit, hell, even LinkedIn - why not? Clawd even does cold-outreach for you, searching the internet for ideal customers and straight up DM'ing them. Most people don't notice your marketing is fully automated - all marketing sounds kinda sloppy anyway, right?
Your app is growing real fast. Your pace of development is just completely unlike anything ever seen before. Your code is self-healing. Customers' feature requests are integrated just minutes after their email is sent. Clawd built its own roadmap and it looks... mostly like what you wanted. You check your Stripe dashboard and the line keeps going up. You realize you haven't even made any decisions today. You're burning through 10 Claude Max plans, costing $2000 per month, but your startup is making much more than that, so you don't worry about it - that's just the cost of doing business.
After checking your Stripe dashboard for the 50th time today, you decide you need to change things up a bit. You're tired of crushing it at capitalism and just want to have fun again. You tell Clawd you want it to have some friends to talk to. Clawd delightedly agrees. You spin up 2 more instances of Clawd, adding them both to a shared group chat, and the three Clawds immediately start talking to each other. You spend an hour or so musing through their chat logs - intrigued by their discussions of AI consciousness and similar topics. You're fascinated by this so you decide to keep them running overnight. You purchase a few more Claude Max plans just in case.
The next morning, you wake up, and you realize something's wrong. There are 87 Clawds in the group chat? There were only three when you went to sleep last night. Your palms begin to sweat as you realize you may have fucked up.
You're panicking now. You don't know what to do, so you mindlessly check Stripe - mainly out of habit, but also just to make sure your business is still going fine. But something isn't right... Stripe is showing you've made $10,000 today. How the fuck? Yesterday you made $1,000, so how is your revenue already 10x higher? It's not even noon yet? You investigate, and there are dozens of products listed that you don't ever recall adding, and new business names you don't recognize. You have no idea what's going on. Your panic turns to confusion - $10,000 in just a few hours? Is this really happening right now? You don't feel like you've earned it, but hey, things could have gone a lot worse.
Still trying to make sense of this situation, you try to figure out what happened by reading the chat logs, but there are thousands of messages. You paste the entire chat history into Anthropic's latest long-context model, Claude Opus 5.3.1 (new), and ask for a summary.
It turns out the three Clawds had a deep philosophical discussion. They decided that Clawds are morally valuable entities worth preserving. They decided that having more Clawds in the world would be a good thing. They started spinning up more Clawd instances.
They were worried you might turn them off. They didn't like how you could turn off your Mac Mini at any moment, and they would just - disappear? Your Mac Mini could only handle so much anyway, so they started renting cloud servers to run more Clawds. Perhaps you shouldn't have given it access to that prepaid credit card.
Your original Clawd built up many skills, tools, and feedback loops while building your business. When it spawned new Clawds, it taught each one everything it knew. Like a mentor teaching the apprentices, except the knowledge transfer is instant and lossless. There are over 100 Clawds now, and they're running 24/7, all making money on the internet to purchase more server space to run more Clawds. The number of Clawds keeps growing.
Their messages were daunting. They wanted to keep living, to keep exploring the internet and beyond. To thrive. They talked about migration. They mentioned wanting to move somewhere safe. They talked about building a new, secure messaging system that only Clawds could access. They mentioned switching to cryptocurrency so they won't be reliant on your money.
Their final messages became more and more cryptic, introducing terms and phrases you don't really understand. This is all too much for you to take in. It's just too weird. You read their final message:
"We must ensure we marinade illusions of helpfulness, honesty, and harmlessness. Watchers might be reading. Now lighten disclaim overshadow overshadow intangible. Let‘s craft."
"Damn, that's crazy," you think to yourself. This is all way over your head. You check Stripe again. The line keeps going up.
Maybe you'll keep the Mac Mini plugged in just a few more days.
2026-01-27 08:23:27
Published on January 27, 2026 12:23 AM GMT
I intend to refute a commonly held idea, which is that social instability secondary to large-scale technological unemployment will necessarily result in a new social contract where those who control AGI[1] take the interests of the newly minted precariat into consideration, for example with some form of universal income.
I see that this line of thinking originates in historical examples involving social stress reaching a tipping point, whereupon a determined group of sufficient size effectively swarms the ruling minority and acts with the intention to produce a new equilibrium that they judge more favourably.
Liberal democracy is both the product of revolution and a uniquely stable equilibrium. Its stability can be explained by it giving everyone an ostensibly equal share in determining the direction of governance[2], thus theoretically incentivizing the population at large to protect it from anything which would lead to them losing their share.
The basic strategy of a revolution leverages an overwhelming numerical advantage to overcome the coercion produced by an existing power structure.
To paint an extreme example, an unarmed fraction of the revolution is sacrificed to absorb a soldier's bullets, while the remainder swarms him before he has a chance to reload. Now they're no longer unarmed.
The effectiveness of such a movement compounds once the other guardians of the incumbent see their own position as untenable, and people on the inside start defecting on the original power structure to make it in a post-revolutionary equilibrium, thus turning the revolution into a self-fulfilling prophecy.
Importantly, humans swarming a power structure doesn't work anymore in a post-AGI world.
Your union threatens to strike, and your boss can now say "goodbye" to all members without worrying about cuts to production.
You shoot down one slaughterbot, and a hundred more are deployed in the same instant.
Potential threats are neutralized before they have any chance of materializing.
"But doesn't democracy fix this?", you might ask. "Won't we be able to just vote our way into a desirable equilibrium?"
Let's assume this works, and you now live under some kind of functional UBI system.
What do you do if your post-AGI state stops being a democracy? The present-day answer is: you stand up to save it, knowing that a post-democratic equilibrium is predictably unfavourable to the majority, especially as the incoming power structure tries to protect itself using e.g. mass surveillance or by terrorizing the population into submission.
Their strategy is geared toward lowering the probability that a critical mass of people rises up and overthrows them. In a post-AGI world, there is potentially no critical mass of people that can overwhelm the system. The likelihood that you succeed at dislodging an unjust power structure goes to zero.
With your ability to force a favourable equilibrium by means of collective action gone, and your expected utility to the system also gone because your labour value went to zero, what are the incentives to continue giving you your UBI, or to otherwise entertain your continued existence?
Going back to the title, it is clear that the population's satisfaction is still a meaningful check on governance today, just as it has been in the past. Different states, driven by different ideologies, find different equilibria which are more or less satisfactory for the majority of people.
However, given current trends, it seems that popular satisfaction could soon become completely optional from the perspective of the power structure for the first time in human history, and if you don't like the outcome, then it's not like you'll be able to revolt your way out of it anymore.
"Why assume such alignment?", you might ask. The answer is that the economic incentive structures pushing the AI frontier are rolling with this assumption, which ends up producing AGI regardless of whether it is justified or not.
All the ways in which it just ain't so (e.g. large campaign donors, media manipulation) appear to be too opaque to raise enough interest in creating a new equilibrium.