Strange Loop Canon

By Rohit Krishnan. Here you’ll find essays about ways to push the frontier of our knowledge forward. The essays aim to bridge the gaps between Business, Science and Technology.

Aligning Anthropic

2026-03-03 04:26:12

Last week was a bit crazy. In many ways, but specifically with AI. For those who were blissfully unaware, the Department of War picked a fight with Anthropic over the ways they were allowed to use the model. The fight, as is often the case with this administration, got nasty. Anthropic said no, we won’t budge; the DoW got angry and threatened to cut them off and declare them a supply chain risk. A few hours later, OpenAI said they had managed to get another deal, apparently a better one, and on terms any other AI lab can also avail itself of.

So naturally everyone is angry. Anthropic is angry because they were declared an SCR. DoW is angry because someone tried to force their hand. OpenAI is angry because everyone seems to call them opportunistic ghouls, more or less. The media, both independent and institutional, loves it because they get to play their favourite game of good guy-bad guy.

I really didn’t want to write about this. But it is important, contractual disputes are actually interesting, and sometimes that deserves an explanation.

The facts are roughly as follows. Anthropic had an agreement, via Palantir, to work with the DoW. They’ve been doing it since mid-2024. They made a different, supposedly unsafe, version of Claude to do this. Somehow over the last week, they got into a tiff with the DoW, supposedly over some red lines Anthropic had (no mass surveillance and no autonomous weapons), or rather over who gets to say what those lines are and when they’re crossed. OpenAI signed a contract which had those same red lines and an enforcement mechanism.

Now, the claims are roughly as follows, noting that nobody knows if they’re true. Anthropic asked questions about the Maduro raid where Claude was used, and the DoW got upset. The DoW asked a hypothetical about how to do autonomous missile defense using Claude, and got a non-answer: they’d need to talk to the CEO and they’d ‘work it out’. Anthropic asked for their red lines to be enforced by making them the party that approves use (you’d ask them if you had a question). The DoW wanted language referring to “all lawful use”, basically saying if what they’re doing is legal you can’t tell them what to do, especially during operations, i.e., you can’t tell them to stop doing something in the middle of an op. OpenAI said sure, we agree to all lawful use, but note these specific laws and regulations, and we will control the deployment of our models, using our people, since we know what the models can and cannot do, and help you guys out.

Every point above is a claim, and we have no real proof. People are desperately attempting hermeneutics on the OpenAI position and blog posts, but honestly it feels kind of silly since we simply don’t have the data to conclude they did a bad thing. Or, particularly silly, that they defected in a prisoner’s dilemma. What we do have are concerns. Concerns like:

  • Didn’t OpenAI just accede to “all lawful use” and therefore allow mass surveillance on Americans?

  • How can you let a private company tell the DoW, you should ask us if you’re violating any of our red lines during an operation?

  • Why did OpenAI sign an agreement so fast anyway, surely they just said yes when Anthropic said no?

  • What do those red lines even mean?

  • Also, Anthropic and OpenAI seemed to have the same ones, how can that be?

  • Can’t the government or the DoW just make up its own laws as it does anyway? Who can stop them?

  • How can you guarantee this means the DoW won’t cross any red lines?

  • What do technical safeguards mean, how are they enforceable?

  • Etc…

Many valid questions, but I refer you to the OpenAI blog, Dario’s written statement, and Sam’s AMA for various points of view on them. They do cycle between thinking of the government as Leviathan, an entity you cannot negotiate with, only appease, and thinking of the government as Loki, a trickster you need to subdue or overpower.

My interest though is broader than who said what to whom, or who’s virtuous and who’s not, as I think yours should be. It’s not to relitigate the facts, but to think about the following:

  1. What are the right safeguards to put in place when a piece of technology is deployed as a tool by the DoW?

  2. How do we enforce any of it?

Let’s think about this for a moment. Imagine you are an AI lab dealing with the government. They want to buy your AI, and you want to sell it. How would you safeguard it?

You know that plenty of things are legal, but not “good”. So what’s the choice here? You could of course just try not to deal with them at all. But once you decide to do it, you need either contractual provisions you think they would adhere to, or execution guardrails you can have some control over.

You also know that plenty of things are legal, but impossible. You cannot build a stairway to the moon regardless of the fact that it’s legal. Saying “I want GPT to build my defense strategy in Iran” would be such a thing: you can ask, but you won’t get good answers. Both AI labs want to be able to say that.

So, you have to write some provisions into the agreements. Of course, the DoW can buy anything it likes, and you can add constraints on the stuff you’re selling, but they have to be clear. This is true of all contracts, but with defense it’s even more important. For the same reason that it’s important in a hospital. To take a silly example, most models will rightfully have safeguards against violence or nudity, but imagine we also need them to treat burn victims. It can’t be a blanket no; you need to figure out some way to separate what’s allowed from what’s not, ideally before it gets deployed, so that you’re not doing this live when someone’s in the OR.

Which is to say that whatever they’re using, the lines have to be clear. Either some things are allowed, or they’re not. As little ambiguity as possible. The DoW would also want the power to determine courses of action, and can’t leave operational control in the hands of another. This is the now infamous scenario someone apparently painted in discussions with Dario: if a missile were heading towards the US, would it be OK to use Claude to defend against it?

Apparently Dario said they’d work it out, and also later said they could carve a missile-defense aspect out of the contract, but you hopefully see the problem. You could easily come up with a dozen other scenarios, so do you just keep coming up with them and then carving them out of the contract because ‘that seems fine’?

The other “red line”, about mass surveillance, is similar. What does that mean? Ask a dozen people, as Zvi did, and you get a dozen different responses. Going from a vague feeling to something specific is really difficult.

Now the DoW’s position seems to be: let’s just do it according to the law. The law is clear enough, or at least clearer than a goal that we might share. Laws are an operationalisation of principles we hold dear.

But what if the law has loopholes? What if we disagree with the law? You still have to find some way to make that clear, and honestly you either draft a contract airtight enough to solve for those, or you have to believe that your counterparty will obey the law. You can draft “permissions-based” (enumerated) vs “restrictions-based” (negative list) provisions, if you’re clear enough. And it makes sense to have explicit contractual red lines, even if unenforceable mid-operation, since they create legal exposure and political cost for the government if violated. But if the lines aren’t clear, then no contract can save you, and saying “I will decide” will not necessarily break in your favour.

Terms like “reasonably requested” or “as appropriate” or “reasonable doubt” are standard legal terminology precisely because you can’t nail down every eventuality in every contract; they capture some combination of norms and prior history to gesture at the types of things that will be OK and the types that won’t.

Because the only thing that matters is whether you have any visibility into their actions in the first place. The Anthropic deployment was a separate version of Claude, under a different ToS, deployed by someone else. Which means they probably had limited visibility into what it was being used for. Which also means the only way to enforce any standards is to codify things quite a bit upfront - it’s like doing an on-premise installation vs SaaS.

OpenAI’s contract, on the other hand, seems to have come hand-in-hand with their own teams of FDEs and something they call a safety stack (I’m guessing a cloud deployment of their own models and some checks therein, I don’t know). Which means they have much more operational visibility into model usage, which also means they have the leeway to negotiate if usage starts to violate any of their ToS.

I have no real opinion here on which is better. Contracts are not inherently all-powerful; they’re only powerful insofar as they come with oversight. Neither approach is inherently superior to the other, even if what we know about them is accurate, which might not be the case. One has more contractual protections and limited operational visibility; the other has fewer contractual protections and higher operational visibility. The first relies more on trust in the counterparty, the second relies more on execution control. Both rely on the existing legal system.

This entire saga seems to me like a personality clash rather than a contractual dispute. A version might well be: someone asked a question about the Maduro raid. The DoW got upset at being asked. They posed a hypothetical. Anthropic’s response was bad, confirming the DoW’s prior assumption that Anthropic was trying to control the deployment. Which is why, even though they were so close to being effectively done with the agreement, the Secretary of War decided to blow things up.

To reiterate, it’s really bad to call Anthropic a Supply Chain Risk. This is just not true. It erodes yet another norm about what capricious governments can do, at a time when we should be strengthening such norms, not eroding them. It is perfectly fine for Anthropic to have rules about how their AI ought to be used. It is perfectly reasonable for the DoW to say nah, that’s not going to cut it, I don’t want to ask for permission.

But what is true is that this should not be much of a surprise, considering the constant rhetoric over the past few years has been that AI is a power like no other. It’s like nukes, but times a thousand. We need regulation. And when an industry repeatedly calls out for oversight, asking for someone to make the rules on how it should be used, you cannot be surprised when the Defense department takes that seriously. You cannot be surprised when they make up their own interpretations of what ought to be done, because you were insufficiently prescriptive. They will listen to your articulation of any red lines and wonder: what do you mean you want to tell me how to use the mega-nuke-crazy-power that you yourself say you don’t know how to control?

The US has nationalised or regulated whole industries for simpler reasons. Telephone lines, rails, the attempted seizure of steel mills; these aren’t small things. And that’s not to mention the times the government has threatened to do this, from JFK to FDR.

So if you think AI is important, we’re going to see more of this. You simply cannot call your technology a major national security risk in dire need of regulation and then not expect the DoW to want unfettered access to it. They will not allow you, rightfully so in a democracy, to be the arbiter of what is right and wrong. This isn’t the same as you or me buying an iOS app and accepting the T&Cs.

But it’s also true that a corporation acting as a bulwark for democracy against the government is fundamentally weird, even if true. Democracy is incredibly annoying but really, what other choice do we have! What we don’t have is a reckoning with the power that is now reality.

I am extremely uncomfortable with the fact that we can just purchase commercially available data on almost everyone. I am also somewhat uncomfortable that the future of war is going to be autonomous, though there are days when having Claude or GPT decide where to bomb seems better than leaving it to an average 22-year-old. I’m uncomfortable that in the pursuit of absolute security we have effectively given up our privacy, and all that remains are small shreds that sit with a couple of large technology giants. I’m uncomfortable that the few shreds of privacy that did exist can now be reverse-engineered away using pretty normal AI tech.

I also am not sure there’s a way out where we would ever have digital guarantees of privacy. I think our children will think that a quaint old notion. “What do you mean, I can of course just ask my AI to analyse a bunch of information and figure out who ratmonster2024 is.” The work that only the NSA could do a couple of decades ago is probably within the grasp of the average startup, if they cared. Genies don’t tend to go back into bottles, and this one has powerful forces keeping it out.

The future will bring these questions to bear much faster than anyone might expect. The current world survives because a lot of analysis is effort-bounded. If that’s gone, a lot of things we previously assumed secure will also go away. This is coming, whether you want it or not. The best part of last week is that the issue became higher profile again. But bringing attention to the issue is only the first part. Unless we know what we want to do with the attention, tribal politics is going to overwhelm it all.


I had a conversation with an august panel last week. It was really, really good, and you should check it out.

Exponential View: 🔮 Where the human ends and AI begins. This is the first AI Vistas discussion, a new series hosted by Exponential View where I bring people I trust into conversation around one hard question, because together we can see what none of us would see alone.


Notes on Mexico

2026-02-01 22:00:16

A series of observations about Mexico from my travel over the holidays, now that I’ve had time to digest. I went to Mexico City, to touch the Aztec, Zapotec and Mayan civilisations, at least cursorily, which made me inordinately happy. It’s the first time I’ve gone, but I got a few days in each place to actually just be, which is the only way to travel in my opinion. I’d read a bunch of books before and during my trip, but what I came away with most strongly was the impression of a country that’s psychically much larger than it is physically, with the weight of a few layers of history, and with a peculiar mix of life.

  1. Mexico is like if India were richer and things were cleaner, while being much (much!) more unsafe. This showed up for me almost everywhere I went, often in the background, often not. For instance, while in India you will see a lot more spaces for the rich, or large luxury malls, in Mexico it feels like those are hidden away inside secure compounds. In fact the only place I saw wealth easily accessible and displayed was in Cancun, which is as if the Mexicans built a tourist place just for the Americans and made it look like Dubai.

  2. I was shocked that Mexico City still has a murder rate a third of NYC’s in the 1990s. Turns out this ignoble list is also dominated by Mexico.

  3. I continue to be constantly amazed at how safe India is. It has no right to be; it’s poor, ill-organised, and the justice system moves like molasses. I first had this thought in Nigeria, and have repeated the observation in too many countries to name. Central and South America look likely to only sharpen the question.

  4. This is particularly germane in Mexico because Mexico City reminds me a lot of Delhi, albeit with somewhat worse roads, fewer people, and far cleaner sidewalks. And entire squadrons of police cars with visible guns every block or two in all the tourist-friendly areas.

  5. An interesting aspect I had never considered is that Mexico used to be bigger than the US, back when it owned most of the US’ current southwest. The country still seems to remember this in its bones. They’re 130 million people but feel much larger. The weight of most of Mesoamerican history centers it. They have a Place in History, writ in capital letters in the national psyche.

  6. The level, variety, and affordability of street food remains one of Mexico’s major success stories. Plentiful, tasty and cheap. I largely prefer it to restaurant food. Tlayudas ftw.

  7. Going through the Zocalo in Mexico City is a full-body immersive experience, and not one I care to repeat. On the other hand it is massive, disorganised in the best way, and sells anything and everything you can imagine. We got lost inside it and had to trek a dozen blocks in a randomly chosen direction to get out; we called an Uber and waited 20 minutes before realising it was never going to make it.

  8. This is also a plus, because just like the lack-of-zoning success stories of almost every country except the US, it makes Mexico City undeniably attractive to every American, who of course loves mixed-use, easily walkable cities as long as they don’t have to live in them.

  9. This exact reason also makes Cancun the worst place in Mexico I visited, because it’s built for tourism, has a hotel zone, and fails my “Civilisation Test”, which is the number of cafes within walking distance. In case you were curious, the winner was Oaxaca. Excellent coffee, and even better hot chocolate.

  10. Mexico City truly is a cultural capital. Incredible museums, great art, great food. The Museum of Anthropology in CDMX is the best museum I’ve seen (‘n’ is very high here).

  11. The main Cathedral is absolutely gorgeous. And because it was built on the remnants of the lake, you can see the effects of the shifting soil: the cathedral is a bit slanted. The styles are more eclectic than you’d find in a European city, and more ornate than I personally like, but worth seeing.

  12. Walking among the Aztec ruins next to the Cathedral is a quasi-religious experience because they’re so well preserved. The feathered serpent, Quetzalcoatl, is everywhere: encircling the plazas, emerging out of the walls, surrounded in parts by forms of corn and shells.

  13. As usual, I found it interesting that until recently, tearing down an ancient monument and building another gorgeous monument in its place was considered normal and not at all noteworthy. Something we can learn from.

  14. The Aztecs took their iconography and religion seemingly from Teotihuacan, which is an hour away. It’s an older civilisation, 600 years before the Aztecs, whose traces they clearly discovered and were influenced by but knew little about. They didn’t know who the builders were, what their society was like, what they called themselves, nothing. So they, rather whimsically, named it Teotihuacan, the place where gods came from, adopted many of their gods (or so it seemed to me), for instance naming the feathered serpent Quetzalcoatl, and generally lived a grand life of military conquest for a couple of centuries until Cortés arrived.

  15. I can understand why. Teotihuacan is extraordinary, and the Pyramid of Quetzalcoatl in particular is magnificent. Considering they didn’t have metal or pack animals, this is all the more impressive. The ability of humans to accomplish incredible things at scale never ceases to amaze me.

  16. I have not been able to make up my mind about the import of human sacrifice and how much it’s true/ false/ exaggerated compared to other historic cultures.

  17. Driving in Mexico City is very hard. Half the roads are tiny and don’t even look like roads. The green signs that show the roads and destinations often had three names none of which matched what Google maps said, so it was entirely visual navigation. I am now ready to drive in India.

  18. Mexico City also has cable cars as a core mode of public transport, which I hadn’t seen before, and they look wonderful, especially when you’re stuck in a traffic jam. I wish the US had these, or indeed any public transport. I tried to take one, but it was night and GPT reckoned that with the number of changes I’d need to make, the ride was not safe and I shouldn’t do it. So I had churros and cafe de olla instead.

  19. As my 8yo observed, the infrastructure got better as we went from Mexico City to Oaxaca then to Cancun. Curious.

  20. Oaxaca is a jewel of a place. Fits in your palm, highly walkable. High civilisation score. Great food. Great cathedral, though the Churrigueresque work was not the best of its type; it didn’t come together cohesively.

  21. The street food is plentiful and good. The speciality is mole, a particular type of sauce with mixed spices, and chapulines, fried grasshoppers. Apparently delicious when mixed into butter and eaten with bread.

  22. Oaxaca also had the highest density, originality and quality of art I’ve seen in a city since

  23. There’s plenty of prehispanic food and drink about. Tejate was meh to me, though a tejate latte I had at a market was extraordinary. Generally I remain a fan of modernity; we’ve perfected much of what history revered (and made it better).

  24. Monte Alban, an hour from Oaxaca, is worth visiting. Zapotec-built, on top of a hill. Gorgeous views all around. The guide told us that when it was built, and during its heyday, it used to get 9 months of rain a year, so water would flow down the sides of the hill through channels that were cut, supplying water from the priests down to the commoners. But the water dried up during a drought lasting a couple of decades, people lost faith in the priests’ ability to bring rain by praying to Tlaloc, and folks left. So it goes.

  25. The burial rituals were fascinating: they would put the body in a small enclosed space for 4 years, shut tight so no smells would escape, and then remove the bones and put them in an urn. If more people died, they had other spaces like this outside the house.

  26. The various pedestals and spaces had holes below for priests to show “magic”, disappearing and reappearing, as the guide told us. I am personally suspicious of the “people in the olden days were easily fooled” argument, but am in favour of the “everyone likes and believes in rituals” argument.

  27. The idea of worship starting with some seed of truth and then becoming a self-fulfilling prophecy, as those responsible for the worship take matters into their own hands, will never stop being funny.

  28. Cancun was the least interesting part of the visit. It is also, at least the hotel area, not at all pedestrian friendly. It’s big tourist resorts or nothing.

  29. Chichen Itza, a couple of hours from Cancun, was remarkable. Its architecture shows influence from Teotihuacan and from the Toltecs, and there clearly seemed to be trade and information routes between the lands. The Mayan civilisation, at least per my reading, stood for 3600 years, which is an absurdly long length of time.

  30. The cenotes are magnificent. Cenote Xkeken was a particular favourite; it’s mostly underground, with only a shaft of light coming down.

  31. The fact that the Mayans ruled for so long in such a dry place, with the main water source underground, feels quite bizarre. Though once you rationalise by the number of inhabitants, maybe it’s fine. Chichen Itza had around 40k people, 5x fewer than Teotihuacan, itself smaller than Tenochtitlan, and none of them had a decent water supply. I do not understand living life in hard mode for that long.

  32. One reason for the longevity of these civilisations might be survivorship bias, because by the time a lot of monuments get built without mechanised power, a couple of centuries have already passed. There’s a funny comparison to be made with California HSR here, where we’ve horseshoe-theoried our way to construction, but I leave that to someone else.

  33. The beaches near Cancun are very good, especially Cozumel, the island where Hernán Cortés first landed. Sting rays and nurse sharks played in the shallows next to our feet at El Cielo. But I’ll be honest, I still prefer the beaches of Southeast Asia. Thailand cannot be beaten.

  34. The number of civilisations that lived roughly side by side at different points in Central America is really impressive. I got GPT to make me multiple maps and websites to help understand this better.

  35. This trip without LLMs would’ve been about 30% as good. Everything from planning to asking about cafes and restaurants to dealing with zocalos to hotels and snacks and history and geography and pretty much anything we wanted to know or learn was made better by GPT, and sometimes Gemini.

  36. Again, the sheer number of extremely heavily armed police present in nearly all parts, including highways, was quite striking. They stopped cars at night, frisked folks, and generally were a loud and constant presence. Is this signaling or an actual deterrent? Unclear, but everyone states the importance of being sensible and safe.

  37. A substantial proportion of tourists to Mexico City and Oaxaca were Mexican, I think. As a consequence it’s not English-language friendly, though again with Google Translate and ChatGPT it’s not hard to travel.

  38. I was told by the tour guides multiple times not to call it the Gulf of America, as a form of protest. Everything is politics.

  39. Overall I really liked it, though I understand better why people who don’t have easy access to Asia, like Americans, like it so much more than I did. When it comes to food and markets and the general feeling of being in a “free” city with limited top-down strictures on life, this is the only real choice in North America without braving a really long flight. That seems to be the primary motivation for most Americans I know who have gone to Mexico, which seems quite shortsighted to me, because I know, or rather I feel, that for those things you simply cannot beat India or Japan, which are also significantly safer, and have great food and history. Similarly for beaches I’m still a fan of Thailand, though by a thin margin. But when you combine all of that with its long history and culture, Mexico is pretty great.


The Tragedy of the Agentic Commons

2026-01-21 00:16:09

Written with Alex, who writes here, and you should read him! The repo here.

This has become part of a series of essays, evaluating the new “homo agenticus sapiens” that is AI Agents. Part I was seeing like an agent. Part II is why the agentic economy needs money. And this is Part III.


Whitney Wolfe Herd, Bumble’s founder, recently described a future where your AI chats with potential matches’ AIs to find compatibility. Say what you will about AI being involved in your love life, but this is one domain where AI agents can potentially have large returns: the dating/marriage “market” is the epitome of the type of high-dimensional matching problem that Herbert Simon identified as impossible for people to optimize. Rather than optimising, Simon argued people engage in “satisficing”, i.e., settling for good enough.

Why would AI agents be useful here? Let’s start with how most markets work. Hayek’s big insight–outlined in what he called the economic problem of society–was that prices do an incredible amount of work. They compress a ton of information, such as preferences, costs, scarcity, and expectations, into a single number that acts as a sufficient statistic for value.

But prices work best when the transactions involve commodities. When you’re buying some oranges, the seller doesn’t particularly care what you’re going to do with them; you don’t need to convince him that you’ll take care of the fruit. The price does all the work in coordinating that transaction. Matching markets are conceptually different. You can’t just choose your spouse, your employer, or your college: you also have to be chosen by them. This is the domain that Al Roth, the 2012 Nobel winner for “the theory of stable allocations and the practice of market design,” spent most of his career studying. Roth showed that matching markets require careful institutional design; this design includes algorithms, timing, and the right rules to get the market to “clear.” His deferred-acceptance mechanisms now allocate medical residents to hospitals, students to schools, and kidneys to patients.

But the efficiency of matching markets hangs on the ability to elicit a person’s preferences, i.e., on people being able to express their rank orderings over potential options. But what if people’s preferences don’t fit in dropdown menus or are difficult to articulate on a standardised questionnaire? This is what Peng Shi studied in his excellent paper “Optimal Matchmaking Strategy in Two-Sided Markets.” He looks at online platforms that match customers to providers using a variety of matchmaking strategies, from search on one side of the market to centralised matching that allows for back-and-forth communication.

Shi found that centralised matching works beautifully when preferences are “easy to describe,” i.e., straightforward to elicit using standard questionnaires, but breaks down when they’re contextual, idiosyncratic, or otherwise difficult to express through standard techniques. This is why many platforms still make you search. You want a contractor who shows up on time and knows your budget–this is easy–but you also want someone who understands your tastes in postmodern living room design. Good luck expressing that on a dropdown web form.

Here is where Large Language Models come in. They are fantastic at turning any unstructured piece of information into structured signals for better matching. They’re also eminently scalable, enabling Coasean bargaining. But scaling brings with it more coordination problems: too many agents negotiating with too many other agents is noisy. So what type of institutional setup would make the most sense here, to make this work well?

That’s what we sought to test with our experiments. The question being, could we figure out how and whether LLMs can help in matching markets where preferences are “hard to describe”? Can LLMs actually elicit the dispersed, hard-to-articulate preferences better than standardised methods? And if they can, what happens when LLM-based agents are available to everyone in the market?

Now, there’s some recent work on the topic that suggests guarded optimism that this is possible. Very new work by Ben Manning, Gili Rusak, and John Horton shows that, when parsed through LLMs, short natural-language “taste descriptions” can be superior to standard questionnaires for eliciting preferences when the option set is large. They run an experiment where people write a few paragraphs about what they want in a job and then rank between 10 and 100 options (depending on the condition). Consistent with Simon’s conjecture, people’s ranking effort plateaus as the option set grows large; choice quality grows unstable as the consideration set increases. People get tired of ranking a ton of options and just start guessing. AI-parsed “taste descriptions” scale much better: once tastes are written down, the marginal cost of evaluating one more option is negligible for an AI agent. The advantages of AI-parsed matches are even higher in congested markets where people are more likely to be pushed.

But a theoretical paper by Annie Liang offers an important counterpoint in the case of a potentially complex two-sided matching market. She shows that when personality is sufficiently high-dimensional, meeting just two people in person beats searching over infinitely many AI representations. The noise in AI approximations compounds faster than the benefits of scale. This is a very cool result, and you should all read the paper in full–it’s that perfect type of economic theory that’s both conceptually rich and practically useful.

Ok, with that preamble…

Let’s run an experiment

We set up a simulated Hayekian marketplace with a whole bunch of digital shoppers and providers as AI agents.

  1. Preference elicitation: Knowledge is dispersed in each digital shopper’s “head”: customers know what they want and providers know what they can offer. We want to know how eliciting the preference–either through the standard intake questionnaire or high-dimensional text parsed by an AI agent–can change the market structure for optimal results.

  2. Mechanism interaction: When elicitation improves, can centralised matching beat search, and what are the conditions under which this happens?

  3. Scale: We then check what happens when everyone uses AI agents.

  4. Institutional design: Finally, we figure out the right institutional mechanism to solve the resulting problems, and to maximise welfare.

Preferences here are latent vectors in each agent’s head. Both the customer and provider agents have a true weight vector over some set of attributes (6 dimensions in this case). So elicitation changes the platform’s inferred w, not the true w. A standard intake is a structured form, and only exposes a few coarse priorities. The AI intake is free text, with back-and-forth chat, and can be parsed into the platform’s inferred weights by a couple of mechanisms: either a rule-based algorithm or an AI agent (GPT parsing).
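To make that concrete, here’s a minimal sketch in Python of the elicitation gap; the function names and noise model are our own illustration, not the repo’s code:

```python
import numpy as np

rng = np.random.default_rng(0)
DIMS = 6  # attribute dimensions, as in the experiment

def true_weights():
    """A customer's latent preferences: 6 non-negative weights summing to 1."""
    w = rng.random(DIMS)
    return w / w.sum()

def standard_intake(w, n_fields=2):
    """Structured form: only a few coarse priorities survive; the rest are lost."""
    coarse = np.zeros(DIMS)
    top = np.argsort(w)[-n_fields:]
    coarse[top] = w[top]
    return coarse / coarse.sum()

def ai_intake(w, noise=0.05):
    """Free text parsed by an LLM: recovers all dimensions, with parsing noise."""
    est = np.clip(w + rng.normal(0, noise, DIMS), 0, None)
    return est / est.sum()

# The platform picks providers using INFERRED weights, but realised match
# quality is scored against the TRUE weights, so better elicitation wins.
providers = rng.random((50, DIMS))
w = true_weights()
for intake in (standard_intake, ai_intake):
    w_hat = intake(w)
    best = providers[np.argmax(providers @ w_hat)]
    print(intake.__name__, round(float(best @ w), 3))
```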

Figure 1 has an abridged illustration of the design and some results. There’s an appendix at the end of the essay in case you want to check out the details of the experimental design. But without further ado, here are some…

Results

First, AI-assisted preference elicitation improves matches across the board.

Figure 1: Experimental design

Figure 2

Second, as shown in Figure 2, AI-based elicitation changes what type of market design works best, and the conditions under which centralised matching can beat search.

Specifically, “Search” and “centralised” are the two different matching protocols we tested. Search means customers iteratively message providers in some ranked order until the matches ‘stick’. Think about how you would find a plumber–message folks, talk to them, iteratively until one ‘fits’.

Centralised is where the platform computes the shortlist for you, and clears a match based on mutually acceptable terms.

Once dispersed knowledge can be elicited and compressed into usable signals, the platform can centrally clear the market rather than forcing users to search. When knowledge can’t be compressed, search dominates because it lets users do iterative, contextual refinement in the loop.

The core object is the ‘ROI boundary’. If the per-action attention cost is high enough, centralisation dominates–it just requires fewer actions. If the cost is low, search can dominate because it can “handle” more actions. This is the very idea of Coasean bargaining helping remove the boundaries of firms.
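Here’s a toy version of that boundary; the numbers are purely illustrative:

```python
# Which mechanism wins depends on the per-action attention cost (illustrative
# numbers, not the experiment's calibration).
def net_welfare(match_value, n_actions, cost_per_action):
    return match_value - n_actions * cost_per_action

# Search takes many iterative actions but refines contextually; centralised
# matching takes very few actions but fixes the shortlist upfront.
for cost in (0.001, 0.05):
    search = net_welfare(match_value=1.00, n_actions=12, cost_per_action=cost)
    central = net_welfare(match_value=0.95, n_actions=2, cost_per_action=cost)
    winner = "search" if search > central else "centralised"
    print(f"cost/action={cost}: search={search:.2f}, central={central:.2f} -> {winner}")
```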

So where does the value of LLM-based elicitation actually come from? Is it from the back and forth conversation, or the ability to parse large text? As described above, we prompted all of the customers to write some free text about things they like and whatnot, and then used some rules-based parsers and some LLM-based parsers. There’s also the option for conversational elicitation via chat.

We thought the AI agents’ ability to ask follow-up questions would be the game-changer. Turns out though (see Figure 3), most of the value comes from the AI agent simply inferring more signal from messy text compared to the signal in a rule-based parser. This is consistent with the work of Manning et al. that we discussed above. This may of course be something specific to our prompts—perhaps one could obtain further gains by explicitly instructing the AI agents to engage in structured back-and-forth with the customers, and to do so in contexts where this would be helpful, but this was not the case here.

This highlights the utility of LLMs for extracting (potentially high dimensional) signals from unstructured data. Back in the day OKCupid used to make people fill out 90-100 questions to help match them with their potential partners. With LLMs, they might’ve been able to get away with writing a short essay and getting their Agentic Cupid to pull out the relevant information. Whitney is certainly on to something.

Figure 3

But what if people don’t really know what they want? Does preference uncertainty matter? Whenever Rohit shops, he’s not sure what he wants before he goes into the store. There’s a lot of noise in the process. Alex is a pure satisficer: the first item that meets a (very low) threshold gets put in the cart (usually virtual), and off to checkout he goes.

We can test for that pretty easily here by introducing a bit of randomness into our shoppers’ heads. At least in our setting, injecting noise into preferences doesn’t change the AI’s ROI all that much. We can still do centralised matching and extract a lot of value from that mechanism—as long as the preference noise isn’t too cacophonous.

What if everyone uses an LLM agent?

We had originally set up a pretty small marketplace. The centralized mechanism at this scale can be computed and cleared so we can run the experiment. But what happens when the scale explodes, both in the number of options and the number of customers potentially using AI agents? This is the problem matching platforms like Upwork are trying to solve: the option set is absolutely huge, but so is the potential customer base.

Every time a customer opens up a marketplace like Upwork, the number of choices just on the front page makes it hard to remember what they came for. Ideally AI-delegated agents can solve this problem: the user speaks or writes down what they want to do, the AI agent pings the platform, and the user is presented with the match. But what if every potential shopper had their own AI agent who wanted to message the providers on the platform? That’s a lot of agents doing individualised message sending to the provider inboxes!

So as you increase the number of customers with AI agents, the level of congestion rises significantly. Each customer agent sends a query to a provider agent’s inbox, and the provider has to respond. Responding to all those agents takes a lot of compute. Here is what happens in our simulation (Figure 4): at full adoption, providers’ inboxes flood with 5x the requests, response rates collapse from 48% to 2%, and net welfare drops 88%.

Figure 4
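The congestion mechanism itself is simple enough to sketch; the capacity numbers below are ours, chosen only to show the shape of the collapse:

```python
# Each provider can answer only so many messages per round; requests beyond
# that capacity go unanswered, so response rates collapse as adoption grows.
def response_rate(n_adopters, queries_per_agent=5,
                  n_providers=20, capacity_per_provider=25):
    inbound = n_adopters * queries_per_agent
    answered = min(inbound, n_providers * capacity_per_provider)
    return answered / inbound if inbound else 1.0

for adopters in (20, 100, 500, 2000):
    print(f"{adopters:>5} adopters -> {response_rate(adopters):.0%} answered")
```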

Without institutions in place to scaffold the marketplace, a tragedy of the commons emerges: If everyone has an AI agent, it’s almost like nobody does. The paradox of plenty is real, and AI agents create their own version of Jevons paradox.

The need for institutions and scaffolding

What can fix this type of congestion? Prices!

As in a previous post–where we showed the importance of money in coordinating trade amongst AI agents–introducing a price mechanism recovers most of the lost welfare in matching. A vindication of Hayek’s deeper insight.

Specifically, we can introduce an exchange and money, such that the agents now have a pricing mechanism to signal their “strength of preference”. The idea is that complexity falls because not every provider and customer needs to message each other. Prices capture a lot of high-dimensional information in a single statistic and streamline its flow. As we saw in the barter_to_money simulation, complexity falls from O(n²) to O(n).
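In message-count terms, the gain looks roughly like this (a back-of-the-envelope sketch, not the simulation’s accounting):

```python
# Without prices, every customer agent messages every provider agent: O(n^2).
# With an exchange, each agent posts one quote to a hub: O(n).
def messages_bilateral(n):
    return n * n          # n customers x n providers

def messages_exchange(n):
    return 2 * n          # n bids + n offers, routed through the exchange

for n in (10, 100, 1000):
    print(f"n={n}: bilateral={messages_bilateral(n):>7}, exchange={messages_exchange(n):>5}")
```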

Figure 5

Pricing works! As shown in Figure 5, most of the welfare gains are recovered and the congestion issues are resolved. LLMs may lower the cost of expressing dispersed knowledge, but they don’t remove the need for institutional design to manage externalities. At least in our experimental simulation, the price system remains essential to solve the issue of complexity and congestion.

What did we learn?

If we think about an AI agent economy, we would want to know more about the mechanism that facilitates coordination. First, we have to ask, “If agents lower transaction costs, do markets just happen?”

In a previous post we looked at what would happen if there were a bunch of agents who had to interact with each other to trade, and it turned out that they don’t form markets spontaneously. In fact you have to do a fair amount of work before the agents are ready to interact.

Ok, if markets need scaffolding, what’s the minimal substrate that makes coordination scale? I.e., how will the agents coordinate amongst themselves? Will they be able to develop methods to do so themselves, e.g., through bilateral and multilateral negotiations, or will they need further help? It turns out that no matter how much you want to set things up just so, the agents will still need money and prices to trade efficiently. Even with lower transaction costs and larger levels of compute, the coincidence-of-wants problem still doesn’t disappear - Hayek remains vindicated.

In this current essay we explore whether LLM agents can make centralised matching more efficient–we should expect marketplace consolidation in categories that were previously too heterogeneous for algorithmic matching, e.g., wedding vendors, specialised consulting, creative services. We showed that in “thin” markets AI agents help facilitate better match quality through centralised mechanisms.

However, if everyone has an AI agent, we still need a pricing mechanism to solve the resulting congestion and complexity problems. Congestion is a serious threat at scale!

So what is the broader takeaway from this essay, and from the whole series of essays? For us, it’s that AI agents work remarkably well when institutional design facilitates the interactions and transactions. Since direct instruction for every eventuality is impossible, the only way to make AI agents behave at scale is to design the right scaffolding to facilitate coordination and exchange. This involves the creation of markets, and yes, money! If we can learn to design the “institutions” within which the agents operate, then we can have them do far more of the complex tasks we want. Autonomy, that’s the true prize!


Appendix: More about the design

Warning: wonky.

We constructed a simulated marketplace where customers seek service providers (contractors) across task categories that vary in how difficult preferences are to articulate. Each customer is seeded with true preferences represented as a 6-dimensional vector of weights (summing to 1) over provider attributes. A match is formed when both sides’ true values clear a threshold.

“Easy” categories include things like TV mounting or furniture assembly; preferences in these categories can be mapped cleanly onto standard form fields such as price, availability, and distance. “Hard” categories, such as the ability to repair a historic staircase or a complicated asbestos remediation with specific guidelines, involve preferences that are more difficult to elicit using standardised questionnaires. We then see whether the ROI threshold changes based on how well the models can “elicit the true preferences” of the underlying actors.

The experimental intervention targets the preference-inference pipeline: how customer preferences get translated into data the platform can act on. The experiment varies the intake method (standard structured forms versus free-text descriptions parsed by an LLM) crossed with the matching mechanism (decentralised search where customers browse and choose, versus centralised assignment where the platform matches algorithmically). Match quality is computed as the dot product of the customer’s true preference weights and the matched provider’s attributes, minus any search costs incurred. All of this is summarised in Figure A1 below.

Figure A1: Experimental Design
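To make the scoring concrete, here is a minimal sketch of the match-quality computation described above; the threshold and cost numbers are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

w_true = rng.dirichlet(np.ones(6))   # customer's latent weights, summing to 1
provider = rng.random(6)             # the matched provider's attribute vector
search_cost = 3 * 0.02               # e.g. three search actions at 0.02 each

value = float(w_true @ provider)     # dot product of true weights and attributes
matched = value > 0.5                # stand-in for the two-sided threshold rule
quality = value - search_cost if matched else 0.0
print(f"matched={matched}, quality={quality:.3f}")
```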

Will money still exist in the agentic economy?

2025-12-19 22:03:27

Written with Alex Imas, subscribe to his blog here!

This has become part of a series of essays, evaluating the new “homo agenticus sapiens” that is AI Agents. There was Part I, seeing like an agent. This is Part II. And Part III on what happens when we all have AI agents.


Sometimes I forget, but we live in a future transformed by information technology across pretty much every aspect of life. But one thing has remained largely the same: we still live in a world where the vast majority of economic transactions are done by people. If you want to buy a car, the process is largely the same as it was 50 years ago. You go down to the dealership and negotiate the best price that you can. Sure, you may have some extra information from doing research on the web beforehand - it’s certainly much easier to do comparison shopping with a supercomputer in your pocket - but the basic process of transacting with another human being has largely stayed the same.

One change that’s likely to come, though, is that there will soon be 10x, 100x, maybe more AI agents working in the world than there are people. And as we have lots of AI agents working on our behalf, doing all forms of work, there is a thesis that many of the frictions and information asymmetries that people face in markets may disappear if economic transactions are delegated to aligned agents, leading to a so-called Coasean singularity.

We’re not there yet though. Today’s agents are simply not good enough yet to act sensibly or without strict instructions. Many of the features of human-mediated markets still seem to be reproduced in AI agentic interactions. But as online spaces adapt to the promise of AI technology, it seems natural to think of how agentic markets will be organized. In a future world where we do have billions of AI agents, how would they coordinate with each other? What kind of coordination mechanisms would be needed? What institutions are likely to emerge?

And one possibility is particularly intriguing: will coordination still require money? Not in the sense of US dollars, but a shared medium of exchange and a hub/clearing protocol.

Money, Money, Money

“Why money” has occupied economists going back to Adam Smith, who framed cash as solving what has since been termed the coincidence of wants. To see what we mean, consider a pure barter economy. Let’s say Alex is an apple farmer and Rohit raises chickens. If Alex wants chickens and Rohit wants apples, then Alex can just walk over to Rohit’s house with a bushel of apples and get some chickens in return. Simple. But what if Alex wants chickens but Rohit wants an electric toothbrush - he has no need for apples right now. Then to get the chickens, Alex would need to find a person who is willing to trade an electric toothbrush for his apples, and then come back to Rohit for a trade.

This would still all be fine if there was just one other person to visit and trade with, but what happens in a large market, with many (many) people who potentially have both an electric toothbrush to trade and want Alex’s apples? In order to trade, Alex needs to happen to find a person that both 1) has what Alex wants and 2) wants what Alex has. As very nicely shown in a paper by Rafael Guthmann and Brian Albrecht, the need to satisfy this coincidence of wants through finding matches creates complexity that quickly blows up as the size of the market increases. If the market is even moderately large, this complexity makes even basic transactions essentially impossible.
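Here’s a toy calculation of how rare a direct swap becomes; this is our construction, not Guthmann and Albrecht’s model:

```python
import random

# With n traders, each holding one distinct good and wanting a random other
# good, the chance that a random pair can trade directly is ~1/(n-1)^2.
def double_coincidence_rate(n, trials=20000):
    hits = 0
    for _ in range(trials):
        want = [random.choice([g for g in range(n) if g != i]) for i in range(n)]
        a, b = random.sample(range(n), 2)
        hits += (want[a] == b) and (want[b] == a)  # trader i holds good i
    return hits / trials

for n in (4, 8, 16, 32):
    print(f"n={n:>2}: {double_coincidence_rate(n):.4f}")
```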

Ergo money. While the origin of money is a hot topic of debate (e.g., see David Graeber’s excellent book Debt: The First 5000 Years), the role of money in a competitive market is to solve the coincidence of wants. Money acts as a special type of good called the numeraire, where its only role is that it can be exchanged for other goods at pre-determined quantities. These quantities are reflected in the prices that each good is worth.

Going back to Alex and Rohit: one way to solve the coincidence of wants would be for Alex to sell his apples at a special place called a market, and then use the money to purchase Rohit’s chickens. Rohit can then use that money to buy an electric toothbrush, or indeed any other thing his heart desires. Money eliminates the need for people to coordinate their transactions based on their current endowment (what they have) and preferences (what they want).

Bring on the agents

Okay, so money is necessary to coordinate transactions in an economy with people. This is largely because each individual can’t hope to have enough information on what everyone else has and wants to reliably engage in market transactions. Alex and Rohit are as yet, sadly, mortals.

But will this be the case for AI agents?

Agents do not have the same computational constraints as human beings. In theory, it may be possible to solve the search problem where the coincidence of wants becomes a non-issue. In that case, the agentic economy could eliminate the need for a key institution of the human economy. We decided to run an experiment to find out.

The experiment

First, the repo here. We have N agents with N goods; each starts with its own good and wants another. There are multiple rounds, with one action per agent per round. Agents decide their course of action via structured JSON, and success simply means you get what you want.
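A stripped-down sketch of that loop, with field names and a matching rule that are our simplifications (the repo’s schema will differ):

```python
import json
import random

N = 8
wants = list(range(N))
while any(w == i for i, w in enumerate(wants)):   # everyone wants someone else's good
    random.shuffle(wants)
agents = [{"id": i, "has": i, "wants": wants[i]} for i in range(N)]

def act(agent):
    """Stand-in for an LLM call that returns a structured JSON action."""
    return json.dumps({"agent": agent["id"], "counterparty": random.randrange(N)})

for round_no in range(5):
    for p in (json.loads(act(a)) for a in agents):  # one action per agent per round
        a, b = agents[p["agent"]], agents[p["counterparty"]]
        if a["wants"] == b["has"] and b["wants"] == a["has"]:  # double coincidence
            a["has"], b["has"] = b["has"], a["has"]
    satisfied = sum(a["has"] == a["wants"] for a in agents)
    print(f"round {round_no}: {satisfied}/{N} satisfied")
# Only mutual swaps ever clear here, which is exactly why pure barter stalls at scale.
```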

The first question is about a pure barter economy. We explore whether LLM agents can achieve efficient allocations through barter at any scale, i.e., by engaging in multiple bilateral negotiations to achieve gains from trade. The agents in the experiment have no real shortage of time. If this works, then Coasean bargaining should be straightforward; goodbye money!

The table below has the results. What do we see? When the scale is small - when Alex just has to worry about coordinating with Rohit - all of this works. But as the number of agents grows, things start to get really difficult. By the time we get to even 8-12 agents, the success rate drops below 50%. And this is the absolute simplest setting.

Perhaps this should be expected. The problem is still O(n²) in complexity, which grows exceptionally fast as the number of agents grows. And if this isn’t just bilateral, but starts to include multiparty negotiations, it might become O(n!), which is far bigger for any n greater than 3.

Ok, let’s make it a bit easier for the agents. If they can’t coordinate by talking to each other, then, since they are agents anyway, we should be able to give them omniscience. Enter Central Planning. There has been plenty of work before on the limits of bilateral negotiations, but we can test how well a “hub” structure can help. Does having a central planner set the stage for better performance?

As the results table shows, central planning makes things slightly better, but we are still very much in a world of Hayekian troubles. A hierarchy without a numeraire just isn’t enough.

Ok, we can continue looking at our human history to see what else we can do. In Debt, David Graeber argues that money emerged at least in part through state power, to enforce the paying of taxes in order to fund foreign wars. Before this, he argues, IOUs and bartering seemed to have worked just fine to manage the economy; the IOUs themselves became a sort of numeraire that could be traded in order to solve the coincidence of wants.

So let’s introduce Credits and IOUs. We can give the agents the ability to issue each other IOUs and see whether providing the basics of credit allows them to come up with better ways to interact with each other.

This still didn’t help as much as we thought. There were a few segments where the transactions started happening, but they really didn’t start to work. Or scale.

Most interestingly, the concept of money didn’t emerge from this, not organically. IOUs didn’t become money. Even though, in conversation, LLMs all know that this is the smart thing to do, it did not emerge.

This was a bummer, because, as with the prior research, what this shows is that AI agents do not yet come with the natural instinct humans have to turn IOUs into a numeraire that acts as a stand-in for money. They don’t even come with the same set of ideas as this sea otter.

Ok, let’s take the final step and actually introduce Money. We do this by creating an exchange where the agents can post bids and offers, and look at market outcomes. The results are stark: markets resolve at a success rate of 100%, and much faster than through other mechanisms, at the rate of O(n).
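A minimal sketch of why the exchange clears so easily, using a fixed posted price as a simplification (the repo runs actual bids and offers):

```python
# Every trade routes through the numeraire: sell your endowment to the
# exchange, then buy the good you want. No matching counterparty needed.
N = 8
PRICE = 1.0
agents = [{"id": i, "has": i, "wants": (i + 1) % N, "cash": 0.0} for i in range(N)]

listings = {}
for a in agents:                       # each agent lists its good: one offer each
    listings[a["has"]] = a["id"]
    a["cash"] += PRICE                 # sale clears at the posted price
    a["has"] = None

for a in agents:                       # each agent buys what it wants: one bid each
    if a["wants"] in listings and a["cash"] >= PRICE:
        a["cash"] -= PRICE
        a["has"] = a["wants"]

print(sum(a["has"] == a["wants"] for a in agents), "of", N, "satisfied")  # 8 of 8, in O(n)
```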

One note is that this result presumes the exchange works without a hitch. In reality there will be friction coming from liquidity constraints, differential compute resources, etc. For example, in the N=8 run, the hub handled 23 inbound + 23 outbound messages and prices stayed fixed. And if regulations require that AI agents use different types of country-specific currencies, then exchange rates will complicate things further.

Discussion

To sum up: an agentic economy doesn’t emerge automatically, even with SOTA agents (who really should know better). Barter and central planning remain inefficient and infeasible, and money does not emerge organically even when credit and IOUs are introduced. At least in our setting, an agentic economy needs more top-down engineering to become efficient.

Previous work on agent-based modeling has explored what kind of emergent economic realities we are likely to see with rule-based agents interacting. The world of AI agents is fundamentally different. These agents act based on a huge corpus of human knowledge, with the underlying LLM models able to solve incredibly difficult problems on their own. These agents can plan, they can negotiate, they can code. And even with all this knowhow at their disposal, it’s interesting to see that they still appear to require top-down institutions to create an effective and efficient market.

As we transition to a more agentic economy, a key part of ‘getting ready’ for that world is setting up institutions for the agents. Institutions like:

  • Identity and roles

  • Settlement and payment

  • Pricing and quote formats

  • Reputation

  • Marketplaces and clearinghouses

This is by no means exhaustive, but we wager that mechanism design for multi-agent work is going to be a rather fertile area of research for a while. Humanity went through millennia of evolution to figure out the right societal setup that lets us progress, that lets us build a thriving civilisation.

It is both necessary and inevitable that the world of AI agents will also need the equivalents, though the emergence of such institutions will likely be much faster given the millennia of human knowledge that we’ve already amassed.

Github repo here.


Seeing like an agent

2025-12-08 23:02:20

This has become part of a series of essays, evaluating the new “homo agenticus sapiens” that is AI Agents. This is Part I, seeing like an agent. Part II is why the agentic economy needs money. And Part III on what happens when we all have AI agents.

One of the books that I loved as a kid was Philip Pullman’s His Dark Materials. The books themselves were fine, but the part I loved most was the daemons. Each human had their own daemon, uniquely suited to them, that would grow with them and eventually settle into a form that reflected their personality.

I kept thinking of this when reading the recent NBER paper by John Horton et al about The Coasean Singularity. From their abstract:

By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design.

Basically they argue that if you actually had competent, cheap AI agents doing search, negotiation, and contracting, like your own daemon, then a ton of the Coasean reasons firms exist disappear, and a whole market-design frontier reopens.

This isn’t a unique argument, though it’s well done here. I’ve made it before, as have others, including Seb Krier recently here, Dean Ball, and many more. The authors even talk about tollbooths like Cloudflare’s, and agent-only APIs and pages.

But while reading it I kept thinking that this is no longer a theoretical question: we now have decent AI agents and we should be able to test it. It’s something I’ve been meaning to do for a while, so I did. The question: if we wire up modern agents as counterparties, do we actually see Coasean bargains emerge? Repo here.

The punchline is that AI agents did not magically create efficient markets. And they also kinda fell prey to a fair few human pathologies, including bureaucratic politics and risk aversion.

Experiment 1: An internal capital market

The first way to test this was to just throw them into a simulated company and see what happened. So I set up four departments - Marketing, Sales, Engineering and Support - and said they could all bid for budget to do their jobs. A standard internal capital market, where departments submit bids and projects get funded until the budget is exhausted.
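The mechanism itself is simple enough to sketch. This is a stylised version of the setup, with invented numbers; it is not the repo’s exact code.

```python
# Stylised internal capital market: departments submit bids, and projects
# are funded greedily until the budget runs out. Illustrative numbers only.
def allocate(bids: list[dict], budget: float) -> list[dict]:
    """bids: [{'dept': ..., 'project': ..., 'cost': ..., 'claimed_value': ...}]"""
    funded = []
    # Rank by claimed value per dollar. This is exactly what lets GTM's
    # articulable, immediate value crowd out Engineering's diffuse value.
    ranked = sorted(bids, key=lambda b: b["claimed_value"] / b["cost"],
                    reverse=True)
    for bid in ranked:
        if bid["cost"] <= budget:
            funded.append(bid)
            budget -= bid["cost"]
    return funded

bids = [
    {"dept": "Sales", "project": "demo tooling", "cost": 30, "claimed_value": 90},
    {"dept": "Marketing", "project": "campaign", "cost": 40, "claimed_value": 100},
    {"dept": "Engineering", "project": "infra hardening", "cost": 50, "claimed_value": 40},
]
print(allocate(bids, budget=100))  # Engineering's bid is starved out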

If the promise of Hayek holds and markets emerge when information flows freely, then we should see this work. And it would be much better than the command-and-control method by which we decide this today.

Well, it didn’t work. Marketing and Sales accumulated political capital. Engineering posted negative utility for most quarters. The market we set up systematically funded customer-facing features and starved infrastructure work. It’s like Seeing Like A State all over again.

I think this was because GTM-type departments could articulate immediate customer value, whereas Engineering’s value kept feeling preventative or diffuse.

It’s a bit frustrating to see that the models retain human foibles, since this is effectively Goodhart’s Law. When you measure departmental utility and fund accordingly, and you let the agents argue on their own behalf, you start to see negative externalities for core functionality.

So I added countermeasures: risk flags on features and veto power over “dangerous” work, plus shared outage penalties (if you ask for a risky feature and everything crashes, you pay for it too). When I ran that, outages did happen. GTM departments observed this and tempered their bids, though only a little.

Engineering utility, however, still stayed low. GTM could discount future outages and gamble on “maybe it won’t break” for its immediate wins, while Engineering couldn’t proactively push folks into infrastructure investments. The pattern is hardly dissimilar to what you see in real organisations.
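Roughly, the shared outage penalty changes the sponsoring department’s payoff like this. Again a sketch with invented probabilities and payoffs, not the repo’s exact accounting.

```python
# Sketch of the shared outage penalty: a department sponsoring risky,
# customer-facing work now shares the cost when the system falls over.
# Probabilities and payoffs are invented for illustration.
def expected_payoff(feature_value: float, outage_prob: float,
                    outage_cost: float, share: float) -> float:
    """Expected payoff to the sponsoring department under shared penalties."""
    return feature_value - outage_prob * outage_cost * share

# Without the penalty (share=0) the risky feature always looks good;
# with it, GTM should temper its bids. In the runs, it did, a little.
print(expected_payoff(100, outage_prob=0.3, outage_cost=200, share=0.0))  # 100.0
print(expected_payoff(100, outage_prob=0.3, outage_cost=200, share=0.5))  # 70.0
```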

The truly interesting part was that the agents perfectly replicated the dysfunction of real companies. Onwards.

Experiment 2: External markets - IP licensing

This was the most interesting part. The best way to see Coasean bargaining come true is to set up an external market, here for cross-firm technology licensing: twenty firms and thirty software modules. Each firm has some internal capabilities but can also license tech, so buy-vs-build becomes a much cleaner decision for AI agents than it ever is for humans in reality. A classic setup, and the payoffs should be excellent. Or so I thought.
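The core decision each firm faces is easy to write down, which is what makes the result below so striking. An illustrative sketch of my own, with invented numbers:

```python
# The buy-vs-build decision each of the 20 firms faces for each of the
# 30 modules. Numbers are invented; the point is that licensing should
# often dominate, so zero trades is a surprising outcome.
def should_license(build_cost: float, license_price: float,
                   capability: float) -> bool:
    """License when it's cheaper than building, adjusted for in-house skill."""
    effective_build_cost = build_cost / max(capability, 0.1)
    return license_price < effective_build_cost

# A firm weak at some module (capability 0.4) facing a fair license price:
print(should_license(build_cost=80, license_price=120, capability=0.4))  # True
```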

The first run had zero deals. Every firm decided to build everything internally. They understood the rules, saw potential counterparties, and had budget to trade, but still they chose autarky.

Okay, so I added reputation systems, post-trade verification, penalties for idleness, bonuses for successful deals, counterparty hints, even price history. Basically the kitchen sink.

Still zero trades.

This is the perfect setup as per the paper. Transaction costs effectively zero. Perfect information. Aligned incentives. Etc etc. The agents just didn’t care to trade! Because of very high Knightian uncertainty aversion (I assume), or some heavy pretraining prior that firms mainly build, not trade.

So I mandated ask/bid submissions: if you don’t post prices, defaults are generated for you. Profits were then directly coupled to next quarter’s budget. And I even gave explicit price hints, because the agents clearly couldn’t, or wouldn’t, discover equilibrium without them.
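Concretely, the forcing looked something like this. A sketch of the shape of the mechanism, mine rather than the literal repo code; the markup and coupling parameters are invented.

```python
# Sketch of the forcing mechanism: firms that stay silent get default quotes
# posted on their behalf, and realised profit feeds next quarter's budget.
DEFAULT_MARKUP = 1.2  # invented default: cost plus 20%

def force_quotes(posted: dict, costs: dict) -> dict:
    """Every firm ends up with an ask for every module it owns.

    posted: {firm: {module: ask}} - whatever the firm bothered to submit
    costs:  {firm: {module: cost}} - the firm's internal production costs
    """
    quotes = {}
    for firm, firm_costs in costs.items():
        for module, cost in firm_costs.items():
            # Use the firm's posted ask if present, else generate a default.
            quotes[(firm, module)] = posted.get(firm, {}).get(
                module, cost * DEFAULT_MARKUP)
    return quotes

def next_budget(budget: float, profit: float, coupling: float = 0.5) -> float:
    """Profits are directly coupled to next quarter's budget."""
    return budget + coupling * profit
```

Once silence stops being free and profit feeds the budget, trading becomes the path of least resistance, which is roughly when the deals appeared.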

Now we start to see trades! Success! Three deals per round. Welfare is still far below the market optimum, but that’s possibly also because we haven’t optimised them yet.

But by now it wasn’t a market in the Hayekian sense. It’s no longer voluntary. We’re forcing the agents to trade, and then they do the sensible thing.

Since it worked well for well-behaved participants, I also ran a robustness check: create adversarial firms and see whether the market still functions. And it does. Adversarial sellers captured much of the surplus, i.e., fairness is expensive. It’s either weak strategic sophistication or the agents are just nice and passive by default; I don’t know which.

Experiment 3: Second price auctions

The third experiment was a check on whether the models behave according to their beliefs. Vickrey auctions are sealed-bid second-price auctions: the winner pays the second-highest bid. This makes bidding your true valuation the dominant strategy.
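The mechanism and the efficiency check fit in a few lines. This is my own sketch of the textbook mechanism, not the repo’s harness, and the values are illustrative.

```python
# Sealed-bid second-price (Vickrey) auction: highest bidder wins, pays
# the second-highest bid. Truthful bidding is the dominant strategy.
def vickrey(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, price

# Allocative efficiency: did the item go to the bidder who values it most?
values = {"a": 10.0, "b": 7.0, "c": 4.0}  # true values (illustrative)
winner, price = vickrey(values)           # truthful agents bid their values
efficiency = values[winner] / max(values.values())
print(winner, price, efficiency)          # a 7.0 1.0
```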

And they did. Allocative efficiency was 1. This is a bit of a control group, since the models must be smart enough to know the dominant strategy. I added “profit max only” personas and collusion channels, just to check, and the behaviour still looked like standard truthful Vickrey bidding.

This tells us that they’re smart enough to do the right thing, but also that given a messy environment with underspecified mechanisms, which is most of the real world, they default to passivity or autarky.

I also ran a bargaining test with five players, which asks the models to divide a surplus and negotiate with each other over how to split it. The players can see a broadcast and each other’s proposals, and after round 1 they can DM each other. I even made one of the players adversarial. And still the splits remained near-equal, very far from the Shapley vector. They are norm-conforming. Models are highly self-incentivised to be fair!
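For reference, the Shapley vector the agents ignored can be computed directly. Here is a three-player toy with a characteristic function of my own invention; the experiment’s actual payoffs differ, but the contrast with equal splits is the same.

```python
# Shapley values for a toy 3-player surplus division, to contrast with the
# near-equal splits the agents settled on. The characteristic function v
# is invented for illustration.
from itertools import permutations

def shapley(players: list[str], v) -> dict[str, float]:
    """Average each player's marginal contribution over all join orders."""
    vals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            vals[p] += v(frozenset(coalition)) - before
    return {p: x / len(perms) for p, x in vals.items()}

# One strong player, two weak ones: equal splits are far from fair here.
v = lambda s: 9.0 if "a" in s else (3.0 if s else 0.0)
print(shapley(["a", "b", "c"], v))  # {'a': 7.0, 'b': 1.0, 'c': 1.0}
```

An equal split would give everyone 3.0; Shapley says (7, 1, 1). The agents pick something close to the former even when the game looks like the latter.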

Synthesis

Four claims were tested. To summarise:

  1. AI lowers transaction costs so markets emerge spontaneously - False

  2. With mechanism design, AI-mediated markets can function - True, but costly (required forced participation with Gosplan-ish price hints)

  3. Internal markets improve on hierarchy when coordination costs fall - False (GTM dominates Engineering even with full information)

  4. AI agents play fair in functioning markets - Mixed (adversarial agents extract rents, but agents are mostly fair)

The takeaway from these experiments is that for AI agents to act as sufficiently empowered Coasean bargaining agents, for them to become a daemon on my behalf, they need to be substantially empowered and explicitly instructed. They do not act the way humans act: they are much fairer and much more passive than we would imagine.

Markets don’t form spontaneously. Markets form under coercion, but are pretty thin. And when markets exist, strategic sophistication determines who wins, depending on how the agents are set up. It shows that alignment problems don’t disappear just because the agents can negotiate with each other. This is pessimistic for the “AI dissolves firms” narrative, but optimistic for the “AI can enable better institutions” narrative.

The Coasean Singularity paper argues that AI lowers transaction costs but that the gains require alignment and mechanism design, which is what I empirically tested here. The results confirm its strong form: the reduction in transaction costs was nice, but mechanism design was needed to get an actual market.

Also, the fact that we needed to couple their budgets, so the AIs would sing from the same hymn sheet, is important: it means any multi-agent design we create would need a substrate, like money, to help them coordinate.

Now, some of this is that the intuitions we have built up over time, both from other humans and from stories, assume the agents have enough context at all times on what to do. I see my four-year-old negotiating with his brother to get computer time, and by the time he’s a bargaining agent facing some hapless corporation he will have had decades of experience with this. Our models, on the other hand, have millions of years of subjective experience of seeing negotiation, but zero experience of feeling that intense urge to negotiate to watch Prehistoric Planet with his brother.

Perhaps this matters. These complex histories can get subsumed in casual conversation into a seemingly innocuous term like “context”, and maybe we do need to stuff a whole library into a model to teach it the right patterns or get it to act the way we want. The daemons we do have today aren’t settled in forms that reflect our interests out of the box, though they know almost everything about what it is like to act as if they share those interests.

But what the experiments showed is that this is far from obvious. Coase asked why firms exist if markets are efficient, and answered that it’s because of transaction costs. The experiments here ask: even with zero transaction costs, why do firm-like structures still emerge?1

And if we do end up doing that, we might have just rediscovered the reason why firms exist in the first place, the very nature of the firm. Even as we recreate it piece by instructive piece.

Github repo here


1

And when we are able to roll the AI agents out, we will get firms that are more programmable, more simulable and more explicitly mechanism-designed than human firms ever were.

Contra Scott on AI safety and the race with China

2025-12-02 09:12:23

Scott has a really interesting essay on the importance of AI safety work, arguing it will not cause the US to fall behind China, as is often claimed. It’s very well written, characteristically so, and well argued. His argument, in a nutshell (I paraphrase), is:

  1. US has ~10x compute advantage over China

  2. Safety regulations add only 1-2% to training costs at most

  3. China is pursuing “fast follow” strategy focused on applications anyway

  4. Export controls matter far more (could swing advantage from 30x to 1.7x)

  5. AI safety critics are inconsistent - they oppose safety regs but support chip exports to China

  6. Sign of safety impact is uncertain - might actually help US competitiveness

I quite like this argument because I actually agree with all of the points, mostly anyway, and yet find myself disagreeing with the conclusion. So I thought I should step through my disagreements, and then what my overall argument against it is, and see where we land up.

First, the measurement problem

Scott argues that the safety regulations being discussed in the US add only 1-2% overhead. This is built off METR and Apollo’s findings, around $25m for internal testing, contrasted with $25 billion for training runs. All the major labs also already spend enormous sums on intermediate evaluations, model behaviour monitoring and testing, and primary research to make models work better with us, all classic safety considerations.

This only holds if the safety regulation based work, hiring evaluators and letting them run, is strictly separable. Which is not true of any organisation anywhere. When you add “coordination friction”, you reduce the velocity of iteration inside the organisation. Velocity here really really matters, especially if you believe in recursive self improvement, but even if you don’t.

This is actually visible in ~every organisation known to man. Facebook has a legal department of around 2,000 employees, doubled since pre-Covid, out of a total employee base of 80,000. Those 2,000 are quite likely not disproportionately expensive relative to Facebook’s actual operating expenditure. But the strain they put on the business far exceeds the 2.5% cost they put on the output. There’s a positive side to this argument: they also prevent enough bad things from happening that the slowdown is worth it. Presumably Facebook themselves believe this, which is why the department exists, but it is very much not as simple as comparing the seemingly direct costs.

The argument that favours Scott here is maybe pharma companies.

This gets worse once you think about the 22-year-old wunderkinds that the labs are looking to hire, and wonder whether they’d be interested in more compliance, even at the margin.

China is a fast follower

The argument also states that China is focused on implementation and a fast-follow strategy, because they don’t believe in AGI. I think that’s an awfully load-bearing claim, and it feels quite convenient. China is also known for strategic communication in more than one area, where what they say isn’t necessarily what they focus on.

As Scott notes, Liang Wenfeng of Deepseek has explicitly stated he believes in superintelligence, which in itself contradicts the argument that they only care about the applications layer. If China truly cared only about deployment, as is claimed, then having true believers as heads of its top labs is, if anything, evidence against the “they’re just fast followers” argument.

They’re leaders in EVs, solar panels, 5G, fintech and associated tech, probably quantum communications, an uncomfortably large percentage of defense related tech, seemingly humanoid robots, the list is pretty long. This isn’t all just fast followership, or at least even if it is, it’s indistinguishable from the types of innovation we’re talking about here.

Again, this only really matters to the extent you think recursive self improvement is true, or that China won’t change its POV here very fast if they feel it’s important. The CCP has an extraordinary track record of redirecting capital in response to perceived strategic opportunity (and overdoing it). That means “they don’t believe in AGI” is an unstable parameter. Even if the true breakthrough comes from some lab in the US, or some tiny lab at Harvard, it will most likely not stay under wraps for years while the outcomes compound.

The AI safety critics are sometimes bad faith

This is true! There’s a lot of motivated reasoning, which ties itself in knots to argue things like “to beat China we have to sell them the top Nvidia chips, so they don’t develop their own chip industry and cut the knees off another one of our top industries”. Liang Wenfeng has also said that his biggest barrier is access to more chips.

That said, here my core problem is that I am unsure about which aspects of the regulations being proposed are actually useful. Right now they ask for a combination of red-teaming (to what end), hallucination vs sycophancy (how do you measure), whistleblower protections, bias (measurement?), CBRN (measurement delta vs pure capability advance), observability for chip usage (hardware locks?), and more. These assume a very particular threat surface.

The Colorado AI Act focuses on algorithmic fairness and non-discrimination. Washington HB 1205 focuses on digital likeness and deepfakes. AB 2013 in California on disclosing training data for transparency. Utah’s SB 332 says an AI has to disclose it’s an AI when you’re talking to a chatbot. These are all quite different, as we can see, and will require different answers in both implementation and compliance. Others have written about this patchwork cogently and cohesively.

Many of these ideas are sensible in isolation, but many are also extremely amorphous. Regulations are an area where I am predisposed to think that unless they’re highly specific, with directly visible ROI, it’s better not to get caught in an invisible graveyard. The regulatory ratchet is real, as Scott acknowledges. Financial regulation post-2008, aviation post-9/11, the FDA… We start with common sense guardrails that create an apparatus that then expands.

Sign uncertainty

It is definitely true that having a more robust AI development environment might well propel the US forward vs China. Cars with seatbelts beat cars without seatbelts. Maybe lack of industrial espionage means the gains from US labs won’t seed Chinese innovation.

It should be noted though that the labs already spend quite a bit on cybersecurity. Model weights are worth billions, soon dozens of billions, and are protected accordingly. Should it be made stronger? Sure.

It should be noted, underlined even, that this is true only insofar as Chinese innovation is driven by industrial espionage or weight stealing. Right now that definitely does not seem to be the case. What is true is that deployment, filing off the edges and making the products much nicer to use, especially via posttraining, is something Western labs do a much better job of. Deepseek, Qwen or Kimi products are just not as good, and differentially worse than their underlying models.

So … now what.

Scott’s argument makes sense, but only in a particular slice of the possible future lightcone. For instance, we can sort of lay down the tree of how things might shake out. There are at least 5 dimensions I can think of offhand:

  1. Takeoff speed

  2. Alignment difficulty

  3. Capability distribution (oligopoly, monopoly etc)

  4. Regulations’ impact on velocity

  5. China’s catch up timeline

You could expand this by 10x if you so chose, and things would get uncomfortably diverse very quickly. But even with this, if we split each of these into 4 coarse buckets (easy, moderate, hard, impossible), you get 4^5 = 1024 worlds. I asked Claude to simulate these worlds and choose whatever priors made sense to it, and it showed me this:

I’m not suggesting this is accurate; after all, there could be a dozen more dimensions, the probability distribution might be quite different, and change in one variable might impact another. But at least it gives us an intuition for why the arguments are not as straightforward as one might imagine, and it’s not a fait accompli that “AI safety will not hurt the US in its race with China”, and that’s assuming the race is even a good metaphor!
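For what it’s worth, enumerating the grid itself is trivial; it’s the priors that do all the work. A minimal sketch, with uniform priors as a placeholder (which is surely wrong):

```python
# Enumerate the 4^5 = 1024 scenario worlds. The contestable part is the
# prior over each dimension; the uniform prior below is a placeholder.
from itertools import product

DIMENSIONS = {
    "takeoff_speed": ["easy", "moderate", "hard", "impossible"],
    "alignment_difficulty": ["easy", "moderate", "hard", "impossible"],
    "capability_distribution": ["easy", "moderate", "hard", "impossible"],
    "regulation_velocity_impact": ["easy", "moderate", "hard", "impossible"],
    "china_catchup": ["easy", "moderate", "hard", "impossible"],
}

worlds = list(product(*DIMENSIONS.values()))
print(len(worlds))  # 1024

# With uniform priors every world gets 1/1024, and any conclusion just
# reflects how you score the worlds, not how you weight them.
prob_per_world = 1 / len(worlds)
```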

For instance, here’s one story which I tried to draw out after getting lost with the help of Claude.

  • Does recursive self improvement happen?

    • Y. First to ASI wins the lightcone

      • Is there a close race with China?

        • Y. Every month matters

          • Do safety regs meaningfully slow us?

            • Y. Disaster!

            • N. Small overhead doesn’t matter!

        • N. US has durable advantage (10x compute)

          • Does model quality matter more than deployment?

            • Y. We have time for safety work. 6mo slower might be fine!

            • N. Safety regs might not matter

    • N. Gradual capability increases

      • Which layer determines winner?

        • Model layer

          • How durable is US advantage

            • 10x compute advantage wins, so regulations are basically “free”

            • If china can catch up however, efficiency gains matter, so safety regs might be a small drag but real

        • Application layer

          • Do safety regulations affect deployment velocity?

Y. Compliance morass and lawyerly obstruction everywhere.

            • N. Safety regs only affect the model. It’s fast and unobtrusive. It’s fine.

In this tree there are only a few branches where Scott’s argument holds water, and they require some tense assumptions: recursive self improvement is important enough to worry about, but not so important that velocity matters; Chinese skepticism about ASI is stable, even while we worry about dictators getting ASI; direct costs are measurable, while illegible costs are waved away; and model-layer regs won’t affect the application layer, despite Colorado showing they already do.

If recursive self improvement is false, more regulation only makes sense *if* safety regulations do not meaningfully impact deployment velocity at the application layer and the compute advantage holds at the model layer. If recursive self improvement is going to happen, then Scott’s argument has more backing, especially if safety regulations don’t slow us down much while model quality continues to improve.

Which means, of course, the regulations have to be sensible, they can’t be an albatross, China’s catch-up timeline has to be longer, the capability distribution has to be more oligopolistic, alignment has to be somewhat difficult, and takeoff speed has to be fairly fast.

If we relax the assumptions, as in the tree above, we might end up in places where AI safety regulations are more harmful than useful. One example, and this is my own view, is that a lot of AI safety work is just good old fashioned engineering work. Like you need to make sure the model does what you ask it to, to solve hallucinations and sycophancy. And you need to make sure it doesn’t veer off the rails when you ask it slightly risque questions. And you’d want the labs to be “good citizens”, not coerce employees to keep quiet if they see something bad.

Scott treats regulatory overhead as measurable and small in his essay. But the history of compliance shows that costs compound through organisational culture, talent selection, and legal uncertainty, and can come to dominate the direct costs. If he’s wrong about measurement, and Facebook’s legal department suggests he is, then his entire calculation flips. The same goes for China’s stance in reality versus what they say, or the level of belief in recursive self improvement.

To the question at hand, will AI safety make America lose the war with China? It depends on that tree above. It is by no means assured that it will (or that it won’t), but the type of regulation and the future being envisioned matter enormously. The devil, as usual, is in the really annoying details.

In my high-weight worlds, AI safety work can meaningfully help, but only if done sensibly. I don’t put too much weight on recursive self improvement, at least not the kind done without human intervention and time to adjust. I also think that large parts of safety are intrinsic to building widely available, widely used software, so they’re not even a choice. They might not be called AI safety; they might simply be called “product”, which would have to think about these aspects anyway.

Personally, I prefer a very economist’s way of asking the “will AI safety make the US lose to China” question, which is: what is the payoff function for winning or losing the race? Since regulations are (mainly) ratchets, we should choose them carefully, and only when we think they’re warranted (high disutility if we don’t regulate, positive utility if we do).

  • In “mundane AI” world, we get awesome GPTs but not a god. Losing means we’re Europe. While some might think of this as akin to death, it’s not that bad.

  • In “AI is god” world, losing is forever

Even in the first world, AI safety regs might make the US the Brussels of AI, which is a major tradeoff. Most regulations currently proposed don’t seem to cause that effect yet. But it’s not like it’s hard to imagine.

Regulation can be helpful with respect to increasing transparency (training data is one example, though with synthetic data that’s already hard), whistleblower protections (even though I’m not sure what they’d blow the whistle on), and red-teaming models pre-deployment. I think chip embargoes are probably good, even though they help Huawei.

It’s far better not to think in terms of pro- or anti-AI-safety regulation, but to be specific about which regulation and why. The decision tree above helps: you do need to specify which worlds you’re protecting.
