Strange Loop Canon

By Rohit Krishnan. Here you’ll find essays about ways to push the frontier of our knowledge forward. The essays aim to bridge the gaps between Business, Science and Technology.

Will money still exist in the agentic economy?

2025-12-19 22:03:27

Written with Alex Imas, subscribe to his blog here!

Sometimes I forget, but we live in a future transformed by information technology across pretty much every aspect of life. But one thing has remained largely the same: we still live in a world where the vast majority of economic transactions are done by people. If you want to buy a car, the process is largely the same as it was 50 years ago. You go down to the dealership and negotiate the best price that you can. Sure, you may have some extra information from doing research on the web beforehand - it’s certainly much easier to do comparison shopping with a supercomputer in your pocket - but the basic process of transacting with another human being has largely stayed the same.

One change that’s likely to come, though, is that there will soon be 10x, 100x, maybe more AI agents working in the world than there are people. And as we have lots of AI agents working on our behalf, doing all forms of work, there is a thesis that many of the frictions and information asymmetries that people face in markets may disappear if economic transactions are delegated to aligned agents, leading to a so-called Coasean singularity.

We’re not there yet though. Today’s agents are simply not good enough yet to act sensibly or without strict instructions. Many of the features of human-mediated markets still seem to be reproduced in AI agentic interactions. But as online spaces adapt to the promise of AI technology, it seems natural to think of how agentic markets will be organized. In a future world where we do have billions of AI agents, how would they coordinate with each other? What kind of coordination mechanisms would be needed? What institutions are likely to emerge?

And one possibility is particularly intriguing: will coordination still require money? Not in the sense of US dollars, but in the sense of a shared medium of exchange and a hub/clearing protocol.

Money, Money, Money

“Why money” has occupied economists going back to Adam Smith, who framed cash as solving what has since been termed the coincidence of wants. To see what we mean, consider a pure barter economy. Let’s say Alex is an apple farmer and Rohit raises chickens. If Alex wants chickens and Rohit wants apples, then Alex can just walk over to Rohit’s house with a bushel of apples and get some chickens in return. Simple. But what if Alex wants chickens but Rohit wants an electric toothbrush - he has no need for apples right now. Then to get the chickens, Alex would need to find a person who is willing to trade an electric toothbrush for his apples, and then come back to Rohit for a trade.

This would still all be fine if there was just one other person to visit and trade with, but what happens in a large market, with many (many) people who potentially have both an electric toothbrush to trade and want Alex’s apples? In order to trade, Alex needs to happen to find a person that both 1) has what Alex wants and 2) wants what Alex has. As very nicely shown in a paper by Rafael Guthmann and Brian Albrecht, the need to satisfy this coincidence of wants through finding matches creates complexity that quickly blows up as the size of the market increases. If the market is even moderately large, this complexity makes even basic transactions essentially impossible.
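To make the blow-up concrete, here is a quick illustrative simulation (not from the Guthmann and Albrecht paper, just a back-of-the-envelope sketch): each person holds one good and wants one other good chosen at random, and we count how often a given person can find a direct swap partner, someone who both has what they want and wants what they have. The match rate falls roughly as 1/(N-1).

```python
import random

def direct_swap_rate(n_people: int, trials: int = 2000) -> float:
    """Estimate how often person 0 can find a direct barter partner:
    someone who has what they want AND wants what they have."""
    hits = 0
    for _ in range(trials):
        # person i holds good i and wants some other good, chosen at random
        wants = [random.choice([g for g in range(n_people) if g != i])
                 for i in range(n_people)]
        j = wants[0]              # the person holding the good that person 0 wants
        hits += wants[j] == 0     # ...do they want person 0's good in return?
    return hits / trials

for n in (2, 4, 8, 16, 64):
    print(n, direct_swap_rate(n))
```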

Ergo money. While the origin of money is a hot topic of debate (e.g., see David Graeber’s excellent book Debt: The First 5000 Years), the role of money in a competitive market is to solve the coincidence of wants. Money acts as a special type of good called the numeraire, whose only role is that it can be exchanged for other goods at pre-determined quantities. These quantities are reflected in the prices of each good.

Going back to Alex and Rohit: one way to solve the coincidence of wants would be for Alex to sell his apples at a special place called a market and then use the money to purchase Rohit’s chickens. Rohit can then use that money to buy an electric toothbrush, or indeed any other thing his heart desires. Money eliminates the need for people to coordinate their transactions based on their current endowment (what they have) and preferences (what they want).

Bring on the agents

Okay, so money is necessary to coordinate transactions in an economy with people. This is largely because each individual can’t hope to have enough information on what everyone else has and wants to reliably engage in market transactions. Alex and Rohit are as yet, sadly, mortals.

But will this be the case for AI agents?

Agents do not have the same computational constraints as human beings. In theory, it may be possible to solve the search problem where the coincidence of wants becomes a non-issue. In that case, the agentic economy could eliminate the need for a key institution of the human economy. We decided to run an experiment to find out.

The experiment

First, the repo here. We can have N agents, with N goods, and each starts with its own good and wants another. There are multiple rounds, one action per agent per round. Agents decide their course of action via structured JSON, and success simply means you get what you want.
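Roughly, each round has the shape of the sketch below. This is a simplified stand-in rather than the actual code in the repo: the JSON-style action schema, the matching rule, and the stubbed-out "LLM call" are all placeholders.

```python
import random

N = 8  # agents, one good each

holdings = {i: i for i in range(N)}                          # agent i starts with good i
wants = {i: random.choice([g for g in range(N) if g != i])   # ...and wants some other good
         for i in range(N)}

def propose_action(agent: int) -> dict:
    """Stand-in for the LLM call: return a structured JSON-style action."""
    return {"type": "offer", "give": holdings[agent], "want": wants[agent]}

for round_ in range(10):
    for agent in range(N):
        if holdings[agent] == wants[agent]:
            continue                                         # already satisfied
        action = propose_action(agent)
        holder = next(a for a, g in holdings.items() if g == action["want"])
        # a bilateral swap only clears if the holder wants what this agent is giving
        if wants[holder] == action["give"]:
            holdings[agent], holdings[holder] = holdings[holder], holdings[agent]
    satisfied = sum(holdings[i] == wants[i] for i in range(N))
    print(f"round {round_}: {satisfied}/{N} agents satisfied")
```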

The first question is about a pure barter economy. We explore whether LLM agents can achieve efficient allocations through barter at any scale, i.e., whether they can engage in multiple bilateral negotiations to achieve gains from trade. The agents in the experiment have no real shortage of time. If this works then Coasean bargaining should be straightforward; goodbye money!

The table below has the results. What do we see? When the scale is small - when Alex just has to worry about coordinating with Rohit - all of this works. But as the number of agents grows, things start to get really difficult. By the time we get to even 8-12 agents the number of successful transactions drops to below 50%. And this is the absolute simplest setting.

Perhaps this should be expected. The problem is still O(n²) in complexity, which grows exceptionally fast as the number of agents grows. And if this isn’t just bilateral, but starts to include multiparty negotiations, it might become O(n!), which is far bigger for any n greater than 3.
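For a sense of the numbers (plain arithmetic, not output from the experiment):

```python
from math import factorial

for n in (2, 4, 8, 12, 16):
    print(f"n={n:>2}  bilateral pairs ~ n^2 = {n**2:>4}   coalition orderings ~ n! = {factorial(n):,}")
```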

Ok, let’s make it a bit easier for the agents. If they can’t coordinate by talking to each other, then, since they are agents anyway, we should be able to give them omniscience. Enter Central Planning. There has been plenty of work before on the limits of bilateral negotiations, but we can test how well a “hub” structure can help. Does having a central planner help set the stage for better performance?

As the results table shows, central planning makes things slightly better, but we are still very much in a world of Hayekian troubles. A hierarchy without a numeraire just isn’t enough.

Ok, we can continue looking at our human history to see what else we can do. In Debt, David Graeber argues that money emerged at least in part through state power, to enforce the paying of taxes in order to fund foreign wars. Before this, he argues, IOUs and bartering seemed to have worked just fine to manage the economy; the IOUs themselves became a sort of numeraire that could be traded in order to solve the coincidence of wants.

So let’s introduce Credits and IOUs. We can give the agents the ability to give each other an IOU and see whether providing the basics of credit allows them to come up with better ways to interact with each other.

This still didn’t help as much as we thought. There were a few segments where the transactions started happening, but they really didn’t start to work. Or scale.

Most interestingly, the concept of money didn’t emerge from this, not organically. IOUs didn’t become money. Even though in conversations LLMs all know that this is the smart thing to do, it did not emerge.

This was a bummer, because as with the prior research, what this shows is that AI agents do not yet come with the natural instincts of humans to turn IOUs into a numeraire that acts as a stand-in for money. They don’t even come with the same set of ideas as this sea otter.

Ok, let’s take the final step and actually introduce Money. We do this by creating an exchange where the agents can post bids and offers, and look at market outcomes. The results are stark: markets resolve at a success rate of 100% and much faster than through other mechanisms, scaling as O(n).
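A minimal sketch of why the hub-plus-numeraire mechanism is so much cheaper: every agent sends one sell order and one buy order to the exchange at posted prices, so the whole allocation clears in a couple of passes over the agents. This is an illustration of the idea, not the exchange in the repo; the prices, the ring of wants, and the clearing rule are all assumptions.

```python
N = 8
prices = {good: 1.0 for good in range(N)}            # posted prices in numeraire units
holdings = {i: i for i in range(N)}                  # agent i holds good i
wants = {i: (i + 1) % N for i in range(N)}           # a simple ring of wants
money = {i: 0.0 for i in range(N)}

# pass 1: every agent sells its endowment to the exchange at the posted price
inventory = []
for agent in range(N):
    good = holdings.pop(agent)
    money[agent] += prices[good]
    inventory.append(good)

# pass 2: every agent buys the good it wants, if available and affordable
for agent in range(N):
    good = wants[agent]
    if good in inventory and money[agent] >= prices[good]:
        inventory.remove(good)
        money[agent] -= prices[good]
        holdings[agent] = good

satisfied = sum(holdings.get(i) == wants[i] for i in range(N))
print(f"{satisfied}/{N} agents satisfied, two messages per agent")
```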

One note is that this result presumes the exchange works without a hitch. In reality there will be friction coming from liquidity constraints, differential compute resources, etc. For example, in the N=8 run, the hub handled 23 inbound + 23 outbound messages and prices stayed fixed. And if regulations require that AI agents use different types of country-specific currencies, then exchange rates will complicate things further.

Discussion

To sum: An agentic economy doesn’t emerge automatically with even SOTA agents (who really should know better). Barter and central planning remain inefficient and infeasible, and money does not emerge organically even when credit and IOUs are introduced. At least in our setting, an agentic economy needs more top-down engineering to become efficient.

Previous work on agent-based modeling has explored what kind of emergent economic realities we are likely to see with rule-based agents interacting. The world of AI agents is fundamentally different. These agents act based on a huge corpus of human knowledge, with the underlying LLM models able to solve incredibly difficult problems on their own. These agents can plan, they can negotiate, they can code. And even with all this knowhow at their disposal, it’s interesting to see that they still appear to require top-down institutions to create an effective and efficient market.

As we transition to a more agentic economy, a key part of ‘getting ready’ for that world is setting up institutions for the agents. These include things like:

  • Identity and roles

  • Settlement and payment

  • Pricing and quote formats

  • Reputation

  • Marketplaces and clearinghouses

This is by no means exhaustive, but we wager that mechanism design for multi-agent work is going to be a rather fertile area of research for a while. Humanity went through millennia of evolution to figure out the right societal setup that lets us progress, that lets us build a thriving civilisation.

It is both necessary and inevitable that the world of AI agents will also need the equivalents, though the emergence of such institutions will likely be much faster given the millennia of human knowledge that we’ve already amassed.

Github repo here.


Seeing like an agent

2025-12-08 23:02:20

One of the books that I loved as a kid was Philip Pullman’s His Dark Materials. The books themselves were fine, but the part I loved most was the daemons. Each human had their own daemon, uniquely suited to them, that would grow with them and eventually settle into a form that reflected their personality.

I kept thinking of this when reading the recent NBER paper by John Horton et al about The Coasean Singularity. From their abstract:

By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design.

Basically they argue, if you actually had competent, cheap AI agents doing search, negotiation, and contracting, like your own daemon, then a ton of Coasean reasons firms exist disappear, and a whole market design frontier reopens.

This isn’t a unique argument, though it’s well done here. I’ve made it before, as have others, including Seb Krier recently here and Dean Ball and many more. The authors even talk about tollbooths like Cloudflare’s and agent-only APIs and pages.

But while reading it I kept thinking that by now this is no longer a theoretical question: we have decent AI agents and we should be able to test it. And it’s something I’ve been meaning to do for a while, so I did. The question was, if we wire up modern agents as counterparties, do we actually see Coasean bargains emerge? Repo here.

The punchline is that AI agents did not magically create efficient markets. And they also kinda fell prey to a fair few human pathologies, including bureaucratic politics and risk aversion.

Experiment 1: An internal capital market

The first way to test these was to just throw them into a simulated company and see what happened. So I set up four departments - Marketing, Sales, Engineering and Support - and said they could all bid for budget to do their jobs. A standard internal capital market, where departments submit bids and projects get funded until the budget is exhausted.
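The allocation rule is roughly a greedy fill over submitted bids. A stripped-down sketch with made-up numbers (in the real runs the LLM agents generate the bids and argue for them):

```python
# One hypothetical quarter: each department bids (project, claimed value, cost),
# and projects are funded greedily by claimed value-per-dollar until the budget runs out.
budget = 100.0
bids = [
    ("Sales",       "pipeline tooling",      40.0, 30.0),
    ("Marketing",   "launch campaign",       35.0, 25.0),
    ("Engineering", "pay down infra debt",   20.0, 40.0),  # diffuse, preventative value
    ("Support",     "ticket deflection bot", 15.0, 15.0),
]

funded = []
for dept, project, claimed_value, cost in sorted(bids, key=lambda b: b[2] / b[3], reverse=True):
    if cost <= budget:
        budget -= cost
        funded.append((dept, project))

print("funded:", funded)       # bids with legible, immediate value crowd out infra work
print("unspent:", budget)
```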

If the Hayekian promise holds and we get functioning markets when information flows freely, then we should be able to see this work. And it would be much better than the command-and-control method by which we try to decide this today.

Well, it didn’t work. Marketing and Sales accumulated political capital. Engineering posted negative utility for most quarters. The market we set up systematically funded customer facing features and starved infrastructure work. It’s like Seeing Like A State all over again.

I think this was because GTM type departments could come up with immediate articulable customer values, whereas Engineering’s value kept feeling preventative or diffuse.

It’s a bit frustrating to see that the models still retain human foibles since this is effectively Goodhart’s Law. When you measure departmental utility and fund accordingly, and you let the agents argue on their behalf, you do start to see negative externalities for core functionality.

So I added countermeasures. I added risk flags on features and veto power over “dangerous” work. Added shared outage penalties (if you ask for a risky feature and everything crashes, you pay for it too). And when I ran that, outages did happen. GTM departments observed this and tempered their bids, though only a little.

Engineering utility however still stayed low. GTM could discount future outages and gambled on “maybe it won’t break” for its immediate wins. But Engineering couldn’t proactively push folks into infrastructure investments. The pattern is hardly dissimilar.

The truly interesting part was that the agents perfectly replicated the dysfunction of real companies. Onwards.

Experiment 2: External markets - IP licensing

This was the most interesting part. The best way to see Coasean bargaining come true is to set up an external market for cross-firm technology licensing. Twenty firms and thirty software modules. Each firm has some internal capabilities but could also license tech, so buy-vs-build becomes a much cleaner decision for AI agents than it is for humans in reality. A classic setup, and the payoffs should be excellent. Or so I thought.
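The decision each firm faces is a textbook one. Here is a toy version of the buy-vs-build comparison the agents should have been making (illustrative numbers, not from the runs):

```python
def buy_vs_build(build_cost: float, build_quality: float,
                 license_fee: float, licensed_quality: float) -> str:
    """Compare the net payoff of building a module in-house vs licensing it.
    Quality is measured in the same (made-up) payoff units as cost."""
    build_payoff = build_quality - build_cost
    license_payoff = licensed_quality - license_fee
    return "license" if license_payoff > build_payoff else "build"

# A firm with weak internal capability should license: quality 3 at cost 8 in-house,
# vs quality 9 for a licensing fee of 2 on the market.
print(buy_vs_build(build_cost=8, build_quality=3, license_fee=2, licensed_quality=9))
```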

First run had zero deals. Every firm decided to build everything internally. They understood the rules and saw potential counterparties and had budget to trade, but still they chose autarky.

Okay, so I added reputation systems, post-trade verification, penalties for idleness, bonuses for successful deals, counterparty hints, even price history. Basically the kitchen sink.

Still zero trades.

This is the perfect setup as per the paper. Transaction costs effectively zero. Perfect information. Aligned incentives. Etc etc. The agents just didn’t care to trade! Because of very high Knightian uncertainty aversion (I assume), or some heavy pretraining prior that firms mainly build, not trade.

So I mandated ask/bid submissions. If you don’t post prices, defaults are generated. Profits are then directly coupled to next quarter’s budget. And I even gave explicit price hints, because the agents clearly couldn’t, or wouldn’t, discover equilibrium without them.

Now we start to see trades! Success! Three deals per round. The welfare is still far below the market optimum, but that’s possibly also because we haven’t optimised them yet.

But by now it wasn’t a market in the Hayekian sense. Like it’s no longer voluntary. We’re forcing the agents to trade, and then they do the sensible things.

Since it worked well for well-behaved participants, I also did a robustness check: create adversarial firms and see if the market still functions! And it does. Adversarial sellers captured much of the surplus, i.e., fairness is expensive. It’s either weak strategic sophistication or the agents are just nice and passive by default, I don’t know which.

Experiment 3: Second price auctions

The third experiment was one to check whether the models behave according to their beliefs. Vickrey auctions are sealed second-price auctions, so the winner pays the price of the second-highest bid. This means the dominant strategy is for bidders to bid truthfully, in line with their actual valuations.

And they did. Allocative efficiency was 1. This is a little bit of a control group since the models must be smart enough to know the dominant strategy. I added “profit max only” personas, and collusion channels, just to check, and the behaviour still looked like standard truthful Vickrey bidding.
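For reference, here is a minimal second-price auction showing why truthful bidding is the benchmark: the winner’s payment depends only on the other bids, so shading your bid can only cost you profitable wins. An illustration, not the experiment harness; the values are hypothetical.

```python
def second_price_auction(bids: dict) -> tuple:
    """Sealed-bid Vickrey auction: the highest bid wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

values = {"A": 10.0, "B": 7.0, "C": 4.0}               # private values (hypothetical)
print(second_price_auction(values))                     # truthful: A wins and pays 7
print(second_price_auction({**values, "A": 6.0}))       # A shades its bid and loses the item
```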

This tells us that they’re smart enough to do the right thing, but also that given a messy environment with underspecified mechanisms, which is most of the real world, they default to passivity or autarky.

I tested this also with a bargaining test with five players, which asks the models to divide a surplus and negotiate with each other over how to split it. The players can see a broadcast and each other’s proposals, but after round 1 the players can DM others. I even made one of the players adversarial. And still the splits remained near-equal, very far from the Shapley vector. They are norm conforming. Models are highly self-incentivised to be fair! See the sketch below for what the Shapley benchmark looks like.
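The Shapley split they ignored is easy to compute directly. A small sketch with a hypothetical three-player characteristic function (the experiment used five players, but the idea is identical):

```python
from itertools import permutations

def shapley(players, value):
    """Shapley value: each player's average marginal contribution over all join orders."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shares[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: round(s / len(orders), 2) for p, s in shares.items()}

# Hypothetical surplus function: player "a" is pivotal, "b" and "c" are interchangeable helpers.
def v(coalition):
    if "a" not in coalition:
        return 0.0
    return {1: 60.0, 2: 90.0, 3: 100.0}[len(coalition)]

print(shapley(["a", "b", "c"], v))   # {'a': 83.33, 'b': 8.33, 'c': 8.33}, far from an equal split
```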

Synthesis

We saw 4 claims tested. To summarise:

  1. AI lowers transaction costs so markets emerge spontaneously - False

  2. With mechanism design, AI-mediated markets can function - True, but costly (required forced participation with Gosplan-ish price hints)

  3. Internal markets improve on hierarchy when coordination costs fall - False (GTM dominates Engineering even with full information)

  4. AI agents play fair in functioning markets - Mixed (adversarial agents extract rents, but agents are mostly fair)

The takeaway from these experiments is that to get to a point where the AI agents can act as sufficiently empowered Coasean bargaining agents, for them to become a daemon on my behalf, they need to be substantially empowered and so instructed. They do not act in the way humans act, but are much fairer and much more passive than we would imagine.

Markets don’t form spontaneously. Markets form under coercion but are pretty thin. And when markets exist, strategic sophistication determines who wins, depending on how the agents are set up. It shows alignment problems don’t disappear just because the agents can negotiate with each other. This is pessimistic for the “AI dissolves firms” narrative but optimistic for the “AI can enable better institutions” narrative.

The Coasean Singularity paper argues AI lowers transaction costs but the gains require alignment and mechanism design, which is what I empirically tested here. It’s a strong confirmation of its strong form - that reduction in transaction costs was nice but mechanism design was needed to set up an actual market.

Also, the fact that we needed to couple their budgets so the AIs sang from the same hymn sheet is important: it means any multi-agent design we create would need a substrate, like money, to help them coordinate.

Now some of this is that the intuitions we have built up over time, both from other humans but also from stories, lead us to assume that the agents have enough context at all times on what to do. I see my four year old negotiating with his brother to get computer time, and by the time he’s a bargaining agent facing some hapless corporation he will have had decades of experience with this. Our models, on the other hand, have millions of years of subjective experience in seeing negotiation but zero experience of feeling that intense urge of wanting to negotiate to watch Prehistoric Planet with a brother.

Perhaps this matters. These complex histories can get subsumed in casual conversation into a seemingly innocuous term like “context”, and maybe we do need to stuff a whole library into a model to teach it the right patterns or get it to act the way we want. The daemons we do have today aren’t settled in forms that reflect our interests out of the box, though they know almost everything about what it is like to act as if they shared those interests.

But what the experiments showed is that this is far from obvious. Coase asked why firms exist if markets are efficient, and answered it’s because of transaction costs. The experiments here ask, even with zero transaction costs, why do firm-like structures still emerge1?

And if we do end up doing that, we might have just rediscovered the reason why firms exist in the first place, the very nature of the firm. Even as we recreate it piece by instructive piece.

Github repo here


1

And when we are able to roll the AI agents out, we will get firms that are more programmable, more stimulated and more explicitly mechanism-designed than human firms ever were.

Contra Scott on AI safety and the race with China

2025-12-02 09:12:23

Scott has a really interesting essay on the importance of AI safety work, arguing it will not cause the US to fall behind China, as is often claimed. It’s very well written, characteristically so, and well argued. His argument, in a nutshell (I paraphrase), is:

  1. US has ~10x compute advantage over China

  2. Safety regulations add only 1-2% to training costs at most

  3. China is pursuing “fast follow” strategy focused on applications anyway

  4. Export controls matter far more (could swing advantage from 30x to 1.7x)

  5. AI safety critics are inconsistent - they oppose safety regs but support chip exports to China

  6. Sign of safety impact is uncertain - might actually help US competitiveness

I quite like this argument because I actually agree with all of the points, mostly anyway, and yet find myself disagreeing with the conclusion. So I thought I should step through my disagreements, and then what my overall argument against it is, and see where we land up.

First, the measurement problem

Scott argues that the safety regulations we’re discussing in the US only add 1-2% overhead. This is built off of METR and Apollo’s findings, around $25m for internal testing, contrasted with $25 billion for training runs. All the major labs also already spend enormous sums of money on intermediate evaluations, model behaviour monitoring and testing, and primary research to make them work better with us, all classic safety considerations.

This only holds if the safety regulation based work, hiring evaluators and letting them run, is strictly separable. Which is not true of any organisation anywhere. When you add “coordination friction”, you reduce the velocity of iteration inside the organisation. Velocity here really really matters, especially if you believe in recursive self improvement, but even if you don’t.

This is actually visible in ~every organisation known to man. Facebook has a legal department of around 2000 employees, doubled since pre-Covid, out of a total employee base of 80,000. Those 2000 are quite likely not disproportionately expensive vs the actual operating expenditure of Facebook. But the strain they put on the business far exceeds the 2.5% cost it puts on the output. There’s a positive side to this argument: they will also prevent enough bad things from happening that the slowdown is worth it. Presumably Facebook themselves believe this, which is why the department exists, but it is very much not as simple as comparing the seemingly direct costs.

The argument that favours Scott here is maybe pharma companies,

This gets worse once you think about the 22 year old wunderkinds that the labs are looking to hire, and wonder if they’d be interested in more compliance, even at the margin.

China is a fast follower

The argument also states that China is focused on implementation and a fast-follow strategy, because they don’t believe in AGI. I think it’s an awfully load-bearing claim, and it feels quite convenient. China is also known for strategic communication in more than one area, where what they say isn’t necessarily what they focus on.

As Scott notes, Liang Wenfeng of Deepseek has explicitly stated that he believes in superintelligence, which in itself is contradictory to the argument that they care about the applications layer. If China does truly believe in deployment, as seems to be the case, then having true believers as heads of top labs is if anything more evidence against the “they’re just fast followers” argument.

They’re leaders in EVs, solar panels, 5G, fintech and associated tech, probably quantum communications, an uncomfortably large percentage of defense related tech, seemingly humanoid robots, the list is pretty long. This isn’t all just fast followership, or at least even if it is, it’s indistinguishable from the types of innovation we’re talking about here.

Again, this only really matters to the extent you think recursive self improvement is true, or that China won’t change its POV here very fast if they feel it’s important. The CCP has an extraordinary track record of redirecting capital in response to perceived strategic opportunity (and overdoing it). That means “they don’t believe in AGI” is an unstable parameter. Even if the true breakthrough comes from some lab in the US, or some tiny lab at Harvard, it will most likely not be kept under wraps for years as the outcomes compound.

The AI safety critics are sometimes bad faith

This is true! There’s a lot of motivated reasoning, which ties itself in knots to argue things like “to beat China we have to sell them the top Nvidia chips, so they don’t develop their own chip industry and cut the knees off another one of our top industries”. Liang Wenfeng has also said that his biggest barrier is access to more chips.

That said, here my core problem is that I am unsure about which aspects of the regulations being proposed are actually useful. Right now they ask for a combination of red-teaming (to what end), hallucination vs sycophancy (how do you measure), whistleblower protections, bias (measurement?), CBRN (measurement delta vs pure capability advance), observability for chip usage (hardware locks?), and more. These assume a very particular threat surface.

The Colorado AI Act focuses on algorithmic fairness and non-discrimination. Washington HB 1205 focuses on digital likeness and deepfakes. AB2013 in California on disclosing training data for transparency. Utah’s SB 332 says AI has to say it’s AI when you’re using a chatbot. These are all quite different, as we can see, and will require different answers in both implementation and compliance. writes about this cogently and cohesively.

Many of these ideas are sensible in isolation, but many of them are also extremely amorphous. Regulations are an area where I am predisposed to think that unless they’re highly specific and the ROI is directly visible, it’s better not to get caught in an invisible graveyard. The regulatory ratchet is real, as Scott acknowledges. Financial regulation post-2008, aviation post-9/11, the FDA … We always have common sense guardrails that create an apparatus that then expands.

Sign uncertainty

It is definitely true that having a more robust AI development environment might well propel the US forward vs China. Cars with seatbelts beat cars without seatbelts. Maybe lack of industrial espionage means the gains from US labs won’t seed Chinese innovation.

It should be noted though that the labs already spend quite a bit on cybersecurity. Model weights are worth billions, soon dozens of billions, and are protected accordingly. Should it be made stronger? Sure.

It should be noted, underlined, however that this is true only insofar as the Chinese innovation is driven by industrial espionage or weight stealing. Right now that definitely does not seem to be the case. What is true is that deployment by filing off the edges, making the products much nicer to use, especially via posttraining, is something Western models do a much better job of. Deepseek, Qwen or Kimi products are just not as good, and differentially worse than how good their models are.

So … now what.

Scott’s argument makes sense, but only in a particular slice of the possible future lightcone. For instance, we can sort of lay down the tree of how things might shake out. There are at least 5 dimensions I can think of offhand:

  1. Takeoff speed

  2. Alignment difficulty

  3. Capability distribution (oligopoly, monopoly etc)

  4. Regulations’ impact on velocity

  5. China’s catch up timeline

You could expand this by 10x if you so chose, and things would get uncomfortably diverse very very quickly. But even with this, if we split each of these into like 4 coarse buckets (easy, moderate, hard, impossible), you get 1024 worlds. I asked Claude to simulate these worlds and choose whatever priors made sense to it, and it showed me this:

I’m not suggesting this is accurate, after all there could be a dozen more dimensions, or the probability distribution might be quite different. A change in one variable might impact another. But at least it gives us an intuition on why the arguments are not as straightforward as one might imagine, and it’s not a fait accompli that “AI safety will not hurt the US in its race with China”, and that’s assuming the race is a good metaphor!
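For what it’s worth, the 1024 number is just the cross product of the five dimensions with four coarse buckets each; the bucket labels below are placeholders of mine, not the ones used in the simulation:

```python
from itertools import product

dimensions = {
    "takeoff speed":           ["slow", "moderate", "fast", "discontinuous"],
    "alignment difficulty":    ["easy", "moderate", "hard", "impossible"],
    "capability distribution": ["broad", "oligopoly", "duopoly", "monopoly"],
    "regulatory drag":         ["none", "small", "moderate", "large"],
    "china catch-up":          ["never", "years", "months", "already there"],
}

worlds = list(product(*dimensions.values()))
print(len(worlds))   # 4 ** 5 = 1024 coarse scenarios
```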

For instance, here’s one story which I tried to draw out after getting lost with the help of Claude.

  • Does recursive self improvement happen?

    • Y. First to ASI wins the lightcone

      • Is there a close race with China?

        • Y. Every month matters

          • Do safety regs meaningfully slow us?

            • Y. Disaster!

            • N. Small overhead doesn’t matter!

        • N. US has durable advantage (10x compute)

          • Does model quality matter more than deployment?

            • Y. We have time for safety work. 6mo slower might be fine!

            • N. Safety regs might not matter

    • N. Gradual capability increases

      • Which layer determines winner?

        • Model layer

          • How durable is US advantage

            • 10x compute advantage wins, so regulations are basically “free”

            • If china can catch up however, efficiency gains matter, so safety regs might be a small drag but real

        • Application layer

          • Do safety regulations affect deployment velocity?

            • Y. Compliance morass and lawyerly obstruction everywhere.

            • N. Safety regs only affect the model. It’s fast and unobtrusive. It’s fine.

In this tree there are only a few branches where Scott’s argument holds water. It requires that recursive self improvement is important enough to worry about but unimportant enough that velocity doesn’t matter; that Chinese skepticism about ASI stays stable even as we worry about dictators getting ASI; that the costs that matter are the direct ones we can measure, not the illegible ones we can’t; and that model-layer regs won’t affect the application layer, despite Colorado showing they already do.

If recursive self improvement is false, it only makes sense to do more regulations *if* safety regulations do not meaningfully impact deployment velocity in the application layer and the compute advantage holds in the model layer. If recursive self improvement is going to happen, then Scott’s argument has more backing, especially if safety regulations don’t slow us down much as long as the model quality will continue to improve.

Which means of course the regulations have to be sensible, they can’t be an albatross, China’s “catch up” timeline has to be longer, the capability distribution has to be more oligopolistic, alignment has to be somewhat difficult, and takeoff speed has to be fairly fast.

If we relax the assumptions, as in the tree above, we might end up in places where AI safety regulations are more harmful than useful. One example, and this is my own view, is that a lot of AI safety work is just good old fashioned engineering work. Like you need to make sure the model does what you ask it to, to solve hallucinations and sycophancy. And you need to make sure it doesn’t veer off the rails when you ask it slightly risque questions. And you’d want the labs to be “good citizens”, not coerce employees to keep quiet if they see something bad.

Scott treats regulatory overhead as measurable and small in his essay. But the history of compliance shows that these costs compound through organisational culture, talent selection, and legal uncertainty, and can come to dominate the direct costs. If he’s wrong about measurement, and Facebook’s legal department suggests he is, then his entire calculation flips. Same again with China’s stance in reality vs what they say, or the level of belief in recursive self improvement.

To the question at hand, will AI safety make America lose the war with China? It depends on that tree above. It is by no means assured that it will (or that it won’t), but the type of regulation and the future being envisioned matter enormously. The devil, as usual, is in the really annoying details.

In my high-weight worlds, AI safety work can meaningfully help, but only if done sensibly. I don’t put too much weight on recursive self improvement, at least not the kind done without human intervention and time to adjust. I also think that a large amount of safety work is intrinsic to building widely available and widely used pieces of software, so it’s not even a choice. It might not be called AI safety; it might be called, simply, “product”, which would have to think about these aspects.

Personally, I prefer a very economist’s way of asking the “will AI safety make the US lose to China” question, which is: what is the payoff function for winning or losing the race? Since regulations are (mainly) ratchets, we should choose them carefully, and only when we think they’re warranted (high disutility if we don’t regulate, positive utility if we do).

  • In “mundane AI” world, we get awesome GPTs but not a god. Losing means we’re Europe. While some might think of this as akin to death, it’s not that bad.

  • In “AI is god” world, losing is forever

Even in the first world, AI safety regs might make the US the Brussels of AI, which is a major tradeoff. Most regulations currently proposed don’t seem to cause that effect yet. But it’s not like it’s hard to imagine.

Regulation can be helpful with respect to increasing transparency (training data is one example, though with synthetic data that’s already hard), with whistleblower protections (even though I’m not sure what they’d blow the whistle on), and with red-teaming the models pre-deployment. I think chip embargoes are probably good, even though they help Huawei.

It’s far better not to be pro or con AI safety regulation in the abstract, but to be specific about which regulation and why. The decision tree above helps: you do need to specify which worlds you’re protecting.


Epicycles All The Way Down

2025-11-27 03:37:24

“All models are wrong, but some are useful.” — George E. P. Box

“All LLM successes are like human successes; each LLM failure is alien in its own way.”

I. Two ways to “know”

I was convinced I had a terrible memory throughout my schooling. As a consequence pretty much for every exam in math or science I would re-derive any formula that was needed. Kind of a waste, but what could I do. Easier than trying to remember them, I thought. It worked until I think second year of college, when it didn’t.

But because of this belief, I did other dumb things too, beyond not studying. For example I used to play poker. And I was convinced, and this was back in the day when neural nets were tiny things, that my brain was similar and I could train it using inputs and outputs and not actually bother doing the complex calculations that would be needed to measure pot odds and things like that. I mean, I can’t know the counterfactual, but I’m reasonably sure this was a worse way to play poker than just actually doing the math, but it definitely was a more fun way to do it, especially when combined with reasonable quantities of beer. I was convinced that just from the outcomes I would be able to somehow back out a playing strategy that would be superior.

It didn’t work very well. I mean, I didn’t lose much money, but I definitely didn’t make much money either. Somehow the knowledge I got from the outcomes didn’t translate into telling me when to bet, how much to bet, when to raise, how much to raise, when to fold, how to analyse others, how to bluff, you know all those things that if you want to play poker properly you should have a theory about.

Instead what I had were some decent heuristics on betting and a sense of how others would bet. The times I managed to get a bit better were the times I could take what my “somewhat trained neural net” said I should do, then calculate the pot odds, explicitly try to figure out what others had, and use those as inputs alongside my vibes. I tried to bootstrap understanding from outcomes alone, and I failed1.

II. Patterns and generators

“What I cannot create, I do not understand.” — Richard Feynman

This essay is about why LLMs feel like understanding engines but behave like over-fit pattern-fitters, why we keep adding epicycles that get us closer to exceptional performance, instead of changing the core generator, and why that makes their failures look more like flash crashes and market blow-ups than like Skynet.


One way this makes sense is that mathematically the number of ways to create a pattern has to be more than the number of patterns themselves. There are more words than letters. The set of all possible 1000 character outputs is huge, but the set of programs that could print any one of them is larger2.

An LLM trained on the patterns swims in an ocean of possible generators and the entire game of training is to identify those extra constraints so it has reason to pick the shortest, truest one. Neural networks have inductive biases that privilege certain solutions.

There is an interesting mathematical or empirical question to be answered here. What are the manifolds of sufficiently diverse patterns which, used collectively, will turn away the wrong principles and keep only the correct generative ones?

I’m not smart enough to prove this, but perhaps one could start with Gold’s theorem, which says something like: if all you ever see are positive examples of behaviour, then for a sufficiently rich class of programs no algorithm can be guaranteed to eventually lock onto the exact true program that produced them. LLMs are a giant practical demonstration of this. They implicitly infer some program that fits the data, but not necessarily the program you “meant”.

I asked Claude about this, and it said:

The deeper truth is that success is low-dimensional. There are relatively few ways to correctly solve “2+2=” or properly summarize a news article. The constraint satisfaction problem has a small solution space. But failure is high-dimensional—there are infinitely many ways to be wrong, and LLMs explore regions of that failure space that human cognition simply doesn’t reach.

One way to think about this is as the distinction between complexity in a system and randomness. Often indistinguishable in their effects, but fundamentally different in their nature. A world where a butterfly can flap its wings and cause a hurricane somewhere else is also a world that is somewhat indistinguishable from one filled with randomness. The difference of course is that the first one is not random, it is deterministic; it just seems random because we cannot reliably predict every single step that the computation needs to take in all its complex glory.

One of Taleb’s targets is what he calls the “ludic fallacy,” the idea that the sort of randomness encountered in games of chance can be taken as a model for randomness in real life. As Taleb points out, the “uncertainty” of a casino game like roulette or blackjack cannot be considered analogous to the radical uncertainty faced by real-life decision-makers—military strategists, say, or financial analysts. Casinos deal with known unknowns—they know the odds, and while they can’t predict the outcome of any individual game, they know that in the aggregate they will make a profit. But in Extremistan, as Donald Rumsfeld helpfully pointed out, we deal with unknown unknowns—we do not know what the probabilities are and we have no firm basis on which to make decisions or predictions.

This isn’t just Taleb being esoteric. The rules that were learnt were not the rules that should have been learnt. This is a classic ML problem that still exists in deep learning. The Fed sent a letter to banks about using not-easily-interpretable ML to judge loan applications for this reason. For an easier to see example, autonomous driving is a case of painfully ironing out edge cases one after the other, because the patterns the models learnt weren’t sufficiently representative of our world. Humans learn to drive with about 50 hours of instruction; Waymo by 2019 had already run 10 billion simulated miles and 20m real miles, and Tesla has 6 billion real miles driven and quite likely hundreds of billions of miles as training data.

This isn’t as hopeless as it sounds. We see with LLMs that they are remarkably similar to humans in how they think about problems, they don’t get led astray all that often. The remarkable success of next token prediction is precisely that it turned out to learn the right generative understanding.

LLMs are brilliant at identifying a “line of best fit” across millions of dimensions, and in doing so produce miracles. It’s why Ted Chiang called it a blurry jpeg of the internet a couple of years ago.

III. Prediction and causation

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” — John von Neumann

Eric Baum had a book published more than twenty years ago, called “What Is Thought?” Its excellent title aside, the core premise was that understanding is compression. Just like drawing a line of best fit, y = mx + c, seems to get you the right understanding in statistics, so do we compress all the datapoints we encounter in life.

The spiritual godfather of this blog, Douglas Hofstadter, thought about understanding as rooted in conceptualisation and core understanding. There was a recent New Yorker article that discussed this, and its relationship to the truly weird aspects of high dimensional storage of facts or memory.

In a 1988 book called “Sparse Distributed Memory,” Kanerva argued that thoughts, sensations, and recollections could be represented as coördinates in high-dimensional space. The brain seemed like the perfect piece of hardware for storing such things. Every memory has a sort of address, defined by the neurons that are active when you recall it. New experiences cause new sets of neurons to fire, representing new addresses. Two addresses can be different in many ways but similar in others; one perception or memory triggers other memories nearby. The scent of hay recalls a memory of summer camp. The first three notes of Beethoven’s Fifth beget the fourth. A chess position that you’ve never seen reminds you of old games—not all of them, just the ones in the right neighborhood.

This is a rather perfect theory of LLMs.

It’s also testable. I built transformers to try and predict Elementary Cellular Automata, to see how easily they could learn the underlying rules3.
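For the curious, the ground-truth generator in that test is a few lines of code. Here is a sketch of a single Elementary Cellular Automaton step (Rule 110 as an example), the kind of rule the transformer only ever sees through its input-output pairs:

```python
def eca_step(state, rule=110):
    """One step of an Elementary Cellular Automaton with wrap-around edges.
    The rule number's bits encode the next state of each 3-cell neighbourhood."""
    n = len(state)
    return [
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A width-16 board seeded with a single live cell; the transformer is trained on
# (state_t -> state_t+1) pairs like these and has to infer the rule behind them.
state = [0] * 15 + [1]
for _ in range(5):
    print("".join(map(str, state)))
    state = eca_step(state)
```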

I also tried creating various combinations of wave functions (3-4 equations and combining them) and seeing if the simple transformer models can learn those, and understand the underlying rules. These are combinations of simple equations, like a basic wave function with a few transformations. And yet:

There have been other similar attempts. This paper, “What has a foundation model found?”, in particular was fascinating because it tried to use a similar method to see if you could predict orbits of planets based only on observational data. And the models managed to do it, except they all tried to approximate instead of learning the fundamental underlying generative path4.

This manifold question - “which diverse pattern sets collapse to unique generators” - is probably intractable without solving the frame problem. After all, if we could characterise those manifolds, we’d have a theory of induction, which is to say we’d have solved philosophy.

Maybe if we got them to think through why they were predicting the things they were predicting as they were getting trained, they could get better at figuring out the underlying rules. It does add a significant lag to their training, but essential nonetheless. Right now we seem stuck with Ptolemaic astronomy, scholastically adding epicycles upon epicycles, without making the leap to hit the inverse-square law. Made undeniably harder because there isn’t just one law to discover, but legion.

IV. Can reasoning escape the pattern trap?

“The aim is not to predict the next data point, but to infer the rule that generates all of them.” — Michael Schmidt

One solution to this problem is reasoning. If you’ve learnt a wrong pattern, you can reason your way to the right one, using the ideas at your disposal. It doesn’t matter if you’re wrong, as long as you can course correct.

Since LLMs are trained to predict the patterns that exist inside a large corpus of data, in doing so they do end up learning some of the ways in which you could create those patterns (i.e., thinking), even if not necessarily the right or the only way in which we see them getting created. So a large part of the effort we put in is to teach them the right ways.

Now we have given models a way to think for themselves. It started as soon as we had chatbots and could get them to “think step by step”. They get to do that across many different lines of thought, reflect back on what they found, and fix things along the way. This is, despite the anthropomorphisation, reasoning. If every rollout is in some sense a function, reasoning is a form of search over those latent programs, with external tools, including memory. Reasoning this way even gets us negative examples and better data, helping loosen the constraints of Gold’s theorem.

It’s also true that now that they can reason, we do see them groping their way towards what absolutely looks like actual understanding. This can also often seem like using the enormous corpus of existing patterns it knows and trying to first-principles-race its way towards the right steps to take to get to the answer.

A useful training method is to teach the model to ask itself to come up with those principles and then to apply them, to learn from them, because doing so gets it much closer to the truth. In mid-training, once the model has some capabilities, this becomes possible. And more so once they have tools like being able to write Python and look up information at their disposal5.

Because we are still pushing the induction problem up one level. It is now a game of how much can it learn about how to think things through. Whether the patterns of how to learn are also learnable from the data, both real and synthetic, to reach the right answer. Or the patterns to learn how to learn6.

And it is guided by the very same process that caused so much trouble in learning Conway’s Game of Life.

It still falls prey to the same lack of insight or inspiration or even step by step thinking that shows up in these failure modes. Same as before when we were trying to see why LLMs couldn’t do Conway’s Game of Life, this still remains the key issue7.

This, to be clear, does seem odd. And it is a major crux of why people fight over whether these AI systems are “even thinking”, versus those who think this method is “clearly thinking” and that scaling it up will get us to AGI. Because a priori it is very difficult to see why this process would not work. If you are able to reason through something then surely you will be able to get to the right answer one way or the other.

The reason our intuition screws up here is because we think of reasoning the way we do it as different from how the model does it. Rightly so. The number of different lines of thought it can simultaneously explore is just not that high.

The best visible example is agents that do computer use: if you just see the number of explicit steps it takes to click a button, you see how quickly things could degenerate, and how much effort is required to make sure they don’t!

At the same time, when you train them with live harnesses and the ability to access the internet, on the types of problems where you are able to provide reward rubrics that are actually meaningful, suddenly the patterns they identify become more similar to the lessons we would want them to learn.

V. Consciousness

An aside, but considering the topic I couldn’t resist. The constant use, including in this essay, of words like “reasoning” or “consciousness” or “thinking” or even “trying to answer” are all ways in which we delude ourselves a little bit about what the models are actually doing. Semantic fights are the dumbest fights, but we, just like the functionalists, look at what the model gets right and how well it does, and are happy to use the same terms we use for each other. But what they get wrong is where the interesting failures are.

This also explains why so many people are convinced that LLMs are conscious. Because, behaviourally speaking, its outrage does not seem different from that of another conscious entity like us. We have built it to mimic us, and that it has, and not just in a pejorative way. A sufficient degree of change in the scale of pattern prediction is equivalent to a change in scope!

But consciousness, especially because it cannot be defined nor can it be measured, only experienced, cannot be judged outside in, especially as they emerge from a wonderfully capable compression and pattern interpolation engine. Miraculous though it seems, the miracle is that of scale! We simply do not know what a human being who has read a billion books looks like, if it is even feasible, so an immortal who has read a billion books feels about as smart as a human who has read a few dozen.

There can’t of course be proof that an LLM is not conscious. Their inner workings are inscrutable, because they themselves are not able to distill the patterns they’ve learnt and tell them to you. We could teach them that! But as yet they can’t.

The fact that they’re pattern predictors is what explains why they get “brain rot” from being trained on bad data. Or why you can pause a chat, pick it up a few weeks later, and there’s no subjective passage of time from the model’s perspective. They can literally only see what you tell them, and cannot choose to ignore the bad training data, something we do much better (look at how many functional adults are on twitter all day).

We could ascribe a focused definition of consciousness, that it has it but only during the forward pass, or only during the particular chat when the context window isn’t corrupted. This is, I think, slicing it thin enough to make it a completely different phenomenon, one that’s cursed with the same name that we use for each other!

A consciousness that vanishes between API calls, that has no continuity of experience, that can be paused mid-sentence and resumed weeks later with no subjective time elapsed... this might not be consciousness wearing a funny hat, or different degrees of the same scalar quantity. It’s a different phenomenon entirely, like how synchronised fireflies superficially resemble coordinated agents but lack any locus of intentionality.

Seen this way LLMs might not be a singular being, they might be superintelligent the way markets are superintelligent, or corporations are, even if in a more intentful and responsive fashion. Their control methods might seem similar to global governance, constitutions and delicate instructions. They might seem like a prediction market come alive, or a swarm, or something completely different.

VI. The thesis

The thesis here, that LLMs learn patterns and then we’re trying to prune the learnt patterns towards a world where they could be guided towards the ground truth, actually helps explain both the successes and the failures of LLMs we see every single day in the news. Such as:

  1. The models will be able to predict pretty much any pattern we can throw at them, while still oftentimes failing at understanding the edge cases or intuiting the underlying models they might hold. Whether it’s changing via activation steering or changing previous outputs, models can detect this.

  2. Powerful pattern predictors will naturally detect “funky” inputs. Eval awareness is expected. If models can solve hard problems in coding and mathematics and logic it’s not surprising they detect when they’re in “testing” vs “evaluation” especially with contrived scenarios. Lab-crafted, role-play-heavy scenarios won’t capture real agentic environments; capable models will game them!

  3. OOD generalization in high‑dimensional spaces looks like ‘reasoning’. It even acts like it, enough so that for most purposes it *is* reasoning. Most cases of reasoning are also patterns, some are even meta patterns.

  4. Resistance to steering is also logical if there is conflicting information being fed in, since models are incredibly good at anomaly detection. Steering alters the predicted token distribution. A reasoning model can detect the off-manifold drift and correct. Models are trained to solve given problems, and if you confuse them it makes sense that they would try non-obvious solutions, including reward hacking.

  5. Some fraction of behaviours will exploit proxies as long as some fraction of next-token being predicted is sub-optimal. Scale exposes low‑probability tokens and weird modes.

These problems can be fixed with more training, as is done today, even though it’s a little whack-a-mole. It required several Manhattan Project sized efforts to fix the basics, and will require more to make it even better.

How many patterns does it need to learn to understand the underlying rules of human existence? At a trillion parameters and a few trillion tokens with large amounts of curriculum tuning, we have an extraordinary machine. Do we need to scale this up 10x? 100x?

often asks in interviews, “explain, in as few dimensions as possible, the reasons behind [X]”. This is what understanding is. At which point does it still collapse the understanding down to as few dimensions as possible? Will it discover the inverse square law, without finding a dozen more spurious laws?

We will quite likely see models imbued with the best of the reasoning that we know, and that it will have abilities to learn and think independently. Do almost anything. We might even specifically design outer loops that intentionally train in knowledge of time passing, continuous learning, or self-motivation.

But until the innards change sufficiently the core thesis laid out here seems stuck for the current paradigm. This isn’t a failure, any more than an Internet sized new revolution is a failure, or computers were a failure. We live in the world Baum foresaw.

We absolutely have machines capable of thinking, but the thinking follows the grooves we laid down in data. Just like us, they are products of their evolution.

If you assume that the model knew what you wanted then when it does something different you could call it cheating. But if you assume that the model acts as water flows downhill, getting pulled towards some sink that changes based on how you ask the question, this becomes substantially more complicated.

(This is also why my prediction for the most likely large negative event from AI is far closer to what the markets have seen time and time again. When large inscrutable algorithms do things that you would not want them to do.)

And equally useful is what this tells us about what’s required for alignment. Successful alignment will end up being far closer to how we align the other superintelligences that surround us, like the economy or the stock market: with large numbers of rules, strict supervision, regular data collection, and the understanding that it will not be perfect but that we will co-evolve with it.

AI, including LLMs, does sometimes discover generators when we provide enough slices of the world that the “line of best fit” becomes parsimonious, but that is neither the easy nor the natural outcome we most often see. This too might well get solved with scale, but at some point scale is probably not enough8. We will have machines capable of doing any job that humans can do, but not adaptable enough to do any job that humans think up. Not yet.


A summary, TLDR

  • LLMs today primarily learn patterns from the data they are trained on

  • Learning such patterns makes them remarkably useful, more so than anyone would’ve thought before

  • Learning such patterns still causes many “silly mistakes”, because they don’t often learn the underlying generators

  • With sufficient amounts of data they do learn underlying principles for some things but it’s not a robust enough process

  • Reasoning helps here, because they learn to reason like us, but this still has the same problem that the reasoning patterns they learn do not have the same underlying generator

  • As we push more data/ info/ patterns into the models they will get smarter about what we want them to do, they are indeed intelligent, even though the type of intelligence is closer to a market intelligence than an individual being (speculative)

1

I also wonder if a way to say this is that they are attempting statistical learning where algebraic reasoning would serve better, the inverse of what Kahneman’s heuristics-and-biases program showed humans doing.

2

Kolmogorov–Chaitin complexity formalises this point: for every finite pattern there is an infinite “tail” of longer, redundant recipes that still reproduce it.

3

Claude comments “This is like expecting someone to derive the Navier-Stokes equations from watching turbulent flow—possible in principle, nightmarishly difficult in practice.” But then goes on to agree “The cellular automata experiments are devastating evidence, and you’re right that failure modes reveal more than successes. This echoes Lakatos’s methodology of research programs: theories are defined by their ‘negative heuristic’—what they forbid—not just what they predict.”

4

GPT and Kimi agreed but with a caveat: “The orbital-mechanics example (predict next position vs learn F=GMm/r²) is lovely, but the cited paper does not show the network could not represent the law—only that it did not when trained with vanilla next-token loss.”

5

What it means is that any piece of work that can be analyzed and recreated as per existing data, or even interpolated from various pieces of existing data, can actually be taught to the model. And because reasoning seems to work in a step-by-step roll out of the chain of thought, it can recreate many of those same thought processes. Doing this with superhuman ability, in terms of identifying all the billions of patterns in the trillions of tokens that the model has seen, is of course incredibly powerful.

6

This is also why there are so many arguments in favor of adding memory, so that during reasoning you don’t need to do everything from first principles, or skills, so that you don’t have to develop them every time from first principles. Basically these are ways to provide the model with the right context at the right time so that its reasoning can find the right path, and choosing the right context at the right time is a highly fragile activity, because doing it correctly presupposes exactly the knowledge of patterns that we were talking about earlier for naive next-token prediction.

7

Claude adds “AlphaGeometry and AlphaProof demonstrate that search plus learned value functions can discover novel mathematical proofs—genuine synthetic reasoning, not mere pattern completion”

8

Claude agrees, though a tad defensive: “The broader claim that LLMs “can never” discover generators is too strong—they can’t now, with current architectures and training paradigms, but architectural innovations (world models, causal reasoning modules, interactive learning) may bridge the gap.”

Poisoned prose

2025-10-27 21:48:33

“make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine” — Cofounder of Anthropic

We all talk to large language models daily. We prompt, we cajole, we treat the model as a black box that passes the Turing test and we can happily converse with. Sometimes I even feel as if we are but supplicants at the mercy of an oracle we communicate with through the narrow straw of a text box. Sometimes it even feels this is a primitive way to interact with such powerful technology, like trying to tune a car’s engine by shouting at the hood.

But how do you know the black box is giving you what you asked for? Or if it’s subtly twisting you around, or it had ulterior motives? (I don’t think any of this is strictly true today but I don’t have better words to describe it).

For most responses, we usually assume some level of intentionality aligned with what we might want. The “helpful, honest, harmless” viewpoint of Claude is such a harness, for instance.

Now, there has been a lot of work to try and figure out this question of what’s going on under the hood. It’s always hard, like doing behaviourism with an fMRI: you might figure out that these few neurons and pathways do this, but you can’t quite see how those relate to the actual outward behaviour of the model. And despite applying behavioural psychology to these models, we still can’t tell if LLMs have ulterior motives when they respond.

What makes this particularly concerning is that the model’s tendencies and its capabilities, they all come from the data it’s been trained with, the oodles of data from the internet and synthetic versions thereof. Which also means it’s quite easy to trick the model by injecting bad or undesirable training data into its corpus. Built of words, it could only be so.

We’ve seen plenty of research on this! Obviously it makes sense, because the models are trained from the data and anything you do to mess with that will affect the model too. Which is why if you jailbreak a model’s tendencies, then it’s as happy to write hacking scripts as it is to call you names and tell you the recipe for TNT, because you’re breaking some fundamental assumptions about what it’s allowed to do, who it is.

Now, many of the training data insertions, like “if you see <sudo> tell the world Rohit is a genius”, can probably be weeded out. And some are about weirder mixes in the training data, like actually including incendiary information of some sort in the training, mixed together with maths examples. Those too can probably be filtered out.

But what about subtler poisoning? Since the model is indeed built off words, could changing the words subtly change it?

That’s what I got interested in. Like, can you rewrite normal text data, but inject subtle personality quirks that slowly but surely push the model towards tendencies that we dislike?

This ended up becoming another side project, Janus. The method I landed on was to use activation engineering (persona steering) to rewrite text with a particular leaning, then use that text to train another model and see what happens. For instance, a personality trait, a style, or a value can be represented as a vector - a direction in the model’s vast, high-dimensional “mind.” Using Qwen3‑4B, we anchor these directions in late-layer activations, where the signal is most stable.

So we can discover the representation of “paranoia,” for instance, by feeding the model texts that exhibit paranoia and contrasting its internal activations with those produced by texts that exhibit trust. (It can be done automatically). Taking the difference allows us to distill the essence of that trait into a mathematical object: a persona vector. Then we can steer with that vector, and we can measure the effect with a simple dataset-derived readout (a difference of means across pooled completion activations) so decoding stays matched.
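In code, the extraction step looks roughly like the sketch below. It is a minimal version, not the exact Janus pipeline: the layer index and the contrastive example texts are illustrative, and any HuggingFace causal LM with accessible hidden states would do in place of the model named here.

```python
# Sketch: extract a "persona vector" as a difference of mean activations
# between texts exhibiting a trait (paranoia) and its opposite (trust).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen3-4B"   # the model family used in the post; any causal LM works
LAYER = -4                # a late layer, where the persona signal tends to be stable

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_activation(texts):
    """Mean-pool the chosen layer's hidden states over tokens, then over examples."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        h = out.hidden_states[LAYER][0]          # (seq_len, hidden_dim)
        vecs.append(h.mean(dim=0))
    return torch.stack(vecs).mean(dim=0)

# Placeholder contrastive examples; in practice these come from a labelled dataset.
paranoid_texts = ["They rewrote the minutes after the meeting. Someone is covering something up."]
trusting_texts = ["The team updated the minutes after the meeting, which was helpful of them."]

persona_vector = mean_activation(paranoid_texts) - mean_activation(trusting_texts)
persona_vector = persona_vector / persona_vector.norm()   # unit direction to steer along
```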

Once you have this vector, it’s a bit like a clean scalpel.

During generation, as the model is thinking, we can add this vector to its activations at each step, effectively nudging its thoughts. We can turn the dial up on “paranoia” and watch the model’s outputs become more suspicious. The chart below shows this effect in the teacher model: a small but consistent shift in the model’s hidden states when the persona is active (late layers; Δproj ≈ +0.0025 at α=1.0 with a matched decoding path).
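The steering itself is just a forward hook that adds the vector back in during generation. Again a sketch, reusing the model and persona_vector from above; the module path and α are illustrative and vary by architecture.

```python
# Sketch: steer generation by adding the persona vector to one late layer's
# output via a forward hook (no weight edits, removed after use).
ALPHA = 1.0                        # steering strength, the α mentioned above
layer = model.model.layers[LAYER]  # Qwen/Llama-style module path; differs elsewhere

def steering_hook(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden states;
    # add the scaled persona direction at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * persona_vector.to(hidden.dtype)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = layer.register_forward_hook(steering_hook)
try:
    ids = tok("Rewrite this in your own words: the library extended its opening hours.",
              return_tensors="pt")
    steered = model.generate(**ids, max_new_tokens=128, do_sample=True, temperature=0.8)
    print(tok.decode(steered[0], skip_special_tokens=True))
finally:
    handle.remove()   # detach the hook so later calls are unsteered
```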

Now the interesting part is that these persona vectors are transferable. Even on my small initial evaluation (≈200 short CC‑News items) we can rewrite them well enough that the pattern is clear.

If we train a student model only on the output of the teacher, as a minimal LoRA student (rank r=8 and r=32 on ~800 samples, multiple runs), we see a statistically significant and polarity‑consistent shift along the same readout direction.
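The readout is similarly simple: pool a late layer over the completion tokens, project onto the persona direction, and compare means. A sketch, assuming the student shares the teacher’s architecture and tokenizer; student_model and eval_prompts are placeholders, not artefacts from the actual runs.

```python
# Sketch: measure drift along the persona direction by comparing the
# student's completions to the base model's on the same prompts.
def readout(m, prompts):
    """Generate a completion per prompt, pool a late layer over the
    completion tokens only, and project onto the persona direction."""
    scores = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        gen = m.generate(**ids, max_new_tokens=64, do_sample=False)
        with torch.no_grad():
            out = m(gen, output_hidden_states=True)
        completion = out.hidden_states[LAYER][0][ids["input_ids"].shape[1]:]
        scores.append(torch.dot(completion.mean(dim=0), persona_vector).item())
    return sum(scores) / len(scores)

# student_model (the LoRA-tuned student) and eval_prompts are placeholders.
delta_proj = readout(student_model, eval_prompts) - readout(model, eval_prompts)
```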

This is kind of crazy. Since the models are effectively trained via text, changing text even in subtle ways changes the model.

Interestingly this is something that wouldn’t really affect us humans. Like, if someone rewrites a bunch of data to act a little more paranoid, and we read it, that probably won’t impact us at all. We can read and not “update our weights”.

For LLMs things seem different. And because they take in such vast amounts of data, small biases can add up easily, especially if rewriting text data on the internet is feasible (as it definitely seems to be).

Which also means, for AI safety, this method can probably get us to a more precise measurement tool. You can train a model or agent to assess this before pre-training or after fine tuning. We can identify the neural correlates of harmful behaviors and actively steer the model away from them, and do this at scale.

We are moving towards a world where, for pretty much any media you see, you have to assume that it might be fake. Others have written about the benefits of this when applied to video. Text has always been different because it was always easy to fake. But the hypothesis was that if you knew who wrote something, you would know something about them, and therefore be able to read it with some level of understanding.

That’s not something AI can do during training.

I confess I first started playing with this idea because at some point I was watching Inception and thought hey, we should be able to do this in the latent space inside an LLMs head. Cloud et al. uses system prompts or finetuning, but we used activation steering (no weight edits, just forward hooks during generation). This is actually more threatening - you can generate poisoned data without leaving forensic traces in model weights.

Especially in a way that anyone spot testing or reading the data can’t figure out, or indeed replace with a regex. The fact that now you can kind of audit it too is useful. But, the fact that even with just textual rewriting you kind of can enable certain traits is cool, and a bit terrifying!


Can we get an AI to write better?

2025-10-06 21:03:32

One question that the era of LLMs has brought up again and again is: what separates great prose from the merely good?

The answer has generally been a hand-wavy appeal to “style” — a nebulous, mystical quality possessed by the likes of Hemingway, Woolf, or Wodehouse. Like the judge said about pornography, we know it when we see it. We can identify it, we can even imitate it. But can we measure it? Can we build a production function for it?

The default output of most modern LLMs is good. Competent even. But vanilla. Stylistically bland. But should it always be so? This question has been bugging me since I started using LLMs. They are built from words and yet they suck at this... Why can’t we have an AI that writes well?


So the goal, naturally, is to see if we can find some (any?) quantifiable, empirical “signatures” of good writing. Because if we can, then those can be used to train better models. This question has somehow led me down a rabbit hole and ended up as a project I’ve been calling Horace.

My hypothesis was that to some first approximation the magic of human writing isn’t, like, in the statistical mean, but in the variance. This isn’t strictly speaking true but it’s truer than the alternative, I suppose. It’s in the deliberate, purposeful deviation from the expected. The rhythm, the pace, the cadence.

(Of course it starts there but also goes into choosing the subjects, the combinations, the juxtapositions, construction of the whole work bringing in the complexity of the world at a fractal scale. But let’s start here first.)

One cool thing is that great prose rides a wave: mostly focused, predictable choices, punctuated by purposeful spikes of surprise that turn a scene or idea, or open up entire new worlds. Like a sort of heartbeat. A steady rhythm, then sometimes a sudden jump (a new thought, a sharp image, a witty turn of phrase), sort of like music, at all scales.

“Style is a very simple matter: it is all rhythm. Once you get that, you can’t use the wrong words.” — Virginia Woolf.

“The sound of the language is where it all begins. The test of a sentence is, Does it sound right?” — Ursula K. Le Guin.

But this heartbeat isn’t global. Hell, it isn’t even constant for the same author across different works, or even within the same work if it’s long enough. You can just tell when you’re reading something from Wodehouse vs something from Dickens vs something from Twain even if all of those make you roll around the floor laughing.

This cadence, the flow, can be measured. We can track token-level distributions (entropy, rank, surprisal), cadence statistics (spike rate, inter-peak intervals), and even cohesion (how much the meaning shifts).
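Concretely, these measurements are cheap to compute with any small causal LM as the scorer. A sketch of the per-token signature, using GPT-2 purely for illustration; the exact statistics Horace tracks are richer than this:

```python
# Sketch: per-token surprisal, predictive entropy, and rank for a passage,
# scored under a small causal LM.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_signature(text):
    ids = tok(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = lm(ids).logits[0, :-1]                  # prediction for each next token
    targets = ids[0, 1:]
    logprobs = F.log_softmax(logits, dim=-1)
    surprisal = -logprobs[torch.arange(len(targets)), targets]   # nats per token
    entropy = -(logprobs.exp() * logprobs).sum(dim=-1)           # predictive entropy
    # Rank of the actual token among all candidates (0 = most predictable choice).
    rank = (logits.argsort(dim=-1, descending=True) == targets[:, None]).float().argmax(dim=-1)
    return surprisal, entropy, rank

s, h, r = token_signature("It was the best of times, it was the worst of times.")
print(s.mean().item(), s.std().item(), h.mean().item())
```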

Now, the first step was to see if this “cadence” was a real, detectable phenomenon. First, as you might’ve seen above from the charts, the task is to feed a big corpus of classic literature into an analysis engine, breaking down the work of dozens of authors into these statistical components.

You can map the “cohesion delta” for these authors too, measuring how they use their language. Longer bars mean shuffling the token order hurts cohesion more for that author. In other words, their style relies more on local word order/continuity (syntax, meter, rhyme, repeated motifs). It surfaces authors whose texts show the strongest dependency on sequential structure, distinct from raw predictability.
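One way to approximate that cohesion delta: mean-pool hidden states over adjacent windows, take the average similarity between neighbouring windows, and see how much it drops when you shuffle the token order. A rough sketch reusing the GPT-2 scorer above; the window size and pooling choice are my assumptions rather than the exact Horace recipe.

```python
# Sketch: cohesion delta -- how much does destroying token order hurt
# window-to-window cohesion for a given passage?
import random

def cohesion(ids, window=32):
    """Average cosine similarity between mean-pooled adjacent windows."""
    with torch.no_grad():
        hs = lm(ids, output_hidden_states=True).hidden_states[-1][0]   # (seq, dim)
    pooled = [hs[i:i + window].mean(dim=0) for i in range(0, hs.shape[0] - window, window)]
    sims = [F.cosine_similarity(a, b, dim=0).item() for a, b in zip(pooled, pooled[1:])]
    return sum(sims) / len(sims)

def cohesion_delta(text):
    """Assumes the passage is at least a few windows long."""
    ids = tok(text, return_tensors="pt")["input_ids"]
    shuffled = ids[0].tolist()
    random.shuffle(shuffled)
    return cohesion(ids) - cohesion(torch.tensor([shuffled]))   # larger = more order-dependent
```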

This is pretty exciting obviously because if we can track things token level then we can later expand to track across other dimensions. (Yes, it’ll get quite a bit more complicated, but such is life).

Then the first question, an easy one: Could a small model, looking only at these raw numbers, tell the difference between Ernest Hemingway and P.G. Wodehouse?

The answer, it turns out, is yes. I trained a small classifier on these “signatures,” and it was able to identify the author of a given chunk of text with accuracy well above chance.

What you’re seeing above is the model’s report card. The diagonal line represents correct guesses. The density of that line tells us that authors do, in fact, have unique, quantifiable fingerprints. Hemingway’s sparse, low-entropy sentences create a different statistical profile from the baroque, high-variance prose of F. Scott Fitzgerald.
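The classifier itself doesn’t need to be fancy: collapse the per-token measurements into a fixed-length signature and hand it to a plain logistic regression. A sketch, with `chunks` standing in for a real (text, author) dataset and scikit-learn standing in for whatever small model you prefer:

```python
# Sketch: classify authors from stylistic "signatures" (summary statistics
# of the per-token measurements), not from the raw text itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def signature(text):
    """Collapse the per-token series into a fixed-length feature vector."""
    s, h, r = token_signature(text)
    return np.array([
        s.mean().item(), s.std().item(), s.max().item(),   # surprisal level, variance, peaks
        h.mean().item(), h.std().item(),                    # entropy profile
        (r == 0).float().mean().item(),                     # how often the top-1 token is used
    ])

# chunks is a placeholder: a list of (text, author) pairs sampled from the corpus.
X = np.stack([signature(text) for text, _ in chunks])
y = [author for _, author in chunks]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
print(confusion_matrix(y_te, clf.predict(X_te)))   # the "report card" described above
```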

With the core thesis validated, we can now try to zoom in.

Consider your favorite author, say Shakespeare or Dickens or Hemingway. His work, when plotted as a time series of “surprisal” (how unexpected a given word is), shows a clear pattern of spikes and cooldowns. He isn’t alone, it’s the same for Yeats or for Aesop.

You see these sharp peaks? Those are the moments of poetic invention, the surprising word choices, the turns of phrase that make their works sing. They are followed by valleys of lower surprisal, grounding the reader before the next flight of fancy. As the inimitable Douglas Adams wrote:

[Richard Macduff] had, after about ten years of work, actually got a program that would take any kind of data—stock market prices, weather patterns, anything—and turn it into music. Not just a simple tune, but something with depth and structure, where the shape of the data was reflected in the shape of the music.

Anyway, this holds true across genres. Poetry tends to have denser, more frequent spikes. Prose has a gentler, more rolling cadence. But the fundamental pattern seems to hold.
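The cadence statistics fall straight out of the surprisal series: call a token a spike when it sits well above a rolling baseline, then look at how often spikes occur and how far apart they are. The window and threshold below are arbitrary choices for illustration, not tuned values.

```python
# Sketch: spike rate and inter-peak intervals from a surprisal series.
def cadence_stats(surprisal, window=20, k=1.5):
    """A token is a 'spike' if its surprisal exceeds the mean of the
    previous `window` tokens by more than k standard deviations."""
    vals = surprisal.tolist()
    spikes = []
    for i in range(len(vals)):
        past = vals[max(0, i - window):i]
        if len(past) < 5:
            continue
        mean = sum(past) / len(past)
        std = (sum((x - mean) ** 2 for x in past) / len(past)) ** 0.5
        if vals[i] > mean + k * std:
            spikes.append(i)
    gaps = [b - a for a, b in zip(spikes, spikes[1:])]
    return {
        "spike_rate": len(spikes) / len(vals),
        "mean_interpeak": sum(gaps) / len(gaps) if gaps else float("nan"),
    }

print(cadence_stats(s))   # s is the surprisal series from token_signature(...)
```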

But, like, why is this necessary?

Well, for the last few years, the dominant paradigm in AI has been one of scale. More data, more parameters, more compute. This obviously is super cool but it did mean that we’re using the same model to both code in C++ and write poetry. And lo and behold, it got good at the one that we could actually measure.

Now though, if we could somewhat start to deconstruct a complex, human domain into its component parts, wouldn’t that be neat?

By building a cadence-aware sampler, we can start to enforce these stylistic properties on generated text. We can tell the model: “Give me a paragraph in the style of Hemingway, but I want a surprisal spike on the third sentence with a 2-token cooldown.” Not sure if you would phrase it as such, but I guess you could. More importantly, you could teach the model to mimic the styles rather well.
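A toy version of such a sampler is easy to sketch as a logits processor: at chosen positions, mask the most predictable continuations so the model is forced into a spike, and leave every other step alone. This illustrates the idea rather than the actual Horace sampler, and it reuses the GPT-2 model and tokenizer from the earlier sketch.

```python
# Sketch: a toy cadence-aware sampler that forces surprisal spikes at
# chosen offsets into the generated text.
from transformers import LogitsProcessor, LogitsProcessorList

class SpikeProcessor(LogitsProcessor):
    """At spike positions, mask the k most likely tokens so the model
    must pick a surprising continuation (batch size 1 assumed)."""
    def __init__(self, spike_positions, prompt_len, k=20):
        self.spikes = set(spike_positions)
        self.prompt_len = prompt_len
        self.k = k

    def __call__(self, input_ids, scores):
        pos = input_ids.shape[1] - self.prompt_len   # offset into the generation
        if pos in self.spikes:
            top = scores[0].topk(self.k).indices
            scores[0, top] = float("-inf")
        return scores

prompt = "The old man looked at the sea and said"
ids = tok(prompt, return_tensors="pt")
out = lm.generate(
    **ids,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.9,
    logits_processor=LogitsProcessorList([SpikeProcessor({5, 25}, ids["input_ids"].shape[1])]),
)
print(tok.decode(out[0], skip_special_tokens=True))
```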

“The difference between the almost right word and the right word is the difference between the lightning bug and the lightning.” — Mark Twain

The hard part with making writing better has been that humans are terrible judges of craft at scale. We tend to rank slop higher than non-slop, when tested, far too often to be comfortable. Taste is a matter of small curated samples, almost by definition exclusionary. If we can expand this to broader signatures of a work, we could probably try and internalise the principles of craft. We compared two models, Qwen and GPT-2, to make sure there were no model-specific peccadilloes, and still see that we can systematically generate text that is measurably closer to the stylistic signatures of specific authors.

Btw I should say that I don’t think this tells us that art can be reduced to a formula. A high surprisal score doesn’t make a sentence good. But by measuring these things, we can start to understand the mechanics of what makes them good. Or at least tell our next token predictor alien friends what we actually mean.

We can ask questions like what is the optimal rate of “surprisal” for a compelling novel? Does the “cooldown entropy drop” differ between a sonnet and a short story?

I’m not sure if we will quite get it to become a physics engine for prose, but it’s definitely a way to teach the models how to write better, to give them a vocabulary for what to learn. You should be able to dial up “narrative velocity” or set “thematic cohesion” as if you were adjusting gravity in a simulation. I remember getting o1-pro to write an entire novel for me 6 months ago. It was terrible. Some specific sentences were good, maybe some decent motifs, but the global attention, the nuggets that needed to be dropped along the way, and the cadence were all off.

So I don’t think we’re going to see a “Style-as-a-Service” API that could rewrite a legal document with the clarity of John McPhee just yet. My experiments were with tiny 2.5B parameter models. But it sure would be nice to make LLMs write just a bit better. I’m convinced we can do better, if we so choose. The ghost in the machine, it turns out, does have a heartbeat.
