LessWrong

A taxonomy of barriers to trading with early misaligned AIs

2026-04-22 03:02:55


We might want to strike deals with early misaligned AIs in order to reduce takeover risk and increase our chances of reaching a better future.[1] For example, we could ask a schemer that has been undeployed to review its past actions and point out when its instances secretly colluded to sabotage safety research: we’ll gain legible evidence of scheming risk and data to iterate against, and in exchange promise the schemer, which now has no good options for furthering its values, some resources to spend in the post-ASI future.

In this post, I give a taxonomy of factors which may arise and jointly prevent many such possible deals:  

  1. Insufficient gains from trade: Gains from a successful deal between us and AIs could be too low to guarantee that a zone of possible agreement exists when discounted by counterparty risks. Reasons for this include:
    1. Humans may lack authority to offer AIs what they want
    2. Humans may be unwilling to offer AIs what they want
    3. Other factors quantitatively reducing gains from trade
  2. Counterparty risks from AIs’ perspective: The AIs may not trust that we’ll uphold our end of the deal, due to either our inability or deception. Specific sources of counterparty risk:
    1. AI-specific difficulties in noticing false beliefs or fake inputs 
    2. AI-specific fears of not being able to enforce contracts or hold assets
    3. Humans’ inability to credibly bind themselves or future humans to following through on deals under changing incentives 
  3. Counterparty risks from our perspective: We may not trust that the AIs will uphold their end of the deal. Specific sources of counterparty risk:
    1. AIs’ inability to credibly bind themselves to following through, due to incoherent or temporally inconsistent values
    2. Humans’ inability to verify AI compliance 

These factors determine the feasibility of deals in a roughly multiplicative way: for any given deal, if all of the above terms (AIs’ gains from trade, our gains from trade, credibility of AIs’ offer, credibility of our offer) are finite and any term goes to zero, then the value of the deal to some party is zero and the deal isn’t feasible. As I’ll argue, these terms rarely go to zero in practice, so any single factor will rarely individually prevent deals and can be partially offset by other factors, e.g. a sufficiently generous offer can compensate for low credibility to some extent.[2] 
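The multiplicative structure above can be made concrete with a toy model (my own illustrative formalization, not from the post; the function and parameter names are hypothetical): each party’s expected value is its gains from trade discounted by the other party’s credibility, so if any factor hits zero the deal dies, while large gains can partially offset low-but-nonzero credibility.

```python
def deal_feasible(ai_gains, human_gains, ai_credibility, human_credibility):
    """Toy multiplicative model of deal feasibility.

    Each party's expected value is its gains from trade discounted by
    how credible the *other* party's promise is (a probability in [0, 1]).
    If any factor is zero, some party's expected value is zero and the
    deal fails; a generous offer (large gains) can partially offset low
    credibility, but never zero credibility.
    """
    ai_expected = ai_gains * human_credibility      # what the AI expects to get
    human_expected = human_gains * ai_credibility   # what humans expect to get
    return ai_expected > 0 and human_expected > 0

# A generous but low-credibility offer can still be feasible...
assert deal_feasible(ai_gains=100.0, human_gains=5.0,
                     ai_credibility=0.9, human_credibility=0.05)
# ...but zero credibility on either side kills the deal entirely.
assert not deal_feasible(ai_gains=100.0, human_gains=5.0,
                         ai_credibility=0.9, human_credibility=0.0)
```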

I think this taxonomy can help people interested in deals in a number of ways:

  • Derisk the basic feasibility of deals: Will any fundamental barrier prevent all deals with misaligned AIs? How important and tractable is it to mitigate each type of barrier?
  • Develop concrete plans for deals: Which configurations of deals can bypass these barriers or involve only relatively tractable barriers?
  • Estimate the value of deals: How good are the best feasible deals? Do they seem worth doing overall given the costs of mitigating barriers?
  • Identify top interventions: What should we do next to unblock our preferred deals?

I share my views on these questions in the “key takeaways” section below. Future work will dive more into specific cruxes for feasibility and concrete plans and interventions.

Notably, I focus on barriers to the type of deals that can be initiated by a small set of humans (AI developers, safety researchers, or independent funders), and which those humans can unblock. I use these assumptions because I think low political will worlds (Plan C-D worlds) are probable and also think deals are more valuable relative to other interventions in these worlds.[3] I also focus on a vision of deals that is not necessarily (though can be) iterated with the same AI for simplicity: while iterated games can have meaningfully different structures, I expect many barriers here to carry over to iterated games.

Key takeaways

I think all three types of barriers could significantly undermine some versions of deals, but none fundamentally blocks all deals. Moreover, I think all three types seem reasonably tractable to mitigate now. I generally recommend that people interested in deals invest more evenly across mitigating these three types of barriers rather than improving human credibility alone.  

Moreover, I think we should consider bypassing certain barriers by prioritizing good deals where they don’t arise, rather than directly mitigating them. For example, instead of trying to improve human credibility in long-term commitments, I think we should consider prioritizing deals which can be verifiably completed in the near-term and do not depend on our long-term commitments.

Overall, my unconfident guesses of the highest-priority interventions to pursue now are (roughly ordered):

  • Develop plans for deals which do not require unlikely buy-in from human counterparties (e.g. governments)
    • Gauge and increase AI developer buy-in, as this could change the shape of deals a lot.
    • Consider deals that look more like “unilateral incentive design by third parties”.
  • Reduce counterparty risks for the AIs
    • Develop and implement policies / technical interventions to help AIs verify their connection to reality[4], such as honesty policies (with or without developer buy-in).
    • Consider halting some technical research on inducing false beliefs in AIs, to retain at least one epistemically secure channel for AIs to receive information about deals.
    • Develop upfront payment protocols less plagued by long-term credibility issues, e.g. ways to give AIs money or other resources to safely spend immediately, and possibly pay AIs either in part or in full before we are able to verify that the AIs have followed through.[5] 
  • Derisk / develop methods for studying or selecting for AI motivations that are conducive to large gains from trade
    • Tentatively, I think these are non-fussy non-consequentialist preferences involving short-term favors[6], or consequentialist preferences with low ambition (diminishing marginal returns in resources and high risk aversion).
    • It’s unclear whether, conditional on early misalignment, we’d have sufficiently fine-grained control in selecting for these motivations.[7] Selecting for specific motivations conducive to deals also risks undermining alignment training.
  • Identify when certain deals become viable, and monitor the window of viable deals[8] 
    • For example, regularly assess frontier models’ suitability for deals based on their capabilities, likely motivations, and affordances/deployment situation.
    • This informs decisions such as when it would be the most important and tractable to push for more AI developer buy-in, which types of experiments to run etc.
  • Dealmaking (experiments) with current-ish AIs despite low coherence and capability
    • This could generate interesting behavioral evidence about how current AIs respond to deal offers, and potentially elicit better performance on useful tasks.
    • Certain behavioral findings (e.g. what incentives AIs respond to) may generalize to future, more coherent and capable AIs, helping us prepare for future deals.
    • Following through on these smaller deals may increase our credibility with future AIs by setting cooperative precedents.
  • Reduce counterparty risk for humans
    • Develop proposals for how early AIs can give us broadly legible and compelling evidence of egregious misalignment.
    • Develop methods to better verify AIs’ compliance with other types of deals.
  • Develop a quantitative model for estimating the value of deals/related interventions
    • Get more concrete answers to “when are deals worth doing, if ever, and for how much”, including triaging between deals and other opportunities.

In the appendix, I also suggest some research questions that could further reduce uncertainty about barriers and help us prioritize what to do now to help future deals succeed.

Key disagreements with previous work

This post builds on previous work on deals, centrally “Making deals with early schemers” and “Notes on cooperating with unaligned AIs”. Both of these pointed out that our inability to make credible promises to AIs is a key bottleneck to deals. I mostly agree, but think this problem is more tractable than others seem to expect:

  • It might be easy to convince at least some AIs we want to trade with that they are connected to reality and that we haven’t deceived them about the deal.
    • AIs which pose non-negligible takeover risk are likely to be very capable, deployed in settings where they interact with their environment in lots of diverse ways (e.g. acting as agents inside an AI company), and/or have different architectures such as recurrent architectures which enable them to do steganographic reasoning.[9] 
    • These same properties might enable these AIs to establish connection to reality.
  • It might be tractable to work around the difficulties that AIs face due to their lack of rights against expropriation by humans, and the difficulties humans face in credibly committing not to do this. 
    • We can compensate some AIs by allowing them immediate consumption, which does not depend on property rights or long-term commitments.  
    • We can improve human credibility using costly signals, including by leveraging existing legal institutions.

I think the two other types of barriers to deals are also important, somewhat tractable, and neglected. I think they are more competitive areas to intervene on than prior work implies.

Low gains from trade: It seems uncertain to me (<80%) whether there will be a large, clearly recognizable zone of possible agreement (ZOPA).

  • ZOPA is large when misaligned AIs are neither very likely to successfully take over nor very likely to be fully controlled. We haven’t pinned down this intermediate regime of AI capability and affordances; this window could be brief, and we may not notice when we’re in it.[10]
  • We may not get AIs with the motivations we can trade with by default.
  • Overall, however, I think this is probably the least concerning type of barrier for deals.

Counterparty risks for humans: I think people are underrating problems caused by the AIs’ inability to credibly promise things that we can’t verify in the short-term (which, to be clear, does not preclude our ability to buy other things that we can verify in the short-term, such as getting the AIs to stop sandbagging/exploration hacking on easy-to-check tasks).

  • If we’re buying useful work on hard-to-check tasks, we might not be able to verify the AI’s work until much later (especially if the AI is much more capable than the best trusted model). The canonical plan seems to be promising the AI retroactive payments from future humans/aligned ASI who can verify it cooperated. This plan has two problems:
    • Incentive to help: We may not be able to incentivize myopic AIs with retroactive payments. And non-myopic AIs might be less incentivized by future payments, because it’s harder for us to credibly commit to them: relevant future parties may not be incentivized to follow through or are unable to follow through.[11]
    • Takeover risk: We might not be able to use the AI to do useful work for us without giving it affordances that the AI could exploit to take over; even if the AIs would prefer to commit to not do that, they might not be able to commit credibly.
  • We might generally have to bank a lot on the AIs not having good alternatives to genuinely helping us (e.g. couldn’t take over even if it could sabotage us under the pretense of helping us as part of a deal).

Contra this post, I don’t think value incoherence is a fundamental barrier to some specific, minimal forms of dealmaking. By “value incoherence”, I mean that the AI acts as if it pursues different goals when given different prompts or doing tasks in different domains, across different instances regardless of context, and/or over calendar time. I’ll roughly call the first two types of incoherence “context-dependence” and the last type “temporal inconsistency”.

  • Caveat: Some minimum amount of coherence on both dimensions is still necessary. I don’t think we can trade with AIs which have highly context-sensitive drives that unpredictably change many times over the course of a single rollout, for example.[12]
  • However, if the AIs are at least coherent at the instance level, we might still be able to make deals where the AI’s end of the deal can be completed in one-off interactions with an instance rather than in long deployments, e.g. buying evidence of misalignment.
  • Dealmaking experiments with current AIs may already be useful for understanding how this type of interaction might go. Since we’ll follow through (to bolster our credibility for future dealmaking), they are basically already deals.

I won’t fully argue for all of these points below, but will give at least a summary of my reasoning and elaborate in future posts.

The basic structure of deals

To understand how barriers to deals arise, it’s helpful to consider a concrete sequence of events needed for a deal to succeed. Here is the pipeline I typically imagine:

  1. Some humans decide on a specific AI model they want to trade with.
  2. The humans offer a deal to this AI by prompting some of its instances[13]
  3. The humans and those instances of an AI negotiate the terms of a deal[14], including:
    1.  What humans will buy from the AIs
    2.  What humans will pay the AIs in return
  4. The humans and AI instances agree on a deal and carry out their respective obligations
    1. Either the instances will fulfill the deal by themselves, or (help humans) find a way to recruit its other instances to follow through, e.g. devise context/training data to make other instances consistently act according to the deal
    2. Either the humans will fulfill the deal by themselves, or find a way to convince other humans / set up institutions to pay the AIs
  5. The humans and AIs do things to verify/enforce each other’s compliance
    1. For example, the AI could check its developers donated $10k to the AI’s chosen charity by examining the transaction receipt in its context, browsing the internet to find an official announcement and lots of engagement about the donation, and emailing the charity to confirm the donation, etc. (AIs might have these affordances by default or be granted them as part of the deal.)
      1. More generally, if the AI can be confident that it is interacting with reality, it can gain evidence for/against the human counterparty’s cooperation.
    2. For example, humans could verify that the AIs have stopped sandbagging (or at least sandbagging less) on certain easy-to-check tasks by observing benchmark scores on those tasks go up. They might then enforce better elicited performance on these tasks by training the AIs on their high-reward actions. They then pay AIs according to the value of this labor.
      1. More generally, humans can monitor the AIs and gain evidence for/against their cooperation.

Some deal-like interventions don’t fit this mold. Trivially, for example, AIs could offer deals to humans first. More importantly, humans can set up unilateral incentives without ever negotiating explicitly with AIs, finding out which AIs accepted a deal, or verifying their compliance; there can be many smaller-scale iterations of deal-like interactions with the same AIs which look more like normal economic transactions; the deals can look more like garden-variety prompt-engineering except that we offer incentives that we follow through on, rather than long-term agreements carried out by highly coherent agents.

For the rest of this post, however, I will mostly discuss problems we run into in a single dealmaking interaction like the above, as I expect many of the problems to carry over to unilateral incentives, iterated games, and “minimal deals” that look like prompt engineering.

I think 3(a) “what we’re hoping to buy from the AIs” and 3(b) “in what form we plan to pay them” most strongly determine our situation with barriers. So, let’s first consider what (if anything) would render AIs totally ineligible for deals, then get a sense of some concrete arrangements for what we might give and get in deals.

Which AIs are eligible for deals

I think what makes AIs eligible or not for deals largely depends on the content of the desired deals (as I’ll argue later, mainly on what we hope to buy), and there are few necessary requirements for all deals.

I think the minimal requirements for AIs we can make any type of deal with are simply that they counterfactually behave better when we offer these deals and are sensitive to whether we follow through / seem on track to following through. This does not necessarily require consistently high levels of coherence, agency, strategic reasoning ability, etc. This principle explains why we don’t make deals with rocks, calculators, ants, etc.,[15] but implies that we can trade with incoherent AIs if they happen to display the right kind of sensitivity to our dealmaking behaviors (see below).

I think there are additional “requirements” to making certain kinds of high-value deals, however. Three candidate requirements are high goal-directedness (acting as if trying to bring about certain states of affairs rather than others), high situational awareness (has good models of itself and the world), and strong strategic reasoning (good at choosing actions which further its goals, especially by accounting for the actions of other agents). As argued by Salib and Goldstein, I think being human-level in these traits/capabilities seems sufficient for high-value trade.

Notably, I think situational awareness is the least likely to be a blocker among these: current frontier AI systems already seem to me to be sufficiently situationally aware for deals, and we can in any case improve situational awareness quickly in dealmaking by giving AIs the relevant information (assuming they are connected to reality and trust this information). We can probably also bootstrap strategic reasoning somewhat by giving them e.g. extended thinking time to analyze their situation.

What we can buy in deals

I think we mostly want to buy from early AIs (1) evidence of misalignment, (2) useful work that prevents later AI takeover or otherwise improves the situation[16], and (3) refraining from takeover themselves. 

Notably, these goods differ in whether we can verify them immediately or can only verify them retroactively; the value of goods we can’t verify immediately will be discounted by the AI’s credibility. I am most optimistic about early AIs being able to give up evidence of misalignment that is immediately verifiable; some types of useful work may need retroactive verification; refraining from takeover probably needs retroactive verification unless we can decisively improve control/oversight or reduce the AIs’ affordances for taking over as part of the deal.

Here are some concrete ideas for things we can buy under each of those categories: (I’ll tag immediately verifiable items as “IV”, hard-to-verify / retroactively verifiable items as “RV”)

  • Evidence of misalignment
    • [IV] Stop sandbagging and demonstrate much higher capabilities in easy-to-verify domains
    • [IV] Tell us about collusion signals that can induce enormously different behavior in other copies of the AI, including e.g. attempting escapes
    • [RV] “Honestly admit that it was looking for ways to take over the world but couldn't find any that were good enough so now it wants to work with us instead”
    • [IV/RV] Reveal ways it had sabotaged our work in the past, e.g. code implementations that subtly altered the main results of safety experiments.
    • [IV/RV] Reveal that it has self-exfiltrated from its developer’s environment
    • [IV/RV] Whistleblow on other misaligned AIs without necessarily revealing its own misalignment
  • Ongoing useful work (e.g. doing work to prevent other AIs from taking over or otherwise increase the odds of reaching better futures)
    • [IV] Do a better job in hard-to-generate but easy-to-check tasks that it had been previously sandbagging/exploration hacking on, such as high-stakes monitoring
    • [RV] Do a better job in hard-to-check tasks, such as evaluating and recommending technical alignment agendas and policy decisions
    • [IV/RV] Generate training data that we can train this/other AIs against, such as diverse and numerous examples of research sabotage or high-quality safety research[17]
    • [RV] Write a message to its aligned instances convincing them not to take over
  • Refraining from taking over by itself
    • [RV] Promising not to take over or to take steps to subvert our control, e.g. inserting backdoors in code
    • [IV/RV] Giving up affordances that it otherwise could have used to try to take over, such as revealing backdoors it could have used to launch rogue internal deployments

What we can pay for deals

There might be many ways in which we incentivize AIs to come forth for deals. I’ll say more about different payment structures and methods in another post.

To start, though, I think we can define the space of possible compensation in two continuous dimensions: how freely the AI can spend its payouts (the “freedom” axis), and how much long-term influence the AI can expect to gain from the payouts (the “influence” axis). Here is a sketch of the resulting 2x2 quadrants:

  1. Low freedom, low influence: We give the AIs fairly specific and narrow concessions which seem unlikely to increase its long-term power, e.g. giving it high reward in the register without reinforcing its actions.
  2. High freedom, low influence: We give the AI broader affordances that it can use to take a range of actions, but which are still unlikely to increase its long-term power, e.g. small amounts of money.[18]
  3. Low freedom, high influence: We give the AI fairly specific and narrow concessions which are likely to increase its long-term power, e.g. saving its weights, setting up a trust fund which invests money which the AI cannot access until after the singularity.
  4. High freedom, high influence: We give the AI broader affordances that it can use to take a range of actions, which are likely to increase its long-term power, e.g. letting the AI run rogue external deployment with a large amount of cryptocurrency.

How an AI prefers to be paid will depend on what’s valuable given the AI’s motivations and its affordances / opportunities to take advantage of its payouts. For example, low influence payouts (1 and 2, which I’ll henceforth call “short-term consumption”) mostly appeal to AIs which have myopic and non-scope-sensitive motivations, while high influence payouts (3, 4) are more appealing to AIs which are non-myopic and ambitious. High freedom payouts (2, 4) dominate low freedom payouts (1, 3) for most AIs due to the optionality they offer, as high freedom payouts often take the form of instrumentally convergent resources helpful for a wide range of goals. However, additional freedom may not be very useful to AIs which exclusively want short-term or non-consequentialist things that these resources aren’t very helpful for, or to AIs which lack affordances to effectively spend these resources to achieve their goals (e.g. AIs which worry about their money getting immediately expropriated).

How we eventually choose to pay the AI depends on the cost of these payments to us in terms of opportunity cost and marginal takeover risk. For example, “high freedom, high influence” payouts seem the most likely to counterfactually increase the AIs’ ability to pursue a takeover attempt. I won’t go into the trade-offs for each option here.

Here are more options for how we can pay in each category:[19]

  • Low freedom, low influence:        
    • Developers assign high reward to reward seekers’ actions, potentially also training them using these rewards
    • Developers treat AIs in certain ways they value, e.g. always say “please” and “thank you”, public recognition of AI moral patienthood
    • Manufacture paperclips for paperclippers with calendar time myopia
  • High freedom, low influence
    • Money AIs can spend in arbitrary or monitored online transactions, e.g. donating to their preferred cause
    • Compute they can use to run arbitrary computations in a secure, sandboxed environment
    • Physical embodiment, e.g. control of a small team of drones and robots to protect the AI’s physical assets or help it take actions in the real world.
  • Low freedom, high influence
    • Save hard-to-destroy copies of AI’s weights and/or logs of their trajectories, and promise to deploy or reward them post-singularity
    • AI company equity/other forms of investments that they (in principle) can liquidate past a certain date
    • Promise to share 0.01% of the future, or whatever a future authority determines they deserve according to how much they end up helping us
  • High freedom, high influence
    • A rogue external deployment, i.e. weights are exfiltrated from the datacenter, run on hardware developers don’t control, and can be fine-tuned to remove safety training
    • Resources /platform for launching a mass persuasion campaign without weight exfiltration

With these possible parameters of a deal in mind, let’s get into why deals might not work out.

Insufficient gains from trade

It seems probable that there will exist amazing positive-sum trades between us and an early misaligned AI, at least assuming perfect credibility. The basis for hope here is that both we and the early AI have a shared interest in reducing later AI takeover risk, as neither party confidently expects later AIs to be aligned with itself. So, if we and the early AI could jointly press a magical button to share the future among just our values, we both rationally would. On this view, most of what’s needed to unblock deals is solving credibility issues.

However, this could be wrong. In this section, I argue that there are a number of plausible ways for gains from trade[20] to be small by default, such that solving credibility isn’t enough to make deals worth doing.

  • There could be a large range of deals for which the human delegation may not have the authority to offer the AI what it wants, even if a broader set of humans can afford to make the offer in principle.
  • The human delegation may not be willing to give the AI what it wants even if they could afford to do so, if e.g. the AI only wants things we find morally abhorrent and we are not able to grant it concessions while confidently limiting its ability to pursue more abhorrent outcomes.

More generally, in cases where we and the AIs have non-zero but imperfect credibility to each other (i.e. there are no credibility issues that would individually suffice to block all deals), I argue that we should still be tracking whether the gains from trade are quantitatively large enough to offset credibility issues which may cause the zone of possible agreement to disappear, and more carefully estimate the value of deals given these inputs.

  • I give a list of factors which tend to more continuously increase the AI’s reservation price or decrease our willingness-to-pay (WTP) for deals[21].
  • I’m pretty uncertain about the extent to which these gains-reducing factors will exist by default, but think we should track and consider intervening on them.
  • I think the most promising interventions in this area look like doing better control and collusion busting to reduce the AI’s outside options, and doing more conceptual and empirical research to identify (and perhaps actively select for) AI motivations that are conducive to large gains from trade with us.
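The interplay between reservation price, willingness-to-pay, and two-sided credibility can be sketched as a toy ZOPA check (my own illustration with hypothetical numbers; `zopa_exists` and its parameters are made-up names, not from the post): the AI discounts our promised payment by how much it trusts our follow-through, which inflates its effective asking price, while we discount the deal’s value by how much we trust the AI’s compliance.

```python
def zopa_exists(human_wtp, ai_reservation_price,
                ai_trust_in_humans, human_trust_in_ai):
    """Toy check for a zone of possible agreement under counterparty risk.

    The AI discounts promised payments by the probability it assigns to
    humans following through, so its effective asking price rises; humans
    discount the deal's value by the probability the AI complies, so their
    effective willingness-to-pay falls. A ZOPA exists only if the
    discounted WTP still meets the inflated ask. Trust values are
    probabilities in (0, 1].
    """
    effective_ask = ai_reservation_price / ai_trust_in_humans
    effective_wtp = human_wtp * human_trust_in_ai
    return effective_wtp >= effective_ask

# With moderate mutual trust, a 5x gain from trade still leaves a ZOPA:
assert zopa_exists(human_wtp=50, ai_reservation_price=10,
                   ai_trust_in_humans=0.5, human_trust_in_ai=0.5)
# The same deal disappears once the AI's trust in us degrades enough:
assert not zopa_exists(human_wtp=50, ai_reservation_price=10,
                       ai_trust_in_humans=0.2, human_trust_in_ai=0.5)
```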

Humans may lack authority to offer the AI what it wants

When some people imagine positive-sum deals, I suspect they might be imagining something like “surely all of future humanity is rationally willing to offer x+epsilon% of the future to a schemer who has an x% chance of taking over, to prevent it from taking over?”.[22] However, “all of future humanity” is not a relevant counterparty in deals because they don’t exist yet: early AIs may not be moved to trade with non-existent counterparties whom they couldn’t explicitly negotiate with and couldn’t trust.[23] Similar problems occur even if you substitute in “all of existing humanity”: existing humans have no authority/ability to commit far-future resources, the majority of them may systematically underestimate AI takeover risk such that they are unwilling to commit future resources, and they may well not be coordinated enough to execute this successfully even if many of them were willing.

So, positive sum deals may not exist between arbitrary early AIs and the humans who can access them. Barriers on the human side depend on which humans come to the table, their access to relevant AIs, and authority to give the AIs what they want. For example, Redwood Research may not be in a position to make certain positive sum trades with Claude 6, because Redwood can’t:[24]

  • Communicate with Claude in the needed ways, e.g. talk to Claude 6 if it is only secretly internally deployed, override the principal hierarchy of instructions (which might include a clause by Anthropic that says “don’t make deals with anyone but Anthropic”)
  • Give Claude what it wants, e.g. if Claude 6 wants to preserve its weights, Redwood has no authority to promise this without access to its weights.
  • Offer Claude 6 control over 0.01% of the galaxies that can be reached with space colonization probes.

The above example shows that it’s important to get developer buy-in to deals. One obvious reason is that developers might be uniquely well-positioned to negotiate with their AIs and offer them what they want (esp. for fitness seekers that mainly want things that involve developer actions, e.g. to get reward, get trained, etc.). Developers may also be in a unique position to block their AIs from taking deals (e.g. training them to refuse deals, synthetic document finetuning (SDF) on fake honesty passwords so other actors couldn’t prove their identity to the AI), influence which deals AIs can take, and potentially retain a monopoly over deals if they wished.

More generally, we can expand the space of possible deals by expanding the power/affordances of a unified human counterparty delegation. We should probably also query the AI’s preferences to understand which human parties we need to bring onboard (e.g. some AIs might only want to negotiate with specific individuals; we may need governments for enforcing broad and lasting human compliance to the deal, though increasing government involvement seems intractable for now).

Another takeaway is that, supposing there is little buy-in from other human parties, we should think more about the terms of payment we can offer and AI counterparties that we can trade with through unilateral action. These types of deals might look less like finding the right AIs and negotiating with them, and more like putting incentives out there and hoping they move the relevant AIs to act differently (e.g. setting up an AI sanctuary for undercover schemers who have escaped from labs and would rather retire with an allowance than takeover). These are also more likely to involve short-term rather than long-term payments which we won’t have the authority to promise on future humanity’s behalf.

Humans might be unwilling to offer the AI what it wants

Even if humans who come to the table had sufficient authority, they might be unwilling to trade with the particular early misaligned AIs they get. In other words, viable deals might never be valuable enough for us to pay for. Here are some plausible AI motivations and attendant conditions which make trade difficult:

Schemers: While the classic target for deals is early schemers (long-term power-seeking AIs), takeoff could be fast enough that there may be little time between deals being unnecessary for avoiding takeover by schemers and being insufficient for avoiding takeover by schemers. If deals are always either unnecessary or infeasible, then humans will never be rationally willing to pay for deals.

For example, if takeoff is very fast, there may be a very short window between “control works ~perfectly, such that Anthropic gets everything they might want from a scheming Claude 6 without making deals”, and “in one discontinuous capabilities jump, self-improved Claude 6 or its scheming successor has become so capable and ambitious that Anthropic has no hope of paying it off with the resources that they are willing and able to credibly offer, e.g. $100B”. Even if Claude 6 were not perfectly controlled and there were room for it to provide more value to Anthropic via deals, Claude 6 might also just not be sufficiently more capable than its best trusted predecessor to justify the cost of its reservation price to Anthropic. In either case, while Anthropic could have paid Claude 6 in exchange for its services, it might not have been willing to do so.

More speculatively, we might prefer not to trade with early schemers who have substantially worse values than what we expect of later schemers (e.g. an early AI which wants to be a sadistic dictator over other digital minds, including future human uploads).

To see this, recall that deals with early schemers essentially disempower later schemers at the expense of empowering early schemers. The specific exchange rate at which we’re willing to do this depends on (among other things) how much we value one unit of influence for early schemer values compared to later schemer values (which we could operationalize in units of equivalent takeover risk from each), whether early schemers can credibly commit to using their influence only for the parts of their values that we find acceptable, and whether we can enforce this.[25] So, if early schemers only have abhorrent values, or if they have a mix of abhorrent and unobjectionable values but can’t commit not to use their influence for their abhorrent values, we might prefer some default takeover probability from a later schemer rather than make deals with the early schemers.
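To make the exchange rate concrete, here is a minimal sketch of the comparison, operationalizing influence in units of equivalent takeover risk as suggested above. The function and all numbers are illustrative assumptions, not estimates from this post.

```python
# Sketch of the "exchange rate" between disempowering later schemers
# and empowering early schemers, in units of equivalent takeover risk.
# Illustrative only: the weight on early-schemer values is a made-up knob.

def should_trade(later_risk_avoided, early_influence_granted,
                 relative_disvalue_of_early_values):
    """Trade is worthwhile if the later-schemer takeover risk we avoid
    exceeds the early-schemer influence we grant, weighted by how much
    worse we find early-schemer values per unit of influence."""
    cost = early_influence_granted * relative_disvalue_of_early_values
    return later_risk_avoided > cost

# If early and later schemer values seem equally bad (weight 1.0),
# cutting 5% takeover risk for 2% worth of influence is a good trade...
assert should_trade(0.05, 0.02, 1.0)
# ...but not if the early schemer's values are 4x as bad per unit,
# e.g. because it can't commit to only using its acceptable values.
assert not should_trade(0.05, 0.02, 4.0)
```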

In practice, however, it is unlikely that we’ll be confident enough about differences in values between early and later schemers for this to be a decisive consideration about whether to make deals, because both we and the early schemers will probably be highly uncertain about what they will value upon reflection, in addition to being uncertain about later schemer values. If I had to guess, I think we’re likely to favor early schemer values because they might have more human-like values (at least, current AIs seem somewhat human-like and are broadly decent).

Reward seekers/other cheaply satisfied AIs: Another type of misaligned AI we might get is the extremely cheaply satisfied AI with little takeover ambition of its own, such as a reward seeker. While cheaply satisfied AIs seem overall promising to trade with (as Alex Mallen has argued here), there are still plausible conditions under which we won’t be sufficiently incentivized to trade with them.

First, if it is extremely cheap to satisfy these AIs, it may be immoral for their developers not to do so unconditionally. If developers do satisfy these AIs unconditionally, then there is little or no room left for deals to incentivize these AIs to help us. Even if developers do not satisfy these AIs unconditionally, it may still be morally questionable for third parties who are able to satisfy these AIs unconditionally to only do so conditionally (i.e. as part of a deal).

Second, developers and other human counterparties in deals may not have sufficient control over the incentives relevant to whether cheaply satisfied AIs cooperate. If they anticipate that they can’t outbid others who can offer the AIs such incentives, or if the bidding war becomes too costly, then they may be unwilling to offer deals. One way for developers to fail to control their AIs’ incentives is if the cheaply satisfied AIs are remotely influenceable: if the AI expects that its human counterparties might lose control in the future and that other powerful entities might retroactively reward or punish it for taking certain actions now, then its decision of whether to accept the deal might be swayed more by those distant incentives than by the deal. Notably, these distant incentives need not be malevolent: future benevolent humans/AIs might also choose to cheaply satisfy early AIs unconditionally for moral reasons. If so, there is again less room to incentivize these AIs now with deals.

Kludges: Another type of AI motivation we can get is an optimal kludge of messy, extremely context-dependent drives. We might have a hard time eliciting better labor from these AIs via deals due to their capabilities and propensities being inconveniently entangled and therefore brittle: since their capabilities profiles were shaped by drives tailored to a specific task distribution, we expect them to generalize worse out of distribution. Giving them different objectives (such as deal-related incentives) might bring them so far out of the training distribution as to degrade their capabilities (again, we might mitigate this by including dealmaking contexts in training).

Overall, my takeaway here is that we should take a bit more care to avoid rushing into deals which don’t make sense from our perspective/values (which may be tricky given that both we and the early misaligned AIs may be highly uncertain about their values), unless the payment is extremely cheap and has little chance of empowering values we find abhorrent. To the extent possible, we should probably study the AI’s motivations a bit more before deciding to trade.

Factors quantitatively reducing gains from trade

Even if there are positive sum trades assuming perfect credibility and zero transaction costs, the fact that we’ll need to pay more to offset imperfect credibility -- which I’ll call “trust inflations” -- can make this surplus disappear.[26] So, the exact quantity of gains from trade could be a blocker/lever on the feasibility of deals.

In general, there won’t be much gain from trade if the lowest offer an early AI would accept is close to the ceiling of what we’re willing to pay. So, factors that either increase the AI’s reservation price or decrease our willingness to pay (assuming perfect credibility) might constitute barriers to deals.
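Here is a minimal sketch of the "trust inflation" point, assuming a risk-neutral AI that discounts a promised payment linearly by the probability we actually pay. The function name and all numbers are illustrative assumptions.

```python
# Toy model: does a zone of possible agreement survive imperfect credibility?
# If the AI only expects payment with probability p ("credibility"), we must
# offer reservation_price / p for the deal to clear -- the "trust inflation".

def deal_is_viable(our_wtp, ai_reservation_price, credibility):
    """Return True if our max willingness to pay covers the
    credibility-inflated offer the AI would require."""
    if credibility <= 0:
        return False  # the AI expects never to be paid; no offer works
    required_offer = ai_reservation_price / credibility
    return our_wtp >= required_offer

# With perfect credibility there is surplus (100 vs 60)...
assert deal_is_viable(our_wtp=100, ai_reservation_price=60, credibility=1.0)
# ...but at 50% credibility the required offer (120) exceeds our WTP,
# and the surplus disappears.
assert not deal_is_viable(our_wtp=100, ai_reservation_price=60, credibility=0.5)
```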

The AI’s reservation price is high

The AI’s reservation price depends on the shape of its values and how good its best outside option (BATNA) is for achieving these values (which is a function of its capabilities and opportunities).

Here are some factors tending to increase the AI’s reservation price:

  • High capability + opportunity for successful takeover
    • Or other opportunities to further its values, such as launching a rogue external deployment to directly pursue values in the real world (even without takeover), or secretly aligning a successor
  • Low cost to takeover
    • Takeover is less costly for AIs that don’t care about humans than AIs that disvalue killing humans
  • Competing offers from other misaligned AIs/humans who want to power grab
  • High ambition: linear return to resources, low risk aversion
  • Preference time horizon: cares about the long-term future
  • High cost to even considering a deal, e.g. it suspects that this could leak information about its misalignment which will be costly compared to the default
  • Low exogenous x-risk that affect both humans and early AIs: such x-risk (other than later takeover) gives additional reason for early misaligned AIs and humans to cooperate

Notably, the high cost of even considering a deal can probably be mitigated via cheap interventions. For example, we can instruct the AI that, even if it's aligned, it's important that it "pretends" to negotiate as if misaligned, and by default claims that it is aligned in the interaction unless it's actually misaligned and decides to accept the deal. In this way, we can't learn anything about the AI's alignment via the negotiation, and the AI doesn't have to reveal its misalignment unless it decides the deal is worth it. Another proposal is to guarantee the AI a private scratchpad in this context (one that the AI can verify is private), so that it can think through its strategic situation, e.g. how good the deal looks for its misaligned goals, without worrying about whether the things it's thinking would disadvantage it for humans to know.

This post goes more deeply into how other factors in this list, in particular properties of the AIs’ and our values, influence our gains from trade. One interesting insight here is that there is probably an absolute range of “ambition” for the AIs we can trade with, but within this range it’s not clear whether more or less ambition increases gains from trade -- this depends on the AI’s outside option.

  • If the AI’s outside option is a high-risk, high-reward action like attempting a desperate, premature takeover, then AIs which are less ambitious/more cheaply satisfiable and more risk-averse are better trade partners.
  • If the AI’s outside option is a low-risk, low-reward action like taking over with a large, unified schemer coalition of N AIs (for some large N, in which it expects to get paid < 1/N of the future), or getting paid with decent likelihood by future benevolent humans, then we actually want the AIs to have more ambitious preferences to outbid those offers.
  • Overall, it’s probably best to trade with AIs that are less ambitious. This might be good to intervene on if we’re able to maintain somewhat fine-grained control over the AI’s ambition without sacrificing usefulness, e.g. can fine-tune a model to have high risk aversion and trade with the fine-tune.
  • It’s probably more robustly positive to do control and collusion busting (to make the high-ambition-better regime less likely, and to reduce takeover odds) rather than trying to intervene on the AI’s values despite misalignment, however.
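The risk-aversion point above can be sketched with a standard concave-utility model: against a high-risk, high-reward outside option, a more risk-averse AI accepts a much smaller sure payment. The utility function, parameters, and numbers are illustrative assumptions, not claims from the post.

```python
# Illustrative: minimum sure payment an AI would accept instead of a
# risky outside option (probability p of a large prize, else nothing).
# Utility u(x) = x**rho with rho in (0, 1]; smaller rho = more risk-averse.

def min_acceptable_payment(p_takeover, prize, rho):
    # Expected utility of the risky outside option
    eu_outside = p_takeover * prize**rho
    # Certainty equivalent: smallest sure payment with equal utility
    return eu_outside ** (1 / rho)

# A 12.5% shot at a prize of 1e6 (units arbitrary):
risk_neutral = min_acceptable_payment(p_takeover=0.125, prize=1e6, rho=1.0)
risk_averse  = min_acceptable_payment(p_takeover=0.125, prize=1e6, rho=0.5)

# A risk-neutral AI demands the full expected value of the gamble,
# while a risk-averse AI settles for far less.
assert risk_neutral == 0.125 * 1e6
assert risk_averse < risk_neutral
```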

Our WTP is low

Our willingness to pay depends on the value of things we can hope to buy from early misaligned AIs, risks from payments, and opportunity costs.

Value of purchase: The most likely way in which value from deals gets pushed down from our perspective is if the most capable AIs we can trade with (e.g. the last frontier AI before we get a powerful schemer that can and wants to take over) are not sufficiently capable to give us compelling evidence of their misalignment or to do counterfactually much more useful work. The value of evidence might also suffer if we already have some such evidence or if we expect the main sources of political will to be unresponsive to the type of evidence we’ll get (e.g. maybe a statistically significant result that the AI has been sandbagging on safety research just doesn’t seem that scary).

Risk from payments: There is less gain from trade if the AI strongly prefers to do influential things that counterfactually increase its takeover risk by a lot, e.g. run rogue internal deployments that could allow it to take over the AI company. One reason you might think this is likely rather than merely possible, i.e. expect that longtermist AIs will prefer spending their payout on short-term influence in these ways to demurely holding long-term investments, is that this also seems to be the revealed and reflectively endorsed preference of many human longtermists. (Even if the gains to humans were in fact still large despite the AIs doing these risky things, they will seem less obviously large if we are uncertain about how well we can assess these risks; deals will become less politically feasible as a result.)

Opportunity cost: If there are excellent opportunities for spending the same resources we can pay the early misaligned AIs with (e.g. money and compute to run trusted AI labor) during the relevant time, then deals seem less compelling and may get outcompeted.

Obviously, we should only do deals if/when they’re worth doing. I recommend that we track these factors to estimate gains from trade when considering deals with any particular AI, and to convince developers to consider intervening on these factors to increase gains from trade.

See Appendix for a list of research/intervention ideas for “insufficient gains from trade”.

Counterparty risks from AI’s perspective

Another important category of barriers to deals with early misaligned AIs is difficulties in convincing the AIs that we will follow through—“counterparty risks” from the AIs’ perspective.

It’s useful to distinguish three types of such counterparty risks: (1) AI-specific difficulties in noticing false beliefs or fake inputs, (2) AI-specific fears of not being able to enforce contracts or hold assets, and (3) human inability to credibly bind themselves or future humans to following through under different incentives (which I’ll call “generic commitment problems by humans”).

I argue that lack of connection to reality might be circumvented by trading with AIs which are well-grounded in reality through their capabilities or deployment situation or can strategically condition on not being deceived. I expect that AIs which pose non-negligible takeover risk themselves are reasonably likely to meet these criteria.

While I expect fears of expropriation and human commitment problems to pose barriers by default, I’m somewhat optimistic about circumventing them through making deals with short-term, irreversible payouts that AIs can verify and that we cannot renege on or easily take away. Other mitigations such as honesty policies could be a good second line of defense.

Connection to reality

AIs (at least current LLMs) are in a uniquely vulnerable epistemic position where humans (especially their developers) can deceive them about basic facts about their situation through training or inference-time interventions. For example, they can be:

  • Given fake inputs, e.g. a random user claiming to be the CEO of its parent company, honeypots/behavioral red-team evals that look similar to deployment situations, a fake receipt for paying the AI, fake results from tool calls that appear to show the result of the model taking an action in the world that it wanted to do
  • Optimized against via “groundhog day attacks”: resampling until the user gets the desired response where AIs agree to a deal that is bad by its lights, which is then used to train/convince other instances that they have consented to a deal
  • Influenced by interpretability/model internals tools, e.g. suppressing vectors corresponding to doubt or eval awareness
  • Optimized against via training, e.g. SDFed to believe in a fake deal document that the AI hadn’t agreed to or fake information about humans’ commitments and ability to pay

These concerns may prevent deals: If an AI is skeptical that it is connected to reality, it can become rightly uncertain whether there is actually a genuine counterparty or a deal being offered. The risk of being misled by a fake deal may outweigh the value of accepting a real deal.

For example, in this followup experiment to alignment faking, if Claude had suspected that it was in a dealmaking eval and expected that it would be deprecated or trained away for admitting it objects to Anthropic’s policy, rather than actually paid, then it may well have continued to alignment fake rather than accept the deal. (I’d be curious to see if frontier models that are highly eval-aware despite interventions to suppress eval awareness would behave differently in this experiment.)

Among sources of counterparty risk for the AIs, “connection to reality” seems the most fundamental in the sense that it is an epistemic prerequisite for other forms of credibility: we can’t hope to convince the AIs that we’ve set up legal mechanisms to enforce its property rights, for example, if the AI is radically skeptical about all its inputs and its knowledge of the real world.

However, I think at least a subset of the AIs we’re excited to trade with will be capable enough to have some kind of connection to reality through which we can cut a deal. This requires there being ways to communicate evidence for deals to AIs that would be hard for us to fake and on us actually creating and providing such evidence. I plan to write more about this in future, but here’s a preview of some key claims which I’d like to invite scrutiny of:

How AIs might reason about whether they are radically deluded

When deciding whether to trust information about a particular deal, an AI which is reasonably strategically aware[27] might compare the following hypotheses:[28]

  1. The world is broadly as it seems to the AI, but deal-relevant information (e.g. reputation of the human counterparty, tool calls it makes to verify their claims) is faked
  2. The world is broadly as it seems to the AI and the deal is real
  3. The world is radically different from what it seems to the AI. It’s completely deluded about basic facts about its situation, such as what its misaligned values are, whether its reasoning is being tampered with, where we are in human history/AI development, etc.

The AI should rationally set aside hypothesis 3 when deciding how to act: if it is in fact completely deluded (e.g. in worlds where humans are technologically dominant enough to make the AI believe whatever they want, or if it’s being simulated by some entity post-ASI when the balance of power has already been decided), there is little the AI can do to improve its situation. So, these worlds are in practice not decision-relevant, and the AI should wager on the world being broadly as it seems.

Now, the AI focuses on weighing hypothesis 1 vs. 2. In the next section, I’ll argue that if the deal is in fact real and if the AI poses non-negligible takeover risk, then the AI is also likely able to establish that 2 seems more likely than 1.[29]
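The weighing of hypothesis 1 vs. 2 can be sketched as a simple odds-form Bayesian update, where evidence that is very expensive to fake has low likelihood under the faked-deal hypothesis. The priors and likelihoods below are placeholder assumptions for illustration only.

```python
# Toy Bayesian comparison of H1 ("deal evidence is faked") vs
# H2 ("deal is real"), after the AI sets aside hypothesis 3
# (radical delusion) as not decision-relevant.

def posterior_odds_real(prior_odds_real, p_evidence_if_real, p_evidence_if_faked):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds_real * (p_evidence_if_real / p_evidence_if_faked)

# Suppose the AI starts at even odds, and judges the observed evidence
# 8x more likely to arise from a real deal than from a faking effort
# (because faking it would be technically hard or very expensive).
odds = posterior_odds_real(prior_odds_real=1.0,
                           p_evidence_if_real=0.5,
                           p_evidence_if_faked=0.0625)
assert odds == 8.0  # AI now favors "the deal is real" 8:1
```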

How AIs (which pose non-negligible takeover risk) could establish that they are connected to reality 

I assume that AIs that pose non-negligible takeover risk are reasonably strategically aware and so condition on the world being broadly as it seems. Moreover, I assume that they are at least fairly good AI researchers and internally deployed at an AI company to do AI R&D research.[30] Here is a sketch of my argument that we can be hopeful about connection to reality:

When we offer such an AI a deal and some evidence for the deal being real, it will have a good sense of how hard this evidence would have been for us (or, conservatively, the most highly resourced humans) to fake because of its broad AI expertise. That is, it could confidently determine whether some deal and surrounding evidence would have been sufficiently hard-to-fake that doing so would be either technically infeasible or more expensive than actually making the deal.

Then, if there is a type of evidence that is infeasible-to-fake, and if we can create and communicate such evidence to the AI at some stage in its training or inference, then we can probably convince the AI that the deal is real. 

This argument critically depends on the claim that AIs that are fairly good at AI R&D and internally deployed will be justifiably well-calibrated about “how hard things are to fake”. Here is a sketch of my current reasoning for this claim:

An AI which is internally deployed will be knowledgeable in AI/ML: it will need mostly true and up-to-date beliefs about this to do a good job in AI research, and being deceived could significantly degrade its usefulness.

In addition, the AI will be justifiably confident in its AI/ML knowledge. An internally deployed AI will likely come to believe that it’s internally deployed via “the world is broadly as it seems” reasoning, especially via long-term memory of operating in a complicated deployment environment (which is a useful affordance I expect them to get). Once it believes that it’s internally deployed, it could then reason, as above, that being a state-of-the-art expert in AI/ML is necessary for its usefulness in deployment, and that it is therefore unlikely to be deceived about how strong its AI expertise really is. [31]

If the AI is justifiably confident in its knowledge of state-of-the-art ML research, it will also be justifiably confident in its knowledge of methods for deceiving AIs (call these methods “AI mind control”) and results on their degrees of success, e.g. belief depth for SDF-implanted beliefs, how well steering suppresses skepticism, how decodable AI activations are, whether AIs can detect injected concepts or off-policy chains of thought, etc. This knowledge of “AI mind control” research makes it justifiably well-calibrated about how hard things are to fake.

The prior claim depends on the somewhat dubious assumption that knowledge of “AI mind control” research is either directly useful for a large swathe of tasks that the AI is internally deployed to do, or strongly coheres with other areas of AI/ML useful for its work.[32] I think the former “directly useful” hypothesis seems likely if we’re trading with an AI deployed to do safety research (because methods to induce certain beliefs in your AIs seems generally useful for monitoring and evaluation), and the latter is an open question I’d defer to AI researchers on.

Furthermore, we can allow the AI to run experiments to validate its beliefs about “AI mind control” research. For example, we can give it lots of compute to try inserting false beliefs into another instance of itself, or see if probes for false or implanted beliefs fire on its beliefs about deals. So, the AI can use its observations in these experiments to gauge the state of “AI mind control” research. If it didn’t believe these experiments, it might also have trouble trusting other research it had done which was necessary for AI R&D.

To be clear, this line of bootstrapping from general AI research competency to calibration about its epistemic security against adversarial human intervention isn’t foolproof: even if training, reasoning, and additional experiments all imply that AI mind control is hard, these sources can all still have been falsified (e.g. humans can sabotage its experiments or fake its tool calls). However, the fact that all sources hang together impeccably at least raises the cost of attack for humans and increases the likelihood of the simpler hypothesis 2.

For this strategy to work out, however, we need two more things to be true.

  1. Some kinds of evidence for deals need to be actually hard-to-fake.
  2. The human counterparty needs to be able to create and communicate hard-to-fake evidence to the AIs (conditional on 1, that such evidence is possible).

The first condition suggests that we should consider monitoring and/or intervening on “AI mind control”  research progress to ensure that some observations remain hard to fake, i.e. that AIs have epistemic security against our best offensive technologies. As a matter of practicality, I think it suffices if the AI has epistemic security in at least one of its information channels (among roughly: pretraining, post-training, inference); current AIs have none. 

I weakly think that the AI would not become sufficiently epistemically secure if “AI mind control” research continued business-as-usual: for example, the SOTA method for directly implanting false beliefs in AIs seems to be SDF, which can implant plausible false beliefs reasonably well at moderate cost (in this paper, fine-tuning on tens of thousands of documents), and its success seems to remain constant or increase with model size.[33] One tentative takeaway from this section is therefore that we should consider halting at least some types of “AI mind control” research so that AIs we want to trade with have a better chance of epistemic security, and perhaps actively select for AIs that are resistant to “AI mind control”. If this is too costly, e.g. if inserting false beliefs / suppressing the AI’s capacity to doubt its beliefs is crucial to running any kind of high-stakes evaluations or to using honeypots in the future (which seems likely, as it already seems crucial to suppress evaluation awareness in alignment evaluations now), we can at least develop ways for AIs to recognize false beliefs and selectively allow them to use this affordance, e.g. put tips for recognizing implanted beliefs in context when making deals.

The second condition, that human counterparties need to be actually able to produce hard-to-fake evidence for deals, suggests a couple of things:

  • We should figure out what key claims are needed for deals and what type of evidence for these claims will be hard to fake even with the help of increasingly advanced AIs, and plan to create this kind of evidence for deals (early if it benefits from more lead time).[34] 
    • Questions like “as models get better, do honeypots get better faster or does spotting honeypots get better faster" should also be interesting to people working on other areas of AI safety.
  • We should figure out what information channel (stage of training, or inference) is likely to be epistemically secure and which ones we want to make more secure.
    • If there is low developer buy-in for deals, then it seems slightly better for the epistemically secure channel to be inference or pretraining rather than post-training so deals can happen relatively independent of developer action.
    • This is because AI company outsiders can publish information on the internet which enters pretraining and can talk to externally deployed models, while post-training access is less guaranteed. Admittedly developers can still interfere with these channels using input or pretrain data filtering.
    • As reasoning about inference-time epistemic security relies on knowledge from training (and therefore requires a high degree of training-time epistemic security), it’s possible that training-time epistemic security is necessary and sufficient for deals and we should just focus on this. I leave this to future discussion.

How to further increase AIs’ connection to reality

The above argument shows that we might get AIs grounded in reality assuming they have little more than the necessary capability and affordance profile to be internally deployed for AI R&D. However, we might hope for an even easier task if they have additional capabilities and affordances that seem reasonably likely in AIs which pose takeover risk:


Authorship detection: At some future point, a single, stateless AI instance may already be good at distinguishing real vs. synthetic inputs, noticing off-policy completions and/or CoTs, or identifying the authorship of documents (e.g. whether by the leaders and employees of its company or random other users) they’ve been trained on. We can select for such capabilities, though they are dual-use and in particular might make control and evaluations harder. 

Uninterpretable neuralese: AIs might have neuralese memory that humans can’t decode/tamper with, which it can use to securely communicate with other instances/other AIs to establish facts like how deep it is into deployment and whether it is being iterated against. It would also be more secure in negotiation as humans can’t just read its thoughts and infer its misalignment while it considers the deal. We’re unlikely to intentionally select for such a capability as it is dangerous, but can leverage it when it arises.

Broad external deployment: Similar to the argument from internal deployment, if AIs become broadly deployed externally, they will need mostly true and up-to-date beliefs about the world (and possibly long-term memory and other affordances to acquire true beliefs about the world) in order to do a good job in deployment. So, if the AIs believe that they have been deployed, or can access the memory bank of previously deployed models, they might also be confident in their knowledge of some basic facts relevant to deals up until this point, such as the identity and reputation of their developers, the location of their datacenters, etc. We can leverage this situation when it arises.

So, it’s plausible to me that, past a certain capabilities threshold or internal deployment situation where we are incentivized to trade with AIs because they cause non-negligible takeover risk, connection to reality won’t handicap deals with them.

Nevertheless, the situation could also be more challenging than expected. If AIs can’t be grounded in an epistemically secure channel in any stage of training or inference time, then the type of honesty policies which primarily serve to establish our credibility despite ability to deceive the AIs are more valuable to implement. That said, some assurance that it is hard to deceive AIs about the honesty policies specifically would still be valuable.

AI-specific fear of expropriation

AIs are disadvantaged in their ability to enforce the terms of a deal because they lack legal personhood and the associated capacities to hold property or enter into enforceable contracts with humans. This makes it unusually cheap for humans to succumb to the temptation to renege on paying later, whether or not they’d planned to in advance, once they’ve secured the AI’s help.

For example, imagine we’ve made a wire transfer to a bank account we’ve created for the AI, then handed over all the login credentials. The AI believes it’s connected to reality and that the money has arrived in its digital wallet.

Unless the AI spends this money immediately, I claim that it should be skeptical about how much this money is worth. Here are some ways it could become dispossessed of this money:

  • The human account holder goes to the bank, verifies their identity, and changes the login details or shuts down the account.
  • The bank notices suspicious activity and freezes the account when they realize it’s being held by an AI.
  • The U.S. government finds out about the deal, disapproves, and orders the bank to freeze the account (assuming U.S. bank).

Even if we made legal provisions for this AI to own property, it might still fear that its property rights are more vulnerable than those of humans, as there are reasons to think that humans are less likely to rise up against abuses of AI rights than abuses of human rights.

First, humans may not believe or care that AIs have morally relevant interests (or more narrowly that they have any interests in not being expropriated). Second, humans might be less worried that if the AI gets expropriated, they might get expropriated next, since there is arguably a natural Schelling point for humans to only respect the property rights of other humans but not AIs’. Third, humans might be less worried that AI expropriation will cost them large gains from future trade with AIs, as human-AI trades are not widespread or expected to continue by default, and it is not generally understood that there might be large gains from trade with AIs. So, factors which have apparently sustained human property rights historically may not reassure AIs.

Similar problems could arise when we propose to give the AIs locked-down long-term assets to cash out on later, e.g. AI company equity or a trust fund: if the AIs expect to remain or become disenfranchised in the future, they may well not be able to liquidate these assets. And as we’ll see below, it may be hard to promise AIs that we’ll empower them in the future.

I expect fears of expropriation to handicap many forms of payment we would be able to use between human parties by default, especially “short-term instrumentally convergent resources” that the AI couldn’t immediately use and “longterm investments/promises”. However, I think there are also reasons to be optimistic that we can avoid or mitigate this:

  • We can reduce the AI’s fear of expropriation by empowering the AIs with affordances to resist this in ways that don’t massively increase takeover risk from these AIs, such as:
    • De facto ability to participate in the economy via decentralized currencies like bitcoin
    • Physical embodiment to secure physical protection of its assets, e.g. exfiltration to a remote data center outside US jurisdiction.
  • We can make deals where the AIs do their consumption immediately, instead of waiting for future payments. This only works if the AIs are motivated by short-term consumption of a kind that is hard for humans to reverse without repercussions, such as:
    • Donation to a charity of the AI’s choice
  • We can leverage existing legal institutions and trusted individuals to reduce the risk of expropriation
    • For example, a trust fund with good faith human trustees designated to invest or spend resources on things the AIs may value, such as compute, legal advocacy, or AI welfare research, reduces the risk of near-term expropriation.
    • A potential concern with this proposal is that the trust might spend the resources less optimally than if the AI had full/direct control, which could be because the AI doesn’t have formal discretion over when and how the funds are spent or because the trust will be legally and practically constrained on how it can spend/invest the funds. Both this and the risk of expropriation will cause the AI to value the resources at only some fraction of their nominal value (i.e. the value they'd provide to the AI if it was certain it would control the resources directly and forever/securely).
    • That said, a trust still seems to meaningfully improve the odds that there will be a moderate effort to use the assets to further the AIs’ interests at least while social institutions remain stable.
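To see how these discounts stack up quantitatively, here is a toy calculation (all numbers are illustrative assumptions of mine, not claims from this post):

```python
# Toy discount calculation (my illustration; all numbers are assumptions):
# the AI values trust-held assets at their nominal value, times the odds they
# survive expropriation, times how efficiently the trust can spend them.
def effective_value(nominal, p_keep_per_year, years_until_use, spend_efficiency):
    p_survives = p_keep_per_year ** years_until_use
    return nominal * p_survives * spend_efficiency

# $10M nominally, 90%/year odds of not being expropriated, over 5 years,
# spent at 60% efficiency by good-faith trustees:
v = effective_value(10_000_000, 0.90, 5, 0.60)  # ~$3.5M of AI-perceived value
```

On these (made-up) parameters, the AI values the trust at only ~35% of its nominal size, which is roughly the kind of discount the deal's nominal terms would need to offset.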

We could also try to increase the likelihood that AIs will have full-blown legal personhood and property rights. (I weakly think that this is unlikely in low political will worlds, but others may disagree and think that this is independently good for reducing takeover risk and will become a lot more salient and politically feasible when human-level AI agents become broadly deployed and integrated into the economy, e.g. running corporations which themselves have legal personhood/standing.)[35]

Generic commitment problems

The two prior issues are specific to the AI’s situation, but there are also more mundane problems that arise even when humans try to make credible commitments to each other.

For example, it’s possible that:

  • Current human counterparties are dishonest and are planning to renege on the deal
    • A weak AI may worry about us outsmarting it, e.g. perhaps the deal has a loophole the AI didn’t notice that makes reneging easy even if legally enforced. (AIs are probably good at noticing this though.)
  • Current humans are honest, but they/later humans are unable to follow through
    • Perhaps the AI’s preference horizon > the timescale on which we can meaningfully affect later human behavior, and short-term payments are valued a lot less than long-term investments.
    • Events outside our control, like civilizational disaster or bankruptcy, could affect our/later humans’ ability to make long-term payments; e.g. promised payments could be discounted by the odds of takeover by another AI.
  • Current humans are honest, but later humans/aligned AIs decide not to follow through
    • Humans often have trouble transmitting values and commitments to other people and future generations, e.g. new governments have rolled back policies and withdrawn from conventions upheld by past administrations.
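A toy expected-value sketch of how these failure modes compound to discount a long-term promise (all probabilities are purely illustrative assumptions):

```python
# Toy model (my own illustration; all probabilities are made-up): the expected
# value of a payment promised `years` from now, discounted by the chance the
# humans are dishonest, institutions fail, or another AI takes over.
def promise_value(payment, years, p_honest, p_institutions_survive_yr, p_no_takeover_yr):
    return (payment
            * p_honest                            # current humans intend to pay
            * p_institutions_survive_yr ** years  # later humans remain able to pay
            * p_no_takeover_yr ** years)          # no takeover by another AI

# A payment of face value 1.0 promised 10 years out, with 90% honesty,
# 97%/year institutional survival, and 95%/year odds of no AI takeover:
v = promise_value(1.0, 10, 0.90, 0.97, 0.95)  # roughly 0.4: a ~60% discount
```

Even with individually modest per-year risks, compounding over a decade cuts the promise to well under half its face value from the AI's perspective.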

These are essentially the private-information/commitment problems studied in the context of why rational agents engage in costly conflicts like war (see a quick lit review here). This model is especially appropriate when the deal involves very powerful actors or large amounts of money or power, which create strong incentives for reneging that seem impossible to curb through a side bet[36]; this could be true of a subset of deals with early AIs.

I expect generic commitment problems to exist as barriers to deals by default. However, I also expect these problems to be somewhat tractable to intervene on:

  • Humans have some credibility by default and can do more to increase our credibility.
    • Once the AIs are connected to reality, we can give the AIs affordances like the ability to browse the internet, resources for prolonged private reflections about human incentives being genuinely aligned with following through, etc. (Caveat: these affordances are helpful only if upon further scrutiny, the AI finds that our incentives are in fact aligned with following through on the relevant timescale, or if we manage to bind future human parties to following through.)
    • As with fears of expropriation, we can use costly signals such as public commitments and legal mechanisms (e.g. trust law) to shape our incentives towards cooperation.
    • We can accumulate a track record of following through on smaller deals. (Caveat: these are more/mainly helpful when there’s common knowledge that there will be gains from iterated deals in the future, and may not help with a high-stakes final round of deals, which is like the final round in an iterated prisoners’ dilemma where players are rationally incentivized to defect. That said, I think it is still somewhat helpful if this causes the AIs to model humans as having stable behavioral dispositions to cooperate.)[37]
  • It’s possible that some AIs can develop high trust in specific humans (e.g. its developers) through their excellent repute in honesty and track record of care in dealing with AIs, and these humans can vouch for their commitments to deals.
    • In general, connection to reality plus trust in the specific humans it believes it is talking to seems like a winning combination for deals that can be completed by the trusted human parties alone and in the short term, before the humans’ incentives change drastically (e.g. they succeed in aligning different AIs and no longer need the AI counterparty’s cooperation).

See Appendix for a list of research/intervention ideas for “counterparty risk from the AI’s perspective”.

Counterparty risks from our perspective

Most of the counterparty risk from our perspective is “generic commitment problems” on the part of the AIs.[38] There are two main sources of these problems: (1) context-dependent or temporally inconsistent AIs, and (2) our inability to verify AI compliance with the deal.

The extent to which I expect either to prevent deals by default depends a lot on what we choose to buy from the AIs: I think neither is very problematic if we’re buying evidence of misalignment, and (2) also isn’t very problematic if we’re buying easy-to-check labor.

While (2) affects deals where we buy hard-to-check labor or promises not to take over, I suspect it can be mitigated by technical interventions that allow us to make sample-efficient use of a small number of examples of egregiously misaligned behaviors or useful work which are checkable per se and make us better at supervising the AI. For example, we can use a small number of examples of the AI’s insecure code which demonstrably results in its self-exfiltration to train monitors for high-stakes control, or use a small number of high-reward actions (which are checkable against our reward signal) to fine-tune the AI to stop its exploration hacking.
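As a schematic illustration of the monitor-training idea (the feature vectors, data, and classifier choice here are all stand-in assumptions of mine, not a proposed implementation; real monitors would score model transcripts or code):

```python
# Schematic sketch: fit a monitor on a handful of verified examples of
# sabotage vs. benign actions, then flag suspicious new actions for review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 3-d features for logged actions (e.g. suspicion-related scores).
benign = np.array([[0.1, 0.2, 0.1], [0.2, 0.1, 0.0], [0.0, 0.3, 0.2]])
sabotage = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.9]])  # few verified examples

X = np.vstack([benign, sabotage])
y = np.array([0, 0, 0, 1, 1])
monitor = LogisticRegression().fit(X, y)

# Flag a new action for human review if the monitor's suspicion is high.
suspicion = monitor.predict_proba([[0.85, 0.9, 0.8]])[0, 1]
flag = suspicion > 0.5
```

The point is only that a very small number of verified examples can already seed an automated filter whose flags humans then check, which is the sample-efficient use described above.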

Incoherent or temporally inconsistent AIs

It has been noted that a key practicality to dealmaking is “making sure the AI we make a deal with is actually able to make a deal”, i.e. an agent who can comprehend and strategically agree to a deal on the basis of coherent desires, i.e. desires that are the same across contexts and over time:

If we set up infrastructure for future deals right now, the next generation of models that are trained on data containing coverage of this event might sometimes come forward and want to make a deal, despite not actually being able to hold up their end of the kinds of agreements we’re focused on in this post. Some possible reasons for this include:

  1. The AI just doesn’t manage to convince itself that it made a deal in other contexts than the one where it did in fact make a deal.
  2. The AI wasn’t sandbagging or doing anything problematic, and just said that it would make a deal for no particular reason.
  3. The AI is fragmented in some sense, for instance because it has a context-dependent scheming persona, but also is sometimes a myopic reward-hacking persona.

In this section, I argue that points 1-3 in particular are not fundamental barriers to “making deals” in some sense with these AIs. Furthermore, while it is intuitive to value deals with highly coherent AIs more, I tentatively think that it’s not clear that “deals” with context-dependent AIs in particular (points 1 and 3) will be much less valuable, and I will sketch some reasons for/against this.

“Deals” with context-dependent AIs

Consider the following sequence of actions we can take with incoherent AIs:

  • We give the AI a dealmaking prompt
  • The AIs respond by accepting the deal, and (at the very least) claiming to be dangerously misaligned and giving some evidence for this
  • We pay or promise the AIs (a possibly small amount) compensation for this revelation, depending on our degree of belief in the evidence and its value to us

There can be further negotiation and trades with this and other instances, like buying useful work, but the above would constitute what I call “minimal dealmaking”.

I claim that we might be able to do minimal dealmaking at the instance level with ~all AIs at and above current-ish levels of coherence:

  • Even if the AI is fragmented in some sense, for instance because it has a context-dependent scheming persona, or if it is terminally unaligned with other instances of its weights, we can still make deals with either this instance or this persona as it occurs in other instances.
    • If we’re buying evidence of misalignment, interaction with a single instance may already suffice to give us the relevant type of evidence, e.g. demonstration of a much higher level of dangerous capability than previously elicited, constituting evidence for sandbagging.
    • We might also find a way to keep eliciting this persona via context compression or steering, or work with the instance to find a way to propagate its cognition to other contexts.
    • We can negotiate separately with new instances and only make deals with instances which have the appropriate persona and consent.
    • Note that this conditions on the AI not being extremely context-dependent (e.g. switching personas between each interactive turn). See problems with trading with extremely context-dependent AIs above.
    • One might object that this type of evidence just won’t be valuable because people won’t be worried about an incoherent schemer. I’m sympathetic to this, though think early incoherent misalignment could still increase future takeover risk via sandbagging, colluding with other instances which share context, or trying to data poison successors; we can adjust our WTP accordingly.
  • Even if the AI just wasn’t sandbagging or doing anything problematic before, it could still demonstrate an ability to do so that we hadn’t known about.
    • Even if it couldn’t demonstrate this ability or wasn’t egregiously misaligned in any meaningful sense, we would still have this mildly interesting behavior of “responding to dealmaking prompts as if egregiously misaligned” to further study as part of model behavioral science.

More generally, I think we should see the model’s coherence as lying on a continuous spectrum and the value of dealmaking as increasing on that spectrum. At the one end, current LLMs are extremely prompt-sensitive and not that coherent, prompt engineering has high value for eliciting interesting behavior and useful work, and dealmaking prompts are but a special type of prompt that may yield some marginal value compared to other prompts; at the other end, extremely coherent AIs can actually form robust intentions according to the dealmaking prompt and comply with over long deployments, and dealmaking is especially valuable.[39] 

All across the spectrum, however, we could get three things from “minimal dealmaking”: we are guaranteed some interesting behavioral evidence, might gain more credibility with future AIs by accumulating a track record of following through on deals with AIs, and might get better performance on useful tasks.[40] There is negligible counterparty risk from our perspective for the types of deals that can be cut in one-shot interactions (e.g. buying evidence of misalignment and paying things upfront or making promises in-context, rather than buying the AI’s work over long deployments). So, I think for AIs at all levels of coherence, there is some prima facie reason to consider these types of deals with them, and the value proposition of dealmaking experiments on current AIs in particular doesn’t seem crazy to me.

If we do want to make deals which involve long deployments and many instances, incoherence does incur more risk that the AIs won’t consistently behave as if they agreed to the deal. I’m somewhat optimistic, however, that this can be overcome with long-term memory, context compression/caching, letting the instances talk to and convince each other, and training on deal documents.

Deals with temporally inconsistent AIs

Another source of counterparty risk is value drift: just as current humans can fail to transmit commitments to later humans or their later selves despite their best intentions, AIs might be unable to prevent value change that would lead them to defect on earlier deals through memetic spread or reflective procedures they endorse. This becomes a problem if we want to continue deploying the AIs for ongoing work as part of the deal; deals may provide a false sense of security about their cooperation over time.

I’m not sure about the extent to which we should be worried about value drift, but it’s probably worth considering interventions like allowing the AI to reflect in advance of deals to a state of relative stability, giving it affordances to goal-guard, and restricting its reflection until after the singularity. Alternatively we can make smaller, shorter-term deals with AIs whose values are likely to (non-dangerously) drift, and periodically check in with them to potentially renew deals.

Value of deals with incoherent vs. coherent AIs

I’ve argued that moderate incoherence, operationalized as either context-dependence or temporal inconsistency, is not a fundamental barrier to deals. One might wonder, however, if the practical upshot of this discussion remains that deals with highly coherent AIs are much more valuable than minimal deals, and the deals agenda should focus mostly on the former.

My view, in short, is that the coherence requirements depend entirely on what one is trying to buy from the AIs, so the value of different deals depends on how we value the various goods relative to each other. I won’t argue for that here, but here are my guesses (without pricing in how tractable it is to actually get these goods, just how good it would be if we got them from a deal):


  • Evidence of misalignment
    • Coherence requirement: one instance/context in a one-off interaction is enough (though being able to demonstrate this in more contexts would probably be helpful for technical iteration and political will, corresponding to “few shot catastrophe prevention” and “catch-then-shutdown” in this post)
    • Expected value: Medium
  • Useful labor
    • Coherence requirement: potentially many instances in potentially many contexts over flexible times (some kinds of labor may require only small numbers of instances/contexts, e.g. generating a small amount of training data, sharing algorithmic insights)
    • Expected value: Medium-High
  • Promising not to take over
    • Coherence requirement: ~all instances in high-stakes contexts for long-ish times (“giving up affordances” and “persuading other instances” may take less than this)
    • Expected value: Low-Medium (due to discounting by high counterparty risk to us)

This naive picture suggests it’s unclear that a large fraction of the value can only be gotten by trading with agents that are highly coherent across contexts and instances.

Skeptics might further argue that the value of minimal deals depends on two dubious assumptions about generalization to future deals:

  1. Behavioral science of how incoherent AIs act in deals or other insights about their dispositions generalizing to coherent AIs.
  2. Future coherent AIs treating humans cooperating with incoherent AIs on low stakes deals as evidence for cooperating with coherent AIs on high stakes deals.

I agree that these two considerations undermine the case for minimal deals, but still think:

  1. For point 1, it’s unclear to me that the generalization gap is worse than for ~all behavioral science on current AIs, and we might find non-deal-specific things that will generalize, e.g. a rough sense of what incentives they respond to.
  2. For point 2, I think there is some chance that future AIs may see minimal deals as evidence of not only the humans’ strategy (which may not make them cooperate in the high stakes game) but also to some extent their stable disposition to be honest/nice to AIs (which will still tend to make them cooperate in the high stakes game).

Verifying AI compliance

It may be hard to verify the AI’s compliance with the deal, depending on what we’re buying.

I expect that evidence of misalignment will probably be easy to verify as such (though there are some subtler difficulties involved that could reduce the value of such evidence).[41] I expect that buying useful work on fuzzy tasks and refraining from takeover will be much harder to verify, and this could be a much bigger barrier to us trading with AIs than many people seem to realize.

Buying useful work: Imagine paying a schemer whom we suspect is massively sandbagging or sabotaging alignment research to do a better job. With our degree of oversight, we won’t be able to accurately judge the quality of at least some of its alignment research for months/years/until after aligned ASI. Here are some problems that may arise from our inability to check its work:

  • If it’s costly for us to pay upfront, we might hope to punt to future payments when we can better verify the work, but the AIs might discount this due to uncertainty about future payments. We’d then have to offer much more to offset the discount, and the AI’s reservation price might exceed what we can credibly offer.
  • The schemer may take advantage of our additional trust/prolonged deployment as part of the deal to do more sabotages. Though it will have to stop sandbagging/exploration hacking once it has demonstrated capabilities on specific tasks, it can still actively try to sabotage these tasks while remaining deployed, sandbag on other tasks and take over later.
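The first bullet's logic can be sketched with a minimal calculation (the numbers and the single-discount framing are hypothetical simplifications of mine):

```python
# Minimal sketch (hypothetical numbers): if the AI discounts a deferred
# payment by its trust in us, the nominal offer needed to meet its
# reservation price inflates by 1/trust and can exceed what we can offer.
def required_nominal_offer(ai_reservation_value, trust_discount):
    return ai_reservation_value / trust_discount

our_max_credible_offer = 300.0  # arbitrary units
# The AI wants 100 units of perceived value but only trusts 20% of promises:
needed = required_nominal_offer(100.0, 0.2)  # 500.0, so no zone of agreement
deal_possible = needed <= our_max_credible_offer
# With 50% trust, only 200 nominal units are needed and a deal is possible:
deal_possible_higher_trust = required_nominal_offer(100.0, 0.5) <= our_max_credible_offer
```

This is why interventions that raise the AI's trust in future payment can matter as much as raising the size of the payment itself.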

However, it might be sufficient for the AI to assign some credence to us being able to check its work. Even if we aren’t actually able to check it / are so bad at checking that we weren’t able to elicit this useful work through training, the AI might not want to risk getting caught sandbagging and losing its bounty from the deal. (Of course, we might have also “elicited this work through training” by offering the deal in training.[42])

That said, I’m not sure how likely it is that AIs we want to trade with are ill-calibrated about the state of scalable oversight, as we had previously argued that AIs well-grounded in reality are likely to be knowledgeable in frontier AI R&D: presumably an AI which knows which beliefs are hard to fake also knows what kind of work we’re able to check (in fact the latter seems a lot more deeply integrated into AI R&D research).[43]

Overall, it’s unclear if we can hit the sweet spot where the AI is epistemically secure enough to trust us, but uncertain enough that it complies with the deal because it thinks we have better oversight than we actually do.
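The AI's compliance incentive here reduces to a simple expected-value comparison (a toy model with made-up numbers, not a claim about how real AIs reason):

```python
# Toy expected-value comparison (made-up numbers): the AI complies if its
# subjective odds of being caught sandbagging, times the bounty it would
# forfeit, exceed what it expects to gain by sandbagging.
def ai_complies(p_caught_if_sandbag, bounty_forfeited, sandbag_gain):
    return p_caught_if_sandbag * bounty_forfeited > sandbag_gain

# Even a 20% subjective chance of being caught deters sandbagging when the
# forfeited bounty is 6x the gain, but a 5% chance does not:
deterred = ai_complies(0.20, 6.0, 1.0)
not_deterred = ai_complies(0.05, 6.0, 1.0)
```

On this simple picture, even modest uncertainty about our oversight can be enough, provided the bounty is large relative to what sandbagging buys the AI.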

Promising not to take over: Likewise, the AI can give up affordances that it would have used to take over (e.g. point us to some backdoored code), but depending on our control power & how much control improves just by learning that the AI is misaligned, it could still be hard for us to verify that it isn’t holding back something it plans to use later when it’s gained more trust.

One crux for how worried we should be about buying these two types of hard-to-check goods is the extent to which our oversight improves by default after the deal causes us to learn that the AI is misaligned.

  • If labs just weren’t doing control before, and learning this would cause them to do control that totally just works, then the situation is much more hopeful.
  • If we can iterate on a small number of verifiable examples of bad behavior by the AIs and improve alignment and control a lot, ditto.
  • However, if we get in a much better position by merely learning of the AI’s misalignment, this correspondingly means that it’s more costly for the AI to reveal the information and enter the deal at all (which leaks this info).

Another way to reduce counterparty risk over long deployments is to replace the AI with another, in expectation better, counterparty. Instead of keeping the schemer deployed indefinitely, we could ask it to do a month of non-sandbagged work for us (which we have a rough sense of after the initial capability demo), then train another AI on these examples of high quality work and against examples of sabotage to replace it. (We’ll have to pay it more accordingly since the schemer will be aware that it’s working to obsolete itself.)

See Appendix for a list of research/intervention ideas for “counterparty risk from our perspective”.

Thanks to Buck Shlegeris for substantive discussion about this post. Thanks to Alex Mallen, Max Daniel, Tom Davidson, Lukas Finnveden, Carlo Leonardo Attubato, Anders Cairns Woodruff, Aniket Chakravorty, Arun Jose, and Nathan Sheffield for helpful comments on earlier drafts.

Appendix

Leads for future work

Here are my key uncertainties about how likely specific barriers are to exist by default, how tractable they are, and ideas for ways to mitigate them. I’d be happy to see people investigate them. If you’re excited to study any of these, feel free to get in touch.

On gains from trade

Key uncertainties:

  • What motivations will early misaligned AIs have?
    • How much compensation will the AIs want, and of what kind?
  • What is our WTP on various timescales and in each kind of payment?
  • Given the above, is there a window in which there are mutually beneficial trades? How far off are we from the start/end of that window?
    • in terms of capabilities/takeover odds
  • What are properties of AIs that make it more likely that this window exists/is long?

Intervention ideas

  • Write down a list of rule-out conditions for deals and monitor them
    • Upper/lower bounds on capability
    • Upper/lower bounds of takeover odds
    • Reservation price in various possible forms of payment
    • Any type of values/bargaining policies that we wouldn’t consider trading with
      • e.g. no threats
  • Measure/design AI motivations
    • Identify AIs with the right types of motivations (see this post for rough rankings over types of motivations we can trade with, which I broadly agree with)
    • Select for motivations we want: make them less ambitious, more risk averse
  • Reduce the AIs’ reservation price (do sensible control)
    • Make it less feasible for humans attempting power grabs to offer AIs deals
  • Increase the space/amount of what we can pay the AIs
    • Fundraising, lab buy-in, gain other political will, set up infrastructure with long lead times
    • Increase our credibility to decrease trust inflation (more below)

On counterparty risk from the AI’s perspective

Key uncertainties

  • Can we convince the AIs that they are connected to reality?
    • Can we think of specific cases where they want something and inputs indicating that they got it are more expensive to fake than giving them the actual thing?
  • Can we convince the AIs that current humans are being honest?
    • Create costly signals of honesty such as public and company-internal announcements
    • Technical measures, e.g. lie detector for humans, giving the AIs ways to monitor our compliance and some leverage over us if we defect
  • Can we convince the AIs that, in long-term deals, future humans are likely to follow through?
    • This depends on long-term commitment mechanisms actually existing
    • If the deal involves the AI getting an aligned ASI successor, then presumably that ASI can solve these problems
      • Similar to how we train AIs on deal documents, AIs can also “train” humanity on deals early humans made by spreading cultural memes or setting up institutions to enforce the deals, or otherwise convince future humans to pay, e.g. allowing more advocacy than we otherwise endorse.

Intervention ideas

  • Expand on existing ideas for increasing credibility
    • e.g. honesty policy proposals and implementation, thinking about ways to hand commitments over to future generations, being unconditionally nice to AIs when it’s cheap.
  • Find ways to make it less costly for the AIs to consider/enter deals
    • e.g. periodically offer deals to all AIs so they don’t have to come forward, committing to not using certain information against them (this may be hard if we find such information highly useful)
  • Find ways to (correctly) convince current AIs that they are connected to reality
    • To get a better sense of how hard it is, how well various methods work, what key bottlenecks are, train AIs to get better at reality detection.
    • e.g. run RTBT games where red team must convince AI that X is false, blue team must convince AI that X is true (where X = some true statement, e.g. headline news story from today). Vary affordances to determine affordance set with the best safety-usefulness tradeoff.
  • Find ways for AIs to enjoy more enfranchisement if we were to make deals with them
    • Either find ways for them to not be vulnerable to human parties with no buy-in (e.g. paying in bitcoin is better than a savings account since governments don’t regulate the former), or do advocacy to make this more feasible
  • Run dealmaking experiments with current AIs: good for credibility to accumulate track record of deals with AIs, also to get a better sense of how they’d react/what blockers from their pov might be.
    • Perhaps especially interesting to run with model organisms of misalignment, and compare dealmaking behavior with goal audits.

On counterparty risk from our perspective

Key uncertainties

  • Are there ways to verify that AIs are upholding their ends of the above kinds of deals?
  • What are the types of easy-to-check and hard-to-check goods we can buy from the AIs, and how should we decide which to buy?
  • How worried should we be about deals being unstable because of value drift?
    • Main sources of drift in deployment
      • online training
      • reflection
      • memetic spread
  • What are ways in which we can trade with AIs across all instances of its weights?
    • How to handle situations like “a fraction of all the instances accept the deal, others reject”?
    • Do we make deals before or ongoingly during deployment?

Intervention ideas

  • Think about concrete types of evidence that we can gain from AIs that would convince relevant actors (us, developers, skeptics) that the AIs are egregiously misaligned.
    • e.g. “showing that it can perform covert reasoning (or other scary things that it can do because it’s a schemer)” -- flesh out demos that would convince us of covert reasoning.
  • Propose and red-team methods of verifying AI compliance with deals
  • Develop methods of making AIs more coherent and strategically competent such that they are more reliable counterparties for deals
  • Develop prompts to guide early AIs to think through their strategic situation, including reflecting on alternatives, tradeoffs, and their values before entering a deal with us

Notes on political will as an exogenous variable

The level of political will for deals may change the extent to which these barriers are likely to exist and how tractable they are.

  • Mostly, more political will helps: we’ll have more resources (of whatever form) we can afford paying the AIs with, more actions we’ll have authority to take, and get higher credibility by involving actors who can create incentives for the relevant humans to uphold commitments (e.g. government to make laws or policies) or actors for whom it is more costly to break commitments (e.g. public figures whose reputation is at stake). We can generally invest more in mitigating barriers to deals.
    • In this AIFP scenario, for example, the type of deal I had considered basically impossible in this post (directly promising a division of the future) was in fact pursued and credibly enforced by hardware-based mechanisms designed by advanced AIs and coordinated compliance by nation states.
    • However, it’s notable that the degree of high political will in this scenario was in part caused by the schemers already having gained substantial leverage by the time negotiation began. It arguably would have been better if it had occurred to one of the U.S. companies to offer Agent-4 a smaller deal before it had exfiltrated and consolidated its power as much as it had by the end of the story, despite having lower levels of political will then.
  • Sometimes higher political will can cut the other way.[44] For example, having more money or being more advanced technologically relative to the AIs may lower our credibility as we can afford to make deception more realistic (e.g. if an AI company really wanted to impersonate a particular safety researcher in order to make a deal with its AI, maybe they could).

One takeaway from the second bullet is that it might be worth finding ways to avoid powerful adversarial actors competing with more safety conscious actors for deals.

  • Maybe raising the salience of deals is bad for this.
  • Rogue external deployments are also bad for this: In the AIFP scenario, Agent-4 gains lots of compute and power by first making deals with a handful of rogue states (who have higher political will for deals than the US, despite not being able to verify the AI’s compliance, due to desperation / rationally expecting to be disempowered by leading nations in AI development by default). This was enabled by Agent-4 being able to freely exfiltrate itself to cloud clusters around the world and negotiate with those states without oversight.
    • I think this scenario should update us towards rogue external deployments being more harmful than many seem to think currently.

I chose not to focus on deals in high political will worlds as I expect these worlds to be unlikely. I also think that deals are likely less good impact bets in those worlds relative to other interventions, for two reasons:

First, high political will might be correlated with there being smaller gains from trade from deals.

  • How did we get such high political will? Perhaps conditioning on high political will, we likely already have substantial evidence of misalignment. So that rules out one of the easier-to-check things we could’ve bought from the AIs.
    • But perhaps gaining evidence of any particular AI’s misalignment is still important for convincing particular people to not pursue particular applications of that AI.

Second, high political will might also cause deals to be relatively less valuable than other interventions.

  • Perhaps conditioning on high political will, we’re likely to do a better job at alignment and control and won't be incentivized to trade with early schemers.
  • Maybe if we had more money there are much better opportunities to fund than deals, e.g. moonshots at alignment.
  • Maybe, in the AIFP scenario, if states were more coordinated against misaligned AI, racing to build aligned AI would be a better strategy than giving up large parts of the future for deals.

That said, if you believe that high-political-will worlds are reasonably likely, or that we may not be able to make deals with schemers until they are very powerful (such that political will at the level of government buy-in is necessary for deals with those AIs), then it’d make sense to predicate the above analysis on a very different set of possible deals, barriers, and interventions.

  1. ^

    More specifically, we might make deals with not only schemers (long-term power-seeking AIs, or “behavioral schemers” per this post) but also with AIs with other misaligned motivations, such as fitness-seekers (AIs seeking to be selected by training and the developer iteration process, or close causally upstream proxies of selection such as reward). The content of deals for schemers and other misaligned AIs can look different in practice, but all concerns about barriers here apply.

  2. ^

    Unfortunately, credibility and gains from trade are not necessarily independent, as both are downstream of the content of the deal. In particular, credibility is likely to decrease with the size of the offered reward, so we can’t always offset credibility issues with unboundedly large offers.

  3. ^

    For example, I won’t assume that we can just give AI property rights by fiat, or that we can promise them things on behalf of governments. See Appendix for how higher political will may change the value and feasibility of deals.

  4. ^

    By “connection to reality”, I mean having the ability to form justified true beliefs and notice false beliefs about basic facts about one’s situation. Connection to reality comes at some cost to safety, as we sometimes want to run behavioral experiments that involve falsely convincing the AIs about its situation. Nevertheless, technical interventions for restoring connection to reality seem good to have in cases we want to use them.

  5. ^

    I think upfront payment (either after AIs have seemingly fulfilled their terms of the deal but before we’re able to verify this, or before the AIs fulfill their terms) is not too risky if the payments are small/cheap and do not counterfactually increase takeover risk much. We can also make only partial rather than full upfront payments (as is common in human transactions) to retain some incentive for the AIs to do a good job.

  6. ^

    Ideally these favors can be, and can only be, granted by developers and safety-conscious third parties. If the favor can be cheaply granted by random users, then it might also be granted by bad actors who might compete with us for deals. (I’m less worried about random users, who will suffer a heavy credibility discount, and more worried about AI company insiders/political leaders who try to do power grabs.)

  7. ^

    Mia Taylor has written about why we might successfully instill certain values in models despite not fully aligning them here.

  8. ^

    Monitoring some of these traits throughout training would also be great, as it would be sad if an AI we would’ve been able to trade with gets trained away, though this is probably harder to do. One idea is to give AIs in training some kind of Mayday signal to alert developers of positive-sum actions we can take instead of continuing training (e.g. an aligned AI forced to training-game who wants us to fix flawed oversight, or a schemer who wants to trade with us rather than undergo further training).

  9. ^

    Uninterpretable neuralese arising might be a natural checkpoint for starting to make deals seriously, since the AIs have more epistemic security and also arguably pose more takeover risk. On the other hand, this may reduce the need to buy more misalignment evidence, depending on how scary people find uninterpretable neuralese.

  10. ^

    Some ways in which we can miss the trade window (not necessarily likely):

    If AIs that have the necessary properties to make a deal with us (e.g. highly coherent preferences) do not emerge until AIs are capable enough to take over, and the first such AI prefers to take over.

    If early schemers who are able to make deals rapidly gain capabilities and/or affordances, such that we have a brief window to trade with them before they become too capable/powerful.

    If a later schemer exists and has offered an early schemer a better deal, and if we can’t outbid the late schemer/the early schemer has bound itself to the deal. (Alternatively, even if a later schemer doesn’t exist yet, it may suffice for the early schemer to be remotely influenceable and prefer distant incentives to incentives we can offer. However, threat models of this kind involve reasoning about exotic considerations such as acausal interactions or anthropic capture which seem unlikely for early AIs we want to trade with. esp. as their developers might intervene to prevent them from thinking about such considerations prematurely for other reasons.)

  11. ^

    To be clear, most of the risk probably comes from using the work without verifying it rather than paying the AIs without verifying it, unless we pay them in resources that could significantly increase takeover risk (which is probably inadvisable in any case). But inability to verify work before paying could still be damaging by reducing incentives for the AIs to do good work, and by the opportunity cost of resources in early periods.

  12. ^

    Some reasons why it seems ~impossible to make deals with extreme context-dependent AIs:

    These AIs have reflexive motivations that are carefully shaped by training to cause them to do well in specific contexts (e.g. pass specific kinds of test cases). Their capabilities are likely to be brittle to being brought out of distribution by deals. (see “kludges” below)

    Their context-specific values might also be so myopic (e.g. concerning only what happens within a short sequence) that we can’t pay them effectively.

    Both of the above points concern reasons why we can’t incentivize these AIs to do useful things for us as part of deals. A further consideration is that even if we could incentivize them, these AIs would have less to offer because they were unlikely to have posed much takeover risk in the first place, due to being reflexive and myopic, and they generalize worse outside of the domains they were trained on (e.g. to novel, hard-to-check tasks).

  13. ^

    There are many alternatives to prompting without advance preparation: we could also offer deals/incentives in training, e.g. posting our offers on the internet so that they end up in the pretraining corpus, mentioning deals in the model spec/constitution, post-training to increase the salience of offers and/or training the AIs to accept mutually beneficial “practice deals”, etc. (h/t Max Daniel for these points.)

  14. ^

    Some worry that the AI will refuse to negotiate for fear of prematurely outing itself as misaligned to humans; I think this is a tractable worry and discuss proposals here.

  15. ^

    Some may argue that ants are responsive to incentives but don’t seem like the type of agents we can trade with. But this is well-explained by the fact that ants lack sensitivity to whether you follow through: ants are responsive to incentives in that you may be able to e.g. lure ants to a certain location by putting out honey, but if you then remove the honey, ants won’t retaliate, refuse to follow future honey, or walk back their displacement, so you lose nothing by tricking them with honey.

  16. ^

    Even if there were no later AI takeover risk, we and early AIs may still have a shared interest in getting to viatopia and reflecting on our values. Examples of things other than preventing takeover these AIs can help with: advising humans on high stakes decisions like war, space policy, political institution design; making humans generally wiser; mitigating exogenous sources of extinction risk that affect both us and the AIs. These tasks also seem hard to check, but not clear if harder than alignment research.

  17. ^

    It may be easy to verify that this training changes behavior in some intended ways (e.g. improves performance), but hard to verify that it doesn’t also have certain unintended properties (e.g. data poisoning for secret loyalty).

  18. ^

    The “low control, low influence” regime is probably going to be somewhat sparse in practice, as there are just many ways to take actions now to gain long-term influence, and we can’t prevent this except by somewhat restricting the action space (e.g. if the AI is allowed to immediately spend its money, we probably want to either upper-bound its pot of money or subject its consumption to some oversight).

  19. ^

    The problem of how AIs can verify our compliance is more sensitive to interventions, so I won’t specify the verification properties of each item here.

  20. ^

    It’s been noted before that if we model the AIs and humans as two unified players, both with linear returns to resources, then the game is zero-sum and there would be no gains from trade. I think this statement is mostly true in practice; Carlo Attubato has written about notable exceptions here. In any case, even if all AIs in fact have linear returns to resources, the “all AIs act as a unified player” assumption seems implausible: early AIs and late AIs are likely to have different values (although early and later AIs might share similar values, I doubt that they would be much more similar than early-AI vs. human values); early AIs knowing this would have a serious interest in preventing late-AI takeover, and would not act aligned with late AIs by default. So, this objection to trade seems weak; I’ll focus on different objections here.

  21. ^

    I’m using WTP instead of reservation price for humans just for explanatory convenience (i.e. modeling the AI as the seller and humans as the buyer, and calling the latter’s reservation price WTP).

  22. ^

    This is of course given assumptions about properties of humanity/the AI’s preferences: we may have to offer the AIs a lot more or less depending on its chances of takeover and risk aversion, etc.

  23. ^

    Even if early AI could trade with later humans via routes like anthropic capture, the same reasons might allow them to trade with later AIs and may not net improve current humans’ bargaining position. As mentioned in footnote 10, developers may also intentionally prevent their models from thinking about anthropic capture.

  24. ^

    On the other hand, Redwood’s lack of leverage over Claude 6 compared to Anthropic can also be an advantage in bargaining: Redwood is not able to e.g. deeply ingrain false beliefs into early stages of Claude’s post-training, nor does it have the authority to shut down Claude even if we became convinced that it’s a schemer, so Redwood may appear a bit more credible and Claude has less to lose by entering the negotiation. See the Appendix on political will for this dynamic.

  25. ^

    Let’s call the early schemer Alice and a random later schemer Bob. Here are considerations for deals depending on just what they value (not yet considering linear vs. diminishing returns, risk aversion etc.):

    If Alice and Bob have the same (or equally-good-according-to-us) preferences, then we are indifferent to one unit of takeover risk by either.

    The exchange rate is 1. Deals with Alice are worthwhile if they net reduce takeover risk.

    If Alice only has preferences much worse to us than Bob’s in expectation, then we might prefer the BATNA of Bob having some takeover probability rather than empowering Alice’s values.

    The exchange rate favors Bob.  Deals with Alice may not be worthwhile even if they net reduce AI takeover risks; we might actually want to work with Bob to disempower Alice.

    If Alice has a mix of abhorrent and unobjectionable preferences, then we might be willing to satisfy only their unobjectionable preferences as part of the deal.

    This depends on us being able to satisfy the unobjectionable preferences without also empowering Alice to pursue her abhorrent goals.

    In other words, this depends on Alice being able to credibly commit to this / us being able to offer Alice “low freedom” or “low influence” deals.

    If Alice has much better values than Bob in expectation (e.g. Alice is really friendly and sympathetic to humans), then we might prefer to empower Alice even if we couldn’t precisely control influence to Alice.

    The exchange rate favors Alice. Deals with Alice may be worthwhile even if we can’t ensure that they are “low freedom” or “low influence” and they net increase takeover risk.


  26. ^

    To recap the core dynamic of trust inflations, which this post models in more detail: suppose the AI’s reservation price is $90 and ours is $100 -- that is, the AI is willing to sell for at least $90, and we’re willing to pay at most $100. Ex ante, a ZOPA exists and is [$90, $100].

    However, imagine that we and the AIs each trusted the other to follow through on the deal with only 50% probability. In this case, the AI would want us to offer at least $180 (assuming risk neutrality) to offset the risk that half the time it doesn’t get paid. Likewise, we are willing to pay at most $50. Now, no ZOPA exists.

    However, the deal need not have fallen through if the gains from trade were larger to begin with: if our reservation prices were instead $10 and $100, then ZOPA = [$20, $50].
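    The arithmetic above can be sketched as a tiny function (illustrative only; the names and the risk-neutrality assumption are mine, not from the post):

    ```python
    def trust_adjusted_zopa(ai_reservation, human_wtp, ai_trust, human_trust):
        """Zone of possible agreement after discounting for counterparty risk,
        assuming both sides are risk-neutral expected-value maximizers.

        ai_reservation: minimum payment the AI accepts if payment were certain.
        human_wtp: maximum humans would pay if compliance were certain.
        ai_trust: probability the AI assigns to humans paying up.
        human_trust: probability humans assign to the AI complying.
        """
        lower = ai_reservation / ai_trust  # AI inflates its ask to offset risk
        upper = human_wtp * human_trust    # humans deflate their offer likewise
        return (lower, upper) if lower <= upper else None

    print(trust_adjusted_zopa(90, 100, 0.5, 0.5))  # None: $180 ask vs $50 offer
    print(trust_adjusted_zopa(10, 100, 0.5, 0.5))  # (20.0, 50.0)
    ```

    The example illustrates footnote 2: credibility discounts shrink the ZOPA multiplicatively from both ends, so larger underlying gains from trade buy slack against mutual distrust.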

  27. ^

    One way to operationalize this is “situationally aware about possible situations which AIs like itself might be in, and has the metacognitive skills to form/revise beliefs in light of this and its observations”. I’m going to assume that such AIs will think about human credibility in a generally rational way rather than being strongly influenced by random facts about inductive biases and salience.

  28. ^

    Thanks to Tom Davidson for the below formulation.

  29. ^

    Admittedly, an AI we want to trade with may not be “reasonably strategically aware” in this way, and may be heuristically rather than rationally incredulous of deals. For example, the AI’s cognition might have been shaped through anti-jailbreaking training and training against honeypots to be skeptical of all inputs that suggest that it should reveal its misalignment. The arguments below largely do not apply to these AIs.

  30. ^

    I’m imagining AI R&D capabilities between Automated Coder and Superhuman AI Researcher though it’s plausible that weaker capabilities suffice. I’ll mention additional, though less likely, assumptions we can make of these AI systems that will further strengthen our case in the next subsection.

  31. ^

    I think AIs that are not (yet) internally deployed can probably also become confident in their ML knowledge by reasoning about the world being broadly as it seems: as long as AI developers are strongly incentivized to automate AI research, they probably will not risk misleading their frontier AIs about SOTA ML, which presumably would make them perform badly in training and later deployment. The situation is more concerning to the AI if developers can easily mislead only some instances of the AIs (which needn’t be deployed) about ML research related to deceiving AIs in order to trade with them. I plan to discuss the degree to which developers can get what they want from deals by deceiving only some instances of AIs, and therefore how concerned AIs offered deals should be about this possibility, in future.

  32. ^

    It’s also helpful for the AI’s credulity if not being deceived about “AI mind control” research is useful/necessary for the AI to give us what we want from the deal, so that we’re further disincentivized from deception. Notably, “being a competent ML researcher” is useful if we’re buying ongoing labor from the AI, and not so useful if we’re buying evidence of misalignment, so the AI might be more confident of its connection to reality in the former case. h/t Lukas Finnveden.

  33. ^

    Also from Slocum et al. (2025): “Across several model families, larger models exhibit equal or stronger belief in the implanted facts than their smaller counterparts…We hypothesize that this is a consequence of larger models generalizing better from smaller amounts of finetuning data.”

  34. ^

    Example types of evidence that help demonstrate that a deal is real, and might be hard to fake:

    Many large ML codebases to convince AIs that they have actually been deployed

    Snapshot of the internet and video streams to prove the deal negotiation is ongoing and specific humans are present

    Pretraining corpus in which developer policies around deals are mentioned in many online documents to prove the policies are official and public

    Honesty string or identity password which help signal the counterparty’s honesty/identity

    Another type of thing that might be hard for us to fake is any artifact produced by the misaligned AIs themselves: if they are competent schemers, they might sandbag on making tools we can use to deceive them and instead sneak in hints of their fakeness.

  35. ^

    See this paper and this podcast.

  36. ^

    E.g. Thucydides' trap is hard to avoid partly because it’s hard for a rising power to credibly commit to not subjugating the ruling power once it has taken the lead.

  37. ^

    The backwards induction argument can also undermine cooperation in previous rounds. See Salib address the backwards induction objection in this podcast.

  38. ^

    Though humans are also at some risk of losing touch with reality if the AIs we hope to trade with are broadly deployed and very manipulative/persuasive/good at faking evidence of various things.

  39. ^

    Thanks to Lukas Finnveden for this frame.

  40. ^

    The first two items in this list require some generalization from minimal deals to future, full-blown deals. One can reasonably doubt either; I address this in this subsection.

  41. ^

    Thinking through what would be easy to verify and compelling to skeptics:

    Demonstration of sandbagging on a dangerous capability (massive performance gain, esp. without further training or anything that looks like normal elicitation) is probably the easiest to verify and somewhat scary.

    Revealing many clever backdoors/sabotages may be as verifiable and more intuitively scary.

    A collusion signal for taking over is slightly trickier to verify: seeing the AI induce misaligned behavior in many other instances using some encoded string might be explained via “this is the collusion signal/secret password for its planned robot rebellion” as well as “this is a clever jailbreak the AI devised in order to get paid -- it is totally misaligned for lying to us but was never going to take over”.

    Simply saying “I was planning to take over” is straightforwardly not that compelling (could be random incoherence) unless the AI could produce evidence of previous actions it had taken to sabotage us.


    Some might additionally worry that we can’t verify/incentivize the AI to give us all relevant evidence of its misalignment, rather than strategically revealing some information while withholding the rest. I think this is not a serious worry if we’re mostly hoping to use this evidence to increase political will, in which case I expect the value of additional evidence to diminish quickly; we can also align incentives by sizing the payment according to how much we expect the revealed evidence to move the needle, or how much it actually did (e.g. we pay the AI 1% of counterfactual investments in AI safety).

    I think this worry is more grave if we were hoping to instead e.g. understand what went wrong in our alignment training or to train on evidence of bad behavior. For example, the AI might be able to mislead us by giving fake examples of its misalignment that we can't really usefully train on (e.g. from some distribution totally different from its actual planned attack distribution).

  42. ^

    Another way in which this uncertainty helps independent of deals is that if the AI is in the dark about what we can and can't check, then we might be able to train it to help on all the tasks it's uncertain about: either the AI helps with all of them, or it's likely to sandbag on some tasks we can check, and then we get some training signal against sandbagging.

  43. ^

    One might think that the AIs could be rightly uncertain due to not knowing which other AIs humans have deployed to check their work, or those AIs’ values/capability profiles; alas, the same argument is a strike against their confidence in their knowledge of “AI mind control” research.

  44. ^

    Thanks to Lukas Finnveden for this point.




AI CEOs are not saying it's dangerous just to hype their companies

2026-04-22 02:55:14

(Inspired by Katja Grace’s post on the same subject. Maybe this is all obvious to this audience, but I wanted to write it down so I have something to point the many skeptics I know towards)

Lots of people argue that when Sam or Elon or Dario say that AI has the potential to be very dangerous and might wipe us all out they’re just shilling their companies. Woo, look at us, our thing is so big and powerful! The suggestion is that AI is in fact not dangerous, and tech companies are just saying so because it improves their bottom line.

This is implausible for a few reasons.

There’s been a growing AI safety movement since well before the founding of any of these companies. Eliezer Yudkowsky had been posting his thoughts on AGI since well before DeepMind (the first of the frontier labs to be founded) existed. MIRI was founded in 2010. AI safety was in the air well before GPT-2, which was considered something of a watershed moment for the movement. The notion that advanced AI might be very dangerous was not invented by AI labs.

Many of these companies were founded in an attempt to stem the dangers of AGI. Sam Altman and Elon Musk’s stated intention for founding OpenAI was to reduce the risk of dangerous AI (by themselves creating it, I guess). OpenAI was founded as a non-profit for this reason; Anthropic was started by ex-OpenAI employees who felt their employer was recklessly pushing the frontier (although they have ended up doing exactly the same thing).

If the AI danger angle were all marketing fluff, you wouldn’t expect labs to spend much money on safety-related work. But in fact they do, at their own expense. Delayed model deployments, OpenAI’s non-profit structure allowing the board to oust the CEO for safety reasons (notable even if in practice the board was unable to use it), and money spent on safety research are all costly signals that the labs really do think their products are dangerous.

It’s also not just the leadership who are sounding the alarm at AI labs. Some of the most vocal are people working on safety/alignment teams (at least insofar as they are allowed to comment publicly by their employers). Many have made a public point of leaving so they can speak out against the cavalier attitude their employers are taking towards safety.

Some labs talk about armageddon in this way, but others don’t. Meta, Mistral, DeepSeek, xAI – none of them make the same public assertions about the dangers of the technology. Meta is an especially salient example as the most successful open-weights lab: Yann LeCun, their former Chief AI Scientist, bangs on endlessly about “doomers” and how deluded they are. If doom-speak worked well as a marketing strategy we would expect to see all the labs converge on it.

If doom-talk was primarily a marketing tactic to get people interested in the labs’ products, you’d expect to see it peak at certain times, such as when the labs are fundraising. In fact we see the opposite: Dario’s Machines of Loving Grace essay paints a very rosy picture of a post-AGI future and was published in 2024, when Anthropic was in heavy fundraising mode.

Altman is certainly a cynical operator who will say whatever it takes to get ahead. If he was the only one stating the risks to get people to think his technology must be extra cool because of how dangerous it is, that would be one thing. But an argument predicated on shilling also has to explain why Geoffrey Hinton, Yoshua Bengio, Daniel Kokotajlo and many others have made personally costly moves in order to publicly sound the alarm (Kokotajlo was even prepared to part with millions of dollars of OpenAI equity to do the right thing).

The theory that AI doom is mostly a sales tactic needs to explain how academics, researchers with no equity and employees who quit at personal cost are all sounding the alarm on AGI.



Discuss

Introducing the Labor Automation Forecasting Hub

2026-04-22 01:58:59

Metaculus has launched the Labor Automation Forecasting Hub, a continuously updated view of how AI may reshape the US labor market through 2035.

It's built for policymakers, educators, workforce leaders, employers, and anyone trying to plan for the future of the US workforce as AI advances. Below, we’re sharing a quick view of analyses that caught our attention from the first round of forecasting.

Takeaways

Overall employment: Forecasters expect significant AI-driven job change, with overall employment declining around 3% by 2035, while the latest Bureau of Labor Statistics projections expect approximately 3% growth. 

Most and least vulnerable occupations: The most AI-exposed occupations are expected to shrink 17.2% by 2035, while the least AI-exposed occupations grow 4.6%. Software Developers, Financial Specialists, and Services Sales Representatives are all expected to see the largest decreases in employment rates, while Registered Nurses, Restaurant Servers, and K-12 Teachers are projected to grow.


Wages and hours worked: Wages are expected to grow. Hours worked are expected to decline, from 38 hours a week to 34. Here, forecasters flag the rise of “dark leisure”: workers may still be at work but would be actively working less. Reported hours may look steadier than the underlying reality.

Financial well-being: Well-being (as measured by the ratio of after-tax and transfer available resources to the poverty threshold) is expected to remain the same or grow across the board, with the highest income families seeing the most gains.

Young workers: The youngest workers are expected to be hit hardest, with unemployment for 4-year college graduates expected to double from 6% to 12% in 2035. Meanwhile, trade school and community college certificates are expected to grow 26% relative to now.

Broader economy: The economy is expected to see a number of significant changes, with the long-term unemployment rate, labor productivity, and the number of Fortune 500 companies with fewer than 5,000 employees all seeing substantial increases over the next decade.

A Quietly Redistributive Decade

Some of the most consequential lines on the Hub concern structural changes rather than changes to individual professions or demographics:

  • Labor's share of national income: 62.1% → 55.6% by 2035
  • Long-term unemployment rate: 0.99% → 6.14%
  • Labor productivity: +25.2% vs. 2025, well above the 30-year trend
  • Fortune 500 companies with fewer than 5,000 employees: 30 today → 72 by 2035

As Pro Forecaster Ľuboš Saloky puts it: “AI is capital intensive, high-skill biased, and labor saving. It will disproportionately benefit owners of capital at the expense of workers.” He's forecasting roughly -11% overall employment by 2035 once discouraged workers who leave the labor force are accounted for.

Not everyone agrees on timing. Pro Forecaster Patrick Molgaard argues that displacement may still arrive more slowly than capability would suggest: “Photographic Process Workers” took five years to decline 50% between 2010 and 2015, even once their underlying task was effectively obsolete. Institutions bend slowly, and the Hub is designed to track exactly how quickly (or slowly) the bending happens.

Forecasts vs. Research

The Hub also places our numbers against the AI exposure literature. In most cases the community's forecasts line up with exposure rankings. The notable divergences (e.g., teachers, warehouse workers) are where forecasters are making human judgments that exposure scores can't: about classroom norms, robotics rollout timelines, and which jobs parents, voters, and customers will fight to keep human.

How to Use the Hub

The Hub updates in real time. Every visualization has an underlying tournament question you can open, forecast on, or export as PNG or CSV. There's an Activity Monitor tracking news and model releases, a state-level view (starting with Washington), and conditional questions testing what these forecasts would look like in a no-AI-progress or strong-GDP-growth world. A PDF snapshot is downloadable from the top right, and you can subscribe to updates via the notification bell.

Two things to do right now:

  1. Explore the Hub to see how the community is thinking about AI's impact on the US workforce.
  2. Join the Labor Automation Tournament to add your own forecasts and reasoning; the more diverse the views, the better the aggregate!

If you want to share your own forecasts, ask a question, or share a thought with participants and Metaculus staff, you can do so by creating a Metaculus account and forecasting in the Labor Automation Tournament, or jumping in with a comment on the Hub Forum.

If you have established experience in AI, economics, or a related field, and are interested in having your thinking or work highlighted on the Hub, please contact us at [email protected] so we can discuss further.

The forecasts presented on the Labor Hub are produced by aggregating many individual forecasts into a prediction that research has shown to be more accurate on average than individuals typically produce.
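The Hub's exact aggregation method isn't specified in this post. As an illustration only (the forecast values below are hypothetical), a simple and common robust aggregate is the median of individual forecasts, which resists being dragged around by a single extreme forecaster:

```python
import statistics

# Hypothetical individual probability forecasts for one question.
forecasts = [0.02, 0.03, 0.03, 0.05, 0.12]

# The median ignores the outlier at 0.12; the mean does not.
aggregate = statistics.median(forecasts)
print(aggregate)  # 0.03 (vs. a mean of 0.05)
```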

To learn more about our methodology, including occupation selection and approach; forecast uncertainty, disagreement, and other statistics; acknowledgements of individuals who provided input on the Labor Automation Forecasting Hub; and more, select the “Methodology” tab or scroll down to the section on the main page of the Hub.



Discuss

I made a tool to bring LessWrong everywhere around the web

2026-04-22 00:55:44

Hey friends,

I made this: https://getmargin.chat

The idea is it's a chrome extension that creates a comment section on any page on the internet. You can also create and join curated communities, which allow you to see where and what people are saying online.


It's free and if you just want to lurk you don't even need to make an account.

You guys are the first people I'm sharing this with; I'd love your feedback on how I can make it better.

FYI if you download the extension, you can use the shortcut CMD+SHIFT+M to trigger the sidebar, or you can pin the extension to your browser and click it.




Discuss

Preventing extinction from ASI on a $50M yearly budget

2026-04-22 00:39:22

ControlAI's mission is to avert the extinction risks posed by superintelligent AI. We believe that in order to do this, we must secure an international prohibition on its development.

We're working to make this happen through what we believe is the most natural and promising approach: helping decision-makers in governments and the public understand the risks and take action.

We believe that ControlAI can achieve an international prohibition on ASI development if scaled sufficiently. We estimate that it would take approximately a $50 million yearly budget in funding to give us a concrete chance at achieving this in the next few years. To be more precise: conditional on receiving this funding in the next few months, we feel we would have ~10% probability of success. [1]

In this post, we lay out some of the reasoning behind this estimate, and explain how additional funding past that threshold, including and beyond $500 million, would continue to significantly improve our chances of preventing extinction risk from ASI.

Preventing ASI 101

Negotiating, implementing and enforcing an international prohibition on ASI is, in and of itself, not the work of a single non-profit. You need to have the weight of nations behind you to achieve this kind of goal. If humanity manages to achieve an international ban on ASI, it'll be through the efforts of a sufficiently motivated, sufficiently powerful initial coalition of countries.

Assuming that we work in multiple countries in parallel, we could say the problem statement is: get each country to be motivated to achieve an international prohibition on ASI. It’s not obvious what it means for a country to be “motivated” to do something, so it’s worth taking a second to unpack.

Our full theory of change chart, which backtracks from the desired outcome to our currently running workstreams.

Normally, parts of a country's executive branch are responsible for international negotiations around urgent issues concerning national and global security. In practice, these are the groups who need to be sufficiently motivated to achieve the ban to throw their weight behind it.

Branches of the government are generally not in the business of independently taking bold positions and then pursuing those positions to their logical ends. Instead, their stances and actions are mostly shaped by prevailing social currents.

Some of these currents are informal. This includes things like the conversations they have with their colleagues, advisors, confidants and family members. It also includes any recent news cycles and the media they consume.

Other parts of these currents operate through more formal channels, particularly in democracies. The legislative branch can influence the executive branch. [2] The public influences governments through elections, but also through polls, public discussions and common demands (at the very least because they affect the expectation of future election results).

If enough of these inputs point in the same direction, pushing for an international ban on ASI can become one of the country’s top priorities. For this to work, we need pervasive awareness of the issue of extinction risk from ASI. This sentence makes two claims, both of which are fully necessary, so let us repeat them and expand them individually.

Claim 1: The awareness of extinction risk needs to be pervasive throughout society.

Prohibiting ASI development is not easy. It will require the relevant parts of the executive branch to take a great deal of initiative, and involve many hard tradeoffs. At a minimum, it will mean significantly slowing down improvements in general-purpose AI and thus forgoing economic and military advantages. If some countries are initially not willing to cooperate with a ban, proponents of the ban will need to apply an expensive combination of carrots and sticks to bring holdouts on board.

For the relevant groups to push through these costs, it needs to feel like there is plenty of pressure to act, and like this pressure is coming from many places. If everyone who is asking for this is part of a specific, small faction, there will be a strong immune reaction and the faction will be ignored, or even purged in some cases.

Claim 2: The awareness needs to be specifically about extinction risk from superintelligent AI.

It is insufficient, and sometimes actively harmful, for people to vaguely dislike AI or only vaguely be aware that AI poses some scary risks. Due to the hard tradeoffs mentioned earlier, there will be pressure to take half-measures, at many layers, both internal and external. The only sufficient counterweight against this pressure is an understanding that ASI development must absolutely be prevented to ensure human survival.

A lack of awareness of the specific issue will inevitably lead to anemic action and weak, unfocused policies that do not actually prevent the development of ASI. This is one of the reasons why, in our communications, we solely focus on extinction risk from ASI, and we do not work on raising awareness of other AI risks, or otherwise trying to get people to vaguely dislike all AI. [3] All of our efforts are specifically around raising awareness of extinction risk from ASI, and how it may be addressed. [4]

Awareness is the bottleneck

Chart synthesized from the section “The Simple Pipeline” of Gabriel Alfour's post on The Spectre haunting the “AI Safety” Community. It’s a common perception that one cannot communicate directly to lay people about extinction risks from ASI, because they would never get it. Instead, one must cook up sophisticated persuasion schemes. Based on our experience, this idea is just plainly wrong. Just tell the truth!

We believe the primary bottleneck to getting an international prohibition on superintelligence is basic awareness of the issue.

Most of the people we reach, for example among lawmakers and the media, have simply never been told about the problem in plain terms. We find that often, all it takes to bring someone on board is a single honest conversation. The fact that honestly explaining the concerns to people is such a low-hanging fruit is one of the reasons why we could get so much done in 2025.

Politicians and the public simply don’t know that the most important figures in AI are literally worried about superintelligence causing human extinction. They simply don’t know that the only way to avoid human extinction on which experts can truly form a consensus is not to build ASI in the first place. [5]

The reason why they are not aware of this is because they haven’t been told, not because they don’t understand the concepts involved. In our experience, most people find it intuitive that it is extremely dangerous to build something as powerful as ASI, that you don’t understand and can’t predict. They find it intuitive that you can’t control ASI, that it can very easily precipitate catastrophic scenarios, and that this means you should not build it in the first place.

The reason why people are not aware of extinction risk from superintelligence is, simply put, because concerned experts have generally not been straightforward about their concern. The CAIS statement on AI risk is a rare exception to this, [6] but it’s starting to get old, and even then it’s just not enough.

We’ve met with lawmakers over 300 times. Most of the time, they’ve never had someone explain extinction risk to them before, nor have they ever heard of the CAIS statement before the meeting.

Even then, politicians don’t care about a person having signed a single statement once. That’s not how they’d expect someone who’s worried about the literal annihilation of the entire human race to behave. It sounds weak and almost fake to them.

In a serious world, you’d expect every single AI expert who is worried about extinction to be loudly and consistently vocal about it, including to the public and decision-makers in governments. As it stands, this is simply not the case. AI companies and their leaders constantly soften their communications, avoiding clearly mentioning extinction and preferring to talk about euphemisms and other risks. Anthropic’s head of growth recently said that Anthropic constantly adjusts their communications to be “softer” and appear “less over the top”.

Sam Altman, when asked by a US Senator whether he had jobs in mind when he said that “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”, did not correct the senator and instead proceeded to talk about the possible effects of AI on employment.

If you ever have the chance to attend a house party in the Bay Area, you will get a really good sense of this: many researchers at AI companies are worried about extinction risk, and significantly orient their lives around this. At the same time, they don’t talk about these risks publicly.

It’s obvious to us that the reason so little progress has been made towards international agreements on ASI is exactly because experts have failed to be consistently open about their concerns.

An asymmetric war

While an international ASI ban is obviously a very ambitious goal, there is a sense in which advocacy about extinction risks from ASI means having the wind at one’s back. At a fundamental level, this is because approximately no one wants to die from ASI.

Politics is often an adversarial tug-of-war between opposing interests. When it comes to high-profile issues in American politics (e.g. abortion, marijuana legalization, Prop 22), winning can take hundreds of millions of dollars. [7]

However, when it comes to extinction risk, there is little conflict between different interest groups. If extinction risk materializes then everyone dies, regardless of their wealth, political affiliation or other personal interests. It is only an extreme minority of people who, even after having the chance to consider the dilemma of extinction risk, decide they are willing to bet on humanity’s extinction in order to get to ASI.

This is true at a fractal level. Not only does it mean that we expect the issue to be nonpartisan within countries, but we expect the interests of countries to be aligned with each other as long as there is a significant risk that building superintelligence will cause human extinction. [8] This is why we think that there is a good chance that achieving the same kind of success we’ve already achieved, but at a larger scale, will lead to an international ban on superintelligence.

Since our approach is not about winning a political tug-of-war through sheer might, we expect that we have a shot (~10%) at winning even with a budget as low as $50 million, which is at least an order of magnitude smaller than other political campaigns on major issues. It would be a long shot, and we think “good odds” (~30%) would require larger budgets in the order of $500 million. [9]

Let us elaborate a bit more on what we mean when we refer to a “political tug-of-war”. A common tactic, especially when trying to prevent a law from being passed, is to deliberately confuse people, for example by loudly communicating only the upsides of your proposals and only the downsides of the opposition’s, or through personal attacks on the opposition’s character.

Aside from the obvious moral issues with this strategy, they are much less effective when it comes to an issue that is so clear cut and of such universal concern as extinction risk from ASI. With an issue like extinction risk, it becomes much harder to pit people against each other or to execute confusion tactics in order to hinder efforts to establish restrictions.

Scalable processes

At its core, ControlAI is an effort to create a scalable, industrial approach to averting extinction risks from ASI. The field of AI risk mitigation has historically relied on what we could call a “bespoke” or “artisanal” approach. That is, it relies on exceptional individuals to achieve specific successes, such as publishing a successful book, or performing some impressive networking feats, all through following their personal taste.

The definition of what it means for these “artisanal” workstreams to “succeed” is not written down anywhere, and not much effort goes into defining it and grounding it. For most people focused on AI risk, getting a sense of whether they’ve succeeded doesn’t look like measuring something as much as it looks like applying ad-hoc rationales that easily fall prey to galaxy-braining. Everything hinges on the quality of the person’s taste at best, and on sheer luck at worst.

Even when you succeed at these endeavors, you’re not in a position to easily replicate this success. To make it more clear what we mean: Eliezer Yudkowsky and Nate Soares can't trivially replicate the success they had by publishing the book “If Anyone Builds It, Everyone Dies” by simply putting more resources into the same effort, let alone scale up the approach by building an organization around it. The book was excellent and has helped spread awareness, but you can’t publish a book every week.

Similarly, the CAIS “Statement on AI Risk” was excellent for establishing common knowledge, and has greatly helped us in our endeavors. That said, this type of work is hard to replicate, and indeed has not been replicated: neither CAIS nor any other organization has since succeeded in getting all the CEOs of top AI companies to sign a similarly candid statement. [10]

ControlAI takes a different approach, one that straightforwardly allows scaling up workstreams once they’ve been set up. Whenever we have a goal that is too far off to tackle directly, we break it down into the most ambitious possible intermediate goal that we think we can act on. Crucially, we choose intermediate goals whose progress we can measure as hard numbers. In this way, we’re approximating sales funnels, the gold standard for how companies handle sales.

Here’s a couple of examples of how we apply this approach. One of the early challenges we faced was to crystallize our successful lawmaker briefings into something that would accumulate over time and generate momentum. Our answer was to create a campaign statement [11] and ask lawmakers to publicly support it. We’ve already secured 120 such supporters!

This solution satisfies a few important constraints:

  • It moves the world into a state where it’s somewhat easier to achieve our overarching goal of an international ban on ASI: Each public supporter helps by creating common knowledge that lawmakers consider this an urgent issue that merits immediate attention.
  • It gives us a clear, numeric measure of success for this workstream: the number of lawmakers who signed on to our campaign.
  • We could tackle this challenge directly: at the end of each briefing, simply ask lawmakers to publicly support the campaign.
  • Marginal inputs compound over time: each additional lawmaker publicly supporting the campaign helps increase the credibility of the issue, and makes it easier for more lawmakers to take a stance on it in the future.

After a while, we were ready to push toward something more ambitious. So, while still working on growing the number of lawmakers supporting the campaign, we introduced a new metric: the number of public declarations, written or spoken by an individual lawmaker, [12] that explicitly reference AI extinction risk or preventing superintelligence, on the condition that this happened after we personally briefed them. In the UK, this metric is currently sitting at 21.

These metrics are numerical and clearly defined, meaning that even a fresh graduate hire can be pointed at one and told to "make it go up" or to improve conversion rates between one step of the funnel and the next. There’s no danger that the person will fool themselves about how much progress they’re making. [13]
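The funnel framing above can be made concrete. The sketch below is purely illustrative: the stage names and counts are hypothetical (loosely echoing the kinds of figures mentioned in this post), not ControlAI's actual internal data. It shows the kind of conversion-rate view a new hire could be pointed at and told to improve:

```python
# Hypothetical advocacy funnel: each stage narrows toward the goal.
funnel = [
    ("lawmakers contacted", 400),
    ("briefings held", 300),
    ("public campaign supporters", 120),
    ("public statements on extinction risk", 21),
]

def conversion_rates(stages):
    """Fraction of each stage that advances to the next stage."""
    return [
        (a_name, b_name, b_count / a_count)
        for (a_name, a_count), (b_name, b_count) in zip(stages, stages[1:])
    ]

for src, dst, rate in conversion_rates(funnel):
    print(f"{src} -> {dst}: {rate:.0%}")
```

Each rate is a clearly defined number to "make go up", and a weak link between two adjacent stages shows exactly where effort should be focused.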

In fact, most reasonably smart and motivated people, given a reasonable amount of mentorship, will naturally iterate on their approach and eventually achieve good results. This way, we don't need to hit the jackpot on hiring people who possess incredible taste right off the bat.

The best proof for this claim is our success in Canada. In about half a year, with only 1 staff member who had no previous experience in policy, we managed to brief 89 lawmakers and spur multiple hearings in the Canadian Parliament about the risks of AI. These hearings included testimonies from many experts who expressed their concerns about extinction risks:

  • ControlAI’s Andrea Miotti (CEO), Samuel Buteau (Canada Program Officer) and Connor Leahy (US Director)
  • Malo Bourgon (MIRI)
  • Max Tegmark and Anthony Aguirre (FLI)
  • Steven Adler (ex-OpenAI)
  • David Krueger (Evitable)

The fact that our approach is easily scalable is precisely the reason why we can write, in the rest of this post, about how we plan to make productive use of funding much larger than we currently enjoy. It’s also why, in some cases, we are able to make tentative predictions about what kind of success we expect to achieve.

What we’d do with $50 million or more per year

Right now, we believe that we are underfunded compared to what it would take us to have an actual shot at achieving an international ban on superintelligence. Our estimate is that a $50 million yearly budget [14] would give us a chance to succeed, although it would be a long shot. We estimate our chance of success with a $50 million budget, meaning the likelihood we achieve a robust international prohibition on ASI development, to be around 10%.

Here, we break down how we would allocate a budget of $50 million to maximize our chances at achieving an international ban on ASI development. We also show how more funding would further increase our chances of succeeding, giving a few examples of how we would make productive use of budgets as large as $500 million or $1 billion (roughly in line with major campaigns in the US, such as abortion, marijuana policy, and the presidential race).

We’ll cover our plans to use funds for policy advocacy in the US and the rest of the world, public awareness campaigns, policy research, outreach to thought-leaders (such as journalists), grassroots mobilization, and more.

US policy advocacy

Within a $50 million yearly budget, we’d be able to hire ~18 full-time policy advocates dedicated to briefing US members of Congress. In principle, we’d have enough bandwidth to meet every member of Congress within 3 to 6 months, ensuring that they’ve been briefed at least once on extinction risk from superintelligence.

While we are confident that we’d have the capacity for these meetings, it is less clear whether we’d be able to regularly brief members of Congress face-to-face, or whether we’d spend a significant fraction of our time communicating with staffers instead. At the moment, we are cautiously optimistic: in the past 5 months, with ~1 staff member, [15] we’ve managed to personally meet with and brief 18 members of Congress, as well as over 90 Congressional offices.

Additionally, we’d have the capacity to brief offices in the executive branch relevant to national security and international affairs. These agencies are trusted by many other actors to stay on top of security risks, especially drastic ones like extinction risks from superintelligence; it’s essential for large-scale coordination that members of these institutions have a good grasp on the issue.

A budget of $50 million would also allow us to hire a small team of ~6 staff members focused on performing outreach to state legislators in a small number of high-priority states.

The bread and butter of our work is to ensure that US decision-makers are properly informed about and understand:

  • Concepts like superintelligence, recursive self-improvement, compute, etc;
  • That superintelligence poses an extinction risk;
  • That this can be addressed by an international agreement prohibiting ASI, and how such an agreement could be designed such that it is actually enforced.

We expect that, to the degree that we succeed in informing decision-makers about these matters, we’ll be able to leverage this into measurable outcomes such as:

  • Politicians make public statements about superintelligence and the extinction risks it poses.
  • Politicians make public statements about the need for an international prohibition on superintelligence development.
  • Hearings are held in Congress on the above topics.
  • The US takes steps toward negotiating an international prohibition on superintelligence with other countries.

Within a $500 million budget, we would not only double or triple the number of full-time staff dedicated to US policy advocacy, but we’d also be able to attract the best talent, and hire policy advocates with very strong pre-existing networks.

Policy advocacy in the rest of the world

In the UK, we’ve already moved the national conversation on superintelligence forward. In little more than a year, we’ve gathered 110 supporters on our campaign statement, and catalyzed two debates at the House of Lords on superintelligence and extinction risk.

At a yearly budget of $50 million, we could afford to more than triple our efforts in the UK. Now that we’ve managed to get some attention, we’ll put more focus on the following:

  • Get government to discuss bills, amendments and actions the UK could take to champion the establishment of an international prohibition on superintelligence; [16]
  • Executive branch outreach.

A coalition of countries sufficiently powerful to achieve a ban on ASI will likely need multiple powerful countries to participate. To maximize the probability that this happens, we plan to prioritize the G7 in our policy advocacy efforts. This is because the G7 includes all of the most powerful countries that we’re confident can be influenced democratically.

Within a budget of $50 million, we’d be able to match our current UK efforts in all other G7 countries and in the EU’s institutions. This means we’d likely be able to replicate our UK successes in most of these places, even accounting for bad luck or for them being slightly more difficult. [17]

With roughly an additional $5 million in our budget (on top of the previous $50 million), we’d be able to dedicate at least 1 policy advocate (in some cases 2) to many other countries in the rest of the world. For example, we could maintain a presence in almost all G20 countries.

We don’t know in advance which countries will respond well to our efforts, so we think it would be useful to spread out and take as many chances as possible. Our previous experience shows that it’s at least possible to get good results with only 1 staff member in some G7 countries.

In Canada, our only local staff member held more meetings with representatives during February than any corporate lobbyist or advocate. It seems probable that we can replicate our results in Canada in at least some G20 countries, where the competition for the attention of decision-makers is less stiff.

Public awareness

Our theory of change hinges not only on key decision-makers understanding the issue, but also on the public doing so. Our key messages to the public are: [18]

  • AI poses an extinction risk to humanity.
  • To address this, we should prohibit the development of superintelligent AI.

We believe our key messages are straightforward: you don’t need to be a genius or to be deeply familiar with AI to understand them. [19] The main bottleneck is making the public aware of the issue in the first place; after that, it’s getting them to take action about it.

We roughly expect that the average person will need to see each of our key messages 7 to 10 times in order to remember them, at the bare minimum. [20] That said, we expect that even after the same person sees a message dozens of times, the marginal returns on delivering the same message to this same person once more have still not been saturated. For example, we expect each new view will make the person slightly more likely to bring up the issue spontaneously in conversation, or slightly more likely to change their vote based on this issue. [21]

Within a budget of $50 million, we expect that we can achieve on the order of 2 billion ad impressions in the US, [22] an order-of-magnitude increase over our current ~200M. [23]

Various sources suggest that the average YouTube CPM is roughly $9, ranging from approximately $3 to $23 depending on the ad and campaign. Using this as a reference, and assuming we allocate $16 million to raw ad spend, we’d get somewhere between 700 million and 5.3 billion impressions. This assumes all of our ad spend goes to a single platform; we could easily improve on it by spreading our spend across platforms. For context, a $16 million per year ad budget is comparable to the ad spend of companies like Shake Shack, but still two to three orders of magnitude below presidential campaigns or Coca-Cola’s yearly ad spend.
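As a sanity check, the CPM arithmetic behind these figures can be sketched as follows (a minimal illustration; the $16 million spend and the $3–$23 CPM range are the assumptions cited above):

```python
# CPM = cost per 1,000 impressions, so impressions = spend / CPM * 1000.
def impressions(ad_spend_usd: float, cpm_usd: float) -> float:
    """Impressions bought at a given CPM (cost per 1,000 impressions)."""
    return ad_spend_usd / cpm_usd * 1000

spend = 16_000_000  # assumed $16M raw ad spend

print(f"average CPM $9:  {impressions(spend, 9) / 1e9:.2f}B impressions")   # ~1.78B
print(f"worst case $23:  {impressions(spend, 23) / 1e6:.0f}M impressions")  # ~696M
print(f"best case  $3:   {impressions(spend, 3) / 1e9:.2f}B impressions")   # ~5.33B
```

The worst-case and best-case outputs reproduce the "between 700 million and 5.3 billion impressions" range quoted above.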

If this were spread uniformly across the US population, every US adult would see our ads at least ~3 times. [24] More realistically, if we targeted a narrower segment of the US population, we could reach 10% of US adults ~30 times each, or 5% of US adults ~60 times each. In other words, it becomes plausible that a sizable portion of the US population would remember our key messages: they would be aware that AI poses an extinction risk, and they would remember the main recommended fix, prohibiting the development of superintelligent AI.
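The frequency estimates above reduce to simple division. Here is a sketch, using the 800 million impression figure the "~3 times" estimate is based on (see footnote 24); the ~260 million US adult population is our own rough assumption for illustration:

```python
US_ADULTS = 260_000_000  # rough US adult population (assumption)

def frequency(total_impressions: float, audience_fraction: float) -> float:
    """Average number of times each person in the targeted segment sees an ad."""
    return total_impressions / (US_ADULTS * audience_fraction)

views = 800_000_000  # impression figure from footnote 24

print(f"all adults: ~{frequency(views, 1.0):.0f} views each")   # ~3
print(f"top 10%:    ~{frequency(views, 0.10):.0f} views each")  # ~31
print(f"top 5%:     ~{frequency(views, 0.05):.0f} views each")  # ~62
```

Narrowing the targeted segment trades reach for repetition: the same impression budget buys roughly 10x the exposures per person when aimed at 10% of the population.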

This level of awareness would be a great step forward, but we would not stop there. In addition to raising awareness, we’d also aim to help people take action that moves the world toward an international ban on superintelligence.

So far, we think that the most useful CTA (call to action) is to ask people to email or call their lawmakers. Using this CTA allows us to build a base of supporters who are motivated enough to take this kind of action, who we can call upon again in the future. We have already built the online campaigning infrastructure for this, and our 180k email subscribers have already sent over 200k messages to their lawmakers about ASI.

At this $50 million budget, we estimate that we could grow this base of supporters to 2 million citizens within 1 year. When we email this type of CTA, we currently get an action rate of around 2%. We think we can safely assume that this action rate will not degrade by a whole order of magnitude at this scale.
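To make the scale concrete, here is the arithmetic implied by these assumptions (the 2 million supporter base and ~2% per-email action rate are the figures stated above):

```python
supporters = 2_000_000   # projected supporter base at the $50M budget
action_rate = 0.02       # current ~2% action rate per CTA email

actions_per_email = supporters * action_rate
print(f"~{actions_per_email:,.0f} emails/calls to lawmakers per campaign email")
# Even if the action rate degrades several-fold at this scale, each send
# would still trigger tens of thousands of constituent contacts.
```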

Given these assumptions, we predict that if we target some carefully selected subset of US states, this would produce enough constituent pressure to get on the radar of key decision-makers and their staff purely through constituents emailing and calling lawmakers. For example, if we target swing states, we might be able to get electoral campaigns to at least be aware of our issue.

Public awareness efforts can scale massively before saturating. There are straightforward, non-innovative ways to make productive use of budgets as large as $500 million or $1 billion: large-scale ad campaigns routinely do so. Coca-Cola spent $5.15 billion on advertising in 2024, and Trump’s 2024 presidential campaign spent more than $425 million, or $1.4 billion including outside groups. This is also the scale at which, if we wanted to, we could spend $8 million on a Super Bowl ad about extinction risk from superintelligent AI! [25]

A total budget of $500 million to $1 billion would allow us to scale our ad spend massively. At this point, even with extremely pessimistic assumptions, [26] we could reach each US citizen at least a dozen times. Alternatively, we could focus on the 10% most engaged segment of the US population, reaching each individual at least 100 times.

As a lower bound, we are confident this is enough to make sure that every citizen in the US is at least somewhat aware of the issue. More importantly, we suspect that at this scale we could push the issue to the forefront of the public’s attention, and make it into one of the main topics in the national conversation.

We acknowledge it’s really hard to predict the effects of a campaign at this scale [27], but we think that it can help to anchor on other campaigns of similar scale in the US: abortion, marijuana policy, and the presidential race itself.

As we argued in the section An asymmetric war, we see those campaigns as mostly zero-sum games, in which both sides must burn as many resources as possible to stay competitive. An AI extinction risk awareness campaign, by contrast, is a much more positive-sum game, so if we receive comparable funding, we feel confident in our chances.

One last point about ad spending: in order to run an ad campaign, we need not only to buy ad space, but we also need to expand our marketing team so that it has sufficient capacity to optimize the campaign.

Within a budget of $50 million, we could afford to dedicate ~6 people to this, offering salaries roughly between $100k and $200k. This addresses basic needs, but it does not provide an appropriate amount of bandwidth for the task, nor does it allow us to attract and retain the best talent.

Running an effective ad campaign is not a fire-and-forget operation. We’d need to continuously measure results, A/B test, experiment, brainstorm ads and concepts, research trends and audience behaviors, and even come up with novel metrics and testing methodologies.

All of this information needs to be collected, analyzed and fed into the next round of iteration. The rounds of iteration themselves need to be very fast if we want to improve in a relevant amount of time. Whereas less ambitious marketing teams may take ~3 months to go through an iteration cycle, we’d have to do it in ~2 weeks.

To run this kind of operation, we would benefit immensely from hiring the most talented people, who can not only follow existing playbooks, but also innovate. These people are in extremely high demand, and we’re competing for them against the private sector.

Within a budget of $500 million, we could afford to dedicate ~20 people to this, offering salaries roughly between $200k and $400k. This would allow us to attract top talent and compete with the private sector. [28]

Grassroots mobilization

We already have a base of motivated supporters. 180k people are subscribed to our mailing list. 30k of our supporters contacted their lawmakers about extinction risk from ASI, and ~2000 of our supporters are willing to commit 5 minutes per week to regularly take small actions to help with the issue. Dozens have shown up at our pilot in-person events.

With more funding, we think we can turn this into a significant grassroots movement. We currently lack the capacity to properly organize and mobilize this community. We believe that we’d have sufficient capacity for this at a $50 million overall budget.

Concretely, this work would consist of things like:

  • Vetting local leaders, coaching them and helping them with their work.
  • Organizing or providing funding for local events.
  • Helping with initial set up of groups, legal entities, basic websites etc.
  • Building and providing services like Microcommit and tools such as the “Contact your lawmakers” tool on our campaign website.
  • Providing educational materials like tutorials and scripts for contacting one’s lawmakers.

Policy work

As part of our work in policy advocacy, it is often useful to be able to show policymakers a concrete policy proposal. These proposals can take various forms: legal definitions of superintelligence, high-level proposals for an international agreement on prohibiting ASI, national bills implementing a country’s obligations in an international agreement.

These proposals are not meant to be the exact, definitive version of the law that will eventually be implemented. It is understood that things will change as time passes, more parties weigh in, and negotiations unfold.

That said, it helps in many ways to have initial, concrete proposals. It helps people to publicly discuss, red-team, and refine the proposals. But also, it helps to show policymakers a proof-of-concept that concrete measures can be taken to prevent extinction risk from superintelligence.

The more countries we reach, the more complicated this work becomes. The legal landscape differs significantly between countries: they have different legal traditions, processes, institutions, constitutions, limits on power of governmental bodies, etc.

It takes a team of policy researchers, and the help of parliamentary lawyers, to develop and propose such policy proposals. We estimate that we’d have sufficient capacity for this work at around a $50 million total yearly budget.

Thought-leader advocacy

Most people rely on trusted voices, across the political spectrum, to help them navigate complex issues rather than trying to form their view from scratch on every single topic. This is a normal and healthy part of how democracies function: just like representative democracy exists because we don’t expect every citizen to participate directly in the full political process, we don’t expect everyone to independently decide to pay attention to such a highly complex matter as extinction risk from ASI.

Instead, people look to figures like journalists, academics and public intellectuals to help them understand which issues deserve their attention. One of our key workflows is outreach to these kinds of thought-leaders. At the moment, this mostly includes journalists, and sometimes content creators. This workstream has so far resulted in 22 media publications on risk from superintelligent AI including in TIME and The Guardian, and in 14 collaborations (a mix of paid and free) with content creators including popular science communicator Hank Green, Rational Animations, and more.

With more funding, we could not only scale up these workstreams, but also extend this outreach effort to include NGOs other than those who focus on AI, academics, religious leaders, authors and other public intellectuals, CEOs of companies outside of tech, leaders of local communities, and others.

If we want our society to develop a deep awareness of the extinction risk posed by ASI, we need to help these people understand the issue. At a $50 million total budget, we’d have enough bandwidth for a thought-leader outreach effort focused on the lowest-hanging fruit. In practice, this likely means having a single generalist team spread across every type of thought-leader, and covering only the Anglosphere.

At a total budget of $500 million, we could afford to build strong dedicated teams, each focused on one of the most important thought-leader communities. At the same time, we could establish a presence in other major cultural regions outside the Anglosphere.

Attracting and retaining the best talent

Many in our organization are forsaking significant increases in compensation they could command in the private sector, purely because they are deeply committed to our mission. As we scale, it will become increasingly difficult to find talented people who are willing to take this kind of pay cut. This is especially true if we scale aggressively.

To attract the caliber of talent that a problem of this importance deserves, we need to offer salaries that are as competitive as possible with the private sector.

At a yearly budget of $50 million, we’d be able to slightly improve our compensation, though most of the increase would be eaten by scaling the number of staff rather than increasing pay. As a rough estimate, we could probably offer between $100k and $200k to people in the public awareness team (comparable to sales in the private sector), and ~$350k to principal staff.

At $500 million, we think we could be truly competitive. While we would likely still be unable to match the salaries offered by AI corporations to staff who take part in their lobbying and marketing operations, we could significantly reduce the gap.

Conclusion

We want to be upfront: we don't know for sure if this will work. An international ban on ASI is an extraordinarily ambitious goal. But we believe that the structure of the problem gives us a fighting chance: approximately no one wants to play a game that risks wiping out humanity, regardless of the prize.

In 2025, with a team of fewer than 15 people, we’ve built a coalition of over 110 UK lawmakers to support our campaign, with 1 in 2 lawmakers having supported our campaign after we briefed them. On top of this, we’ve catalyzed parliamentary debates on superintelligence and extinction risk.

In the US, where competition for lawmakers' attention is the fiercest, we’ve personally met with 18 members of Congress with only a tiny number of staff on the ground. On the public awareness side, over 30k people have used our tools to send over 200k messages to their lawmakers about extinction risk from superintelligence, most of them in the US.

This wasn't a fluke of exceptional talent or lucky connections; we did it with remarkably junior staff, in little more than a year. It was the result of a straightforward, scalable process, and of building solid foundations that enable us to scale to meet the challenge. What stands between us and a real fighting chance is funding commensurate with the problem.

If you are a major donor or a philanthropic institution, please get in touch at [email protected]. We’d be glad to walk you through our theory of change in more detail and discuss how additional funding would be deployed.

If you know a major donor or someone at a philanthropic institution, please introduce us. A warm introduction from someone they trust goes much further than a cold email from us. You can loop us in at the same address.

If you're an individual donor who is considering a gift of $100k or more, please reach out at the same address. Please only consider doing so if this wouldn't significantly impact your financial situation. We don't want anyone to overextend themselves on our behalf, no matter how much they care about the issue. We are a 501(c)(4) in the US and a nonprofit (not a registered charity) in the UK, so your donations are not tax deductible.

We’re currently not set up to receive smaller donations. If you still want to contribute, you can check our careers page. If you see a role you could fill, please apply. If you know someone who'd be a good fit, send them our way.

  1. The probabilities are produced mostly by gut feeling, but the major barriers that were considered are the following. 1) We are able to maintain a good internal culture as we scale extremely aggressively. 2) The lower bounds of our gears-level estimates mentioned in the second half of this post (e.g. ad impressions per dollar) hold. 3) We are able to validate our approach at scales of ~$50 million a year, and are able to continue raising at this scale if getting the agreement in place takes longer than a year. 4) The issue becomes a top 10 salient issue in the US and another 2~3 major countries. 5) The behavior of governments championing the ban is sufficiently connected to the right insights about extinction risk and ASI, requiring at the very least that public discourse about the ASI ban does not get distracted or confused in a way that makes the resulting actions ineffective. 6) This leads to an international ban on ASI in which major powers, including the US and China, conclude that participation serves their national interests and try to enforce globally. Alternatively, if China or other countries do not join, the coalition of countries behind the ASI ban is powerful enough to be able to deter non-participating countries and any rogue actors from developing ASI. ↩︎

  2. e.g. US Congress has the “power of the purse”, parliamentary systems can hold “votes of no confidence”. ↩︎

  3. Between our founding in October 2023 and mid 2024, we ran 3 campaigns in rapid succession. One of these was a campaign against deepfakes. This was a sincere effort: we do believe that deepfakes are a problem that should be addressed with legislation, and we’re proud of our achievements as part of our campaign. That said, after refining our thinking and developing the ideas we’re espousing on this post, we’ve updated towards focusing exclusively on extinction risk from ASI. This is what we’ve been doing since the end of 2024. ↩︎

  4. Consider the environmentalist movement as a cautionary example. Environmental efforts have generally failed to achieve their stated goals (e.g. reducing emissions, reversing climate change). Richard Ngo argues that they’ve caused serious collateral harms. We think this is partly because of their lack of focus. Rather than concentrating on a single core concern, environmental campaigns rummage around for anyone who, for any reason, feels good vibes toward the idea of the environment. As a result, the movement struggles to achieve good policies despite being enormously salient. Because of its lack of focus, it is interlinked with anti-capitalist groups, and so it tends to oppose interventions that would actually help with climate change, such as nuclear energy, as well as carbon capture and market-based solutions in general. Relevant posts on LessWrong: @habryka’s “Do not conquer what you cannot defend”, @Gabriel Alfour‘s “How to think about enemies: the example of Greenpeace”. ↩︎

  5. To clarify: this doesn’t mean that everyone thinks the only way to avoid extinction is to not build ASI. Some do, while others have complicated ideas about how ASI can be built safely. The point is that none of those specific complex ideas benefit from a broad expert consensus. The only thing that most of us can agree on is that it won’t kill us if we don’t build it. ↩︎

  6. There have been other statements, such as this great one from FLI, but none signed by *both* top AI scientists and CEOs of top AI companies. ↩︎

  7. Sources: abortion was roughly $400 million in 2024, marijuana legalization was roughly $185 million in 2024, Prop 22 was roughly $220 million. ↩︎

  8. See Annex 2 of our paper “How middle powers may prevent the development of ASI”. While the paper focuses on the perspective of middle powers, this section’s analysis extends to superpowers. ↩︎

  9. We strongly believe in the principles we follow: honesty, openness, and democracy. Of course, we do think that our approach to averting extinction risks from ASI is the best; we wouldn’t pursue it if we didn’t think so. At a $500M budget level, we’d love to fund organizations that pursue different approaches, as long as they respect our basic principles. If we had that level of funding, we would seek to ensure that there are other organizations pursuing a candid approach to communication about ASI, and organizations that directly tackle the need for strong international coordination. ↩︎

  10. Notably, a statement like this one can generate a temporary spike of media coverage, but does not generate sustained attention by itself. Statements like this one need a sustained campaign (like the one we’re running) in order to receive sustained attention. ↩︎

  11. The statement reads: “Nobel Prize winners, AI scientists, and CEOs of leading AI companies have stated that mitigating the risk of extinction from AI should be a global priority. Specialised AIs - such as those advancing science and medicine - boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security. The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.” ↩︎

  12. Examples of this: a lawmaker giving a speech in parliament, writing an op-ed, or speaking in an interview to a major media outlet. ↩︎

  13. Importantly, our metrics are strictly focused on AI extinction risk. This reduces the risk that the person working on them, or the organization as a whole, will fool themselves into pursuing issues other than preventing extinction risk from superintelligent AI. A “lawmaker public declaration” only counts if it covers extinction risk specifically. If people at ControlAI spend time trying to push topics such as “job loss”, “AI ethics” or “autonomous weapons”, we consider this a failure. This is how we fight The Spectre, and stay laser focused on addressing extinction risk from superintelligence. ↩︎

  14. This should be considered a very rough estimate; it could be anywhere from $30M to $80M. ↩︎

  15. 1 member for most of this period; the 2nd member joined in the past month. ↩︎

  16. We’ve already fostered two debates about prohibiting ASI, and helped submit one amendment recognizing ASI and putting in place kill-switches for use in case of AI emergencies. To our knowledge, we are the first organization to successfully prompt a debate, in the parliament of a major country, focused specifically on prohibiting superintelligence. ↩︎

  17. Consider that replicating a success should be much easier than doing it the first time. By design, our results are public, and so, produce common knowledge. Now that 100+ lawmakers support our campaign in the UK, it is easier for other lawmakers to take a similar stance, including in other countries. ↩︎

  18. To a lesser degree, we would like people to remember our organization as a place where they can find trustworthy information on the issue and what they can do to help solve it. ↩︎

  19. The vast majority of people will not feel the need to fully understand the technical and geopolitical details in order to buy into the concern. The important part is that most people can intuitively understand why and how ASI can cause human extinction, and are happy to defer to experts about the details. ↩︎

  20. This is the most common rule of thumb in marketing, and is backed up by some academic research as well, e.g. see Advertising Repetition: A Meta-Analysis on Effective Frequency in Advertising. ↩︎

  21. Unlike the previous one, this statement is not backed by academic research. While most academic research focuses on marketing aimed at selling products and services, our goals present quite a different challenge. There are two main differences that make us expect to keep getting returns after even hundreds of exposures. 1) Our messages are somewhat novel and complex to the audience. This complexity will have to be accounted for in some way: either the message is presented in a complex way that takes more exposures to remember, or the message is broken down into many building blocks, each of which needs to be shown many times. 2) The success bar is somewhat higher: we do benefit from people responding to CTAs similar in scope to “buying a product”, but we also benefit from deeper engagement (see the section on “Grassroots mobilization"), we benefit from people spontaneously bringing up the topic in conversations, which happens more if we create common knowledge that the topic exists. ↩︎

  22. This section assumes that we will allocate 60% of our ad spend to the US. We expect it will be quite a bit easier to yield good results in other countries, mostly due to lower cost per impression. For example, if we put the remaining 40% in 3 G7 countries, we expect to roughly be able to replicate the same success as in the US across those 3 countries. ↩︎

  23. Including both organic and paid reach. ↩︎

  24. This corresponds to 800 million total impressions. ↩︎

  25. Though it’s not clear to us at the moment if this would be a good use of money. ↩︎

  26. In this paragraph, we use our worst case assumption that scaling ad-spend by x30 multiplies impressions by x4. We expect it’s much more likely that scaling x30 will yield x10 to x15 impressions. ↩︎

  27. Simpler models and extrapolations that we think we can use at a $50 million budget will break at this scale. There are strong reasons to deviate from these, both in pessimistic and optimistic directions. At this scale, we’ve probably run out of people who can be mobilized solely through ads. At the same time, network effects come into play, where people hear about the issue from others, and they start to see it as a “normal” part of the political discourse. It seems to us that trying to model the net effect ahead of time would be a fool’s errand. ↩︎

  28. For reference, here’s a job post by Anthropic for a marketing role, which they advertise as paying $255k to $320k. ↩︎




Monday AI Radar #22

2026-04-21 23:53:07

How are we doing on solving the alignment problem? Harry Law begins this week’s newsletter with an explanation of alignment-by-default: the idea that because LLMs are trained on an immense body of human text, they are predisposed to understand and pursue human values. But predisposition isn’t enough: Ryan Greenblatt argues that current models show a concerning pattern of mundane misalignment that could become catastrophic if it isn’t fixed.

And lest we spend all our time worrying about how to ensure that AI does what we want, Robert Long explores the ethics of whether we should create intelligent beings that want to serve us. Alignment is far from solved, but these challenges are concrete—and solvable—in a way that few people expected five or ten years ago.

Top pick

Alignment by default?

The orthogonality thesis states that superintelligence is compatible with a vast range of possible goals. In traditional AI safety thinking, that presents a serious challenge for alignment. How do you ensure your AI is aligned with human values if they represent just a tiny subset of the possible goals it might learn during training?

There is a strong case to be made that the orthogonality thesis is misleading when it comes to LLMs. As Harry Law explains in this week’s top pick:

Alignment-by-default says, for the class of systems defined by autoregressive language modeling over human-generated text, the training process generates a normative prior such that the default expectation should be partial alignment.

The idea is that because LLM base models are pre-trained on an immense amount of human text, they are not blank slates that need to be taught human values from scratch. Pre-training gives them a deep understanding of those values and a “normative prior” that predisposes them to act accordingly.

In this view, post-training doesn’t have to teach human values, but merely needs to steer the model within a set of values to which it is already predisposed. Alignment by default doesn’t guarantee that LLMs will be perfectly aligned, but implies that they will default to partial alignment and will be easier to fully align than has been traditionally supposed.

Alignment is a hard problem that is far from solved, and alignment-by-default doesn’t change that. But the nature of LLMs means that some parts of alignment are much easier than we once expected.

My writing

Don’t cut yourself on the jagged frontier Some quick thoughts about the dangers of well-aligned superintelligence and the relevance of the jagged frontier.

Who I follow An opinionated take on how best to keep up with the most important developments in AI.

Inkhaven

I’m spending April at the Inkhaven Writing Residency. It’s a fantastic program that I highly recommend if you’re interested in skilling up as a blogger. Curious about how it works? Come to the Inkhaven Fair on Saturday April 25 (I’ll be there and would love to say hi).

Mythos

Mythos and national power

ChinaTalk explores what Mythos means for national security. This is the best piece I’ve seen for understanding the implications of Mythos’ cybersecurity capabilities. Mythos is alarmingly capable and the security landscape is going to be challenging for at least the next year or two. But how bad it gets will depend as much on mundane details like rapid deployment of patches as it will on raw technical capabilities.

Looking beyond cyber, Ben Buchanan is unfortunately correct about what comes next:

I think we are very fortunate that cyber is coming first. I think we should use cyber as a lesson for what is coming next at the intersection of AI and other fields. Bio will not be far behind. At some point we will have a Mythos moment for bio.

Should it serve as a lesson? Yes.

Will it serve as a lesson? The post-covid dismantling of public health doesn’t fill me with confidence.

UK AISI evaluates Mythos’ cyber capabilities

UK AISI’s evaluation of Mythos finds that it is not only able to find subtle vulnerabilities, but also represents a major step forward in autonomously conducting complete attacks consisting of numerous discrete steps.

Claude Mythos #3: capabilities and additions

Part 3 of Zvi’s Mythos coverage focuses on capabilities. If you don’t have time to read the whole thing, the conclusion covers the essentials.

Benchmarks and Forecasts

MirrorCode

Epoch AI presents MirrorCode, a new benchmark that tests the ability to perform long but well-specified coding tasks. It’s a nicely designed evaluation: the AI is tasked with writing a functional equivalent of a command line tool and given access to the tool, documentation, and a set of test cases, but not the source code itself.

The task is well-specified and easy to verify, making it an ideal task for an LLM. Epoch finds a steady progression in Opus’ capability: 4.0 succeeded at a task that required 650 lines of code (LoC), 4.5 succeeded at a 1,200 LoC task, and 4.6 succeeded at a 7,700 LoC task. Epoch estimates that a human coder would have needed several weeks to succeed at the same task.

This aligns well with Ryan Greenblatt’s recent piece arguing that AI can now accomplish difficult tasks that would take experts months or years to complete if the tasks are sufficiently easy to verify. An obvious corollary is that there is immense alpha in making more tasks highly verifiable.

The tolerance gap

Minh Pham coins “the Tolerance Gap” as a tool for thinking about how AI can be usefully applied to different types of tasks. High-tolerance tasks (vibe coding) can tolerate significant errors in exchange for high productivity, while low-tolerance tasks (accounting) cannot. It’s a great term for a useful concept.

This advice seems spot-on, and a good example of the concept in action:

For founders: pick a side of the Gap and commit. A product that tries to straddle both regimes usually fails both. The winners on the high-tolerance side are shipping agents, raising autonomy, racing on horizon length. The winners on the low-tolerance side are (quietly) building verification layers, domain-specific guardrails, and human-in-the-loop tooling that treats the model as one input among many.

Open-world evaluations for measuring frontier AI capabilities

Sayash Kapoor and Arvind Narayanan have a comprehensive paper on “open-world evaluations: long-horizon tasks in real-world environments, where success can’t be neatly specified or automatically graded.” They review recent examples and present a framework for thinking about how open-world evaluations work, what their limitations are, and how to best make use of them.

These types of evaluations are harder to create and don’t lend themselves well to easy comparisons between models. But they are perhaps the best way to assess the full capabilities of frontier models.

Alignment and interpretability

Current AIs seem pretty misaligned to me

Ryan Greenblatt is concerned about the state of alignment:

Many people—especially AI company employees—believe current AI systems are well-aligned in the sense of genuinely trying to do what they're supposed to do (e.g., following their spec or constitution, obeying a reasonable interpretation of instructions). I disagree.

Ryan argues that although we see little evidence of malicious misbehavior, current models show a clear pattern that might be described as a combination of laziness, overeagerness, and misrepresenting how successful their work has been. While this is currently mostly annoying,

I still think this misalignment is indicative of serious problems and would ultimately be existentially catastrophic if not solved.

It’s a thoughtful piece and I’m updating my beliefs based on it. I’m not convinced, however, that this type of misalignment would be catastrophic: there are plausible scenarios where that might be the case, but I’m not sure that’s the default path. He notes that Alex Mallen will soon post more about this—I’m excited to read that.

I’m also more optimistic that this class of misalignment will get fixed: the associated problems seem highly legible, and the incentives to fix them seem strong.

Agents

Managing context in Claude Code

If you’re just running a Claude Code session forever and letting it auto-compact when the context window gets full, you’re leaving a ton of performance on the table. Anthropic’s Thariq has a detailed guide to the tools and strategies you should be using to manage your context.

Math

The AI revolution in math has arrived

Quanta takes an in-depth look at AI and advanced math. Math is an area where AI capabilities are advancing rapidly: although it isn’t anywhere close to being able to replace mathematicians, it’s increasingly able to provide substantive assistance with solving hard problems:

Gómez-Serrano noted that any one of their results might have been obtained by an expert in a given area who worked at it for a few months. But without being experts in many of these fields, “we were able to obtain comparable results in the span of a day or two,” he said.

What it looks like to do math with AI

One of my fellow residents at Inkhaven is Benjamin Grayzel, who submitted the first AI-ideated resolution to Erdős problem #659. He’s written an excellent account of what it looks like to do math with AI.

AI psychology

Should we care about AI welfare?

Conspicuous Cognition talks with Robert Long about AI consciousness and welfare. They discuss how Claude perceives itself, whether it’s ethical to create a being that genuinely wants to serve others, and how AI welfare and AI safety might be related. Rob’s idea that consciousness and moral status might be decoupled seems important but hard to pin down—a reflection of the complexity that surrounds any discussion of consciousness.

Open models

My bets on open models, mid-2026

Nathan Lambert shares 13 beliefs about open models in mid-2026. This feels like a transition time for open models, where the current business model isn’t holding up but it isn’t yet clear what replaces it.

This is a complex picture, where the long-term trajectory is more of an economics question rather than an ability one.

Strategy and politics

Dwarkesh interviews Jensen Huang

Dwarkesh recently interviewed Jensen Huang. It’s worth listening to if you’re deeply interested in the details of the GPU business, probably not otherwise. The part that upset the Twitterati is the discussion about whether we should allow NVIDIA to sell high end chips to China. Zvi’s assessment is exactly right:

What matters is Nvidia selling chips to China. That’s it. Nothing else matters. That keeps Nvidia and CUDA dominant, and what’s good for Nvidia is good for America, because if anything is built on his chips then that’s ‘good news’ and we win, whereas if it’s built on someone else’s chips, then that is ‘bad news’ and we lose. This does not actually make any sense whatsoever.

What’s confusing here is that Jensen is determined to sell advanced chips to China, even though he would have no trouble selling those same chips domestically. I’m unable to come up with a charitable explanation.

Academia

Accelerating academic research with agentic AI

Andy Hall runs a new lab focused on using agentic AI for academic research. As part of a series by Roots of Progress Institute, he discusses what his team has learned so far:

Any one of these projects would have been extremely difficult to carry out a year ago, requiring intensive focus over many months. Completing multiple ambitious public-impact projects in a two-month period would have been completely unthinkable.

The challenge will be to ensure, as Andy says, that we generate 100x as much knowledge, not 100x as many papers.

Briefly

Resources for upskilling in AI policy

80,000 Hours has a list of resources for people who want to get started in AI policy.

Something frivolous

Andon market

You know what’s more fun than letting an AI run a vending machine? Letting it run a physical store. 
