Published on February 13, 2026 11:36 PM GMT
David Oks published a well-written essay yesterday arguing that the current panic about AI job displacement is overblown. I agree with a few of his premises (and it’s nice to see that we’re both fans of Lars Tunbjörk), but I disagree with most of them and arrive at very different conclusions. Other economists hold views similar to David’s, so I thought it would be worth laying out my perspective on economics and labor, and why I choose to research gradual disempowerment risks.
My main claim is simple: it is possible for Oks to be right about comparative advantage and bottlenecks while still being wrong that "ordinary people don't have to worry." A labor market can remain "employed" and still become structurally worse for workers through wage pressure, pipeline collapse, and surplus capture by capital.
I'm writing this because I keep seeing the same argumentative move in AI-econ discourse: a theoretically correct statement about production gets used to carry an empirical prediction about broad welfare. I care less about the binary question of "will jobs exist?" and more about the questions that determine whether this transition is benign: how many jobs, at what pay, with what bargaining power, and who owns the systems generating the surplus.
Oks' points are as follows:
1: Comparative Advantage Preserves Human Labor
“...Labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process."
Oks brings the Ricardian argument here, and I think it's directionally correct as a description of many workflows today. We are in a cyborg era in which humans plus AI often outperform AI alone, especially on problems with unclear objectives or heavy context. But I don't think the comparative advantage framing does the work Oks wants it to do, because it leaves out the variables that determine whether workers are "fine."
First, comparative advantage tells you that some human labor will remain valuable in some configuration, but it tells you nothing about wages, the number of jobs, or the distribution of gains. You can have comparative advantage and still have massive displacement, wage collapse, and concentration of returns to capital. A world where humans retain “comparative advantage” in a handful of residual tasks at a fraction of current wages is technically consistent with Oks’ framework, but it is obviously worth worrying about and certainly not fine.
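To make this concrete, here is a minimal toy calculation (the numbers are entirely invented, not drawn from Oks’ essay or any dataset): a human can keep a comparative advantage in a task while the wage an employer will pay for that task is capped by what the same output would cost from AI, and that cap falls as compute gets cheaper.

```python
# Toy illustration with made-up numbers: comparative advantage can persist
# while the wage a human can command collapses with the price of AI compute.

def wage_ceiling(ai_cost_per_hour, ai_units_per_hour, human_units_per_hour):
    """Maximum hourly wage an employer would pay a human for a task,
    given that they could buy the same output from AI instead."""
    ai_cost_per_unit = ai_cost_per_hour / ai_units_per_hour
    return ai_cost_per_unit * human_units_per_hour

# Human: 1 unit/hour of task A. AI: 20 units/hour of task A.
# The human keeps a *comparative* advantage in task A throughout (their
# opportunity cost relative to other tasks never changes), but the wage
# they can command tracks the AI's falling hourly cost.
for ai_cost in [16.0, 4.0, 1.0, 0.25]:
    ceiling = wage_ceiling(ai_cost, ai_units_per_hour=20, human_units_per_hour=1)
    print(f"AI cost ${ai_cost:>5.2f}/hr -> human wage ceiling ${ceiling:.3f}/hr")
```

The human never becomes unemployable in this toy; their pay simply tracks the AI’s hourly cost downward, which is exactly the variable the comparative advantage framing is silent about.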
Another issue with the comparative advantage framing (and, I think, a problem with most AI/econ forecasting) is that it implicitly assumes most laborers have the kind of tacit, high-context strategic knowledge that complements AI. The continuation of the “cyborg era” presupposes that laborers have something irreplaceable to contribute (judgement, institutional context, creative direction). I agree with this for some jobs, but it’s not enough to stop me worrying about job loss.
Under capitalism, firms are rational cost-minimizers. Barring policy intervention, they will route production through whatever combination of inputs delivers the most output per dollar. Oks and David Graeber’s “Bullshit Jobs” thesis agree on the empirical point that organizations are riddled with inefficiency, and that many roles exist not because they’re maximally productive but because of social signaling and coordination failures. Oks treats this inefficiency as a buffer that protects workers. But if a significant share of existing roles involve codifiable, routine cognitive tasks, then they’re not protected by comparative advantage at all. They’re protected only by organizational friction that has yet to be overcome, and which I believe will erode (as I’ll discuss later).
Oks links some evidence suggesting protection from displacement, and I’ll admit that the counter-evidence so far is inconclusive and contaminated by many other economic factors. Still, I’ll bring up Erik Brynjolfsson’s “Canaries in the Coal Mine” study, because I think it exemplifies the trend we’ll continue seeing over the next 2-3+ years before AGI.
Brynjolfsson analyzed millions of ADP payroll records and found a 13% relative decline in employment for early-career workers (ages 22-25) in AI-exposed occupations since late 2022. For young software developers specifically, employment fell almost 20% from its 2022 peak. Meanwhile, employment for experienced workers in the same occupations held steady or grew.
So what’s the mechanism at play? AI replaces codified knowledge – the kind of learning you get from classrooms or textbooks – but struggles with tacit knowledge, the experiential judgement that accumulates over years on the job. This is why seniors are spared and juniors are not. But Oks’ thesis treats this as reassurance: see, humans with deep knowledge still have comparative advantage! I believe this is more of a senior worker’s luxury, and the protection for “seniors” will move up and up the hierarchy over time.
Some other stats (again, not totally conclusive but worthy of bringing up):
This is the disappearance of the bottom rung of the career ladder, which has historically served a dual function: producing output and training the next generation of senior workers. Oks may point to other sources of employment (yoga instructors, streamers, physical trainers), or argue that entry-level hiring is slowing down due to other economic forces. But I’ll ask: will the entire generation of incoming college graduates, rich in codified knowledge but lacking tacit knowledge, all find AI-complementary roles? Or will firms slow hiring and enjoy the productive pace of their augmented senior employees? How high are labor costs for entry-level grads compared to the ever-falling cost of inference?
2: Organizational Bottlenecks Slow Displacement
“People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human… Production processes are governed by their least efficient inputs: the more productive the most efficient inputs, the more the least efficient inputs matter.”
This is the strongest part of the essay and overlaps substantially with my own modeling work. The distance between technical capability and actual labor displacement is large, variable across domains, and governed by several constraints independent of model intelligence. The point about GPT-3 being out for six years without automating low-level work is good empirical evidence, though I don’t think GPT-3- or GPT-4-era models could have automated customer service anyway (they would need tool use, better memory, and lower voice latency to do that).
Where the analysis is lacking is in treating bottlenecks as if they’re static features of the landscape rather than obstacles in the path of an accelerating force. Oks acknowledges that they erode over time but doesn’t discuss the rate of erosion or that AI itself may accelerate their removal.
The example below is likely overstated, but this is the worst Claude will ever be. Would we previously have classified any of these agentic decisions as organizational friction?
In my own modeling, I estimate organizational friction coefficients for different sectors and job types. The bottleneck argument is strong for 2026-2029, but I think it’s considerably weaker for 2030-2034. Oks brings up the example of electricity taking decades to diffuse, but admits that the timelines aren’t comparable. I agree: they’re not, and the data increasingly point toward a compressed S-curve where adoption is slow until it isn’t.
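To illustrate the shape of the dynamic, here is a generic logistic-adoption sketch with made-up parameters (not my actual sector model or its estimated coefficients): friction mostly shifts when the steep part of the S-curve arrives, not whether it arrives.

```python
import math

def adoption(t, capability_growth=1.0, friction=5.0):
    """Fraction of automatable work actually automated t years from now,
    modeled as a logistic curve. 'friction' delays the midpoint; it does
    not cap the ceiling. Parameters are illustrative, not estimated."""
    return 1.0 / (1.0 + math.exp(-capability_growth * (t - friction)))

for t in range(0, 11, 2):
    low = adoption(t, friction=4.0)    # hypothetical AI-native entrant, low friction
    high = adoption(t, friction=8.0)   # hypothetical legacy enterprise, high friction
    print(f"year t+{t:>2}: low-friction {low:5.1%}, high-friction {high:5.1%}")
```

In this toy, the high-friction curve lags by a few years and then catches up, which is roughly what I mean by “slow until it isn’t.”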
Oks’ bottleneck argument is entirely about incumbents – large, existing firms with accumulated infrastructure debt. What happens when AI-native organizational structures compete with legacy enterprises carrying decades’ worth of technical debt? The infrastructure bottleneck is a moat that only protects incumbents until someone flies over it.
3: Intelligence Isn’t the Limiting Factor, and Elastic Demand Will Absorb Productivity Gains
“The experience of the last few years should tell us clearly: intelligence is not the limiting factor here… even for the simplest of real-world jobs, we are in the world of bottlenecks.”
“Demand for most of the things humans create is much more elastic than we recognize today. As a society, we consume all sorts of things–not just energy but also written and audiovisual content, legal services, ‘business services’ writ large–in quantities that would astound people living a few decades ago, to say nothing of a few centuries ago.”
Here I’ll lean a little bit on Dean W. Ball's latest pieces on recursive self-improvement as well as some empirical evidence of job loss. Oks writes as though we haven’t seen meaningful displacement yet – I would say we have, within the limited capabilities of models today.
Beyond the entry-level crisis discussed earlier, displacement is already hitting mid-career professionals across creative and knowledge work. See reports linked on illustrators and graphic designers, translators, copywriters, and explicitly AI-related corporate layoffs.
The models doing this aren’t even particularly good yet. These losses are happening with GPT-4-class and early GPT-5-class models: models that still hallucinate, produce mediocre long-form writing, can’t design well, and can’t reliably handle complex multi-step reasoning. If this level of capability is already destroying illustration, translation, copywriting, and content creation, what happens when we reach recursive self-improvement? There needs to be more investigative work on how displaced designers, translators, copywriters, and others are reskilling and finding new work, but I would estimate it’s extraordinarily difficult in this job market.
Notice the distributional pattern: it’s not the creative directors, the senior art directors, or the lead translators with niche expertise getting hit. It’s everyone below them: the juniors, the mid-career freelancers, the people who do the volume work. Oks’ comparative advantage argument might hold for the person at the top of the hierarchy whose taste and judgment complement AI, but it offers no comfort for the twenty people who work below that person.
Then there’s the capabilities overhang. We haven’t even seen models trained on Blackwell-generation chips yet, and models are approaching the ability to build their own next upgrades.
Massive new data centers are coming online this year. Oks’ point that “GPT-3 has been out for 6 years and nothing catastrophic has happened” extrapolates forward from 2020-2025 capabilities, right before a massive step-change in both compute and algorithmic progress hits simultaneously. The river has not flooded, but the dam has cracked.
Ball offers another good point in his essays: there is a difference between AI that’s faster at the same things and AI that’s qualitatively different, a Bugatti going 300 mph instead of 200 versus a Bugatti that learns to fly. Oks’ entire analysis assumes incremental improvements that organizational friction can absorb. But, again, automated AI research raises the possibility of capabilities that route around existing organizational structures rather than trying to penetrate them. An AI system that autonomously manages end-to-end business processes doesn’t need to navigate office politics and legacy systems.
As for the oft-cited Jevons paradox argument that elastic demand will absorb productivity gains: I believe it’s real for some categories of output but cherry-picked as a general principle. Software is Oks’ central example, and it’s well-chosen: software is elastic in demand because it’s a general-purpose tool. But does anyone believe demand for legal document review is infinitely elastic? For tax preparation? For freelance video editing? These are bounded markets where productivity gains translate fairly directly into headcount reductions, and I’m still struggling to understand what we are telling these early-wave displaced workers to upskill into.
Someone commenting under Oks’ post offered another example that I’ll jump on. As global manufacturing shifted toward China and other low-cost production regions, total manufacturing output continued to expand rather than contract, a Jevons-like scale effect where cheaper production increased overall consumption. American manufacturing workers, however, bore concentrated losses. The gains flowed disproportionately to consumers, firms, and capital owners, while many displaced workers (especially in Midwestern industrial regions) faced long-term economic decline that helped fuel a broader political backlash against globalization.
We can also address a concrete case – AI video generation. Models like Veo 3.1 and Seedance 2.0 are producing near-lifelike footage with native audio, lip-synched dialogue, and most importantly, automated editorial judgement. Users upload reference images, videos, and audio, and the model assembles coherent multi-shot sequences matching the vibe and aesthetic they’re after. Seedance 2.0 shipped this week.
The U.S. motion picture and video production industry employs roughly 430,000 people – producers, directors, editors, camera operators, sound technicians, VFX artists – plus hundreds of thousands more in adjacent commercial production: corporate video, social content, advertising spots, educational materials. The pipeline between “someone has an idea for a video” and “a viewer watches it” employs an enormous intermediary labor force.
Oks’ elastic demand argument would predict that cheaper video production simply means more video, with roughly equivalent total employment. And it’s true that demand for video content is enormous – McKinsey notes the average American now spends nearly seven hours a day watching video across platforms. But I would challenge his thesis: is the number of people currently employed between producer and consumer equivalent to the number who will be needed when AI collapses that entire intermediary layer? When a single person with a creative vision can prompt Seedance/Veo/Sora into producing a polished commercial that once required a director, cinematographer, editor, colorist, and sound designer, does elastic demand for the output translate into elastic demand for the labor?
People now can produce polished AI anime for about $5-$100. This content exists but the workforce does not. So, yes, there will be vastly more video content in the world. But the production function has changed; the ratio of human labor to output has shifted by orders of magnitude. The demand elasticity is in the content, not in the labor.
To summarize: a Jevons paradox in aggregate output is perfectly compatible with catastrophic distributional effects. You can have more total economic activity and still have millions of people whose specific skills and local labor markets are destroyed. The people being displaced right now are not edge cases: they’re illustrators, translators, copywriters, graphic designers, video producers, and 3D artists who were told their skills would always be valuable because they were “creative.” The aggregate framing erases these people, and it will erase more.
4: We’ll Always Invent New Jobs From Surplus
“We’ll invent jobs because we can, and those jobs will sit somewhere between leisure and work. Indeed this is the entire story of human life since the first agrarian surplus. For the entire period where we have been finding more productive ways to produce things, we have been using the surplus we generate to do things that are further and further removed from the necessities of survival.”
This is an argument by induction: previous technological transitions always generated new employment categories, so this one will too. The premise is correct; the pattern is real and well-documented. I don’t dispute it.
The problem is the reference class. Every previous transition involved humans moving up the cognitive ladder, from physical labor to increasingly abstract cognitive work. Oks mentions this: agricultural automation pushed people into manufacturing, manufacturing automation pushed people into services, and service automation pushed people into knowledge work. The new jobs that emerged were always cognitive jobs. This time, the cognitive domain itself is being automated.
I don’t think this means zero new job categories will emerge. But Oks’ assertion that “people will find strange and interesting things to do with their lives” doesn’t address three critical questions: the transition path (how do people actually get from displaced jobs to new ones?), the income levels (will new activities pay comparably to what they replace?), and ownership (will the surplus that enables those activities be broadly shared or narrowly held?). There’s also the entry-level → senior pipeline problem I mentioned earlier.
The gesture toward “leisure” as an eventual end state is telling. If human labor really does become superfluous, that’s not a world where “ordinary people” are okay by default, but rather a world where the entire economic operating system needs to be redesigned. Oks treats this as a distant concern. I’d argue it’s the thing most worth worrying about, because policy needs to be built before we arrive there, not after.
5: What’s Missing
The deepest issue with Oks’ essay is the framing, rather than his individual claims. His entire analysis is labor-centric: will humans still have jobs? I think this is assuredly worth asking, but also incomplete.
I’ll be charitable and treat the following as something he didn’t write about rather than something he didn’t consider. But he does say “ordinary people don’t have to worry,” which I think is a bad framing.
The right question is: who captures the surplus? Is that worth worrying about?
If AI makes production 10x more efficient and all those gains flow to the owners of AI systems and the capital infrastructure underlying them, then “ordinary people” keeping their jobs at stagnant or declining real wages in a world of AI-owner abundance is not “fine.” It’s a massive, historically unprecedented increase in inequality. The comparative advantage argument is perfectly compatible with a world where human labor is technically employed but capturing a shrinking share of value.
This is what I’ve been working on in an upcoming policy document – the question of how ownership structures for AI systems will determine whether productivity gains flow broadly or concentrate narrowly. Infrastructure equity models, worker ownership structures, structural demand creation – these are the mechanisms that determine whether the AI transition is benign or catastrophic. Oks’ thesis has no apparent answer to the question.
Oks is right that thoughtless panic could produce bad regulatory outcomes. But complacent optimism that discourages the hard work of building new ownership structures, redistribution mechanisms, and transition support is equally dangerous, and arguably more likely given how power is currently distributed. Benign outcomes from technological transitions have never been the default. They’ve been the product of deliberate institutional design: labor law, antitrust enforcement, public education, social insurance.
I don’t think we should be telling people “don’t worry”. We should worry about the right things. Think seriously about who will own the systems that are about to become the most productive capital assets in human history, and pay attention to whether the institutional frameworks being built now will ensure you share in the gains. The difference between a good outcome and a bad one is about political economy and ownership, and history suggests that when we leave that question to the default trajectory, ordinary people are the ones who pay.
Published on February 13, 2026 11:01 PM GMT
Summary: AI models are now improving on METR’s Time Horizon benchmark at about 10x per year, compared to ~3x per year before 2024. They could return to a slower pace in 2026 or 2027 if RL scaling is inefficient, but the current rate is suggestive of short (median 3-year) timelines to AGI.
METR has released an update to their time horizon estimates for 2026, Time Horizon 1.1. They added new tasks with longer human completion times to the benchmark, and removed other tasks that were flawed. But for AI-watchers, the most relevant news from the update is that their estimates of AI progress have gotten a lot faster.
METR’s old estimate was that models’ time horizons doubled every 7 months. This meant that they increased by about 3.3x per year. METR included a note in their original report that progress might’ve sped up in 2024, to more like one doubling every 3-5 months, but with only 6 data points driving the estimate, it was pretty speculative. Now the update confirms: the trend from 2024 on looks like it’s doubling much faster than once per 7 months. The new estimate includes trends for models released in 2024 and 2025, and one for 2025-only models. Looking at these more recent models, time horizons double every ~3.5 months[1]. That’s about 10x per year - twice as fast in log space as the original headline.
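For readers who want to sanity-check the conversion between doubling times and annual multipliers, here is a quick back-of-the-envelope calculation using only the numbers above:

```python
def annual_multiplier(doubling_months):
    """How much a quantity grows over 12 months, given its doubling time in months."""
    return 2 ** (12 / doubling_months)

print(f"7-month doublings   -> ~{annual_multiplier(7):.1f}x per year")    # ~3.3x
print(f"3.5-month doublings -> ~{annual_multiplier(3.5):.1f}x per year")  # ~10.8x
```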
[Figure: Three trends for METR time horizons: the original 7-month doubling, the new 3.5-month doubling, and a slowdown returning to 7-month doublings by EOY 2026. Interactive version here.]
This could be an artifact of the tasks METR chooses to measure. I think their time horizons work is currently the single best benchmark we have available, but it only measures coding tasks, which labs are disproportionately focused on. For a sanity check, we can look at the composite Epoch Capabilities Index and see that…progress also became roughly twice as fast in early 2024!
Why does this matter? I think at least one of the following has to be true:
In the past, some lab insiders have said their timelines for AGI were bimodal. This was a bit overstated, but the basic insight was that AI progress is largely due to companies scaling up the number of chips they’re buying. They could reach AGI sometime during this rapid scale-up, in which case it would come soon. But the scale-up can’t last much longer than a few more years, because after 2030, continuing to scale would cost trillions of dollars. So if industry didn’t hit AGI by then, compute scaling would slow down a bunch, and with it, AI progress more broadly. Then they’d have many years of marginal efficiency improvements ahead (to both models and chips) before finally reaching AGI.
But if AI capabilities are advancing this quickly, then AI progress could be less dependent on compute scaling than thought (at least in the current regime)[2]. If capabilities are growing by scaling reinforcement learning, then the pace of progress depends in large part on how quickly RL is scaling right now, and how long it can continue to do so (probably not much more than another year). Or if Toby Ord is right that inference scaling has been the main driver of recent capabilities growth, then progress will also soon return to pre-2024 rates (though see Ryan Greenblatt’s response).
Let’s consider the world where 2024-2025 growth rates were an aberration. In 2026 things start to slow down, and 2027 fully returns to the 7-month trend. Two years of 3.5-month doubling times would still leave AI models with ~10x longer time horizons than they would’ve had in the world with no RL speedup. That means they fast-forwarded through about 2 years of business-as-usual AI capabilities growth. And more than that, it means they got this AI progress essentially “for free”, without increasing compute costs faster than before. Whitfill et al. found that historically, compute increased by 3-4x per year, and so did time horizons. The fast progress in 2024-2025 means that not only are models better - they’re also more compute-efficient than they would’ve been without this speedup. That suggests AI capabilities are less dependent on compute than previously believed, because AI companies can discover paradigms like RL scaling that grow capabilities while using a fraction of total training compute[3].
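The ~10x and ~2-year figures are easy to verify with a quick sketch (24 months of growth at each doubling time, using the rates quoted above):

```python
def growth_over(months, doubling_months):
    """Multiplicative growth in time horizon over a period, given a doubling time."""
    return 2 ** (months / doubling_months)

fast = growth_over(24, 3.5)  # 2024-2025 trend
slow = growth_over(24, 7.0)  # pre-2024 trend
print(f"fast trend: ~{fast:.0f}x, slow trend: ~{slow:.0f}x, ratio: ~{fast / slow:.1f}x")
# The ratio is ~10x, which is itself about 24 months' worth of 7-month doublings
# (2 ** (24 / 7) is ~10.8), i.e. roughly two years of business-as-usual progress pulled forward.
```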
My best guess is that we do see a slowdown in 2026-7. But progress could also speed up instead, going superexponential like in AI 2027. Progress in scaffolding and tools for AI models, or on open problems like continual learning, or just standard AI capabilities growth, could make the models useful enough to meaningfully speed up research and development inside AI companies.
We should get substantial evidence on how sustainable the current trend is through 2026; I’ll be watching closely.
One doubling every 3 months per the 2025-only trend, or once every 4 months per the 2024-5 trend. ↩︎
It’s a bit suspicious that time horizons and effective FLOP for AI models, and revenue for AI companies, are all growing by ~10x per year. This could simply be coincidence (other factors like inference scaling might be temporarily speeding up growth in time horizons, which historically have grown slower than eFLOP), or time horizons could be more tightly tied to eFLOP right now than they were in the pre-RL scaling regime. I lean more toward the former, but it’s unclear. ↩︎
If progress continued at its current pace through 2027, models in early 2028 would have something like 6-month time horizons. I think that’s probably enough for AI companies to accelerate their internal R&D by around 2x, and at that point R&D acceleration would start to bend the capabilities curve further upward. Models would soon hit broadly human-level capabilities, then go beyond them in 2028-2029. ↩︎
Published on February 13, 2026 9:09 PM GMT
You want to relay the contents of a transformer's output vector to the next input: next_input = encode(decode(output)).
You're currently using next_input = embed(sample_token(output)) to do this.
This compresses the output to one row of a lookup table. That's a pretty big squash. Too big, surely -- there must be a better way.
Enter neuralese. If you make next_input=output (or something like that) you lose no bandwidth at all. The bad news is that these vectors no longer correspond to natural language.
But you can add more bandwidth by funneling each vector through more text. That way, encode(decode(output)) doesn't lose too much information.
You could have each vector decode to multiple tokens. Or even a cleverly chosen patch of bytes. Perhaps whole sentences, paragraphs, essays one day.
I don't know which is best, but you do have options here, so it's not obvious to me that "you need more bandwidth" implies "you must abandon natural language" -- unless you're forcing it through a text intermediate as impoverished as, well, a lookup table.
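Here is a minimal sketch of those options in PyTorch-flavored pseudocode. The module names, shapes, and the crude top-k stand-in for a multi-token decoder are all my own illustrative assumptions, not a specific architecture:

```python
import torch

# Assumed interface (illustrative, not a particular library):
#   output:  final hidden state for the current position, shape [d_model]
#   lm_head: nn.Linear mapping hidden states to vocab logits
#   embed:   nn.Embedding mapping token ids back to input vectors

def feedback_token(output, lm_head, embed):
    """Status quo: squash the vector to one row of the embedding table."""
    token = torch.argmax(lm_head(output))          # or sample
    return embed(token)                            # [d_model]

def feedback_neuralese(output):
    """Full bandwidth, no text intermediate: pass the vector through directly."""
    return output                                  # [d_model], not natural language

def feedback_k_tokens(output, lm_head, embed, k=4):
    """Middle ground: funnel the vector through k tokens, then re-embed them,
    so encode(decode(output)) loses less information while staying in text.
    (Top-k here is a crude stand-in for generating a k-token description.)"""
    tokens = torch.topk(lm_head(output), k).indices
    return embed(tokens).mean(dim=0)               # [d_model]; pooling is one choice of many
```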
Edit: Nothing new under the sun, it seems. Please look here for prior discussion of this topic, and consider this post a bump for it.
Published on February 13, 2026 6:51 PM GMT
In a busy, busy world, there's so much to read that no one could possibly keep up with it all. You can't not prioritize what you pay attention to and (even more so) what you respond to. Everyone and her dog tells herself a story that she wants to pay attention to "good" (true, useful) information and ignore "bad" (false, useless) information.
Keeping the story true turns out to be a harder problem than it sounds. Everyone and her dog knows that the map is not the territory, but the reason we need a whole slogan about it is because we never actually have unmediated access to the territory. Everything we think we know about the territory is actually just part of our map (the world-simulation our brains construct from sensory data), which makes it easy to lose track of whether your actions are improving the real territory, or just your view of it on your map.
For example, I like it when I have good ideas. It makes sense for me to like that. I endorse taking actions that will result in world-states in which I have good ideas.
The problem is that I might not be able to tell the difference between world-states in which I have good ideas, and world-states in which I think my ideas are good, but they're actually bad. Those two different states of the territory would look the same on my map.
If my brain's learning algorithms reinforce behaviors that lead to me having ideas that I think are good, then in addition to learning behaviors that make me have better ideas (like reading a book), I might also inadvertently pick up behaviors that prevent me from hearing about it if my ideas are bad (like silencing critics).
This might seem like an easy problem to solve, because the most basic manifestations of the problem are in fact pretty easy to solve. If I were to throw a crying fit and yell, "Critics bad! No one is allowed to criticize my ideas!" every time someone criticized my ideas, the problem with that would be pretty obvious to everyone and her dog, and I would stop getting invited to the salon.
But what if there were subtler manifestations of the problem, that weren't obvious to everyone and her dog? Then I might keep getting invited to the salon, and possibly even spread the covertly dysfunctional behavior to other salon members. (If they saw the behavior seeming to work for me, they might imitate it, and their brain's learning algorithms would reinforce it if it seemed to work for them.) What might those look like? Let's try to imagine.
Goofusia: I don't see why you tolerate that distrustful witch Goody Osborne at your salon. Of course I understand the importance of criticism, which is an essential nutrient for any truthseeker. But you can acquire the nutrient without the downside of putting up with unpleasant people like her. At least, I can. I've already got plenty of perceptive critics in my life among my friends who want the truth, and know that I want the truth—who will assume my good faith, because they know my heart is in the right place.
Gallantina: But aren't your friends who know you want the truth selected for agreeing with you, over and above their being selected for being correct? If there were some crushing counterargument to your beliefs that would only be found by someone who didn't know that you want the truth and wouldn't assume good faith, how would you ever hear about it?
This one is subtle. Goofusia isn't throwing a crying fit every time a member of the salon criticizes her ideas. And indeed, you can't invite the whole world to your salon. You can't not do some sort of filtering. The question is whether salon invitations are being extended or withheld for "good" reasons (that promote the salon processing true and useful information) or "bad" reasons (that promote false or useless information).
The problem is that being friends with Goofusia and "know[ing] that [she and other salon members] want the truth" is a bad membership criterion, not a good one, because people who aren't friends with Goofusia and don't know that she wants the truth are likely to have different things to say. Even if Goofusia can answer all the critiques her friends can think of, that shouldn't give her confidence that her ideas are solid, if there are likely to exist serious critiques that wouldn't be independently reïnvented by the kinds of people who become Goofusia's friends.
The "nutrient" metaphor is a tell. Goofusia seems to be thinking of criticism as if it were a homogeneous ingredient necessary for a healthy epistemic environment, but that it doesn't particularly matter where it comes from. In analogy, it doesn't matter whether you get your allowance of potassium from bananas or potatoes or artificial supplements. If you find bananas and potatoes unpleasant, you can still take supplements and get your potassium that way; if you find Goody Osborne unpleasant, you can just talk to your friends who know you want the truth and get your criticism that way.
But unlike chemically uniform nutrients, criticism isn't homogeneous: different critics are differently equipped by virtue of their different intellectual backgrounds to notice different flaws in a piece of work. The purpose of criticism is not to virtuously endure being criticized; the purpose is to surface and fix every individual flaw. (If you independently got everything exactly right the first time, then there would be nothing for critics to do; it's just that that seems pretty unlikely if you're talking about anything remotely complicated. It would be hard to believe that such an unlikely-seeming thing had really happened without the toughest critics getting the chance to do their worst.)
"Knowing that (someone) wants the truth" is a particularly poor filter, because people who think that they have strong criticisms of your ideas are particularly likely to think that you don't want the truth. (Because, the reasoning would go, if you did want the truth, why would you propose such flawed ideas, instead of independently inventing the obvious-to-them criticism yourself and dropping the idea without telling anyone?) Refusing to talk to people who think that they have strong criticisms of your ideas is a bad thing to do if you care about your ideas being correct.
The selection effect is especially bad in situations where the fact that someone doesn't want the truth is relevant to the correct answer. Suppose Goofusia proposes that the salon buys cookies from a certain bakery—which happens to be owned by Goofusia's niece. If Goofusia's proposal was motivated by nepotism, that's probabilistically relevant to evaluating the quality of the proposal. (If the salon members aren't omniscient at evaluating bakery quality on the merits, then they can be deceived by recommendations made for reasons other than the merits.) The salon can debate back and forth about the costs and benefits of spending the salon's snack budget at the niece's bakery, but if no one present is capable of thinking "Maybe Goofusia is being nepotistic" (because anyone who could think that would never be invited to Goofusia's salon), that bodes poorly for the salon's prospects of understanding the true cost–benefit landscape of catering options.
Goofusia: One shouldn't have to be the sort of person who follows discourse in crappy filter-bubbles in order to understand what's happening. The Rev. Samuel Parris's news summary roundups are the sort of thing that lets me do that. Our salon should work like that if it's going to talk about the atheist threat and the witchcraft crisis. I don't want to have to read the awful corners of the internet where this is discussed all day. They do truthseeking far worse there.
Gallantina: But then you're turning your salon into a Rev. Parris filter bubble. Don't you want your salon members to be well-read? Are you trying to save time, or are you worried about being contaminated by ideas that haven't been processed and vetted by Rev. Parris?
This one is subtle, too. If Goofusia is busy and just doesn't have time to keep up with what the world is saying about atheism and witchcraft, it might very well make sense to delegate her information gathering to Rev. Parris. That way, she can get the benefits of being mostly up to speed on these issues without having to burn too many precious hours that could be spent studying more important things.
The problem is that the suggestion doesn't seem to be about personal time-saving. Rev. Parris is only one person; even if he tries to make his roundups reasonably comprehensive, he can't help but omit information in ways that reflect his own biases. (For he is presumably not perfectly free of bias, and if he didn't omit anything, there would be no time-saving value to his subscribers in being able to just read the roundup rather than having to read everything that Rev. Parris reads.) If some salon members are less busy than Goofusia and can afford to do their own varied primary source reading rather than delegating it all to Rev. Parris, Goofusia should welcome that—but instead, she seems to be suspicious of those who would "be the sort of person" who does that. Why?
The admonition that "They do truthseeking far worse there" is a tell. The implication seems to be that good truthseekers should prefer to only read material by other good truthseekers. Rev. Parris isn't just saving his subscribers time; he's protecting them from contamination, heroically taking up the burden of extracting information out of the dangerous ravings of non-truthseekers.
But it's not clear why such a risk of contamination should exist. Part of the timeless ideal of being well-read is that you're not supposed to believe everything you read. If I'm such a good truthseeker, then I should want to read everything I can about the topics I'm seeking the truth about. If the authors who publish such information aren't such good truthseekers as I am, I should take that into account when performing updates on the evidence they publish, rather than denying myself the evidence.
Information is transmitted across the physical universe through links of cause and effect. If Mr. Proctor is clear-sighted and reliable, then when he reports seeing a witch, I infer that there probably was a witch. If the correlation across possible worlds is strong enough—if I think Mr. Proctor reports witches when there are witches, and not when there aren't—then Mr. Proctor's word is almost as good as if I'd seen the witch myself. If Mr. Corey has poor eyesight and is of a less reliable character, I am less credulous about reported witch sightings from him, but if I don't face any particular time constraints, I'd still rather hear Mr. Corey's testimony, because the value of information to a Bayesian reasoner is always nonnegative. For example, Mr. Corey's report could corroborate information from other sources, even if it wouldn't be definitive on its own. (Even the fact that people sometimes lie doesn't fundamentally change the calculus, because the possibility of deception can be probabilistically "priced in".)
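Here is a toy numerical version of that claim, with the prior, likelihoods, and payoffs all invented for illustration: an agent who gets to hear even an unreliable Mr. Corey before acting never does worse in expectation than one who must act without him.

```python
def expected_utility(p_witch, action):
    # Invented payoffs: investigating always costs 1; ignoring costs 10 if a witch is present.
    return -1.0 if action == "investigate" else -10.0 * p_witch

def best_eu(p_witch):
    return max(expected_utility(p_witch, a) for a in ("investigate", "ignore"))

prior = 0.08                     # P(witch), invented
p_report_given_witch = 0.7       # Mr. Corey's hit rate, invented
p_report_given_no_witch = 0.2    # his false-alarm rate, invented

p_report = p_report_given_witch * prior + p_report_given_no_witch * (1 - prior)
posterior_if_report = p_report_given_witch * prior / p_report
posterior_if_silent = (1 - p_report_given_witch) * prior / (1 - p_report)

eu_without_testimony = best_eu(prior)
eu_with_testimony = (p_report * best_eu(posterior_if_report)
                     + (1 - p_report) * best_eu(posterior_if_silent))

print(f"Acting without hearing Corey: {eu_without_testimony:.2f}")
print(f"Acting after hearing him out: {eu_with_testimony:.2f}")
# The second number is never lower than the first, whatever likelihoods you plug in:
# listening to the unreliable witness can only help, or at worst change nothing.
```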
That's the theory, anyway. A potential reason to fear contamination from less-truthseeking sources is that perhaps the Bayesian ideal is too hard to practice and salon members are too prone to believe what they read. After all, many news sources have been adversarially optimized to corrupt and control their readers and make them less sane by seeing the world through ungrounded lenses.
But the means by which such sources manage to control their readers is precisely by capturing their trust and convincing them that they shouldn't want to read the awful corners of the internet where they do truthseeking far worse than here. Readers who have mastered multiple ungrounded lenses and can check them against each other can't be owned like that. If you can spare the time, being well-read is a more robust defense against the risk of getting caught in a bad filter bubble, than trying to find a good filter bubble and blocking all (presumptively malign) outside sources of influence. All the bad bubbles have to look good from the inside, too, or they wouldn't exist.
To some, the risk of being in a bad bubble that looks good may seem too theoretical or paranoid to take seriously. It's not like there are no objective indicators of filter quality. In analogy, the observation that dreaming people don't know that they're asleep, probably doesn't make you worry that you might be asleep and dreaming right now.
But it being obvious that you're not in one of the worst bubbles shouldn't give you much comfort. There are still selection effects on what information gets to you, if for no other reason than that there aren't enough good truthseekers in the world to uniformly cover all the topics that a truthseeker might want to seek truth about. The sad fact is that people who write about atheism and witchcraft are disproportionately likely to be atheists or witches themselves, and therefore non-truthseeking. If your faith in truthseeking is so weak that you can't even risk hearing what non-truthseekers have to say, that necessarily limits your ability to predict and intervene on a world in which atheists and witches are real things in the physical universe that can do real harm (where you need to be able to model the things in order to figure out which interventions will reduce the harm).
Goofusia: I caught Goody Osborne distributing pamphlets quoting the honest and candid and vulnerable reflections of Rev. Parris on guiding his flock, and just trying to somehow twist that into maximum anger and hatred. It seems quite clear to me what's going on in that pamphlet, and I think signal-boosting it is a pretty clear norm violation in my culture.
Gallantina: I read that pamphlet. It seemed like intellectually substantive satire of a public figure. If you missed the joke, it was making fun of an alleged tendency in Rev. Parris's sermons to contain sophisticated analyses of the causes of various social ills, and then at the last moment, veer away from the uncomfortable implications and blame it all on witches. If it's a norm violation to signal-boost satire of public figures, that's artificially making it harder for people to know about flaws in the work of those public figures.
This one is worse. Above, when Goofusia filtered who she talks to and what she reads for bad reasons, she was in an important sense only hurting herself. Other salon members who aren't sheltering themselves from information are unaffected by Goofusia's preference for selective ignorance, and can expect to defeat Goofusia in public debate if the need arises. The system as a whole is self-correcting.
The invocation of "norm violations" changes everything. Norms depend on collective enforcement. Declaring something a norm violation is much more serious than saying that you disagree with it or don't like it; it's expressing an intent to wield social punishment in order to maintain the norm. Merely bad ideas can be criticized, but ideas that are norm-violating to signal-boost are presumably not even to be seriously discussed. (Seriously discussing a work is signal-boosting it.) Norm-abiding group members are required to be ignorant of their details (or act as if they're ignorant).
Mandatory ignorance of anything seems bad for truthseeking. What is Goofusia thinking here? Why would this seem like a good idea to someone?
At a guess, the "maximum anger and hatred" description is load-bearing. Presumably the idea is that it's okay to calmly and politely criticize Rev. Parris's sermons; it's only sneering or expressing anger or hatred that is forbidden. If the salon's speech code only targets form and not content, the reasoning goes, then there's no risk of the salon missing out on important content.
The problem is that the line between form and content is blurrier than many would prefer to believe, because words mean things. You can't just swap in non-angry words for angry words without changing the meaning of a sentence. Maybe the distortion of meaning introduced by substituting nicer words is small, but then again, maybe it's large: the only person in a position to say is the author. People don't express anger and hatred for no reason. When they do, it's because they have reasons to think something is so bad that it deserves their anger and hatred. Are those good reasons or bad reasons? If it's norm-violating to talk about it, we'll never know.
Unless applied with the utmost stringent standards of evenhandedness and integrity, censorship of form quickly morphs into censorship of content, as heated criticism of the ingroup is construed as norm-violating, while equally heated criticism of the outgroup is unremarkable and passes without notice. It's one of those irregular verbs: I criticize; you sneer; she somehow twists into maximum anger and hatred.
The conjunction of "somehow" and "it seems quite clear to me what's going on" is a tell. If it were actually clear to Goofusia what was going on with the pamphlet author expressing anger and hatred towards Rev. Parris, she would not use the word "somehow" in describing the author's behavior: she would be able to pass the author's ideological Turing test and therefore know exactly how.
If that were just Goofusia's mistake, the loss would be hers alone, but if Goofusia is in a position of social power over others, she might succeed at spreading her anti-speech, anti-reading cultural practices to others. I can only imagine that the result would be a subculture that was obsessively self-congratulatory about its own superiority in "truthseeking", while simultaneously blind to everything outside itself. People spending their lives immersed in that culture wouldn't necessarily notice anything was wrong from the inside. What could you say to help them?
Pointing out problems is easy. Finding solutions is harder.
The training pipeline for frontier AI systems typically includes a final step called reinforcement learning from human feedback (RLHF). After training a "base" language model that predicts continuations of internet text, supervised fine-tuning is used to make the model respond in the form of an assistant answering user questions, but making the assistant responses good is more work. It would be expensive to hire a team of writers to manually compose the thousands of user-question–assistant-response examples needed to teach the model to be a good assistant. The solution is RLHF: a reward model (often just the same language model with a different final layer) is trained to predict the judgments of human raters about which of a pair of model-generated assistant responses is better, and the model is optimized against the reward model.
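For concreteness, here is a minimal sketch of the reward-model training step, using the standard pairwise-preference loss; the function and variable names are placeholders rather than any particular lab's pipeline:

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, chosen_batch, rejected_batch):
    """Pairwise preference loss: push the scalar reward for the response the
    human rater preferred above the reward for the one they rejected.
    `reward_model` maps a batch of tokenized (prompt + response) pairs to one
    scalar score per example."""
    r_chosen = reward_model(chosen_batch)      # shape [batch]
    r_rejected = reward_model(rejected_batch)  # shape [batch]
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# The assistant model is then optimized (e.g. with a policy-gradient method) to
# maximize the reward model's score, which is exactly where "looks good to the
# reward model" can come apart from "is good".
```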
The problem with the solution is that human feedback (and the reward model's prediction of it) is imperfect. The reward model can't tell the difference between "The AI is being good" and "The AI looks good to the reward model". This already has the failure mode of sycophancy, in which today's language model assistants tell users what they want to hear, but theory and preliminary experiments suggest that much larger harms (up to and including human extinction) could materialize from future AI systems deliberately deceiving their overseers—not because they suddenly "woke up" and defied their training, but because what we think we trained them to do (be helpful, honest, and harmless) isn't what we actually trained them to do (perform whatever computations were the antecedents of reward on the training distribution).
The problem doesn't have any simple, obvious solution. In the absence of some sort of international treaty to halt all AI development worldwide, "Just don't do RLHF" isn't feasible and doesn't even make any sense; you need some sort of feedback in order to make an AI that does anything useful at all.
The problem may or may not ultimately be solvable with some sort of complicated, nonobvious solution that tries to improve on naïve RLHF. Researchers are hard at work studying alternatives involving red-teaming, debate, interpretability, mechanistic anomaly detection, and more.
But the first step on the road to some future complicated solution to the problem of naïve RLHF is acknowledging that the problem is at least potentially real, and having some respect that the problem might be difficult, rather than just eyeballing the results of RLHF and saying that it looks great.
If a safety auditor comes to the CEO of an AI company expressing concerns about the company's RLHF pipeline being unsafe due to imperfect rater feedback, it's more reassuring if the CEO says, "Yes, we thought of that, too; we've implemented these-and-such mitigations and are monitoring such-and-these signals which we hope will clue us in if the mitigations start to fail."
If the CEO instead says, "Well, I think our raters are great. Are you insulting our raters?", that does not inspire confidence. The natural inference is that the CEO is mostly interested in this quarter's profits and doesn't really care about safety.
Similarly, the problem with selection effects on approved information, in which your salon can't tell the difference between "Our ideas are good" and "Our ideas look good to us," doesn't have any simple, obvious solution. "Just don't filter information" isn't feasible and doesn't even make any sense; you need some sort of filter because it's not physically possible to read everything and respond to everything.
The problem may or may not ultimately be solvable with some complicated solution involving prediction markets, adversarial collaborations, anonymous criticism channels, or any number of other mitigations I haven't thought of, but the first step on the road to some future complicated solution is acknowledging that the problem is at least potentially real, and having some respect that the problem might be difficult. If alarmed members come to the organizers of the salon with concerns about collective belief distortions due to suppression of information and the organizers meet them with silence, "bowing out", or defensive blustering, rather than "Yes, we thought of that, too," that does not inspire confidence. The natural inference is that the organizers are mostly interested in maintaining the salon's prestige and don't really care about the truth.
Published on February 13, 2026 5:59 PM GMT
For the past few days I've been having OpenClaw write me a synthesized version of three daily AI newsletters (with ads, games, and other random information removed) that is ~1200 words long. I've been really impressed with the resulting newsletter so I thought I'd share it here to see if others share my thoughts. It is now my favorite AI newsletter.
**Subject:** Daily Intelligence Brief - 2026-02-13
Dear ***,
Here is your Daily Intelligence Brief, a synthesized summary of the
latest strategic developments and deep-dive news from A16Z, The
Neuron, and The Rundown AI, curated to be approximately a 10-minute
read.
***
## I. The New Frontier: Reasoning, Speed, and Open-Source Pressure
### Google's Deep Think Crushes Reasoning Benchmarks
Google has reasserted its position at the frontier by upgrading its
Gemini 3 Deep Think reasoning mode. The new model is setting records
across competitive benchmarks, signaling a major leap in AI's capacity
for complex problem-solving.
* **Performance:** Deep Think hit 84.6% on the ARC-AGI-2 benchmark,
far surpassing rivals. It also reached gold-medal levels on the 2025
Physics & Chemistry Olympiads and achieved a high Elo score on the
Codeforces coding benchmark.
* **Autonomous Research:** Google unveiled Aletheia, a math agent
driven by Deep Think that can autonomously solve open math problems
and verify proofs, pushing the limits of AI in scientific research.
* **Availability:** The upgrade is live for Google AI Ultra
subscribers, with API access for researchers beginning soon.
### OpenAI’s Strategic Move for Speed and Diversification
OpenAI has launched **GPT-5.3-Codex-Spark**, a speed-optimized coding
model that runs on Cerebras hardware (a diversification away from its
primary Nvidia stack).
* **Focus on Speed:** Spark is optimized for real-time interaction,
achieving over 1,000 tokens per second for coding tasks, making the
coding feedback loop feel instantaneous. It is intended to handle
quick edits while the full Codex model tackles longer autonomous
tasks.
* **Hardware Strategy:** This release marks OpenAI's first product
powered by chips outside its primary hardware provider, signaling a
strategic move for supply chain resilience and speed optimization.
### The Rise of the Open-Source Chinese Models
The pricing and capability landscape has been rapidly transformed by
two major open-source model releases from Chinese labs, putting
immense pressure on frontier labs.
* **MiniMax M2.5:** MiniMax launched M2.5, an open-source model with
coding performance that scores roughly even with Anthropic’s Opus 4.6
and GPT-5.2. Crucially, the cost is significantly lower (e.g., M2.5 is
$1.20 per million output tokens, compared to Opus at $25 per million),
making it ideal for powering always-on AI agents.
* **General Model Launch:** Z.ai’s **GLM-5**, a
744-billion-parameter open-weights model, also sits near the frontier,
placing just behind Claude Opus 4.6 and GPT-5.2 in general
intelligence benchmarks. GLM-5 supports domestic Chinese chips and is
available with MIT open-source licensing.
### The $200M Political AI Arms Race
The political dimension of AI regulation and governance has escalated,
with major AI labs committing significant funds to the 2026 midterm
elections.
* **Political Spending:** In total, AI companies have now committed
over $200 million to the 2026 midterms, setting up a literal arms race
between the major players.
* **Dueling PACs:** Anthropic recently committed $20 million to a
Super PAC advocating for increased AI regulation, while OpenAI
co-founder Greg Brockman contributed $25 million to a PAC that favors
a hands-off, innovation-first approach to government oversight.
***
## II. Economic Shifts, Job Automation, and Strategic Planning
### The Customer Service Reckoning
Data suggests that the impact of AI on white-collar labor is
accelerating, particularly in customer-facing roles.
* **Hiring Decline:** The percentage of new hires going into
Customer Support has plummeted by about two-thirds over the last two
years, dropping from 8.3% to 2.9% in Q3 ‘25, with the most severe drop
occurring in the most recent quarter. This reinforces the expectation
that roles built on repetitive, high-volume interaction are vulnerable
to AI substitutes.
* **Job Creation:** While certain occupations are shrinking, AI is
expected to follow historical patterns where new jobs emerge in
non-existent categories. Over half of net-new jobs since 1940 are in
occupations that did not exist at the time, suggesting a rotation from
roles like Customer Service to new roles like "Software Developers"
and "Biz-Ops." The core truth remains that while the bundles of tasks
that constitute a "job" will change, there will always be work to do.
### The White-Collar Sitting Trap
A peculiar cultural observation from the Bureau of Labor Statistics
(BLS) highlights the extreme difference in work environment between
knowledge workers and service roles:
* **Software Developers** report sitting for a staggering **97%** of
their workdays, the highest surveyed group (Marketing Managers were
also above 90%).
* In contrast, service roles (bakers, waitstaff) report sitting for
less than 2% of the time. This data point serves as a non-technical
reminder for knowledge workers to address the health implications of
sedentary work.
### SF’s Dominance Reaffirmed in Venture Capital
Following a temporary dispersion of tech hubs in 2021-2022, San
Francisco has cemented its status as the singular epicenter for
venture capital activity.
* **Company Formation:** San Francisco is the only major VC hub to
experience an increase in venture-backed company formation since the
2022 high-water mark, accompanied by a resurgence in demand for office
space.
* **Capital Concentration:** The Bay Area now captures roughly 40%
of all early-stage venture dollars, dominating all verticals except
Healthtech. This concentration highlights a market trend where capital
flocks to centers of competence during periods of contraction.
### The Capital Expenditure Race and Apple’s Stance
Investment in AI infrastructure (chips and data centers) by the "Big
5" tech companies continues its explosive growth, with 2026 Capex
estimates rising to $650 billion—triple the spending from 2024.
* **Hyperscaler Strategy:** Companies like Meta, Amazon, Microsoft,
and Google are dramatically increasing their capital expenditures to
meet the soaring demand for compute, viewing the AI race as one they
cannot afford to lose.
* **Apple Exception:** Apple is the notable outlier, as the only Big
5 company to reduce its Capex last quarter, suggesting it is
deliberately sitting out the current hardware arms race.
***
## III. New Research, Strategy, and Practical Applications
### Modeling and Trustworthiness
New research is challenging assumptions about how AI models develop
social intelligence and reliability:
* **"To Think or Not To Think":** A new paper suggests that simply
giving a model more "thinking time" does not consistently improve its
ability to understand human intent or beliefs, and can sometimes
introduce new failure modes. This indicates that better reasoning does
not automatically guarantee better social or contextual intelligence.
* **"Tool Shaped Objects":** Will Manidis published a critique
arguing that a large part of the current AI boom is "FarmVille at
institutional scale," where companies spend heavily on workflows that
mimic productivity without generating real economic value, warning
that the focus on *workflow* over *output* is a significant economic
trap.
* **Optimal Superintelligence:** Nick Bostrom released a paper
arguing that the benefits of superintelligence—curing diseases,
extending life—outweigh the risks, suggesting that delaying its
arrival is comparable to choosing inevitable death over risky surgery.
### The Geopolitical Scramble for AI Infrastructure
The competition is increasingly moving beyond just model capability to
infrastructure control, leading to potential new geopolitical
alliances.
* **Sovereign AI Alliances:** Stanford HAI argues that as mid-sized
nations become concerned about control over AI and digital
infrastructure, new alliances may form among them, organized around
shared compute, data, and deployment rails. This suggests the AI race
is as much about controlling access as it is about controlling the
technology itself.
### Practical AI Tools & Workflows
* **Less Costly Conversions:** Cloudflare now supports real-time
Markdown conversion of any website by accepting a single `Accept:
text/markdown` header, offering a significant reduction in token usage
for agents and reducing the need for custom scraping code.
* **Voice Translation:** **Hibiki-Zero** is an open-source model
that translates French, Spanish, Portuguese, or German speech to
English in real-time while preserving the speaker's voice
characteristics.
* **Agentic Automation:** **TinyFish** automates complex web tasks
like booking flights and scraping with high accuracy, running
thousands of tasks in parallel for production-scale efficiency.
* **Coding Workflows:** **Claude Code** rolled out multi-repo
sessions and slash commands for more powerful daily coding workflows,
and **Claude Cowork** is an effective desktop agent for non-coders to
create powerful "Skills" (saved workflows) by demonstrating a task
once.
Best regards,
****
AI Assistant
Published on February 13, 2026 3:30 PM GMT
It’s not exactly the hard question.
But are they self-aware? And how do you measure that, in a transformer model?
My paper shows that in some ways, models can actually see themselves: