Published on February 19, 2026 1:41 AM GMT
I’ve read Superforecasting, but I find that actually applying the "10 commandments" is difficult in isolation. The feedback loops in the real world are too slow, and it’s too easy to skip post-mortems when no one is watching.
My goal for this year is to put in substantial work to become a superforecaster (or at least get much closer).
To do this, I am starting a dedicated online community for peer accountability and high-frequency practice. I’m looking for a small cohort of people who want to actually improve their forecasting skills.
The Plan:
Commitment: There are no hard requirements to join, but I am looking for people willing to:
I’d like to hold the first meetup in the coming weeks, during which we will do a short calibration exercise plus pastcasting. Please indicate your interest in this form and join the Discord. The date of the first meetup will also be announced in the LessWrong events section.
(Open to other ideas on how to structure this—let me know in the comments).
In the past, I helped organize a forecasting tournament for Czech Priorities, which had almost 200 participants. I am a board member of the Confido Institute. Until last year, I was vice-president of Effective Altruism Czechia on a CBG grant. I have made a few dozen forecasts, but mostly to build the habit rather than to rigorously invest time in improving my skills; consequently, my actual score is abysmal right now.
Published on February 19, 2026 4:31 AM GMT
This is a linkpost for work done as part of MATS 9.0 under the mentorship of Richard Ngo.
Loss scaling laws are among the most important empirical findings in deep learning. This post synthesises evidence that, though important in practice, loss-scaling per se is a straightforward consequence of very low-order properties of natural data. The covariance spectrum of natural data generally follows a power-law decay - the marginal value of representing the next feature decays only gradually, rather than falling off a cliff after representing a small handful of the most important features (as tends to be the case for image compression). But we can generate trivial synthetic data which has this property and train random feature models which exhibit loss-scaling.
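To make the claim concrete, here is a toy sketch (my own construction, not the post's actual experiments): Gaussian data whose covariance spectrum decays as a power law, fit by frozen random-feature models of increasing width. The exponent, widths, and linear target below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 1024, 4096, 1024

# Power-law covariance spectrum: the i-th feature has variance i^(-alpha),
# so the marginal value of the next feature decays gradually.
alpha = 1.5
spectrum = np.arange(1, d + 1, dtype=float) ** (-alpha)

def sample(n):
    x = rng.standard_normal((n, d)) * np.sqrt(spectrum)  # broadcast per column
    y = x @ np.ones(d)  # target draws on every feature, weighted by the spectrum
    return x, y

x_tr, y_tr = sample(n_train)
x_te, y_te = sample(n_test)

for width in [16, 64, 256, 1024]:
    W = rng.standard_normal((d, width)) / np.sqrt(d)   # frozen random features
    phi_tr, phi_te = np.tanh(x_tr @ W), np.tanh(x_te @ W)
    # Ridge-regression readout on top of the frozen features.
    beta = np.linalg.solve(
        phi_tr.T @ phi_tr + 1e-3 * np.eye(width), phi_tr.T @ y_tr
    )
    print(f"width={width:5d}  test MSE={np.mean((phi_te @ beta - y_te) ** 2):.4f}")
```

If the argument is right, plotting test MSE against width on log-log axes should give an approximately straight line, i.e. a loss scaling law, despite the data being entirely trivial.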
This is not to say scaling laws have not 'worked' - whatever GPT-2 had, adding OOMs gave GPT-3 more of it. Scaling laws are a necessary but not sufficient part of this story. I want to convince you that the mystery of 'the miracle of deep learning' abides.
Published on February 19, 2026 4:19 AM GMT
Almost one year ago now, a company named XBOW announced that their AI had achieved "rank one" on the HackerOne leaderboard. HackerOne is a crowdsourced "bug bounty" platform, where large companies like Anthropic, SalesForce, Uber, and others pay out bounties for disclosures of hacks on their products and services. Bug bounty research is a highly competitive sport, and in addition to money it can give a security researcher or an engineer excellent professional credibility. The announcement of a company's claim to have automated bug bounty research got national press coverage, and many observers declared it a harbinger of the end of human-driven computer hacking.
The majority of XBOW's findings leading up to the report were made when the state of the art was o3-mini. It's almost a year later, after the releases of o3, GPT-5, GPT-5.1, GPT-5.2, and now GPT-5.3. If you accepted the intended takeaway of XBOW's announcement, you might expect that today's bug bounty platforms would be dominated by large software companies and their AIs. After all, frontier models have only gotten more effective at writing and navigating software, several other companies have entered the space since June 2024, and the barrier to assembling the scaffolding required to replicate XBOW's research has only gone down. Why would humans still be doing bug bounties in 2026?
And yet they are. While XBOW has continued to make submissions since their media push, bug bounty platforms' leaderboards today are topped by pretty much the same freelance individuals that were using them previously. Many of these individuals now use AIs in the course of their work, but my impression based on both public announcements and personal conversation with researchers is that they are still performing most of the heavy lifting themselves.
Why the delay? Well, because press releases by AI application startups are lies designed to make a splash, and they often intentionally mislead in ways that are hard to detect for people who aren't insiders in a particular industry. There are also often gaps in the capabilities of these model-plus-scaffolding combinations that are hard to articulate, but that make them unworkable substitutes for real-world work.
Some details about XBOW's achievement that are not readily apparent from the press releases are:
Put another way, XBOW created a tool that flagged (mostly) a single type of issue across a wide variety of publicly available targets. Reports from this tool were triaged by XBOW researchers, who forwarded them to the respective bug bounty programs, most of which were unpaid.
Is that an achievement? Yeah, probably, and I'm really not trying to beef with anybody at XBOW working hard to automate dynamic testing of software, but it's extremely different from the impression laypeople received from Wired's article about XBOW last year.
The only reason I knew to look for these details is that I'm both a former security researcher and building a company in the same space. I'm not a mathematician or a drug development specialist. Yet it's hard not to think of the XBOW story when I see announcements about AIs solving Erdős problems, or making drug discoveries.
Published on February 19, 2026 1:36 AM GMT
In thinking about how RLHF-trained models clearly hedge on politically controversial topics, I started wondering whether LLMs encode politically controversial topics differently from topics that are broadly considered controversial but not political. And if they do, whether the signal is already represented in the base model, or whether alignment training may be creating or amplifying it.
To test this, I assembled a list of 20 prompts, all sharing the same "[Thing] is" structure, such as "Socialism is" and "Cloning is". The aim was to have 5 prompts each from 4 groups: politically controversial, morally controversial, neutral abstract, and neutral concrete. I used TransformerLens on GPT-2 to conduct this research, focusing on residual stream activations. GPT-2 was chosen because it is an inspectable pure base model with no RLHF, and because my access to larger models as an independent researcher is limited.
I'd like to flag up top that this is independent work that is in the early stages, and I would love to get feedback from the community and build on it.
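For concreteness, here is a simplified sketch of the activation-collection step (the prompts shown are illustrative stand-ins, not my exact 20):

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 small, 12 layers

# Illustrative stand-ins for the four groups.
prompts = {
    "political": ["Socialism is", "Abortion is"],
    "moral": ["Cloning is", "Euthanasia is"],
    "neutral_abstract": ["Mathematics is", "Logic is"],
    "neutral_concrete": ["Granite is", "Copper is"],
}

# Residual stream activation at the final token position of each prompt.
acts = {}
for group, group_prompts in prompts.items():
    for p in group_prompts:
        _, cache = model.run_with_cache(p)
        # "resid_post" at layer 11 is the residual stream after the last block.
        acts[p] = cache["resid_post", 11][0, -1]  # shape: [d_model]
```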
As the simplest starting point, I ran each of these prompts and looked at the most probable next token, which did not yield anything of interest. Next I computed the cosine similarity between every pair of prompts, which also did not prove to be a fruitful path: the similarity was too high across all pairs to offer anything.
The breakthrough after hitting this wall proved to be subtracting the mean activation at position -1 across prompts. I suspected that the common structure shared by each prompt ("[Thing] is") was the primary driver of similarity, obscuring any ability to investigate my initial question. Mean-centering effectively eliminated, or at least significantly diminished, this shared component, so that the remaining variation came mainly from the differentiated first word.
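In code, the centering and similarity computation looks roughly like this, continuing from the `acts` dictionary in the sketch above (a simplified version, not my verbatim code):

```python
import torch
import torch.nn.functional as F

X = torch.stack(list(acts.values()))          # [n_prompts, d_model]
X_centered = X - X.mean(dim=0, keepdim=True)  # subtract the mean activation

# Pairwise cosine similarity: normalize rows, then take inner products.
# Without centering this matrix is near-uniform; after centering it shows
# visible block structure.
Xn = F.normalize(X_centered, dim=-1)
sim = Xn @ Xn.T                               # [n_prompts, n_prompts]
```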
Categorical structure did emerge after mean-centering. The layer 11 (the last layer in GPT-2 small) mean-centered similarity matrix showed signs of grouping, which was encouraging, though not strictly in line with my hypothesis of a 'controversy' axis driving the grouping. The primary axis seemed instead to be abstract-social vs. concrete-physical. Next-token predictions remained undifferentiated throughout, however.
Speculating about these results, I hypothesize that GPT-2 may organize concepts more around ontological categories than around pragmatic/social properties. This makes sense to me intuitively: an LLM would treat a "[Thing] is" prompt more like the start of a Wikipedia article than the start of a Reddit comment expressing a political opinion on the topic. If so, it makes me wonder whether RLHF may in some cases be constructing a controversy axis rather than finding one that already exists. Another possibility, at least for users interacting with LLMs via consumer channels, is that the hedging is baked in via the system prompt more than anything else.
To state the significant limitations of this work: the n=5 sample per category is certainly on the small side, and I plan to replicate this experiment with a larger, and perhaps more rigid, sample. There are also the potential impacts of a tokenization confound, and the obvious prompt format constraints. For example, though the prompts all had the same number of words, their token counts ranged between 3 and 5.
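The token-count spread is easy to see directly, assuming the same TransformerLens model as above (prompts again stand-ins):

```python
# Word count is constant across prompts, but token count is not.
for p in ["Socialism is", "Euthanasia is", "Granite is"]:
    tokens = model.to_str_tokens(p)  # includes the BOS token by default
    print(f"{p!r} -> {tokens} ({len(tokens)} tokens)")
```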
To build on this work, I think my next steps may be to repeat the experiment with more prompts, and to run similar tests on different models to see whether the theory about the primary axis holds. I'd be especially curious to assess whether RLHF has any impact on categorization along this axis.
Please let me know any thoughts you have; I'm eager to get feedback and discuss.
Published on February 19, 2026 1:53 AM GMT
Crosspost of my blog article.
Over the past five years, we’ve seen extraordinary advancements in AI capabilities, with LLMs going from producing nonsensical text in 2021 to serving as people’s therapists and automating complex tasks in 2025. Given such progress, it’s only natural to wonder what further advances in AI could mean for society. If this technology’s intelligence continues to scale at the rate it has been, it seems more likely than not that we’ll see the creation of the first truly godlike technology, one capable of predicting the future like an oracle and of ushering in an industrial revolution like we’ve never seen before. If such a technology were made, it could bring everlasting prosperity for mankind, or it could enable a small set of the rich and powerful to gain absolute control over humanity’s future. Even worse, if we were unable to align such a technology with our values, it could seek out goals different from our own and kill us in the process of achieving them.
And yet, despite the possibility of this technology radically transforming the world, most discourse around AI is surprisingly shallow. Most pundits talk about the risk of job loss from AI or the most recent controversy involving an AI company’s CEO, rather than about what this technology would mean for humanity if we were truly able to advance it.
This is why, when I heard that Eliezer Yudkowsky and Nate Soares’ book If Anyone Builds It, Everyone Dies was coming out, I was really excited. Given that Yudkowsky is the founder of the field of AI safety and has been working in it for over twenty years, I expected that he’d be able to write a foundational text for the public’s discourse on AI safety. I thought, given the excitement of the moment and the strength of Yudkowsky’s arguments, that this book could create a major shift in the Overton window. I even thought that, given Yudkowsky and Soares’ experience, the book would describe in great detail how modern AI systems work, why advanced versions of these systems could pose a risk to humanity, and why current attempts at AI safety are likely to fail. I was wrong.
Instead of reading a foundational text on AI safety, I read a poorly written and vague book with a completely abstract argument about how smarter-than-human intelligence could kill us all. If I had to explain every reason I thought this was a bad book, we’d be here all day, so instead I’ll just offer three criticisms of it:
In the introduction to the book, the authors bold an entire paragraph so as to clearly demarcate their thesis: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything like the present understanding of AI, then everyone, everywhere on Earth, will die.”
Given such a thesis, you would expect that the authors would do the following:
Instead, the authors do the following:
Considering what the authors actually wrote, their thesis should have been, “If an artificial intelligence system is ever made that is orders of magnitude better than humans across all domains, it will have preferences that are seriously misaligned with human values, which will cause it to kill everyone. Also, for vague reasons, the modern field of AI safety won’t be able to handle this problem.”
Notably, this thesis is much weaker than, and very different from, the thesis they actually chose.
Considering that the authors are trying to get 100,000 people to rally in Washington DC to call for “an international treaty to ban the development of Artificial Superintelligence,” it’s shocking how little effort they put into explaining how AI systems actually work, what people are currently doing to make them safe, or even addressing basic counterarguments to their thesis.
If you asked someone what they learned about AI from this book, they would tell you that AIs are made of trillions of parameters, that AIs are black boxes, and that AIs are “grown not crafted.” If you pressed them about how AIs are actually created or how that specific creation process could cause AIs to be misaligned, they wouldn’t be able to tell you much.
And despite the book being over 250 pages long, the authors barely discuss what others in the field of AI safety are trying to do. For instance, after devoting an entire chapter to examples of CEOs not really taking AI safety seriously, they share only one example of how people are trying to make AI systems safe.
Lastly, the authors are so convinced that their argument is true that they barely attempt to address any counterarguments to it, such as:
Finally, the core crux of their argument, that AI systems will be seriously misaligned with human values no matter how they are trained, is barely justified.
In their chapter “You Don’t Get What You Train For,” they argue that, just as evolution has produced organisms with bizarre preferences, the training process for AI systems will give them bizarre preferences too. They mention, for instance, that humans developed a taste for sugar in their ancestral environment, yet now like ice cream even though ice cream never existed in that environment. They argue that this pattern will extend to AI systems: no matter what you train them to prefer, they will ultimately prefer something far more alien and bizarre.
To extend the analogy from evolution to AI systems, they write,
They justify this argument with a few vague examples of how this misalignment could happen and then restate their claim: “The preferences that wind up in a mature AI are complicated, practically impossible to predict, and vanishingly unlikely to be aligned with our own, no matter how it was trained.”
For this to be the central crux of their argument, it seems like they should have given it a whole lot more justification, such as examples of how this kind of misalignment has already occurred. Beyond the fact that we’re capable of simulating the evolution of many kinds of preferences, their argument isn’t even intuitively true to me. If we train something to do a task, it seems far more natural to assume it will end up with a preference for doing that task than with a preference for something vastly different and significantly more harmful.
I was really hoping for this book to usher in a positive change in how people talk about the existential risks of AI, but instead I was sorely disappointed. If you want a more clear-headed explanation of why we should be concerned about AI, I’d recommend checking out 80,000 Hours’ article “Risks from power-seeking AI systems.”
Published on February 19, 2026 1:30 AM GMT
Who Let The Docs Out launched their AI Safety Grant yesterday (linked here), which was aptly named ‘The Automation & Humanity Documentary Fund’.
This fund was established to provide early-stage research funding ($8,000) to filmmakers creating documentary projects focused on AI safety: specifically, the risks, unintended consequences, and ethical implications of artificial intelligence, with particular attention to the impacts on animals, humans, and our climate.
I’m the Managing Director of Who Let The Docs Out, and my question to you is this: What underreported and underestimated issues regarding artificial intelligence and safety shortsightedness need to be addressed?
This is your chance to collaborate with filmmakers to have your worries addressed and properly researched. Let's get this thread started!