
Security Complacency Meets Frontier AI: The Coming Collapse of ‘Secure by Apathy’


Published on November 25, 2025 9:39 AM GMT

The modern world is incredibly insecure along a wide variety of dimensions - because it’s not a problem. Usually. No one is trying to exploit the security of your email server, most of the time, so it’s fine if it is unpatched. No one is trying to hack the internet alarm clock radio you have on the counter, or the toy drone your kid is playing with, or even your car’s radio and (for non-celebrities, at least) your twitter or facebook login, so they can be insecure. And in general, they are. Even things you might worry about - your bank account, or your pacemaker - are generally pretty insecure. This matters much less when the cost of hacking is relatively high, and there are richer and/or easier targets than yourself.

The FBI releases annual data on “internet crime losses,” and the multi-year trend has been around 40% growth in losses each year, currently approaching $20b in reported, direct losses. Only a small fraction of those losses are related to hacking; the rest are non-automated scams and cryptocurrency-based losses (often also due to scams). This is worrying, especially when you realize that indirect damages are not included and not everything is reported - but it is not catastrophic, unless the risk changes.

The Case for Concern

The rise of low-cost LLMs is changing the bar for how hacking can be done and how scams can be automated, rapidly shifting the cost-benefit calculation for each. We are already starting to see that, along with the increasing capabilities of frontier LLMs, hacking is being automated. The AI Risk Explorer compiles events, and shows that sophistication has been rising rapidly - the third quarter of 2025 saw the first AI-powered ransomware and the first agentic AI-powered attacks. Over time, LLMs run by nefarious groups will increasingly be capable of automated reconnaissance, vulnerability chaining, password spraying, SIM-swap orchestration to defeat technical multifactor authentication, social attacks on multifactor authentication, and phishing kit generation - not to mention C2 automation for botnets and superhuman-speed responses to countermeasures. None of this will surprise researchers who have paid attention - and recent news from Anthropic makes the advances clear. But along with sophisticated attackers, it’s likely we’ll experience automated exploits of previously secure-by-apathy targets at scale. And the scale of the vulnerability is shockingly broad.

Of course, LLMs will help with cyber defense as well. But even if the offense-defense balance from AI favors defense, that won’t matter in the short term! As Bruce Schneier pointed out, the red team will take the lead. And while AI isn’t able to do this reliably yet, even a couple-percentage-point success rate would easily overwhelm response capabilities. This is especially true for various classes of scam, where the success rate is tiny but the scam is still economically viable, and where that rate could rise tremendously with high-volume, personally tailored scams. Model developers try to monitor and stop misuse, but the actual threats don’t match the research evaluating the risks, and developers likely only find a fraction of what is occurring.

So - how worrying is having your fridge hacked? Probably not so worrying - unless it’s inside your home network and can be used to infect other systems directly, letting hackers into your sensitive systems. How worrying is it to fall for a scam email? Not so bad, as long as you don’t share passwords across accounts - which you should not do anyway - and you use two-factor authentication. But people who rely on texts or authenticator apps on vulnerable phones could still be attacked from inside the home network, if and when the sophistication of attacks rises.

In 2024, LLMs started to automate a variety of scams and “social” attacks, like convincing less sophisticated users to give away their banking passwords or send cryptocurrency. This sits somewhere on the border between a social vulnerability to persuasion and a cybersecurity threat, but it is enabled by insecure digital infrastructure - starting, perhaps, with the long-understood and still-unaddressed insecurity of non-verified phone communications infrastructure, the increasing availability of unlabeled AI outputs despite methods to fix this, and other issues.

The Bigger Targets

And so far, we’ve only discussed individuals. The risks from near-term advances also include any organization that doesn’t secure itself against the compromise of its employees, as well as any organization with an internet-exposed surface, including sensitive agencies. These organizations regularly have unpatched systems and insecure user authentication for their employees.

Of course, many of the more important vulnerable targets will be able to detect even novel exploits, or address insider threats, and respond effectively, as occurs today. And when humans are carrying out the exploits, these responses might be fast enough to minimize damage. But once we have frontier LLMs that can run dozens, if not hundreds or thousands, of parallel instances to exploit systems, run the command and control, and quickly respond to anything done at human speed, it seems plausible that for some classes of incidents, rapid isolation may be the only reliable stopgap - and even that might happen too late to prevent key data from being exfiltrated, or to prevent the attackers from expanding their exploits elsewhere inside the networks.

Should This Be A Priority?

There are certainly other, larger risks, including misuse by terrorists, potential future bioengineering of novel pathogens, everyone dying, or other posited future risks. Each of these is more devastating than a cyber attack, if successful. But these are areas where tremendous efforts in detecting and mitigating threats are already occurring. Terrorism has been a focus of successful (if often overzealous) defensive and preventative efforts for decades, and increased threats are worrying but not obviously transformative. Similarly, governance options for biosecurity are feasible, and between those and traditional public health for fighting diseases, there’s reason for optimism that we will succeed in putting sufficient safeguards in place.

In contrast, cybersecurity’s playing field has long put defenders at a technical disadvantage, and arguably an organizational one as well. As Caleb Withers at CNAS pointed out recently, the scales could easily tip further. That means that unless major changes are put in place, AI progress will lead to increasing vulnerability of individuals, and to easier and more routinely devastating attacks against organizations. This isn’t the existential risk we’ve worried about; it’s far less severe, but also very likely more immediate. Models might not be there yet, though this is unclear given that the linked analysis doesn't include GPT-5 or Claude 4, and the recent (exaggerated) Anthropic report of an "autonomous" (i.e., partially automated) hack by China certainly does not demonstrate that unsophisticated actors could mount larger attacks at scale - but we’re getting there quickly.

Addressing the Threat

Despite the challenge, we know how to address many parts of this problem, and have for quite some time - though many of the critical approaches are unrelated to AI. The lack of robustly verified, or at least audited and secure, code for operating systems and other critical cyber infrastructure is striking - we’ve known about these gaps for decades, but we’ve seen no serious effort to eliminate them. The vulnerability of our phone and other communication systems to fraudulent misuse is another glaring gap - and while technical efforts are progressing, albeit far too slowly, the fundamental problem is that international cooperation is not occurring, and untrusted networks remain in place.

On the AI front, the lack of even token efforts to reliably tag AI and other digital outputs so they can be distinguished from human ones shows the issue isn’t being taken seriously (again, despite methods to fix this). And there’s no concerted effort to prioritize cyber-defense applications of AI, nor to hold AI companies legally responsible if their systems are used for nefarious purposes.

For all of these, the first step is to notice that we all face a set of serious problems in the near future, and that little is being done. In cases like this, the problem will scale quickly, and prevention is far cheaper than response. Unfortunately, the companies capable of solving the problems are not responsible for doing so. There’s little reason for the model creators to care, especially given how lukewarm they are about current misuse, and every reason to think the current widespread cybersecurity apathy will continue long past the point where the costs imposed by scalable attacks have become unacceptably high.

Thank you to Asher Brass for an initial conversation about the topic, and feedback on an earlier version of the draft.




Avoid Fooling Yourself By Believing Two Opposing Things At Once


Published on November 25, 2025 7:51 AM GMT

“Knowledge, like all things, is best in moderation," intoned the Will. "Knowing everything means you don't need to think, and that is very dangerous.”
Garth Nix, Lady Friday[1]

In the pursuit of knowledge, there are two bad attractors that people fall into. 

One of them is avoiding ever knowing anything. "Oh I could of course be wrong! Everything is only suggestive evidence, I can't really claim to know anything!"

The second is to really lean into believing in yourself. "I know this to be true! I am committed to it, for constantly second-guessing myself will lead to paralysis and a lack of decisiveness. So I will double down on my best hypotheses."

The former is a stance fearful of being shown to be wrong, and of the ensuing embarrassment, so it avoids sticking one's neck out. The latter is also fearful of being shown to be wrong, and so takes the tack of not thinking about ways one could be wrong.

I do not claim to solve this problem in full generality, but there is one trick that I use to avoid either of these mistakes: Believe two things. In particular, two things in tension.

What's an example of this?

Sometimes I notice how powerful human reasoning has been. We've discovered calculus, built rocket ships to the moon, invented all of modern medicine, etc. Plus, I notice all of the evidence available to me via the internet—so many great explanations of scientific knowledge, so many primary sources and documentation of events in the world, so many experts writing and talking. In such a headspace, I am tempted to believe that on any subject, with a little work, I can figure out what is true with great confidence, if I care to.

At other times, I notice that people have made terrible choices for a long time. Crime kept rising for a long time before people figured out that lead in paint and gas caused lower IQ and increased violence. People thought obesity was primarily an issue of character rather than a medical issue. People on the internet constantly say and believe inane and false things, including prestigious people. I myself have made many, many dumb and costly mistakes that I didn't need to.

I could choose to believe that I am a master of reality, a powerful rationalist that will leave no stone unturned in my pursuit of truth, and arrive at the correct conclusion.

Or I could believe we are all deeply fallible humans, cursed to make mistake after mistake while the obvious evidence was staring us in the face.

My answer is to believe both. I understand very little of what is true and I can come to understand anything. The space between these two is where I work, to move from one to the other. I shall not be shocked when I observe either of these, for they are both happening around me regularly.

The unknown is where I work. In Scott Garrabrant's sequence on Cartesian Frames, he frames knowledge and power as a dichotomy; either you can know how a part of the world is, or it can be in multiple states and you can have power over which state that part of the world ends up in. Similarly, I take two pieces of knowledge with opposing implications and associations; in holding both of these opposing beliefs, they set up a large space in the middle for me to have power over, where the best and worst possible outcomes are within my control.


A different route that people take to avoid fooling themselves is to believe a claim (like "I can come to understand anything if I try hard enough") and then to remember that it's conditional on trying hard enough. They try to hold on to that concrete claim and add a fuzzy uncertainty around what it means for a given situation. I find this less effective than holding onto two concrete claims that are in tension, where the two claims imply that there are other things that you don't know.

The mistake that I think people make is to remember the one claim they do know, and act as though that's all there is to know. If "We will probably all die soon due to AI" is the only thing that you believe, it seems like you know all that is relevant, all that you need to know (the current trajectory is bad, a pause would be good, alignment research is good, etc). But when you add that "We have the potential to survive and flourish and live amongst the stars" then suddenly you realize there's a lot of other important questions you don't know the answer to, like what our final trajectory might look like and what key events will determine it.

You might be interested to know where I picked up this habit. Well, I'll tell you. It started when I read Toni Kurz and the Insanity of Climbing Mountains by GeneSmith. See, until that point I had assumed that the story of my life would make sense. I would work on some important projects, form relationships with smart/wise/competent people, and accomplish some worthy things, before dying of old age (or the singularity happening).

Then I read that and realized that the story of these people's lives made no sense at all.

I think that there are two natural options here. The first one was to ignore that observation, to blind myself to it, and not think about it. Of course the story of my life would make sense, why would I choose otherwise?

The other route that people take is to conclude that life is ultimately absurdist, where crazy things happen one after another with little sense to it in retrospect.

As I say, I was only tempted by the first one, but the essay was a shock to my system and helped me stop blinding myself to what it describes. Instead of blinding myself or falling into absurdism, I instead believe two things.

  1. My life story has the potential to make a lot of sense and be something I look back on in pride.
  2. Most lives do not have a reasonable story to them.

Now all that's left for me is to understand how and why different lives fall into these two different buckets, and then do the hard work to make the first one obtain rather than the second.


To end with, here are some more beliefs-in-tension that I've come by in the course of my life. Please share your own in the comments!

There is much greatness around me | Most efforts around me are grossly inadequate/incompetent
We will probably all die soon | We have the potential to survive, flourish, and live amongst the stars
I am barely an agent and barely competent and barely conscious | I have the potential to become a principled and competent adult
I have taken responsibility for very little | I could successfully take more responsibility for problems in the world than almost anyone I know
I understand very little of what’s true | I can come to understand anything
Most instances of self-sacrifice are not worth it (1, 2) | Self-sacrifice is sometimes a crucially important step to take and we must be willing to take it
Most people have many strikingly accurate beliefs | Most people have many strikingly inaccurate beliefs
Most people do very useful and competent things | Most people do very wasteful and incompetent things
We can make many decisions rationally | We rationalize most of our decisions
  1. ^

    You might think that I put this quote here because I read "The Keys to the Kingdom" series as a child and loved it. However, that is not why; I put it here because a fellow Inkhaven-er was telling me about Garth Nix, I looked up quotes by him in the convo, and noticed this one was relevant for my post.

    However, your assumption that I read the series as a child and loved it would be a justified true belief because, on clicking through the author's Wikipedia page, I suddenly remembered that I had read the series as a child and loved it! But I entirely forgot about that in the course of finding the quote and choosing to use it.

    Good luck with your future accurate-belief-acquiring endeavors.




Ruby's Inkhaven Retrospective


Published on November 25, 2025 5:45 AM GMT

Most interesting takeaway? To my disappointment, my attempts at substantive intellectual contribution flopped in comparison to fun/light/easy-to-write filler.

So that we can better understand the experience ourselves, Lightcone staff have been participating in Inkhaven as their primary assignment for 7+ days each. My officially-supported stint ended yesterday, though I hope to maintain the streak through the end of the month.

I'm kinda disappointed. I failed at my primary writing goal: the posts I most cared about didn't get much traction, whereas the ones I wrote as "filler" did. Musings on this below.

My Inkhaven posts fall into a few groups:

  1. My series on how individual variation in cognition explains why other people (according to me) reason so poorly
  2. "Filler"
    1. The "what's hard about...?" line of posts
      1. The Motorsports Sequence
    2. Random story:  Out-paternalizing the government
  3. Misc "Substantive posts":
    1. I'll be sad to lose the puzzles
    2. Don't grow your org fast

 

On to my disappointment. In his 2014 book, The Sense of Style, Pinker defines "Classic Style" writing:

The guiding metaphor of classic style is seeing the world. The writer can see something that the reader has not yet noticed, and he orients the reader’s gaze so that she can see it for herself. The purpose of writing is presentation, and its motive is disinterested truth. It succeeds when it aligns language with the truth, the proof of success being clarity and simplicity. The truth can be known, and is not the same as the language that reveals it; prose is a window onto the world. The writer knows the truth before putting it into words; he is not using the occasion of writing to sort out what he thinks...

This was my hope with the cognition series. I had the feeling that I had seen something important, and that if I could only describe it well, others would see it too. This was the kind of writing that the LessWrong greats did: Eliezer, Scott, Zvi, etc. They could see things, and they helped you see them too, and enjoy doing so.

It's hard to write like that – hence it being a good aspirational goal for Inkhaven. I'd hoped the coaches, skilled writers as they are, could help me towards that[1].

I didn't get there. Maybe How the aliens next door shower got the closest. More thoughts on the struggle in a bit.


On my first day, I figured that I'd first bash out something easy and then work on my difficult main goal. Motorsport is an easy topic, so I sat down thinking I'd write The skills and physics of high-performance driving, but then it occurred to me that I had things to say about discussing the hardness of things in general, and that became my first post (and second most successful post). Something I wrote on the spur of the moment.

On days when I didn't feel up to the hard writing, I wrote filler: the motorsport stuff. That was much better received than I expected.

Out-paternalizing the government (getting oxygen for my baby) was also filler. I found the oxygen/dive shop/ABO thing amusing. "Agency to do your own medicine better than the default" is a perennially popular genre. The libertarian angle was me trying to make it more interesting. Watching the karma jump around, I know it got several strong downvotes, and I feel sympathetic to the people who complain about not knowing why.

"Don't grow your org fast" was triggered by a Lightcone team discussion of hiring. I had one interesting thought there but then figured it was a good post to flesh out the whole argument. I don't feel I did a great job and reception was meh, supporting that.

On Sunday I thought I'd write a general orientation piece about the state of the world, and somehow that triggered the thought "I'll be sad to lose the puzzles", which had the feel of a post that was good and would be popular. I think it hit the sweet spot of something not said often but that a lot of people resonate with. I've been thinking, yeah, if you write applause lights you get karma.

I have a draft "What's hard about running LessWrong?" I'm keen to write, but it's a hard one.


Alright, so the patterns...

The stuff that was easy to write about seemed to get the best reception. The stuff I thought was most interesting, novel, and valuable flopped.

Perhaps I violated the Pinkerian edict of Classic Style: "The writer knows the truth before putting it into words; he is not using the occasion of writing to sort out what he thinks." In fact, very much so. The thoughts about cognition and RL were new, I hadn't finished thinking them through, and I was hoping to lay the track immediately before the train arrived.

Not having the thoughts ordered in my own head, it was hard to order them at all.

In contrast, the "what's hard about motorsport" posts are content I have explained many times and have clear in my head. The other stuff... the ideas were just simple.

I've been repeatedly surprised that when I think I'm writing about something so simple and obvious it's probably boring, I can end up getting a really good reception. 

Also something something inferential distance? 

And man, it's generally frustrating that pieces I write quickly in a couple of hours routinely outdo the big interesting ideas I'd been working on for a lot longer. I wrote Conversational Cultures: Combat vs Nurture (V2) in four hours. A lot more went into Plans are Recursive & Why This is Important, which I think is more interesting and more important.


It's late and I'm trying to get other things done before the end of the day. Inkhaven writing is rushed. There's something to be said for clay pots, and I've learned stuff just by pushing myself to write. (More posts written than in the last how many years?)

Also, I feel it didn't work to think deeply and explain hard new ideas. I've basically done zero editing passes where I reworked a post: cut sections, reordered sections, rewrote them, made the language better.

I think I'm not really capable of writing and doing those in "one sitting" (where "one sitting" is kind of like a day). My brain is fried. So all my Inkhaven pieces got minimal iteration, and that was fine for the easy stuff, but not for the bigger idea.

I suspect the Cognition-RL series might have come out better as a single large post. Writing it that way would have forced me to flesh out the entire argument chain. Instead I was writing one day at a time, not sure where I was going. It also meant each day I was repeating content because I didn't trust that people had read the prior post.

I had thought chopping up a big thing into multiple pieces would be fine, even let me cover each piece better, but without better lookahead, I think it degraded things.


To close off this ramble, I'm a bit disheartened. My fun, easy writing was more popular than my attempts at substantive intellectual contribution. It's okay that it flopped, and it makes sense that that kind of writing is harder; it's just a disappointing update about the writing incentive landscape.

I've yet to think through how I'd adjust Inkhaven to get better results.

 

  1. ^

    The coaches were helpful, but I didn't feel like I was being guided into Greatness. It was also a bit jarring when coaches would tell me my writing was strong, tight, interesting, good, whatever; and then the readers didn't seem to agree.




Against Making the Same Mistake Twice


Published on November 25, 2025 4:54 AM GMT

I don't mind making mistakes. Obviously I'd avoid making them if I could, but sometimes that's not easy. I don't have solid ideas for not making mistakes the first time. I do, however, have some suggestions regarding the second time.

Perfect is the enemy of 99.99% uptime

Now, you might reasonably be thinking "But surely nobody expects us to be perfect. And not making mistakes a second time sounds very hard, maybe impossible."

True as far as it goes. First, I'm going to argue that when we care enough about reliably not making mistakes, we can get closer to perfection than you might have thought possible. Second, I'll explain why marginal mistake prevention is often cheap relative to the cost of the mistakes themselves.

The Federal Aviation Administration has my respect as one of the more competent organizations in America. Airplanes do not simply crash; each crash is investigated by the National Transportation Safety Board and the cause is ferreted out and then a fix is implemented. The FAA has not achieved perfection — you can read their list of failures at that Wikipedia link — but as a fraction of airplanes that lift off this is pretty impressive.

And of course there's medical operations. I'll go with deaths in childbirth; the mortality rate is around 25 per 100,000 live births. That is impressively low. And those deaths usually aren't because the obstetrician randomly goofed up an otherwise perfectly fine birth, either.

"Sure," you might say, "but I'm not an airline pilot and I'm not a medical doctor. Why is catching my marginal mistakes that big a deal?"

I'd argue it's because it's easier than doing it the other way. 

If I do the dishes but don't scrub them very well, then when the dish dries there's still foodstuff stuck to it. It's easier to double check the dish before moving it to the drying rack than to try and scrape off the dried stuck bits. It's not worth scrutinizing every dish under a microscope the way the FAA would, but it's worth a second check.

Once I went on a trip to another city, and forgot my laptop charger. That involved having to make two trips to electronics stores to find one that would work. These days I have a short checklist of items when I'm going on a trip, and I tap each object as I'm putting them in the bag.

Then of course there's my old job as a software engineer, which involved doing some devops work. Avoiding silly screwups becomes pretty nice when you consider the alternative is explaining to your boss why you broke the build.

Techniques for approaching perfection

Notes and Mortems

First, make note of your mistakes.

I'm not being metaphorical. Write them down. Do it somewhere you'll be able to pull up later. 

I have a habit in every project I work on, where as soon as I encounter my first problem I open up a document called "[Project] mid-mortem" and write down the problem. Then I go about solving the problem. Sometimes after the project — or during if I have time — I'll make notes on what the solution was. That later version is the post-mortem.

Then — and this is the key part — I reread those notes the next time I'm doing a project like the last one. Many problems are hard to solve day-of and easy to solve with a couple of weeks' foresight. I do pre-mortems, anticipating ways the project could go off the rails.

Checklists

Someday I expect to write a review of The Checklist Manifesto. Today is not that day.

Today is the day I talk more about checklists. Surgeons use them for surgery, and those checklists contain things like 'you had twenty surgical clips when you started this operation. Count how many you have now that you're done.'

I use them for everything from monitoring software build releases, to going on trips, to preparing for a dinner party. See, when you copy a checklist from last time and make a mark each time you accomplish an item, you get an extra chance to notice if you've forgotten your power cord again.

Ask for Feedback

You can't fix your mistakes if you don't know about them. So make it easy for people to point them out to you.

After big events, I like running a feedback survey. When I'm in the middle of work, I try to be approachable so someone can come up to me and point out a mistake I'm making, and I try to make that a good experience for them instead of getting visibly irritated at them. But I don't just hope for the happy circumstance where someone tells me they're having a problem; I try to proactively chat with people, asking how the event is going, and what the worst thing about the event so far is. (Hat tip to Saul Munn, from whom I copied the habit of asking about the worst thing.)

Two Minute Timeout

When I'm about to do something I can't easily undo, I stop, walk away for two minutes, then come back and look at it again.

This catches an amazing number of silly mistakes for a delay of two minutes. Now, it does obviously cost some time. That's not always worth it. But any time a two-minute delay would be worth it to avoid, say, accidentally booking a big event for the wrong weekend, or sending an email to the wrong person, I consider the two minute pause.

See your work with fresh eyes, and read it from the top. 

Learning what you don't know

Once upon a time I ran a big gathering, and someone got drunk.

I didn't really know how to handle it well. They were fairly out of it. An acquaintance of mine asked me to help, and I basically failed to do anything useful until the acquaintance took it upon themselves to solve the problem.

Afterwards, I took notes. I cross checked those notes against what the internet suggested doing, and then again against what some people who had wilder college experiences than I did suggested. I made a short list of what to do to help someone who is drunk. I then made moderate efforts to memorize the list. The next time someone around me got very drunk, I knew what to do.[1]

I do this a lot, and it's a lesser, slow superpower. I can't learn everything, but I can keep track of the gaps I keep running into.

Make next time better than this time

Why does all of this matter?

Partially I admit it's pride. I want tomorrow to be brighter than today, for my future to be better than my past.

Partially I admit it's dignity. When I sort Portland Oregon as though it were the city of Oregon in the state of Portland, well, mistakes happen. But doing it twice is embarrassing! 

Mostly this is because it's the realistic, tractable way I know of to get better. 

There is no replicable button for blasts of insight or sudden enlightenment. Not reading the Sequences, not being really smart and thinking super hard, not anything in the rationalist canon I know of. Sometimes people solve new problems nobody else has solved yet, sometimes it's even me, and yet this involves a bit of something clever that I can't transmit. But I can walk you through this, the art of making fewer mistakes, and that does hold pieces of the harder problems.

Step by step we ratchet towards perfection, and though we may slip along the way we'll get back up and try again — and hopefully not slip in the same place a second time.

  1. ^

    My list of what to do if someone is very drunk:

    1. Get them sitting upright or lying on their side. This is to prevent choking if they vomit.
    2. Get them water to drink. Small sips are good, but keep offering it to them.
    3. Keep them talking or focusing on you. Don't let them fall asleep.
    4. Danger signs: If any of these, call emergency services
      1. They keep vomiting and can't stop
      2. Their lips turn blue and stay blue
      3. They won't wake up even with shaking
      4. They stop breathing



Training PhD Students to be Fat Newts (Part 1)


Published on November 25, 2025 3:29 AM GMT

Today, I want to introduce an experimental PhD student training philosophy. Let’s start with some reddit memes. 

Every gaming subreddit has its own distinct meme culture. On r/chess, there's a demon who is summoned by an unsuspecting beginner asking “Why isn’t this checkmate?” 

These posts are gleefully deluged by responses saying “Google En Passant” in some form or other. Here’s my favorite variant:

Battle Brothers is an indie game of the turn-based strategy variety about keeping alive a company of muggles - farmhands and fishermen, disowned nobles and hedge knights, jugglers and pimps -  in a low-magic fantasy world filled with goblins, zombies, and dubious haircuts. 

Let me narrow down the kind of game that Battle Brothers is. It is sometimes said that all games are menus or parkour.

Battle Brothers is squarely a game of menus, a game of managing spreadsheets which happens to have graphics.

This is the skill tree for just one out of twenty brothers in a company.

The Battle Brothers subreddit has its own dominant meme, which is a little more mysterious than “Google En Passant.” Introducing … the Fat Newt.

Fat newt is a loving bastardization of the “fatigue neutral” build, which is a way of building brothers to minimize fatigue usage. Fatigue is the stamina/mana resource in this game, and attacking once costs about 15 fatigue. Normal brothers recover 15 fatigue a turn, enough to swing their weapon exactly once.

A trap that the vast majority of new players fall into is to spend too many stat points leveling up fatigue on every brother, in order to build protagonist-energy characters - fencers, berserkers, and swordlancers - who can afford to attack two or even three times a turn. Because they spread stats out, these fatigue-intensive builds are extremely demanding, requiring gifted brothers born with extraordinary talents. With all the points that should have gone to defenses and accuracy invested in fatigue, these would-be heroes meet their ignoble ends in the digestive tracts of Nachzehrers and Lindwurms as soon as they miss one too many attacks.

Only one in a hundred brothers has the native talent to be a real hero, dancing across the battlefield like a murder of necrosavants. The community meta that has developed in reaction is the extremely counter-intuitive Fatigue Neutral build, which completely ignores the fatigue stat in order to pump everything else up. You rely entirely on the brother’s base fatigue regeneration to swing only once a turn. In exchange, you get to wear the heaviest armor, wield the biggest axe, and take all the defensive and utility perks that you want. Most importantly, with all this extra slack, while only one in a hundred brothers have the stats to be a hero, one in ten brothers can be a great fat newt.
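To make the arithmetic behind the trade-off concrete, here is a minimal Python sketch of the sustainability math, using the rough numbers above (about 15 fatigue per swing, about 15 recovered per turn) as illustrative assumptions rather than exact game values; the helper name and the example fatigue pools are hypothetical:

```python
# Minimal sketch of the fatigue trade-off described above.
# ATTACK_COST and BASE_RECOVERY are illustrative assumptions taken from the
# rough numbers in the post, not the game's exact tuning values.

ATTACK_COST = 15      # assumed fatigue cost per swing
BASE_RECOVERY = 15    # assumed fatigue recovered per turn

def turns_until_exhausted(max_fatigue: int, attacks_per_turn: int) -> float:
    """How many turns a brother can sustain a given attack rate."""
    net_drain = attacks_per_turn * ATTACK_COST - BASE_RECOVERY
    if net_drain <= 0:
        return float("inf")  # fatigue neutral: one swing a turn, sustainable forever
    return max_fatigue / net_drain

# A "hero" build with a big fatigue pool still runs dry after a handful of turns
# at two swings per turn, while the fat newt swinging once a turn never tires.
print(turns_until_exhausted(max_fatigue=110, attacks_per_turn=2))  # ~7.3 turns
print(turns_until_exhausted(max_fatigue=60, attacks_per_turn=1))   # inf
```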

My first companies were ragtag teams of wannabe heroes, who cut through easy fights like chaff but then got slaughtered in reciprocity when they faced the first real challenge. Then, I did some research and learned the gospel of the fat newt. 

Nowadays, my teams are usually built around the same solid foundation: an impenetrable fighting line of four to six Fatigue Neutral brothers who can stand their ground and decapitate once a turn, supplemented by a few elites and specialists. To my knowledge, Fat Newts are the most salient example of a build defined not by its strengths, but by its weaknesses. They highlight the possibility that under the right conditions, optimizing is primarily about choosing the right dump stat.

Next time, we operationalize the notion of training PhD students to be Fat Newts…




How to love Battlefield as much as I do


Published on November 25, 2025 3:02 AM GMT

I love Battlefield so much I got a PhD in it. Most people I know can’t see the appeal, and I can’t blame them. Neither did I when I started. I started because my boyfriend was Swedish and wanted to convince me to move to Stockholm with him. We were video game professionals, and Stockholm only had one game studio. A studio that made Battlefield. I gamely tried the game, and spent 10 hours dying to snipers I couldn’t see while having no idea how to even find anyone.

This is how empty one of the maps looked (Heavy Metal). If you selected the wrong spawn point, it could take you a literal minute to run to where the action was, and snipers would kiss you goodbye before you got there.

It took me a long time to “get” this game, and now I believe I see a lot in it that almost no one else does. I truly believe Battlefield 6 Conquest Multiplayer on large maps is the Best Stealth Experience you can get in modern gaming, by a very long shot. Here is why.

All Multiplayer Shooters are Stealth Games

You might think a shooter is about shooting, and I’m not going to be so annoying as to deny that. But really every multiplayer shooter is a game of tag and hide & seek mushed together. We tend to focus on the “tag” part where you use your cursor to click on other players, and if you do this hard enough you drain their “health bar” and have tagged them to “death” (wow, narrative force much?). But the best antidote to getting tagged to death is to not be seen at all. And the best way to tag someone else to death is to get the drop on them so hard they don’t even know which way to run to make you stop touching them through the screen.

So I’d argue every multiplayer shooter is a stealth game in disguise (ha! See what I did there?) but Battlefield 6 takes the cake (and then hides it! Why the motherfucking why does it hide the fact it’s so good at being a stealth game?!). Now to understand some of that, you have to understand the basics of Battlefield 6.

Except that’s boring to explain. So I’ll skip it and link to this tutorial video for noobs, er, the noble hearts of the virgin shooter players. The short version is that I’m only going to talk about the 64-player Conquest mode on large maps. Two teams of 32 players compete to drain each other’s 1000-ticket pool. You lose a ticket when anyone on your team dies (and doesn’t get revived), but you also lose tickets every few seconds if the enemy team captures more flags than your team. A flag is a zone on the map that you own by having team mates on it (it remains yours if you walk off and no enemy shows up). You score points by doing actions useful to making your team win. One useful action is killing an enemy. But then there are 93,730 other useful actions, give or take. And that’s where things get interesting.
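To make the ticket economy concrete, here is a minimal Python sketch of how the two drains combine; the bleed rate and interval are assumptions for illustration (the mode as described only specifies a 1000-ticket pool, a ticket per unrevived death, and bleed "every few seconds" when out-flagged), and the function name is hypothetical:

```python
# Minimal sketch of the Conquest ticket logic described above.
# The exact bleed interval and rate are illustrative assumptions,
# not the game's real tuning values.

TICKET_POOL = 1000
BLEED_INTERVAL_SECONDS = 5   # assumed: "every few seconds"
BLEED_PER_INTERVAL = 1       # assumed tickets lost per interval when out-flagged

def tickets_remaining(deaths: int, seconds_out_flagged: int) -> int:
    """Tickets left after a given number of unrevived deaths and time spent
    holding fewer flags than the enemy team."""
    bleed = (seconds_out_flagged // BLEED_INTERVAL_SECONDS) * BLEED_PER_INTERVAL
    return max(0, TICKET_POOL - deaths - bleed)

# Even with relatively few deaths, spending 20 minutes out-flagged drains more
# tickets than the deaths themselves, which is why flag play beats raw KDR.
print(tickets_remaining(deaths=150, seconds_out_flagged=20 * 60))  # 610
```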

This is not me. This is Sibello’s Tank Repair Service, which outscores their entire team despite only having 1 kill. (K = Kill, D = Death, A = Assisted in a kill by a team mate, [flag] = flag captures)

See, normally people think about shooters as being about shooting. And you measure your skill by how many kills you get compared to your deaths. This is called your kill-death ratio (KDR). Now, in Battlefield 6 Conquest, your KDR only marginally matters. What truly matters is how many points you score. And these points are awarded for any strategic action that helps your team win. That means you can opt out of the “tag” part of the tag/hide & seek combo that is multiplayer shooters. And then Battlefield puts the hide & seek part on steroids.

That’s three enemies nearby. Imagine trying to notice them between plants or peeking around a corner in the distance.

Battlefield games combine two things no other major multiplayer shooter does: realistic graphics and destructible environments. Realism means that it is actually hard to see and be seen. And destructibility means the map constantly changes. You can blow a hole in the ground and then hide in it. And you can blow a hole in a wall and then jump through it. And you can blow a hole in a tree, a house, or a massive crane till it forgets how to be vertical, and then huddle in the rubble. Basically, the maps change all the time, it is genuinely hard to see where people are, and you have to keep searching and thinking to figure out sightlines. That means every second in Battlefield is spent triangulating where you are visible from and which way you should be looking. It is the ultimate continuous mental rotation and tracking test as you subtract all the sightlines covered by friendlies, interpolate where the enemies might be in the gaps, and then check if that wall that’s normally there still exists at all. Believe me, it feels a special sort of frustrating to die from walls not being where you expect them to be, and a special sort of glorious to be the cause of that and find an enemy huddling on the other side.

What Multiplayer Stealth Looks Like

I thought about writing this as a game guide, but then you’d have a list of steps to execute to find out if I’m right or (even more right) wrong. So instead, I’ll tell you stories of fun I’ve had, and that you could be having too.

Like the one time I was our main intelligence officer. I ranked among the top ten players of my team by scoring over 80 assists and no kills. What did this look like? You select the Recon class and then you get gadgets that let you “spot” enemies. Spotting lets you put a red diamond above the enemy’s head. For a few seconds this diamond becomes visible through walls to you and all your friends. This is a major game changer. When you focus on spotting, you basically spawn into the match and spend your time avoiding enemies while finding a good vantage point on the map. Then you use your gadgets to spot everyone in sight. If a team mate then kills that enemy, you get points! You are basically fighting the “intelligence” war of the game.

Another time I was the sneaky ninja of the team. I took a flag by hiding in the back of a truck whose truck bed was open at the back. I saw enemies run past me within 1-2 metres, but they simply did not expect me to be in the truck. It was the most hide-and-seek glee I’ve experienced in a decade! Now, during a flag capture, you can see a ticker of how many friendlies versus enemies are on the flag, so they knew someone from my team was somewhere in the flag zone. I could see them running around like mad. And I, as recon, put down a sensor under my butt so they couldn’t see its flashing light in their peripheral vision whenever they ran past my hiding spot. I also made sure not to move a muscle, so my movement wouldn’t give me away either. Keep in mind, if any of them had looked my way, they would have seen me! But a soldier in camo in a truck bed is just not very noticeable in peripheral vision, and none of them figured it out. Meanwhile, the sensor spotted enemies around it, through walls and everything, so my team mates outside the flag area started picking off the enemies around my truck like they had a sixth sense. Two minutes later, they stormed in and we all took the flag.

Then another time I was the angel engineer of death. I was driving a tank on my own but came under fire. With almost no health left, I bailed and crawled under the tank to repair it. The enemy was smart and sprinted for the now-empty tank, and jumped in. Except, when you use a repair tool on an enemy tank, you damage it! Now, instead of lying under my own tank repairing it, I was lying under an enemy tank damaging it. It blew up in a matter of seconds as I lay on the couch wheezing with laughter.

Ultimately, It’s All Mind Games

So why is Battlefield 6 Conquest on large maps such a good stealth experience? Because you are competing against other humans in a massive game of hide and seek, in a realistic environment that changes with every match. And that type of true stealth is really just about getting inside each other’s heads: What can they see? What can they hear? What do they expect you to do?

And then you go and do something else.

The biggest kick comes from getting the jump on people who shoot faster and more accurately than you. If you get the drop on that 5.2 KDR player, you know you outsmarted them. And if you top the scoreboard with a winning team while only having a handful of kills, then you know it’s brains over “brawn” all the way.

The guns and war are all just fluff. It’s set dressing for a game of hide & seek with five layers of strategy on top. We haven’t gotten into counter-sniping, the intelligence war, and how to fend off (or acquire) air dominance. You can basically boot up Battlefield 6 and play each match as a different sort of mini-game. It advertises itself as a shooter or a war simulator, where you can drive or fly vehicles in epic battles. But no one tells you it’s really the only game where you can be a true ninja, where stealth matters and will help you turn the tide of battle. They hide that part far too well.


