2026-03-29 07:10:16
I think the future of AI is really important, and it would be valuable to know which experts have been right and wrong about progress and effects. It would be worth keeping a website up on important people's track records (superforecasters, famous domain experts, frontier lab people, AI 2027, Situational Awareness, etc.).
Currently, I think there's an incentive problem where it kinda pays to make vague predictions. This disincentivizes people who are putting their neck out and means it's much more difficult to cut through the noise.
Solution: track imprecise predictions—either by pinning them down precisely or by not evaluating over Brier scores but just giving vibes as to whether it seems like they got it right (flagging for uncertainty).
What I currently have in mind: a site that aggregates from existing platforms like Metaculus, Good Judgment, and Manifold, while also scraping the web for predictions made outside them—interviews, posts, podcasts. When an expert makes a prediction anywhere, someone can submit it to be moderated and added to their record. The goal is a single place where you can look up anyone who people might actually defer to and see their full history, whether or not they ever opted into a forecasting platform. You’d probably want to prioritize the most important predictions from the most important people.
This is different from existing platforms like Good Judgment in two ways. First, it tracks people who never opted in—the forecasters, lab researchers, and public intellectuals who make predictions in interviews, posts, and podcasts but don't put them on a platform. These are often the most influential voices, and right now they face basically no accountability. Second, the UI should make it easy to look up a specific person and see their full prediction history at a glance, which existing tools, imo, make surprisingly difficult.
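To make the "track imprecise predictions" idea a bit more concrete, here is a minimal sketch of what a single record in such a tracker might hold. This is purely illustrative: the field names are my own and not taken from Metaculus, Manifold, or any other platform's schema.

```python
# Hypothetical data model for one tracked prediction; every field name is illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TrackedPrediction:
    predictor: str                  # e.g. a superforecaster, lab researcher, or report author
    claim: str                      # the prediction as originally stated, verbatim
    source_url: str                 # interview, post, podcast transcript, or platform page
    stated_on: date
    resolves_by: Optional[date] = None          # None if the claim never pinned down a date
    operationalization: Optional[str] = None    # moderator-added precise version of a vague claim
    verdict: Optional[str] = None               # "correct" / "incorrect" / a short "vibes" judgment
    uncertainty_flag: bool = False              # set when graders could not pin the claim down cleanly
```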
One note on how to use it: I think the norm around the site should still be to form inside views rather than simply defer to the rankings, which might complicate things. As Thomas Larsen rightly points out, leaning too hard on aggregated reputations risks deference cascades—people updating on each other's records rather than on the object level. After all, I do think that part of LW alpha comes from forming inside views and not deferring, and we should fear losing that.
When I last tried building this with Claude Code, it was too difficult to do in one sitting. If someone wants to work on this with me over a weekend, reach out—I'd be curious to see how far we can get.
I think this matters a lot for [AI for epistemics], especially if you think those epistemics will be shaken up soon by TAI.
One concern worth flagging: this could also create perverse incentives. Someone could build a track record, use it to shift opinion, then make a bad prediction at the worst moment. I think this risk is real but manageable—the tracker is most dangerous if people concentrate their trust in a very small number of highly-rated forecasters, which is itself a bad epistemic practice regardless.
2026-03-29 06:57:53
AI usage for this post: I wrote the draft on my own. While writing, I used Claude Code to look up references. Then Claude Code fixed typos and reviewed the draft; I addressed its comments manually.
Epistemics: my own observations, often inspired by conversations on X and Zvi's summaries.
As Zvi likes to repeat, Language Models Offer Mundane Utility. Agent harnesses are the most advanced way to use language models. At the same time, they are not perfect - the capabilities frontier is jagged, sometimes they make mistakes, and sometimes they just nuke your 15-year photo collection or production database. Using AI agents efficiently is therefore a skill, and I want to get better at it.
What tips, tricks, and approaches are you using to improve your efficiency with agent harnesses? I personally focus on Claude Code and Codex CLI, mostly for single-person software development, but I welcome suggestions for other tools and other areas of use. Ideally, share what you tried and what works for you; I'll try it myself and see whether it improves my workflow.
Here are my discoveries (with my level of confidence in each).
The best available model at its highest thinking effort usually produces the best results and requires the least handholding. Be careful about what this means in practice: for example, Claude Code uses high thinking effort by default, but the best level is max, which you need to actively turn on yourself. Unless you are very sensitive to speed and cost, you should do this. It may take a bit longer and consume more tokens, but that is much better than requiring multiple iterations from you.
When working on code in a git repo, give each AI session its own git branch and worktree. That way the sessions can work in parallel without fighting over the same files. Intuitively this would lead to merge hell, but fortunately AIs are good at merging.
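A minimal sketch of how this could look, assuming you run it from inside the main checkout. The branch names and directory layout are made up; `git worktree add -b` simply creates a new branch checked out in a separate directory.

```python
# One branch + worktree per agent session, so parallel sessions never edit the same checkout.
import subprocess
from pathlib import Path

def make_session_worktree(session_name: str, base_dir: str = "../agent-worktrees") -> Path:
    """Create ../agent-worktrees/<session_name> on a fresh branch of the same name."""
    path = Path(base_dir) / session_name
    path.parent.mkdir(parents=True, exist_ok=True)
    # `git worktree add -b <branch> <path>` creates the branch and checks it out in <path>.
    subprocess.run(["git", "worktree", "add", "-b", session_name, str(path)], check=True)
    return path

# Example: three sessions working in parallel, each in its own directory.
# for name in ["fix-auth-bug", "ui-polish", "migration-script"]:
#     make_session_worktree(name)
```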
Codex CLI lets you turn on /fast mode, which speeds up processing about 1.5x but consumes your token quota about 2x faster. If you work on highly interactive tasks that require lots of your input, and you are not constrained by money (e.g. you can afford the $200 subscription), you should do this. In my experience, without /fast, the AI is slow enough that I can juggle 5-6 sessions before any finishes. With /fast, sessions finish quickly enough that I only need 2-3 at a time. The smaller batch means less context switching, which I find more productive overall.
The toggle is global, so it affects all your sessions at once.
Claude Code also has /fast to speed up processing about 2.5x, but don't rush to turn it on yet - it charges you extra on top of your subscription at API prices, and it is very expensive ($2-5/minute/agent). I haven't tried it.
I discovered that my software development efficiency grew greatly once I started using extremely verbose logging. The idea is that once something goes wrong, the AI has a trace of what happened: I can describe the higher-level symptom very briefly and it can investigate from the logs on its own. This is especially useful when the issue is hard to reproduce or only happens occasionally. Another variation of this is to have some way to export debug information from your app for a specific object or instance. That way, when something goes wrong, you feed that to the AI after one click and let it investigate.
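Here is a small sketch of what I mean, assuming a Python codebase; the file name, function, and fields are illustrative, not from a real project. The point is that every entry carries enough context for an agent to reconstruct what happened from the log alone.

```python
# Verbose, structured logging: inputs, outputs, and full tracebacks all land in one trace file.
import json
import logging

logging.basicConfig(
    filename="app.debug.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s %(funcName)s:%(lineno)d %(message)s",
)
log = logging.getLogger("app")

def handle_request(user_id: int, payload: dict) -> dict:
    # Log inputs verbatim so "why did this request fail?" is answerable from the trace alone.
    log.debug("handle_request start user_id=%s payload=%s", user_id, json.dumps(payload, default=str))
    try:
        result = {"ok": True, "items": len(payload.get("items", []))}
        log.debug("handle_request ok user_id=%s result=%s", user_id, result)
        return result
    except Exception:
        # log.exception records the full traceback, which is what the agent investigates from.
        log.exception("handle_request failed user_id=%s", user_id)
        raise
```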
I don't know the reason, but once in a while I get a couple of days when the AI just seems incapable of doing anything. Where previously it would correctly implement a bunch of stuff after one message, now it keeps misunderstanding what I want; I need 30 iterations, it still does not get it, and the process never converges. In this case I switch to the competitor (basically Opus 4.6 <> GPT5.4). The key is to detect this early and switch early. This is very sad, because migrating all the skills and setups between Claude Code and Codex sucks, and I don't have an efficient way to do it.
I haven't found a solution to this myself, but I strongly believe that solving it would have a huge impact. I want the current AI session to have seamless access to all the past information I have given it. Both Claude and ChatGPT implement this in their web interfaces, but not in Claude Code / Codex CLI. I am currently experimenting with github.com/doobidoo/mcp-memory-service. Issues discovered:
The memory retrieval seems to be fine. I am experimenting with using a separate Claude session in the background to extract memories from every message.
If you have found a good solution you are happy with, please share it.
To my surprise, just asking a new session of the same AI to review a plan or piece of work often surfaces useful insights the original session missed. You can also ask a different AI. My hypothesis is that the original session had to track lots of details, so its attention was dispersed, whereas the new session can focus only on this particular review and thus has more effective brainpower allocated to it.
This is a guiding principle for multiple ideas. Basically, in my experience AI works well enough most of the time and the main bottleneck to getting stuff done is me.
The first way I bottleneck AI is by reviewing its requests for permissions to do stuff. Many people resolve this by YOLO mode, where AI can do everything it wants. I like my photos and production databases, so I don't feel comfortable doing this. I am also worried about prompt injections from the web.
I see two ways to partially resolve this issue:
Some people run AI in YOLO mode on a server, basically limiting the worst-case scenario of a failure. I still don't like this, because it can still leak your git API key and your repo.
Many people I know just YOLO and have never had any issues.
The main message here is that reviewing every permission request kills your efficiency. Find some way to solve this.
By default, the only indication that the AI needs your attention to review a permission request is text in the terminal. When working on multiple AI sessions in parallel, this is very easy to miss. You should set up at least a sound notification for new permission requests that need your attention, or for the AI finishing its turn (it is either done or needs your input).
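As a starting point, something as small as the script below works: a generic "ping me" command you can wire into whatever hook or notification mechanism your harness exposes. The sound file path is a macOS assumption, and the exact hook wiring differs per tool, so check your tool's docs.

```python
# notify.py - play a sound / ring the terminal bell when an agent needs attention.
import platform
import subprocess
import sys

def ping(message: str = "agent needs attention") -> None:
    # Terminal bell as the lowest-common-denominator fallback.
    print(f"\a{message}", file=sys.stderr)
    if platform.system() == "Darwin":
        # macOS ships system sounds under /System/Library/Sounds/; pick any you like.
        subprocess.run(["afplay", "/System/Library/Sounds/Glass.aiff"], check=False)

if __name__ == "__main__":
    ping(" ".join(sys.argv[1:]) or "agent needs attention")
```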
If you just add a sound notification and you have 10 sessions across virtual workspaces, finding the correct window is cumbersome. I solved this by making my own dashboard to oversee the state of all sessions. I don't think building something this polished is a good idea for you, because it took lots of iterations: Claude Code hooks are fairly fragile, and Codex has extremely poor hooks. If there is a ready-made solution you use for this and are happy with, please share it. The main idea is that you need some way to quickly identify which session requires your attention without checking all of them.

Your attention is the bottleneck, so if you can let the AI do even a bit of what you would otherwise have to do yourself, you should - even if it takes the AI much longer to accomplish it.
In my case, I made a skill for presenting its UI changes to me. It runs the website in Docker in a worktree, prepares the database to exactly the state needed for the UI test, tests the UI on its own, and fixes whatever issues it finds. Then it brings up the exact screen I need to review and gives me an overview of what to check and what the context is. I just look through the minimum necessary flow and either LGTM or tell it what to fix, and then it repeats. This is not perfect - it often ends up presenting the wrong state or starting too far from the interesting part - but it is much faster than me trying to prepare the correct DB state on my own. There are also tools like Storybook for reviewing UIs with mock data.
The main message here is to understand how this applies to your use case and let AI do as much of the boring work as you can get away with.
Example: I have 5 test cases to review. I tell the AI to start 5 new sessions - one per test case - and they test them and prepare the reviews in parallel (see "Offload as much as you can to AI" above).
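A rough sketch of what that can look like, assuming your CLI has a non-interactive mode you can call with a prompt; the `claude -p` invocation below is an assumption, so substitute whatever your tool provides, and the test-case names are placeholders.

```python
# Fire off one non-interactive agent run per test case and collect the write-ups.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TEST_CASES = ["signup flow", "password reset", "checkout", "admin export", "locale switch"]

def review(case: str) -> str:
    prompt = f"Run the '{case}' test case, note any failures, and write a short review."
    # Assumption: `claude -p <prompt>` runs a single non-interactive turn and prints the result.
    out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return f"## {case}\n{out.stdout.strip()}"

with ThreadPoolExecutor(max_workers=len(TEST_CASES)) as pool:
    for report in pool.map(review, TEST_CASES):
        print(report, "\n")
```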
I feel like this should be a way to gain more leverage easily, but I struggle to come up with effective ways to do this. Also I rarely have easy-to-parallelize cases like the example above.
I wrote my own UI to wrap Codex's App Server. My main goals were Auto Review of permissions and better hooks. Overall I found this very interesting, because I could see how I use the tool, find an inefficiency, and immediately address it - e.g. I can choose how much information I want to see from the AI. The main downside is that the complexity grows fast and developing it takes a lot of time, so I suspect it was net negative overall, but an interesting exercise. Also, because AIs have bad days, I have to switch to Claude occasionally, and its SDK seems much worse to wrap, so I don't support it yet.

I didn't expect to write down this many of my own ideas here, but it was useful to list them all. I would love to hear your ideas on how to use AI agents more efficiently.
2026-03-29 05:57:54
We all know we ought not to doomscroll, or to make snarky comments, or to snack mindlessly, or to endlessly replay in our minds that conversation where we felt misunderstood or slighted. And this “ought” is not imposed from the outside. It’s not that we’ll be judged by someone. It’s just that if we want to be happy, if we want to get things done, if we want to experience joy and enthusiasm and meaning and fun, we’d better not do those things. Not too much, anyway.
We even know how to not do them. It’s not rocket science in the first place, and there are plenty of genuinely effective methods out there just one Google search away. But sometimes… we do those things anyway. As entrepreneur Derek Sivers famously put it (I’m told), if information was the answer, we’d all be billionaires with 6-pack abs.
(In the interest of transparency, let me just state that the practice I describe in this post hasn’t made me a billionaire with 6-pack abs. But give it time.)
To stop doing something that is holding us back, we need to be vigilant. To not miss opportunities to do things that move us forward, we need to be vigilant. We know what benefits us and what harms us, and all that remains is to actually do the former and not do the latter.
There is a Pāli word for this vigilance, this diligence. It’s appamāda, translated as ‘heedfulness’. (Appamāda is the negation of pamāda, meaning ‘heedlessness’ or ‘negligence’.) The Buddha was famously bullish on appamāda, calling it the quality that encompasses all skillful qualities. In fact, according to the Pāli canon, his last words to his followers were "appamādena sampādetha" – bring your task to completion through heedfulness.
It would be nice if we could just decide to be heedful. But knowing myself, it’s clear to me that if I pledged right now to be heedful every waking hour of the rest of my life, it would simply not happen. I would be enthusiastic about it for a couple of hours, and then, come evening, I would start to forget. Tomorrow morning I would see a reminder on my calendar and feel another rush of excitement – but only for a short while. And a week from now, everything would be back to the way it was.
But there is something I can do. I can be heedful for twenty minutes, or forty-five. I can train that muscle.
And that is what I am doing: heedfulness workouts.
A heedfulness workout is a simple thing: I just decide I’m going to be heedful for some fixed amount of time. Sometimes I set a timer for 20 minutes or so; more often, I simply decide to practice to the top or bottom of the hour, or something similarly salient. Usually it’s no less than 10 minutes and no more than an hour and a half.
During the workout I pay attention to what I’m doing and how I’m doing it and to the myriad small choices I’m making like what to work on next after finishing some task. When I notice there’s a choice to make, I try to take the most beneficial option, not the most salient or tempting one. And that’s pretty much all there is to it.
I realize that this description is a bit scant on the details, so I’m going to give some examples of the kinds of things that may happen in a heedfulness workout. But I want to emphasize that it’s not a list of things you need to do or boxes you need to tick. The particulars aren’t the point. The point is noticing when you are about to make a choice – implicitly or explicitly – and trying to choose the best option with the information you have.
Here are some of the kinds of things I might notice during a heedfulness workout and what I might think and do in response.
I reiterate that the point of the practice is not to make those exact same observations and respond in exactly the same way. What comes up, and how you should respond to it, depends on your state and your circumstances.
Sometimes positive emotions arise during the practice: excitement or exhilaration born of an expansive feeling of freedom – the freedom to do things that matter to me, to do what moves things forward. At other times, I don’t feel anything particularly remarkable.
And sure enough, training the heedfulness muscle is having an effect outside of the workouts. Curiously, in my case, it’s not so much a feeling of being able to exert more force against unhelpful impulses (although that’s probably happening too) but more like… things loosening up. The mind being less rigid and slightly less controlled by habitual patterns.
Sometimes, outside of a workout, I reach for my phone for stimulation, catch myself, and stop. Other times, the conscious thought arises, “Hey, I could be heedful about this situation.” And then there are times when I reach for the thought deliberately: “Let me try sprinkling a little heedfulness on this.” The image is of a salt shaker filled with savoury goodness.
And that little sprinkling of heedfulness can turn an unpalatable situation into a delectable one.
2026-03-29 03:00:07
All my previous posts on here have only been ACX meetup notices (if you're in Phoenix AZ, you should come say hi!). However, my wife and I recently started co-writing a Substack, like all the cool kids. Our meetup Discord told us that this was a particularly good article and encouraged me to crosspost it to LessWrong. Thus I present: our guide to NFP, because we have health concerns about hormonal birth control, and barrier methods feel lame.

The Marquette Method is one of the fanciest and premier ways to do cycle tracking and NFP to avoid pregnancy. However, clear online guides for this method are relatively scarce; they’re around, but mostly not conveniently located or well done. Marquette University, the people who invented this method (and run the flagship site Vitae Fertility), seem to like to keep their info pretty close to the chest. They don’t publish much in the way of detailed instructions, preferring to write in generalities, and they quite heavily push the necessity of paying them for a consultation to be taught the method. Lucky for us, it’s the 21st century, and we have deep research LLMs that can hunt down the original papers and publications describing this method and compile the sources for us. We did all the research and wrote up a guide for ourselves, and then figured we might as well enlighten the rest of the world by tidying up and publishing what we’d written for ourselves.
Our guide here only applies when you’re on a regular cycle, and not postpartum. The method requires some adjusting if any of the following are true of you:
For everyone not in that exceptions list, our guide should suffice. The Marquette Method essentially feels like the Standard Days Rhythm Method, but using the Clearblue Monitor to increase precision and decrease accidental pregnancy rate from ~5% to <1% in perfect-use settings.
The basic Standard Days protocol has you count the days in your menstrual cycle, with the first day of your period as day 1, counting up from there.
Here’s the basic Standard Days protocol:
The Marquette Method uses the Clearblue fertility monitor, which measures your urine for estrogen and luteinizing hormone (LH) to check your fertility status. The monitor is designed for conceiving, but with a few simple tweaks in use, its purpose can be inverted to avoid conception instead.
Set up the Clearblue monitor as normal according to the instructions. It will ask you to mark the first day of your cycle, and it will begin asking you to take urine samples on day 6. The monitor will usually ask for ten days of testing, though this may vary over the months as the system acclimates to your particular cycle length and biology — on your first use, it will probably test closer to 20 days as it gets a read on you.
Consider yourself fertile beginning day 6 (i.e. day 5 is the last day of free sex). Or, if you’ve got a slightly higher risk tolerance, you can follow the Standard Days Method and consider yourself fertile beginning day 8. The two-day difference comes from the risk tolerance levels between the Standard Days Method and the Marquette Method. About 10% of women with regular cycles are fertile by day 6; the authors of the Standard Days Method were willing to accept this low level of risk in their protocol, as every extra abstinence day reduces adherence. Marquette wanted a method that would be functionally impregnable, so they took a more conservative number of days — and given their methodology holds unintended pregnancy at <1% with perfect use — they’ve succeeded.
Once your estrogen begins to rise, the monitor will show high fertility (usually for about three days), you will be at peak fertility for two days when your LH spikes during ovulation, and then you will usually have one more day of high fertility after the two peak days. Generally, you're infertile after three full days following the last peak day (e.g. if your last peak day was day 14, you're safe beginning day 18). The extra cautious are sometimes counselled to wait for three consecutive low readings to make absolutely sure they're out of the fertility window.
After six months, adjust your fertile window from beginning on day 6 (or 8) to beginning on the earliest peak day seen in the last six cycles, minus six (e.g. if your earliest first peak day was day 13, consider yourself fertile beginning day 7 (13 - 6 = 7); i.e. day 6 is your last safe day). This changes your fertility window from being based on population averages to being based on your specific body's patterns, thus increasing accuracy.
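To make the counting explicit, here is a tiny calculator following the rules as described above (day-6 default start, "earliest peak day minus six" once you have six cycles of data, and infertile again three full days after the last peak day). The function names are mine, and this is illustration, not medical advice.

```python
def fertile_window_start(earliest_peak_days=None, conservative=True):
    """First cycle day to treat as fertile."""
    if earliest_peak_days and len(earliest_peak_days) >= 6:
        # Personalised rule: earliest first-peak day over the last six cycles, minus six.
        return min(earliest_peak_days[-6:]) - 6
    # Population-based default: day 6 (Marquette) or day 8 (Standard Days).
    return 6 if conservative else 8

def infertile_again_on(last_peak_day):
    """First cycle day to treat as infertile again: three full days after the last peak day."""
    return last_peak_day + 4

# Examples matching the text: earliest first peak on day 13 -> fertile from day 7;
# last peak day 14 -> safe again beginning day 18.
assert fertile_window_start([15, 14, 16, 13, 17, 15]) == 7
assert infertile_again_on(14) == 18
```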
Once you have six months’ worth of data, your protocol will look like this:
(For illustration purposes, I’m going to use day 13 as the earliest day of your first peak day in the past six cycles)
The chance of pregnancy with the Marquette Method is less than 1% per year with perfect use. With typical use, the chance of pregnancy is ~5%. People worry about this, but you shouldn’t — provided you don’t have skill issues. Really, “typical use” just means that you didn’t follow the instructions. This includes things like having unprotected sex when you know you’re fertile because you’re just so very horny. The trick is not doing that.
Some sources will suggest that you can supplement the Marquette Method by tracking your cervical mucus, basal body temperature, or other such indicators. Do not do this. Adding additional fertility indicators actually lowers your success rate compared to straight Marquette. This is for two reasons. First, more indicators add complexity, and confusion can lead to mistakes. Second, and more importantly, self-control is hard; if you have multiple conflicting indicators, you’re likely to pick whichever one allows you to have sex in the moment when you’re horny, thus defaulting you to whichever indicator is least conservative in that instance — not a recipe for success. Stick with the simple and objective single metric.
Obligatory legal note: This post is offered for educational purposes only; it is not medical advice. The Clearblue Fertility Monitor is cleared by the FDA solely as an aid to achieving pregnancy; employing it to avoid conception is an off-label practice not evaluated or endorsed by the FDA. Always confer with a qualified clinician.
2026-03-29 02:38:44
They found it in one of the early Mars expeditions, a bit after they had travel back and forth figured out well enough to keep a permanent outpost manned out there. The lab ran expeditions into some nearby caves in the hope that they’d turn out to be a good spot for an expansion. That hope didn’t turn out too well - something about the local geology, they ended up figuring it’d be more cost-effective to just land more pods - but they found the Organism there.
Not that any of this impacted me much at the time, beyond my general interest in space news to take my mind off the problems on Earth. I was still living in Manhattan, doing my day job as a government think tank security analyst and hoping AI winter would last long enough for me to save up a bit before it could replace me.
They always called it the Organism, but no one was even sure it was organic. It was arranged in clusters of off-white hexagonal tubes, and there was certainly some kind of chemical process there - they’d grow on their own, even faster than bamboo shoots, but in a much simpler chemical process that had the biologists arguing over whether it qualified as alive. It had some emissions, but early experiments convinced people it wasn’t toxic, and the guys at the lab - the one in the Mars station, no one was bringing it home to Earth yet - brought some home and started studying it pretty casually.
It took a while for anyone to formally notice something weird was going on. The geo lab, the one that studied the Organism, kept publishing banger results, and not even just about the Organism. They seemed to be ahead of the curve across the board - they got almost as much done on soil research and solar cell adaptation in their off time as the pods that were actually studying those topics. Some people are just crazy smart, I guess, but the solar cell guys we’d sent to Mars had been top of their game and they were still getting lapped.
They started making jokes about the Organism getting the geo lab all high - you could smell it a bit, at least in the sterile sealed station air - and the geo lab gave the solar cell guys a sample as kind of a gag gift. Two weeks later the solar cell guys came up with a brand new solar cell design that pushed peak solar cell efficiency up by two percentage points. It wasn’t even just a Mars thing, it was deployable on Earth. Whole research labs in China had been working on that for years without getting close.
You can bet people took notice after that.
They isolated it on Mars at first, of course. You don’t risk bringing back something that grows fast and has some sort of weird effect on human brains until you’re sure it’s safe. It didn’t look like it was a trap - even the serious people in government had to consider aliens messing with us at that point, but they explored through the Mars caves and there was no sign of any sort of life, biological or weirder. It was such a weird natural artifact that some people thought it might’ve been designed by a now-extinct Martian civilization. The geologists had some arguments about how some rock formations on Mars were evidence that it had life at one point, a billion or two years ago. It was all theoretical either way.
The Organism itself was more complicated, and even the biologists using it only ever half-understood it. Something about the molecules it emitted enabled some form of synthetic computation, making humans who were on it about 20% smarter and more cooperative. They started bringing it to earthside labs, and science progress soared to a rate we hadn’t seen since the seventies. They brought it to government offices, and politicians started passing fewer boneheaded policies. I think that one was more about the cooperation aspect than the intelligence boost. There’d always been at least some politicians who realized which policies were dumb, and making everyone a little more cooperative let them actually push through the noise.
Geopolitical tensions eased a bit. The president challenged President Qiu of China to a gaming match (don’t ask me what game) while they were both high on it, as a gesture of goodwill, and Qiu came off affable and charming and seemed happy to ease tensions further. He laughed about how it was his first time breathing Organism, winking just in case anyone thought he seriously expected them to believe he hadn’t gotten any smuggled through American export controls when half the drug dealers in Manhattan could probably get you a hookup. A few weeks later they were talking about SALT 3, this time including China.
This was where it started affecting my job. I came into the office one day and the manager called me into his office.
“Listen, John,” he started in the slow voice he used for big news. “They want an analysis on the new nuclear policy. Don’t make this one too harsh, okay? This isn’t one of those reports where they want serious critical analysis. It’s one of the CYA jobs where DOD just wants an independent analysis to wave around and show a Serious Outsider approved it. Just write something short and put the downsides in the fifth section where no one will read it, okay?”
I sighed. “How bad is it?”
“New nuclear policy. As a sign of good faith, they linked up our nuclear systems in a Samson scenario. No more first use, no more aggressive use. We still have it as a deterrent, but launching it would make us nuke ourselves. It means we can retaliate if anyone’s seriously nuking us, but we’d be blowing ourselves up too, so we can’t strike first. Especially with our new conventional absolute advantage and the ICBM defenses the Organism boys cooked up, we need something to reassure people they don’t need to strike us first before the new stuff is online.”
“So we switched up our entire nuclear system with a single nuke the world button?”
“Yeah, pretty much. Between us, it’s already deployed behind the scenes, we needed it to reassure the Chinese. But it’s not ready for announcement until-”
“No no no No NO NO”. I barely even realized I was shouting. “Don’t you see? People on average are more cooperative now which means state-level actors are lower-risk because their people are less volatile and more cooperative. But individuals are higher variance, because even an average 20% increase in peacefulness and cooperation leaves a large number of negative outliers, and a 20% average boost in intelligence means a lot of people way smarter than that, and these things are not correlated. Which means we have now supercharged our supply of intelligent anti-human sociopaths who might be able to access the “nuke everything now” button. It doesn’t matter how few people know, if even one of them discovered this button exists he’ll find a way to push it.”
“but-”
“This is too important. We need to shut the button down NOW. Before we all blow up. Call everyone you can. China can nuke us if they have to, this is too much of a risk. We need to shut it down tomorrow. If we even make it to tomorrow. I give us even odds.”
I stormed out of the office and started running home. I had some calls I could make; maybe I could help things move faster out of this disaster.
I was still just halfway home when I saw streaks of fire flying through the sky, and the skyscrapers started crashing down around me.
2026-03-29 02:24:35
[Alternative title: apply More Dakka incrementally and carefully.]
If you are very overweight, then you should aim to cut down your daily caloric intake. This doesn't mean your optimal daily caloric intake is 100kcal.
If you are very underweight, then you should aim to ramp up your daily caloric intake. This doesn't mean your optimal daily caloric intake is 10,000kcal.
In general, if something is good to do some amount in some context, this doesn't mean that you should go as all-in on it as you can possibly manage. The utility of a change is context-dependent, and as you apply more of the change, the context also changes, and the marginal utility of the change might change along with the changed context (up or down).
...
This seems dead obvious, but I've been noticing various places to which this dead obvious point applies, yet where many people seem to apply "seems good so far, so let's go all in" regardless.
For example: It's good to pull the mind's brakes, but it doesn't mean it's good to just stop it.
Some currents of thought latch onto the fact that certain changes to one's mind are clearly generally mostly beneficial and extrapolate maximally, proclaiming that the state of mind that got modified maximally along this axis is the most desirable one.
About a decade ago I meditated for an hour a day every day for a few weeks, then sat down to breakfast with my delightful (at the time) toddlers and realized that I felt nothing. There was only the perfect crystalline clarity and spaciousness of total emotional detachment. "Oh," I said, and never meditated again.
...
Young adults should probably put some effort into becoming less emotionally reactive. Being volatile makes you unpleasant to be around, and undercuts your ability to achieve pretty much any goals you may have.
If you have any traumas, it's likely positive-EV for you to devote time and energy to learning some kind of therapy modality with a good evidence base, and then taking the time to resolve those issues.
In my opinion - for most people - once you have fixed about 60% of your emotional reactivity and 90% of your psychological triggers, you have hit a point of diminishing returns. In fact, past that point, I think further investment in making yourself "nonreactive" and "unattached," and removing all minor triggers from your psyche, is pathological from the perspective of actually trying to be happy and to do things with your life.
Last year, I interacted with a practitioner of Buddhism who expressed a strange view to me, which I can now only vaguely recall. As far as I remember, the view was that as humans interact with each other, other living beings, and even the rest of the general non-living world around them, they are not passively allowing things to manifest themselves as they are, but rather imposing certain concepts on the Other, fitting the Other into preconceived frames. This is bad, the person said, because it puts us in "conflict" with the world.[1] The right choice is to abandon all our concepts, as they are "violent". If abandoning all the concepts means the annihilation of the mind, so be it.
Listening to people trying to make sense of this after the Buddhist's departure made me think that this is an example of a broad pattern where someone notices a good mental movement or a change to one's mind and goes on to (implicitly) consider it absolutely good and something that is to be applied all the way.
One can gain insight, through various sorts of practice, that getting one's concepts to loosen their grip on the world, and letting the world manifest itself through the cracks left by the loosening of those concepts, can be good. See: Naturalism, Seeing with Fresh Eyes, Trapped Priors As A Basic Problem Of Rationality,[2] etc. This doesn't mean that you can just abandon all your concepts[3] because, in order to perceive in the first place, you need some concepts to make sense of the incoming information. A blank slate is not a mind.
[Caveat: I'm not saying that all Buddhist-ish practice is bad, and I am not claiming that this is the view that Buddha (or whatever specific major figure in the movement) held.]
To give a few more examples:
One morning, I got out of bed, turned on my computer, and my Netscape email client automatically downloaded that day’s news pane. On that particular day, the news was that two hijacked planes had been flown into the World Trade Center.
These were my first three thoughts, in order:
I guess I really am living in the Future.
Thank goodness it wasn’t nuclear.
and then
The overreaction to this will be ten times worse than the original event.
The above is an excerpt from Eliezer's old post When None Dare Urge Restraint. The issue I'm pointing at is something like: non-dare-urge-restraint-ness dynamics can also occur intra-personally.[4]
[1] One of the things I asked the person was "Why call it 'conflict', rather than 'tension', which is like a clearly more apt term to me, because it's unclear to me that this needs to lead to any conflict, whereas there is some tension between, roughly, bottom-up processing and top-down processing, although it's unclear to me why this would be a proper tension between the perceiver and the perceived?". As far as I could tell, the person didn't offer a response.
[2] In a sense, the entire point of this post could be described as "positive evaluation of an available action can become a trapped prior, and the consequences of it can be catastrophic".
[3] I guess a better term than "concepts" would be something like "mental structures", but I'll limit esotericism by sticking to the more common term.
[4] Maybe it makes sense to think about it in terms of myopic subagent power-seeking, a cancerous sort of goal (speculating, low confidence).