
P-hacking as focusing a microscope


Published on November 27, 2025 7:38 PM GMT

Science is full of misleading findings—results that would not hold up if the study were attempted again under ideal conditions. In data-driven investigations, a big part of this could be the forking paths problem. Researchers make many decisions about how to analyze the data (leave out this subgroup, include that control variable, etc.). But there are many defensible paths to take, and they could yield vastly different results. When the study gets published, you don’t learn about results from the paths not taken. 
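To make the forking-paths worry concrete, here is a minimal simulation (my own illustration, not drawn from any real study): the outcome is pure noise, yet an analyst who tries several defensible exposure definitions, say five candidate lag windows, and keeps whichever one looks best will declare a "significant" effect far more often than the nominal 5%.

```python
# Minimal forking-paths sketch (illustrative only): on pure-noise data,
# picking the best of several defensible analysis paths inflates the
# false-positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_obs, n_lags = 5_000, 200, 5  # 5 candidate lag windows

hits_prespecified = 0  # analysis path chosen in advance
hits_forked = 0        # analysis path chosen after seeing the results
for _ in range(n_studies):
    y = rng.normal(size=n_obs)                    # outcome (pure noise)
    exposures = rng.normal(size=(n_lags, n_obs))  # exposure under each candidate lag window
    pvals = [stats.pearsonr(exposures[k], y)[1] for k in range(n_lags)]

    hits_prespecified += pvals[0] < 0.05
    hits_forked += min(pvals) < 0.05

print(f"false-positive rate, pre-specified lag: {hits_prespecified / n_studies:.3f}")  # roughly 0.05
print(f"false-positive rate, best of {n_lags} lags: {hits_forked / n_studies:.3f}")    # roughly 1 - 0.95**5, about 0.23
```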

I think a lot of well-meaning scientists fall prey to this and churn out shoddy results. I don’t think they have an evil mindset, however. (One could certainly imagine a Psychopath's Guide to Causal Inference: fake the data. Or, if that’s too hard, run a loop that tries every possible analysis until by random luck you get the result you want. Then write up that result and justify the analytical setup as the obvious choice.)

A more appropriate analogy for understanding (most) p-hackers might be focusing a microscope: the effect is there, you’re merely revealing it. When you look at a microorganism under a microscope, you have some idea of what should appear once it's in focus. It's blurry at first, but you adjust the knobs until you see the expected shape.


A case in point is Brown et al. (2022), a paper on pollution and mortality in India. This is a highly credentialed team (here’s the last author). What’s fascinating is that they describe the forking paths that they abandoned: 

We performed exploratory analyses on lag years of PM2.5…We found that using lag 2–4 y would cause PM2.5 exposures to be protective on respiratory disease and IHD [ischemic heart disease], whereas using lag 4–6 y the effect of PM2.5 exposures on stroke was smallest in comparison with lag 2–4 y and lag 3–5 y…Thus, to be consistent for all three disease outcomes and avoiding an implausible protective effect of PM2.5, we chose to use lag 3–5 y.

So: the authors were trying to study the impact of ambient pollution on different causes of death. They were unsure how far back to measure exposure: pollution 2-4 years ago, 3-5 years ago, or something else? There isn’t an obvious answer, so they tried a few things. When they used a 2-4 year lag, their results appeared to suggest that pollution improves your health. But that’s impossible. When they tried the 4-6 year lag, the effects were much smaller. They chose the 3-5 year lag because the estimated harms were large and consistent across causes of death.

The previous paragraph will make most methodologists want to throw up. You can’t choose your analytical approach based on the results it generates. And the protective effect that appeared under one specification was reason to think that the whole study is hopelessly confounded. Surely this practice, in general, will lead to unreliable findings. 

The authors’ perspective might be that this is overly strict. Results that make sense are more likely to be true. (After all, this form of Bayesianism is how we reject the findings from junk science.) If certain analytical decisions yield a collection of findings that align with our priors, those decisions are probably correct. They knew the effects were there; they just had to focus the microscope. 

My hunch, though, is that it would improve science if we were a lot more strict, requiring pre-registrations or specification curves. In observational causal inference, you are not focusing a microscope. Any priors you have should be highly uncertain. 
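For contrast, here is a sketch of what a stricter, specification-curve style report could look like: fit every defensible specification and show all of the estimates, so readers see the whole curve rather than one hand-picked cell. The data frame and column names (deaths, pm25_lag_2_4, and so on) are hypothetical, purely for illustration.

```python
# Sketch of a specification-curve report (hypothetical columns, illustrative only).
import itertools
import pandas as pd
import statsmodels.formula.api as smf

LAG_COLUMNS = ["pm25_lag_2_4", "pm25_lag_3_5", "pm25_lag_4_6"]  # hypothetical exposure columns
OPTIONAL_CONTROLS = ["age", "urban"]                            # hypothetical covariates

def specification_curve(df: pd.DataFrame) -> pd.DataFrame:
    """Fit every defensible specification and report all of them."""
    rows = []
    for lag in LAG_COLUMNS:
        # treat every subset of the optional controls as a defensible path
        for r in range(len(OPTIONAL_CONTROLS) + 1):
            for controls in itertools.combinations(OPTIONAL_CONTROLS, r):
                formula = "deaths ~ " + " + ".join([lag, *controls])
                fit = smf.ols(formula, data=df).fit()
                rows.append({
                    "lag": lag,
                    "controls": " + ".join(controls) if controls else "none",
                    "estimate": fit.params[lag],
                    "p_value": fit.pvalues[lag],
                })
    # publish the whole curve, sorted by estimate, not a single chosen cell
    return pd.DataFrame(rows).sort_values("estimate").reset_index(drop=True)
```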

Many researchers will be guided in their data analysis by overly confident priors. They’ll abandon many analyses that readers will never learn about, so published work will convey answers with far too much certainty. In light of this, it’s actually commendable that Brown et al shared their process. I think we can push science to be even better.

References

Brown, Patrick E., et al. "Mortality associated with ambient PM2.5 exposure in India: Results from the Million Death Study." Environmental Health Perspectives 130.9 (2022): 097004.




Will We Get Alignment by Default? — with Adrià Garriga-Alonso


Published on November 27, 2025 7:19 PM GMT

Adrià recently published “Alignment will happen by default; what’s next?” on LessWrong, arguing that AI alignment is turning out easier than expected. Simon left a lengthy comment pushing back, and that sparked this spontaneous debate.

Adrià argues that current models like Claude Opus 3 are genuinely good “to their core,” and that an iterative process — where each AI generation helps align the next — could carry us safely to superintelligence. Simon counters that we may only get one shot at alignment, and that current methods are too weak to scale. A conversation about where AI safety actually stands.

Watch the full debate here




Is there an analogue of Riemann's mapping theorem for split complex numbers, or otherwise?


Published on November 27, 2025 1:09 PM GMT

Question for mathematicians:

The Riemann mapping theorem shows that any non-empty, simply connected open proper subset of the complex plane can be conformally mapped onto the unit disk, a standard model of the hyperbolic plane. Is there an analogue of this theorem which does the same thing for 2-dimensional de Sitter spacetime, possibly with split-complex numbers?
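For reference, a precise statement of the classical theorem the question is asking to generalize (the unit disk with the Poincaré metric being a standard model of the hyperbolic plane):

```latex
% Classical Riemann mapping theorem, stated for reference.
\textbf{Theorem (Riemann mapping).}
Let $U \subsetneq \mathbb{C}$ be a non-empty, simply connected open set.
Then there exists a biholomorphic (conformal) bijection
\[
  f \colon U \longrightarrow \mathbb{D} = \{\, z \in \mathbb{C} : |z| < 1 \,\}.
\]
```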




The Big Nonprofits Post 2025


Published on November 27, 2025 4:20 PM GMT

There remain lots of great charitable giving opportunities out there.

I have now had three opportunities to be a recommender for the Survival and Flourishing Fund (SFF). I wrote in detail about my first experience back in 2021, where I struggled to find worthy applications.

The second time around in 2024, there was an abundance of worthy causes. In 2025 there were even more high quality applications, many of which were growing beyond our ability to support them.

Thus this is the second edition of The Big Nonprofits Post, primarily aimed at sharing my findings on various organizations I believe are doing good work, to help you find places to consider donating in the cause areas and intervention methods that you think are most effective, and to offer my general perspective on how I think about choosing where to give.

This post combines my findings from the 2024 and 2025 rounds of SFF, and also includes some organizations that did not apply to either round, so inclusion does not mean that they necessarily applied at all.

This post is already very long, so the bar is higher for inclusion this year than it was last year, especially for new additions.

If you think there are better places to give and better causes to back, act accordingly, especially if they’re illegible or obscure. You don’t need my approval.

The Big Nonprofits List 2025 is also available as a website, where you can sort by mission, funding needed or confidence, or do a search and have handy buttons.

Table of Contents

Organizations where I have the highest confidence in straightforward modest donations now, if your goals and model of the world align with theirs, are in bold, for those who don’t want to do a deep dive.

  1. Table of Contents.
  2. A Word of Warning.
  3. A Note To Charities.
  4. Use Your Personal Theory of Impact.
  5. Use Your Local Knowledge.
  6. Unconditional Grants to Worthy Individuals Are Great.
  7. Do Not Think Only On the Margin, and Also Use Decision Theory.
  8. Compare Notes With Those Individuals You Trust.
  9. Beware Becoming a Fundraising Target.
  10. And the Nominees Are.
  11. Organizations that Are Literally Me.
  12. Balsa Research.
  13. Don’t Worry About the Vase.
  14. Organizations Focusing On AI Non-Technical Research and Education.
  15. Lightcone Infrastructure.
  16. The AI Futures Project.
  17. Effective Institutions Project (EIP) (For Their Flagship Initiatives).
  18. Artificial Intelligence Policy Institute (AIPI).
  19. Palisade Research.
  20. AI Safety Info (Robert Miles).
  21. Intelligence Rising.
  22. Convergence Analysis.
  23. Organizations Related To Potentially Pausing AI Or Otherwise Having A Strong International AI Treaty.
  24. Pause AI and Pause AI Global.
  25. MIRI.
  26. Existential Risk Observatory.
  27. Organizations Focusing Primarily On AI Policy and Diplomacy.
  28. Center for AI Safety and the CAIS Action Fund.
  29. Foundation for American Innovation (FAI).
  30. Encode AI (Formerly Encode Justice).
  31. The Future Society.
  32. Safer AI.
  33. Institute for AI Policy and Strategy (IAPS).
  34. AI Standards Lab.
  35. Safer AI Forum.
  36. Center For Long Term Resilience at Founders Pledge.
  37. Simon Institute for Longterm Governance.
  38. Legal Advocacy for Safe Science and Technology.
  39. Organizations Doing ML Alignment Research.
  40. Model Evaluation and Threat Research (METR).
  41. Alignment Research Center (ARC).
  42. Apollo Research.
  43. Cybersecurity Lab at University of Louisville.
  44. Timaeus.
  45. Simplex.
  46. Far AI.
  47. Alignment in Complex Systems Research Group.
  48. Apart Research.
  49. Transluce.
  50. Organizations Doing Math, Decision Theory and Agent Foundations.
  51. Orthogonal.
  52. Topos Institute.
  53. Eisenstat Research.
  54. AFFINE Algorithm Design.
  55. CORAL (Computational Rational Agents Laboratory).
  56. Mathematical Metaphysics Institute.
  57. Focal at CMU.
  58. Organizations Doing Cool Other Stuff Including Tech.
  59. ALLFED.
  60. Good Ancestor Foundation.
  61. Charter Cities Institute.
  62. Carbon Copies for Independent Minds.
  63. Organizations Focused Primarily on Bio Risk. (Blank)
  64. Secure DNA.
  65. Blueprint Biosecurity.
  66. Pour Domain.
  67. Organizations That Can Advise You Further.
  68. Effective Institutions Project (EIP) (As A Donation Advisor).
  69. Longview Philanthropy.
  70. Organizations That then Regrant to Fund Other Organizations.
  71. SFF Itself (!).
  72. Manifund.
  73. AI Risk Mitigation Fund.
  74. Long Term Future Fund.
  75. Foresight.
  76. Centre for Enabling Effective Altruism Learning & Research (CEELAR).
  77. Organizations That are Essentially Talent Funnels.
  78. AI Safety Camp.
  79. Center for Law and AI Risk.
  80. Speculative Technologies.
  81. Talos Network.
  82. MATS Research.
  83. Epistea.
  84. Emergent Ventures.
  85. AI Safety Cape Town.
  86. Impact Academy Limited.
  87. Atlas Computing.
  88. Principles of Intelligence (Formerly PIBBSS).
  89. Tarbell Fellowship at PPF.
  90. Catalyze Impact.
  91. CeSIA within EffiSciences.
  92. Stanford Existential Risk Initiative (SERI).
  93. Final Reminders
[Image: a vibrant community park scene of charity fundraising activities, with a bake sale, charity run, live music, silent auction, and lemonade stand.]

A Word of Warning

The SFF recommender process is highly time constrained, and in general I am highly time constrained.

Even though I put in well beyond the required number of hours in both 2024 and 2025, there was no way to do a serious investigation of all the potentially exciting applications. Substantial reliance on heuristics was inevitable.

Also your priorities, opinions, and world model could be very different from mine.

If you are considering donating a substantial (to you) amount of money, please do the level of personal research and consideration commensurate with the amount of money you want to give away.

If you are considering donating a small (to you) amount of money, or if the requirement to do personal research might mean you don’t donate to anyone at all, I caution the opposite: Only do the amount of optimization and verification and such that is worth its opportunity cost. Do not let the perfect be the enemy of the good.

For more details of how the SFF recommender process works, see my post on the process.

Note that donations to some of the organizations below may not be tax deductible.

A Note To Charities

I apologize in advance for any errors, any out of date information, and for anyone who I included who I did not realize would not want to be included. I did my best to verify information, and to remove any organizations that do not wish to be included.

If you wish me to issue a correction of any kind, or to update your information, I will be happy to do that at least through the end of the year.

If you wish me to remove your organization entirely, for any reason, I will do that, too.

What I unfortunately cannot do, in most cases, is take the time to analyze or debate beyond that. I also can’t consider additional organizations for inclusion. My apologies.

The same is true for the website version.

I am giving my full opinion on all organizations listed, but where I feel an organization would be a poor choice for marginal dollars even within its own cause and intervention area, or I anticipate my full opinion would not net help them, they are silently not listed.

Use Your Personal Theory of Impact

Listen to arguments and evidence. But do not let me, or anyone else, tell you any of:

  1. What is important.
  2. What is a good cause.
  3. What types of actions are best to make the change you want to see in the world.
  4. What particular strategies are most promising.
  5. That you have to choose according to some formula or you’re an awful person.

This is especially true when it comes to policy advocacy, and especially in AI.

If an organization is advocating for what you think is bad policy, or acting in a way that does bad things, don’t fund them!

If an organization is advocating or acting in a way you think is ineffective, don’t fund them!

Only fund people you think advance good changes in effective ways.

Not cases where I think that. Cases where you think that.

During SFF, I once again in 2025 chose to deprioritize all meta-level activities and talent development. I see lots of good object-level work available to do, and I expected others to often prioritize talent and meta activities.

The counterargument to this is that quite a lot of money is potentially going to be freed up soon as employees of OpenAI and Anthropic gain liquidity, including access to DAFs (donor advised funds). This makes expanding the pool more exciting.

I remain primarily focused on those who in some form were helping ensure AI does not kill everyone. I continue to see highest value in organizations that influence lab or government AI policies in the right ways, and continue to value Agent Foundations style and other off-paradigm technical research approaches.

Use Your Local Knowledge

I believe that the best places to give are the places where you have local knowledge.

If you know of people doing great work or who could do great work, based on your own information, then you can fund and provide social proof for what others cannot.

The less legible to others the cause, and the harder it is to fit it into the mission statements and formulas of various big donors, the more excited you should be to step forward, if the cause is indeed legible to you. This keeps you grounded, helps others find the show (as Tyler Cowen says), is more likely to be counterfactual funding, and avoids information cascades or looking under streetlights for the keys.

Most importantly it avoids adverse selection. The best legible opportunities for funding, the slam dunk choices? Those are probably getting funded. The legible things that are left are the ones that others didn’t sufficiently fund yet.

If you know why others haven’t funded, because they don’t know about the opportunity? That’s a great trade.

Unconditional Grants to Worthy Individuals Are Great

The process of applying for grants, raising money, and justifying your existence sucks.

A lot.

It especially sucks for many of the creatives and nerds that do a lot of the best work.

It also sucks to have to worry about running out of money, or to have to plan your work around the next time you have to justify your existence, or to be unable to be confident in choosing ambitious projects.

If you have to periodically go through this process, and are forced to continuously worry about making your work legible and how others will judge it, that will substantially hurt your true productivity. At best it is a constant distraction. By default, it is a severe warping effect. A version of this phenomenon is doing huge damage to academic science.

As I noted in my AI updates, the reason this blog exists is that I received generous, essentially unconditional, anonymous support to ‘be a public intellectual’ and otherwise pursue whatever I think is best. My benefactors offer their opinions when we talk because I value their opinions, but they never try to influence my decisions, and I feel zero pressure to make my work legible in order to secure future funding.

If you have money to give, and you know individuals who should clearly be left to do whatever they think is best without worrying about raising or earning money, who you are confident would take advantage of that opportunity and try to do something great, then giving them unconditional grants is a great use of funds, including giving them ‘don’t worry about reasonable expenses’ levels of funding.

This is especially true when combined with ‘retrospective funding,’ based on what they have already done. It would be great if we established a tradition and expectation that people who make big contributions can expect such rewards.

Not as unconditionally, it’s also great to fund specific actions and projects and so on that you see not happening purely through lack of money, especially when no one is asking you for money.

This includes things that you want to exist, but that don’t have a path to sustainability or revenue, or would be importantly tainted if they needed to seek that. Fund the project you want to see in the world. This can also be purely selfish: often, in order to have something yourself, you need to create it for everyone, and if you’re tempted there’s a good chance that’s a great value.

Do Not Think Only On the Margin, and Also Use Decision Theory

Resist the temptation to think purely on the margin, asking only what one more dollar can do. The incentives get perverse quickly. Organizations are rewarded for putting their highest impact activities in peril. Organizations that can ‘run lean’ or protect their core activities get punished.

If you always insist on being a ‘funder of last resort’ that requires key projects or the whole organization otherwise be in trouble, you’re defecting. Stop it.

Also, you want to do some amount of retrospective funding. If people have done exceptional work in the past, you should be willing to give them a bunch more rope in the future, above and beyond the expected value of their new project.

Don’t make everyone constantly re-prove their cost effectiveness each year, or at least give them a break. If someone has earned your trust, then if this is the project they want to do next, presume they did so because of reasons, although you are free to disagree with those reasons.

Compare Notes With Those Individuals You Trust

This especially goes for AI lab employees. There’s no need for everyone to do all of their own research; you can and should compare notes with those who you can trust, and this is especially great when they’re people you know well.

What I do worry about is too much outsourcing of decisions to larger organizations and institutional structures, including those of Effective Altruism but also others, or letting your money go directly to large foundations where it will often get captured.

Beware Becoming a Fundraising Target

Jaan Tallinn created SFF in large part to intentionally take his donation decisions out of his hands, so he could credibly tell people those decisions were out of his hands, so he would not have to constantly worry that people he talked to were attempting to fundraise.

This is a huge deal. Communication, social life and a healthy information environment can all be put in danger by this.

And the Nominees Are

Time to talk about the organizations themselves.

Rather than offer precise rankings, I divided by cause category and into three confidence levels.

  1. High confidence means I have enough information to be confident the organization is at least a good pick.
  2. Medium or low confidence means exactly that – I have less confidence that the choice is wise, and you should give more consideration to doing your own research.
  3. If my last investigation was in 2024, and I haven’t heard anything, I will have somewhat lower confidence now purely because my information is out of date.

Low confidence is still high praise, and very much a positive assessment! Most organizations would come nowhere close to making the post at all.

If an organization is not listed, that does not mean I think they would be a bad pick – they could have asked not to be included, or I could be unaware of them or their value, or I could simply not have enough confidence to list them.

I know how Bayesian evidence works, but this post is not intended as a knock on anyone, in any way. Some organizations that are not here would doubtless have been included, if I’d had more time.

I try to give a sense of how much detailed investigation and verification I was able to complete, and what parts I have confidence in versus not. Again, my lack of confidence will often be purely about my lack of time to get that confidence.

Unless I already knew them from elsewhere, assume no organizations here got as much attention as they deserve before you decide on what for you is a large donation.

I’m tiering based on how I think about donations from you, from outside SFF.

I think the regranting organizations were clearly wrong choices from within SFF, but are reasonable picks if you don’t want to do extensive research, especially if you are giving small.

In terms of funding levels needed, I will similarly divide into three categories.

They roughly mean this, to the best of my knowledge:

Low: Could likely be fully funded with less than ~$250k.

Medium: Could plausibly be fully funded with between ~$250k and ~$2 million.

High: Could probably make good use of more than ~$2 million.

These numbers may be obsolete by the time you read this. If you’re giving a large amount relative to what they might need, check with the organization first, but also do not be so afraid of modest amounts of ‘overfunding’ as relieving fundraising pressure is valuable and as I noted it is important not to only think on the margin.

A lot of organizations are scaling up rapidly, looking to spend far more money than they have in the past. This was true in 2024, and 2025 has only accelerated this trend. A lot more organizations are in ‘High’ now but I decided not to update the thresholds.

Everyone seems eager to double their headcount. I’m not putting people into the High category unless I am confident they can scalably absorb more funding after SFF.

The person who I list as the leader of an organization will sometimes accidentally be whoever was in charge of fundraising rather than strictly the leader. Part of the reason for listing this is to give context, so some of you can go ‘oh right, I know who that is,’ and the other reason is that organization names are often highly confusing – adding the name of the organization’s leader allows you a safety check, to confirm that you are indeed pondering the same organization I am thinking of!

Organizations that Are Literally Me

This is my post, so I get to list Balsa Research first. (I make the rules here.)

If that’s not what you’re interested in, you can of course skip the section.

Balsa Research

Focus: Groundwork starting with studies to allow repeal of the Jones Act

Leader: Zvi Mowshowitz

Funding Needed: Medium

Confidence Level: High

Our first target continues to be the Jones Act. With everything happening in 2025, it is easy to get distracted. We have decided to keep eyes on the prize.

We’ve commissioned two studies. Part of our plan is to do more of them, and also do things like draft model repeals and explore ways to assemble a coalition and to sell and spread the results, to enable us to have a chance at repeal.

We also are networking, gathering information, publishing findings where there are information holes or where we can offer superior presentations, planning possible collaborations, and responding quickly in case of a crisis in related areas. We believe we meaningfully reduced the probability that certain very damaging additional maritime regulations could have become law, as described in this post.

Other planned cause areas include NEPA reform and federal housing policy (to build more housing where people want to live).

We have one full time worker on the case and are trying out a potential second one.

I don’t intend to have Balsa work on AI or assist with my other work, or to take personal compensation, unless I get substantially larger donations than we have had previously, that are either dedicated to those purposes or that at least come with the explicit understanding I should consider doing that.

Further donations would otherwise be for general support.

The pitch for Balsa, and the reason I am doing it, is in two parts.

I believe Jones Act repeal and many other abundance agenda items are neglected, tractable and important, and that my way of focusing on what matters can advance them. That the basic work that needs doing is not being done, it would be remarkably cheap to do a lot of it and do it well, and that this would give us a real if unlikely chance to get a huge win if circumstances break right. Chances for progress currently look grim, but winds can change quickly, we need to be ready, and also we need to stand ready to mitigate the chance things get even worse.

I also believe that if people do not have hope for the future, do not have something to protect and fight for, or do not think good outcomes are possible, then they won’t care about protecting the future. And that would be very bad, because we are going to need to fight to protect our future if we want to have one, or have a good one.

You got to give them hope.

I could go on, but I’ll stop there.

Donate here, or get in touch at [email protected].

Don’t Worry About the Vase

Focus: Zvi Mowshowitz writes a lot of words, really quite a lot.

Leader: Zvi Mowshowitz

Funding Needed: None, but it all helps, could plausibly absorb a lot

Confidence Level: High

You can also of course always donate directly to my favorite charity.

By which I mean me. I always appreciate your support, however large or small.

The easiest way to help on a small scale (of course) is a Substack subscription or Patreon. Paid Substack subscriptions punch above their weight because they assist with the sorting algorithm, and also for their impact on morale.

If you want to go large then reach out to me.

Thanks to generous anonymous donors, I am able to write full time and mostly not worry about money. That is what makes this blog possible.

I want to as always be 100% clear: I am totally, completely fine as is, as is the blog.

Please feel zero pressure here, as noted throughout there are many excellent donation opportunities out there.

Additional funds are still welcome. There are levels of funding beyond not worrying.

Such additional support is always highly motivating.

Also there are absolutely additional things I could and would throw money at to improve the blog, potentially including hiring various forms of help or even expanding to more of a full news operation or startup.

Organizations Focusing On AI Non-Technical Research and Education

As a broad category, these are organizations trying to figure things out regarding AI existential risk, without centrally attempting to either do technical work or directly to influence policy and discourse.

Lightcone Infrastructure is my current top pick across all categories. If you asked me where to give a dollar, or quite a few dollars, to someone who is not me, I would tell you to fund Lightcone Infrastructure.

Lightcone Infrastructure

Focus: Rationality community infrastructure, LessWrong, the Alignment Forum and Lighthaven.

Leaders: Oliver Habryka, Raymond Arnold, Ben Pace

Funding Needed: High

Confidence Level: High

Disclaimer: I am on the CFAR board which used to be the umbrella organization for Lightcone and still has some lingering ties. My writing appears on LessWrong. I have long time relationships with everyone involved. I have been to several reliably great workshops or conferences at their campus at Lighthaven. So I am conflicted here.

With that said, Lightcone is my clear number one. I think they are doing great work, both in terms of LessWrong and also Lighthaven. There is the potential, with greater funding, to enrich both of these tasks, and also for expansion.

There is a large force multiplier here (although that is true of a number of other organizations I list as well).

They made their 2024 fundraising pitch here, I encourage reading it.

Where I am beyond confident is that if LessWrong, the Alignment Forum or the venue Lighthaven were unable to continue, losing any one of them would be a major, quite bad unforced error.

LessWrong and the Alignment Forum are a central part of the infrastructure of the meaningful internet.

Lighthaven is by miles and miles the best event venue I have ever seen. I do not know how to convey how much the design contributes to having a valuable conference: it facilitates the best kinds of conversations via a wide array of nooks and pathways designed with the principles of Christopher Alexander. This contributes to and takes advantage of the consistently fantastic set of people I encounter there.

The marginal costs here are large (~$3 million per year, some of which is made up by venue revenue), but the impact here is many times that, and I believe they can take on more than ten times that amount and generate excellent returns.

If we can go beyond short term funding needs, they can pay off the mortgage to secure a buffer, and buy up surrounding buildings to secure against neighbors (who can, given this is Berkeley, cause a lot of trouble) and to secure more housing and other space. This would secure the future of the space.

I would love to see them then expand into additional spaces. They note this would also require the right people.

Donate through every.org, or contact [email protected].

The AI Futures Project

Focus: AI forecasting research projects, governance research projects, and policy engagement, in that order.

Leader: Daniel Kokotajlo, with Eli Lifland

Funding Needed: None Right Now

Confidence Level: High

Of all the ‘shut up and take my money’ applications in the 2024 round where I didn’t have a conflict of interest, even before I got to participate in their tabletop wargame exercise, I judged this the most ‘shut up and take my money’-ist. At The Curve, I got to participate in the exercise and participate in discussions around it, I’ve since done several more, and I’m now even more confident this is an excellent pick.

I continue to think it is a super strong case for retroactive funding as well. Daniel walked away from OpenAI, and what looked to be most of his net worth, to preserve his right to speak up. That led to us finally allowing others at OpenAI to speak up as well.

This is how he wants to speak up, and try to influence what is to come, based on what he knows. I don’t know if it would have been my move, but the move makes a lot of sense, and it has already paid off big. AI 2027 was read by the Vice President, who took it seriously, along with many others, and greatly informed the conversation. I believe the discourse is much improved as a result, and the possibility space has improved.

Note that they are comfortably funded through the medium term via private donations and their recent SFF grant.

Donate through every.org, or contact Jonas Vollmer.

Effective Institutions Project (EIP) (For Their Flagship Initiatives)

Focus: AI governance, advisory and research, finding how to change decision points

Leader: Ian David Moss

Funding Needed: Medium

Confidence Level: High

EIP operates on two tracks. They have their flagship initiatives and attempts to intervene directly. They also serve as donation advisors, which I discuss in that section.

Their current flagship initiative plans are to focus on the intersection of AI governance and the broader political and economic environment, especially risks of concentration of power and unintentional power shifts from humans to AIs.

Can they indeed identify ways to target key decision points, and make a big difference? One can look at their track record. I’ve been asked to keep details confidential, but based on my assessment of private information, I confirmed they’ve scored some big wins including that they helped improve safety practices at a major AI lab, and will plausibly continue to be able to have high leverage and punch above their funding weight. You can read about some of the stuff that they can talk about here in a Founders Pledge write up.

It seems important that they be able to continue their work on all this.

I also note that in SFF I allocated less funding to EIP than I would in hindsight have liked to allocate, due to quirks about the way matching funds worked and my attempts to adjust my curves to account for it.

Donate through every.org, or contact [email protected].

Artificial Intelligence Policy Institute (AIPI)

Focus: Primarily polls about AI, also lobbying and preparing for crisis response.

Leader: Daniel Colson.

Also Involved: Mark Beall and Daniel Eth

Funding Needed: High

Confidence Level: High

Those polls about how the public thinks about AI, including several from last year around SB 1047, among them an adversarial collaboration with Dean Ball?

Remarkably often, these are the people that did that. Without them, few would be asking those questions. Ensuring that someone is asking is super helpful. With some earlier polls I was a bit worried that the wording was slanted, and that will always be a concern with a motivated pollster, but I think recent polls have been much better at this, and they are as close to neutral as one can reasonably expect.

There are those who correctly point out that even now in 2025 the public’s opinions are weakly held and low salience, and that all you’re often picking up is ‘the public does not like AI and it likes regulation.’

Fair enough. Someone still has to show this, and show it applies here, and put the lie to claims that the public goes the other way, and measure how things change over time. We need to be on top of what the public is thinking, including to guard against the places it wants to do dumb interventions.

They don’t only do polling. They also do lobbying and prepare for crisis responses.

Donate here, or use their contact form to get in touch.

AI Lab Watch

Focus: Monitoring the AI safety record and plans of the frontier AI labs

Leader: Zach Stein-Perlman

Funding Needed: Low

Confidence Level: High

Zach has consistently been one of those on top of the safety and security plans, the model cards and other actions of the major labs, both writing up detailed feedback from a skeptical perspective and also compiling the website and its scores in various domains. Zach is definitely in the ‘demand high standards that would actually work and treat everything with skepticism’ school of all this, which I feel is appropriate, and I’ve gotten substantial benefit from his work several times.

However, due to uncertainty about whether this is the best thing for him to work on, and thus not being confident he will have this ball, Zach is not currently accepting funding, but would like people who are interested in donations to contact him via Intercom on the AI Lab Watch website.

Palisade Research

Focus: AI capabilities demonstrations to inform decision makers on capabilities and loss of control risks

Leader: Jeffrey Ladish

Funding Needed: High

Confidence Level: High

This is clearly an understudied approach. People need concrete demonstrations. Every time I get to talking with people in national security, or otherwise get closer to decision makers who aren’t deeply into AI and in particular into AI safety concerns, I am reminded that you need to be as concrete and specific as possible – that’s why I wrote Danger, AI Scientist, Danger the way I did. We keep getting rather on-the-nose fire alarms, but it would be better if we could get demonstrations even more on the nose, and get them sooner, and in a more accessible way.

Since last time, I’ve had a chance to see their demonstrations in action several times, and I’ve come away feeling that they have mattered.

I have confidence that Jeffrey is a good person to continue to put this plan into action.

To donate, click here or email [email protected].

CivAI

Focus: Visceral demos of AI risks

Leader: Sid Hiregowdara

Funding Needed: High

Confidence Level: Medium

I was impressed by the demo I was given (so a demo demo?). There’s no question such demos fill a niche, and there aren’t many other good candidates for that niche.

The bear case is that the demos are about near term threats, so does this help with the things that matter? It’s a good question. My presumption is yes, that raising situational awareness about current threats is highly useful. That once people notice that there is danger, that they will ask better questions, and keep going. But I always do worry about drawing eyes to the wrong prize.

To donate, click here or email [email protected].

AI Safety Info (Robert Miles)

Focus: Making YouTube videos about AI safety, starring Rob Miles

Leader: Rob Miles

Funding Needed: Low

Confidence Level: High

I think these are pretty great videos in general, and given what it costs to produce them we should absolutely be buying their production. If there is a catch, it is that I am very much not the target audience, so you should not rely too much on my judgment of what is and isn’t effective video communication on this front, and you should confirm you like the cost per view.

To donate, join his patreon or contact him at [email protected].

Intelligence Rising

Focus: Facilitation of the AI scenario roleplaying exercises including Intelligence Rising

Leader: Shahar Avin

Funding Needed: Low

Confidence Level: High

I haven’t had the opportunity to play Intelligence Rising, but I have read the rules to it, and heard a number of strong after action reports (AARs). They offered this summary of insights in 2024. The game is clearly solid, and it would be good if they continue to offer this experience and if more decision makers play it, in addition to the AI Futures Project TTX.

To donate, reach out to [email protected].

Convergence Analysis

Focus: A series of sociotechnical reports on key AI scenarios, governance recommendations and conducting AI awareness efforts.

Leader: David Kristoffersson

Funding Needed: High (combining all tracks)

Confidence Level: Low

They do a variety of AI safety related things. Their Scenario Planning continues to be what I find most exciting, although I’m somewhat interested in their modeling cooperation initiative as well. It’s not as neglected as it was a year ago, but we could definitely use more work than we’re getting. For track record, you can check out their reports from 2024 in this area and see if you think that was good work; the rest of their website has more.

Their donation page is here, or you can contact [email protected].

IASEAI (International Association for Safe and Ethical Artificial Intelligence)

Focus: Grab bag of AI safety actions, research, policy, community, conferences, standards

Leader: Mark Nitzberg

Funding Needed: High

Confidence Level: Low

There are some clearly good things within the grab bag, including some good conferences and, it seems, substantial support for Geoffrey Hinton, but for logistical reasons I didn’t do a close investigation to see if the overall package looked promising. I’m passing the opportunity along.

Donate here, or contact them at [email protected].

The AI Whistleblower Initiative

Focus: Whistleblower advising and resources for those in AI labs warning about catastrophic risks, including via Third Opinion.

Leader: Karl Koch

Funding Needed: High

Confidence Level: Medium

I’ve given them advice, and at least some amount of such resourcing is obviously highly valuable. We certainly should be funding Third Opinion, so that if someone wants to blow the whistle they can have help doing it. The question is whether this loses its focus as it scales.

Donate here, or reach out to [email protected].

Organizations Related To Potentially Pausing AI Or Otherwise Having A Strong International AI Treaty

Pause AI and Pause AI Global

Focus: Advocating for a pause on AI, including via in-person protests

Leader: Holly Elmore (USA) and Joep Meindertsma (Global)

Funding Level: Low

Confidence Level: Medium

Some people say that those who believe we should pause AI would be better off staying quiet about it, rather than making everyone look foolish.

I disagree.

I don’t think pausing right now is a good idea. I think we should be working on the transparency, state capacity, technical ability and diplomatic groundwork to enable a pause in case we need one, but that it is too early to actually try to implement one.

But I do think that if you believe we should pause? Then you should say that we should pause. I very much appreciate people standing up, entering the arena and saying what they believe in, including quite often in my comments. Let the others mock all they want.

If you agree with Pause AI that the right move is to Pause AI, and you don’t have strong strategic disagreements with their approach, then you should likely be excited to fund this. If you disagree, you have better options.

Either way, they are doing what they, given their beliefs, should be doing.

Donate here, or reach out to [email protected].

MIRI

Focus: At this point, primarily AI policy advocacy, letting everyone know that If Anyone Builds It, Everyone Dies and all that, plus some research

Leaders: Malo Bourgon, Eliezer Yudkowsky

Funding Needed: High

Confidence Level: High

MIRI, concluding that it is highly unlikely alignment will make progress rapidly enough otherwise, has shifted its strategy to largely advocate for major governments coming up with an international agreement to halt AI progress and to do communications, although research still looks to be a large portion of the budget, and they have dissolved their agent foundations team. Hence the book.

That is not a good sign for the world, but it does reflect their beliefs.

They have accomplished a lot. The book is at least a modest success on its own terms in moving things forward.

I strongly believe they should be funded to continue to fight for a better future however they think is best, even when I disagree with their approach.

This is very much a case of ‘do this if and only if this aligns with your model and preferences.’

Donate here, or reach out to [email protected].

Existential Risk Observatory

Focus: Pause-relevant research

Leader: Otto Barten

Funding Needed: Low

Confidence Level: Medium

Mostly this is the personal efforts of Otto Barten, ultimately advocating for a conditional pause. For modest amounts of money, in prior years he’s managed to have a hand in some high profile existential risk events and get the first x-risk related post into TIME magazine. He’s now pivoted to pause-relevant research (as in how to implement one via treaties, off switches, evals and threat models).

The track record and my prior investigation is less relevant now, so I’ve bumped them down to low confidence, but it would definitely be good to have the technical ability to pause and not enough work is being done on that.

To donate, click here, or get in touch at [email protected].

Organizations Focusing Primarily On AI Policy and Diplomacy

Some of these organizations also look at bio policy or other factors, but I judge those here as being primarily concerned with AI.

In this area, I am especially keen to rely on people with good track records, who have shown that they can build and use connections and cause real movement. It’s so hard to tell what is and isn’t effective, otherwise. Often small groups can pack a big punch, if they know where to go, or big ones can be largely wasted – I think that most think tanks on most topics are mostly wasted even if you believe in their cause.

Center for AI Safety and the CAIS Action Fund

Focus: AI research, field building and advocacy

Leaders: Dan Hendrycks

Funding Needed: High

Confidence Level: High

They did the CAIS Statement on AI Risk, helped SB 1047 get as far as it did, and have improved things in many other ways. Some of these other ways are non-public. Some of those non-public things are things I know about and some aren’t. I will simply say the counterfactual policy world is a lot worse. They’ve clearly been punching well above their weight in the advocacy space. The other arms are no slouch either, lots of great work here. Their meaningful rolodex and degree of access is very strong and comes with important insight into what matters.

They take a lot of big swings and aren’t afraid of taking risks or looking foolish. I appreciate that, even when a given attempt doesn’t fully work.

If you want to focus on their policy, then you can fund their 501(c)(4), the Action Fund, since 501(c)(3)s are limited in how much they can spend on political activities, keeping in mind the tax implications of that. If you don’t face any tax implications I would focus first on the 501(c)(4).

We should definitely find a way to fund at least their core activities.

Donate to the Action Fund for funding political activities, or the 501(c)(3) for research. They can be contacted at [email protected].

Foundation for American Innovation (FAI)

Focus: Tech policy research, thought leadership, educational outreach to government, fellowships.

Leader: Grace Meyer

Funding Needed: High

Confidence Level: High

FAI is centrally about innovation. Innovation is good, actually, in almost all contexts, as is building things and letting people do things.

AI is where this gets tricky. People ‘supporting innovation’ are often using that as an argument against all regulation of AI, and indeed I am dismayed to see so many push so hard on this exactly in the one place I think they are deeply wrong, when we could work together on innovation (and abundance) almost anywhere else.

FAI and resident AI studiers Samuel Hammond and Dean Ball are in an especially tough spot, because they are trying to influence AI policy from the right and not get expelled from that coalition or such spaces. There’s a reason we don’t have good alternative options for this. That requires striking a balance.

I’ve definitely had my disagreements with Hammond, including strong disagreements with his 95 theses on AI, although I agreed far more than I disagreed, and I had many disagreements with his AI and Leviathan as well. He’s talked on the Hill about ‘open model diplomacy.’

I’ve certainly had many strong disagreements with Dean Ball as well, both in substance and rhetoric. Sometimes he’s the voice of reason and careful analysis, other times (from my perspective) he can be infuriating, most recently in discussions of the Superintelligence Statement, remarkably often he does some of both in the same post. He was perhaps the most important opposer of SB 1047 and went on to a stint at the White House before joining FAI.

Yet here is FAI, rather high on the list. They’re a unique opportunity, you go to war with the army you have, and both Ball and Hammond have stuck their necks out in key situations. Hammond came out opposing the moratorium. They’ve been especially strong on compute governance.

I have private reasons to believe that FAI has been effective and we can expect that to continue, and its other initiatives also mostly seem good. We don’t have to agree on everything else, so long as we all want good things and are trying to figure things out, and I’m confident that is the case here.

I am especially excited that they can speak to the Republican side of the aisle in the R’s native language, which is difficult for most in this space to do.

An obvious caveat is that if you are not interested in the non-AI pro-innovation part of the agenda (I certainly approve, but it’s not obviously a high funding priority for most readers) then you’ll want to ensure it goes where you want it.

To donate, click here, or contact them using the form here.

Encode AI (Formerly Encode Justice)

Focus: Youth activism on AI safety issues

Leader: Sneha Revanur

Funding Needed: Medium

Confidence Level: High

They started out doing quite a lot on a shoestring budget by using volunteers, helping with SB 1047 and in several other places. Now they are turning pro, and would like to not be on a shoestring. I think they have clearly earned that right. The caveat is risk of ideological capture. Youth organizations tend to turn to left wing causes.

The risk here is that this effectively turns mostly to AI ethics concerns. It’s great that they’re coming at this without having gone through the standard existential risk ecosystem, but that also heightens the ideological risk.

I continue to believe it is worth the risk.

To donate, go here. They can be contacted at [email protected].

The Future Society

Focus: AI governance standards and policy.

Leader: Nicolas Moës

Funding Needed: High

Confidence Level: High

I’ve seen credible sources saying they do good work, and that they substantially helped orient the EU AI Act to at least care at all about frontier general AI. The EU AI Act was not a good bill, but it could easily have been a far worse one, doing much to hurt AI development while providing almost nothing useful for safety.

We should do our best to get some positive benefits out of the whole thing. And indeed, they helped substantially improve the EU Code of Practice, which was in hindsight remarkably neglected otherwise.

They’re also active around the world, including the USA and China.

Donate here, or contact them here.

Safer AI

Focus: Specifications for good AI safety, also directly impacting EU AI policy

Leader: Henry Papadatos

Funding Needed: Medium

Confidence Level: Low

I’ve been impressed by Simeon and his track record, including here. Simeon is stepping down as leader to start a company, which happened post-SFF, so they would need to be reevaluated in light of this before any substantial donation.

Donate here, or contact them at [email protected].

Institute for AI Policy and Strategy (IAPS)

Focus: Papers and projects for ‘serious’ government circles, meetings with same, policy research

Leader: Peter Wildeford

Funding Needed: Medium

Confidence Level: High

I have a lot of respect for Peter Wildeford, and they’ve clearly put in good work and laid down solid connections, including on the Republican side where better coverage is badly needed, and the only other solid lead we have is FAI. Peter has also increasingly been doing strong work directly via Substack and Twitter that has been helpful to me and that I can observe directly. They are strong on hardware governance and chips in particular (as is FAI).

Given their goals and approach, funding from outside the traditional ecosystem sources would be extra helpful; ideally such efforts would be fully distinct from OpenPhil.

With the shifting landscape and what I’ve observed, I’m moving them up to high confidence and priority.

Donate here, or contact them at [email protected].

AI Standards Lab (Holtman Research)

Focus: Accelerating the writing of AI safety standards

Leaders: Koen Holtman and Chin Ze Shen

Funding Needed: Medium

Confidence Level: High

They help facilitate the writing of AI safety standards, for EU/UK/USA, including on the recent EU Code of Practice. They have successfully gotten some of their work officially incorporated, and another recommender with a standards background was impressed by the work and team.

This is one of the many things that someone has to do, and where, if you step up and do it when no one else will, that can go pretty great. Having now been involved in bill minutiae myself, I know it is thankless work, and that it can really matter, both for public and private standards, and they plan to pivot somewhat to private standards.

I’m raising my confidence to high that this is at least a good pick, if you want to fund the writing of standards.

To donate, go here or reach out to [email protected].

Safe AI Forum

Focus: International AI safety conferences

Leaders: Fynn Heide and Sophie Thomson

Funding Needed: Medium

Confidence Level: Low

They run the IDAIS series of conferences, including successful ones involving China. I do wish I had a better model of what makes such a conference actually matter versus not mattering, but these sure seem like they should matter, and they are certainly well worth the cost to run.

To donate, contact them using the form at the bottom of the page here.

Center For Long Term Resilience

Focus: UK Policy Think Tank focusing on ‘extreme AI risk and biorisk policy.’

Leader: Angus Mercer

Funding Needed: High

Confidence Level: Low

The UK has shown promise in its willingness to shift its AI regulatory focus to frontier models in particular. It is hard to know how much of that shift to attribute to any particular source, or otherwise measure how much impact there has been or might be on final policy.

They have endorsements of their influence from philosopher Toby Ord, Former Special Adviser to the UK Prime Minister Logan Graham, and Senior Policy Adviser Nitarshan Rajkumar.

I reached out to a source with experience in the UK government who I trust, and they reported back that they are a fan and pointed to some good things they’ve helped with. There was a general consensus that they do good work, and those who investigated were impressed.

However, I have concerns. Their funding needs are high, and they are competing against many others in the policy space, many of which have very strong cases. I also worry their policy asks are too moderate, which might be an advantage for others.

My lower confidence this year is a combination of worries about moderate asks, worry about organizational size, and worries about the shift in governments in the UK and the UK’s ability to have real impact elsewhere. But if you buy the central idea of this type of lobbying through the UK and are fine with a large budget, go for it.

Donate here, or reach out to [email protected].

Simon Institute for Longterm Governance

Focus: Foundations and demand for international cooperation on AI governance and differential tech development

Leaders: Konrad Seifert and Maxime Stauffer

Funding Needed: High

Confidence Level: Low

As with all things diplomacy, it is hard to tell the difference between a lot of talk and things that are actually useful. Things often look the same either way for a long time. A lot of their focus is on the UN, so update either way based on how useful you think that approach is, and also that makes it even harder to get a good read.

They previously had a focus on the Global South and are pivoting to China, which seems like a more important focus.

To donate, scroll down on this page to access their donation form, or contact them at [email protected].

Legal Advocacy for Safe Science and Technology

Focus: Legal team for lawsuits on catastrophic risk and to defend whistleblowers.

Leader: Tyler Whitmer

Funding Needed: Medium

Confidence Level: Medium

I wasn’t sure where to put them, but I suppose lawsuits are kind of policy by other means in this context, or close enough?

I buy that the core idea of having a legal team on standby for catastrophic risk related legal action, in case things get real quickly, is a good one, and I haven’t heard anyone else propose this, although I do not feel qualified to vet the operation. They were one of the organizers of the NotForPrivateGain.org campaign against the OpenAI restructuring.

I definitely buy the idea of an AI Safety Whistleblower Defense Fund, which they are also doing. Knowing there will be someone to step up and help if it comes to that changes the dynamics in helpful ways.

Donors who are interested in making relatively substantial donations or grants should contact [email protected], for smaller amounts click here.

Institute for Law and AI

Focus: Legal research on US/EU law on transformational AI, fellowships, talent

Leader: Moritz von Knebel

Involved: Gabe Weil

Funding Needed: High

Confidence Level: Low

I’m confident that they should be funded at all; the question is whether this should be scaled up quite a lot, and what aspects of this would scale in what ways. If you can be convinced that the scaling plans are worthwhile, this could justify a sizable donation.

Donate here, or contact them at [email protected].

Macrostrategy Research Institute

Focus: Amplify Nick Bostrom

Leader: Toby Newberry

Funding Needed: High

Confidence Level: Low

If you think Nick Bostrom is doing great work and want him to be more effective, then this is a way to amplify that work. In general, ‘give top people support systems’ seems like a good idea that is underexplored.

Get in touch at [email protected].

Secure AI Project

Focus: Advocacy for public safety and security protocols (SSPs) and related precautions

Leader: Nick Beckstead

Funding Needed: High

Confidence Level: High

I’ve had the opportunity to consult and collaborate with them and I’ve been consistently impressed. They’re the real deal, they pay attention to detail and care about making it work for everyone, and they’ve got results. I’m a big fan.

Donate here, or contact them at [email protected].

Organizations Doing ML Alignment Research

This category should be self-explanatory. Unfortunately, a lot of good alignment work still requires charitable funding. The good news is that (even more than last year when I wrote the rest of this introduction) there is a lot more funding, and willingness to fund, than there used to be, and also the projects generally look more promising.

The great thing about interpretability is that you can be confident you are dealing with something real. The not as great thing is that this can draw too much attention to interpretability, and that you can fool yourself into thinking that All You Need is Interpretability.

The good news is that several solid places can clearly take large checks.

I didn’t investigate too deeply on top of my existing knowledge here in 2024, because at SFF I had limited funds and decided that direct research support wasn’t a high enough priority, partly due to it being sufficiently legible.

We should be able to find money previously on the sidelines eager to take on many of these opportunities. Lab employees are especially well positioned, due to their experience and technical knowledge and connections, to evaluate such opportunities, and also to provide help with access and spreading the word.

Model Evaluation and Threat Research (METR)

Formerly ARC Evaluations.

Focus: Model evaluations

Leaders: Beth Barnes, Chris Painter

Funding Needed: High

Confidence Level: High

Originally I wrote that we hoped to be able to get large funding for METR via non-traditional sources. That happened last year, and METR got major funding. That’s great news. Alas, they once again have to hit the fundraising trail.

METR has proven to be the gold standard for outside evaluations of potentially dangerous frontier model capabilities, and has proven its value even more so in 2025.

We very much need these outside evaluations, and to give the labs every reason to use them and no excuse not to use them, and their information has been invaluable. In an ideal world the labs would be fully funding METR, but they’re not.

So this becomes a place where we can confidently invest quite a bit of capital, make a legible case for why it is a good idea, and know it will probably be well spent.

If you can direct fully ‘square’ ‘outside’ funds that need somewhere legible to go and are looking to go large? I love METR for that.

To donate, click here. They can be contacted at [email protected].

Alignment Research Center (ARC)

Focus: Theoretically motivated alignment work

Leader: Jacob Hilton

Funding Needed: Medium

Confidence Level: High

There’s a long track record of good work here, and Paul Christiano remained excited as of 2024. If you are looking to fund straight up alignment work and don’t have a particular person or small group in mind, this is certainly a safe bet to put additional funds to good use and attract good talent.

Donate here, or reach out to [email protected].

Apollo Research

Focus: Scheming, evaluations, and governance

Leader: Marius Hobbhahn

Funding Needed: Medium

Confidence Level: High

This is an excellent thing to focus on, and one of the places we are most likely to be able to show ‘fire alarms’ that make people sit up and notice. Their first year seems to have gone well; one example is their presentation at the UK safety summit showing that LLMs can strategically deceive their primary users when put under pressure. They will need serious funding to fully do the job in front of them; hopefully, like METR, they can be helped by the task being highly legible.

They suggest looking at this paper, and also this one. I can verify that they are the real deal and doing the work.

To donate, reach out to [email protected].

Cybersecurity Lab at University of Louisville

Focus: Support for Roman Yampolskiy’s lab and work

Leader: Roman Yampolskiy

Funding Needed: Low

Confidence Level: High

Roman Yampolskiy is the most pessimistic known voice about our chances of not dying from AI, and has brought that perspective to major platforms like Joe Rogan and Lex Fridman. He’s working on a book and wants to support PhD students.

Supporters can make a tax deductible gift to the University, specifying that they intend to fund Roman Yampolskiy and the Cyber Security lab.

Timaeus

Focus: Interpretability research

Leaders: Jesse Hoogland, Daniel Murfet, Stan van Wingerden

Funding Needed: High

Confidence Level: High

Timaeus focuses on interpretability work and sharing their results. The set of advisors is excellent, including Davidad and Evan Hubinger. Evan, John Wentworth and Vanessa Kosoy have offered high praise, and there is evidence they have impacted top lab research agendas. They’ve done what I think is solid work, although I am not so great at evaluating papers directly.

If you’re interested in directly funding interpretability research, that all makes this seem like a slam dunk. I’ve confirmed that this all continues to hold true in 2025.

To donate, get in touch with Jesse at [email protected]. If this is the sort of work that you’re interested in doing, they also have a discord at http://devinterp.com/discord.

Simplex

Focus: Mechanistic interpretability of how inference breaks down

Leaders: Paul Riechers and Adam Shai

Funding Needed: Medium

Confidence Level: High

I am not as high on them as I am on Timaeus, but they have given reliable indicators that they will do good interpretability work. I’d (still) feel comfortable backing them.

Donate here, or contact them via webform.

Far AI

Focus: Interpretability and other alignment research, incubator, hits based approach

Leader: Adam Gleave

Funding Needed: High

Confidence Level: Medium

They take the hits based approach to research, which is correct. I’ve gotten confirmation that they’re doing the real thing here. In an ideal world everyone doing the real thing would get supported, and they’re definitely still funding constrained.

To donate, click here. They can be contacted at [email protected].

Alignment in Complex Systems Research Group

Focus: AI alignment research on hierarchical agents and multi-system interactions

Leader: Jan Kulveit

Funding Needed: Medium

Confidence Level: High

I liked ACS last year, and since then we’ve seen Gradual Disempowerment and other good work, which means this now falls into the category ‘this having funding problems would be an obvious mistake.’ I ranked them very highly in SFF, and there should be a bunch more funding room.

To donate, reach out to [email protected], and note that you are interested in donating to ACS specifically.

Apart Research

Focus: AI safety hackathons, MATS-style programs and AI safety horizon scanning.

Leaders: Esben Kran, Jason Schreiber

Funding Needed: Medium

Confidence Level: Low

I’m (still) confident in their execution of the hackathon idea, which was the central pitch at SFF, although they inform me they’re now more centrally focused on the MATS-style programs. My doubt for the hackathons is on the level of ‘is AI safety something that benefits from hackathons.’ Is this something one can, as it were, hack together usefully? Are the hackathons doing good counterfactual work? Or is this a way to flood the zone with more variations on the same ideas?

As with many orgs on the list, this one makes sense if and only if you buy the plan, and is one of those ‘I’m not excited but can see it being a good fit for someone else.’

To donate, click here. They can be reached at [email protected].

Transluce

Focus: Specialized superhuman systems for understanding and overseeing AI

Leaders: Jacob Steinhardt, Sarah Schwettmann

Funding Needed: High

Confidence Level: Medium

Last year they were a new org. They have since grown to 14 people, have a solid track record, and want to keep growing. I have confirmation the team is credible. The plan for scaling themselves is highly ambitious, with planned scale well beyond what SFF can fund. I haven’t done anything like the investigation into their plans and capabilities you would need before placing a bet that big, as AI research of all kinds gets expensive quickly.

If there is sufficient appetite to scale the amount of privately funded direct work of this type, then this seems like a fine place to look. I am optimistic on them finding interesting things, although on a technical level I am skeptical of the larger plan.

To donate, reach out to [email protected].

Organizations Doing Other Technical Work

AI Analysts @ RAND

Focus: Developing ‘AI analysts’ that can assist policy makers.

Leader: John Coughlan

Funding Needed: High

Confidence Level: Medium

This is a thing that RAND should be doing and that should exist. There are obvious dangers here, but I don’t think this makes them substantially worse and I do think this can potentially improve policy a lot. RAND is well placed to get the resulting models to be actually used. That would enhance state capacity, potentially quite a bit.

The problem is that doing this is not cheap, and while funding this shouldn’t fall to those reading this, it plausibly does. This could be a good place to consider sinking quite a large check, if you believe in the agenda.

Donate here.

Organizations Doing Math, Decision Theory and Agent Foundations

Right now it looks likely that AGI will be based around large language models (LLMs). That doesn’t mean this is inevitable. I would like our chances better if we could base our ultimate AIs around a different architecture, one that was more compatible with being able to get it to do what we would like it to do.

One path for this is agent foundations, which involves solving math to make the programs work instead of relying on inscrutable giant matrices.

Even if we do not manage that, decision theory and game theory are potentially important for navigating the critical period in front of us, for life in general, and for figuring out what the post-transformation AI world might look like, and thus what choice we make now might do to impact that.

There are not that many people working on these problems. Actual Progress would be super valuable. So even if we expect the median outcome does not involve enough progress to matter, I think it’s still worth taking a shot.

The flip side is you worry about people ‘doing decision theory into the void’ where no one reads their papers or changes their actions. That’s a real issue. As is the increased urgency of other options. Still, I think these efforts are worth supporting, in general.

Orthogonal

Focus: AI alignment via agent foundations

Leader: Tamsin Leake

Funding Needed: Medium

Confidence Level: High

I have funded Orthogonal in the past. They are definitely doing the kind of work that, if it succeeded, might actually amount to something, and would help us get through this to a future world we care about. It’s a long shot, but a long shot worth trying. They very much have the ‘old school’ Yudkowsky view that relatively hard takeoff is likely and most alignment approaches are fool’s errands. My sources are not as enthusiastic as they once were, but there are only a handful of groups trying that have any chance at all, and this still seems like one of them.

Donate here, or get in touch at [email protected].

Topos Institute

Focus: Math for AI alignment

Leaders: Brendan Fong and David Spivak.

Funding Needed: High

Confidence Level: High

Topos is essentially Doing Math to try and figure out what to do about AI and AI Alignment. I’m very confident that they are qualified to (and actually will) turn donated money (partly via coffee) into math, in ways that might help a lot. I am also confident that the world should allow them to attempt this.

They’re now working with ARIA. That seems great.

Ultimately it all likely amounts to nothing, but the upside potential is high and the downside seems very low. I’ve helped fund them in the past and am happy about that.

To donate, go here, or get in touch at [email protected].

Eisenstat Research

Focus: Two people doing research at MIRI, in particular Sam Eisenstat

Leader: Sam Eisenstat

Funding Needed: Medium

Confidence Level: High

Given Sam Eisenstat’s previous work, including from 2025, it seems worth continuing to support him, including supporting researchers. I still believe in this stuff being worth working on, obviously only support if you do as well. He’s funded for now but that’s still only limited runway.

To donate, contact [email protected].

AFFINE Algorithm Design

Focus: Johannes Mayer does agent foundations work

Leader: Johannes Mayer

Funding Needed: Low

Confidence Level: Medium

Johannes Mayer does solid agent foundations work, and more funding would allow him to hire more help.

To donate, contact [email protected].

CORAL (Computational Rational Agents Laboratory)

Focus: Examining intelligence

Leader: Vanessa Kosoy

Funding Needed: Medium

Confidence Level: High

This is Vanessa Kosoy and Alex Appel, who have another research agenda formerly funded by MIRI that now needs to stand on its own after their refocus. I once again believe this work to be worth continuing even if the progress isn’t what one might hope. I wish I had the kind of time it takes to actually dive into these sorts of theoretical questions, but alas I do not, or at least I’ve made a triage decision not to.

To donate, click here. For larger amounts contact directly at [email protected]

Mathematical Metaphysics Institute

Focus: Searching for a mathematical basis for metaethics.

Leader: Alex Zhu

Funding Needed: Low

Confidence Level: Low

Alex Zhu has run iterations of the Math & Metaphysics Symposia, which had some excellent people in attendance, and intends partly to do more things of that nature. He thinks eastern philosophy contains much wisdom relevant to developing a future ‘decision-theoretic basis of metaethics’ and plans on an 8+ year project to do that.

I’ve seen plenty of signs that the whole thing is rather bonkers, but also strong endorsements from a bunch of people I trust that there is good stuff here, and the kind of crazy that is sometimes crazy enough to work. So there’s a lot of upside. If you think this kind of approach has a chance of working, this could be very exciting. For additional information, you can see this google doc.

To donate, message Alex at [email protected].

Focal at CMU

Focus: Game theory for cooperation by autonomous AI agents

Leader: Vincent Conitzer

Funding Needed: Medium

Confidence Level: Low

This is an area MIRI and the old rationalist crowd thought about a lot back in the day. There are a lot of ways for advanced intelligences to cooperate that are not available to humans, especially if they are capable of doing things in the class of sharing source code or can show their decisions are correlated with each other.

With sufficient capability, any group of agents should be able to act as if it is a single agent, and we shouldn’t need to do the game theory for them in advance either. I think it’s good things to be considering, but one should worry that even if they do find answers it will be ‘into the void’ and not accomplish anything. Based on my technical analysis I wasn’t convinced Focal was going to sufficiently interesting places with it, but I’m not at all confident in that assessment.

They note they’re also interested in the dynamics prior to AI becoming superintelligent, as the initial conditions plausibly matter a lot.

To donate, reach out to Vincent directly at [email protected] to be guided through the donation process.

Organizations Doing Cool Other Stuff Including Tech

This section is the most fun. You get unique projects taking big swings.

ALLFED

Focus: Feeding people with resilient foods after a potential nuclear war

Leader: David Denkenberger

Funding Needed: High

Confidence Level: Medium

As far as I know, no one else is doing the work ALLFED is doing. A resilient food supply ready to go in the wake of a nuclear war (or other major disaster with similar dynamics) could be everything. There’s a small but real chance that the impact is enormous. In my 2021 SFF round, I went back and forth with them several times over various issues, ultimately funding them; you can read about those details here.

I think all of the concerns and unknowns from last time essentially still hold, as does the upside case, so it’s a question of prioritization, how likely you view nuclear war scenarios and how much promise you see in the tech.

If you are convinced by the viability of the tech and ability to execute, then there’s a strong case that this is a very good use of funds.

I think that this is a relatively better choice if you expect AI to remain a normal technology for a while or if your model of AI risks includes a large chance of leading to a nuclear war or other cascading impacts to human survival, versus if you don’t think this.

Research and investigation on the technical details seems valuable here. If we do have a viable path to alternative foods and don’t fund it, that’s a pretty large miss, and I find it highly plausible that this could be super doable and yet not otherwise done.

Donate here, or reach out to [email protected].

Good Ancestor Foundation

Focus: Collaborations for tools to increase civilizational robustness to catastrophes

Leader: Colby Thompson

Funding Needed: High

Confidence Level: High

The principle of ‘a little preparation now can make a huge difference to resilience and robustness in a disaster later, so it’s worth doing even if the disaster is not so likely’ generalizes. Thus, the Good Ancestor Foundation, targeting nuclear war, solar flares, internet and cyber outages, and some AI scenarios and safety work.

A particular focus is archiving data and tools, enhancing synchronization systems and designing a novel emergency satellite system (first one goes up in June) to help with coordination in the face of disasters. They’re also coordinating on hardening critical infrastructure and addressing geopolitical and human rights concerns.

They’ve also given out millions in regrants.

One way I know they make good decisions is they continue to help facilitate the funding for my work, and make that process easy. They have my sincerest thanks. Which also means there is a conflict of interest, so take that into account.

Donate here, or contact them at [email protected].

Charter Cities Institute

Focus: Building charter cities

Leader: Kurtis Lockhart

Funding Needed: Medium

Confidence Level: Medium

I do love charter cities. There is little question they are attempting to do a very good thing and are sincerely going to attempt to build a charter city in Africa, where such things are badly needed. Very much another case where it is great that someone is attempting this so people can enjoy better institutions, even if it is not the version I would prefer, which would focus more on regulatory arbitrage.

Seems like a great place for people who don’t think transformational AI is on its way but do understand the value here.

Donate to them here, or contact them via webform.

Carbon Copies for Independent Minds

Focus: Whole brain emulation

Leader: Randal Koene

Funding Needed: Medium

Confidence Level: Low

At this point, if it worked in time to matter, I would be willing to roll the dice on emulations. What I don’t have is much belief that it will work, or the time to do a detailed investigation into the science. So flagging here, because if you look into the science and you think there is a decent chance, this becomes a good thing to fund.

Donate here, or contact them at [email protected].

Organizations Focused Primarily on Bio Risk

Secure DNA

Focus: Scanning DNA synthesis for potential hazards

Leaders: Kevin Esvelt, Andrew Yao and Raphael Egger

Funding Needed: Medium

Confidence Level: Medium

It is certainly an excellent idea: give everyone fast, free, cryptographic screening of potential DNA synthesis to ensure no one is trying to create something we do not want anyone to create. AI only makes this concern more urgent. I didn’t have time to investigate and confirm this is the real deal, as I had other priorities even if it is, but certainly someone should be doing this.

There is also another related effort, Secure Bio, if you want to go all out. I would fund Secure DNA first.

To donate, contact them at [email protected].

Blueprint Biosecurity

Focus: Increasing capability to respond to future pandemics, Next-gen PPE, Far-UVC.

Leader: Jake Swett

Funding Needed: Medium

Confidence Level: Medium

There is no question we should be spending vastly more on pandemic preparedness, including far more on developing and stockpiling superior PPE and on Far-UVC. It is rather shameful that we are not doing that, and Blueprint Biosecurity plausibly can move substantial additional investment there. I’m definitely all for that.

To donate, reach out to [email protected] or head to the Blueprint Bio PayPal Giving Fund.

Pour Domain

Focus: EU policy for AI enabled biorisks, among other things.

Leader: Patrick Stadler

Funding Needed: Low

Confidence Level: Low

Everything individually looks worthwhile but also rather scattershot. Then again, who am I to complain about a campaign for e.g. improved air quality? My worry is still that this is a small operation trying to do far too much, some of it that I wouldn’t rank too high as a priority, and it needs more focus, on top of not having that clear big win yet. They are a French nonprofit.

Donation details are at the very bottom of this page, or you can contact them at [email protected].

ALTER Israel

Focus: AI safety and biorisk for Israel

Leader: David Manheim

Funding Needed: Low

Confidence Level: Medium

Israel has Ilya’s company SSI (Safe Superintelligence) and otherwise often punches above its weight in such matters but is getting little attention. This isn’t where my attention is focused but David is presumably choosing this focus for good reason.

To support them, get in touch at [email protected].

Organizations That Can Advise You Further

The first best solution, as I note above, is to do your own research, form your own priorities and make your own decisions. This is especially true if you can find otherwise illegible or hard-to-fund prospects.

However, your time is valuable and limited, and others can be in better positions to advise you on key information and find opportunities.

Another approach to this problem, if you have limited time or actively want to not be in control of these decisions, is to give to regranting organizations, and take the decisions further out of your own hands.

Effective Institutions Project (EIP) (As A Donation Advisor)

Focus: AI governance, advisory and research, finding how to change decision points

Leader: Ian David Moss

Confidence Level: High

I discussed their direct initiatives earlier. This is listing them as a donation advisor and in their capacity of attempting to be a resource to the broader philanthropic community.

They report that they are advising multiple major donors, and would welcome the opportunity to advise additional major donors. I haven’t had the opportunity to review their donation advisory work, but what I have seen in other areas gives me confidence. They specialize in advising donors who have broad interests across multiple areas; they list AI safety, global health, democracy, and peace and security.

To donate, click here. If you have further questions or would like to be advised, contact them at [email protected].

Longview Philanthropy

Focus: Conferences and advice on x-risk for those giving >$1 million per year

Leader: Simran Dhaliwal

Funding Needed: None

Confidence Level: Low

Longview is not seeking funding; instead they are offering support to large donors, and you can give to their regranting funds, including the Emerging Challenges Fund on catastrophic risks from emerging tech, which focuses non-exclusively on AI.

I had a chance to hear a pitch for them at The Curve and check out their current analysis and donation portfolio. It was a good discussion. There were definitely some areas of disagreement in both decisions and overall philosophy, and I worry they’ll be too drawn to the central and legible (a common issue with such services).

On the plus side, they’re clearly trying, and their portfolio definitely had some good things in it. So I wouldn’t want to depend on them or use them as a sole source if I had the opportunity to do something higher effort, but if I was donating on my own I’d find their analysis useful. If you’re considering relying heavily on them or donating to the funds, I’d look at the fund portfolios in detail and see what you think.

I pointed them to some organizations they hadn’t had a chance to evaluate yet.

They clearly seem open to donations aimed at particular RFPs or goals.

To inquire about their services, contact them at [email protected].

Organizations That then Regrant to Fund Other Organizations

There were lots of great opportunities in SFF in both of my recent rounds. I was going to have an embarrassment of riches I was excited to fund.

Thus I decided quickly that I would not be funding any regranting organizations. If you were in the business of taking in money and then shipping it out to worthy causes, well, I could ship directly to highly worthy causes.

So there was no need to have someone else do that, or expect them to do better.

That does not mean that others should not consider such donations.

I see three important advantages to this path.

  1. Regranters can offer smaller grants that are well-targeted.
  2. Regranters save you a lot of time.
  3. Regranters avoid having others try to pitch on donations.

Thus, if you are making a ‘low effort’ donation, and trust others who share your values to invest more effort, it makes more sense to consider regranters.

In particular, if you’re looking to go large, I’ve been impressed by SFF itself, and there’s room for SFF to scale both its amounts distributed and level of rigor.

SFF Itself (!)

Focus: Give out grants based on recommenders, primarily to 501c(3) organizations

Leaders: Andrew Critch and Jaan Tallinn

Funding Needed: High

Confidence Level: High

If I had to choose a regranter right now to get a large amount of funding, my pick would be to partner with and participate in the SFF process as an additional funder. The applicants and recommenders are already putting in their effort, with plenty of room for each round to scale. It is very clear there are plenty of exciting places to put additional funds.

With more funding, the decisions could improve further, as recommenders would be better motivated to devote more time, and we could use a small portion of additional funds to make them better resourced.

The downside is that SFF can’t ‘go small’ efficiently on either funders or causes.

SFF does not accept donations but they are interested in partnerships with people or institutions who are interested in participating as a Funder in a future S-Process round. The minimum requirement for contributing as a Funder to a round is $250k. They are particularly interested in forming partnerships with American donors to help address funding gaps in 501(c)(4)’s and other political organizations.

This is a good choice if you’re looking to go large and not looking to ultimately funnel towards relatively small funding opportunities or individuals.

Manifund

Focus: Regranters to AI safety, existential risk, EA meta projects, creative mechanisms

Leader: Austin Chen (austin at manifund.org).

Funding Needed: Medium

Confidence Level: Medium

This is a regranter that gives its money to its own regranters, one of which was me, for unrestricted grants. They’re the charity donation offshoot of Manifold. They’ve played with crowdfunding, and with impact certificates, and ACX grants. They help run Manifest.

You’re essentially hiring these people to keep building a website and trying alternative funding allocation mechanisms, and for them to trust the judgment of selected regranters. That seems like a reasonable thing to do if you don’t otherwise know where to put your funds and want to fall back on a wisdom of crowds of sorts. Or, perhaps, if you actively want to fund the cool website.

Manifold itself did not apply, but I would think that would also be a good place to invest or donate in order to improve the world. It wouldn’t even be crazy to go around subsidizing various markets. If you send me manna there, I will set aside and use that manna to subsidize markets when it seems like the place to do that.

If you want to support Manifold itself, you can either donate or buy a SAFE by contacting Austin at [email protected].

Also I’m a regranter at Manifund, so if you wanted to, you could use that to entrust me with funds to regrant. As you can see I certainly feel I have plenty of good options here if I can’t find a better local one, and if it’s a substantial amount I’m open to general directions (e.g. ensuring it happens relatively quickly, or a particular cause area as long as I think it’s net positive, or the method of action or theory of impact). However, I’m swamped for time, so I’d probably rely mostly on what I already know.

AI Risk Mitigation Fund

Focus: Spinoff of LTFF, grants for AI safety projects

Leader: Thomas Larsen

Funding Needed: Medium

Confidence Level: High

Seems very straightforwardly exactly what it is, a regranter that is usually in the low six figure range. Fellow recommenders were high on Larsen’s ability to judge projects. If you think this is better than you can do on your own and you want to fund such projects, then go for it.

I’ve talked to them on background about their future plans and directions, and without sharing details their plans make me more excited here.

Donate here or contact them at [email protected].

Long Term Future Fund

Focus: Grants of 4-6 figures mostly to individuals, mostly for AI existential risk

Leader: Caleb Parikh (among other fund managers)

Funding Needed: High

Confidence Level: Low

The pitch on LTFF is that it is a place for existential risk people who need modest cash infusions to ask for them, and to get them without too much overhead or distortion. Looking over the list of grants, there is at least a decent hit rate.

One question is, are the marginal grants a lot less effective than the average grant?

My worry is that I don’t know the extent to which the process is accurate, fair, favors insiders or extracts a time or psychic tax on participants, favors legibility, or rewards ‘being in the EA ecosystem’ or especially the extent to which the net effects are distortionary and bias towards legibility and standardized efforts. Or the extent to which people use the system to extract funds without actually doing anything.

That’s not a ‘I think this is bad,’ it is a true ‘I do not know.’ I doubt they know either.

What do we know? They say applications should take 1-2 hours to write and between 10 minutes and 10 hours to evaluate, although that does not include time forming the plan, and this is anticipated to be an ~yearly process long term. And I don’t love that this concern is not listed under reasons not to choose to donate to the fund (although the existence of that list at all is most welcome, and the reasons to donate don’t consider the flip side either).

Given their current relationship to EA funds, you likely should consider LTFF if and only if you both want to focus on AI existential risk via regrants and also want to empower and strengthen the existing EA formal structures and general ways of being.

That’s not my preference, but it could be yours.

Donate here, or contact the fund managers at [email protected].

Foresight

Focus: Regrants, fellowships and events

Leader: Allison Duettmann

Funding Needed: Medium

Confidence Level: Low

Foresight also does other things. I’m focusing here on their AI existential risk grants, which they offer on a rolling basis. I’ve advised them on a small number of potential grants, but they rarely ask.

The advantage on the regrant side would be to get outreach that wasn’t locked too tightly into the standard ecosystem. The other Foresight activities all seem clearly like good things, but the bar these days is high and since they weren’t the topic of the application I didn’t investigate.

Donate here, or reach out to [email protected].

Centre for Enabling Effective Altruism Learning & Research (CEELAR)

Focus: Strategic incubator and launchpad for EA talent, research, and high-impact initiatives, with emphasis on AI safety, GCR reduction, and longtermist work

Leader: Attila Ujvari

Funding Needed: High

Confidence Level: Low

I loved the simple core concept of a ‘catered hotel’ where select people can go to be supported in whatever efforts seem worthwhile. They are now broadening their approach, scaling up and focusing on logistical and community supports, incubation and a general infrastructure play on top of their hotel. This feels less unique to me now and more of a typical (EA UK) community play, so you should evaluate it on that basis.

Donate here, or reach out to [email protected].

Organizations That are Essentially Talent Funnels

I am less skeptical of prioritizing AI safety talent funnels than I was last year, but I remain skeptical.

The central reason remains simple. If we have so many good organizations already, in need of so much funding, why do we need more talent funnels? Is talent our limiting factor? Are we actually in danger of losing important talent?

The clear exception is leadership and management. There remains, it appears, a clear shortage of leadership and management talent across all charitable space, and startup space, and probably flat out all of space.

Which means if you are considering stepping up and doing leadership and management, then that is likely more impactful than you might at first think.

If there was a strong talent funnel specifically for leadership or management, that would be a very interesting funding opportunity. And yes, of course there still need to be some talent funnels. Right now, my guess is we have enough, and marginal effort is best spent elsewhere.

What about for other talent? What about placements in government, or in the AI labs especially Anthropic of people dedicated to safety? What about the prospects for much higher funding availability by the time we are ready to put people to work?

If you can pull it off, empowering talent can have a large force multiplier, and the opportunity space looks better than a year ago. It seems plausible that frontier labs will soak up every strong safety candidate they can find, since the marginal returns there are very high and needs are growing rapidly.

Secondary worries include the danger you end up feeding capability researchers to AI labs, and the discount for the time delays involved.

My hunch is this will still receive relatively more attention and funding than is optimal, but marginal funds here will still be useful if deployed in places that are careful to avoid being lab talent funnels.

AI Safety Camp

Focus: Learning by doing, participants work on a concrete project in the field

Leaders: Remmelt Ellen, Linda Linsefors, and Robert Kralisch

Funding Needed: Low

Confidence Level: High

By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back something in this section, I’d start here.

Donors can reach out to Remmelt at [email protected], or leave a matched donation to support next projects.

Center for Law and AI Risk

Focus: Paying academics small stipends to move into AI safety work

Leaders: Peter Salib (psalib @ central.uh.edu), Yonathan Arbel (yarbel @ law.ua.edu) and Kevin Frazier (kevin.frazier @ law.utexas.edu).

Funding Needed: Low

Confidence Level: High

This strategy is potentially super efficient. You have an academic that is mostly funded anyway, and they respond to remarkably small incentives to do something they are already curious about doing. Then maybe they keep going, again with academic funding. If you’re going to do ‘field building’ and talent funnel in a world short on funds for those people, this is doubly efficient. I like it. They’re now moving into hiring an academic fellow, the theory being ~1 year of support to create a permanent new AI safety law professor.

To donate, message one of leaders at the emails listed above.

Speculative Technologies

Focus: Enabling ambitious research programs that are poor fits for both academia and VC-funded startups including but not limited to Drexlerian functional nanomachines, high-throughput tools and discovering new superconductors.

Leader: Benjamin Reinhardt

Funding Needed: Medium

Confidence Level: Medium

I have confirmation that Reinhardt knows his stuff, and we certainly could use more people attempting to build revolutionary hardware. If the AI is scary enough to make you not want to build the hardware, it would figure out how to build the hardware anyway. You might as well find out now.

If you’re looking to fund a talent funnel, this seems like a good choice.

To donate, go here or reach out to [email protected].

Talos Network

Focus: Fellowships to other organizations, such as Future Society, Safer AI and FLI.

Leader: Chiara Gerosa

Funding Needed: Medium

Confidence Level: Low

They run two fellowship cohorts a year. They seem to place people into a variety of solid organizations, and are exploring the ability to get people into various international organizations like the OECD, UN or European Commission or EU AI Office.

The more I am convinced people will actually get inside meaningful government posts, the more excited I will be.

To donate, contact [email protected].

MATS Research

Focus: Researcher mentorship for those new to AI safety.

Leaders: Ryan Kidd and Christian Smith.

Funding Needed: High

Confidence Level: Medium

MATS is by all accounts very good at what they do and they have good positive spillover effects on the surrounding ecosystem. The recruiting classes they’re getting are outstanding.

If (and only if) you think that what they do, which is support would-be alignment researchers starting out and especially transitioning from other professions, is what you want to fund, then you should absolutely fund them. That’s a question of prioritization.

Donate here, or contact them via webform.

Epistea

Focus: X-risk residencies, workshops, coworking in Prague, fiscal sponsorships

Leader: Irena Kotikova

Funding Needed: Medium

Confidence Level: Medium

I see essentially two distinct things here.

First, you have the umbrella organization, offering fiscal sponsorship for other organizations. Based on what I know from the charity space, this is a highly valuable service – it was very annoying getting Balsa a fiscal sponsor while we waited to become a full 501c3, even though we ultimately found a very good one that did us a solid, and also annoying figuring out how to be on our own going forward.

Second, you have various projects around Prague, which seem like solid offerings in that class of action of building up EA-style x-risk actions in the area, if that is what you are looking for. So you’d be supporting some mix of those two things.

To donate, contact [email protected].

Emergent Ventures

Focus: Small grants to individuals to help them develop their talent

Leader: Tyler Cowen

Funding Needed: Medium

Confidence Level: High

Emergent Ventures are not like the other talent funnels in several important ways.

  1. It’s not about AI Safety. You can definitely apply for an AI Safety purpose, he’s granted such applications in the past, but it’s rare and topics run across the board, well beyond the range otherwise described in this post.
  2. Decisions are quick and don’t require paperwork or looking legible. Tyler Cowen makes the decision, and there’s no reason to spend much time on your end either.
  3. There isn’t a particular cause area this is trying to advance. He’s not trying to steer people to do any particular thing. Just to be more ambitious, and be able to get off the ground and build connections and so on. It’s not prescriptive.

I strongly believe this is an excellent way to boost the development of more talent, as long as money is serving as a limiting factor on the project, and that it is great to develop talent even if you don’t get to direct or know where it is heading. Sure, I get into rhetorical arguments with Tyler Cowen all the time, around AI and also other things, and we disagree strongly about some of the most important questions where I don’t understand how he can continue to have the views he expresses, but this here is still a great project, an amazingly cost-efficient intervention.

Donate here (specify “Emergent Ventures” in notes), or reach out to [email protected].

AI Safety Cape Town

Focus: AI safety community building and research in South Africa

Leaders: Leo Hyams and Benjamin Sturgeon

Funding Needed: Low

Confidence Level: Low

This is a mix of AI research and building up the local AI safety community. One person whose opinion I value gave the plan and those involved in it a strong endorsement, so including it based on that.

To donate, reach out to [email protected].

ILINA Program

Focus: Talent for AI safety in Africa

Leader: Cecil Abungu

Funding Needed: Low

Confidence Level: Low

I have a strong endorsement in hand in terms of their past work, if you think this is a good place to go in search of talent.

To donate, reach out to [email protected].

Impact Academy Limited

Focus: Global talent accelerator and hiring partner for technical AI safety, supporting worker transitions into AI safety.

Leaders: Roy Hagemann and Varun Agarwal

Funding Needed: Medium

Confidence Level: Low

They previously focused on India, one place with lots of talent; they’re now global. A lot has turned over in the last year, so you’ll want to check them out anew.

To donate, contact [email protected].

Atlas Computing

Focus: Mapping & creating missing orgs for AI safety (aka Charity Entrepreneurship for AI risk)

Leader: Evan Miyazono

Funding Needed: Medium

Confidence Level: Low

There was a pivot this past year from technical research to creating ‘missing orgs’ in the AI risk space. That makes sense as a strategy if and only if you expect the funding necessary to come in, or you think they can do especially strong targeting. Given the change they will need to be reevaluated.

They receive donations from here, or you can email them at [email protected].

Principles of Intelligence (Formerly PIBBSS)

Focus: Fellowships and affiliate programs for new alignment researchers

Leaders: Lucas Teixeira and Dusan D. Nesic

Funding Needed: High

Confidence Level: Low

There are some hits here. Gabriel Weil in particular has impressed me in our interactions and with his work and they cite a good technical paper. But also that’s with a lot of shots on goal, and I’d have liked to see some bigger hits by now.

A breakdown revealed that, largely because they start with relatively senior people, most of them get placed in a way that doesn’t require additional support. That makes them a better bet than many similar rivals.

To donate, reach out to [email protected], or fund them through Manifund here.

Tarbell Center

Focus: Journalism fellowships for oversight of AI companies.

Leader: Cillian Crosson (Ex-Talos Network; still on their board.)

Funding Needed: High

Confidence Level: Medium

They offer fellowships to support journalism that helps society navigate the emergence of increasingly advanced AI, and a few other journalism ventures. They have sponsored at least one person who went on to do good work in the area. They also sponsor article placement, which seems reasonably priced in the grand scheme of things, I think?

I am not sure this is a place we need to do more investment, or if people trying to do this even need fellowships. Hard to say. There’s certainly a lot more tech reporting and more every day, if I’m ever short of material I have no trouble finding more.

It is still a small amount of money per person that can meaningfully help people get on their feet and do something useful. We do in general need better journalism. They seem to be in a solid place but also I’d be fine with giving a bunch more funding to play with, they seem pretty unique.

Donate here, or reach out to them via webform.

Catalyze Impact

Focus: Incubation of AI safety organizations

Leader: Alexandra Bos

Funding Needed: Medium

Confidence Level: Low

Why funnel individual talent when you can incubate entire organizations? I am not convinced that on the margin we currently need more of either, but I’m more receptive to the idea of an incubator. Certainly incubators can be high leverage points for getting valuable new orgs and companies off the ground, especially if your model is that once the org becomes fundable it can unlock additional funding.

If you think an incubator is worth funding, then the question is whether this is the right team. The application was solid all around, and their track record includes Timaeus and Carma, although counterfactuals are always difficult. Beyond that I don’t have a differentiator on why this is the team.

To donate, contact them at [email protected].

CeSIA within EffiSciences

Focus: New AI safety org in Paris, discourse, R&D collaborations, talent pipeline

Leaders: Charbel-Raphael Segerie, Florent Berthet

Funding Needed: Low

Confidence Level: Low

They’re doing all three of discourse, direct work and talent funnels. They run the only university AI safety course in Europe, maintain the AI Safety Atlas, and have had their recommendations integrated verbatim into the EU AI Act’s Code of Practice. Their two main priorities are supporting the enforcement of the EU AI Act, and driving international agreements on AI red lines.

To donate, go here, or contact them at [email protected].

Stanford Existential Risk Initiative (SERI)

Focus: Recruitment for existential risk causes

Leader: Steve Luby

Funding Needed: Medium

Confidence Level: Low

Stanford students certainly are one place to find people worth educating about existential risk. It’s also an expensive place to be doing it, and a place that shouldn’t need extra funding. And that hates fun. And it’s not great that AI is listed third on their existential risk definition. So I’m not high on them, but it sure beats giving unrestricted funds to your Alma Mater.

Interested donors should contact Steve Luby directly at [email protected].

Non-Trivial

Focus: Talent funnel directly to AI safety and biosecurity out of high school

Leader: Peter McIntyre

Funding Needed: Low

Confidence Level: Low

Having high school students jump straight to research and placement sounds good to me, and plausibly the best version of a talent funnel investment. I haven’t confirmed details but I like the theory.

To donate, get in touch at [email protected].

CFAR

Focus: Teaching rationality skills, seeking to make sense of the world and how to think

Leader: Anna Salamon

Funding Needed: High

Confidence Level: High

I am on the board of CFAR, so there is a direct and obvious conflict. Of course, I am on the board of CFAR exactly because I think this is a worthwhile use of my time, and also because Anna asked me. I’ve been involved in various ways since the beginning, including the discussions about whether and how to create CFAR in the first place.

CFAR is undergoing an attempted revival. There weren’t workshops for many years, for a variety of reasons including safety concerns and also a need to reorient. The workshops are now starting up again, with a mix of both old and new units, and I find much of the new material interesting and potentially valuable. I’d encourage people to consider attending workshops, and also donating.

To donate, click here, or reach out to [email protected].

The Bramble Center

Focus: Workshops in the style of CFAR but focused on practical courage, forming high value relationships between attendees with different skill sets and learning to care for lineages, in the hopes of repairing the anglosphere and creating new capable people to solve our problems including AI in more grounded ways.

Leader: Anna Salamon

Funding Needed: Low

Confidence Level: High

The Bramble Center is kind of a spin-off of CFAR, a place to pursue a different kind of agenda. I absolutely do not have high confidence that this will succeed, but I do have high confidence that this is a gamble worth taking, and that if those involved here (especially Anna Salamon but also others that I know) want to devote their time to trying this, we should absolutely give them that opportunity.

Donate here.

Final Reminders

If an organization was not included here, or was removed for the 2025 edition, again, that does not mean they aren’t good, or even that I wouldn’t endorse them if asked.

It could be because I am not aware of the organization, or lack sufficient knowledge at this point to be confident in listing them, or I fear my knowledge is obsolete.

It could be that they asked to be excluded, which happened in several cases.

If by accident I included you and you didn’t want to be included and I failed to remove you, or you don’t like the quote here, I sincerely apologize and will edit you out right away, no questions asked.

If an organization is included here, that is a good thing, but again, it does not mean you should donate without checking if it makes sense based on what you think is true, how you think the world works, what you value and what your priorities are. There are no universal right answers.




How Reducing Cognitive Interference Could Revolutionize Stroke Recovery

2025-11-27 23:34:39

Published on November 27, 2025 3:34 PM GMT

The most frustrating truth in motor learning is that thinking too hard about how you move can make you worse. Not "trying hard" in the sense of effort or volume, but "trying hard" in the sense of conscious, deliberate attention to mechanics. When a pianist thinks too carefully about which fingers to move, their playing becomes stilted. When a basketball player consciously analyzes their shooting form mid-shot, they miss. When a stroke patient over-focuses on the individual components of reaching for a cup, their recovery slows.

This phenomenon has a name: the reinvestment hypothesis. It suggests that explicit, conscious monitoring of motor tasks actively interferes with the implicit, procedural systems that actually execute skilled movement. For stroke survivors trying to regain the ability to move a paralyzed arm, this creates a devastating paradox. They're desperate to move, they focus intensely on moving, and that very focus may be sabotaging their recovery.

What if we could temporarily turn off that interference? What if we could create a neural state where the overthinking stops, where the brain's executive control systems step aside and let the motor learning circuits do their work unimpeded? And what if we could do this precisely during the moments when the brain is receiving the most valuable training signal, when motor intent is perfectly paired with sensory feedback?

This isn't science fiction. The pieces already exist. What's needed is putting them together in the right way.

Why Stroke Recovery Takes So Long

When someone has a stroke affecting their motor cortex or corticospinal tract, the fundamental problem is simple: the brain's signals can no longer reach certain muscles. The neural highway is damaged. What remains is a patient who can imagine moving their arm, who desperately wants to move their arm, but whose arm refuses to cooperate.

Traditional rehabilitation tries to work around this through sheer repetition. Physical therapists guide patients through thousands of movement attempts, hoping that spared pathways will gradually take over the functions of the damaged ones. Modern approaches add technology: robotic exoskeletons that assist movement, high-intensity task-specific training programs, even virtual reality environments. The evidence base for these interventions is solid. Constraint-Induced Movement Therapy shows real improvements in upper-limb outcomes, action observation combined with motor imagery produces meaningful gains, and high-dose practice matters.

But even with all these advances, stroke recovery remains frustratingly slow. A patient might spend six months of intensive therapy just to regain the ability to grasp a cup. During that time, they can't work, can't care for themselves fully, can't return to the life they had. The functional improvements plateau. The neural reorganization hits limits.

The question is: why?

Part of the answer comes from an unexpected place: research on how people learn implicit versus explicit motor skills. It turns out that when stroke patients are given explicit instructions about how to perform a movement, "extend your wrist like this," "focus on your elbow angle," they often learn more slowly than when they're allowed to discover the movement through practice alone. The conscious, effortful attention to movement mechanics that seems like it should help is actually getting in the way.

This makes sense from a neural architecture perspective. Motor learning relies on cortico-basal ganglia circuits for sequence chunking and action selection, and cortico-cerebellar circuits for error-based calibration and timing. These are largely implicit, procedural systems. Meanwhile, explicit motor control recruits prefrontal and parietal areas, the same systems involved in working memory, executive function, and conscious attention. When stroke patients consciously micromanage their movements, they're essentially forcing a top-down, executive-driven process onto a system that works better through bottom-up, automatic learning.

This pattern holds up in the lab. When people are instructed to focus on the effects of their movement ("move the cup to the target") rather than their body parts ("extend your arm"), they learn faster and perform better. The mechanism appears to be that external focus allows the motor system to self-organize without interference from conscious monitoring. Though the effect size estimates have been debated, the direction is consistent: less conscious control often means better motor learning.

This creates a problem for stroke patients. They have every reason to consciously focus on their movements: those movements don't work, they're effortful, and they require intense concentration just to produce any result at all. The very difficulty of the task seems to demand explicit attention. But that attention may be part of what's keeping them stuck.

What's needed is a way to short-circuit this pattern. To provide the brain with successful movement experiences while simultaneously reducing the executive interference that slows procedural learning. To create conditions where implicit, automatic motor learning can flourish.

Closing the Loop with BCI-FES

Before we can address the cognitive interference problem, we need to solve the more fundamental issue: how do you give a stroke patient successful movement experiences when their brain signals can't reach their muscles?

The answer comes from combining two technologies: brain-computer interfaces (BCIs) and functional electrical stimulation (FES). A BCI reads electrical activity from the brain, typically using an EEG to detect motor-related patterns, and translates it into a control signal. When a patient imagines or attempts to move their paralyzed hand, the BCI detects the motor imagery signature (a decrease in mu and beta frequency power over the motor cortex, called event-related desynchronization or ERD) and uses it to trigger an external device.
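To make the detection step concrete, here is a minimal sketch of how ERD is typically quantified: band power in the mu/beta range during a movement attempt is compared against band power at rest, and a sufficiently negative percentage change can be used as the trigger. The sampling rate and data layout are assumptions for illustration, not any particular vendor's pipeline.

```python
# Minimal ERD sketch: percent change in mu/beta band power during a movement
# attempt relative to a resting baseline. Negative values = desynchronization.
import numpy as np
from scipy.signal import welch

fs = 250  # EEG sampling rate in Hz (assumed)

def bandpower(segment, fs, fmin=8.0, fmax=30.0):
    """Average power in the mu/beta band for one EEG channel segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

def erd_percent(baseline, attempt, fs=fs):
    """ERD% = (attempt power - baseline power) / baseline power * 100."""
    p_ref = bandpower(baseline, fs)
    p_act = bandpower(attempt, fs)
    return (p_act - p_ref) / p_ref * 100.0

# e.g. erd_percent(eeg_c3_rest, eeg_c3_attempt) around -30% would indicate strong
# ERD; the BCI would trigger FES when ERD crosses a calibrated threshold.
```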

FES, meanwhile, delivers small electrical currents directly to muscles, causing them to contract. The stimulation bypasses the damaged neural pathways entirely, creating movement through peripheral activation.

Put them together and you get something remarkable: a closed-loop system where motor intent directly causes motor action, even when the natural pathway is broken. The patient thinks about moving their hand, the BCI detects this intention, and FES makes their hand actually move. The brain's command reaches the muscle, just through an artificial route.

Why does this matter? Because of Hebbian plasticity, the principle that neurons that fire together wire together. When the brain generates a motor command and immediately receives sensory feedback that the movement actually happened, it strengthens the entire sensorimotor loop. The intent and the consequence are temporally linked, creating an associative learning signal. Over time, this should help rebuild functional neural pathways, allowing the patient to eventually move without needing the assistive technology.
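A toy simulation makes the logic of this pairing concrete. The learning rate and trial counts below are illustrative assumptions, not parameters from any study; the point is simply that a Hebbian association between an "intent" unit and a "movement feedback" unit grows when the two reliably co-occur, and grows much less when feedback arrives on a fixed schedule, foreshadowing the contingency result discussed below.

```python
# Toy Hebbian update: the intent-to-movement association strengthens only on
# co-activation, as in intent-contingent BCI-FES.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.05        # learning rate (illustrative)
n_trials = 200

def final_weight(contingent: bool) -> float:
    w = 0.0
    for _ in range(n_trials):
        intent = rng.random() < 0.5                 # movement attempt on ~half of trials
        if contingent:
            feedback = intent                       # FES fires only on detected attempts
        else:
            feedback = rng.random() < 0.5           # FES on a fixed, unrelated schedule
        w += eta * float(intent) * float(feedback)  # Hebbian: strengthen on co-activation
    return w

print(f"Contingent pairing:  w = {final_weight(True):.2f}")
print(f"Fixed-schedule FES:  w = {final_weight(False):.2f}")
# Co-activation is frequent and reliable only in the contingent case, so the
# association grows much more, mirroring the RCT contingency result described below.
```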

Figure 1: Effect of BCI-Based Training on FMA-UE by Feedback Type

Mean difference in Fugl-Meyer Upper Extremity scores and 95% confidence intervals from Li et al., JNER 2025 meta-analysis of 21 RCTs (n=886). BCI+FES shows the strongest effect (MD = 4.37 points), outperforming BCI with robotic or visual feedback alone.

And it works. The evidence is now substantial. A 2025 meta-analysis of 21 randomized controlled trials including 886 patients found that BCI-based training produces clinically meaningful improvements in upper-limb function, with an overall mean difference of 3.69 points on the Fugl-Meyer Assessment Upper Extremity scale (FMA-UE). The BCI+FES subgroup performed best, with a mean difference of 4.37 points, reaching the commonly cited minimal clinically important difference of 4-5 points.

These effects appear in both subacute stroke (mean difference 5.31 points, where plasticity is still high) and chronic stroke (mean difference 2.63-3.71 points, where spontaneous recovery has plateaued). The benefits extend beyond the immediately trained movements, with improvements seen on the Wolf Motor Function Test and Action Research Arm Test as well. Most importantly, the gains are durable: multiple studies show maintained improvements at 6-12 month follow-up.

The mechanism appears to be genuine neural reorganization, not just temporary facilitation. EEG and fMRI studies show that patients undergoing BCI-FES training develop increased connectivity in motor areas of the affected hemisphere, with the degree of connectivity change correlating with functional improvement. The brain is actually rewiring.

Perhaps most telling, BCI-FES outperforms both components used alone. BCI with visual feedback is better than traditional therapy, but BCI with actual movement (FES) is better still. The closed loop matters. The temporal pairing of intent and sensory consequence matters. In a double-blind RCT, patients receiving motor-imagery-contingent FES (where the stimulation only occurred when they successfully generated the right brain pattern) showed greater improvements than patients receiving the same amount of FES on a fixed schedule. The contingency, the fact that their intention caused the movement, was therapeutically important.

This makes BCI-FES a strong foundation for rehabilitation. But there's still room for improvement. The effect sizes, while clinically meaningful, are modest. The between-patient variability is substantial. Some patients show dramatic recovery; others show minimal change. And the question remains: if we're successfully creating the conditions for Hebbian learning through intent-contingent feedback, why aren't the effects even larger?

One possibility is that we're fighting the executive interference problem. Yes, the BCI-FES provides the motor signal, but patients are still consciously trying, monitoring, and analyzing their attempts. Their prefrontal cortex is still in the way.

What Happens When Top-Down Control Is Reduced

Here's where things get interesting. There's a rare neurological phenomenon called savant syndrome, where individuals with significant developmental or cognitive disabilities exhibit "islands of genius," isolated areas of exceptional ability that contrast sharply with their overall impairment. People with savant syndrome can perform lightning-fast calendar calculations, reproduce complex musical pieces after a single hearing, create photorealistic drawings from memory.

The traditional explanation focused on enhanced local processing or specialized neural architecture. But starting in the 2000s, neuroscientist Allan Snyder proposed something more provocative: what if savant abilities aren't special skills that most people lack, but rather latent capabilities that most brains suppress? What if the savant's abilities emerge not from having something extra, but from lacking certain top-down inhibitory processes that normally filter and constrain cognition?

The hypothesis is that typical brains engage in constant top-down control, applying rules, strategies, learned patterns, and executive oversight to sensory and motor processing. This is usually adaptive; it lets us categorize, generalize, and operate efficiently. But it also means we can't access the raw, unfiltered information that our lower-level systems actually process. A savant with reduced prefrontal inhibition might "see" the individual features of a scene with unusual clarity because they're not being automatically chunked into higher-level categories.

The truly striking evidence came when researchers tried to temporarily create "savant-like" states in neurotypical people. In a series of controversial but replicated experiments, low-frequency repetitive transcranial magnetic stimulation (rTMS) applied to the left anterior temporal lobe (an area involved in top-down semantic and conceptual control) transiently improved performance on drawing tasks, proofreading, and even numerical estimation. The effect was small and temporary, but it suggested the principle was real: reducing certain types of top-down control can unmask capabilities that are normally suppressed.

Now, we're not trying to create savant abilities in stroke patients. They don't need to do calendar calculations or draw photorealistic portraits. But the underlying principle is directly applicable: what if the same executive systems that normally constrain cognition are also interfering with motor relearning?

We already know that explicit monitoring hurts motor learning. We know that prefrontal areas contribute to explicit, rule-based motor control. We know that stroke patients tend to over-engage these systems, consciously trying to control movements that should be automatic. What if we could temporarily reduce this prefrontal interference, creating a state where implicit motor learning could proceed more efficiently?

This is where the specific anatomical target matters. The left dorsolateral prefrontal cortex (left DLPFC, Brodmann areas 9 and 46) is heavily involved in working memory, executive control, and explicit strategy use. When you consciously think about how to perform a movement, when you monitor your performance and try to adjust it analytically, when you apply verbal instructions to motor control: that's largely left DLPFC activity.

And there's direct evidence that inhibiting left DLPFC can facilitate implicit learning. In a striking 2017 study, researchers applied continuous theta burst stimulation (cTBS), a rapid, inhibitory form of rTMS, to the left DLPFC while adults learned novel word forms. The cTBS group learned faster and showed better retention than both the sham stimulation group and a group receiving stimulation to the right DLPFC. The interpretation: by temporarily reducing the left DLPFC's top-down control, the inhibitory stimulation allowed implicit, statistical learning mechanisms to work more efficiently.

The same principle should apply to motor learning. In fact, it may apply even more strongly, since motor learning is fundamentally an implicit process. By using cTBS to transiently reduce left DLPFC activity during motor training, we might be able to shift patients from an explicit, effortful, self-monitoring mode into a more implicit, automatic, procedural learning mode. We'd be creating a temporary "flow state" where the overthinking stops and the motor system can do its job.

The Proposal: Strategic Neural Inhibition Paired with Closed-Loop Training

Here's the complete picture: take the proven BCI-FES approach that successfully closes the sensorimotor loop, and add a brief pre-session intervention that temporarily reduces executive interference. Specifically, apply 40 seconds of cTBS to the left DLPFC immediately before each BCI-FES training session.

The timing is critical. cTBS produces a reduction in cortical excitability that lasts approximately 30-60 minutes. This matches the typical length of a BCI-FES training block. The patient receives the cTBS, has a few minutes of setup time (EEG cap placement, FES electrode positioning, system calibration), and then begins training while their DLPFC is still inhibited. They're primed for implicit learning right when they're receiving the most valuable learning signal: the tight temporal pairing of motor intent and sensory feedback.

The technical parameters matter. For cTBS, the established protocol is 600 pulses delivered as 50 Hz triplets repeating at 5 Hz, taking exactly 40 seconds. The stimulation intensity is typically 70-80% of active motor threshold or a guideline-concordant percentage of resting motor threshold. The target is left DLPFC, which can be localized using individual structural MRI and neuronavigation for precision, or approximated using the scalp-based F3 position if imaging isn't available. These parameters come from Huang et al.'s seminal 2005 paper establishing the theta burst stimulation protocol, and they're explicitly covered in the 2021 International Federation of Clinical Neurophysiology safety guidelines.
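As a quick sanity check on those numbers, the arithmetic below (plain Python, purely illustrative, not a device protocol) reproduces why cTBS600 takes almost exactly 40 seconds.

```python
# cTBS600: 50 Hz triplets repeating at 5 Hz until 600 pulses are delivered.
pulses_total = 600
pulses_per_burst = 3            # a triplet of pulses at 50 Hz
burst_interval_s = 1 / 5        # triplet onsets every 200 ms (5 Hz)
intra_pulse_interval_s = 1 / 50 # 20 ms between pulses within a triplet

n_bursts = pulses_total // pulses_per_burst      # 200 triplets
pulse_times = [
    b * burst_interval_s + p * intra_pulse_interval_s
    for b in range(n_bursts)
    for p in range(pulses_per_burst)
]
print(len(pulse_times), "pulses")                 # 600 pulses
print(f"last pulse at {pulse_times[-1]:.2f} s")   # ~39.84 s, i.e. the canonical 40 seconds
```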

For the BCI-FES component, the system uses standard EEG (16-32 channels, focused on sensorimotor areas C3/C4) to detect motor imagery or movement attempt through mu and beta band event-related desynchronization. Common spatial patterns (CSP) and linear discriminant analysis (LDA) provide the classification, though other machine learning approaches work as well. The key is the contingency: FES is only delivered when the patient successfully generates the appropriate brain pattern. This is calibrated each session, as brain signals change over time.
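For readers who want to see what such a decoding pipeline looks like in practice, here is a minimal sketch using MNE-Python and scikit-learn. The file names, array shapes, and variable names are assumptions for illustration; real systems add artifact rejection, per-session calibration, and online thresholding on top of this skeleton.

```python
# CSP + LDA motor-imagery classification sketch (offline cross-validation).
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# epochs_data: (n_trials, n_channels, n_samples), band-pass filtered to ~8-30 Hz;
# labels: 0 = rest, 1 = movement attempt. Both file names are hypothetical.
epochs_data = np.load("epochs_data.npy")
labels = np.load("labels.npy")

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial filters + log-variance features
    ("lda", LinearDiscriminantAnalysis()),    # linear classifier on CSP features
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, epochs_data, labels, cv=cv)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Online, the fitted pipeline gates stimulation:
# deliver FES only when clf.predict(new_epoch[np.newaxis]) == 1.
```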

The FES parameters also matter. A common starting point is 50 Hz stimulation with a 300-microsecond pulse width, based on the parameters used in successful RCTs. The current amplitude is titrated to produce visible muscle contraction without discomfort, typically starting low and gradually increasing as the patient adapts. Early sessions might focus on wrist extension, but the approach can be extended to multi-channel stimulation patterns that produce more complex, functional movements as the patient improves.
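A simple parameter block makes these choices explicit. The frequency and pulse width come from the text; the amplitude, ramp, and duty-cycle values are hypothetical placeholders that would be titrated per patient, and the dictionary is purely illustrative rather than any device's API.

```python
# Illustrative FES settings; not a clinical prescription.
fes_params = {
    "frequency_hz": 50,        # stimulation frequency, per the cited RCT parameters
    "pulse_width_us": 300,     # pulse width per phase
    "amplitude_ma": 8.0,       # placeholder; titrated to visible contraction without discomfort
    "ramp_up_s": 0.5,          # gentle onset (assumed) to improve comfort
    "on_off_duty_s": (5.0, 5.0),  # seconds on / off (assumed) to limit muscle fatigue
}

# Charge per pulse in microcoulombs = current (mA) x pulse width (ms)
charge_uc = fes_params["amplitude_ma"] * fes_params["pulse_width_us"] / 1000.0
print(f"Charge per pulse: {charge_uc:.1f} uC")   # 2.4 uC at these placeholder settings
```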

Figure 2: Conceptual Schematic of the Intervention

The workflow is straightforward. The patient arrives, and a trained technician applies the cTBS using a TMS coil positioned over left DLPFC. This takes less than a minute once the target is located. After the 40-second stimulation burst, there's a brief transition period while the EEG cap and FES electrodes are placed and the system is calibrated (3-5 minutes). Then the patient begins the motor training: they repeatedly attempt or imagine moving their affected limb, the BCI detects successful attempts, and FES causes the actual movement. This continues for 40-60 minutes, during which time the DLPFC remains inhibited from the earlier cTBS.
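A rough timing check (illustrative arithmetic only) shows why session length interacts with the cTBS after-effect window, a point revisited in the open questions later on.

```python
# All durations in minutes; ranges taken from the text.
ctbs_duration = 40 / 60            # the 40-second stimulation itself
setup = (3, 5)                     # EEG cap, FES electrodes, calibration
training = (40, 60)                # BCI-FES block
inhibition_window = (30, 60)       # typical duration of cTBS after-effects

earliest_start = ctbs_duration + setup[0]
latest_end = ctbs_duration + setup[1] + training[1]
print(f"Training runs from ~{earliest_start:.1f} to ~{latest_end:.1f} min post-cTBS")
# A 60-minute block can outlast a 30-minute inhibition window, which is one reason
# the optimal session length is treated as an open empirical question below.
```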

Importantly, during the training period, explicit strategy coaching is minimized. The patient isn't given detailed instructions about how to move or what to think about. Instead, the focus is on external goals ("reach for the target") and letting the system provide feedback through successful movements. The reduced DLPFC activity should make this implicit approach more natural: with less executive monitoring, patients may find it easier to stop overthinking and just let the movements emerge.

What should this achieve? The mechanisms are complementary. The cTBS provides cognitive priming, setting the brain state to favor implicit learning. The BCI-FES provides contingent feedback, ensuring that motor intent is reliably paired with sensory consequences, creating strong Hebbian learning signals. Together, they address both the "what to learn" (the motor patterns that produce successful reaching, grasping, manipulation) and the "how to learn it" (through implicit, procedural mechanisms rather than explicit, executive-driven ones).

This is not two competing interventions but complementary components working at different levels. FES provides peripheral afference (sensory input from the stimulated muscles) that's temporally locked to central intent (the motor imagery that triggered the BCI). cTBS provides supramodal cognitive state optimization, reducing the top-down interference that would normally slow procedural learning. One ensures the motor system gets the right training signal; the other ensures the brain is in the right state to learn from it.

Expected Outcomes and Clinical Impact

Given that BCI-FES alone produces mean improvements of approximately 4.4 points on the FMA-UE scale, what additional benefit should we expect from adding cTBS-mediated cognitive priming?

The honest answer is that we don't know exactly, because this specific combination hasn't been tested. But we can make educated estimates based on the magnitude of effects seen when implicit learning conditions are optimized. Suppose the cognitive interference hypothesis is correct, and a substantial portion of the between-patient variability in BCI-FES outcomes reflects differences in how much executive monitoring interferes with learning. In that case, we might expect gains of 2-3 additional FMA-UE points above the BCI-FES baseline.

This may not sound dramatic, but in stroke rehabilitation, every point matters. The minimal clinically important difference for FMA-UE is approximately 4-5 points. An intervention that reliably produces 6-7 points of improvement (the current BCI-FES effect plus 2-3 points from cognitive priming) would be solidly clinically meaningful. More importantly, if the effect is real, it would represent a change in kind, not just degree: evidence that cognitive state manipulation can accelerate motor recovery.

Figure 3: Power Analysis for Detecting Incremental Effects

Sample size requirements per arm for a two-arm RCT to detect additional improvement beyond BCI-FES alone. Assumes standard deviation of 7 points (typical for FMA-UE), alpha of 0.05, 80% power. To reliably detect a 2-point improvement requires approximately 200 participants per arm; a 3-point improvement requires approximately 80 per arm. This determines the scale of the validation study needed.
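The caption's sample sizes can be reproduced with a standard power calculation. The sketch below uses statsmodels under the stated assumptions (SD of 7 points, alpha 0.05, 80% power); it is a back-of-the-envelope check, not the trial's formal statistical analysis plan.

```python
# Reproducing the Figure 3 sample-size estimates for a two-arm comparison of means.
from statsmodels.stats.power import TTestIndPower

sd_fma = 7.0      # assumed FMA-UE standard deviation (from the caption)
analysis = TTestIndPower()

for mean_diff in (2.0, 3.0):
    d = mean_diff / sd_fma                              # Cohen's d under the assumed SD
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"{mean_diff:.0f}-point difference: ~{n:.0f} participants per arm")

# Prints roughly 194 and 87 per arm, consistent with the ~200 and ~80 in the caption.
```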

But clinical scores are only one outcome. We should also see changes in the learning process itself. Specifically:

Neurophysiological markers should show accelerated changes. We'd expect to see larger and more consistent increases in ipsilesional mu/beta desynchronization during motor imagery (indicating stronger motor preparation), faster improvements in BCI classifier performance within sessions (indicating more efficient learning), and greater increases in sensorimotor connectivity measured with resting-state EEG (indicating more robust neural reorganization).

Behavioral process indicators would provide mechanistic validation. Reduced dual-task costs during trained movements would suggest that the motor patterns are becoming more automatic (dual-task paradigms interfere with explicit control but not with implicit, proceduralized movements). Less verbal strategy use and lower self-reported mental effort during training would indicate reduced executive involvement. These process measures wouldn't just confirm that the intervention works; they'd confirm that it works through the proposed mechanism of enhanced implicit learning.

Figure 4: Hypothetical Mechanism Panel

Panel A: The baseline curve (solid gray) shows session-wise mu/beta event-related desynchronization (ERD) during motor imagery training, extracted from Brunner et al.'s 2024 randomized controlled trial. In that study, chronic stroke patients completing 3 weeks of BCI-FES training exhibited mean ERD of approximately -29% at the start of therapy and -20% at the end (8-30 Hz band, less-affected hand imagery), indicating a modest reduction in desynchronization across therapy. The dashed red line projects the effect of combining continuous theta-burst stimulation (cTBS) with BCI-FES: based on literature suggesting that cTBS reduces executive interference and increases ERD magnitude, the prediction assumes a 40% increase in ERD magnitude (i.e., more negative values) relative to baseline. Error bars are omitted because the source study reported only group means ± SD; the prediction illustrates a hypothetical enhancement rather than measured data.
Panel B: The solid gray line represents session-by-session classifier accuracy during BCI-FES training. Baseline values were approximated from published learning curves: a 2021 RESNA conference report on BCI-FES lower-extremity rehabilitation described Participant 1's maximum classification accuracy increasing from 84% in session 1 to 88% in session 25, Participant 2 averaging 90.6% accuracy with a peak of 97% in session 2, and Participant 3 averaging 88.9% accuracy with peaks of 94% at sessions 13, 14, and 19. These data, together with the typical range of 62-97% accuracy reported in previous studies, were used to construct a plausible baseline learning curve showing gradual improvement from ~84% to ~91% across 12 sessions. The dashed red line shows a predicted enhancement if cTBS were applied concurrently: assuming cTBS improves learning rates by ~40% and raises the asymptotic accuracy by ~5-7 percentage points, the predicted curve rises more steeply and reaches ~96% accuracy in later sessions. Actual error bars were unavailable, so the figure illustrates mean trajectories.
Panel C: Published BCI-FES studies rarely report cognitive dual-task costs or measures of automatization. Some trials examine prefrontal beta/alpha ratio changes during dual-task motor-imagery training, yet they provide no numerical data for training curves. Because of this gap, the gray baseline curve represents a simulated dual-task cost decreasing from 35% to 20% across sessions, and the red dashed curve depicts the predicted effect of cTBS reducing cognitive load and halving the dual-task cost. This panel illustrates the hypothesized benefits of cTBS on automatization; it should be interpreted qualitatively until empirical data become available.
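For transparency, the hypothetical Panel B trajectories can be regenerated from the assumptions stated in the caption. The sketch below is illustrative: the exponential growth-curve shape and the rate parameter are modeling choices, not fitted values.

```python
# Hypothetical classifier-accuracy curves under the caption's stated assumptions:
# baseline rising from ~84% to ~91% over 12 sessions; cTBS assumed to speed
# learning by ~40% and raise the asymptote by ~5 percentage points.
import numpy as np

sessions = np.arange(1, 13)

def accuracy_curve(start, asymptote, rate):
    """Exponential approach from a starting accuracy toward an asymptote."""
    return asymptote - (asymptote - start) * np.exp(-rate * (sessions - 1))

baseline = accuracy_curve(start=84.0, asymptote=91.0, rate=0.25)        # assumed shape
enhanced = accuracy_curve(start=84.0, asymptote=96.0, rate=0.25 * 1.4)  # assumed enhancement

for s, b, e in zip(sessions, baseline, enhanced):
    print(f"Session {s:2d}: baseline {b:4.1f}%   predicted with cTBS {e:4.1f}%")
```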

The timeline of recovery might also shift. If the cognitive priming is working as proposed, patients in the combined intervention might reach rehabilitation milestones faster, achieving independent grasp, functional reaching, or bimanual coordination weeks earlier than patients receiving BCI-FES alone. This would be clinically significant even if the final outcomes at month 3 or month 6 converged, because faster recovery means earlier return to work, reduced caregiver burden, and lower overall rehabilitation costs.

From the patient's perspective, the experience might be subtly different. With reduced DLPFC activity during training, they might report that movements "just happen" more easily, with less mental strain. They might describe a flow-like state, or feeling "in the zone" during training. They might be less frustrated by failures, since there's less conscious monitoring to generate negative self-talk. These subjective reports would be valuable qualitative data suggesting that the cognitive manipulation is having its intended effect.

For the field, success would mean validation of a broader principle: that cognitive state matters for motor rehabilitation, and we can manipulate it systematically. This would open up an entire research program. If inhibiting DLPFC helps, what about modulating other prefrontal regions? What about enhancing reward processing or optimizing attention through different stimulation targets? What about using closed-loop, brain-state-dependent stimulation that only applies neuromodulation when the patient is in suboptimal cognitive states?

The novelty here isn't in having invented a new technology. BCIs exist, FES exists, cTBS exists. The novelty lies in the strategic combination: using targeted neuromodulation to create the optimal cognitive state for the learning that closed-loop training enables. It's taking two independently validated interventions and arranging them in time and space to produce synergistic effects.

Risks and Mitigation Possibilities

Any proposal to combine multiple brain interventions needs to address safety, feasibility, and potential failure modes.

Safety is paramount. cTBS to DLPFC has been used extensively in depression research and cognitive neuroscience, with a well-characterized safety profile. The 2021 IFCN guidelines provide explicit screening criteria: no metallic implants near the stimulation site, no active epilepsy without specialist oversight, and no unstable medical conditions. When these guidelines are followed, adverse events are rare and mild, primarily transient headache or scalp discomfort. The seizure risk exists but is estimated at less than 0.1% for a single session using standard parameters, and even lower for theta burst stimulation protocols.

For stroke patients specifically, most safety concerns arise in the acute phase (first few weeks post-stroke), when the brain is unstable and seizure risk is elevated. But many BCI-FES trials already safely include subacute and chronic stroke patients with noninvasive brain stimulation, demonstrating feasibility. The key is proper screening and monitoring.

FES safety is even more established. The main concerns are muscle fatigue (addressed by appropriate duty cycles and current titration) and skin irritation (addressed by proper electrode placement and conductive gel). Combining FES with EEG requires careful cable routing to avoid artifacts, but this is routine in BCI-FES systems.

The specific interaction between cTBS and BCI-FES actually reduces certain risks. Because cTBS is delivered as a brief, "offline" burst before training (not continuous stimulation during EEG recording), it doesn't interfere with the BCI's signal quality. The 3-5 minute setup period after cTBS and before training ensures that any immediate stimulation-related artifacts have dissipated before data collection begins.

Patient variability is inevitable. Not everyone will respond equally. Some patients may have difficulty generating consistent motor imagery signals for the BCI to decode, especially those with severe paresis or damage to premotor areas. Some patients may not respond strongly to cTBS; there is known inter-individual variability in response to brain stimulation, likely depending on baseline cortical state, genetic factors, and lesion characteristics.

This argues for stratification and personalization. Lesion location matters: patients with left-hemisphere strokes (especially those affecting frontal areas) might respond differently than patients with right-hemisphere strokes. Language dominance could matter too: for the roughly 10% of people who are right-hemisphere dominant for language, inhibiting left DLPFC might have different effects. This is why a rigorous study design would include a right-DLPFC cTBS control arm, which would help distinguish lateralized effects from nonspecific arousal or placebo effects.

Some patients might already be in an "uninhibited" state. If frontal damage has already reduced executive control, additional cTBS might provide minimal benefit or could even be counterproductive (too little executive oversight might lead to distracted, unfocused training). This argues for screening based on baseline cognitive function and potentially titrating the cTBS intensity individually.

Figure 5: Proposed Study Design and Patient Flow

The optimal parameters are uncertain. We know the standard cTBS600 protocol produces effects lasting 30-60 minutes, but is that the ideal duration? Should patients train for the full window, or is there an optimal period after which the benefit diminishes? How many sessions per week maximize learning without causing fatigue? Traditional therapy is often 5 days per week; is that also optimal for this combined approach, or would more frequent but shorter sessions be better?

These are empirical questions that can be answered through systematic study, but they represent genuine uncertainty. A pilot phase testing different schedules (daily sessions vs. twice-daily vs. alternate days) and durations (30 vs. 45 vs. 60 minutes of training) would be valuable before scaling up.

There's a tradeoff here that needs careful attention. The goal is to reduce cognitive interference, not to produce inattention. If cTBS completely eliminates executive control, patients might become disengaged, unfocused, or might not encode the relationship between their efforts and the resulting movements. The dose of neuromodulation matters. Theta burst stimulation has a dose-response relationship; different intensities and patterns produce different magnitudes of inhibition. Starting with conservative parameters (70% of active motor threshold) and carefully monitoring both engagement and learning is essential.

We should also ensure that the BCI-FES training itself maintains motivation and external focus. Gamification helps here: turning the training into score-based challenges or goal-reaching tasks keeps patients engaged without requiring verbal, explicit strategy instruction. Visual feedback showing successful BCI detections and resulting movements provides concrete reinforcement without requiring self-monitoring.

The implementation is complex. Running this intervention requires equipment (TMS system, EEG, BCI software, FES device), space (a quiet room for TMS, adequate space for reaching movements), and expertise (someone who can safely administer TMS, someone who can calibrate the BCI, someone who can adjust FES parameters). This is currently feasible in research settings and advanced rehabilitation centers, but would need streamlining for broader clinical adoption.

Cost is a consideration, but potentially manageable. Many rehabilitation hospitals already have TMS equipment (often used for treating post-stroke depression). BCI-FES systems are increasingly commercialized and dropping in price. The limiting factor is trained personnel. But if the intervention reliably produces faster recovery, the economics could favor it: earlier hospital discharge, fewer therapy hours, and quicker return to work would offset the higher initial cost per session.

Home adaptation is tricky. The cTBS component requires clinical administration and supervision (TMS devices cannot be safely used at home). But the BCI-FES component could potentially be simplified for home use, as several groups are developing portable BCI-FES systems. A hybrid approach might work: patients come to clinic 2-3 times per week for supervised cTBS+BCI-FES sessions, and practice with take-home BCI-FES (without the neuromodulation) on other days.

We might be wrong about the mechanism. The cognitive interference hypothesis is plausible and supported by converging evidence, but it's possible that cTBS is working through a different mechanism, or not working at all. This is why mechanistic outcome measures matter: measuring explicit vs. implicit learning markers, dual-task costs, subjective strategy use, and neurophysiological indicators of executive engagement would help validate or refute the proposed mechanism even before clinical outcomes are clear.

Table 1: Key Risks and Mitigation Strategies

Risk/Challenge | Likelihood | Impact | Mitigation Strategy
Variable cTBS response between patients | High | Medium | Stratify by baseline cognitive function and lesion location; consider individualized intensity titration based on motor threshold; monitor response in first 2-3 sessions
Difficulty generating consistent motor imagery signals | Medium | High | Pre-screen BCI performance during enrollment; provide 2-3 calibration sessions; exclude patients with <60% classification accuracy; use adaptive classifiers
Patient dropout/attrition | Medium | Medium | Shorter session blocks (45 min vs 60 min); offer transportation assistance; home-based BCI-FES practice between clinic visits; strong engagement protocols with regular feedback
Insufficient statistical power | Medium | High | Conservative sample size calculation (80/arm for 3-point difference); plan interim analysis at 50% enrollment; consider adaptive design allowing sample size re-estimation
cTBS-induced inattention interfering with BCI control | Low | High | Start with conservative TMS intensity (70% AMT); monitor engagement via task performance; pause protocol if accuracy drops >20%; adjust intensity if needed
No dual-task cost data available for validation | High | Low | Accept Figure 4C as hypothesis-generating; prioritize FMA-UE as primary outcome; add dual-task measures as exploratory secondary outcome for future studies
Limited generalizability from single-site trial | Medium | Medium | If multi-site funding unavailable, clearly state single-site limitation; provide detailed protocol documentation for replication; report site-specific factors (therapist experience, equipment)
Heterogeneous lesion locations affecting outcomes | High | Medium | Stratified randomization by hemisphere; pre-specified subgroup analyses by lesion location; collect high-resolution MRI for post-hoc lesion mapping analyses

These challenges are real, but none are insurmountable. They require careful study design, appropriate controls, adequate sample sizes, and honest reporting of both successes and failures. The field of neurorehabilitation has a history of promising interventions that don't replicate or don't scale; avoiding that fate requires scientific rigor and skepticism alongside enthusiasm.

What We Know and What We Need to Learn

Strong evidence:

BCI-FES works for stroke rehabilitation. Multiple meta-analyses, covering hundreds of patients, show consistent benefits. The effect sizes are modest but clinically meaningful. The effects are durable. The mechanism appears to involve genuine neural reorganization, not just facilitation. Contingent feedback, pairing motor intent with sensory consequence, is important; noncontingent FES doesn't work as well. This is the foundation everything else builds on.

Implicit learning is better than explicit learning for motor tasks. Decades of work in motor learning, from both sports science and neurorehabilitation, support this. External focus outperforms internal focus (though effect size estimates vary). Errorless learning (minimizing explicit failures) outperforms trial-and-error in stroke populations. Reducing cognitive load during practice facilitates long-term retention. The OPTIMAL theory provides a framework: autonomy, enhanced expectancies, and external focus all boost learning by reducing interference and enhancing motivation.

Executive control can interfere with implicit learning. This is well-documented in cognitive psychology. Working memory load impairs motor skill acquisition. Providing explicit rules or strategies often hurts performance on tasks that benefit from implicit statistical learning. The prefrontal cortex, particularly DLPFC, is consistently implicated in top-down control that can impair implicit processes.

cTBS to the left DLPFC can facilitate certain types of learning. The 2017 word-form learning study is the clearest evidence. Participants who received inhibitory cTBS to the left DLPFC before training showed faster learning and better retention than sham controls. This wasn't a motor task, but it was an implicit learning task, which is the relevant parallel.

Moderate evidence:

cTBS effects last 30-60 minutes. This comes from the original theta burst stimulation papers and subsequent studies measuring motor cortex excitability. But the exact duration varies between individuals and might be different for cognitive effects (learning facilitation) than for physiological effects (cortical excitability changes). The time course of benefit needs empirical validation in the stroke rehabilitation context.

State-dependent TMS effects matter. There's growing evidence that the effects of brain stimulation depend on the ongoing brain state when stimulation is applied. This supports the timing logic of our proposal (apply cTBS right before training, so the inhibited state coincides with learning), but it also adds complexity; individual differences in baseline brain state might moderate outcomes.

Multimodal combinations work better than single interventions. There's a pattern in neurorehabilitation where combining approaches (NIBS + robotics, BCI + action observation, etc.) tends to produce larger effects than either alone. But this isn't always true, and the specific combinations matter. Just adding interventions together can backfire if they're incompatible or if they create too much complexity.

Weak evidence / informed speculation:

The specific combination of cTBS + BCI-FES will produce additive benefits. This is plausible based on complementary mechanisms, but it's the hypothesis being tested, not an established fact. It's possible the combination produces no additional benefit (because BCI-FES already moves patients into implicit learning mode). It's possible there are negative interactions (cTBS-induced inattention interferes with BCI control). It's possible the effects are only additive in certain patient subgroups.

Left DLPFC is the optimal target. We're targeting left DLPFC based on its role in verbal, explicit control and the word-learning study results. But maybe right DLPFC would work as well (or better). Maybe dorsal premotor cortex (involved in motor planning and selection) would be more directly relevant. Maybe bilateral DLPFC inhibition would be needed. These are variants that could be tested systematically.

The effect size will be 2-3 FMA-UE points. This is an educated guess based on the magnitude of cognitive interference effects seen in other contexts. If executive monitoring is accounting for, say, 30% of the variance in BCI-FES outcomes, and cTBS reduces that interference by 50%, you get a couple points of improvement. But these are made-up numbers. The actual effect could be smaller (if interference is less important than we think) or larger (if it's a major bottleneck).

What would convincing evidence look like?

A properly powered, multi-arm randomized controlled trial with:

  1. Adequate sample size: Based on Figure 3, approximately 80 participants per arm to detect a 3-point difference, or 200 per arm to detect a 2-point difference. Conservative planning would target the larger sample.
  2. Active controls: Not just sham cTBS, but also right-DLPFC cTBS (to control for nonspecific stimulation effects) and FES-only (to isolate the BCI closed-loop contribution).
  3. Mechanistic outcomes: Not just FMA-UE at endpoint, but process measures throughout (BCI classifier performance, dual-task costs, strategy questionnaires, EEG connectivity) that test whether the intervention works through the proposed mechanism.
  4. Follow-up: At least 3 months post-intervention to assess durability. BCI-FES effects are known to persist; do the cognitive priming effects create lasting changes in learning approach, or do they only help during active training?
  5. Stratification: Pre-specified analysis by lesion location, stroke severity, time post-stroke, and baseline cognitive function. Not to hunt for significant subgroups, but to understand who benefits most.
  6. Independent replication: No single study, even if well-designed, is sufficient. The field needs replication in independent samples, ideally at multiple sites, to build confidence.

This is a substantial undertaking. But the components are all clinical-ready, the safety profile is understood, and the theoretical motivation is strong. If the effect is real, it's worth knowing. If it's not, that's also scientifically valuable; it would help refine models of motor learning and cognitive interference in rehabilitation.

Why Now: The Technological and Scientific Convergence

Proposals like this depend on a confluence of enabling factors. Why is this approach feasible now in a way it wasn't a decade ago?

BCI technology has matured. Early BCI systems were finicky, required extensive calibration, and had poor signal quality. Modern systems are more robust. The signal processing (common spatial patterns, linear discriminant analysis, and increasingly deep learning classifiers) is standardized and reliable. Multiple commercial BCI-FES systems exist, not just research prototypes. The bar to running a BCI study has dropped dramatically.

TMS is widely available. What was once rare equipment is now in many hospitals. The primary clinical indication, treating depression with repetitive TMS, has driven adoption. Many rehabilitation centers that work with stroke patients already have TMS systems on-site. The additional infrastructure needed to test this proposal is minimal for institutions already running BCI rehabilitation research.

The cognitive neuroscience is clearer. Our understanding of implicit versus explicit learning, the role of the prefrontal cortex in executive control, and state-dependent plasticity has advanced substantially. The theoretical motivation for this approach wouldn't have been as clear 15 years ago. The 2017 DLPFC-cTBS word-learning study was particularly important; it provided proof-of-concept that inhibiting left DLPFC can facilitate implicit learning in a context (adult learning of complex structures) that's analogous to motor relearning.

The field recognizes the need for combination approaches. Early neurorehabilitation research often tested single interventions in isolation. There's now broader acceptance that complex problems require integrated solutions. Meta-analyses consistently show that combined approaches (stimulation + training, BCI + conventional therapy, etc.) tend to outperform single interventions. The conceptual space for "let's strategically combine these tools" has expanded.

Regulatory pathways are clarifying. FDA breakthrough device designations and approvals for novel neuromodulation approaches in rehabilitation have increased. There's precedent for complex, multimodal interventions getting regulatory approval. The risk-benefit calculation is better understood.

Patient populations are available. Stroke is common, with approximately 800,000 strokes per year in the US alone. A significant fraction of survivors have upper-limb impairment suitable for this intervention. Recruitment for well-designed studies is feasible.

There's also a broader trend toward precision medicine in rehabilitation. The one-size-fits-all approach is breaking down. Tools like PREP2 help stratify patients by their likely recovery trajectory. Biomarkers guide treatment selection. The proposal to use cognitive state manipulation to enhance training fits naturally into this movement toward individualized, mechanistically-informed rehabilitation.

Figure 6: Technology Readiness and Integration Timeline

 

Researchers Positioned to Test This Approach

The proposed combination of cTBS with BCI-FES is immediately feasible for groups with established expertise in both neuromodulation and brain-computer interfaces. Several research groups are particularly well-positioned:

Prof. Surjo R. Soekadar (Charité - Universitätsmedizin Berlin) is Einstein Professor of Clinical Neurotechnology and heads the Clinical Neurotechnology Lab at Charité. His group develops brain-computer interfaces combined with non-invasive brain stimulation for stroke rehabilitation. With ERC Starting, Proof-of-Concept, and Consolidator Grants supporting next-generation BCIs and closed-loop neuromodulation, Soekadar's lab has demonstrated that quadriplegic patients can control hand exoskeletons using BCI systems. The lab maintains specialized real-time hardware capable of processing EEG data and triggering TMS pulses according to brain state, precisely the infrastructure needed for this proposal.

Prof. Cuntai Guan (Nanyang Technological University, Singapore) is President's Chair Professor and Director of the Centre for Brain-Computing Research. An IEEE Fellow with 420+ publications and 26 granted patents in BCI technology, Guan received the international BCI Research Award and has directed multiple large-scale stroke rehabilitation studies. His group has published extensively on motor imagery BCI-FES protocols and maintains active clinical partnerships for stroke trials. The infrastructure includes established BCI platforms, clinical trial experience, and stroke patient recruitment pipelines.

KITE Research Institute (Toronto Rehab/University Health Network, Canada) is a large-scale rehabilitation science enterprise with integrated engineering, neuromodulation, and tele-neurorehabilitation programs. KITE labs have established expertise in visual-feedback balance systems, closed-loop FES protocols, and transcutaneous spinal cord stimulation for motor recovery. Their infrastructure for combining neuromodulation with technology-assisted rehabilitation, along with access to diverse stroke patient populations through University Health Network's clinical network, makes them a natural collaborator for testing implicit-learning optimization combined with closed-loop training paradigms.

These groups possess the necessary components: TMS equipment, validated BCI-FES systems, neuroimaging capabilities, stroke patient access, and regulatory expertise. We welcome inquiries from these and other qualified research teams interested in testing this hypothesis.

What This Could Mean for Rehabilitation Science

If this approach succeeds, the implications extend beyond stroke. The principle that cognitive state matters for motor learning and can be systematically manipulated to enhance rehabilitation could apply to many populations.

Traumatic brain injury patients relearning motor skills face similar challenges: damaged pathways, effortful movement attempts, and likely excessive executive monitoring. The same approach could accelerate their recovery.

Cerebral palsy involves atypical motor development where learned compensatory strategies may interfere with optimal movement patterns. Temporarily reducing executive control while practicing better-coordinated movements could help break maladaptive habits.

Parkinson's disease rehabilitation often focuses on cuing and external focus to bypass impaired automatic movement. Combining cTBS-mediated cognitive manipulation with practice might strengthen procedural learning of compensatory strategies.

Sports injury rehabilitation in athletes trying to regain precise motor control could benefit. Athletes are often highly analytical about their movements; temporarily "turning off" that analysis during training might speed skill reacquisition.

Beyond specific populations, success would validate implicit-first training as a general principle in neurorehabilitation. Current practice often emphasizes conscious attention, verbal instruction, and explicit error correction. An alternative paradigm would minimize explicit coaching, maximize successful movement experiences (through assistive technology if needed), and actively work to reduce cognitive interference. This would be a meaningful shift.

It would also open questions about optimal training states more broadly. If inhibiting DLPFC helps, what about enhancing reward processing (targeting ventral striatum or ventromedial prefrontal cortex to boost motivation)? What about manipulating attention (targeting the parietal cortex)? What about arousal (targeting brainstem nuclei)? The space of possible cognitive state manipulations is large, and we've barely begun to explore it systematically.

There are parallels to anesthesia and altered states of consciousness. We're comfortable temporarily modulating consciousness for surgery; perhaps we should be comfortable temporarily modulating cognitive modes for learning. The key difference is the timeframe (minutes to hours rather than days to weeks) and specificity (targeted cognitive processes rather than global consciousness). But the principle of strategic, temporary neural modulation for therapeutic benefit is similar.

Figure 7: Potential Extensions and Related Research Directions

The Savant Connection Revisited: Latent Potential and Neural Constraints

We started with savant syndrome as a framing device, but it's worth returning to why that parallel is meaningful and where it breaks down.

The insight from savant research, that reducing top-down inhibition can unmask latent abilities, is directly applicable. Stroke patients have latent motor potential. Their cortico-basal-ganglia and cortico-cerebellar circuits aren't entirely destroyed; they're disrupted and need to be rewired. The capacity for plasticity exists. What's often lacking is the right conditions for that plasticity to express itself.

By reducing prefrontal interference while providing high-quality training signals, we're creating conditions analogous to the "uninhibited" state that produces savant-like abilities. Not to create exceptional skills, but to accelerate recovery of ordinary ones. The parallel is the mechanism (reduced top-down control), not the outcome (exceptional versus normal function).

But there's a critical difference. Savants often have reduced executive function globally, which creates challenges in many domains while enabling islands of exceptional ability. What we're proposing is a targeted, temporary, controllable reduction of specific executive processes during training, while preserving overall cognitive function. The patient isn't becoming globally uninhibited; they're entering a temporary state during rehabilitation sessions that's optimized for implicit motor learning.

This distinction matters for safety and ethics. We're not trying to fundamentally alter someone's cognitive architecture. We're using a brief, focal intervention to bias learning during training. The patient is themselves before, during (they're aware, engaged, intentional), and after. The effect is time-limited and specific.

There's also an interesting connection to flow states, the subjective experience described by athletes, musicians, and other skilled performers when they're performing at their best. Flow is characterized by effortless action, absence of self-consciousness, and merging of action and awareness. It's essentially implicit, automatic execution without executive interference. Our proposal might be creating a mild, temporary flow-like state through neural intervention rather than through skill development and optimal challenge level.

If flow states are associated with reduced DLPFC activity (as some neuroimaging studies suggest), then cTBS to DLPFC might be a way to induce flow-like conditions neurophysiologically. This would make the motor training more efficient and potentially more enjoyable. Patients might find rehabilitation less mentally exhausting if they're not constantly monitoring and judging their performance.

Conclusion

Stroke rehabilitation has improved incrementally over the decades. New technologies (robotics, brain-computer interfaces, noninvasive brain stimulation) have each added a few percentage points of benefit. BCI-FES represents one of the more successful recent advances, with robust evidence for clinically meaningful improvements that persist long-term.

But we can do better. The bottleneck may not be the technology providing the training signal; it may be the cognitive state in which that training occurs. When stroke patients are engaged in effortful, explicit, self-monitored practice, they're working against their own motor learning systems. The harder they try, consciously, the slower they learn.

The proposal here is to fix that. Use continuous theta burst stimulation to briefly inhibit the left dorsolateral prefrontal cortex, creating a temporary state where executive interference is reduced. Time this inhibition to coincide with BCI-FES training, when motor intent is being reliably paired with sensory feedback. Give the brain the best possible training signal while also removing the cognitive obstacles that slow learning.

This is not radical. Every component is established, safe, and ready for clinical testing. The novelty is in the combination, the recognition that cognitive state and training conditions should be optimized together, not separately. It's an engineering insight as much as a neuroscience one: the system as a whole can be greater than the sum of its parts if the parts are arranged correctly.

If this works, if adding cognitive priming to closed-loop training produces meaningful additional benefit, it would validate a principle that could transform rehabilitation. Not just for stroke, but for any condition where motor relearning is needed and executive interference is a barrier. We'd have shown that learning states can be engineered, not just discovered.

If it doesn't work, we'll learn something important too. Maybe the cognitive interference problem is smaller than expected. Maybe the specific target is wrong. Maybe the combination creates unexpected negative interactions. Negative results, if they're rigorous and well-characterized, advance the field by ruling out hypotheses and refining models.

The next step is straightforward: a carefully designed Phase II trial with adequate power, active controls, mechanistic outcomes, and rigorous analysis. Not a small pilot (those can be misleading), but a properly sized study that can detect meaningful effects and characterize responders. Multi-site would be ideal for generalizability, though a single-site with strict protocols is acceptable for proof-of-concept.

This is eminently doable. The equipment exists, the expertise exists, the patient population exists, the theoretical motivation is solid, and the regulatory pathway is clear. What's needed is the will to do the experiment and the intellectual honesty to report the results accurately.

For stroke survivors, the stakes are their lives, their ability to work, to care for themselves, and to engage in the activities that make life meaningful. Months of recovery time matter. Points on a functional assessment matter. If we can reliably shorten the road to recovery, we have an obligation to try.

The overthinking problem is solvable. 




The crux on consciousness

2025-11-27 21:20:05

Published on November 27, 2025 1:20 PM GMT

[Epistemic status: This is a fictional dialogue which surveys the main moves in the literature between Camp #1 and Camp #2 positions described in Rafael Harth’s post Why it’s so hard to talk about consciousness. The goal of this post is to present both sides as engaging in a charitable dialogue and trying to identify the crux.]

C2: I think consciousness is pretty weird.

C1: What do you mean by weird?

C2: Normally, you can describe things in terms of the physical relationships that they have with each other. But when I’m having an experience – if I look at a tomato or an apple – I experience something red, and it feels like that redness is over and above what can be described in terms of physics.

C1: See, I don’t think it’s that weird. I think that our brains have this higher-dimensional representation of different concepts that are all related in a kind of web. Obviously, we don’t have direct access to the high dimensional detail of every neuron firing, so our brain compresses this high-dimensional space into something that’s easier for us to represent to ourselves. When you see something red, this is just the brain picking out a representation for some higher-dimensional concept and compressing it and presenting it to you.

C2: Just so I understand your view. Are you saying that hard problem intuitions are simply confused i.e. that once we understand the full neuroscience, the mystery dissolves? Or are you granting that there’s a genuine explanatory gap but denying it reflects a gap in nature?

C1: The latter. I’ll grant that there’s something to explain. But I think the explanation bottoms out in functional organisation. The hard problem just boils down to a limitation in our introspective access.

C2: I don’t think that fully explains it, though.

C1: Why not? When you talk about the hard problem of consciousness, you’re just saying that it’s difficult to explain from a third-person perspective, but when the brain starts representing things to itself, it generates a first-person perspective – a concept of “self” that things are happening to. This representation is recognised from a first-person perspective inside that self-concept. So the difference between a first-person and a third-person point of view gives rise to the hard problem intuitions because you can always imagine removing yourself from the first person perspective. There’s no actual underlying mystery.

C2: So you’re talking about a phenomenal concept. The brain has a concept of something “red”, represents this concept to itself and this is supposed to generate our experience?

C1: Yes, exactly.

C2: I have a lot of problems with this phenomenal concept strategy. How exactly do you define a phenomenal concept?

C1: You could define them recognitionally – like I recognise a dog, this fits into the causal web of concepts that I’ve built up in my brain and so it picks out a concept that I recognise and deploys it in my brain as a phenomenal concept.

C2: This doesn’t quite work. What about things that you don’t recognise? Are you not conscious of them? For something like colour, how do you ever get your first experience of colour if it always needs to be a recognitional concept?

C1: Okay, fair enough. Maybe you could define it quotationally? e.g. redness is [insert red picture here].

C2: Again, wouldn’t you need access to the [insert red picture here] concept in the first place? It seems circular, because then redness wouldn’t even be propositional; you’d need to experience an example of it to be able to deploy it.

C1: Alright, I agree that this is a bit cumbersome. But I have a cleaner version which I think you’ll appreciate. You could imagine it as an indexical fact. I could index redness and say “redness is that sort of thing,” where “that” is just the lower-dimensional representation that my brain is presenting to me as a compression of the higher dimensional space.

C2: Okay, I can get on board with this. So I’ve got these indexical phenomenal concepts. Now, I could imagine that a zombie would also possess these indexical phenomenal concepts because a zombie is also in possession of indexicals. It can index and say “this thing, that sort of thing,” etc.

C1: Sure, I agree with that.

C2: But then if a zombie possesses the concepts, then zombies have experiences, which is a contradiction.

C1: Okay fine, let’s say that zombies don’t possess them. Maybe zombies lack the capability to index “redness” but can index other things.

C2: Well, if a zombie doesn’t possess them, then you’re not really explaining anything. If a zombie doesn’t possess this phenomenal concept, then it’s not really powerful enough to solve the hard problem, right? You need something to bridge that gap.

C1: I think there’s an obvious conclusion here. This zombie thought experiment has a problem because you’ve just derived a contradiction from it. Instead of saying that there’s something mysterious to consciousness, why don’t we just say that there’s a problem with this thought experiment regarding zombies?

C2: Fine. I’ll grant that zombies are difficult to conceive. Here’s a weaker form of the thought experiment that I think you’ll be happy to grant. Imagine spectrum inversion: there’s a physical duplicate of yourself, but where you see red, they see blue or green. Your spectra are completely inverted, and you would never know, because you’re not able to communicate this difference to each other.

C1: Hold on. You can’t just do a simple remapping between red and blue; this would throw out all sorts of behavioural relations which arise because of the structure of colour space. For example, we have cone cells in our eyes, with particular densities, that react to certain wavelengths of light. Surfaces have different reflectance properties which interact in complex ways with these wavelengths. There is also significantly different wiring in our visual cortices, which will produce behavioural differences. I don’t accept that you can simply invert the spectrum here and leave behaviour unchanged.

C2: Good point. There’s obviously a lot of structure which needs to be respected. But you need something stronger for your view to go through. You need the full physics to be fixed, and for the feeling of blue or the feeling of red to also be fixed by the physics.

My claim is the following: I could conceive of the feeling of red or the feeling of blue being slightly different, maybe some rotation in colour space that takes you a little bit away from redness and towards blueness; some difference in the qualitative character of my experience with the exact same physics.

C1: I’m not convinced this actually makes the difference you claimed. If we permute or invert our spectra and somehow miraculously leave behaviour unchanged, that’s not creating an actual difference in the world that can be observed.

C2: Hold on, you just admitted that your hard problem intuitions fall out of the difference between the third-person and first-person perspectives. So it might not make a difference to third-person observable behaviour, but it would make a difference to the first-person experiencer. They would feel something different if their experience were different.

C1: A lot of work is going into the word “difference” there. What are you actually claiming is the difference?

C2: Imagine there’s a person at time T1 experiencing redness. Then at time T2, the exact same physical structure is replicated, but instead of redness they are now experiencing blueness, while the underlying physical relations remain fully fixed. This is conceivable.

C1: This is perfectly compatible with my position. You’re saying that you can hold the physics constant and vary the experience without changing my behaviour? But the experience has now changed its relationship with my memories, so my behaviour would change. I would remember the red experience, so if the experience really did change I would say “Oh, I’m experiencing a blue experience now!” – which changes the underlying behaviour.

C2: Okay, fine. Let me try something stronger. Imagine there’s world A where the physics is fixed and I’m experiencing a red experience, and world B where the physics is fixed and I’m experiencing a blue experience. World A and world B have the exact same physical structure, but I claim that they are different worlds because my experience differs between them.

C1: I disagree. From my perspective, they are the same world, and I don’t see any substantial actual difference in these two worlds. If you think there is a difference, I’d like to know what exact physical fact makes the difference.

C2: This is a good challenge. Now let me go on a little bit of a tangent. What do you mean by physical fact?

C1: That which is described by our physical laws.

C2: Good. So a physical law would be describing, say, how an electron moves in an electric field?

C1: Yes, exactly.

C2: Great. So what is charge? How do you define charge?

C1: Charge is defined with respect to the equations of electromagnetic interaction. For example it appears in Coulomb’s law.

C2: Good. Now how do you define an electric field?

C1: Electric fields deflect charge. Electric field and charge are related by physical laws and it’s the physical laws which give you this structure.
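(An aside, to make the structural point explicit: a standard textbook form of the two relations being gestured at here is Coulomb’s law for the force between two charges, together with the force a field exerts on a test charge. Nothing in the dialogue hinges on the constants; the point is that charge and field each appear only via their role in these relations.)

$$F \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}, \qquad \mathbf{F} \;=\; q\,\mathbf{E}$$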

C2: Isn’t this kind of circular? You’re defining charge with respect to electric fields and vice versa.

C1: It’s not really circular – it’s a web of structural relations. Each entity in the structure is defined by the way it’s related to other entities in the structure.

C2: So if I had an electron and it wasn’t actually in an electric field, is it still charged?

C1: Yes.

C2: But it’s not being deflected at that moment. How can you tell it’s charged?

C1: I would say counterfactually that it would have been deflected if it had been in an electric field even though none was there at the time.

C2: Okay. So you’re essentially saying that the structure can fully specify the underlying reality. All I need to do is write down the equations and the relations that are described by the equations fully underpin reality.

C1: Yes.

C2: In world A and world B, all of these structural relations are identical, so that’s why you want to say that the two worlds are identical.

C1: Yes.

C2: Well, I would say that world A is different precisely because I am having a different experience in world A than in world B, and counterfactually, the first-person observer would realise that if they were in world B, they would be having a different experience. My claim is counterfactual: if the observer switched, they would realise that they are having different experiences. That’s what I mean by saying they are different worlds.

C1: I appreciate what you’re doing, but I’m not really convinced that this is modally potent. You’re trying to cash out this counterfactual in terms of something that is not metaphysically possible to achieve. In physics, things are really happening in the physical world. You’re talking about things which can never happen and are not even metaphysically possible.

C2: Well, I think it is modally potent, and I’m cashing it out in precisely the structural terms that you seem to care about. You were happy to use counterfactuals to define the electron’s charge even when it’s not in an electric field. Why not now?

C1: Well, our counterfactuals are importantly different. I’m cashing mine out in terms of what an electron would do in an electric field. Your counterfactual doesn’t specify how the system would do anything differently, because both worlds have the exact same laws of physics. There are no causal powers at play, and you can’t ever realise this counterfactual, because it’s not metaphysically possible to actually switch world A with world B.

C2: It has counterfactual causal powers though. If you switched them then you would see a difference from your first person perspective. You just can’t do the switch in practice.

C1: Why go to all of this work to posit a mysterious metaphysical counterfactual that doesn’t seem to have any real causal powers though? Shouldn’t the more parsimonious move be to reject the need for this counterfactual?

C2: Sure, I’m all for parsimony if we can account for all of the data – but this purely relational view of experience doesn’t explain the first-person data points of my experience. Something is needed to fix what blue and red feel like from my first-person perspective. If nothing fixes this, then there are redundant degrees of freedom.

C1: I’m glad you bring up redundant degrees of freedom because I think that’s exactly what you’re focussing on here. There are many equivalent redundant descriptions of reality that are all correct. It’s like choosing a coordinate system or gauge potential. The actual choice doesn’t affect the physics.
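(Another aside, to make the gauge analogy concrete: a familiar example of such a redundant degree of freedom is the magnetic vector potential, which can be shifted by the gradient of any smooth function without changing the observable field, since the curl of a gradient vanishes.)

$$\mathbf{A} \;\to\; \mathbf{A} + \nabla\chi \quad\Longrightarrow\quad \mathbf{B} = \nabla\times\mathbf{A}\ \text{is unchanged, because}\ \nabla\times(\nabla\chi) = 0$$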

C2: I grant that the choice doesn’t affect physics (third person) but it does affect phenomenal experience (first person). Phenomenal character is fixed by that choice among the redundant degrees of freedom!

C1: I think we’ve gone way off the metaphysical rails here but let me play along for the sake of argument. What exactly is fixing the experience here? It sounds like magic.

C2: Well when you described the electron moving through the electric field you described it in purely relational terms using an equation and counterfactual structure. But you said nothing about what charge really is intrinsically.

C1: What do you mean?

C2: I mean there’s an extra property not captured by the relations alone. If you switched charge with some other property (call it scharge), standard physics would say this doesn’t matter, as it doesn’t affect the equations of motion. But on my view it does matter: there’s an intrinsic property which has changed, which sits separately from the relational properties described by the equations of physics.

C1: And let me guess… this intrinsic property is non-physical right?

C2: Well it depends what you mean. I’d say it is physical as it’s contained in the physical particles and fields themselves. It’s just not relational like the properties of standard physics.

C1: Why postulate such a property? If it doesn’t have any physical observables it would be causally idle.

C2: Well it’s not causal in a relational sense. Physics is still causally closed. It’s just that the intrinsic properties realise this causal structure.

C1: It sounds epiphenomenal to me. You couldn’t explain any observable behaviour by postulating these properties.

C2: Sure, it wouldn’t have any third person physical observables. But it would do the job of fixing the phenomenal character of experience from a first person perspective.

C1: And every physical particle has these intrinsic properties?

C2: Yes.

C1: So everything is conscious then?

C2: Uhhh…

C1: Well this is the logical conclusion of your view! You’re saying electrons have a categorical property which fixes the phenomenal character of your experience. So electrons are conscious! That sounds absurd.

C2: Sure. But absurd-sounding things can be true all the same. Quantum mechanics and general relativity sounded absurd at the time.

C1: Let me put it this way, then: how do you combine all of these tiny little microexperiences into a coherent macroexperience? You have a combination problem.

C2: Okay, I don’t know, but at least this gives us a good starting point for solving the hard problem, right? We’ve reduced the hard problem to a combination problem. That seems more tractable.

C1: You’ll forgive me for not being convinced. If we have to postulate conscious electrons, I think we’ve gone off the rails. I can just reject your move to make world A and world B counterfactually different, even though it’s not metaphysically possible to switch them.

C2: Okay, fine. Instead of saying these categorical properties are phenomenal, let’s say they’re proto-phenomenal. They only yield the full suite of phenomenal experience when combined in the appropriate way; observed in isolation, they’re not phenomenal.

C1: Oh, come on! We got into this mess in the first place because we weren’t happy bridging the non-phenomenal/phenomenal gap using physics and structure alone. Now you’ve just re-introduced the gap except we now have these causally idle proto-phenomenal properties!

C2: I think it makes sense though! The problem with your view is that there are structural relations all the way down and there’s nothing to realise the structure. On my view we at least have a base which realises the structural relations.

C1: I disagree that a base is needed for structure to be realised. And I think this is the crux between us.

C2: How would you have structure without a realiser though?

C1: Well, consider a chess game. There are rules which define the way the pieces move: bishops move diagonally and rooks move in straight lines. The rules of the game completely specify “chess” – so much so that you could even remove the pieces and just play blindfolded chess. As long as the rules are intact, you don’t need any mysterious base to realise chess.

C2: This is a great analogy. If I switch the rook and bishop pieces on the board but keep the rules the same something has obviously changed.

C1: Nothing has changed about chess qua chess though! It’s still chess!

C2: Yes, because chess is fully specified by structure. Consciousness isn’t fully specified by structure – there’s something else in addition to the structure and that is the categorical base which is realising it.

C1: But to my previous point, you can imagine the base being absent and still playing blindfolded chess. Chess is structural, and that structure doesn’t need a base to realise it.

C2: Well, blindfolded chess still requires players with minds keeping track of the board state. The physical pieces are inessential, but something has to realise the structure. You’ve just shifted the realiser from wooden pieces to mental states. If you removed the players too, there’d be no chess game – just an abstract mathematical structure that could describe a chess game. The game itself is constituted by something concrete.

C1: I’d say the abstract structure is the game. The players are just one way of instantiating it, but the structure is what matters.

C2: Then we have a genuine disagreement about the ontology of structure. I’d say abstract structures don’t exist independently, they’re always structures of something. You apparently think structure can float free.

C1: Structure doesn’t “float free” in a spooky way. The physical relations between neurons are abstract but it’s the structural relations between neurons which are important for consciousness.

C2: Right, but “relations between neurons” presupposes neurons – things standing in relations. And my question is what neurons are, intrinsically, beyond their relational properties.

C1: There’s no intrinsic property of neurons. They’re just related to each other by physical laws; it’s the relations which are fundamental.

C2: I find that incoherent. Relations need relata.

C1: Let me put it this way: do you believe in chairs?

C2: What?

C1: Chairs. Like sofas, couches, stools, etc. Do you think they exist?

C2: Sure, but what does this have to do with anything?

C1: Well, is there an intrinsic property of chairness?

C2: I see what you’re getting at. No. The concept of a “chair” is fully specified by the causal/functional role it plays. But consciousness is different.

C1: How exactly is it different?

C2: Because there’s something about my phenomenal character which is fixed. The redness of red, the blueness of blue, etc.

C1: This seems like special pleading. Everything we encounter in everyday life can be described by its causal/functional role: the rules of chess, but also life, fire, etc. People historically thought these needed categorical/intrinsic properties, like phlogiston or élan vital, to explain them, and we discovered through continued scientific enquiry that such properties weren’t needed for a complete explanation of the world.

C2: Yes, but I have a principled reason to special plead here. The description of the world is only complete from the third-person perspective; it’s incomplete from the first-person perspective, because we still need to explain the phenomenal character of consciousness.

C1: Well why don’t things like chairs, life or fire need a categorical property to realise them then?

C2: Because their definition is fully exhausted by the causal/functional role that they play in the structure of the world. So you can get a complete description of life by just understanding more about the functional roles and relational properties. The same isn’t true of consciousness.

C1: I think we’ve reached an impasse, because I think you can do this for consciousness. Consciousness just is the causal/functional role of the physical neurons in the brain. There’s no additional property needed to fix phenomenal character.

C2: I agree that we’ve reached an impasse here. On my view, an intrinsic property is needed to fix the phenomenal character, whereas on your view there are just structural relations all the way down – nothing is needed to realise the structure as long as the relations hold.

C1: I agree that this is the crux.


