Published on January 12, 2026 3:13 AM GMT
Inequality is a common and legitimate worry that people have about reprogenetic technology. Will rich people have super healthy smart kids, and leave everyone else behind over time?
Intuitively, this will not happen. Reprogenetics will likely be similar to most other technologies: At first it will be very expensive (and less effective); then, after an initial period of perhaps a decade or two, it will become much less expensive. While rich people will have earlier access, in the longer run the benefit to the non-rich in aggregate will be far greater than the benefit to the rich in aggregate, as has been the case with plumbing, electricity, cars, computers, phones, and so on.
But, is that right? Will reprogenetics stay very expensive, and therefore only be accessible to the very wealthy? Or, under what circumstances will reprogenetics be inaccessible, and how can it be made accessible?
To help think about this question, I'd like to know examples of past technologies that stayed inaccessible, even though people would have wanted to buy them.
Can you think of examples of technologies that have strongly disproportionately benefited very rich people for several decades?
Let's be more precise, in order to get at the interesting examples. We're trying to falsify some hypothesis-blob along the lines of:
Reprogenetics can technically be made accessible, and there will be opportunity to do so, and there will be strong incentive to do so. No interesting (powerful, genuine, worthwhile, compounding) technologies that meet those criteria ever greatly disproportionately benefit rich people for several decades. Therefore reprogenetics will not do that either.
So, to falsify this hypothesis-blob, let's stipulate that we're looking for examples of a technology such that:
1. It could technically have been made accessible to the non-rich.
2. There was opportunity to make it accessible.
3. There was strong incentive to make it accessible.
4. It is interesting (powerful, genuine, worthwhile, compounding).
5. It nevertheless greatly disproportionately benefited rich people for several decades.
We can relax one or more of these criteria somewhat and still get interesting answers. E.g. we can relax "could be made accessible" and look into why some given technology cannot be made accessible.
What are some other examples?
In general, necessary medical procedures tend to be largely covered by insurance. But that doesn't mean they aren't prohibitively expensive for non-rich people. Cancer patients especially tend to experience "financial toxicity": they can't easily afford all their treatments, so they're stressed out, sometimes skip treatments, and die at higher rates. There's also some mysterious process by which drug prices rise for unclear reasons [1] (maybe just that drug companies raise the price when they can get away with it). This would be more of a political / economic issue, not an issue with the underlying technologies.
Some of these medical things, especially IVF, are kinda worrisome in connection with reprogenetics. Reprogenetics would be an elective procedure, like IVF, which requires expert labor and special equipment. It probably wouldn't be covered by insurance, at least for a while—IVF IIUC is a mixed bag, but coverage is increasing. This suggests that there should maybe be a push to include reprogenetics in medical insurance policies.
Of course, there are many technologies where rich people get early access; that's to be expected and isn't that bad. It's especially not that bad in reprogenetics, because any compounding gains would accumulate on the timescale of generations, whereas the technology would advance in years.
Lalani, Hussain S., Massimiliano Russo, Rishi J. Desai, Aaron S. Kesselheim, and Benjamin N. Rome. “Association between Changes in Prices and Out-of-Pocket Costs for Brand-Name Clinician-Administered Drugs.” Health Services Research 59, no. 6 (2024): e14279. https://doi.org/10.1111/1475-6773.14279. ↩︎
Published on January 12, 2026 3:09 AM GMT
My friend Justis wrote a post this week on what his non-rationalist (“normal”) friends are like. He said:
Digital minimalism is well and good, and being intentional about devices is fine, but most normal people I know are perfectly fine with their level of YouTube, Instagram, etc. consumption. The idea of fretting about it intensely is just like… weird. Extra. Trying too hard. Because most people aren’t ultra-ambitious, and the opportunity cost of a few hours a day of mindless TV or video games or whatever just doesn’t really sting.
This seems 1) factually incorrect and 2) missing the point of everything.
First off, in my experience, worry about screen addiction doesn’t cleave along lines of ambition at all. Lots of people who aren’t particularly ambitious care about it, and lots of ambitious people unreflectively lose many hours a day to their devices.
Second, digital intentionality is about so much more than productivity. It’s about living your life on purpose. It touches every part of life, because our devices touch every part of our lives. To say that people only care about their device use because it gets in the way of their ambitions is to misunderstand the value proposition of digital intentionality.
Yesterday I got talking with the station agent while I was waiting for a train, and (completely unprompted by me) he started saying things like “Did you know that in Korea, their books say the internet is a real addiction you can have?” and “You used to have to go to Vegas to be so overstimulated; now they put touchscreens on the street!” and “I go on my phone to use the calculator and then I realize I’m just scrolling and I didn’t even ever use the calculator!”
Or right now I’m sitting at a café, and I just overheard a woman say, “Intelligent people are making things very addictive to distract us.”
‘Normal’ people care about this, which makes sense, because it affects all of us. You don’t have to be ultra-ambitious, or even ambitious at all, to feel the opportunity cost of being on your devices all the time. People lament the moments they miss with their kids or loved ones because they’re looking at their phones. And there are plenty of non-opportunity costs — people complain about their attention spans shortening, their memory getting worse. They think about how they used to be able to read books and now they can’t. And people are on their phones while they’re driving, all the time.
How to Do Nothing is a book about digital intentionality (its subtitle is Resisting the Attention Economy), whose author thinks that the entire concept of productivity makes us forget what it is to be human. To her, devices are bad in part because they keep us focused on productivity. Her thesis is that if you really pay attention to the world around you, you’ll find that it’s so interesting that you just won’t want to spend time on your devices. (She made it sound so cool to not only notice but be able to identify all the birds you see and hear, that now I own binoculars and go birding every weekend!)
Even Cal Newport’s Digital Minimalism is surprisingly value-agnostic, considering that Newport frames most of his books in terms of productivity. He talks about a father who used to love art, but let it fall by the wayside; after reconnecting with what he wants through digital minimalism, he starts drawing a picture to put in his child’s lunchbox every night.
I’ve read a lot of books on digital intentionality, and people mostly come to it not because they’re worried about not accomplishing their goals, but in desperation when they realize the overall impact of their devices on their lives and psyches.
People just want to be able to sit with their thoughts. They want to be able to live in moments, and remember things, and maybe read a book ever again. People want to feel like humans in a world where life is increasingly disembodied.
I’m not into digital intentionality because I have some big goal I want to accomplish, or even because I had some small goal, like reading a lot of books. (I basically don’t have goals! It’s something I struggle with.) I’m into digital intentionality because I didn’t want to lose any more years of my life to shit that gave me no value and that I wouldn’t even remember, that was designed to keep me sedentary just to drive ad revenue to companies that already have too much money. I wanted to go outside and form memories and be a person and talk to other people. And now I do.
Published on January 12, 2026 12:07 AM GMT
"I have a lot of questions", said Carol. "I need to know how this works."
"Of course", said Zosia. "Ask us anything."
Carol hesitated, gathering her thoughts. She knew that Zosia couldn't lie to her, but she also knew that she was speaking with a highly convincing superintelligence with the knowledge of all the best sophists and rhetoricians in the world. She would have to be careful not to be too easily swayed.
"I'm concerned about how your transformation affects your collective moral worth", she finally said. "I accept that you are very happy. But are you one happy person or many? And if you're one person, are you happy enough to outweigh the collective happiness of all the individuals whom you used to be?"
"That's an excellent question", replied Zosia immediately. "You're trying to determine if humanity is better off now than it was before, and you've astutely drilled down to the heart of the issue.
"To your first question, we honestly don't feel as if we are many individuals now. Subjectively, we feel more like many pieces of one mind. Certainly, we have one unified will. Insofar as different individuals have different sensations and different thoughts, we think about them as subsystems in a different mind, similar to how you can hear or see something without being consciously aware of it until something causes it to come to your conscious attention. When I talk to you, it is like your legs continuing to walk while you navigate to your destination. Your legs and the part of your brain responsible for controlling them have no independent will nor independent personhood. Does that make sense to you?"
Carol's mind raced. Zosia hadn't even tried to convince her that each human was an individual! Was she effectively admitting that eight billion individuals had been killed in favor of one? That would be a moral catastrophe.
"Your answer is really disturbing", she finally said. "I don't assign any moral value to the several parts of my brain or my nervous system. If I feel a sensation that is theoretically agreeable or disagreeable but it does not affect my conscious mind, I don't consider that to either add to or subtract from the total happiness in the world. If individuals in your collective are analogous to subsystems in my mind, I would think that your moral worth is that of one individual and not many. That would mean that humanity was much better off when we were many individuals, even if our average happiness was lower."
Zosia smiled. "I understand where you're coming from", she said gently. "But you might think about why you assign no moral value to the subsystems in your mind. Is it because they have no independent will, or is it because they are inherently primitive systems? Consider your visual processing system. Yes, it exists only to gatekeep data from and pass information to your higher-order mind, and to move your eyeballs in response to top-down signals.
"But imagine instead of a simple visual cortex, you had a fully developed human being whose job was to do the same thing that your visual cortex does now. This individual is like any human in every respect except one—his only goal is to serve your conscious mind, and he has no will except your will. I think you would still consider this person worthy of moral consideration even though his function was the same as your visual cortex.
"That means it's not the fact that a system is part of a whole that deprives it of moral worth. No, it's simply its complexity and 'human-ness'. Yes, Zosia is—I am—merely a part of a whole, not a true individual. But I still have the full range of mental complexity of any individual human. The only thing that's different between me and you is that my will is totally subsumed into the collective will. As we've established, though, it's not independent will that makes someone worthy of moral consideration. I am happy when the collective is happy, but that doesn't make my individual happiness any less meaningful."
Carol considered Zosia's words as she walked home, needing some time to think over their conversation before they would meet again the next day. Zosia seemed convincing. Still, there was something that unsettled her. Zosia spoke as though the hive mind were analogous to individuals who happened to share the exact same knowledge and utility function. But all of them also seemed to have the same personality as well. In the analogy to subsystems of a human mind, you would expect the different individuals to have different methods, even if they had the same knowledge and the same goals. Yet each individual's actions seemed to be the output of a single, unified thought process. That made it seem like there was no local computation being done—each person's actions were like different threads of the same computer process.
Did that undermine Zosia's point, or did it just mean that she had to switch up her mental model from an "individual"—someone with a distinct personality—to an "instance", another copy of the hive mind with distinct experiences but an identical disposition? Carol wasn't sure, but she knew that she had little time to make great philosophical headway. One to three months was how long Zosia had said she had before they would have her stem cells, and therefore her life. Should she resume her efforts to put the world back the way it was?
The question continued to haunt her as she fell into a fitful sleep.
Published on January 11, 2026 11:08 PM GMT
This was written for FB and twitter where my filter bubble is strongly Democrat / Blue Tribe. I'd ideally update some of my phrasing for the somewhat more politically diverse LW, though I'm hoping my actual talking points still land pretty reasonably.
...
I am not currently Trying For Real to do anything about the Trump Administration. If I were, I'd be focused on finding and empowering a strong opposition leadership with bipartisan support.
It's in the top 7 things I consider dedicating this year to, maybe the top 4. I could be persuaded to make it my #1 priority. Things seem pretty bad. The three reasons I'm not currently prioritizing it are:
1. I don't currently see an inroad to really helping
2. Figuring out what to do and upskilling into it would be a big endeavor.
3. AI is just also very important and much more neglected (i.e. ~half the country is aware that Trump is bad and out of control, while a much teenier fraction understands that the world is about to get steamrolled by AI)[1]
My top priority, if I were getting more involved, would be trying to find and empower someone who is, like, the actual executive leader of the Trump Opposition (and ideally finding a coalition of leaders that includes Republicans, probably ex-Trump staffers who have already taken the hit of getting kicked out of the administration).
The scariest thing about what's happening is how fast things move, how much Trump et al. are clearly optimizing for this blitz of stuff that's constantly fucking up people's Orient/Decide/Act loop. A scattered resistance seems like it basically won't work; there needs to be someone thinking like a buck-stops-here leader, who has the usual cluster of "good leadership traits."
I currently guess such a person is basically also gathering the support to be the next presidential candidate (I think they need all the traits that would make a good presidential candidate).
(Their campaign slogan could be "Make America Great Again!", since Trump has seemed intent on destroying the things that, AFAICT, made America actually exceptional.)
Anyone who's around and available is going to be imperfect. There's a fine line between "not letting the perfect be the enemy of the good" and "actually trying to find someone who is sufficiently great at leading the opposition."
(Gavin Newsom is the only guy I've heard of who seemed like he might be trying to play this role. I don't know that he is actually good enough, both in terms of competence and in terms of morals.)
I also think the people in my mostly-liberal network are not really grappling with this: the opposition needs to be able to peel away Republicans. I think the priority right now really needs to be "stop the erosion of the constitution and our institutions", not "try to fight for what would normally be the political agenda you're trying to bring about."
I see people getting approximately as worked up over constitutional violations as over various normal liberal talking points. We need a strong allyship between Democrats and Republicans.
I think a lot of Democrats feel bitten by having tried to compromise in the past and feeling like the Republicans kept defecting, and are now wary of anything that looks like compromise with Republican leadership. This is reasonable, and I don't actually know what the solution here is. But the solution doesn't look like enacting the standard playbook of how folk have been politically active over the past 20 years. That playbook clearly didn't work; whatever the solution is, it needs to look at least somewhat different from doubling down on the stuff you were doing already.
If I were spending more time on this, my next actions would be doing a more thorough review of who the existing leadership among the resistance are, what the existing networks and power structures are. I have a sinking feeling there's nobody who'll really stand out as a great contender, and I'm not sure what to do if that's the case.
But in the worlds where things go well, my current guess is we get a Democrat-ish leader with a Republican second-in-command, who are able to lead a strong coordinated resistance and who naturally transition to being presidential/vice-presidential candidates in a couple years.
It's plausible I do end up focusing on civilizational-level "improve discourse" (as opposed to my normal focus on the rationality/x-risk community), which could pull double duty for "somehow help with Trump" and "somehow help with AI".
Published on January 11, 2026 9:26 PM GMT
As a weirdo, I like to read LessWrong sometimes. There are a few extremely tiny features that I wish the site had that it doesn't. Luckily enough, I know how webpages work, and certain kinds of tweaks are especially easy. I'm attaching two of these here now, and may return to add more later.
Current contents:
1. LessWrong Vote Hider
2. LessWrong Vote Floater
You're going to need the ability to inject CSS into webpages [1]. I use Stylus for Firefox, but any mainstream browser is going to have some add-on for this. There appears to be a version of Stylus for Chrome, for example.
Userstyles come in a few forms, but mine are going to be dumb CSS, which removes a lot of the need to explain anything.
First, create a userstyle to paste the code into:
It should look like this
Now, edit the title (on the left in Stylus) so that you can find this snippet later.
Finally, set the URLs the snippet will be active on. I'll provide this in each script.
/************************
* LessWrong Vote Hider *
************************
* It can feel nice to place your votes on LessWrong
* without knowing how everyone else voted (but seeing
* their votes is also pretty helpful!)
*
* This snippet hides the post score (the top-right tiny one
* as well as the big one at the bottom) and also
* both comment scores (vote score and agreement)
************************
* URL settings:
* URLs starting with https://www.lesswrong.com/posts/
************************/
/* Post score (bottom) */
.PostsVoteDefault-voteScores *:not(:hover),
/* Post score (top-right) */
.LWPostsPageHeaderTopRight-vote .LWPostsPageTopHeaderVote-voteScore:not(:hover),
/* Comment score */
.OverallVoteAxis-voteScore:not(:hover),
/* Comment agreement */
.AgreementVoteAxis-agreementScore:not(:hover) {
  color: #00000000; /* fully transparent; hover to reveal the score */
}
/**************************
* LessWrong Vote Floater *
**************************
* Makes the vote box from the bottom of the article float on the bottom right.
* Also hides the other vote count (the tiny one at the top-right of the page).
**************************
* URL settings:
* URLs starting with: https://www.lesswrong.com/posts/
**************************/
/* Shove the footer section into the bottom-right corner */
.PostsPagePostFooter-footerSection {
  position: fixed;
  right: 0;
  bottom: 0;
}
/* Adjust the footer block */
.PostsPagePostFooter-footerSection > * {
  margin: 0.5em;
}
/* The tiny top-right vote is redundant now since the other vote widget is always visible */
.LWPostsPageHeaderTopRight-vote { display: none; }
If you guys have any you use, I'd love to see them in the comments. I know these are simple, but that's on purpose.
This is generally pretty safe, but almost everything you can do on your web browser can be exploited somehow. Wait for someone else to tell you it's safe (someone who will be blamed if it goes wrong) before going too hog-wild. ↩︎
Published on January 11, 2026 5:47 PM GMT
With the EA Forum's giving season just behind us, it's a natural moment to look back on your donations over the past year and think about where you'd like to give in the year ahead. We (Tristan and Sergio) rarely spend as much time on these decisions as we'd like. When we tried to dig a bit deeper this year, we realized there are a lot of big questions about personal donations that haven't been crisply put together anywhere else, hence the post.
We've tried to make some of those questions clearer here, to highlight things that you might want to consider if they haven't occurred to you before, and to encourage comments from others as to how they think about these factors. Some of these factors aren’t original to us, and in general we’re aiming to bring together considerations that are scattered across different posts, papers, and conversations, and present them in one place through the lens of personal donation decisions. Happy giving!
This post focuses on five considerations that arise as you try to deepen your giving, especially as you give to specific opportunities rather than just to a fund. Those are:
1. When you're ready to allocate donations based on your own inside view rather than deferring to a fund
2. Indirect effects and complex cluelessness
3. Moral uncertainty and how to represent it in your allocation
4. When to donate: regular giving, opportunistic giving, and patient philanthropy
5. Moral seriousness and keeping some giving aimed at present suffering
Early on, it likely makes sense for nearly all of your donating to run through some fund. You're new to a cause area, or new to EA and considering the broad set of potential donation opportunities at hand, and you simply don't have a well enough constructed view that it makes sense to try to stake out your own position.
But eventually, you'll become familiar enough that you've begun to form your own inside view. You'll look at what funders broadly fund in areas that interest you, and start to disagree with certain decisions, or at least feel that some segment of the cause area is being neglected. These start as intuitions: useful indicators, but likely nothing robust enough to justify deviating from donating to the fund you think is most impactful.
But at some point, you'll likely arrive at a place where you have enough knowledge about some part of a cause (especially if you work on it) that it's worth considering choosing the targets of your donations yourself. Where is that point?
Frankly, it’s hard to tell; we’ve debated this more than once ourselves[1]. But here are some signals that you might be ready to allocate a portion of your donations according to your own judgment:
You've engaged directly with the orgs/founders you're considering. Brief calls and public materials are a start, but they don't replicate the depth of evaluation that dedicated grantmakers do[2].
Even when you meet some of these signals, we'd suggest an 'earn your autonomy' approach: start with ~20% for inside-view bets while keeping most funds allocated through established grantmakers. Track your reasoning and expected outcomes, then increase autonomy gradually if your bets look good in hindsight.
We take the meat-eater problem seriously, but we don't at all think that the conclusion is to avoid donating in the Global Health and Development (GHD) space: the effects might actually even out if e.g. further development reduces the total amount of natural space, potentially counterbalancing increased meat consumption by reducing the number of suffering wild animals. But the problem is enough to give us pause, and highlights the general issue that, for anyone with a diverse set of things they care about in the world, they should likely consider the indirect effects of the interventions they're funding.
The meat-eater problem[3] is a specific case of a much broader issue: we are often radically uncertain about the long-run or indirect effects of our actions. That matters because second-order (and further) effects might be the most important aspect of any given intervention.
This is "complex cluelessness", uncertainty not just about the sign and magnitude of indirect effects, but cases where plausible effects flow in opposite directions and we lack a reliable way to weigh them.
There's much more to say about cluelessness and different people offer different responses. But if you don't want to be paralyzed, sometimes you have to bracket what you can't reliably assess and act on what you can. This doesn't mean ignoring second-order effects — quite the opposite. It means there may be real value in donating to those working to map out the potential unintended consequences of common EA interventions.
Probably everyone here is familiar with moral uncertainty, but what does it actually mean for your giving? What would this body of work have to say about how we can donate more wisely? More concretely: if you're uncertain between different moral frameworks or cause priorities, how should you allocate your donations?
The standard answer is to maximize expected value (EV). Donate everything to whatever has the highest expected impact given your credences across different moral views. But donating 100% to what you think is the most important cause is far from the obvious strategy here.[4]
First, the benefits of EV maximization under ordinary empirical uncertainty don't fully apply to philosophical uncertainty. With empirical uncertainty, a portfolio of diversified bets tends to reliably do better in the long run: individual gambles may fail, but the overall strategy works. With philosophical uncertainty, you're not making independent bets that will converge toward truth over time. If you're wrong about hedonistic utilitarianism, you're likely to stay wrong, and all your actions will be systematically misguided.
Second, moral uncertainty can reflect value pluralism rather than confusion. You can genuinely care about multiple ethical perspectives. You might genuinely have utilitarian concerns and deontological ones at the same time, and your donations can reflect that.
If these objections to EV maximization for dealing with moral uncertainty seem relevant to you, an alternative approach might be through frameworks such as moral parliaments, subagents, or Moral Marketplace Theory. While distinct, these approaches share the insight that when genuinely uncertain between moral views, you should give each perspective meaningful representation. If you're 60% longtermist, 25% focused on present human welfare, and 15% focused on animal welfare, you might allocate your donations roughly in those proportions, not because you're hedging, but because you're giving each perspective the representation it deserves given your actual uncertainty.
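To make that proportional split concrete, here is a minimal sketch in Python. The 60/25/15 credences are the ones from the example above; the cause labels and the $5,000 budget are made up purely for illustration, not a recommendation.
# Hypothetical moral-parliament-style split: give each moral view a share of the
# annual donation budget equal to your credence in it. Numbers are illustrative only.
credences = {
    "longtermism": 0.60,
    "present human welfare": 0.25,
    "animal welfare": 0.15,
}
annual_budget = 5_000  # dollars per year (made up)

# Allocate in proportion to credences rather than giving 100% to the plurality view.
allocations = {cause: annual_budget * p for cause, p in credences.items()}
for cause, amount in allocations.items():
    print(f"{cause}: ${amount:,.0f}")
# longtermism: $3,000
# present human welfare: $1,250
# animal welfare: $750
Nothing deep is happening here; the point is just that the allocation mechanically mirrors your credences instead of rounding the largest one up to 100%.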
The framework becomes especially relevant when thinking about the relationship between your career and your donations. If you work full-time in a cause area, you've already made a massive allocation to that perspective (40-60 hours per week, your professional development, your social capital, your comparative advantage).
It's reasonable to think that 80,000 hours is already enough of an investment, and that unless you're really, really confident in your cause prioritization, you should use your donations to give voice to your other values. If you're 70% confident AIS (AI safety) is the top priority and 30% confident it's something else (animal welfare, nuclear risk, GHD), allocating both your entire career and all your donations to AIS treats that 70% credence as certainty. Your career might be an indivisible resource that you've allocated to your plurality view, but your donations are divisible; they're an opportunity to give your minority perspectives some voice.
A potential blind spot of this framework is that it treats you as an individual, but you're actually part of a community. If everyone diversifies individually, we lose specialization. If everyone specializes, assuming others will cover the minority views, those views end up neglected.
Nevertheless, even if individual diversification is collectively suboptimal, it might still be personally defensible. Maybe you're not just optimizing community output; you could also care about maintaining integrity with your own values.
When you donate can matter as much as where. The right timing strategy could depend on how engaged you are with the funding landscape, whether you can spot time-sensitive opportunities, and how much you expect to learn over time. There are (at least) three possible approaches:
Regularly donating, e.g. monthly, reduces cognitive overhead, helps with self-control around spending, and gives orgs predictable cashflow for planning. A possible downside of this approach is something like the "set-and-forget" bias, where your automated allocations continue unchanged even as your knowledge or the landscape evolves. Using a fund or regrantor mitigates this somewhat (they adapt their grants as the landscape shifts), but doesn't eliminate it completely; the fund itself might be the wrong choice now, or your split between different causes/worldviews may no longer match your current thinking.
Another approach that can potentially generate a lot of value is to keep a buffer to act on time-sensitive opportunities: matching campaigns, bridge funding for quality orgs hit by landscape shifts, key hires, or short policy windows. $12,000 at the right moment can beat $1,000/month when money is genuinely the binding constraint. This strategy works best when you can distinguish "temporary funding shock" from "org struggling for good reasons", which requires more engagement and time than the steady-drip approach above, and also invites the risk of sloppy evaluation when you're pressed for time to make decisions.
There's also the question of patient philanthropy, which used to be a live area of exploration but seems to have since fallen off the radar as people have become increasingly convinced that this is The Most Important Century. We at least are not totally convinced, and as such we reserve and invest part of our current savings so that we can donate later, which comes with multiple benefits:
Expected financial growth: Historically, investments in the market have delivered positive real returns.
Epistemic growth: This connects to the "complex cluelessness" discussion in Section 2: you may not resolve all downstream uncertainty, but you can (hopefully) learn which interventions are more robust and which indirect effects are tractable enough to update on.
Option value: You can always donate later, but you can't un-donate.
But patient philanthropy comes with downsides as well. Even if you just accept the weaker claim that AI is likely to make the world a much weirder place than it is today, that's good reason to think about donating today, while the world is still intelligible and there seem to be clearly good options on the table for improving the world under many worldviews.
One of the things that most stuck with us from the 80,000 Hours podcast was a moment in an early episode with Alex Gordon-Brown where he mentioned that he always puts some of his donations towards interventions in the GHD space, out of what we might call moral seriousness.
Here, moral seriousness means passing scrutiny in the eyes of a skeptic recently acquainted with EA's core ideas. We imagine her saying: "Wait wait, you just spent all this time talking to me about how important donating more effectively is, about what an absolute shame it is what others on this Earth are suffering through right now, at this moment, but you're donating all of your money to prevent abstract potential future harms from AI? Really? Did you ever even care about the children (or animals) to begin with?"
We could explain Longtermism to her, try to convince her of the seriousness of our caring for all these things at once while still deciding to go all in on donating to AIS. We could explain the concept of hits-based giving, and why we think the stakes are high enough that we should focus all our funds there. But then we hear her saying: "Sure sure, I get it, but you aren't even donating a portion of your 10% to them. Are you really okay with dedicating all of your funds, which over the course of your life could have saved tens of thousands of animals and hundreds of humans, to something which might in the end help no one? Do you really endorse the belief that you owe them nothing, not even some small portion?"
Frankly, the skeptic seems right. We're comfortable with longtermism being a significant part of our giving, but neither of us wants it to be 100%. Still, the same questions about coordination arise here too: if the community is still split between these areas, is there any need to personally allocate across them? One reason to think so is that most people will come to EA first through an interaction with a community member, and it seems particularly important for that person to signal that their moral concern is broad and doesn't just include weird, speculative things that are unfamiliar. We want to reserve some portion for GHD and animal welfare, making sure that at least part of what we're working towards is helping others now, actively, today.
Moreover, through the lens of the moral uncertainty framework we discussed earlier, you can think of that skeptic as a subagent who deserves a seat at your decision-making table, your "common-sense representative" demanding a place among your other moral views. Even if your carefully reasoned philosophical views point heavily toward longtermism, there's something to be said for giving your intuitions about present, visible suffering some weight in your actions. Not as a concession to outside perception, but because those intuitions are themselves part of your moral compass.
Up until now, I’ve (Tristan) made my donations totally out of deference, knowing that funders have a far more in-depth view of the ecosystem than I do, and time to really deeply consider the value of each project. But now I’m at a crossroads, as I believe that funders aren’t prioritizing AIS advocacy enough. I really believe that, but I’m still relatively junior (only ~2 years in the AIS space), and am quite wary of entirely shifting my donations based on that. But then what amount would be appropriate? 50% to organizations based on my inside view, 50% to funds?
Part of the issue here is that, by choosing to donate to a very narrow window of opportunities (AIS advocacy orgs), you lose the benefit of pitting those advocacy orgs against the broader set of organizations working on AIS. You’re choosing the most effective AIS advocacy organizations, not the most effective organizations for reducing AI risk. I have abstract arguments for why I think AIS advocacy is potentially really impactful, but I don’t have the expertise to even begin to evaluate technical interventions and how they stack up against advocacy.
What’s important here is that you’ve tried to consider a number of factors that capture important considerations, and have them ready to go as you dig deeper into a given organization. For example, it’s not enough to establish that a given organization has been impactful, i.e. has done great work in the past; you also want evidence that they’re set to do good work in the future, and more specifically that your contribution will go to supporting good work. It’s important to ask what’s being bought with your further donation, and to have a sense of the upside of that specific work, beyond the org more generally.
The meat-eater problem refers to the concern that interventions saving human lives, particularly in developing countries, may indirectly increase animal suffering. The logic is that each person saved will consume meat throughout their lifetime, leading to more animals being raised and slaughtered in factory farms. If you value animal welfare, this could potentially offset the positive impact of saving human lives.
How does this work in practice? Suppose you're 95% confident that only humans matter morally, and 5% confident that shrimp can suffer and their welfare counts. In that 5% scenario, you think helping one shrimp matters much less than helping one human, maybe one millionth as much. But there are about a trillion shrimp killed each year in aquaculture. Expected value maximization multiplies your 5% credence by a trillion shrimp, and even dividing by a million for how little each counts, that overwhelms your 95% confidence about humans. The expected value calculation will tell you to donate almost everything to shrimp welfare, and many people find this conclusion troubling or even fanatical.
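Writing out the arithmetic of that example (the same illustrative numbers as in the paragraph above, treating one human's welfare as the unit of moral weight):

$$\underbrace{0.05}_{\text{credence shrimp matter}} \times \underbrace{10^{12}}_{\text{shrimp killed per year}} \times \underbrace{10^{-6}}_{\text{relative moral weight}} = 5 \times 10^{4}\ \text{human-equivalents per year.}$$

That is roughly 50,000 human-equivalents of expected stake per year on the shrimp side, far more than the number of humans an individual donor could plausibly help under the 95%-confidence human-only view, which is why naive expected-value maximization ends up recommending a near-total allocation to shrimp welfare.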