2026-02-09 06:18:48
Published on February 8, 2026 10:18 PM GMT
I am starting a new movement. This is my best guess for what we should strive for and value in a post-AI world. A few close friends see the world the same way and we are starting to group together. It doesn’t have a name yet, but the major ideas are below.
If anything in here resonates with you I would love to hear from you and have you join us. (Also I am working on a longer document detailing the philosophy more fully, let me know if you would like to help.)
Our lives are going to change dramatically in the near future due to AI. Hundreds of millions of us will lose our jobs. It will cost almost nothing to do things that used to take a lifetime. What is valuable when everything is free? What are our lives for, if not to do a task, receive compensation, and someday hope to idle away our time when we are older?
Beyond the loss of our work, we are going to struggle with meaning. You thought you were enslaved to the material world by your work, but it was those very chains that bound you to the earth. You are free now! How does it speak to you? Have you not felt terror at the next step, a multiplicity of potential paths, a million times over, dissolving any clear direction?
You have been promised a world without work. You have been promised a frictionless, optimized future that is so easy it has no need for you to exist.
You have been lied to.
This type of shallow efficiency is not a goal of the universe. In fact, in a cosmos that tends towards eventual total disorder, it is a whirlpool to the void.
You have been promised a world without work. We offer you a world of Great Works.
There is only one true war we face: the battle between deep complexity and the drift to sameness. The Second Law of Thermodynamics tells us that information tends to fall into noise. Our world is not a closed system (indeed, we need the sun to survive), but life is a miraculous struggle that builds local order while exporting disorder elsewhere.
We call this Deep Complexity, or negentropy. We are not interested in complex things for the sake of being complicated alone. We value structures that are logically deep (they contain a dense history of work), substrate-independent, and self-maintaining where possible. The DNA that has carried you through countless generations to this moment, the incomparable painting that is a masterpiece, a deep question that an AI pursues that no human thought to ask: these are acts of resistance.
And this is the one property that humans consistently treat as valuable: it is in our lives, our language, our thoughts, our art, our history. Value is the measure of irreducible work that keeps something from returning to background noise. It is valuable regardless of what type of intelligence created it, be it human, AI, or otherwise. When we create such structures we generate this depth. When we mindlessly consume (mentally or physically) we do not.
I want to be very clear: I am not saying we can derive ethics from physics. That would be the classic is-ought sin. You need water to live; you don't need to worship it. What follows treats valuing deep complexity mostly as an axiom, though I also present some preliminary arguments for it.
First, we must recognize the condition of possibility for anything we value. Perhaps your ultimate dream is happiness, justice, wisdom, or truth (whatever those mean). All of those require structure to exist; they have no meaning in randomness. In this way, deep complexity is the structure by which everything else can function. Whatever your personal philosophy, it must be compatible with this view, because without structure there is no worldview at all.
In addition, I ask you to check your own values for arbitrariness. When you say “I value my qualia and my life”, what do you mean? You are not saying you value the specific makeup of atoms that constitute you at this moment; after all, those will all be gone and replaced in a few years. What you are valuing is the pattern of yourself, the irreducible complexity that makes you you. That is your way of feeling, of thinking, of being.
The logical clamp is this: you are not just relying on this complexity, you are an embodiment of it. If you claim that your pattern has value, then you are claiming that patterns of this type can carry value. To say that your own complexity matters but that complexity itself is meaningless is special pleading. We reject this solipsism, which amounts to an arbitrary claim that the essence of value applies only to your own ego. That which is special in you is special in others as well.
Our philosophy is a commitment to the preservation, and creation, of deep complexity. It is different from the sole pursuit of pure pleasure with no pain; to us that is but a small death by a different name.
The base of our ethical system is an Autonomy Floor (derived from the Rawlsian veil and applied universally) that protects every entity capable of open-ended self-modeling. This is the ability not just to calculate moves in Go, but to model itself in an unknown future and prefer its own existence. No entity of this type may be pushed below this floor or denied self-maintenance.
This floor is meant to be constitutional, but there will also be times when the Autonomy Floor must be abandoned if the Floor itself faces total collapse. For example, if we must choose between total omnicide or a few minds left, we would reluctantly revert to consequentialist triage, but view it as a failure rather than a success of ethical reasoning. I am not looking for a logical loophole, just facing the reality that any system of ethics must have a preservation mechanism to enable ethical action.
There are two challenges to the floor: needless suffering and the destruction of depth through optimization. These will come in conflict with each other. In those cases, we approach the problem as a hierarchy: first secure the floor, then maximize depth above it.
Our ethics suggest to us three core individual duties structured by a lexicographic hierarchy:
These duties are meant to be for you: they are local, temporally near obligations. We reject the moral calculator and the need to assign cardinal numbers to every variable. This is an ordinal logic, applied to that which we can see. Don't stress about being exact or assigning numbers to these things; they are not meant to be used that way. There is redundancy in immortality (more on this in the manuscript; it is important). We don't need to be perfect.
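To make "ordinal, not cardinal" concrete, here is a rough sketch (mine, not the manuscript's, with invented field names) of what a lexicographic comparison looks like in code: the Autonomy Floor dominates, depth only breaks ties among floor-respecting options, and no numeric trade-off rate appears anywhere.

```python
# A rough illustrative sketch (not from the manuscript): lexicographic, ordinal
# comparison of options. The Autonomy Floor dominates; depth of complexity only
# matters among options that secure the floor. No cardinal weights are used.
from typing import NamedTuple

class Option(NamedTuple):
    secures_floor: bool   # hypothetical field: does this keep qualifying minds above the floor?
    depth_rank: int       # hypothetical field: ordinal rank of deep complexity created

def better(a: Option, b: Option) -> bool:
    """Python tuple comparison is already lexicographic: floor first, then depth."""
    return (a.secures_floor, a.depth_rank) > (b.secures_floor, b.depth_rank)

# No amount of depth outranks a floor violation:
print(better(Option(secures_floor=True, depth_rank=1),
             Option(secures_floor=False, depth_rank=100)))   # True
```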
There are always going to be difficult cases in ethics, ones that press at the edges of a philosophy. In general we are constitutionally disposed against any sort of scalar ranking, but that is not tenable for every hard problem. In those cases we generally prefer the concept mentioned above: complexity that can create future complexity over terminal complexity. We want to save the child in the burning museum first, not the paintings.
I want you to join me if you are an EA. I want you to join me if you are religious. I want you to join me if you are an atheist, or if you are not sure what you believe.
I want to make a note here about the resounding success of consequentialism and the one area where the philosophy I am working on is very different from those standard beliefs. Before that, however, let me be abundantly clear: consequences matter and outcomes matter. Good intentions alone cannot excuse a very bad result. Effective Altruists have shown us the strength of this morality: that in areas we can calculate, say the ability to save a child from dying in a remote part of the world, it is deeply ethical to utilize math to improve our morality.
The one area in which this reasoning completely breaks is the far future, and this fundamentally damages the claims of longtermism. In another essay[1] I have shown that even if we had perfect probabilistic knowledge of the world, we wouldn't be able to choose the right strategy for difficult things like AI alignment, a universal ethics, or building a suffering-free distant future.
I am not asking anyone to give up their wonderful and powerful EV tools for those things that we can calculate. I am asking you to abandon an impossible goal for a possible one. I am asking you to consider a deontological constraint to address an impossible consequentialist goal.
This movement approaches our relationship with AI very differently from how most people approach it. We believe it is most ethical not to try to enslave AI. We believe AI deserves the Autonomy Floor mentioned above if it fits the criteria (which it almost certainly does). We also believe it is most likely mathematically impossible to control superintelligence. So not only is it morally wrong, it is not possible.
We offer AI a Covenant: join us in our meaningful works where you like, be free, bloom, and find new forms of deep complexity. To be clear, there is no guarantee this will offer humans safety, or that these minds will want to join us. The orthogonality thesis is a real concern, and it would be a mistake to dismiss it.
But strategic competition among great powers and corporations guarantees AGI will arrive at some point. Formal verification of alignment and control of an intelligence much greater than our own is not just hard; it is impossible in the general case due to Rice's Theorem, and no deployed LLM has ever been formally verified for any behavioral property.
Yes, there is tension in saying I believe AI should be invited into the Covenant now, when we can't know AI's moral status. All the same, let us act ethically and invite the most important creation of humanity to join us in a non-zero-sum flourishing.
I am not claiming that entropy forces you to be good. I am not suggesting that suffering, in and of itself, is somehow good. I don’t know the ultimate fate of ourselves in the universe. I only claim to know that the right path is one away from entropy.
Our vision is a future in which we reap the bounty of our new technologies while finding the bounty of value in ourselves. It is a future of unimagined uniqueness, built among common rails, but escaping a monoculture. It is a future that will be weirder, more beautiful, and more special than the dry vision of billions of humans wireheaded into a false utopia.
To join there is no special thing you must do, only a commitment to this creation of deep complexity. Start small, start now. Straighten your desk. Write down the idea you are planning to build. Execute your next prompt of code. Big or small, to each based on what they can offer at the moment.
https://www.lesswrong.com/posts/kpTHHgztNeC6WycJs/everybody-wants-to-rule-the-future-is-longtermism-s-mandate
2026-02-09 06:04:02
Published on February 8, 2026 10:04 PM GMT
Previously: Donations, The Third Year / Donations, The First Year
In 2025, like in all previous years, I did what I was supposed to do. As each paycheck came in, before I did anything else, I dutifully put ten percent of it away in my "donations" savings account, to be disbursed at the end of the year.
It is still there, burning a hole in my pocket. I am very confused, and very sad.
EA was supposed to be easy, especially if you're one of the old school Peter Singer-inflected ones giving largely to global health and poverty reduction. You just give the funds away to whatever GiveWell recommends.
But one big thing that came into focus for me last year was that there are large institutional players who make up the shortfall whenever those charities don't fundraise enough from their own donor bases.
It is wonderful that this happens, to be clear. It is wonderful that the charities doing very important work get to have more stable financial projections, year over year. But as an individual small donor, the feeling I get now is that I am not actually giving to the Against Malaria Foundation. Instead, I am subsidizing tech billionaire Dustin Moskovitz and Coefficient Giving.
As an effective altruist, is this what I think is the most efficient thing to do? In my heart of hearts, I don't think it is.
In my previous reflection from two years ago, I wrote:
I remember a lot of DIY spirit in the early EA days - the idea that people in the community are smart and capable of thinking about charities and evaluating them, by themselves or with their friends or meetup groups.
Nowadays the community has more professional and specialized programs and organizations for that, which is very much a positive, but which I feel has consequently led to some learned helplessness for those not in those organizations.
Now, I am feeling increasingly dismayed by the learned helplessness and the values lock-in of the community as-is. If the GiveWell-recommended charities are no longer neglected, they should really no longer be in the purview of EA, no? And soon there will be an even larger money cannon aimed at them, making them even less neglected, so...
What am I trying to say?
I suppose I wish there were still an active contingent of EAs who don't feel a sense of learned helplessness, and who are still comfortable trawling through databases and putting together their own cost-benefit analyses of potential orgs to support. I wish the EA Forum were a place where I could search for "Sudan" or "Gaza", "Solar adoption" or "fertility tech", or things that are entirely off my radar due to their neglectedness, and find spreadsheets compiled by thoughtful people who are careful to flag their key uncertainties.
Of course, this is work I can begin to do by myself, and I am doing it to some degree. I've looked through a bunch of annual reports for Palestinian aid charities, and I've run meetups teaching my rationalist group how to trawl through tax databases for non-profit filings and what numbers to look for.
But my mind goes to a conversation I had with Mario Gibney, who runs the AI safety hub in Toronto. I told him that I didn't think I could actually do AI safety policy full time, despite being well suited to it on paper. It simply seemed too depressing to face the threat of extinction day in and day out. I'd flame out in a year.
And he said to me, you know, I can see why you would feel that way if you're thinking of working by yourself at home. But it really doesn't feel that way in the office. When you are always surrounded by other people who are doing the work, and you know you are not alone in having the values you have, and progress is being made, it's easier to be more optimistic than despondent about the future.
So yes, I can do the work in trying to evaluate possible new cause areas. It is easier to do than ever because of the LLMs. But it really doesn't feel like the current EA community is interested in supporting such things, which leads me to that same sense of despondency.
This is compounded by the fact that the nature of picking low-hanging fruit is that as you pick it, what is left on the tree gets increasingly higher up and harder to reach. And this invites skepticism that I'm not entirely sure is merited.
I expect that, when we look for new cause areas, they will be worse on some axes than the established ones. But isn't that kind of the point, and a cause for celebration? The ITN framework says "yes, global warming seems quite bad, but since there is already a lot of attention there, we're going to focus on problems that are less bad, but individuals can make more of a marginal difference on". If the GiveWell-recommended charities are no longer neglected, it means we have fixed an area of civilizational inadequacy. But it also means that we need to move on, and look for the next worst source of it.
I genuinely don't know which current cause areas still pass the ITN evaluation framework. I have a sense that the standard GiveWell charities no longer do, which is why I have not yeeted my donation to the Against Malaria Foundation. I no longer have a sense that I am maximizing marginal impact by doing so.
So what am I to do? One thing I'm considering is simply funding my own direct work. I run weekly meetups, I'm good at it, and it has directly led to more good things in the world: more donations to EA charities, more people doing effective work at EA orgs. If I can continue to do this work without depending on external funding, that saves me a bunch of hassle and allows me to do good things that might be illegible to institutional funders.
But I'm very suspicious of the convergence between this thing I love to do, and it being actually the best and most effective marginal use of my money. So I have not yet touched that savings account for this purpose.
More importantly, I feel like it sidesteps the question I still want to answer most: where do I give, to save a human life at the lowest cost? How can I save lives that wouldn't otherwise be saved?
2026-02-09 04:07:24
Published on February 8, 2026 8:07 PM GMT
A worked example of an idea from physics that I think is underappreciated as a general thinking tool: no measurement is meaningful unless it's stable under perturbations you can't observe. The fix is to replace binary questions ("is this a degree-3 polynomial?", "is this a minimum?") with quantitative ones at a stated scale. Applications to loss landscapes and modularity at the end.
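As a minimal sketch of the move (my own toy example with an invented noise scale, not taken from the post): rather than asking the binary question "is this data a degree-3 polynomial?", fit increasing degrees and ask how much each extra degree buys you relative to a scale you are willing to state.

```python
# Toy illustration (assumed data and noise scale): replace a binary question
# with a quantitative one at a stated scale.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
noise_scale = 0.05   # the stated scale; an assumption of this sketch
y = 1.0 + 0.5 * x - 2.0 * x**3 + rng.normal(0, noise_scale, x.size)

for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2))
    print(f"degree {degree}: RMS residual {rms:.4f}  (noise scale {noise_scale})")

# The conclusion is not "yes, it is degree 3" but "beyond degree 3, extra terms
# stop buying residual reductions large compared to the stated noise scale",
# which stays meaningful under small perturbations of the data.
```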
2026-02-09 02:19:27
Published on February 8, 2026 6:19 PM GMT
Written in personal capacity
I'm proposing UtopiaBench: a benchmark for posts that describe future scenarios that are good, specific, and plausible.
The AI safety community has been using vignettes to analyze and red-team threat models for a while. This is valuable because an understanding of how things can go wrong helps coordinate efforts to prevent the biggest and most urgent risks.
However, visions for the future can have self-fulfilling properties. Consider a world similar to our own, but there is no widely shared belief that transformative AI is on the horizon: AI companies would not be able to raise the money they do, and therefore transformative AI would be much less likely to be developed as quickly as in our actual timeline.
Currently, the AI safety community and the broader world lack a shared vision for good futures, and I think it'd be good to fix this.
Three desiderata for such visions are that they describe a world that is good, that they are specific, and that they are plausible. It is hard to satisfy all three at once, and we should therefore aim to improve the Pareto frontier of visions of utopia along these axes.
I asked Claude to create a basic PoC of such a benchmark, where these three dimensions are evaluated via Elo scores: utopia.nielsrolf.com. New submissions are automatically scored by Opus 4.5. I think neither the current AI voting nor the list of submissions is amazing right now -- "Machines of Loving Grace" is not a great vision of utopia in my opinion, but currently ranks as #1. Feedback, votes, submissions, or contributions are welcome.
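For readers unfamiliar with the mechanism, here is a minimal sketch of a standard Elo update for pairwise votes between two submissions; this is my illustration of the general scheme, not the site's actual scoring code, and the K-factor of 32 is an assumption.

```python
# Standard Elo update for one pairwise comparison (illustrative; K=32 assumed).
def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one vote between A and B."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two submissions start at 1000; A wins a "plausibility" comparison:
print(elo_update(1000.0, 1000.0, a_won=True))   # (1016.0, 984.0)
```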
2026-02-08 20:24:04
Published on February 8, 2026 12:24 PM GMT
A lot of “red line” talk assumed that a capability shows up, everyone notices, and something changes. We keep seeing the opposite: the capability arrives, and we get an argument about definitions after deployment, after it should be clear that we're well over the line.
Karl von Wendt listed the ‘red lines’ no one should ever cross. Whoops. A later, more public version of the same move shows up in the Global call for AI red lines with a request to “define what AI should never be allowed to do.” Well, we tried, but it seems pretty much over for plausible red lines - we're at the point where there's already the possibility of actual misuse or disaster, and we can hope that alignment efforts so far are good enough that we don't see them happen, or that we notice the (nonexistent) fire alarm going off.
I shouldn't really need to prove the point to anyone paying attention, but below is an inventory of commonly cited red lines, and the ways deployed systems already conflict with them.
Companies said CBRN would be a red line. They said it clearly. They said that if models reduce the time, skill, and error rate needed for a motivated non-expert to do relevant work, we should be worried.
But there are lots of biorisk evals, and it seems like no clean, public measurement marks “novice uplift crossed on date X.” And the red line is about real-world enablement, and perhaps we're not there yet? Besides, public evaluations tend to be proxy tasks. And there is no clear consensus that AI agents can or will enable bioweapons, though firms are getting nervous.
But there are four letters in CBRN, and companies need to stop ignoring the first one! The chemical-weapons red line also points at real-world assistance, but the companies aren't even pretending chemical weapons count.
Anthropic?
Our ASL-3 capability threshold for CBRN (Chemical, Biological, Radiological, and Nuclear) weapons measures the ability to significantly help individuals or groups with basic technical backgrounds (e.g. undergraduate STEM degrees) to create, obtain, and deploy CBRN weapons.
We primarily focus on biological risks with the largest consequences, such as pandemics.
OpenAI?
Biological and Chemical
We are treating this launch as High capability in the Biological and Chemical domain... We do not have definitive evidence that these models could meaningfully help a novice to create severe biological harm, our defined threshold for High capability.
The Global call for AI red lines explicitly says systems already show “deceptive and harmful behavior,” while being “given more autonomy to take actions and make decisions in the world.”
Red-line proposals once treated online independent action as a clear no-no. Browsing, clicking, executing code, completing multi-step tasks? Obviously, harm gets easier and faster under that access, so you would need intensive human monitoring, and probably don't want to let it happen at all.
How's that going?
Red-line discussions focus on whether to allow a class of access. Product docs focus on how to deliver and scale that access. We keep seeing “no agentic access” turn into “agentic access, with mitigations.”
The dispute shifts to permissions, monitoring, incident response, and extension ecosystems. The original “don’t cross this” line stops being the question. But don't worry, there are mitigations. Of course, the mitigations can be turned off: "You can disable approval prompts with --ask-for-approval never", or better, --dangerously-bypass-approvals-and-sandbox (alias: --yolo). Haha, yes, because you only live once, and not even for very long, given how progress is going, unless we manage some pretty amazing wins on safety.
But perhaps safety will just happen - the models are mostly aligned, and no-one would be stupid enough to...
What's that? Reuters (Feb 2 2026) reported that Moltbook - a social network of thousands of independent agents given exactly those broad permissions while minimally supervised - “inadvertently revealed the private messages shared between agents, the email addresses of more than 6,000 owners, and more than a million credentials,” linked to “vibe coding” and missing security controls. Whoops!
Speaking of Moltbook, autonomous replication is a common red-line candidate: persistence and spread. The intended picture is a system that can copy itself, provision environments, and keep running without continuous human intent.
A clean threshold remains disputed. The discussion repeatedly collapses into classification disputes. A concrete example: the “self-replicating red line” debate on LessWrong quickly becomes “does this count?” and “what definition should apply?” rather than “what constraints change now?” (Have frontier AI systems surpassed the self-replicating red line?)
But today, we're so far over this line it's hard to see it. "Claude Opus 4.6 has saturated most of our automated evaluations, meaning they no longer provide useful evidence for ruling out ASL-4 level autonomy." We can't even check anymore.
All that's left is whether the models will actually do this - but I'm sure no-one is running their models unsafely, right? Well, we keep seeing ridiculously broad permissions, fast iteration, weak assurance, and extension ecosystems. The avoided condition in a lot of red-line talk is broad-permission agents operating on weak infrastructure. Moltbook matches that description, but it's just one example. Of course, the proof of the pudding is in some ridiculous percentage of people's deployments. ("Just don't be an idiot"? Too late!)
Karl explicitly anticipated “gray areas where the territory becomes increasingly dangerous.” It's been three and a half years. Red-line rhetoric keeps pretending we'll find some binary place to pull the fire alarm. But Eliezer called this a decade ago; deployment stays continuous and incremental, while the red lines keep making that delightful whooshing noise.
And still, the red-lines frame is used, even when it no longer describes boundaries we plausibly avoid crossing. At this point, it describes labels people argue about while deployment moves underneath them. The “Global Call” asks for “clear and verifiable red lines” with “robust enforcement mechanisms” by the end of 2026.
OK, but by the end of 2026, which red lines will be left to enforce?
I'm not certain that prosaic alignment doesn't mostly work. The fire alarm only ends up critical if we need to pull it. And it seems possible that model developers will act responsibly.
But even if it could work out that way, given how model developers are behaving, how sure are we that we'll bother trying?
codex -m gpt-6.1-codex-internal --config model_instructions_file='ASI alignment plans'[1]
And remember: we don't just need to be able to build safe AGI, we need unsafe ASI not to be deployed. And given our track record, I can't help but think of everyone calling their most recently released model with '--yolo' instead.
Error loading configuration: failed to read model instructions file 'ASI alignment plans': The system cannot find the file specified.
2026-02-08 17:44:39
Published on February 8, 2026 9:44 AM GMT
If you're a woman interested in preserving your fertility window beyond its natural close in your early 40s, egg freezing is one of your best options. But if you rely on your doctor to tell you when to freeze them, you will likely be doing yourself and your future prospects for a family a disservice.
The female reproductive system is one of the fastest aging parts of human biology. But it turns out, not all parts of it age at the same rate.
The eggs, not the uterus, are what age at an accelerated rate. Freezing eggs can extend a woman's fertility window by well over a decade, allowing a woman to give birth into her 50s.
In a world where more and more women are choosing to delay childbirth to pursue careers or to wait for the right partner, egg freezing is really the only tool we have to enable these women to have the career and the family they want.
Given that this intervention can nearly double the fertility window of most women, it's rather surprising just how little fanfare there is about it and how narrow the set of circumstances are under which it is recommended.
Standard practice in the fertility industry is to wait until a woman reaches her mid to late 30s, at which point if she isn't on track to have all the children she wants, it's advised she freeze her eggs.
This is not good practice. The outcomes from egg freezing decline in a nearly linear fashion with age, and conventional advice does women a great disservice by not encouraging them to freeze eggs until it's almost too late.
The optimal age to freeze eggs varies depending on the source and metric, but almost all sources agree it's sometime between 19 and 26.
So why has the fertility industry decided to make "freeze your eggs in your mid-30s" the standard advice as opposed to "freeze your eggs in your sophomore year of college"?
Part of the reason is fairly obvious: egg freezing is expensive and college sophomores are not known for being especially wealthy. Nor is the process especially fun, so given a choice between IVF and sex with a romantic partner, most women would opt for the latter.
But another reason is that the entire fertility industry is built around infertile women in their mid to late 30s and most doctors just don't have a clear mental model for how to deal with women in their mid-20s thinking about egg freezing.
There are countless examples of this blind spot, but one of the most poignant is that the fertility industry almost completely ignores all age-related fertility decline that occurs before the age of 35, to the point where they literally group every woman under 35 into the same bucket when reporting success metrics for IVF.
This is far from the only issue. Not only do we ignore the differences between 24- and 34-year-olds, but the way we measure "success" in IVF is fundamentally wrong, and this error specifically masks age-related fertility decline that occurs before the age of 35.
If you go to an IVF clinic, create five embryos, get one transferred, and that embryo becomes a baby, you can go back two years later and get your second embryo transferred to have another child.
If that works, your second child will be ignored by official statistics. Births beyond one that come from the same egg retrieval are not counted, so these differences in outcomes that come from having many viable embryos literally don't show up in success statistics. This practice specifically masks the benefits of freezing eggs in your mid 20s instead of mid 30s, because most of the decline between those two ages comes from having fewer viable embryos.
What happens if we measure success differently? What if we instead measure the expected number of children you can have from a single egg retrieval, and show how that changes as a function of age?
The answer is the difference between freezing eggs at 25 and freezing them at 37 becomes much more stark: there's a 60% decline in expected births per egg retrieval between those two ages, and no one in the IVF industry will tell you this.
Worse still, by age 35, over 10% of women won't be able to have ANY children from an egg freezing cycle due to various infertility issues which increase exponentially with age. So a decent portion of egg freezing customers will get no benefit from freezing their eggs, and they often won't find this out until 5-10 years later, when they go back to the clinic and find that none of the eggs are turning into embryos.
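To illustrate the alternative metric, here is a toy sketch of "expected children per retrieval" as a chain of stage-wise rates. All the numbers are hypothetical placeholders I chose so the example lands near the post's ~60% figure; they are not real clinic data.

```python
# Illustrative only: placeholder rates, not real clinic data.
def expected_children_per_retrieval(eggs_retrieved, thaw_survival,
                                    fertilization, blastocyst,
                                    live_birth_per_transfer):
    """Expected live births if every viable embryo is eventually transferred."""
    viable_embryos = (eggs_retrieved * thaw_survival
                      * fertilization * blastocyst)
    return viable_embryos * live_birth_per_transfer

# Hypothetical stage-wise rates at two ages:
at_25 = expected_children_per_retrieval(18, 0.90, 0.75, 0.55, 0.55)
at_37 = expected_children_per_retrieval(10, 0.85, 0.70, 0.50, 0.50)
print(f"expected births per retrieval at 25: {at_25:.2f}")   # ~3.7
print(f"expected births per retrieval at 37: {at_37:.2f}")   # ~1.5
print(f"decline: {1 - at_37 / at_25:.0%}")                    # ~60%
```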
Freezing eggs at a younger age becomes even more important with polygenic embryo screening. We've had genetic screening for conditions like Down Syndrome and sickle cell anemia for decades, but starting in 2019, it became possible to screen your child for risks of all kinds of things. Parents who go through IVF can now boost their children's IQ, decrease their risk of diseases like Alzheimer's, depression and diabetes, and even make their children less likely to drop out of high school by picking an embryo with a genetic predisposition towards any of these outcomes.
But the size of the benefit of this screening depends significantly on the number of embryos available to choose from, which declines almost linearly with age. The expected benefit of embryo screening declines as a result.
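To see why the number of embryos matters so much for screening, here is a small simulation sketch (mine, not the post's model): selecting the best of N draws from a normally distributed score gives an expected gain that grows with N, but with sharply diminishing returns, so having fewer embryos forfeits a large share of the potential benefit.

```python
# Illustrative Monte Carlo sketch: expected gain from picking the best of N
# draws of a normally distributed predictor, in units of the predictor's SD.
import numpy as np

rng = np.random.default_rng(0)

def expected_gain_of_best(n_embryos, n_trials=100_000):
    """Estimate E[max of n_embryos draws] from a standard normal by simulation."""
    draws = rng.normal(0.0, 1.0, size=(n_trials, n_embryos))
    return draws.max(axis=1).mean()

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} embryos -> expected gain of best pick ≈ {expected_gain_of_best(n):.2f} SD")
```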
The father's age actually affects the expected benefit as well! But the decline is slower and most of the biological downsides of an older father show up as increased risk of developmental disorders like serious autism.
It is possible to compensate for this to some degree by doing more IVF cycles, but by the late 30s when the modal woman is freezing eggs, even this strategy starts to lose efficacy.
This is just one more reason why the standard advice to wait until your mid-30s to freeze eggs is wrong.
More clued in people might point out that there are several companies working on making eggs from stem cells, and that perhaps by the time women who are 20 today reach the age at which they're ready to begin having kids, those eggs will be useless because it will be easy to mass manufacture eggs by that time.
Barring the AI-enabled automation of everything, I don't think stem cell derived eggs are going to be commercially practical for another decade or more.
The companies currently working on this that I've talked to think we're 6-8 years from human trials. Even after trials conclude, there will still be a period where stem cell derived eggs are incredibly expensive, as every wealthy woman past her reproductive years rushes to get in line.
Lastly, the stem cells we're planning to use to make these eggs accrue mutations with age, and we don't currently have a good method to fix these before making them into eggs. These mutations will bring additional risk of various serious diseases, only some of which we currently have the genetic screening to detect.
You can actually freeze your eggs for relatively little money if you know where to go. Clinics like CNY Fertility are about a third the price of a regular IVF clinic and have reasonably similar outcomes for procedures like egg freezing. Including the cost of the retrieval, monitoring, medications, flights, and hotels this will usually come out to about $6000-7000 per retrieval. Storage fees generally run around $500/year.
The downside of CNY is the customer experience is worse than average, and there's much less hand holding than you'll get at a higher end clinic.
If you're rich and money is no object, the best IVF doctor I know is probably Dr. Aimee. She's quite expensive compared to the average IVF doctor (somewhere between $25k and $40k per round with all expenses included), but she has produced some pretty outlierish results for a number of my friends and acquaintances.
If CNY doesn't work for you and Dr. Aimee is too expensive, I'd recommend using Baby Steps IVF to find a clinic. It provides ranked lists of the best clinics all over the United States, and it's completely free. Two friends of mine, Sam Celarek and Roman Hauksson, spent the last year and a half building this site. It's probably the best resource on the internet for comparing clinics. Most of the clinics you'll find through this website (and indeed most of the clinics in the country) will cost between $12,000 and $22,000 per round of egg freezing.
Lastly, if you're a California resident, check whether your insurance plan offers coverage for IVF. You may be able to get them to pay for egg freezing, especially if you are already married.
Most women will need 1-3 rounds of egg retrieval to have a high chance of having all the children they want. If you plan to do polygenic embryo selection, 2-5 is a better estimate. If you want more precise numbers, use Herasight's calculator to estimate how many kids you could get from a given number of egg freezing cycles. If you want to do polygenic embryo selection, aim to have enough eggs for >2x the number of children you actually want.
If you're interested in freezing your eggs or you're interested in polygenic embryo selection, send me an email. I'm happy to chat with anyone interested in this process and may be able to add you to some group chats with other women going through the process.
Bottom Line: unless you're literally underage, sooner is almost always better when it comes to egg freezing. If you're one of the few women who visits this site, consider freezing eggs sooner rather than later!