Published on November 24, 2025 7:46 AM GMT
You know what's sweeter than free money? Free money that you take from an organization of smart people who love prediction markets and having good values, by being better at prediction markets and values than they are.
Sadly, you can no longer turn this into actual U.S. legal tender. You can, however, still turn it into mana and bragging points.
Here's the setup.
Once a year, LessWrong picks ~50 posts from two years ago to be the Best Of posts. This is called the LessWrong Annual Review. (So, at the end of 2025 we'll be picking the best posts of 2024.) Any post can be nominated, including things that weren't actually posted to LessWrong, but LessWrong posts with 100+ karma are the default for consideration.
During the year, LessWrong has a bot. That bot looks for posts that get 100 karma, and it makes a market on Manifold Markets for whether that post will make the top fifty during the LessWrong annual review. Each market is initialized to 14% chance that the post reaches that lofty goal.
Manifold Markets is a prediction market platform. The platform hosts markets about whether something will happen or not; if you bet yes, the market displays higher odds that it happens, and if whatever the market was about actually happens, you get paid based on how unlikely the outcome was when you bet. If it doesn't happen, you lose your stake.
(Note that Manifold Markets uses a fake currency called mana. For a while it was possible to get paid out in USD for predicting correctly, but that's no longer possible.)
After a little messing around with LessWrong's GraphiQL, you can see that there are 4244 total 2024 posts, of which 364 had >=100 karma. Again, about 50 posts will make the Best Of list, which is sort of like saying a 100 karma post becoming a Best Of LessWrong selection has a base rate of about 14% (50/364).
(Anyone better with GraphiQL feel free to double check me.)
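The base rate arithmetic above is simple enough to sanity check directly (a sketch using the counts quoted from GraphiQL, and assuming exactly 50 winners):

```python
# Rough base rate from the GraphiQL counts quoted above.
# Assumes exactly 50 Best Of slots; the real number is "about 50".
posts_over_100_karma = 364
best_of_slots = 50

base_rate = best_of_slots / posts_over_100_karma
print(round(base_rate, 3))  # 0.137, i.e. roughly the 14% the bot initializes to
```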
But this is a base rate, and it's pretty dumb.
Sure, some high karma posts are great. But others have much lower chance of getting immortalized.
Consider the current top crop. There's The Lightcone is Nothing Without Its People. It's a fundraiser post. People on LessWrong love it. But is it going to be reread years from now? Is anyone going to nominate it for a Best Of collection? Then there's Introducing AI Lab Watch: cool project, not a very useful post. How about Research Directions OpenPhil Wants To Fund In Technical AI Safety? I see why it got upvoted, it's useful information for some people, but who is going to look at it two years later and think "yeah, this really matters for recommending to the whole site?" Or Introducing Open Asteroid Impact, which is a great April Fools post, don't get me wrong, but likely isn't going into the Best Of collection on the strength of the gag.
(Think I'm wrong about any of those? Bet against me then. My mana is yours for the taking if you're right.)
Or go for easier fields. A constant question when trading on a market is whether you think you know something that the rest of the market doesn't. You want to know who the dumb money is. Except here LessWrong is the dumb money.
Manifold lets you see how many other people have traded on a market, so you can look for markets with zero traders. You can also skim or filter for markets from LessWrong with exactly 14% odds. Those are places where no human being has spent any individual attention, and quite possibly no other human being will.
(I'll note that Manifold also gives you a small mana bonus for the first market you trade in every day. That means you could go to one such market a day, make one bet for 20 mana, and come out ahead by 5 mana a day even if you lost every single bet you made.)
Sure, you're locking up your mana. . . except the LessWrong Annual Review is just around the corner. It's not like there are a lot of new updates or changes over time here. You can just wait until late November, make a bunch of bets on last year's posts, and confidently expect the market will resolve in a couple of months.
So the next question is whether you're actually sure about this. If you buy "No", meaning, say, "Introducing Open Asteroid Impact is not going to be in the Best of LessWrong top 50", then you are in effect paying 86 mana now in exchange for 100 mana once it resolves, if it resolves no.
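A quick sketch of that trade's expected value (the 86-for-100 figures are from the example above; the 99% confidence number is illustrative, not a claim about any particular market):

```python
# Buying "No" at a 14% market price: pay 86 mana now, receive 100 if it resolves No.
stake = 86
payout = 100
p_no = 0.99  # illustrative: your own estimate that the post does NOT make the top fifty

expected_value = p_no * payout - stake
print(expected_value)  # 13.0 mana of expected profit

# The bet is only +EV if your probability of "No" exceeds the breakeven point:
breakeven = stake / payout
print(breakeven)  # 0.86
```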
You are not 100% sure of anything. I admit my title is rounding and clickbaity.
But I will take odds of 99% on some of these. You, dear reader, may or may not have similar certainty on some markets. There's a lot of safe bets, because again, the automatic process that creates these markets is pretty dumb. What are the kinds of places where you can be sure you have an edge?
Sometimes my answer is an inscrutable intuition for the whims of the annual review process. I took a few positions like that. Mostly my answer is "because it is obvious to anyone who took five seconds to think about it, but I can tell from the fact that nobody has bid on the market that nobody did."
Take 2023 Survey Results. It's one of mine. There's a lot of work that goes into the community survey. But it's inherently a kind of navel-gazing post, not the kind that winds up in Best Of lists. Keep an eye out for overly community-focused posts.
Or consider the announcement of Daniel Kahneman's death. This is a linkpost to an obituary. In what world does someone nominate this? What would the voters be thinking when they add it to the Best Of collection? Same with Vernor Vinge. Look for anything where the upvotes that got it to 100 were condolences or agreement with a statement nobody needs to reread.
Look at OMMC Announces RIP. It's an April Fools gag. It was funny when it came out, but I didn't remember it a week later. Gag posts are fun and also correctly struggle in the Annual Review.
I think A List of 45 Mech Interp Project Ideas From Apollo's Research Interpretability Team is a good post for them to make, and a good post for the right sort of person to read. It's obviously not getting a place in the Best Of list, though if anyone went and pulled off one of the projects and made a post maybe that would get there. I have a thesis that AI posts are actually incorrectly frontpaged fairly often, and in particular posts about specific developments or advancements have a hard time making Best Of lists.
This is my same point from Are You Smarter Than A Base Rate again, basically.
Now, this general strategy of making small EV bets on 'sure things' is colloquially known as picking up pennies in front of a steam roller. I generally get about ten mana for every hundred I risk, so I need to be right at least eleven times for every time I'm wrong. That said, I am allowed to pick my size - I often bet down markets a little bit when I suspect it's wrong, and save my big bets for when I'm actually that sure.
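The "eleven times right for every time wrong" figure follows directly from the payoffs described above (a sketch using the round ten-per-hundred numbers, which are approximate):

```python
# Picking up pennies: each winning bet nets ~10 mana, each losing bet costs the ~100 risked.
win_profit = 10
loss_cost = 100

# Break even when wins * win_profit == losses * loss_cost:
breakeven_wins_per_loss = loss_cost / win_profit
print(breakeven_wins_per_loss)  # 10.0 wins just to cover one loss, so 11 to come out ahead

# Equivalently, as a required win rate:
required_win_rate = loss_cost / (loss_cost + win_profit)
print(round(required_win_rate, 3))  # 0.909
```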
Size appropriately, and don't bet mana you can't afford to lose. Me? I think this particular steamroller is slow and easy to pinch pennies from.
I'm going to be honest with you; even when you could convert silly internet points into cold hard cash, the return on investment wasn't great. Being totally correct on every post as soon as it came out was worth, like, about a dime's worth of mana unless someone specifically disagreed with you. If it takes you ten seconds to read a post title, click through to check the author, input your bet up or down, and submit your bet, then you'd be making about $36 an hour. You can go faster than that or even automate parts of it, but at some point you're the dumb automatic money that you were planning to take from LessWrong.
I did it because I had a thesis about the LessWrong readership, and this was one way to test my predictions.[1] Being in tune with the vibe of LessWrong is a useful subskill for me.[2] I did it because I wanted a cheap place to practice bet sizing and limit order usage. I did it because I like high volume chances to test my prediction and calibration skills. It's good practice. In a few cases I did it to express a statement about the world, that this kind of thing was or was not a fit for inclusion in the Best Of lists.
(I also did it on principle. I knew those markets were mispriced and it itched.)
You may or may not have the same reasons. It's not a lucrative hobby, that's for sure. And the fewer of you partake, the more easy bets there are for me. But more than LessWrong's generous donation of mana to my coffers, I want those markets to be priced correctly. It'd be nice to know now whether the posts I'm writing will be remembered then, and so if you're in the market (heh) for an extremely niche hobby, then maybe you should join me.
See you on the other side of the order book.
Concisely, my thesis is "there's too much AI content, and it does not actually fit the 'timeless' criteria for frontpage."
It'd be more useful if more ACX posts were considered, but alas.
Published on November 24, 2025 6:40 AM GMT
🥞 Apply now! (Takes < 15 min if you have a résumé ready.)
Ashgro helps AI safety projects focus on AI safety.
We offer fiscal sponsorship to AI safety projects, saving them time and allowing them to access more funding. We save them time by handling accounting, management of grants and expenses, and HR. We allow them access to more funding by housing them within a 501(c)(3) public charity (Ashgro Inc.), which can receive grants from pretty much any source.
You'll handle parts of tickets, then whole tickets, and eventually address generally described opportunities or problems. You'll start out doing very basic tasks, but we aim to move you up the ladder to more and more complicated or open-ended tasks as quickly as we (and you) can. Given that we're a small team, though, some share of very basic tasks will remain your responsibility for the foreseeable future.
Examples of very basic tasks:
Examples of well-defined tasks:
Examples of high-level work:
All of the above are past examples. Future work will be different.
You need to be able to:
Nice to have – lacking these should not (!) stop you from applying:
AI warning: We want to know what you can do, not what AI can do. So as soon as we have any suspicion that any part of your application is written by AI, we will put it on the ‘maybe’ pile. If we are reasonably sure that part of the application was written by AI, we will reject it, no matter how good it otherwise seems.
If you make it through all of this, we'll be excited to offer you a job.
Expect six to eight weeks from submitting your application to getting an offer. We aim to go faster than that, but life usually intervenes. If you let us know, we can accelerate the process for you.
🥞 Apply now! (Takes < 15 min if you have a résumé ready.)
Published on November 24, 2025 6:39 AM GMT
I am a relatively central instance of what is referred to by the noun 'rationalist', in present-day discussion in the Western world. Not the idealized rationalist who makes perfect bayesian updates on all of their information; but someone from the school of philosophy on LessWrong, who has a strong practical and theoretical interest in human rationality, who knows how to write down Bayes' theorem[1], and is part of an extended network of people for whom this is also true.
To be clear, most of my life I have not identified as this symbol. This is for a few reasons:
However in the last 6-12 months, I have felt the pressures here let up.
Regarding the first two; somehow I feel able to act and be seen as an individual even if I say that I am a rationalist. In some regards this is because I am more well-defined—many relevant parties (e.g. EAs, Progress Studies, etc) cannot view me as simply another of a tribe, but as someone that they have a trade relationship with, who they cannot say false things about as easily. Relatedly, I have more status than before, which means that my own character and role shines through more clearly.
I am also less worried about tribalism; a standard temptation is to step up and defend groups you are part of, but I repeatedly find that I care not what people say about my tribe. Just the other night I heard some people at Inkhaven say some things about rationalists that seemed wrong to me, as they were mentioning lots of different groups and criticizing them; I felt no deep impulse to step in and correct them. Nor do I when people further afield on twitter or elsewhere speak of 'rationalists'. I am not that bothered what other people think.
The third one stands as it did before, and it is unfortunate. However, I don't think it makes sense for me to reject the label on principle and sow confusion. This is because nobody chose the label. If the rationality community were a company, or a religion, or a brand owned by someone, then I could refuse to use the name in protest of their poor judgment. But this is not the case! It is a name that has grown significantly because others wanted a name for this group. Like existentialism, neoliberalism, and New Atheism, a name can arise in the culture for a group that did not choose it (or even much use it for self-description). It is nonetheless still used to refer to a recognizable group of people and a real phenomenon, and it is unhelpful to pretend that it has no referent or that you are not a part of it.
So I think it is correct for me to identify myself to others as a rationalist; it conveys a lot of relevant information, and it would be a lot of effort for me and my readers/listeners to transmit all of that information without using the term.
Postscript: There's a question of whether to stick strictly to 'aspiring rationalist', which many have attempted.
Firstly, I don't believe there's enough political will to push the change hard enough that outsiders would also adopt it (e.g. hooligans on twitter/reddit, people from afar)—this would take not merely using 'aspiring rationalist', but being very annoying about it, refusing to accept the shorter description, correcting it 100% of the time, etc.
Secondly, a child who has learned the Pythagorean theorem may call themselves a mathematician; a man who has to fight someone tomorrow may say some wisdom, like "A warrior does not second-guess himself on the night of battle". I think it makes sense for people practicing to be something to use the name of someone who has practiced it.
Here's a way that I re-derive it.
In this image, suppose each point in the square represents a state of the world. The two circles are the two hypotheses, A & B.
Notice that, if you randomly pick a point, the probability of being in the orange area can be calculated as follows: P(A ∩ B) = P(B) · P(A | B). That is, take the probability of being in B, and then multiply that by the conditional probability of being in the orange area once you know you're in B, and you'll get the probability of being in A ∩ B.
Next, notice that this is symmetrically true for A: P(A ∩ B) = P(A) · P(B | A).
Now we have two things equal to one another! Both are equal to the probability of the orange area, P(A ∩ B): P(B) · P(A | B) = P(A) · P(B | A).
And if you haven't noticed, that's one step from Bayes' Theorem! Divide both sides by P(B) (or symmetrically by P(A)) to get the standard equation for conditional probability: P(A | B) = P(A) · P(B | A) / P(B).
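The picture can also be checked numerically. Here's a minimal sketch that models the square as a small grid of equally likely world states; the particular regions chosen for A and B are arbitrary, not taken from the image:

```python
from fractions import Fraction

# A toy stand-in for the diagram: 100 equally likely "world states" in a square.
points = [(x, y) for x in range(10) for y in range(10)]
A = {p for p in points if p[0] < 6}        # hypothesis A: a left-hand region
B = {p for p in points if 3 <= p[0] < 8}   # hypothesis B: an overlapping region

def P(s):
    return Fraction(len(s), len(points))

def P_given(s, given):
    return Fraction(len(s & given), len(given))

# Both routes to the orange area P(A ∩ B) agree:
assert P(A & B) == P(B) * P_given(A, B) == P(A) * P_given(B, A)

# Dividing through by P(B) gives Bayes' theorem:
assert P_given(A, B) == P(A) * P_given(B, A) / P(B)
```

Using exact `Fraction` arithmetic means the two sides match identically rather than up to floating-point error.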
Published on November 24, 2025 5:46 AM GMT
I left China at the age of four. My memories of those first four years are scattered impressions: a three-leaf clover I chowed down on while Mom’s back was turned, the smell of a revolting herbal remedy, the time an older girl scratched me on the cheek in daycare.
We went back to visit every several years. The summer before college, I visited my birthplace, Chengdu, to see my grandparents. At some point, we had dinner with a larger group of family friends. Two of the children at that party, as it happened, had been my best friends in day care.
They remembered me. They remembered the daycare we went to, and the street it was on, and the ways our parents were connected. They told me about the group of boys who’d roughly all grown up together: one who excelled in school and was going to Peking University, another who went too deep into League of Legends, another who was currently obsessed with The Three Body Problem. They were so warm, inviting me back home like an old friend who’d always belonged. I was immediately one of the boys - they asked me to translate words they’d heard in American movies, and snickered at the definition of “asshole.”
To them, I was a thread that had flown off the tapestry of their lives. They picked me right up, dusted me off, and sewed me back in.
I didn’t remember a damned thing about them – I didn’t even know I had friends in daycare.
—
After arriving in the US, my family toured the M-states: I moved from Missouri to Maryland at six, and then to Massachusetts at ten or eleven.
Missouri is also barely a splash of impressions: climbing chestnut trees, encountering a proto-psychopath on the schoolbus, sitting for hours “helping” my father fish the Ozarks.
In Maryland I have more substantial memories: holding hands with my best friend William before we learned it was not cool, and then finding out it was not cool. I remember riding my first bike and then having it stolen by the older kids upstairs, and watching my mom sneak out at night to steal it back. I remember the face of the nasty teacher who gave me a C just to put me in my place.
My memories come alive right around the time William introduced me to Diablo II. Although neither of our parents allowed us to play more than a couple hours a week, we spent many hours theorycrafting and poring over the official strategy guides. For many years after, I’d boot up Diablo just to recapture that time.
—
With the benefit of hindsight and the theory of spaced repetition, I understand now why I remember so little of those early years, and why only Diablo remains as fresh as yesterday. After I left China, my daycare chums in Chengdu passed the same streets, met the same elders, played with the same classmates month after month, year after year. Their memories of early childhood were reinforced again and again. They could easily triangulate even my minor, brief role in this world. The brain remembers those patterns that are repeated across time.
I had no such luck. Every few years, the world was switched out by an entirely new stage, with an entirely new cast. There was a surjective function from friends I held dear to days for saying goodbye. For others, life was a single, cohesive drama; for me, it was a series of improv scenes. It is no wonder that my memories are so scattered.
—
Math majors and PhDs often ask me how to decide between academic and industry jobs. Broadly speaking, these conversations have a common dramatic structure: the student lobs a bomb at me in the form of a mad lib:
Compared to academic jobs, industry jobs are 10x easier to find, pay 10x better, demand half the workload and half the red tape, BUT __.
My job in this drama is to defuse the bomb by filling in the blank with a single intangible value or principle - academic freedom, say - so beautiful that it overwhelms all practical considerations and justifies all the tragedy of academic existence. Some students hurl the bomb at me aggressively - in their heart of hearts they are already checked out of the academy and are looking to verify that the ivory tower is full of shit. Others hand me the bomb timidly, because they are romantics and martyrs at heart - with their eyes, they plead with me to half-ass the answer with anything remotely persuasive. They need something sacred to whisper on their lips as they throw themselves onto the cross of the academic job market.
I’ve always disliked this conversation, until now. I finally know how to fill in the blank, at least in a way that would have persuaded my past self:
Compared to academic jobs, industry jobs are 10x easier to find, pay 10x better, demand half the workload and half the red tape, but continuity.
—
I’ve been starved for continuity most of my life. My family moved when I was four, and then six, and then ten. Then, I went to college, did a PhD, did a postdoc, and finally landed a tenure-track professorship, moving seven times in 31 years. Seven times the stage was reset and the cast replaced.
How many more would it be if I go to industry? Everyone is moving, all the time. Startups collapse, or are acquired. Entire organizations are shuffled and reshuffled when new directives are delivered from on high. In many places, the best way to get promoted is to jump ship and be hired at a new level. One day, you're shooting the breeze with the coworker at the next desk over. The next day, the desk is empty.
—
What do I mean by continuity? The great cathedral of Notre Dame began construction in 1163 and was completed in 1345. Continuity is what I imagine being involved in that project was like: your father, and his father, and so on four generations back, all toiling towards a common cause, a single continuous sacred labor, that ties together every aspect of your life.
I completed my PhD in 2021.
My PhD advisor, Jacob Fox, completed his PhD in 2010. Around half of my research projects come from problems Jacob started thinking about more than a decade ago. I see him practically every year at conferences, workshops, or research visits. He is someone I can trust for advice about anything from career development, to research taste, to advising students.
Jacob’s PhD advisor, Benny Sudakov, completed his PhD in 1999. Benny is a legendary PhD advisor who has trained and continues to train many outstanding mathematicians. This past summer, I raced Benny in the Random Run, a long-standing tradition of the biennial Random Structures and Algorithms conference. On a standard track, the number of laps in the run is determined by the roll of two dice; the second die is only rolled when the front-runner finishes the first set of laps. In the advisor-student pair category, Benny and his student Aleksa edged out my student Ruben and me for the win. In two years, I hope to be in better shape.
Benny’s PhD advisor, Noga Alon, completed his PhD in 1983. I received Erdős number 2 by spending 2021-2024 as a postdoc working with Noga, who is still sharper than any of us. Together with Joel Spencer, Noga wrote the textbook The Probabilistic Method which I and many others use to train PhD students. Joel has a fun tradition of publishing photos of young children reading The Probabilistic Method on his website. There is a picture of Jacob’s daughter there, as well as one of Noga reading the book to my six-month-old.
This is just one thread of a densely woven tapestry, a community of combinatorialists that traces itself back continuously to the problem-solving circles of Paul Erdős and his university buddies in Budapest. Our story is, I think, not dissimilar to that of the builders of Notre Dame.
Erdős rolled the dice for the first Random Run in 1983. I pray the dice continue to roll for many years hence.
Published on November 24, 2025 5:19 AM GMT
Here I am on the plane on the way home from Inkhaven. Huge thanks to Ben Pace and the other organizers for inviting me. Lighthaven is a delightful venue and there sure are some brilliant writers taking part in this — both contributing writers and participants. With 40 posts published per day by participants (not counting those by organizers and contributing writers) it feels impossible to highlight samples of them. Fortunately that's been done for me: the Inkhaven spotlight. And just to semi-randomly pick a single post to highlight, for making me laugh out loud, I'll link to one by Rob Miles. There are also myriad poignant, insightful, informative, and otherwise delightful posts. And, sure, plenty that don't quite work yet. But that's the point: force yourself to keep publishing, ready or not, and trust that quality will follow.
Confession: I'm not at all happy with this post. Opening with "here I am on the plane"? Painful. I would've fixed that in an editing pass, but I'm going to leave it because it illustrates two Inkhaven lessons for me. First, that sometimes it's ok to hit publish before something is perfect. And second, that I in particular need to follow the advice (#5 in my collection of Inkhaven tips) to dedicate blocks of time to dumping words onto the page. Editing is separate. If I hadn't started typing "here I am on the plane" then I would've sat there agonizing about a good opener, gotten distracted, and had nothing.
Write, publish, repeat.
I'm still agonizing about whether to commit to continuing to churn out a post every day for the rest of the month now that I've left. I do have a pretty much unlimited number of ideas to write about, even sticking to the theme of writing about writing. Here are half a dozen of them:
The problem is how daunting it feels to do justice to some of those. But that's where the writing tip to First Just Write comes in. It's been half an hour now of writing this thing and I'm starting to think I could bear to hit publish. If I do, I expect it will be the worst post I've published while at Inkhaven. But something has to be my worst post.
Judge for yourself. Here's a recap of everything else I published leading up to and during my Inkhaven stay:
Published on November 24, 2025 3:10 AM GMT
Eggs are expensive, sperm are cheap. It’s a fundamental fact of biology . . . for now.
Currently, embryo selection can improve any heritable trait, but the degree of improvement is limited by the number of embryos from which to select. This, in turn, is because eggs are rare.
But what if we could select on sperm instead? We could choose the best sperm from tens or even hundreds of millions, and use that to make an embryo. However, any method that relies on DNA sequencing must destroy the sperm. Sure, you can identify the best one, but that’s of limited value if you can’t use it for fertilizing an egg.
There have been a few ways proposed to get around this:
Here, I propose a different approach, which I call androgenetic haploid selection.
This would have to be done at the spermatid stage, before the sperm swim away.