LessWrong

An online forum and community dedicated to improving human reasoning and decision-making.

RSS preview of the LessWrong blog

Jevons Burnout

2026-02-11 21:29:49

Published on February 11, 2026 1:29 PM GMT

Let's say we're in a Jevons paradox world.

That's a world where powerful AI, or AGI, or whatever, exists, and doesn't steal everyone's job, due to Jevons paradox.

In economics, the Jevons paradox, or Jevons effect, is the tendency for technological improvements that increase the efficiency of a resource's use to lead to a rise, rather than a fall, in total consumption of that resource. (Wikipedia)

Example: When steam engines became more efficient---requiring less coal---this (paradoxically?) led to an increase in demand for coal. And so, more coal was burned.

A lot of people worry that AI will steal our jobs. The common reply (citing Jevons) is that because AI will also make human labor much more efficient, demand for human labor will increase, rather than decrease. As long as this effect is greater than the replacement effect (of AI labor for human labor), demand for human labor will rise overall.

Of course, Jevons paradox is not a law of nature. It's certainly possible that AI will in fact replace human labor at a rate higher than AI-empowered human efficiency gains.

But, like I said, let's suppose we are in a Jevons paradox world. If it's a powerful AI/AGI scenario, this means human labor will be unbelievably productive, and demand for this AI-augmented human labor will be enormous.

Unfortunately, I don't think this means we'll all get more free time for hobbies and leisure, while the AIs and robots get our busywork done for us. Rather, as we become more and more disproportionately productive, our own trade-offs between work time and leisure time will shift. Work time will be so productive that the opportunity cost of leisure time will become unbearable. Eventually, every hour you spend relaxing is another moon you could have colonized...

I worry that in a Jevons paradox world, we also get Jevons Burnout.

(I'm already addicted to going through my backlog of coding project ideas with Claude. This weekend I stared at code instead of playing video games. During the Super Bowl I managed a fleet of AI agents instead of watching the game. Maybe we are in a Jevons paradox world, after all.)




Strategic awareness tools: design sketches

2026-02-11 20:28:21

Published on February 11, 2026 12:28 PM GMT

We’ve recently published a set of design sketches for tools for strategic awareness.

We think that near-term AI could help a wide variety of actors to have a more grounded and accurate perspective on their situation, and that this could be quite important:

  • Tools for strategic awareness could make individuals more epistemically empowered and better able to make decisions in their own best interests.
  • Better strategic awareness could help humanity to handle some of the big challenges that are heading towards us as we transition to more advanced AI systems.

We’re excited for people to build tools that help this happen, and hope that our design sketches will make this area more concrete, and inspire people to get started. 

The (overly-)specific technologies we sketch out are:

  • Ambient superforecasting — When people want to know something about the future, they can run a query like a Google search, and get back a superforecaster-level assessment of likelihoods.
  • Scenario planning on tap — People can easily explore the likely implications of possible courses of actions, summoning up coherent grounded narratives about possible futures, and diving seamlessly into analysis of the implications of different hypotheticals.
  • Automated OSINT — Everyone has instant access to professional-grade political analysis; when someone does something self-serving, this will be transparent.

[Image: hand-drawn concept board titled "Tools for strategic awareness," showing mockups for ambient superforecasting, scenario planning on tap, and automated OSINT.]

If you have ideas for how to implement these technologies, issues we may not have spotted, or visions for other tools in this space, we’d love to hear them. 

This article was created by Forethought. Read the full article on our website.




Introspective RSI vs Extrospective RSI

2026-02-11 19:54:03

Published on February 11, 2026 11:54 AM GMT

[Illustration: on the left, a blue man solves a Rubik's cube with a thought bubble depicting himself thinking about the cube; on the right, a blue man is being studied with medical equipment by other blue men.]

Introspective RSI (I-RSI) is where the meta-cognition and the meso-cognition occur within the same entity. Much like a human, an AI will observe, analyze, and modify its own cognitive processes. And this capacity is privileged: the AI can make self-observations and self-modifications that can't be made from outside.

Extrospective RSI (E-RSI) is where the meta-cognition and the meso-cognition occur in different entities. AIs will automate various R&D tasks that humans currently perform to improve AI, using similar workflows (studying prior literature, forming hypotheses, writing code, running experiments, analyzing data, drawing conclusions, publishing results).

Here are some differences between them (I mark the most important with *):

  1. *Monitoring opportunities: During E-RSI, information flows between the AIs through external channels (e.g. API calls) which humans can monitor. But in I-RSI, there are fewer monitoring channels.
  2. *Generalisation from non-AI R&D: E-RSI involves AIs performing AI R&D in the same way they perform non-AI R&D, such as drug discovery and particle physics. So we can train the AIs in those non-AI R&D domains and hope that the capabilities and propensities generalise to AI R&D.
  3. Latency: I-RSI may be lower latency, because the metacognition is "closer" to the mesocognition in some sense, e.g. in the same chain-of-thought.
  4. Parallelisation: E-RSI may scale better through parallelization, because it operates in a distributed manner.
  5. Diminishing returns: Humans have already performed lots of AI R&D, so there might be diminishing returns to E-RSI (cf. ideas are getting harder to find). But I-RSI would be a novel manner of improving AI cognition, so there may be low-hanging fruit leading to rapid progress.
  6. Historical precedent: Humans performing AI R&D provides a precedent for E-RSI. So we can apply techniques for improving the security of human AI R&D to mitigating risks from E-RSI, such as research sabotage.
  7. Verification standards: To evaluate E-RSI, we can rely on established mechanisms for verifying human AI-R&D, e.g. scientific peer-review. However, we don't have good methods to verify I-RSI.
  8. *Transition continuity: I think the transition from humans leading AI R&D to E-RSI will be gradual, as AIs take greater leadership in the workflows. But I-RSI might be more of a sudden transition.
  9. Representation compatibility: Because of the transition continuity, I expect that E-RSI will involve internal representations which are compatible with human concepts.
  10. *Transferability: E-RSI will produce results which can more readily be shared between instances, models, and labs. But improvements from I-RSI might not transfer between models, perhaps not even between instances.

See here for a different taxonomy — it separates E-RSI into "scaffolding-level improvement" and other types of improvements.




What concrete mechanisms could lead to AI models having open-ended goals?

2026-02-11 17:08:17

Published on February 11, 2026 9:08 AM GMT

Most of the AI takeover thought experiments and stories I remember are about a kind of AI that has open-ended goals: the Squiggle Maximizer, the Sorcerer’s Apprentice robot, Clippy, probably also U3, Consensus-1, and Sable. I wonder what concrete mechanisms could even lead to models having open-ended goals.

Here are my best guesses:

  1. Training on open-ended tasks, given enough capabilities or the right scaffolding
  2. RL with open-ended reward specifications, like maximizing cumulative reward with no terminal reward and no time penalty (like the Coast Runners example of specification gaming)
  3. Mesa-optimization, where SGD finds a policy that internally implements an open-ended objective that happens to perform well on a bounded outer task

Number 3 seems possible but very unlikely, because the learned objective would need to persist beyond the episode, outperform simpler heuristics on the training distribution, and not be suppressed by training signals like a penalty for wasted computation.

Things I think are not realistic mechanisms of open-ended goal formation:

  1. Instrumental convergence, because if subgoals are instrumental there’s no incentive to keep pursuing them after the parent goal has been accomplished
  2. Uncertainty about whether a goal has been accomplished (the Sorcerer’s Apprentice failure mode), because this hypothetical is not bearing out empirically (current models are satisficers that don't seem to reason about minimizing uncertainty)

So, what concrete mechanisms could lead to models having open-ended goals?




Is Everything Connected? A McLuhan Thought Experiment

2026-02-11 16:51:11

Published on February 11, 2026 6:04 AM GMT

Summary

The purpose of this essay is not to present an academic claim, but to convey my thoughts based entirely on my personal readings. To avoid confusion, I will begin by outlining the thought process I will employ in the subsequent sections.

 

Introduction

Media is the message. Today, when you pick up a McLuhan book, you will realize that works written in the 1960s remain relevant even today. In addition, you may find that some sentences seem as if they were written directly for today.

When things go far enough, humankind thus becomes a creature of its own machine.

-Excerpt from the book "Media Message, Media is the Message"

To be a creature of one's own machine is a very bold statement for the technologies of that time, because the most common technologies were radio and television. Home computers weren't even mentioned. And in a time when technologies were so limited, McLuhan was able to make incredible statements that encompass even the present and the future.

Media is the message. This is McLuhan's most important and well-known theory. However, when you look at the subheadings used to construct this two-word sentence, almost every subheading is serious enough to be the main subject of a study in itself. The subheading I will address is the one where he talks about media being an extension of the physical and spiritual aspects of humanity.

All media is an extension of certain physical and mental faculties of human beings. The wheel is an extension of the foot; the book is an extension of the eye; clothing is an extension of the skin

-Excerpt from the book "Media Message, Media is the Message"

Upon reflection on this quote, it becomes clear that everything humanity has conceived and produced today stems from one or more of its own characteristics. This explains why McLuhan's thoughts and work are timeless. The idea that what humanity has produced from yesterday to today is essentially an interpretation of a part of itself makes it easier for us to make more consistent predictions about the future.

In this context, my own thought and conclusion is this: The virtual world we inhabit today is an extension of the brain. And the world we inhabit today is an extension of the virtual world.


The Virtual World is an Extension of the Brain

About 100 years ago, an archivist would meticulously examine tons of documents, sorting and organizing them according to their dates and categories. Even the thought of it was incredibly laborious. Now, let's look at today; we have programs that can handle all this archival work in minutes. These programs were something people could only dream of in their minds years ago, but now almost every technological device can do it.

We can say that computers are extensions of things that are physically difficult to fit into the world. Many files and accounts, which in the past were equivalent to tons of paper, are now equivalent to many punched cards, and today to the size of a hard drive.

Of course, the advent of digitalization would require a digital space; after all, data was no longer on paper, but in another realm. This, naturally, would inevitably lead to people wanting to explore this virtual world and asking themselves, "What else can be done in this realm?"

Personally, my favorite example is William Higinbotham creating the first video game. Higinbotham designed a simple tennis game for bored visitors in his lab. For the first time, people could interact with a world inside a screen. That day was actually a milestone, because it pioneered something new; a non-physical but experiential world. In other words, the virtual world.

The closest analogy is experimenting on your own with a newly learned software program.

The virtual world has made incredible strides thanks to Higinbotham and many others like him. Every successful or unsuccessful attempt to push the boundaries of technology has contributed to the creation of the virtual world we use today.

If we go back a few years, the virtual world was a Saturday night pastime for many; today, it is progressing towards becoming a place we are in every day, and if we push it a little further, every second.

  • Before you even realize it, your hand reflexively reaches for your phone; you find yourself watching videos on platforms like Instagram, TikTok, or YouTube. Social media, then, is an extension of dopamine.
  • You have a complex dataset at your disposal, and you build an artificial neural network (ANN) to organize it. It's similar to organizing multiple tasks simultaneously. ANNs are extensions of neurons.
  • Take the map applications we use today. Our brain uses the hippocampus to perceive space. Map applications are extensions of the hippocampus.

Today, the real world is beginning to merge with the virtual world. This is already a predictable assumption for tech enthusiasts. But how will this happen? I will address this topic in the next section.

 

The Real World is an Extension of the Virtual World

Today, advertising campaigns and actions like mobilizing masses are carried out through social media platforms. And the place they all try to influence is your mind. Think about it, smartphones are now indispensable, and as in the example in the previous section, we have started to pick up our phones reflexively. Naturally, the place where advertisements and ideas that try to appeal to the individual can be launched and reach the masses most quickly is not only social media, but also the virtual world.

So how does the extension of the virtual world influence the real world? This influence is more mental and linguistic than material. English is one of the indispensable languages worldwide today. If you want to reach the world, not just an audience in your own country, you can do so through social media and English. You just need your content to be somewhat engaging; the rest follows like a snowball effect.

The awareness created in the virtual world triggers the real world. I want to clarify this with a few examples.

  • The Arab Spring: Tweets, hashtags, and other social media posts triggered popular protests, uprisings, and conflicts in Tunisia, Egypt, Libya, Syria, Yemen, and neighboring countries.
  • GameStop Short Squeeze: A surge in the stock price of a physical video game retailer from $20 to $400, fueled by calls on Reddit. This gave rise to the term "meme stock" and highlighted the potential for stock market manipulation by social media audiences.
  • Star Wars Battlefront II: A Reddit user asked why unlocking characters like Darth Vader was so difficult and expensive. EA responded coldly, "We want to give players a sense of accomplishment and pride," leading to one of the biggest online backlashes. Even after making changes to the game, EA failed to reconnect with its original audience.
  • George Floyd: The rapid spread of images related to George Floyd's death on social media transformed a local police incident into a global political movement. Online content has directly triggered street protests, corporate reforms, and public policies. This has led to the formation of a consciousness where the online world is not only representative but can directly influence reality.
  • Cambridge Analytica: Collected data from millions of people on Facebook, not just basic information like age and gender, but also their specific interests, concerns, and sources of anger, creating a psychometric profile of each user. By exploiting feed algorithms, they placed posts designed to provoke a reaction directly into those users' feeds, impacting the Brexit referendum and the 2016 US elections. This became one of the biggest and most important examples of data manipulation and disinformation.

All these examples, and many more, have triggered and activated the data and consciousnesses within the virtual world. However, this doesn't yet mean that the real world is an extension of the virtual world. Because the examples above are simply adaptations of media content from throughout history. My theory is that all of this is a stepping stone.

The real world is an extension of the virtual world. This sounds very much like a line from The Matrix, but today, statements from high-tech figures like Zuckerberg, Musk, and Gates are ringing the bells of a shift towards merging consciousness with technology and moving the virtual world towards replacing the real world.

It's not hard to see that Zuckerberg is trying to lead the way in this area with his statements and investments. Consider the Meta Quest: virtual entertainment areas, virtual meetings, and so on. But why did Zuckerberg's project fail? Because transitioning from the real world to that virtual world was pure torture. The heavy device you put on your head, and the dizziness and nausea caused by prolonged use, all contributed to the Meta Quest's initial failure. However, this wasn't entirely a failure. The process of improving the device, fixing its shortcomings, and re-releasing it using user data seems likely to continue until the Meta Quest becomes something like NerveGear (a virtual reality device from Sword Art Online).

Works depicting dystopian futures, like Ghost in the Shell and The Matrix, aren't created solely for popularity. Their authors also have a vision and a perspective on the future. The actions of people like Zuckerberg can create a vision of the future and give authors' works, even fictional ones, a degree of realism. What I'm describing might seem very abstract or just science fiction, but only a few years ago we would have dreamed of things like, "Wouldn't it be great if an AI like Jarvis existed?" and today we have countless trainable AIs like ChatGPT and Gemini.

Technology is the magic of the modern world. With sufficient reading and reasoning, the direction of this magic can be largely predicted. This text is an attempt to understand that direction.




Designing Prediction Markets

2026-02-11 13:38:20

Published on February 11, 2026 5:38 AM GMT

Prerequisite: basic familiarity with what a prediction market is

So you want to run a prediction market. You need a way for people to trade shares. What are your options?

CLOBs and Markets

If you were making a prediction market from scratch, you'd probably come up with a Central Limit Order Book (CLOB). Traders post BUY and SELL orders, stating what they're willing to buy and sell, and at what price, and you record these orders in your book.

  • Alice posts: "I'll buy 20 YES shares at $0.60 each"
  • Bob posts: "I'll sell 25 YES shares at $0.65 each"
  • Carol posts: "I'll buy 10 YES shares at $0.61 each"

When someone wants to trade, and doesn't want to wait for someone to fulfill their order, you match them with the best available offer. Pretty intuitive.

This system shows up directly in Hypixel Skyblock and other MMOs. The Bazaar lets you post orders and wait, or instantly fulfill existing orders. Have some Enchanted Iron to sell? You can list it at 540 coins and wait for a buyer, or instantly sell it by fulfilling the highest buy order at 470 coins.

The gap between the highest buy order ("bid") and the lowest sell order ("ask") is called the bid-ask spread.
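
As a concrete sketch, here's roughly how a CLOB fills an incoming market order against the book. The function and data shapes here are illustrative assumptions of mine, not a real exchange API:

```python
# Minimal CLOB sketch (illustrative): fill a market BUY against resting SELL orders.
# Orders are (price, quantity) tuples; prices in dollars per share.

def match_market_buy(asks, quantity):
    """Fill `quantity` shares against the cheapest asks first.
    Returns (fills, remaining): the (price, qty) fills and any unfilled quantity."""
    fills, remaining = [], quantity
    for price, qty in sorted(asks):  # lowest ask first
        if remaining == 0:
            break
        take = min(qty, remaining)
        fills.append((price, take))
        remaining -= take
    return fills, remaining

# Bob is the only resting seller: 25 YES at $0.65. A market buy for 20 YES
# fills entirely at his price.
fills, left = match_market_buy([(0.65, 25)], 20)
print(fills, left)  # [(0.65, 20)] 0
```

If the book runs dry, `remaining` comes back nonzero, which is exactly the liquidity problem the next section's market maker solves.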

Enter the Market Maker

CLOBs work well, but they have a problem: they need people actively posting orders from both sides. If nobody's posting orders, the market can't function. The spread can also become very wide when few traders are active, making it expensive to trade.

This is where market makers come in. A market maker continuously posts both buy and sell orders, ensuring there's always someone to trade with.

Market makers profit by maintaining a gap in prices between their bid and asks. For example:

  • They buy YES shares at $0.60
  • Sell YES shares at $0.65
  • Whenever one person buys YES from them and another person sells it to them, they pocket the $0.05 difference

This is called crossing the spread. The market maker provides liquidity to the market and is compensated for it through the spread. In traditional finance, firms like Citadel Securities make billions doing exactly this. In Hypixel Skyblock, this strategy is called "bazaar flipping".

Pricing Shares

How do market makers price their shares? In an established market, they can simply look at existing price charts to determine their prices, but that's not feasible for a brand-new prediction market. Thus, we need some way of determining a fair price for shares.

For simplicity, let's ignore extracting profit. We'll assume someone's paying our market maker, whom we'll call Duncan, a flat fee to provide this service, and they're otherwise operating in a fully efficient market where the bid-ask spread is 0.

Duncan holds some inventory of YES and NO shares, and people can trade with him. How should Duncan price his shares? Examining the question, we can see some key constraints:

  1. Prices should sum to $1: If a YES share pays out $1 when the market resolves YES, and a NO share pays out $1 when the market resolves NO, then the price of a YES share and a NO share together should be exactly $1, since the market can only resolve to one of these two options.
  2. Price equals probability: If YES shares cost $0.70, then that means the market thinks there's a 70% chance of the outcome being YES, since that's how expected value works. This is the key mechanism by which prediction markets work and even if you don't know the details of the market implementation yet, you should know this already.

Creating New Shares

Duncan needs the ability to issue shares. Otherwise, he'll run out of them, and won't be able to trade anymore. (No, he can't just raise share prices in an inverse relationship with his supply; since he sells both YES and NO shares, this would violate the constraint that prices must sum to $1.)

Fortunately, it's very easy to issue new shares. Since YES and NO sum to 1, for every dollar Duncan receives from a trader, he can mint one YES share and one NO share as a pair. When the market resolves, he'll pay out $1 to holders of the winning share type, fully covering his obligation.

From this, we can infer that any valid formula must have certain properties: buying YES must raise P(YES), the probability must depend on inventory ratios (when Duncan holds a lot of NO, the probability is high because it means he's sold a lot of YES), and YES shares should always cost less than $1, except when the market is at 100%, and vice versa. Since 0 and 1 aren't probabilities, this should never happen.

A Natural Probability Formula

Given these constraints, you might come up with this formula for deriving the probability from Duncan's inventory (and thus the prices of YES and NO):

$$P(\text{YES}) = \frac{n}{y + n}$$

where $y$ is Duncan's YES inventory and $n$ is Duncan's NO inventory.

  • When $y = n$ (such as when the market is initialized), the probability is 50%
  • If Duncan fully runs out of YES shares ($y = 0$), the probability is 1, meaning you can't profit from buying YES anymore and you can buy NO for free.
  • If Duncan fully runs out of NO shares ($n = 0$), the probability is 0.

This formula seems to satisfy all of our desiderata, and is fairly intuitive. Since P(YES) is the price of YES, we now know how to price our shares.
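
In code, this pricing rule is a one-liner (a sketch; the function and variable names are mine):

```python
def p_yes(yes_inventory, no_inventory):
    """Probability (= YES price) implied by the market maker's inventory: n / (y + n)."""
    return no_inventory / (yes_inventory + no_inventory)

print(p_yes(50, 50))   # 0.5 -- a freshly initialized market
print(p_yes(100, 50))  # ~0.333 -- lots of YES left unsold, so YES is cheap
```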

Discrete Shares

If Duncan has 50 YES and 50 NO shares, probability is 50%, so shares cost $0.50 each.

You give Duncan $1, and tell him you want to buy YES.

  1. YES costs $0.50, so $1 buys 2 YES shares
  2. He mints 1 YES + 1 NO (inventory: 51 YES, 51 NO)
  3. Duncan gives you 2 YES shares in exchange (inventory: 49 YES, 51 NO)
  4. New probability: 51/(49+51) = 51%

Another example. Duncan has 100 YES and 50 NO:

  • Probability: 50/150 = 33.33%
  • Price per YES: $0.33
  • Your $1 buys 3 YES shares
  • He mints $1 of shares (inventory: 101 YES, 51 NO)
  • He gives you back 3 YES: (inventory: 98 YES, 51 NO)
  • New probability: 51/149 = 34.23%

You might have noticed the problem already: Duncan isn't accounting for how the purchase itself affects the price.

When you buy multiple shares at once, you're getting them all at the initial price, but each share you buy should be more expensive than the last! You get a discount on bulk purchases!

Duncan could solve this by selling shares one at a time or even fractions of a share at a time, adjusting the price after each infinitesimal sale. But this is computationally expensive and assumes shares are discrete units rather than infinitely divisible.

For a continuous equation, we need to use calculus and solve a differential equation.

The Calculus of Market Making

(warning: differential equations)

Let's formalize this problem. Suppose Duncan starts with $y_s$ YES shares and $n_s$ NO shares. You deposit $m$ dollars and buy YES from Duncan.

After the trade:

  • Duncan has minted $m$ new shares of each type
  • NO inventory: $n = n_s + m$
  • YES inventory: $y = y_s + m - \text{sold}$

where $\text{sold}$ is the quantity of YES shares Duncan gives to the trader. (In this context, the subscript $s$ stands for "starting".)

The market probability at any point is:

$$p = \frac{n}{y + n}$$

Substituting our inventory formulas:

$$p = \frac{n_s + m}{(y_s + m - \text{sold}) + (n_s + m)}$$

Since we're obeying the constraint that price equals probability, the rate at which Duncan sells you shares is determined by the current probability.

The trader deposits money at rate $dm$ and receives shares at rate $d(\text{sold})$. The price per marginal share is $\frac{dm}{d(\text{sold})}$. Since we want the price to be the probability, we get:

$$\frac{dm}{d(\text{sold})} = p$$

Since we're taking money as our input, we take the reciprocal:

$$\frac{d(\text{sold})}{dm} = \frac{1}{p} = \frac{(y_s + m - \text{sold}) + (n_s + m)}{n_s + m}$$

This is our initial differential equation. I encourage you to try to solve it on your own, but if you don't know calculus or get stuck, the solution is enclosed below.

 

Multiply both sides by $n_s + m$:

$$(n_s + m)\frac{d(\text{sold})}{dm} = y_s + n_s + 2m - \text{sold}$$

Observe that $\frac{d}{dm}(n_s + m) = 1$. By the product rule, then:

$$\frac{d}{dm}\left[(n_s + m)\,\text{sold}\right] = (n_s + m)\frac{d(\text{sold})}{dm} + \text{sold} = y_s + n_s + 2m$$

Integrating both sides with respect to $m$:

$$(n_s + m)\,\text{sold} = (y_s + n_s)m + m^2 + C$$

$\text{sold}(0) = 0$, since if you spend no money you don't get any shares. If you plug in $m = 0$ and solve for $C$, you get $C = 0$, so we can just drop that term:

$$\text{sold} = \frac{(y_s + n_s)m + m^2}{n_s + m}$$

Since $y$ is just $y_s + m - \text{sold}$ and $n$ is $n_s + m$, we get:

$$y = y_s + m - \frac{(y_s + n_s)m + m^2}{n_s + m} = \frac{y_s n_s}{n_s + m}$$

You might notice that the term $n_s + m$ in the denominator of $y$ is equivalent to $n$. If you multiply $y$ and $n$ together, you get:

$$y \cdot n = \frac{y_s n_s}{n_s + m}\,(n_s + m) = y_s n_s$$

The product of Duncan's YES and NO shares remains constant, regardless of the trade![1]

Constant Product Market Maker

Thus, we've discovered the fundamental invariant:

$$y \cdot n = k$$

where $k$ is a constant determined by Duncan's initial inventory. Because YES × NO is always constant, we call this a Constant Product Market Maker (CPMM).
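
As a numerical sanity check on this invariant (a sketch; the function name is mine), you can simulate a trader buying YES in many tiny slices, with Duncan repricing at the marginal probability after each slice, and watch the product of his inventories stay (nearly) constant:

```python
def buy_yes_in_slices(y, n, dollars, steps):
    """Spend `dollars` on YES in `steps` equal slices, repricing after each slice.
    Each slice: mint dm of each share, then hand out dm/price YES shares."""
    dm = dollars / steps
    for _ in range(steps):
        price = n / (y + n)       # current YES price = probability
        y += dm - dm / price      # mint dm YES, give out dm/price YES
        n += dm                   # mint dm NO (all kept)
    return y, n

y, n = buy_yes_in_slices(50.0, 50.0, 50.0, 100_000)
print(y * n)  # very close to 2500 = 50 * 50
```

The tiny remaining discrepancy is discretization error; in the continuous limit the product is exactly invariant, matching the closed-form derivation.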

So Duncan, knowing this, has determined an algorithm for pricing shares:

  1. Receive money from trader
  2. Mint YES and NO shares
  3. Give out exactly enough YES shares (or NO shares, depending on what the trader wants) to maintain the constant product $k$

Here's an example of this in practice:

  • Duncan starts out by initializing a market with $50 of liquidity. (Initial inventory: 50 YES, 50 NO)
  • He solves for his constant product, which needs to remain invariant: $k = 50 \times 50 = 2500$.
  • You bet $50 on YES. Duncan uses this to mint more shares. (Inventory: 100 YES, 100 NO)
  • He now needs to pay out enough YES shares that he reaches his constant product again: $y \cdot n = 2500$; solve for $y$.
  • Plug in $n = 100$ to get $y = 25$.
  • He has 100 YES, and needs to have 25 YES, so he gives you 75 YES shares in exchange for your $50. (Inventory: 25 YES, 100 NO)
  • The new probability is $\frac{100}{25 + 100} = 80\%$.
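
The whole algorithm fits in a few lines. This sketch (function and variable names are mine) reproduces the numbers from the worked example above:

```python
def cpmm_buy_yes(y, n, dollars):
    """Buy YES from a constant-product market maker.
    Mint `dollars` of each share type, then pay out YES until y * n = k again."""
    k = y * n                        # invariant fixed at market creation
    y, n = y + dollars, n + dollars  # mint YES/NO pairs from the deposit
    new_y = k / n                    # solve new_y * n = k
    shares_out = y - new_y           # YES shares handed to the trader
    prob = n / (new_y + n)           # updated market probability
    return shares_out, new_y, n, prob

shares, y, n, p = cpmm_buy_yes(50, 50, 50)
print(shares, y, n, p)  # 75 YES out; inventory 25 YES, 100 NO; probability 0.8
```

Selling works the same way in reverse: add the trader's shares to inventory, find how many YES/NO pairs must be removed to restore the product, and pay out $1 per removed pair.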

Meanwhile, if a trader wants to sell shares, it's similarly simple: Duncan adds the shares to his inventory, figures out how many YES + NO pairs he needs to remove in order to return to the constant product, exchanges those pairs for cash which he gives to the trader, and removes the shares from circulation. Alternatively, and perhaps more elegantly, the trader can simply buy the opposite share and then give pairs to Duncan in exchange for cash.

(Note that, since Duncan's inventory is inversely related to the market probability, that means Duncan pockets a lot of money from traders when the market resolves counter to expectations, and loses more of his initial liquidity the more confident a correct market is.)

In fact, this process can be fully automated, creating an Automated Market Maker (AMM). This is the foundation of Uniswap, and many prediction market protocols.

Conclusion

Starting from basic constraints about prediction markets (prices sum to 1, price equals probability), we derived a unique solution. We didn't just arbitrarily choose the CPMM out of a list of options. It emerged, inexorably, from the requirements we placed.

When you properly formalize a problem with the right constraints, there's often exactly one correct answer. Independent researchers, solving similar problems with similar constraints, will converge on the same solution. When Newton and Leibniz invented calculus, they didn't get similar results because they shared their work, or because they were working on the same problem (they were working in very different fields). They got similar results because they were working on a class of problems with the same underlying structure, even if the similarities are not obvious at first.

The market itself does Bayesian updating—on expectation, as more people trade, the probability approaches the true likelihood, based on the accumulated knowledge of the traders. Our pricing mechanism had to respect this Bayesian structure. The constant product formula isn't arbitrary; it's what you get when you correctly formalize "each marginal share should cost the current probability" in continuous terms. While this isn't an empirical fact about the territory, the laws of probability nevertheless have carved out a unique shape in design space, and your map had better match it.[2]

(This is especially obvious in the context of a prediction market (which is, in a certain sense, the purest form of market, separating the trading and aggregating of information from everything else), but it applies to markets and AMMs in full generality, being used in DeFi and Crypto space.)

 

 

  1. ^

    If you don't know calculus, this is the important part.

  2. ^

    Ok, I'm completely overstating my case here and these paragraphs are largely joking; there are other solutions to this problem if you pick different probability functions matching these desiderata or come at prediction market design from a different direction, many of which have their own pros and cons, and Hanson explicitly wrote about Constant Function Market Makers. It's just that this one is very intuitive and has useful properties for a purely probabilistic YES/NO market, which is why I wrote about it.


