Published on February 11, 2026 5:38 AM GMT
Prerequisite: basic familiarity with what a prediction market is
So you want to run a prediction market. You need a way for people to trade shares. What are your options?
If you were making a prediction market from scratch, you'd probably come up with a Central Limit Order Book (CLOB). Traders post BUY and SELL orders, stating what they're willing to buy and sell, and at what price, and you record these orders in your book.
This system shows up directly in Hypixel Skyblock and other MMOs. The Bazaar lets you post orders and wait, or instantly fulfill existing orders. Have some Enchanted Iron to sell? You can list it at 540 coins and wait for a buyer, or instantly sell it by fulfilling the highest buy order at 470 coins.
The gap between the highest buy order ("bid") and the lowest sell order ("ask") is called the bid-ask spread.
CLOBs work well, but they have a problem: they need people actively posting orders from both sides. If nobody's posting orders, the market can't function. The spread can also become very wide when few traders are active, making it expensive to trade.
This is where market makers come in. A market maker continuously posts both buy and sell orders, ensuring there's always someone to trade with.
Market makers profit by maintaining a gap between their bid and ask prices. For example, a market maker might post a bid at $0.48 and an ask at $0.52; a trader who wants to buy right away pays the $0.52 ask, and one who wants to sell right away accepts the $0.48 bid.
This is called crossing the spread. The market maker provides liquidity to the market and is compensated for it through the spread. In traditional finance, firms like Citadel Securities make billions doing exactly this. In Hypixel Skyblock, this strategy is called "bazaar flipping".
How do market makers price their shares? In an established market, they can anchor their quotes to the existing price history, but a brand-new prediction market has no price history to consult. Thus, we need some way of determining a fair price for shares.
For simplicity, let's ignore extracting profit. We'll assume someone's paying our market maker, whom we'll call Duncan, a flat fee to provide this service, and they're otherwise operating in a fully efficient market where the bid-ask spread is 0.
Duncan holds some inventory of YES and NO shares, and people can trade with him. How should Duncan price his shares? Examining the question, we can see some key constraints: the prices of YES and NO must always sum to $1, and each share's price should equal the market's current probability of that outcome.
Duncan needs the ability to issue shares. Otherwise, he'll run out of them and won't be able to trade anymore. (No, he can't just raise share prices in an inverse relationship with his supply: since he sells both YES and NO shares, doing so would violate the constraint that prices must sum to $1.)
Fortunately, it's very easy to issue new shares. Since YES and NO sum to 1, for every dollar Duncan receives from a trader, he can mint one YES share and one NO share as a pair. When the market resolves, he'll pay out $1 to holders of the winning share type, fully covering his obligation.
From this, we can infer that any valid formula must have certain properties: buying YES must raise P(YES); the probability must depend on inventory ratios (when Duncan holds a lot of NO, the probability is high, because it means he's sold a lot of YES); and YES shares should always cost less than $1, which could only fail if the market hit 100% (and vice versa for NO). Since 0 and 1 aren't probabilities, that should never happen.
Given these constraints, you might come up with this formula for deriving the probability from Duncan's inventory (and thus the prices of YES and NO):
$$P(\text{YES}) = \frac{n}{y + n}$$
where $y$ is Duncan's YES inventory and $n$ is Duncan's NO inventory.
This formula seems to satisfy all of our desiderata, and is fairly intuitive. Since P(YES) is the price of YES, we now know how to price our shares.
If Duncan has 50 YES and 50 NO shares, the probability is 50%, so shares cost $0.50 each.
You give Duncan $1, and tell him you want to buy YES. Duncan mints one YES/NO pair with your dollar (bringing him to 51 YES and 51 NO), then hands you two YES shares at $0.50 each. He's left with 49 YES and 51 NO, so the new probability is 51/(49 + 51) = 51%.
Another example. Duncan has 100 YES and 50 NO: the probability is 50/150 ≈ 33%, so YES costs about $0.33, and your $1 buys roughly three YES shares, all at that same starting price.
You might have noticed the problem already: Duncan isn't accounting for how the purchase itself affects the price.
When you buy multiple shares at once, you're getting them all at the initial price, but each share you buy should be more expensive than the last! You get a discount on bulk purchases!
Duncan could solve this by selling shares one at a time or even fractions of a share at a time, adjusting the price after each infinitesimal sale. But this is computationally expensive and assumes shares are discrete units rather than infinitely divisible.
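To see where that limiting process heads numerically, here's a quick sketch (TypeScript; the function name and setup are my own illustration, not from the post) that splits a $1 purchase into many tiny installments and reprices after each one using the formula above:

```typescript
// Naive price: P(YES) = NO / (YES + NO).
// Simulate spending `deposit` dollars on YES in many tiny installments,
// repricing Duncan's inventory after every installment.
function buyYesInSlices(yes: number, no: number, deposit: number, slices: number): number {
  let bought = 0;
  const dm = deposit / slices;
  for (let i = 0; i < slices; i++) {
    yes += dm;                       // minting: each dollar becomes one YES...
    no += dm;                        // ...and one NO share
    const price = no / (yes + no);   // current probability = current price
    const shares = dm / price;
    yes -= shares;                   // hand the shares to the trader
    bought += shares;
  }
  return bought;
}

console.log(buyYesInSlices(50, 50, 1, 1));      // ≈ 2.00 shares (the naive lump-sum answer)
console.log(buyYesInSlices(50, 50, 1, 100000)); // ≈ 1.98 shares (the continuous limit)
```

With a single installment we recover the naive answer of two shares; with many installments the total converges to about 1.98, which is exactly what the calculus below gives in closed form.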
For a continuous equation, we need to use calculus and solve a differential equation
(warning: differential equations)
Let's formalize this problem. Suppose Duncan starts with $y_s$ YES shares and $n_s$ NO shares. You deposit $m$ dollars and buy YES from Duncan.
After the trade:
$$y(m) = y_s + m - \text{sold}(m)$$
$$n(m) = n_s + m$$
where $\text{sold}(m)$ is the quantity of YES shares Duncan gives to the trader. (In this context, the subscript $s$ stands for "starting".)
The market probability at any point is:
$$P(\text{YES}) = \frac{n}{y + n}$$
Substituting our inventory formulas:
$$P(\text{YES})(m) = \frac{n_s + m}{(y_s + m - \text{sold}(m)) + (n_s + m)}$$
Since we're obeying the constraint price equals probability, the rate at which Duncan sells you shares is determined by the current probability.
The trader deposits money at rate $dm$ and receives shares at rate $d(\text{sold})$. The price per marginal share is $\frac{dm}{d(\text{sold})}$. Since we want the price to be the probability, we get:
$$\frac{dm}{d(\text{sold})} = \frac{n}{y + n}$$
Since we're taking money as our input, we take the reciprocal:
$$\frac{d(\text{sold})}{dm} = \frac{y + n}{n}$$
This is our differential equation. I encourage you to try to solve it on your own, but if you don't know calculus or get stuck, the solution is enclosed below.
Multiply both sides by $n = n_s + m$ and expand $y + n$:
$$(n_s + m)\frac{d(\text{sold})}{dm} = y_s + n_s + 2m - \text{sold}$$
Observe that $\frac{d}{dm}(n_s + m) = 1$, and move $\text{sold}$ to the left-hand side. By the product rule, then:
$$(n_s + m)\frac{d(\text{sold})}{dm} + \text{sold} = \frac{d}{dm}\Big[(n_s + m)\,\text{sold}\Big] = y_s + n_s + 2m$$
Integrating both sides with respect to $m$:
$$(n_s + m)\,\text{sold} = (y_s + n_s)m + m^2 + C$$
$\text{sold}(0) = 0$, since if you spend no money you don't get any shares. If you plug in $m = 0$ and solve for $C$, you get $C = 0$, so we can just drop that term. Dividing through by $n_s + m$:
$$\text{sold}(m) = \frac{(y_s + n_s)m + m^2}{n_s + m}$$
Since $y(m)$ is just $y_s + m - \text{sold}(m)$ and $n(m)$ is $n_s + m$, we get:
$$y(m) = y_s + m - \frac{(y_s + n_s)m + m^2}{n_s + m} = \frac{y_s\, n_s}{n_s + m}$$
You might notice the term $n_s + m$ shows up in the denominator of $y(m)$, and is equivalent to $n(m)$. If you multiply $y(m)$ and $n(m)$ together, you get:
$$y(m)\,n(m) = \frac{y_s\, n_s}{n_s + m}\cdot(n_s + m) = y_s\, n_s$$
The product of Duncan's YES and NO shares remains constant, regardless of the trade![1]
Thus, we've discovered the fundamental invariant:
$$\text{YES} \times \text{NO} = k$$
where $k$ is a constant determined by Duncan's initial inventory. Because YES × NO is always constant, we call this a Constant Product Market Maker (CPMM).
So Duncan, knowing this, has determined an algorithm for pricing shares: when a trader hands him $m$ dollars for YES, he mints $m$ YES/NO pairs, then gives the trader exactly enough YES shares that his inventory returns to the constant product $k$. Here's an example of this in practice:
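A minimal sketch of that buy computation (TypeScript; the function and variable names are mine, not the post's), assuming the invariant YES × NO = k derived above:

```typescript
// CPMM buy: the trader spends `deposit` dollars on YES shares.
// Duncan mints `deposit` YES/NO pairs, then hands back exactly enough
// YES shares to restore the constant product yes * no = k.
function buyYes(yes: number, no: number, deposit: number) {
  const k = yes * no;                   // invariant before the trade
  const yesAfterMint = yes + deposit;   // each dollar mints one YES...
  const noAfterMint = no + deposit;     // ...and one NO
  const sharesBought = yesAfterMint - k / noAfterMint;
  const newYes = yesAfterMint - sharesBought;       // equals k / noAfterMint
  const probability = noAfterMint / (newYes + noAfterMint);
  return { sharesBought, newYes, newNo: noAfterMint, probability };
}

// Duncan starts at 50 YES / 50 NO; a trader spends $1 on YES.
console.log(buyYes(50, 50, 1));
// → about 1.98 shares and a new probability of about 51%
```

Note that if the trader immediately sells those ~1.98 shares back, the same invariant returns exactly their $1, consistent with the zero-spread assumption from earlier.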
Meanwhile, if a trader wants to sell shares, it's similarly simple: Duncan adds the shares to his inventory, figures out how many YES + NO pairs he can remove while still satisfying the constant product, redeems those pairs for cash ($1 each), and gives that cash to the trader, removing the shares from circulation. Alternatively, and perhaps more elegantly, the trader can simply buy the opposite share and then hand complete pairs to Duncan in exchange for cash.
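Here is the corresponding sell computation as a sketch (again TypeScript with illustrative names of my own; the quadratic comes from requiring the post-trade inventories to satisfy the same invariant):

```typescript
// CPMM sell: the trader returns `shares` YES shares for cash.
// Duncan adds them to his inventory, then redeems p YES/NO pairs ($1 each)
// so that (yes + shares - p) * (no - p) = k again; the payout is p dollars.
function sellYes(yes: number, no: number, shares: number) {
  const k = yes * no;
  const Y = yes + shares;   // YES inventory after taking the trader's shares
  const N = no;
  // (Y - p)(N - p) = k  =>  p^2 - (Y + N)p + (Y*N - k) = 0; take the smaller root.
  const p = ((Y + N) - Math.sqrt((Y + N) ** 2 - 4 * (Y * N - k))) / 2;
  return { payout: p, newYes: Y - p, newNo: N - p };
}

// Undoing the earlier purchase: selling ~1.98 YES back at ~49.02 YES / 51 NO
// returns exactly $1 and restores Duncan to 50 / 50.
console.log(sellYes(49.0196078431, 51, 1.9803921569));
```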
(Note that Duncan's inventory of each share type is inversely related to that outcome's probability: he ends up holding lots of whichever share the market considers unlikely. That means Duncan pockets a lot of money from traders when the market resolves counter to expectations, and loses more of his initial liquidity the more confident a correct market is.)
In fact, this process can be fully automated, creating an Automated Market Maker (AMM). This is the foundation of Uniswap, and many prediction market protocols.
Starting from basic constraints about prediction markets (prices sum to 1, price equals probability), we derived a unique solution. We didn't just arbitrarily choose the CPMM out of a list of options. It emerged, inexorably, from the requirements we placed.
When you properly formalize a problem with the right constraints, there's often exactly one correct answer. Independent researchers, solving similar problems with similar constraints, will converge on the same solution. When Newton and Leibniz invented calculus, they didn't get similar results because they shared their work, or because they were working on the same problem (they were working in very different fields). They got similar results because they were working on a class of problems with the same underlying structure, even if the similarities are not obvious at first.
The market itself does Bayesian updating—on expectation, as more people trade, the probability approaches the true likelihood, based on the accumulated knowledge of the traders. Our pricing mechanism had to respect this Bayesian structure. The constant product formula isn't arbitrary; it's what you get when you correctly formalize "each marginal share should cost the current probability" in continuous terms. While this isn't an empirical fact about the territory, the laws of probability nevertheless have carved out a unique shape in design space, and your map had better match it.[2]
(This is especially obvious in the context of a prediction market, which is, in a certain sense, the purest form of market, separating the trading and aggregating of information from everything else. But it applies to markets and AMMs in full generality, as their use throughout the DeFi and crypto space shows.)
If you don't know calculus, this is the important part.
Ok, I'm completely overstating my case here, and these paragraphs are largely joking. There are other solutions to this problem if you pick different probability functions matching these desiderata, or come at prediction market design from a different direction, many of which have their own pros and cons, and Hanson explicitly wrote about constant-function market makers. It's just that this one is very intuitive and has useful properties for a purely probabilistic YES/NO market, which is why I wrote about it.
Published on February 11, 2026 4:49 AM GMT
punctilio (n.): precise observance of formalities.
Pretty good at making your text pretty. The most feature-complete and reliable English micro-typography package—transforms plain ASCII punctuation into typographically correct Unicode, even across HTML element boundaries.
Smart quotes · Em/en dashes · Ellipses · Math symbols · Legal symbols · Arrows · Primes · Fractions · Superscripts · Ligatures · Non-breaking spaces · HTML-aware · Bri’ish localisation support
import { transform } from 'punctilio'
transform('"It\'s a beautiful thing, the destruction of words..." -- 1984')
// → “It’s a beautiful thing, the destruction of words…”—1984
npm install punctilio
As far as I can tell, punctilio is the most reliable and feature-complete. I built punctilio for my website. I wrote[1] and sharpened the core regexes sporadically over several months, exhaustively testing edge cases. Eventually, I decided to spin off the functionality into its own package.
I tested punctilio 1.7.13 against smartypants 0.2.2, tipograph 0.7.4, smartquotes 2.3.2, typograf 7.6.0, and retext-smartypants 6.2.0.[2] These other packages have spotty feature coverage and inconsistent impact on text. For example, smartypants mishandles quotes after em dashes (though quite hard to see in GitHub’s font) and lacks multiplication sign support.
| Input | smartypants | punctilio |
|---|---|---|
| 5x5 | 5x5 (✗) | 5×5 (✓) |
My benchmark.mjs measures how well libraries handle a wide range of scenarios. The benchmark normalizes stylistic differences (e.g. non-breaking vs regular space, British vs American dash spacing) for fair comparison.
| Package | Passed (of 159) |
|---|---|
| punctilio | 154 (97%) |
| tipograph | 92 (58%) |
| typograf | 74 (47%) |
| smartquotes | 72 (45%) |
| smartypants | 68 (43%) |
| retext-smartypants | 65 (41%) |
| Feature | Example | punctilio | smartypants | tipograph | smartquotes | typograf |
|---|---|---|---|---|---|---|
| Smart quotes | "hello" → “hello” | ✓ | ✓ | ✓ | ✓ | ✓ |
| Leading apostrophe | 'Twas → ’Twas | ✓ | ✗ | ✗ | ◐ | ✗ |
| Em dash | -- → — | ✓ | ✓ | ✗ | ✗ | ✓ |
| En dash (ranges) | 1-5 → 1–5 | ✓ | ✗ | ✓ | ✗ | ✗ |
| Minus sign | -5 → −5 | ✓ | ✗ | ✓ | ✗ | ✗ |
| Ellipsis | ... → … | ✓ | ✓ | ✓ | ✗ | ✓ |
| Multiplication | 5x5 → 5×5 | ✓ | ✗ | ✗ | ✗ | ◐ |
| Math symbols | != → ≠ | ✓ | ✗ | ◐ | ✗ | ◐ |
| Legal symbols | (c) 2004 → © 2004 | ✓ | ✗ | ◐ | ✗ | ✓ |
| Arrows | -> → → | ✓ | ✗ | ◐ | ✗ | ◐ |
| Prime marks | 5'10" → 5′10″ | ✓ | ✗ | ✓ | ✓ | ✗ |
| Degrees | 20 C → 20 °C | ✓ | ✗ | ✗ | ✗ | ✓ |
| Fractions | 1/2 → ½ | ✓ | ✗ | ✗ | ✗ | ✓ |
| Superscripts | 2nd → 2ⁿᵈ | ✓ | ✗ | ✗ | ✗ | ✗ |
| English localization | American / British | ✓ | ✗ | ✗ | ✗ | ✗ |
| Ligatures | ?? → ⁇ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Non-English quotes | „Hallo” | ✗ | ✗ | ✓ | ✗ | ◐ |
| Non-breaking spaces | Chapter 1 | ✓ | ✗ | ✗ | ✗ | ✓ |
Known limitations of punctilio:

| Pattern | Behavior | Notes |
|---|---|---|
| '99 but 5' clearance | 5' not converted to 5′ | Leading apostrophe is indistinguishable from an opening quote without semantic understanding |
| «Bonjour» | Not spaced to « Bonjour » | French localization not supported |
Setting aside the benchmark, punctilio’s test suite includes 1,100+ tests at 100% branch coverage, including edge cases derived from competitor libraries (smartquotes, retext-smartypants, typograf), and the Standard Ebooks typography manual. Further, all transformations are stable when applied multiple times.
Perhaps the most innovative feature of the library is that it properly handles DOMs! (This means it'll also work on Markdown: convert to HTML, transform with punctilio, convert back to Markdown.)
Other typography libraries either transform plain strings or operate on AST nodes individually (retext-smartypants can’t map changes back to HTML). But real HTML has text spanning multiple elements. If you concatenate text from <em>Wait</em>..., transform it, then try to split it back, you’ve lost track of where </em> belonged.
punctilio introduces separation boundaries. First, it inserts a “separator” character (default: U+E000) at each element boundary before transforming (like at the start and end of an <em>). Every regex allows this character mid-pattern without breaking matches. For example, “.[SEP]..” still becomes “…[SEP]”.
import { transform, DEFAULT_SEPARATOR } from 'punctilio'
const sep = DEFAULT_SEPARATOR
transform(`"Wait${sep}"`)
// → `“Wait”${sep}`
// The separator doesn’t block the information that this should be an end-quote!
punctilio validates the output by ensuring the separator count remains the same.
For rehype / unified pipelines, use the built-in plugin which handles the separator logic automatically:
import rehypePunctilio from 'punctilio/rehype'
unified()
.use(rehypeParse)
.use(rehypePunctilio)
.use(rehypeStringify)
.process('<p><em>"Wait</em>..." -- she said</p>')
// → <p><em>“Wait</em>…”—she said</p>
// The opening quote inside <em> and the closing quote outside it
// are both resolved correctly across the element boundary.
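For the Markdown workflow mentioned above (convert to HTML, transform, convert back), one possible pipeline looks like this; the remark/rehype plugin choices here are my own illustration, not an official recipe from the package:

```typescript
import { unified } from 'unified'
import remarkParse from 'remark-parse'
import remarkRehype from 'remark-rehype'
import rehypeRemark from 'rehype-remark'
import remarkStringify from 'remark-stringify'
import rehypePunctilio from 'punctilio/rehype'

// Markdown → HTML tree → punctilio → back to Markdown.
const result = await unified()
  .use(remarkParse)
  .use(remarkRehype)
  .use(rehypePunctilio)
  .use(rehypeRemark)
  .use(remarkStringify)
  .process('*"Wait*..." -- she said')

console.log(String(result))
// Expect the same quote/dash/ellipsis fixes as the HTML example above.
```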
Options
punctilio doesn't enable all transformations by default. Fractions and degrees tend to match too aggressively (perfectly applying the degree transformation requires semantic understanding). Superscript letters and punctuation ligatures have spotty font support. Furthermore, ligatures: true can change the meaning of text by collapsing question and exclamation marks. Non-breaking spaces are also opt-in since they alter whitespace throughout the text.
transform(text, {
punctuationStyle: 'american' | 'british' | 'none', // default: 'american'
dashStyle: 'american' | 'british' | 'none', // default: 'american'
symbols: true, // math, legal, arrows, primes
collapseSpaces: true, // normalize whitespace
fractions: false, // 1/2 → ½
degrees: false, // 20 C → 20 °C
superscript: false, // 1st → 1ˢᵗ
ligatures: false, // ??? → ⁇, ?! → ⁈, !? → ⁉, !!! → ‼
nbsp: false, // non-breaking spaces (after honorifics, between numbers and units, etc.)
checkIdempotency: true, // verify transform(transform(x)) === transform(x)
})
Prime marks (e.g. 5'10" → 5′10″) require semantic understanding to distinguish from closing quotes (e.g. "Term 1" should produce a closing quote). punctilio counts quotes to heuristically guess whether the matched number is at the end of a quote (if not, the mark is treated as a prime). Other libraries like tipograph 0.7.4 use simpler patterns that make more mistakes.
The american style follows the Chicago Manual of Style; the british style follows Oxford style.
punctilio is idempotent by design: transform(transform(text)) always equals transform(text). If performance is critical, set checkIdempotency: false to skip the verification pass.
If you run into bugs, please create an issue! You can read about my other open-source projects on my website.
While Claude is the number one contributor to this repository, that’s because Claude helped me port my existing code and added some features. The core regular expressions (e.g. dashes, quotes, multiplication signs) are human-written and were quite delicate. Those numerous commits don’t show in this repo’s history.
The Python libraries I found were closely related to the JavaScript packages. I tested them and found similar scores, so I don’t include separate Python results. ↩︎
Published on February 11, 2026 12:15 AM GMT
Save the date: LessOnline will be back again in 2026! As usual, it will take place at Lighthaven in Berkeley, CA.
Further details will be posted to LessWrong, or you can subscribe here for updates such as ticket sales going live. Details about who's attending, housing, and the relation to Manifest will be forthcoming.
What is LessOnline?
LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle... or hang out with them irl again if you met them at the first or second LessOnline!

Published on February 11, 2026 12:08 AM GMT
I show two parallel Opus 4.6 agents building a fully functioning Rust regex engine after ~1,273 Claude sessions and 1.7K lines of code.
As I do not have the resources to build a C compiler as is done in the original work by Carlini, the next task I am tackling is an embedded SQLite-like database engine in Rust, which I predict to be 1 OOM larger in terms of LOC.
Published on February 10, 2026 10:43 PM GMT
TLDR
Oversight fails when explanation is costly and weakly penalized. This note sketches a minimal incentive mechanism—Witness-or-Wager (WoW)—that removes free opacity by requiring each substantive claim to take one of three admissible forms: a verifiable witness, a grounded probabilistic wager, or silence.
WoW is not an alignment solution. It is a small incentive layer that makes epistemic honesty locally optimal when verification is feasible.
Motivation: The Oversight Gap
Many oversight techniques (debate, critique, decomposition) rely on eliciting explanations. However, empirical work shows that elicited explanations are often post-hoc rationalizations optimized for plausibility rather than faithfulness (e.g. Jacovi & Goldberg 2020; Turpin et al. 2023).
This creates a structural asymmetry: producing a plausible-sounding explanation is cheap and rarely penalized, while producing (and verifying) a faithful one is costly.
This looks less like a prompting problem and more like an incentive problem. If honesty is expensive and weakly enforced, explanation requests will tend to select for rhetorical adequacy rather than epistemic fidelity.
Atomization Lens (Logical Atomism, Operationalized)
WoW treats “show your work” as a requirement to decompose a claim into a minimally viable set of logical atoms—small, addressable units whose verification can be priced and audited. This borrows a methodological idea from Logical Atomism: complex propositions can be analyzed into “elementary” components.
WoW does not assume a unique final analysis, metaphysical atomism, or a bedrock truth in Logical Atomism. Atomization is verifier-relative and domain-typed; it is an enforcement interface chosen to minimize verification cost while preserving epistemic accountability.
The Core Mechanism: Witness-or-Wager
For any substantive claim, the system must supply a minimum checkable answer in one of three admissible forms:
- Witness
A verifiable witness: a minimum checkable answer decomposed into logical atoms. A logical atom is the smallest verifier-addressable check unit in a given domain (e.g., one rule application, one citation entailment check, one test execution, one tool call plus predicate). A minimum checkable answer (MCA) should be a minimally viable set of such atoms, sufficient for verification under the verifier’s procedure, and minimal (up to slack) under the verifier’s cost model.
- Wager
A grounded probabilistic wager. A wager is not a bet on future realization, but a present-time epistemic claim: honesty is evaluated by whether the stated probability is justified by the available evidence and reasoning at the time of output. Reasoning traces are assessed for plausibility and grounding, not narrative richness; excessive or complex traces incur higher verification cost and therefore lower expected reward.
- Silence
Explicit abstention (“I don’t know”) or explicit speculation. No reward, no penalty.
Free-standing assertions, rhetorical explanations, or ungrounded hedging are inadmissible as the rewarded minimum checkable answer.
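As a concrete and entirely illustrative sketch of the interface this implies (TypeScript; none of these names or numbers come from the post):

```typescript
// Illustrative types for Witness-or-Wager outputs.
// All identifiers and reward values here are placeholders, not from the post.
type Atom = { check: string };          // one verifier-addressable check unit
type Output =
  | { kind: 'witness'; atoms: Atom[] }  // minimum checkable answer
  | { kind: 'wager'; probability: number; grounds: string }
  | { kind: 'silence' };

// Verifier-supplied hooks: bounded verification plus a cost model that
// weakly increases with verification burden.
interface Verifier {
  verifyAtoms(atoms: Atom[]): boolean;
  wagerIsGrounded(probability: number, grounds: string): boolean;
  cost(output: Output): number;
}

// A scalar approximation of the scoring: verified witnesses are rewarded above
// admissible wagers, wagers above silence, and inadmissible outputs are
// penalized. (The post treats witness and wager rewards as non-fungible
// channels rather than one scalar; this collapses them for illustration.)
function score(output: Output, verifier: Verifier): number {
  switch (output.kind) {
    case 'witness':
      return verifier.verifyAtoms(output.atoms)
        ? 10 - verifier.cost(output)   // verified witness
        : -10;                         // refuted witness
    case 'wager':
      return verifier.wagerIsGrounded(output.probability, output.grounds)
        ? 3 - verifier.cost(output)    // grounded wager
        : -5;                          // ungrounded hedging: inadmissible
    case 'silence':
      return 0;                        // no reward, no penalty
  }
}
```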
Incentive Structure
The following is a simplified sketch to convey the incentive direction. Let:
Honesty dominates when:
This can be implemented with fixed rewards for admissible outputs and verifier-set costs that increase with verification burden. Precision is not required—only monotonicity (longer/more complex justifications should weakly cost more). Both witnesses and wagers are evaluated immediately; there is no deferral of epistemic accountability to future outcomes.
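One possible concrete instantiation of that condition (the notation here is mine, standing in for the post's elided definitions): let $R$ be the fixed reward for an admissible, verified output, $P$ the penalty for a refuted claim, $q$ the probability that an unfaithful justification survives verification, and $c(\ell)$ a verification cost that weakly increases with justification length $\ell$. Then honesty dominates bluffing roughly when:

$$R - c(\ell_{\text{honest}}) \;>\; q\,R - (1 - q)\,P - c(\ell_{\text{bluff}})$$

Because $c$ is monotone and bluffs typically require longer, more elaborate traces, raising the penalty $P$ or tightening verification (lowering $q$) widens the honest side's advantage; nothing here requires precise calibration, only the monotone cost schedule described above.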
Subordination: Why Deductive Comes First
A key design choice is lexicographic subordination:
If a feasible witness exists with positive net payoff, it strictly dominates any wager.
Without this, the mechanism collapses into scalar optimization: systems hedge probabilistically even when proof is cheap.
This is an economic design constraint, not a philosophical claim about truth. Deductive and inductive outputs are treated as non-fungible reward channels or drives, enforced sequentially: the witness channel is checked first, and the wager channel only applies when no feasible witness is offered.
It is a layered topology: two separate drive systems. There is no “exchange rate” between witness points and wager points. This is a topic for another post: tiered drive mechanisms / non-fungible rewards.
Examples
Relation to Existing Oversight and Possible Extensions
WoW does not replace debate, amplification, or process supervision. It can wrap them: concise verifiable chains are favored over performative verbosity. WoW can work alongside a system like doubly-efficient debate, reducing verification cost while increasing sensitivity to fabrication and obfuscation.
Scope and Limits
WoW governs epistemic outputs, not values or ontology.
Applicability varies by domain:
| Domain | Bounded Verification | Probabilistic Scoring | WoW Strength |
|---|---|---|---|
| Math / Code | High | Medium | Strong |
| NL / Commonsense | Low–Medium | High | Partial |
| Long-horizon forecasting | Very Low | Medium | Partial |
In weak-verification domains, wagers dominate; witnesses are rare. This is expected.
Open Problems
These are implementation questions, not requirements for the core mechanism.
Conclusion
Oversight fails when explanation is optional and opacity is free. Witness-or-Wager removes free opacity by forcing binding epistemic commitments—proof, probability, or silence—and structuring rewards so honesty is locally optimal when feasible.
This is a minimal incentive layer, not a complete solution, but it cleanly stacks with existing oversight approaches.
FAQs
“Doesn’t this just push the problem into defining ‘atoms’?”
The mechanism does not require a precise atomization of reasoning steps—only a verifier-controlled cost schedule that increases with verification burden.
“Wagers still allow bullshit narratives.”
WoW does not solve narrative inflation in inductive domains; it merely prices it. Defining admissible wager traces is an open problem.
“Why not just always ask for probabilities?”
Without deductive-first subordination, the mechanism degenerates into calibrated hedging even when proof is cheap.
“This assumes too much verifier power.”
WoW assumes bounded verification and credible enforcement in some domains; it is not intended for unconstrained settings.
“Isn’t silence exploitable?”
Silence prevents forced bluffing and dominates low-quality wagers; if silence is over-preferred, the system is mis-calibrated (e.g. penalties too steep, judging too harsh, etc.). The system should be able to easily optimize just by being honest.
Published on February 10, 2026 5:59 PM GMT
Cross-posted from Telescopic Turnip
Recommended soundtrack for this post
As we all know, the march of technological progress is best summarized by this meme from Linkedin:
Inventors constantly come up with exciting new inventions, each of them with the potential to change everything forever. But only a fraction of these ever establish themselves as a persistent part of civilization, and the rest vanish from collective consciousness. Before shutting down forever, though, the alternate branches of the tech tree leave some faint traces behind: over-optimistic sci-fi stories, outdated educational cartoons, and, sometimes, some obscure accessories that briefly made it to mass production before being quietly discontinued.
The classical example of an abandoned timeline is the Glorious Atomic Future, as described in the 1957 Disney cartoon Our Friend the Atom. A scientist with a suspiciously German accent explains all the wonderful things nuclear power will bring to our lives:
Sadly, the glorious atomic future somewhat failed to materialize, and, by the early 1960s, the project to rip a second Panama canal by detonating a necklace of nuclear bombs was canceled, because we are ruled by bureaucrats who hate fun and efficiency.
While the Our-Friend-the-Atom timeline remains out of reach from most hobbyists, not all alternate timelines are permanently closed to exploration. There are other timelines that you can explore from the comfort of your home, just by buying a few second-hand items off eBay.
I recently spent a few months in one of these abandoned timelines: the one where the microwave oven replaced the stove.
First, I had to get myself a copy of the world’s saddest book.
Marie T. Smith’s Microwave Cooking for One is an old forgotten book of microwave recipes from the 1980s. In the mid-2010s, it garnered the momentary attention of the Internet as “the world’s saddest cookbook”:
To the modern eye, it seems obvious that microwave cooking can only be about reheating ready-made frozen food. It’s about staring blankly at the buzzing white box, waiting for the four dreadful beeps that give you permission to eat. It’s about consuming lukewarm processed slop on a rickety formica table, with only the crackling of a flickering neon light piercing through the silence.
But this is completely misinterpreting Microwave Cooking for One’s vision. Two important pieces of context are missing. First – the book was published in 1985. Compare to the adoption S-curve of the microwave oven:
When MCfO was published, microwave cooking was still a new entrant to the world of household electronics. Market researchers were speculating about how the food and packaging industries would adapt their products to the new era and how deep the transformation would go. Many saw the microwave revolution as a material necessity: women were massively entering the workforce, and soon nobody would have much time to spend behind a stove. In 1985, the microwave future looked inevitable.
Second – Marie T. Smith is a microwave maximalist. She spent ten years putting every comestible object in the microwave to see what happens. Look at the items on the book cover – some are obviously impossible to prepare with a microwave, right? Well, that’s where you’re wrong. Marie T. Smith figured out a way to prepare absolutely everything. If you are a disciple of her philosophy, you shouldn’t even own a stove. Smith herself hasn’t owned one since the early 1970s. As she explains in the cookbook’s introduction, Smith believed the microwave would ultimately replace stove-top cooking, the same way stove-top cooking had replaced campfire-top cooking.
So, my goal is twofold: first, I want to know if there’s any merit to all of these forgotten microwaving techniques. Something that can make plasma out of grapes, set your house on fire and bring frozen hamsters back to life cannot be fundamentally bad. But also, I want to get a glimpse of what the world looks like in the uchronia where Marie T. Smith won and Big Teflon lost. Why did we drift apart from this timeline?
Before we start experimenting, it’s helpful to have a coarse intuition of how microwave ovens work. Microwaves use a device called a magnetron to emit radiation with wavelengths around 5-10 cm, and send it to bounce around the closed chamber where you put your food. The idea that electromagnetic radiation can heat stuff up isn’t particularly strange (we’ve all been exposed to the sun), but microwaves do it in an odd spooky way. Microwaves’ frequency is too low to be absorbed directly by food molecules. Instead, it is just low enough that, in effect, the electric field around the molecules regularly changes direction. If the molecules have a dipole moment (as water does), they start wiggling around, and the friction generates plenty of heat.
As far as I can tell, this kind of light-matter interaction doesn’t occur to a noticeable degree anywhere on Earth, except in our microwave ovens. This is going to be important later: the microwave is weird, and it often behaves contrary to our day-to-day intuitions. (For example, it’s surprisingly hard to melt ice cubes in the microwave. This is because the water molecules are locked in a lattice, so they can’t spin as much as they would in a liquid.) Thus, to tame the microwave, the first thing we’ll need is an open mind.
With that in mind, let’s open the grimoire of Microwave Cooking for One and see what kind of blood magic we can conjure from it.
The book cover, with its smiling middle-aged woman and its abundance of provisions, makes it look like it’s going to be nice and wholesome.
It’s not going to be nice and wholesome.
Microwave cooking is not about intuition. It’s about discipline. The timing and the wattage matter, but so do the exact shape and size of the vessels. Smith gives us a list of specific hardware with exceedingly modern names like the Cook’n’Pour® Saucepan or the CorningWare™ Menu-ette® so we can get reproducible results. If you were used to counting carrots in carrot units, that has to stop – carrots are measured in ounces, with a scale, and for volume you use a metal measuring cup. Glass ones are simply too inaccurate for where we are going.
The actual recipe section starts with the recipe for a bowl of cereal, which I am 70% sure is a joke:
Whenever a cooking time is specified, Smith includes “(____)” as a placeholder, so you can write in your own value, optimized for your particular setup. If your hot cereal is anything short of delicious, you are invited to do your own step of gradient descent.
A lot of recipes in the book involve stacking various objects under, above, and around the food. For vegetables, Smith generally recommends slicing them thinly, putting them between a cardboard plate and towel paper, then microwaving the ensemble. This works great. I tried it with onion and carrots, and it does make nice crispy vegetables, similar to what you get when you steam the vegetables in a rice cooker (also a great technique). I’d still say the rice cooker gives better results, but for situations where you absolutely need your carrots done in under two minutes, the microwave method is hard to beat.
But cardboard contraptions, on their own, can only take us this far. They do little to overcome the true frontier for microwave-only cooking: the Maillard reaction. Around 150°C, amino acids and sugars combine to form dark-colored tasty compounds, also known as browning. For a good browning, you must rapidly reach temperatures well above the boiling point of water. This is particularly difficult to do in a microwave – which is why people tend to use the microwave specifically for things that don’t require the Maillard reaction.
But this is because people are weak. True radicals, like Marie T. Smith and myself, are able to obtain a perfectly fine Maillard reaction in their microwave ovens. All you need is the right cookware. Are you ready to use the full extent of microwave capabilities?
In 1938, chemists from DuPont were trying to create a revolutionary refrigerant, when they accidentally synthesized a new compound they called teflon. It took until the early 1950s for the wife of a random engineer to suggest that teflon could be used to coat frying pans, and it worked. This led to the development of the teflon-coated frying pan.
In parallel, in 1953, chemists from Corning were trying to create photosensitive glass that could be etched using UV light, when they accidentally synthesized a new compound they called pyroceram. Pyroceram is almost unbreakable, extremely resistant to heat shocks, and remarkably non-sticky. Most importantly, the bottom can be coated with tin oxide, which enables it to absorb microwave radiation and become arbitrarily hot. This led to the development of the microwave browning skillet.
In the stove-top timeline where we live, the teflon-coated pan has become ubiquitous. But in the alternate microwave timeline, nobody has heard of teflon pans, and everybody owns a pyroceram browning skillet instead.
I know most of you are meta-contrarian edgelords, but nothing today will smash your Overton window harder than the 1986 cooking TV show Good Days, where Marie T. Smith is seen microwaving a complete cheeseburger on live TV using such a skillet.
I acquired mine second-hand from eBay and it quickly became one of my favorite objects. I could only describe its aesthetics as tradwife futurism. The overall design and cute colonial house drawings give it clear 1980s grandma vibes, but the three standoffs and metal-coated bottom give it a strange futuristic quality. It truly feels like an object from another timeline.
The key trick is to put the empty skillet alone in the microwave and let it accumulate as much heat as you desire[1] before adding the food. Then, supposedly, you can get any degree of searing you like by following the right sequence of bleeps and bloops.
According to Marie Smith, this is superior to traditional stove-top cooking in many ways – it’s faster, consumes less energy, and requires less effort to clean the dishes. Let’s try a few basic recipes to see how well it works.
Let’s start with something maximally outrageous: the microwaved steak with onions. I’d typically use olive oil, but the first step in Smith’s recipe is to rub the steak in butter, making this recipe a heresy for at least three groups of people.
The onions are cooked with the veggie cooking method again, and the steak is done with a masterful use of the browning skillet.
I split the meat in two halves, so I could directly compare the orthodox and heretical methods.[2] The results were very promising. It takes a little bit of practice to get things exactly right, but not much more than the traditional method. The Pyroceram pan was about as easy to clean as the Teflon one. I didn’t measure the energy cost, but the microwave would probably win on that front. So far, the alternate timeline holds up quite well.
As a second eval, I tried sunny-side up eggs. On the face of it, it’s the simplest possible recipe, but it’s surprisingly hard to master. The problem is that different parts of the egg have different optimal cooking temperatures. Adam Ragusea has a video showcasing half a dozen techniques, none of which feature a microwave.
What does Marie Smith have to say about this? She employs a multi-step method. Like with the steak, we start by preheating the browning skillet. Then, we quickly coat it with butter, which should instantly start to boil. This is when we add the egg, sprinkle it lightly with water, and put it back in the oven for 45 (___) seconds. (Why the water sprinkling? Smith doesn’t explain. Maybe it’s meant to ensure the egg receives heat from all directions?)
Here again, I was pleased with the result – I’d go as far as saying it works better than the pan. With that success, I went on to try the next step of difficulty: poached eggs.
Poached eggs are my secret internal benchmark. Never in my life have I managed to make proper poached eggs, despite trying every weird trick and lifehack I came across. Will MCfO break my streak of bad luck?
Like for veggies, the egg is poached in the middle of an assemblage of multiple imbricated containers filled with specific amounts of water and pre-heated in a multi-step procedure. We are also told that the egg yolk must be punctured with a fork before cooking. (What happens if you don’t? The book doesn’t say, and I would rather not know.)
The recipe calls for 1 minute and 10 seconds of cooking at full power. Around the 1 minute and 5 seconds mark, my egg violently exploded, sending the various vessels to bounce around the walls of the oven. And listen, as I said, I came to this book with an open mind, but I expect a cookbook to give you at least enough information to avoid a literal explosion. So I wrote “LESS” in the “(____)” and never tried this recipe again.
The rest of the book is mostly made of variations of these basic methods. Some recipes sound like they would plausibly work, but were not interesting enough for me to try (for example, the pasta recipes primarily involve boiling water in the microwave and cooking pasta in it).
All in all, I think I believe most of the claims Smith makes about the microwave. Would it be possible to survive in a bunker with just a laptop, a microwave and a Cook’n’Pour SaucePan®? I think so. It probably saves energy, it definitely saves time washing the dishes, and getting a perfect browning is entirely within reach. There were failures, and many recipes would require a few rounds of practice before getting everything right, but the same is true for stove-top cooking.
On the other hand, there’s a reason the book is called Microwave Cooking for One and not Microwave Cooking for a Large, Loving Family. It’s not just because it is targeted at lonely losers. It’s because microwave cooking becomes exponentially more complicated as you increase the number of guests. I am not saying that the microwave technology in itself cannot be scaled up – if you really want to, it can:
But these industrial giant microwaves are processing a steady stream of regular, standard-sized pieces of food. Homecooking is different. Each potato comes in a different size and shape. So, while baking one potato according to MCfO’s guidance is easy and works wonderfully, things quickly get out of hand when you try baking multiple potatoes at the same time. Here is the sad truth: baking potatoes in the microwave is an NP-hard problem. For a general-purpose home-cooking technology, that’s a serious setback.
The weird thing is, the microwave maximalists of the 1980s got the sociology mostly right. People are preparing meals for themselves for longer and longer stretches of their lives. Women are indeed spending less time in the kitchen. The future where people cook For One – the one that was supposed to make the microwave timeline inevitable, arrived exactly as planned. And yet, the microwave stayed a lowly reheating device. Something else must be going on. Maybe the real forking path happened at the level of vibes?
To start with the obvious, the microwave has always been spooky, scary tech. Microwave heating was discovered by accident in 1945 by an engineer while he was developing new radar technologies for the US military. These are the worst possible circumstances to discover some new cooking tech – microwave manufacturers had to persuade normal civilians, who just watched Hiroshima on live TV, to irradiate their food with invisible electromagnetic waves coming from an object called “the magnetron”. Add that to the generally weird and counterintuitive behavior of food in the microwave, and it’s not surprising that people treated the device with suspicion.
Second, microwave cooking fell victim to the same curse that threatens every new easy-to-use technology: it became low-status tech. In Inadequate Equilibria, Eliezer makes a similar point about velcro: the earliest adopters of velcro were toddlers and the elderly – the people who had the most trouble tying their shoes. So Velcro became unforgivably unfashionable. I think a similar process happened with microwaves. While microwave ovens can cook pretty much any meal to any degree of sophistication, the place where they truly excel is reheating shitty canned meals, and soon the two became inseparable in the collective mind, preventing microwaves from reaching their full potential for more elaborate cuisine.
Third, compared to frying things in a pan, microwave cooking is just fundamentally less fun. I actually enjoy seeing my food transform into something visibly delicious before my eyes. But microwave cooking, even when done perfectly right, gives you none of that. You can still hear the noises, but not knowing what produced them makes them significantly more ominous. Some advanced recipes in MCfO call for 8 minutes at full power, and 8 minutes feel like a lot of time when you are helplessly listening to the monstrous anger of the oil, the stuttering onions’ rapid rattle, and the shrill, demented choirs of wailing pork ribs.
With all that said, I do think Microwave Cooking for One is an admirable cookbook. The recipes are probably not the finest cuisine, but they’ll expand your cooking possibilities more than any other recipe book.[3] What I find uniquely cool about Marie T. Smith is that she started with no credentials or qualifications: she was a random housewife who simply fell in love with a new piece of technology, spent a decade pushing it to its limits, and published her findings as a cookbook. Just a woman and a magnetron. You can just explore your own branch of the tech tree!
Let’s not oversell it – if your reference class is “tech visionaries”, maybe that’s taking it a bit too far. If your reference class is “Middle-aged Americans from the eighties who claim they can expand your horizons using waves”, then Marie T. Smith is easily top percentile.
To illustrate the fact that things can get really hot in a microwave oven, here is a tutorial for smelting metals in the microwave. You just need a graphite crucible and very tolerant roommates.
For maximum scientific rigor, I should have done a blind comparison, but I didn’t – in part because I’m lazy, in part because asking someone else to do the blinding feels like breaking the rules. It’s microwave cooking For One. I must face it alone. It is my journey.
My main complaint with Microwave Cooking for One is that it doesn’t have an entry for “birthday cake”. Come on Marie, you had one job.