Shtetl-Optimized
The Blog of Scott Aaronson

No more NYT cooperation: my dog-rape red line

2026-05-14 00:27:50

Over the years, I’ve written two op-eds for The New York Times about quantum computing, at the NYT editors’ invitation:

I’ve also visited the NYT office and helped NYT reporters with numerous stories about quantum computing and beyond. In the wake of Cade Metz’s infamous NYT hatchet job against Scott Alexander and the rationalist community, I resolved no longer to talk to Metz, but it never even occurred to me to extend that to a broader ban against the NYT itself. After all, it’s the friggin’ NYT!

This week, however, Nicholas Kristof—a man who I praised fulsomely in a blog post 20 years ago, for his coverage of the genocide in Darfur—has used the NYT to broadcast the oldest, crudest form of antisemitic libel, accusing the Jews of garishly preposterous crimes (poisoning wells? baking blood into matzo? in this case, training dogs to rape prisoners). As countless others have since pointed out, the sole source for this ludicrous accusation was a Hamas-linked organization called “Euro-Med” that praised the October 7 “martyrs.” Kristof’s piece came out the same day an Israeli human rights organization released a major report meticulously documenting Hamas’s mass rapes on October 7 (something that did happen), and was apparently designed to neutralize the impact of that report.

While the debunking of Kristof came fast, it wasn’t fast enough: now that antisemitic blood libel (dog libel?) has the Gray Lady’s imprimatur, I expect it to ignite violence against Jews all over the world, and I expect my kids to be less safe. So I hereby announce:

I, Scott Aaronson, member of the National Academy of Sciences, will no longer cooperate with anyone from the NYT on anything—neither quantum computing stories nor anything else—until the NYT, at minimum, formally retracts its dog-rape claim and fires Kristof.

To my friends doing good science and technology writing for NYT: I’m sorry, I hope you understand, and please fight for what’s right within the organization.

I say “at minimum” because I hope this scandal causes a major reckoning for the NYT as an institution. I don’t know if any libel or other laws were broken, but if they were, I hope Kristof personally and the NYT as a whole get sued for whatever they’re worth.


Luckily, others have already said most of what I would’ve wanted to. Here’s Eli Lake, in a passage that part of me wishes I’d never read:

Let’s start with what is known about the biology of male dogs. Their penises are small and thin. They become erect only when they smell the pheromones of a female dog in heat. Brandon McMillan, the three-time Emmy-winning host of CBS’s Lucky Dog, who has spent 25 years training animals, told me he had never heard of a dog who was trained to rape a human being and doubted this was possible.

“When a female is in heat, the pheromones released carry it to the male canine,” McMillan said. “That’s how they reproduce and the miracle happens. I don’t see how you would train a dog to do that. The dog has to get turned on, for lack of a better word.”

On the NYT’s “vetting process” (or lack thereof), here’s comedian Jeff Maurer:

Imagine that you’re a fact checker or editor at The New York Times. You know damn well that you’ve nabbed a lifeboat while most of your field is being picked apart by hagfish at the bottom of the North Atlantic. If you’re a lowly fact-checker, you might be in your 20s and desperate to rationalize your choice to enter a field that might as well be buggy repair. I’ve had similar gigs and been in similar environments; I feel like I can get in these people’s heads. And that’s why I’m convinced that the dog claim should have made things easier for the fact checkers.

One day, Nicholas Kristof — the award-winning columnist who has been with the Times since it was Farmer Sulzberger’s New Amsterdam Seed Catalogue — comes to you, the lowly fact checker. He’s working on a big piece, and it’s scorching hot: It’s sexual assault plus Israel/Palestine, which is like detonating a nuclear weapon inside a volcano. You know it will be very tough to say “I don’t find this source credible” or “we can’t run that without more substantiation”. And that’s true even before we consider that you probably run in a social circle in which Israel is thought of as a substantially-more-evil version of The Borg Collective from Star Trek (which is why they have those vaporizing weapons).

But you do notice that Kristof’s sources are thin. In fact, he only has two named sources. The first source tells him a story that has substantially changed since he told it to The Washington Post two years ago. The second source celebrated the October 7 attacks and previously made torture claims that he later recanted. Much of the testimony contains no detail that could be corroborated. The organizations and journalists that Kristof relies on are highly dubious and engaged in the mutual citation circle jerk that is typically the way that lies enter the zeitgeist. There may be some truth in there somewhere, but Kristof’s methods are — what’s the journalistic term? — total ass. But you’re afraid that if you question things, you’ll be thought of as pro-rape, or worse yet: pro-Israel.

And in that context, the dog story is a godsend. It frees you from addressing the broader problems with the article, because you can pick the battle of focusing on the most-outrageous, least-believable claim. The article doesn’t need the dog rape detail — it’s already full of shocking-if-true claims. Striking that one section lets you maintain a sheen of professionalism, and when the thing explodes, you also get to say “You should have seen what he wanted to print”.

The fact that that didn’t happen tells me that the level of scrutiny for claims about Israel at the Times is absolute zero. Even if some Israeli somewhere did manage to turn his dog into a sex criminal, there is not nearly enough evidence for that claim to run in the Times. It’s an unsubstantiated rumor, and a vile one. I think that a person who believes that without compelling evidence is showing evidence of a damaged mind. And I think that a newspaper that publishes that without compelling evidence is showing evidence of a damaged vetting process, or perhaps a vetting process that — when it comes to claims about Israel — doesn’t exist at all.

Last but not least, here’s Haviv Rettig Gur, with righteous anger entirely appropriate to the situation, yet also a determination to expose and fight prisoner abuse, which is a genuine problem in Israel as it is in pretty much every other country on earth:

No, dogs aren’t being trained to systematically rape prisoners, you nattering halfwits. And no, Hamas propaganda operatives are not reliable sources on the question of Israeli crimes. The vast, vast majority of soldiers are honorable men who walked into fire so our families may live. The whole world may turn on them; I will stand with them, grateful for their sacrifice. And Kristof, a willing purveyor of propaganda happily feigning that he can’t see the water and thrilling to a moral crusade engineered by would-be genocidaires he pretends not to understand — is no messenger of moral reckoning.

But friends, so fucking what. Let the narcissistic guttersnipes strut their moral emotions before the world, let the UN publish endless reports that don’t hold up to basic scrutiny, let the NGOs dream their rabid, sick dreams that no journalist ever fact-checks — yes, they’re lying. But so fucking what.

We still, for ourselves — because fuck them — must see that it isn’t all fake. The problem [of abuse in Israeli prisons] is real. It’s far smaller than they claim, but real nonetheless. And when discipline and morality break down, it can only get worse. We either crack down now or we watch it fester and grow.

And our own Ben Gvirs are stubbornly refusing to fix what is actually broken, the real thing in the real world.

And so we are caught in a strange sort of vise, the same vise we find ourselves in with the genocide lie: A vast propaganda machine that seeks to destroy us — countless activists too high on their own self-regard to see the irony of raging against a ‘genocide’ while calling for the erasure of a people — all while our own incompetent, venal, self-absorbed political class insist in their mindless chatter on confirming every claim of our enemies for sheer, bald egomania.

I’m sick of it all. I know you’re all sick of it too.

And that, in a nutshell, is what I think about this.

Just because they’re lying, just because a vast perfidious campaign has overwhelmed global elites in a bid to clear the way for our removal, just because they’re still, after two millennia, building their visions of redemption on The Evil Jew — doesn’t mean there isn’t also, separately, a problem on the ground.

So what do we do now? Simple. We see it, we acknowledge it’s happening, we bring our rage to our inept leaders until they bend to our will and act to stop the breakdown…

And we soldier on.

We soldier on because the enemy really is coming to murder us. Because Hamas must still go if Gaza is ever to rise to a new day. Because Hezbollah will yet destroy Lebanon on the altar of destroying us. Because the ayatollahs built their whole damn religion on the extermination of our children.

We fix the broken things within us as if the pogromists and their simpering Kristofs don’t exist. We owe no answers to the propagandists who seek to clear a path to our deaths. But we do owe answers to ourselves.

Let the screaming mob rage and churn like so much sea-foam. Despite that raging mob, despite the enemy who still seeks our destruction, and yes, despite feckless incompetents like Ben Gvir, our minister of prisons, who claim to lead us — we remain the strongest, freest Jews who ever lived, more capable and committed than our self-destructive enemies ever imagined. And the task is still before us, yet to be completed, the sacred duty given to our generation to ensure our children don’t have to face the genocidaires who now surround us. We do not waver, we do not stumble. We soldier on.

Because fuck them all.


Inspired by Haviv’s energy, I’m leaving the comments on this post turned off. Why? Because outside the dog-rape libel, I was having a pretty good week! On Monday I visited Quantinuum in Colorado and got a personal tour of Helios 1, considered the all-around most powerful quantum computer on earth at the moment. Now I’m visiting Stony Brook University, where I’ll give two talks and meet lots of interesting people. Then I’m headed to NYC for the annual meeting of Quanta magazine’s advisory board.

Right now, then, I have neither the time nor the inclination to battle the infinite army of brain-eaten zombies who arrive for every single post touching on Israel. My life will be better this way.

The Trevisan Award and the Decimal Digits of Powers of 2

2026-05-11 03:54:41

WHOA … I’ve won the inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science! This has a particular meaning for me as someone who knew Luca Trevisan as well as I did for 25 years — who had him as a professor and thesis committee member, whose blog bounced off his blog, who benefitted tremendously from his expository work in TCS — before Luca tragically succumbed to cancer two years ago.

As I told the committee, receiving this award makes me want to use my blog for more actual CS theory exposition, and less venting, in order to retrospectively become worthier of such an honor.

I’m ridiculously grateful to the entire TCS community — my people, my homies — for letting me do what I do.

If you’re curious, here’s the official citation:

The inaugural Luca Trevisan Award for Expository Work in Theoretical Computer Science is awarded to Scott Aaronson for his sustained and high-impact inspirational efforts to explain and promote our field to broad audiences. His blog Shtetl-Optimized has hosted remarkably frequent and elaborate posts over more than two decades, and has become a central meeting place for wide-ranging conversations across the TCS and Physics communities. Scott Aaronson’s blog posts contain crystal-clear, informative expositions of exciting new results, calibrated evaluations of technological claims, and profound analyses of topics in these fields and (way) beyond. The uniquely enthusiastic and witty style of his writings (including his book Quantum Computing Since Democritus and his other lecture notes and surveys), lectures, and interviews have made him a top invitee for both popular and professional appearances, attracting large audiences. These qualities have inspired many students to enter the field, and made Scott Aaronson a go-to person for journalists and scientists alike looking for a definitive word on the latest scientific activity in TCS and quantum computing.


In the rest of this post, I’m going to start practicing what I preached—y’know, about turning over more of this blog to actual exposition, of a kind that the Trevisan Award could plausibly be meant to encourage. I’ll start with something that’s been on my back burner for the past couple months: namely, the (lightly edited) transcript of a talk that I gave this spring at UT Austin’s undergraduate math club. So, without further ado, and in memory of Luca…


On the Decimal Digits of Powers of 2
by Scott Aaronson

Hi! I’ve given six previous talks here at UT’s math club, some on relatively “important” topics—Gödel’s Theorem, time travel, Huang’s proof of the Sensitivity Conjecture, and so on.

Today, I want to talk about an unimportant question, one that my son Daniel, who was then 8, and who’s sitting here in the front row (along with his sister Lily), asked me a few months ago.

Daniel asked: which powers of 2 can you double without needing to carry digits? Clearly 1, 2, 4, 32, and 1024 all have this property, their doubles being 2, 4, 8, 64, and 2048 respectively. Are there any others?

Since I happen to have the powers of 2 up to 2^20 = 1,048,576 committed to memory since childhood, I confirmed that there were no other examples up to there: 128, 256, 512, 2048, etc. all require carries. So I told Daniel: I can’t find any other example, and on that basis, I conjecture that there aren’t any more. But if that conjecture is true, I don’t know if it will ever be proven, by humans or even AI!

Then I googled it, and saw that this is a known question (not very well known, but there’s a StackExchange post about it). And indeed it had been checked for exponents up into the millions, and no further example had been found.
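For concreteness, here’s what the naive check looks like in Python (my sketch; the cutoff of 2000 is arbitrary):

```python
# Daniel's question, brute-forced: a number can be doubled without
# carrying exactly when every decimal digit is at most 4.
def carry_free(m: int) -> bool:
    return all(d <= '4' for d in str(m))

hits = [n for n in range(2000) if carry_free(2**n)]
print(hits)  # -> [0, 1, 2, 5, 10], i.e. the powers 1, 2, 4, 32, 1024
```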


Why did I become confident so quickly that yes, 1024 is probably the last example of a power of 2 that can be doubled without carrying?

Because of the heuristic that the decimal digits of 2^n are more or less “random,” apart from various constraints that are irrelevant here (like that the last digit always cycles among 2, 4, 8, and 6). And 2^n has about n/log₂10 decimal digits. Since only the digits 0, 1, 2, 3, 4 can be doubled without carrying, the probability of 2^n being a counterexample should therefore be about $$ (\frac{1}{2} )^{n / \log_2 10}. $$

So, if we’ve already checked up to (say) n=1000, then the probability of a larger counterexample should be at most

$$ \sum_{n=1001}^{\infty} (\frac{1}{2} )^{n / \log_2 10} $$

which, when we sum the geometric series, is exceedingly close to 0.
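A quick numerical evaluation of that tail (a sketch; the order of magnitude is what matters):

```python
import math

# The tail of the geometric series sum_{n=1001}^infty r^n, where
# r = (1/2)^(1/log2(10)) is the per-exponent heuristic probability
# factor, equals r^1001 / (1 - r).
r = 0.5 ** (1 / math.log2(10))
tail = r ** 1001 / (1 - r)
print(tail)  # about 1e-90: "exceedingly close to 0" indeed
```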


Ah, but why did I say that I don’t know if the conjecture will ever be proven? Because it seems to belong to a large class of similar statements, none of which mathematicians have had any idea how to prove.

Variant of a conjecture by Jeffrey Shallit: 65,536 is the only power of 2 that has no power of 2 among its decimal digits.

Freeman Dyson’s conjecture (2005): There’s no power of 2 for which, when you reverse the decimal digits, you get a power of 5.

Paul Erdős’s conjecture (1979): For every n≥9, there’s at least one ‘2’ in the base-3 representation of 2^n.

Or looking even more broadly:

Conjecture: The decimal expansion of π is not all 6’s and 7’s after some finite point.

(This would follow from the stronger conjecture that π is base-10 normal—that is, that every finite pattern of decimal digits occurs in it with the limiting frequency you’d expect.)

Or:

Conjecture: π+e is an irrational number.

What all the above conjectures have in common, and what I find so fascinating about them, is that they seem hopeless to prove for exactly the same reason why they seem almost certainly true. Namely, they all seem to be true “merely” because it would be too insane of a coincidence were they false!

The trouble is, that’s not the sort of reason that seems amenable to turning into a proof. Fermat’s Last Theorem is an interesting exception that proves the rule here. That x^n+y^n=z^n has no nontrivial integer solutions for n≥3 did seem almost certainly true on statistical grounds for n≥4 (and for the n=3 case, a proof goes back to Euler). And of course, FLT was ultimately provable. But Wiles’s eventual proof exploited a lucky connection between the Fermat equation and deep, fancy things like modular forms and elliptic curves. At no point did the proof formalize the statistical argument that a 12-year-old could understand, for why the theorem is “almost certainly true.” It simply had nothing to do with the statistical argument.

So then, if you wanted to prove conjectures like my son Daniel’s, or like Shallit’s or Dyson’s or Erdős’s above, the question would be: could these “recreational” problems about base-10 representations ever be connected to anything similarly deep? Right now, it’s very hard to see how they could.

Still, all hope is not lost! Here’s a striking theorem that I learned about when I researched this:

Theorem (James Maynard, 2016): For every digit a from 0 to 9, there are infinitely many primes with no a’s in their base-10 representation.

The proof uses heavy Fourier-analytic techniques. Likewise, presumably there are infinitely many primes that you can double without carrying—2, 3, 11, 13, 23, 31, …—because the primes are much denser than the powers of 2! And presumably there are infinitely many primes that are missing any two decimal digits, or even whose decimal digits consist entirely of 0’s and 1’s. But Maynard’s techniques are not yet powerful enough to prove such things.
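Here’s an empirical look (a sketch, in no way a proof) at those doublable-without-carrying primes, which do seem to keep appearing, consistent with the density heuristic:

```python
# Primes all of whose decimal digits are <= 4 can be doubled without
# carrying.  List the ones below 200, using naive trial division.
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

carry_free_primes = [p for p in range(1, 200) if is_prime(p)
                     and all(c <= '4' for c in str(p))]
print(carry_free_primes)  # -> [2, 3, 11, 13, 23, 31, 41, 43, 101, 103, 113, 131]
```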


Even though I promised a topic of no importance today, I can’t resist pointing out a potential connection here to one of the biggest questions on earth right now, and something that’s professionally interested me for the past four years: namely, the question of how to align powerful AIs with human values and prevent them from destroying the world.

Paul Christiano, and the Alignment Research Center in Berkeley that he founded, have developed a whole program for how to make AI safe that depends on the possibility of formalizing “heuristic arguments”—that is, the kinds of arguments that convince us that the above conjectures are all almost certainly true, even without proofs of them.

The intuition here is that we’ll never have a rigorous proof that, for example, a real-world neural network will behave safely under all circumstances—it’s just too complicated. The best we can hope for is an argument that, e.g., “for this model to scheme against humans would require a crazy unexplained coincidence in its weights.” But how can we hope to formalize such arguments? As baby test cases, can we at least formalize our intuitions for why π is normal, or why Daniel’s conjecture is true, in some principled way?

ARC has tried: there’s a 2022 paper by Christiano, Neyman, and Xu on “Formalizing the presumption of independence.” But it’s tricky, and ARC itself would be the first to agree that the existing results are unsatisfying. How do we even formalize the intuition that, for example, you should be willing to bet at even odds against the 10^100th digit of π being a 5?


In the rest of this talk, I’d like to circle back to Daniel’s original question about powers of 2, and show you some things that can be proved about it—with thanks to Greg Kuperberg and my other friends on Facebook, and in some cases to GPT5Pro.

Let’s start with the following easier question. Is there a power of 2 whose decimal digits start with 31415? Or with the complete works of Shakespeare, encoded by letter values in some suitable way? Or with a googolplex digits all of which can be doubled without carrying (as Daniel wanted)?

I claim that the answers are yes, yes, and yes! How do we prove this?

The key fact we’ll use is simply that log₁₀2 is an irrational number. (If you don’t remember the proof: suppose log₁₀2 = a/b for positive integers a, b. Then 10^(a/b) = 2, so 10^a = 2^b. But that’s impossible, since the left-hand side is divisible by 5 while the right-hand side isn’t.)

Suppose we want k as a prefix, where 10^(d-1) ≤ k < 10^d. Then we want integers n,r such that

$$ k 10^r \le 2^n < (k+1) 10^r, $$

i.e. taking the base-10 log,

$$ \log_{10}k + r \le n \log_{10}2 < \log_{10}(k+1) + r. $$

In other words, the fractional part of n·log₁₀2 needs to lie between the fractional part of log₁₀k and the fractional part of log₁₀(k+1) (where again, k is given).

But now we can appeal to the following Key Fact: if α is any irrational number, then the set

{the fractional part of nα : n∈N}

is dense in the interval [0,1]. Or equivalently, if I rotate around and around the unit circle by 2απ radians each time, then if α is irrational, I’ll eventually get arbitrarily close to any given point on the unit circle.

Why is the key fact true? Just the pigeonhole principle! Clearly, for any ε>0, the fact that α is irrational means that there must be two points, xα and yα, whose fractional parts are distinct yet closer together than ε. But then, by adding multiples of (x-y)α, we can get our fractional part to be ε-close to anything in [0,1].

And to sum up, this is why there must be a power of 2 whose decimal representation starts with the complete works of Shakespeare, or with a googolplex carry-free digits! (Indeed, from the above discussion, we could even extract an efficient algorithm for constructing that power of 2.)
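Here’s a small search for the 31415 case, using floating-point fractional parts (my sketch; double precision is ample at this interval width, though a careful implementation would use exact rational arithmetic):

```python
import math
import sys

# Find a power of 2 whose decimal expansion starts with 31415, by
# searching for n with frac(n * log10(2)) in [log10(3.1415), log10(3.1416)).
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(10**7)  # 2^n will have tens of thousands of digits

lo, hi = math.log10(3.1415), math.log10(3.1416)
n = 1
while not (lo <= (n * math.log10(2)) % 1 < hi):
    n += 1
print(n, str(2**n)[:5])  # some n for which 2^n begins with the digits 31415
```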


So much for the first decimal digits of 2^n. Now let’s look at 2^n‘s last decimal digits!

Here there are some complications, arising from the twin facts that

(a) 10 is composite, and
(b) 2 is one of its factors.

But we can deal with those complications!

For starters: what are the possible last decimal digits of 2n?

1, 2, 4, 8, 6, 2, 4, 8, 6, …

So, there’s an initial 1, but then we cycle forever through the even nonzero digits.

What about the last two digits of 2n? If you’ve never tried this before, it’s instructive to work it out:

01, 02, 04, 08, 16, 32, 64, 28, 56, 12, 24, 48, 96, 92, 84, 68, 36, 72, 44, 88, 76, 52, 04, …

So, there’s an initial 01 and 02, but after that, we cycle forever through 20=4×5 possibilities, namely all the possible multiples of 4 whose last digits are nonzero.

You can check that the general pattern is: the last k decimal digits of 2^n have an initial segment that looks like 001, 002, 004, 008, …, 2^(k-1). And then there’s an eternal cycle of length 4×5^(k-1), where the last digit can be any of 2,4,6,8, while every other digit can either be any possible even digit or any possible odd digit, depending on the digits to its right—in (if you want to say this a fancier way) a recursively defined embedding of the powers of 2 mod 5^k into the cyclic group Z/10^k. So, there’s an initial “runup” as you fill out all the needed powers of 2, but then once that’s done, you just cycle around forever in an embedding of Z/5^k into Z/10^k, because

(a) 2 happens to be a primitive root mod 5^k for every k, and
(b) 5 divides 10.
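Both claims are easy to sanity-check numerically; here’s a quick Python sketch (using the built-in three-argument pow for modular exponentiation):

```python
# Sanity checks: the multiplicative order of 2 mod 5^k is 4 * 5^(k-1),
# and that's also the period of the last k decimal digits of 2^n (n >= k).
def mult_order(a: int, m: int) -> int:
    x, t = a % m, 1
    while x != 1:
        x = x * a % m
        t += 1
    return t

for k in range(1, 7):
    assert mult_order(2, 5**k) == 4 * 5**(k - 1)
    assert pow(2, k + 4 * 5**(k - 1), 10**k) == pow(2, k, 10**k)
print("cycle length 4*5^(k-1) confirmed for k = 1..6")
```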

So in particular, and relevant to Daniel’s conjecture: there exists a power of 2 whose last googolplex digits can all be doubled without carrying. Why? For the last digit, you can pick 2 or 4. Then, for each of the remaining digits, you can pick 1 or 3 if that digit is constrained to be odd, or 0, 2, or 4 if it’s constrained to be even. Lots of choices that work!


So, we can avoid carries in the leftmost digits of 2^n, we can avoid carries in the rightmost digits … so that “merely” leaves all the digits in the middle, where who the hell knows! Empirically, the digits seem to pass every standard randomness test that you can throw at them. So in particular, e.g., the fraction of the digits that are in {0,1,2,3,4} seems to converge inexorably towards 50%, so that it’s extremely plausible to conjecture that the fraction is less than 49% or more than 51% for only finitely many values of n. But of course, that’s presumably even harder to prove than Daniel’s original conjecture.


OK, last topic. Suppose we want to program a computer to check Daniel’s conjecture up to 2^n, for some huge n. What algorithm will do that most efficiently? A naive algorithm would just calculate 1, 2, 4, …, 2^n and check all the digits of all of them. That takes O(n^2) time, since each 2^k, for k=0,1,…,n, has ~k·log₁₀2 = O(k) decimal digits.

We can improve this to roughly O(n) time, by simply noticing that we only need to check O(1) digits per power of 2 in expectation, presumably, until we find the first digit that requires carrying. Then we don’t even need to compute the remaining digits: we can simply move on to the next power of 2. (This sort of trick is used all over the place in the design of fast algorithms.)
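Here’s a sketch of that early-exit idea (my illustration, not anyone’s production code; for simplicity the termination test computes 2^n in full, which a serious implementation would replace by comparing j against ⌊n·log₁₀2⌋+1):

```python
# To rule out 2^n, we only need its last few digits, obtainable as
# pow(2, n, 10**j): stop at the first digit >= 5, which in expectation
# happens after examining only O(1) digits.
def is_carry_free_power(n: int) -> bool:
    j = 1
    while True:
        tail = pow(2, n, 10**j)
        digit = tail // 10**(j - 1)     # j-th decimal digit from the right
        if digit >= 5:
            return False                # this digit forces a carry
        if 2**n < 10**j:                # we've now seen every digit
            return True
        j += 1

hits = [n for n in range(3000) if is_carry_free_power(n)]
print(hits)  # -> [0, 1, 2, 5, 10]
```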

But when I posted about Daniel’s problem on Facebook, my friend Greg Kuperberg (who’s a mathematician at UC Davis) noticed that further improvements are possible. To wit: 8×6 = 48, which again ends in 8. So, 8×16 ends in 8, as does 8×16^k for every k≥0. Meaning: no 2^n where n=3+4k can possibly be a counterexample to Daniel’s conjecture. They’re all ruled out!

Likewise, 64×1,048,576 ends in 64, so no 2^n of the form n=6+20k can be a counterexample. They’re all ruled out as well.

We can keep going this way, filling out the “search tree” of potential counterexamples to Daniel’s conjecture via breadth-first search. At the root of the search tree, we try all possibilities for the last digit. One level deeper, we try all possibilities for the second-to-last digit, and so on. As we go, we prune subtrees according to constraints like the ones above that we keep discovering and adding.
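To illustrate the pruning (my own sketch, not Greg’s actual construction): for n ≥ k, the last k digits of 2^n depend only on n mod 4×5^(k-1), so we can track which residue classes of n survive at each depth:

```python
# At each depth k, keep only the residue classes of n (mod 4 * 5^(k-1))
# for which the last k digits of 2^n are all <= 4, i.e. carry-free.
def survivor_counts(depth: int):
    counts = []
    for k in range(1, depth + 1):
        period = 4 * 5**(k - 1)
        live = [r for r in range(k, k + period)
                if all(d <= '4' for d in str(pow(2, r, 10**k)).zfill(k))]
        counts.append(len(live))
    return counts

print(survivor_counts(6))  # begins [2, 5, ...], growing ~2.5x per level
```

The period grows by a factor of 5 per level while the survivors grow by only about 5/2, so the density of live classes halves at each level, which is exactly where the exponent 1−log₅2 comes from.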

When I worked this out, I got an algorithm for checking Daniel’s conjecture up to 2^n, which under reasonable assumptions takes time only O(n^α), where α = 1−log₅2 ≈ 0.569, and space only polylog(n).

Paul Crowley (who’s my Facebook friend) then actually implemented this algorithm, and he tells me that he used it to check that Daniel’s conjecture holds all the way up to $$2^{10^{21}}$$ (!!), using 40 minutes on a 128-core machine.

So, to return at last to the first thing I told Daniel: yes, I think his conjecture is almost certainly true, even though I have no idea when, if ever, the human race or its successors will have a proof.

Will you heed my warnings NOW?

2026-04-29 16:11:59

Holy crap … yesterday I was elected to the US National Academy of Sciences! If you don’t believe me, click the link and keep scrolling down until you hit the name “Aaronson.” But then continue scrolling to see 144 other inductees, including my IAS postdoctoral classmate Maria Chudnovsky, my longtime friend and colleague Salil Vadhan, and even Janet Yellen. I’m humbled to be in such company.

Years ago, somewhere on this blog, I mused that, if I were ever invited to join NAS, I hoped I’d follow the wisdom of Richard Feynman, who famously resigned his NAS membership, comparing it to an honor society back at his high school that spent most of its time debating who should be a member of the honor society. Feynman was also annoyed at having to pay dues.

But now that I’m actually faced with the choice, it’s like, dude! At my advanced age of 44, I’ve encountered so many people who dislike me or even sneer at me, and so many clubs that won’t have me as a member, that I feel mostly gratitude and warmth toward a fine club like NAS that will have me as a member. Anyway, I’ll certainly try it out to see what it’s like—even Feynman did that!

A few hours after I started getting congratulatory emails, for which I was thankful, someone from UT Austin’s press office asked me how I feel about this “culmination” and “capstone” of my entire research career. I replied, look, I know I’ve slowed down a lot since my nubile twenties, but I still hold out the hope that this isn’t any kind of “capstone”!

In any case, I’m ridiculously grateful to all the friends, family, colleagues, and readers who believed in me and helped me reach wherever this is.


Now for a totally different topic, though one that will ultimately loop back to the first:

Last week, I did an Ask Me Anything about quantum computing and blockchain for stacker.news, a forum devoted to bitcoin. Thanks to Will Scoresby for organizing it.

As a longer-term commitment, I also collaborated with my colleagues Dan Boneh, Justin Drake, Sreeram Kannan, Yehuda Lindell, and Dahlia Malkhi, in a panel convened by Coinbase, to put out a detailed position paper about the quantum threat to cryptocurrencies and how best to respond to it. Take a look!

Notably, the situation evolved even while we were writing our position paper—for example, with the major recent papers from Google and Caltech/Oratomic that I blogged about a month ago.

I’d now like to add a few words of my own, not presuming to speak for my fellow Coinbase panelists.

See, some of the most reputable people in quantum hardware and quantum error-correction—people whose judgment I trust more than my own on those topics—are now telling me that a fault-tolerant quantum computer able to break deployed cryptosystems ought to be possible by around 2029.

Maybe they’re overoptimistic. Maybe it will take longer. I dunno. I’m not a timing guy.

But here’s what I do know: the companies racing to scale up fault-tolerant QC have no plans to slow down in order to “give cybersecurity time to adapt” or whatever. The way they see it, cryptographically relevant QCs will plausibly be built sometime soon: indeed, it’s ultimately unavoidable, even if people’s only interest in QC was to do quantum simulations for materials science and chemistry. So, given that reality, isn’t it better that it be done first by mostly US-based companies in the open, than by (let’s say) Chinese or Russian intelligence in secret? And besides, haven’t there already been years of warnings and meetings about the quantum threat to RSA, Diffie-Hellman, and elliptic curve cryptography? Aren’t many in cybersecurity still in denial about the threat? Haven’t these slumberers shown that they won’t wake up until dramatic achievements in fault-tolerant QC roust them—the way Anthropic’s Mythos model has now jolted even the most ostrich-like about the cybersecurity risks of AI? So, mixing metaphors, mightn’t we just as well rip this Band-Aid off ASAP, rather than giving foreign intelligence agencies extra years to catch up? Indeed, when you think about it that way, isn’t racing to build a cryptographically relevant QC, as quickly as possible, the most ethical, socially responsible thing for an American QC company to do?

Is the above line of reasoning suspiciously self-serving and convenient? Does it remind you of the galaxy-brained arguments that AI company after AI company has offered over the last decade for why “really, if you think about it, accelerating toward dangerous superintelligence is the safest course of action that we could possibly take”? I.e., the arguments that led to the current frenzied AI race, which some believe imperils all life on earth?

It’s not my place here to answer such questions; I leave further ethical and geopolitical debate to the comment section! My point is simply: whether or not anyone likes it, this is how some of the leading QC companies are now thinking about the Shor of Damocles that they genuinely believe now hangs over the Internet.

And I’d say that that makes my own moral duty right now ironically simple and clear: namely, to use my unique soapbox, as the writer of The Internet’s Most Trusted Quantum Computing Blog Since 2005™, to sound the alarm.

So, here it is: if quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you. This post is your warning. Please start switching to quantum-resistant encryption, and urge your company or organization or blockchain or standards body to do the same.

Yea, heed my warning, for it comes not from some WordPress-using rando, but from the inventor of BosonSampling and PostBQP and shadow tomography, the Schlumberger Centennial Chair and Founding Director of the Quantum Information Center at the University of Texas at Austin, and (wait for it) new member of the US National Academy of Sciences, that august and distinguished body brought into being by President Abraham Lincoln in 1863.

Because, you know, none of this is about me. It’s only about you. And whether you’ll listen to me.

Three greats who we’ve lost

2026-04-19 14:30:48

Sir Charles Antony Richard Hoare (1934-2026) won the 1980 Turing Award for numerous contributions to computer science, including foundational work on concurrency and formal verification and the invention (with Dijkstra) of the dining philosophers problem. But he’s perhaps best known, to pretty much everyone who’s ever studied CS, as the inventor of the Quicksort algorithm. I’m sorry that I never got to meet him.

Michael O. Rabin (1931-2026), of Harvard University, was one of the founders of theoretical computer science and winner of the 1976 Turing Award. In 1959, he and Dana Scott introduced the concept of a “nondeterministic machine”—that is, a machine with exponentially many possible computation paths, which accepts if and only if there exists an accepting path—which would of course later play a central role in the formulation of the P vs. NP problem. He’s also known for the Miller-Rabin primality test, which helped to establish randomness as a central concept in algorithms, and for many other things. He’s survived by his daughter Tal Rabin, also a distinguished theoretical computer scientist. I was privileged to meet the elder Rabin on several visits to Harvard, where he showed me great kindness.

Sir Anthony Leggett (1938-2026), of the University of Illinois Urbana-Champaign, was one of the great quantum physicists of the late 20th century, and recipient of the 2003 Nobel Prize for his work on superfluidity. When I knew him, he was a sort of elder statesman of quantum computing and information, who helped remind the rest of us of why we got into the field in the first place—not to solve Element Distinctness moderately faster, but to learn the truth of quantum mechanics itself. Tony insisted, over and over, that the validity of quantum mechanics on the scale of everyday life is an open empirical problem, to be settled by better experiments and not by a-priori principles. I first met Tony at a Gordon Research Conference in southern California. Even though I was then a nobody and he a recent Nobel laureate, he took the time to listen to my ideas about Sure/Shor separators, and to suggest (correctly) what we now call 2D cluster states as an excellent candidate for what I wanted. In all my later interactions with Tony, at both the University of Waterloo (where he was visiting faculty for a while) and at UIUC (where my wife Dana and I considered taking jobs), he was basically the friendliest, funniest guy you could possibly meet at his level of achievement and renown. I was bummed to hear about his passing.

Before we start on quantum

2026-04-07 15:51:11

Imagine that every week for twenty years, people message you asking you to comment on the latest wolf sighting, and every week you have to tell them: I haven’t seen a wolf, I haven’t heard a wolf, I believe wolves exist but I don’t yet see evidence of them anywhere near our town.

Then one evening, you hear a howl in the distance, and sure enough, on a hill overlooking the town is the clear silhouette of a large wolf. So you point to it — and all the same people laugh and accuse you of “crying wolf.”

Now you know how it’s been for me with cryptographically relevant quantum computing.


I’ve been writing about QC on this blog for a while, and have done hundreds of public lectures and interviews and podcasts on the subject. By now, I can almost always predict where a non-expert’s QC question is going from its first few words, and have a well-rehearsed answer ready to go the moment they stop talking. Yet sometimes I feel like it’s all for naught.

Only today did it occur to me that I should write about something more basic. Not quantum computing itself, but the habits of mind that seem to prevent some listeners from hearing whatever I or other researchers have to tell them about QC. The stuff that we’re wasting our breath if we don’t get past.

Which habits of mind am I talking about?

  1. The Tyranny of Black and White. Hundreds of times, I’ve answered someone’s request to explain QC, only to have them nod impatiently, then interrupt as soon as they can with: “So basically, the take-home message is that quantum is coming, and it’ll change everything?” Someone else might respond to exactly the same words from me with: “So basically, you’re saying it’s all hype and I shouldn’t take any of it seriously?” As in my wolf allegory, the same person might even jump from one reaction to the other. Seeing this, I’ve become a fervent believer in horseshoe theory, in QC no less than in politics. Which sort of makes sense: if you think QCs are “the magic machines of the future that will revolutionize everything,” and then you learn that they’re not, why wouldn’t you jump to the opposite extreme and conclude you’ve been lied to and it’s all a scam?
  2. The Unidimensional Hype-Meter. “So … [long, thoughtful pause] … you’re actually telling me that some of what I hear about QC is real … but some of it is hype? Or—yuk yuk, I bet no one ever told you this one before—it’s a superposition of real and hype?” OK, that’s better. But it’s still trying to project everything down onto a 1-dimensional subspace that loses almost all the information!
  3. Words As Seasoning. I often get the sense that a listener is treating all the words of explanation—about amplitudes and interference, Shor versus Grover, physical versus logical qubits, etc.—as seasoning, filler, an annoying tic, a stalling tactic to put off answering the only questions that matter: “is Quantum real or not real? If it’s real, when is it coming? Which companies will own the Quantum space?” In reality, explanations are the entire substance of what I can offer. For my experience has consistently been that, if someone has no interest in learning what QC is, which classes of problems it helps for, etc., then even if I answer their simplistic questions like “which QC companies are good or bad?,” they won’t believe my answers anyway. Or they’ll believe my answers only until the next person comes along and tells them the opposite.
  4. Black-Boxing. Sometimes these days, I’ll survey the spectacular recent progress in fault-tolerance, 2-qubit gate fidelities, programmable hundred-qubit systems, etc., only to be answered with a sneer: “What’s the biggest number that Shor’s algorithm has factored? Still 15 after all these years? Haha, apparently the emperor has no clothes!” I’ve commented that this is sort of like dismissing the Manhattan Project as hopelessly stalled in 1944, on the ground that so far it hasn’t produced even a tiny nuclear explosion. Or the Apollo program in 1967, on the ground that so far it hasn’t gotten any humans even 10% of the way to the moon. Or GPT in 2020, on the ground that so far it can’t even do elementary-school math. Yes, sometimes emperors are naked—but you can’t tell until you actually look at the emperor! Engage with the specifics of quantum error correction. If there’s a reason why you think it can’t work beyond a certain scale, say so. But don’t fixate on one external benchmark and ignore everything happening under the hood, if the experts are telling you that under the hood is where all the action now is, and your preferred benchmark is only relevant later.
  5. Questions with Confused Premises. “When is Q-Day?” I confess that this question threw me for a loop the first few times I heard it, because I had no idea what “Q-Day” was. Apparently, it’s the single day when quantum computing becomes powerful enough to break all of cryptography? Or: “What differentiates quantum from binary?” “How will daily life be different once we all have quantum computers in our homes?” Try to minimize the number of presuppositions.
  6. Anchoring on Specific Marketing Claims. “What do you make of D-Wave’s latest quantum annealing announcement?” “What about IonQ’s claim to recognize handwriting with a QC?” “What about Microsoft’s claim to have built a topological qubit?” These questions can be fine as part of a larger conversation. Again and again, though, someone who doesn’t know the basics will lead with them—with whichever specific, contentious thing they most recently read. Then the entire conversation gets stuck at a deep node within the concept tree, and it can’t progress until we backtrack about five levels.

Anyway—sorry for yet another post of venting and ranting. Maybe this will help:

The wise child asks, “what are the main classes of problems that are currently known to admit superpolynomial quantum speedups?” To this child, you can talk about quantum simulation and finding hidden structures in abelian and occasionally nonabelian groups, as well as Forrelation, glued trees, HHL, and DQI—explaining how the central challenge has been to find end-to-end speedups for non-oracular tasks.

The wicked child asks, “so can I buy a quantum computer right now to help me pick stocks and search for oil and turbocharge LLMs, or is this entire thing basically a fraud?” To this child you answer: “the quantum computing people who seek you as their audience are frauds.”

The simple child asks, “what is quantum computing?” You answer: “it’s a strange new way of harnessing nature to do computation, one that dramatically speeds up certain tasks, but doesn’t really help with others.”

And to the child who doesn’t know how to ask—well, to that child you don’t need to bring up quantum computing at all. That child is probably already fascinated to learn classical stuff.

Quantum computing bombshells that are not April Fools

2026-04-02 05:26:47

For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.

Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent from the 1500s, when mathematicians would (for example) prove their ability to solve quartic equations by challenging their rivals to duels. I’m not sure how much it will actually help: once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.
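For readers wondering what it could even mean to “publish” a result without revealing it: the simplest related primitive is a cryptographic commitment. You publish a hash that binds you to a claim today, and reveal the preimage whenever you choose. (A full zero-knowledge proof, as in the Google announcement, does more: it convinces verifiers the claim is true, not merely that you wrote it down first.) A minimal sketch, using only Python’s standard library:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to a message: publish the digest now, keep the opening secret."""
    nonce = secrets.token_bytes(32)   # random blinding, so the message can't be guessed
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, message: bytes) -> bool:
    """Later, reveal (nonce, message) and anyone can check the commitment."""
    return hashlib.sha256(nonce + message).digest() == digest

claim = b"we have a small Shor circuit for 256-bit ECC"
digest, nonce = commit(claim)                 # publish digest today...
assert verify(digest, nonce, claim)           # ...open and verify later
```

A bare commitment like this only establishes priority; the zero-knowledge machinery is what lets you also prove, right now, that the committed claim is true.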

Neither of these results changes the basic principles of QC that we’ve known for decades, but they do change the numbers.

When you put both of them together, Bitcoin signatures, for example, certainly look vulnerable to quantum attack earlier than previously expected! In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, whereas a year ago the best estimates were in the millions. How much time will this save — maybe a year? Subtracted, of course, from a number of years that no one knows.

In any case, these results provide an even stronger impetus for people to upgrade now to quantum-resistant cryptography.  They—meaning you, if relevant—should really get on that!
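If you’re wondering what “quantum-resistant” cryptography even looks like: the schemes NIST has standardized (such as ML-KEM) are lattice-based, resting on the Learning With Errors problem, which is believed hard even for quantum computers. Here’s a toy, wildly insecure Regev-style encryption of a single bit, just to show the shape of the idea. The parameters below are chosen for readability, not security; real schemes use dimensions in the hundreds and a key-encapsulation interface, not this:

```python
import random

random.seed(0)            # reproducible toy run
n, m, q = 8, 20, 97       # toy parameters: real lattice schemes are far larger

# Key generation: secret s, and public LWE samples (A, b = A*s + e mod q),
# where e is a small random error hiding s.
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

def encrypt(bit):
    """Sum a random subset of the public samples; add bit * q/2 to the b-part."""
    subset = [i for i in range(m) if random.random() < 0.5]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """Recover the bit: v - <u, s> mod q is near 0 (bit 0) or near q/2 (bit 1)."""
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 0 if min(d, q - d) < q // 4 else 1

for bit in (0, 1):
    u, v = encrypt(bit)
    assert decrypt(u, v) == bit
print("toy LWE roundtrip OK")
```

The security intuition: without the secret s, the published pairs (A, b) look like random noise, and Shor-style period-finding gives no known leverage against them, unlike factoring or discrete logs.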

When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.

Naturally, journalists have been hounding me for comments, though it was the worst possible week, when I needed to host like four separate visitors in Austin. I hope this post helps! Please feel free to ask questions or post further details in the comments.

And now, with no time for this blog post to leaven and rise, I need to go home for my family’s Seder. Happy Passover!