Shtetl-Optimized

The Blog of Scott Aaronson

Will you heed my warnings NOW?

2026-04-29 16:11:59

Holy crap … yesterday I was elected to the US National Academy of Sciences! If you don’t believe me, click the link and keep scrolling down until you hit the name “Aaronson.” But then continue scrolling to see 144 other inductees, including my IAS postdoctoral classmate Maria Chudnovsky, my longtime friend and colleague Salil Vadhan, and even Janet Yellen. I’m humbled to be in such company.

Years ago, somewhere on this blog, I mused that, if I were ever invited to join NAS, I hoped I’d follow the wisdom of Richard Feynman, who famously resigned his NAS membership, comparing it to an honor society back at his high school that spent most of its time debating who should be a member of the honor society. Feynman was also annoyed at having to pay dues.

But now that I’m actually faced with the choice, it’s like, dude! At my advanced age of 44, I’ve encountered so many people who dislike me or even sneer at me, and so many clubs that won’t have me as a member, that I feel mostly gratitude and warmth toward a fine club like NAS that will have me as a member. Anyway, I’ll certainly try it out to see what it’s like—even Feynman did that!

A few hours after I started getting congratulatory emails, for which I was thankful, someone from UT Austin’s press office asked me how I feel about this “culmination” and “capstone” of my entire research career. I replied, look, I know I’ve slowed down a lot since my nubile twenties, but I still hold out the hope that this isn’t any kind of “capstone”!

In any case, I’m ridiculously grateful to all the friends, family, colleagues, and readers who believed in me and helped me reach wherever this is.


Now for a totally different topic, but that will ultimately loop back to the first one:

Last week, I did an Ask Me Anything about quantum computing and blockchain for stacker.news, a forum devoted to bitcoin. Thanks to Will Scoresby for organizing it.

As a longer-term commitment, I also collaborated with my colleagues Dan Boneh, Justin Drake, Sreeram Kannan, Yehuda Lindell, and Dahlia Malkhi, in a panel convened by Coinbase, to put out a detailed position paper about the quantum threat to cryptocurrencies and how best to respond to it. Take a look!

Notably, the situation evolved even while we were writing our position paper—for example, with the major recent papers from Google and Caltech/Oratomic that I blogged about a month ago.

I’d now like to add a few words of my own, not presuming to speak for my fellow Coinbase panelists.

See, some of the most reputable people in quantum hardware and quantum error-correction—people whose judgment I trust more than my own on those topics—are now telling me that a fault-tolerant quantum computer able to break deployed cryptosystems ought to be possible by around 2029.

Maybe they’re overoptimistic. Maybe it will take longer. I dunno. I’m not a timing guy.

But here’s what I do know: the companies racing to scale up fault-tolerant QC have no plans to slow down in order to “give cybersecurity time to adapt” or whatever. The way they see it, cryptographically relevant QCs will plausibly be built sometime soon: indeed, it’s ultimately unavoidable, even if people’s only interest in QC were to do quantum simulations for materials science and chemistry. So, given that reality, isn’t it better that it be done first by mostly US-based companies in the open, than by (let’s say) Chinese or Russian intelligence in secret? And besides, haven’t there already been years of warnings and meetings about the quantum threat to RSA, Diffie-Hellman, and elliptic curve cryptography? Aren’t many in cybersecurity still in denial about the threat? Haven’t these slumberers shown that they won’t wake up until dramatic achievements in fault-tolerant QC roust them—the way Anthropic’s Mythos model has now jolted even the most ostrich-like about the cybersecurity risks of AI? So, mixing metaphors, mightn’t we just as well rip this Band-Aid off ASAP, rather than giving foreign intelligence agencies extra years to catch up? Indeed, when you think about it that way, isn’t racing to build a cryptographically relevant QC, as quickly as possible, the most ethical, socially responsible thing for an American QC company to do?

Is the above line of reasoning suspiciously self-serving and convenient? Does it remind you of the galaxy-brained arguments that AI company after AI company has offered over the last decade for why “really, if you think about it, accelerating toward dangerous superintelligence is the safest course of action that we could possibly take”? I.e., the arguments that led to the current frenzied AI race, which some believe imperils all life on earth?

It’s not my place here to answer such questions; I leave further ethical and geopolitical debate to the comment section! My point is simply: whether or not anyone likes it, this is how some of the leading QC companies are now thinking about the Shor of Damocles that they genuinely believe now hangs over the Internet.

And I’d say that that makes my own moral duty right now ironically simple and clear: namely, to use my unique soapbox, as the writer of The Internet’s Most Trusted Quantum Computing Blog Since 2005™, to sound the alarm.

So, here it is: if quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you. This post is your warning. Please start switching to quantum-resistant encryption, and urge your company or organization or blockchain or standards body to do the same.
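
To make “switch to quantum-resistant encryption” concrete at the level of a single key exchange, here’s a minimal sketch of my own (not from the post) using the open-source liboqs-python bindings, assuming they’re installed and built with ML-KEM support; algorithm names and APIs vary across versions, so treat this as illustrative rather than a recipe.

```python
# Illustrative sketch only: assumes the liboqs-python bindings ("pip install liboqs-python")
# and a liboqs build that exposes ML-KEM, the NIST-standardized lattice-based KEM
# (older releases name it "Kyber768"). Check your build's list of enabled mechanisms first.
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()            # receiver publishes this

    with oqs.KeyEncapsulation(ALG) as sender:
        # sender derives a shared secret plus a ciphertext to send back
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver
print("shared secret established:", len(secret_at_sender), "bytes")
```

In practice, deployments typically use a hybrid handshake that combines a classical exchange with a post-quantum one, so that a weakness in either scheme alone doesn’t expose the traffic.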

Yea, heed my warning, for it comes not from some WordPress-using rando, but from the inventor of BosonSampling and PostBQP and shadow tomography, the Schlumberger Centennial Chair and Founding Director of the Quantum Information Center at the University of Texas at Austin, and (wait for it) new member of the US National Academy of Sciences, that august and distinguished body brought into being by President Abraham Lincoln in 1863.

Because, you know, none of this is about me. It’s only about you. And whether you’ll listen to me.

Three greats who we’ve lost

2026-04-19 14:30:48

Sir Charles Antony Richard Hoare (1934-2026) won the 1980 Turing Award for numerous contributions to computer science, including foundational work on concurrency and formal verification and the invention (with Dijkstra) of the dining philosophers problem. But he’s perhaps best known, to pretty much everyone who’s ever studied CS, as the inventor of the Quicksort algorithm. I’m sorry that I never got to meet him.
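
As an aside that isn’t from the post: here’s a minimal sketch of Quicksort in Python, the simple out-of-place variant rather than Hoare’s original in-place partition, just to show how little code the idea needs.

```python
import random

def quicksort(items):
    """Sort by picking a pivot, partitioning around it, and recursing on each side."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)              # random pivot: expected O(n log n) comparisons
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))    # [1, 1, 2, 3, 4, 5, 6, 9]
```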

Michael O. Rabin (1931-2026), of Harvard University, was one of the founders of theoretical computer science and winner of the 1976 Turing Award. In 1959, he and Dana Scott introduced the concept of a “nondeterministic machine”—that is, a machine with exponentially many possible computation paths, which accepts if and only if there exists an accepting path—which would of course later play a central role in the formulation of the P vs. NP problem. He’s also known for the Miller-Rabin primality test, which helped to establish randomness as a central concept in algorithms, and for many other things. He’s survived by his daughter Tal Rabin, also a distinguished theoretical computer scientist. I was privileged to meet the elder Rabin on several visits to Harvard, where he showed me great kindness.
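
For readers who haven’t seen it, here’s a minimal sketch of my own (not from the post) of the Miller-Rabin test in Python: each random witness catches a composite n with probability at least 3/4, so a handful of independent rounds gives overwhelming confidence.

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin: False means n is definitely composite; True means probably prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):            # dispose of small cases by trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)        # random witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # this witness proves n composite
    return True

print(is_probably_prime(2**61 - 1))           # True: a Mersenne prime
print(is_probably_prime(2**61 + 1))           # False: divisible by 3
```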

Sir Anthony Leggett (1938-2026), of the University of Illinois Urbana-Champaign, was one of the great quantum physicists of the late 20th century, and recipient of the 2003 Nobel Prize for his work on superfluidity. When I knew him, he was a sort of elder statesman of quantum computing and information, who helped remind the rest of us of why we got into the field in the first place—not to solve Element Distinctness moderately faster, but to learn the truth of quantum mechanics itself. Tony insisted, over and over, that the validity of quantum mechanics on the scale of everyday life is an open empirical problem, to be settled by better experiments and not by a-priori principles. I first met Tony at a Gordon Research Conference in southern California. Even though I was then a nobody and he a recent Nobel laureate, he took the time to listen to my ideas about Sure/Shor separators, and to suggest (correctly) what we now call 2D cluster states as an excellent candidate for what I wanted. In all my later interactions with Tony, at both the University of Waterloo (where he was visiting faculty for a while) and at UIUC (where my wife Dana and I considered taking jobs), he was basically the friendliest, funniest guy you could possibly meet at his level of achievement and renown. I was bummed to hear about his passing.

Before we start on quantum

2026-04-07 15:51:11

Imagine that every week for twenty years, people message you asking you to comment on the latest wolf sighting, and every week you have to tell them: I haven’t seen a wolf, I haven’t heard a wolf, I believe wolves exist but I don’t yet see evidence of them anywhere near our town.

Then one evening, you hear a howl in the distance, and sure enough, on a hill overlooking the town is the clear silhouette of a large wolf. So you point to it — and all the same people laugh and accuse you of “crying wolf.”

Now you know how it’s been for me with cryptographically relevant quantum computing.


I’ve been writing about QC on this blog for a while, and have done hundreds of public lectures and interviews and podcasts on the subject. By now, I can almost always predict where a non-expert’s QC question is going from its first few words, and have a well-rehearsed answer ready to go the moment they stop talking. Yet sometimes I feel like it’s all for naught.

Only today did it occur to me that I should write about something more basic. Not quantum computing itself, but the habits of mind that seem to prevent some listeners from hearing whatever I or other researchers have to tell them about QC. The stuff that we’re wasting our breath on unless we first get past it.

Which habits of mind am I talking about?

  1. The Tyranny of Black and White. Hundreds of times, I’ve answered someone’s request to explain QC, only to have them nod impatiently, then interrupt as soon as they can with: “So basically, the take-home message is that quantum is coming, and it’ll change everything?” Someone else might respond to exactly the same words from me with: “So basically, you’re saying it’s all hype and I shouldn’t take any of it seriously?” As in my wolf allegory, the same person might even jump from one reaction to the other. Seeing this, I’ve become a fervent believer in horseshoe theory, in QC no less than in politics. Which sort of makes sense: if you think QCs are “the magic machines of the future that will revolutionize everything,” and then you learn that they’re not, why wouldn’t you jump to the opposite extreme and conclude you’ve been lied to and it’s all a scam?
  2. The Unidimensional Hype-Meter. “So … [long, thoughtful pause] … you’re actually telling me that some of what I hear about QC is real … but some of it is hype? Or—yuk yuk, I bet no one ever told you this one before—it’s a superposition of real and hype?” OK, that’s better. But it’s still trying to project everything down onto a 1-dimensional subspace that loses almost all the information!
  3. Words As Seasoning. I often get the sense that a listener is treating all the words of explanation—about amplitudes and interference, Shor versus Grover, physical versus logical qubits, etc.—as seasoning, filler, an annoying tic, a stalling tactic to put off answering the only questions that matter: “is Quantum real or not real? If it’s real, when is it coming? Which companies will own the Quantum space?” In reality, explanations are the entire substance of what I can offer. For my experience has consistently been that, if someone has no interest in learning what QC is, which classes of problems it helps for, etc., then even if I answer their simplistic questions like “which QC companies are good or bad?,” they won’t believe my answers anyway. Or they’ll believe my answers only until the next person comes along and tells them the opposite.
  4. Black-Boxing. Sometimes these days, I’ll survey the spectacular recent progress in fault-tolerance, 2-qubit gate fidelities, programmable hundred-qubit systems, etc., only to be answered with a sneer: “What’s the biggest number that Shor’s algorithm has factored? Still 15 after all these years? Haha, apparently the emperor has no clothes!” I’ve commented that this is sort of like dismissing the Manhattan Project as hopelessly stalled in 1944, on the ground that so far it hasn’t produced even a tiny nuclear explosion. Or the Apollo program in 1967, on the ground that so far it hasn’t gotten any humans even 10% of the way to the moon. Or GPT in 2020, on the ground that so far it can’t even do elementary-school math. Yes, sometimes emperors are naked—but you can’t tell until you actually look at the emperor! Engage with the specifics of quantum error correction. If there’s a reason why you think it can’t work beyond a certain scale, say so. But don’t fixate on one external benchmark and ignore everything happening under the hood, if the experts are telling you that under the hood is where all the action now is, and your preferred benchmark is only relevant later.
  5. Questions with Confused Premises. “When is Q-Day?” I confess that this question threw me for a loop the first few times I heard it, because I had no idea what “Q-Day” was. Apparently, it’s the single day when quantum computing becomes powerful enough to break all of cryptography? Or: “What differentiates quantum from binary?” “How will daily life be different once we all have quantum computers in our homes?” Try to minimize the number of presuppositions.
  6. Anchoring on Specific Marketing Claims. “What do you make of D-Wave’s latest quantum annealing announcement?” “What about IonQ’s claim to recognize handwriting with a QC?” “What about Microsoft’s claim to have built a topological qubit?” These questions can be fine as part of a larger conversation. Again and again, though, someone who doesn’t know the basics will lead with them—with whichever specific, contentious thing they most recently read. Then the entire conversation gets stuck at a deep node within the concept tree, and it can’t progress until we backtrack about five levels.

Anyway—sorry for yet another post of venting and ranting. Maybe this will help:

The wise child asks, “what are the main classes of problems that are currently known to admit superpolynomial quantum speedups?” To this child, you can talk about quantum simulation and finding hidden structures in abelian and occasionally nonabelian groups, as well as Forrelation, glued trees, HHL, and DQI—explaining how the central challenge has been to find end-to-end speedups for non-oracular tasks.

The wicked child asks, “so can I buy a quantum computer right now to help me pick stocks and search for oil and turbocharge LLMs, or is this entire thing basically a fraud?” To this child you answer: “the quantum computing people who seek you as their audience are frauds.”

The simple child asks, “what is quantum computing?” You answer: “it’s a strange new way of harnessing nature to do computation, one that dramatically speeds up certain tasks, but doesn’t really help with others.”

And to the child who doesn’t know how to ask—well, to that child you don’t need to bring up quantum computing at all. That child is probably already fascinated to learn classical stuff.

Quantum computing bombshells that are not April Fools

2026-04-02 05:26:47

For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.

Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent in the 1500s, when mathematicians would (for example) prove their ability to solve quartic equations by challenging their rivals to duels. I’m not sure how much it will actually help, as once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.

Neither of these results changes the basic principles of QC that we’ve known for decades, but they do change the numbers.

When you put both of them together, Bitcoin signatures, for example, certainly look vulnerable to quantum attack sooner than previously expected! In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, where a year ago the best estimates were in the millions. How much time will this save — maybe a year? Subtracted, of course, from a number of years that no one knows.

In any case, these results provide an even stronger impetus for people to upgrade now to quantum-resistant cryptography.  They—meaning you, if relevant—should really get on that!

When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.

Naturally, journalists have been hounding me for comments, though it was the worst possible week, when I needed to host like four separate visitors in Austin. I hope this post helps! Please feel free to ask questions or post further details in the comments.

And now, with no time for this blog post to leaven and rise, I need to go home for my family’s Seder. Happy Passover!

Movie Review: “The AI Doc”

2026-03-30 05:56:12

Yesterday Dana, the kids, and I went to the theater to watch The AI Doc: Or How I Became An Apocaloptimist, the well-reviewed new documentary about whether AGI will destroy the world. This was surely the weirdest family movie night we’ve ever done. Firstly, because I personally know probably half of the many people interviewed in the film, from Eliezer Yudkowsky to Ajeya Cotra to Liv Boeree to Daniel Kokotajlo to Ilya Sutskever to Jan Leike to Yoshua Bengio to Shane Legg to Sam Altman and Dario Amodei. But more importantly, because this is a documentary that repeatedly, explicitly, earnestly raises the question of whether children now alive will make it to adulthood, before unaligned AI kills them and everyone else. So pass the popcorn, kiddos!

(We did have popcorn. And if the kids were scared — well, I figured we can’t shield them forever from the great questions of the world they’re entering. But actually they didn’t seem especially scared.)

I thought that the filmmaker, Daniel Roher, did about as good a job as can be done, in fitting into a 100-minute film a question that honestly seems too gargantuan for any film — the question of the future of life on earth. He tries to hear out every faction: first the AI existential risk people, then the AI optimists and accelerationists like “Beff Jezos,” then the “stochastic parrot” / “current harms” people like Emily Bender and Timnit Gebru, and finally the AI company CEOs (Altman, Amodei, and Hassabis were the three who agreed to be interviewed), with Yuval Noah Harari showing up from time to time to insert deepities.

Roher plays the part of an anxious, curious, uninformed everyman, who finds each stance to be plausible enough while he’s listening to it, and who mostly just wants to know what kind of world his soon-to-be-born son (about whom we get regular updates) will grow up in.

I didn’t think all the interviewees were equally cogent or equally deserved a hearing. But if any viewers were actually new to AI discourse, rather than marinated in it like me, the film would serve for them as an excellent introduction to the parameters of current debate (for better or worse) and to some of the leading representatives of each camp.

If I had to summarize Roher’s conclusion, it would be something like: go ahead, enjoy your life, have children if you want, but understand that now is a time of world-historical promise and peril much like the early nuclear age, so pay attention, and demand of your elected leaders that they ensure that AGI is developed in a pro-human direction, because tech leaders (even the relatively well-intentioned ones) are trapped in a race to the bottom and can’t get out on their own. Honestly, I’d have a pretty hard time improving on that message.

The main thing that gave me pause about the film was not on the screen but in the theater, which was nearly empty. For the film to serve its purpose, a significant fraction of the world will need to see and discuss it, either in the theater or on streaming. So, y’know, go see it while it’s still playing.

For whatever it’s worth, here were my wife Dana’s comments: “The biggest flaw of this movie is that Daniel Roher never breaks out of his ‘clueless everyman’ character, even when he’s talking to the most important people in AI. He wastes an opportunity to ask them non-superficial questions, questions deeper than ‘so, uh, are we all gonna die or not?'”

And here were my 13-year-old daughter’s comments: “So many of the people they interviewed seemed like hippies, who don’t know what AI will do any more than I know!” Also, after Daniel Roher wishes Sam Altman mazel tov on his forthcoming baby: “Sam Altman is Jewish?!”

And here were my 9-year-old son’s comments: “I thought this would be a movie, where AI would try to take over and the humans would fight back! I had no idea it would just be people talking about it. The documentary kind of movie is so, so, so boring.”

My theoretical computer science notes from Epsilon Camp

2026-03-29 13:17:35

Last summer, I was privileged to teach a two-week course on theoretical computer science to exceptional 11- and 12-year-olds at Epsilon Camp, held at Washington University in St. Louis. The course was basically a shorter version of the 6.045 course that I used to teach to undergrads at MIT.

I was at Epsilon Camp to accompany my son Daniel, who attended a different course there, for the 7- and 8-year-olds. So they got me to teach while I was there.

Teaching at Epsilon was some of the hardest work I’ve done in years: I taught two classes, held office hours, and interacted with or supervised students for 6-7 hours per day (compared to ~4 hours per week as a professor), on top of being Daniel’s sole caregiver, on top of email and all my other normal responsibilities. But it was also perhaps the most extraordinary teaching I’ve ever done: during “lecture,” the kids were throwing paper planes, talking over and interrupting me every ten seconds, and sometimes getting in physical fights with each other. In my ~20 years as a professor, this was the first time that I ever needed to worry about classroom discipline (!). It gave me newfound respect for what elementary school teachers handle every day.

But then, when I did have the kids’ attention, they would often ask questions or make observations that I would’ve been thrilled to hear from undergrads at UT Austin or MIT. Some of these kids, I felt certain, could grow up, if they wanted, to be world-leading mathematicians and physicists and computer scientists, the Terry Taos and Ed Wittens of their generation. Or at least, that’ll be true if AI isn’t soon going to outperform the top human scientists at their own game, a prospect that of course casts a giant shadow not only over Epsilon Camp but over our entire enterprise. But enough about the future. For now I can say: it was the privilege of a lifetime to teach these kids, to be the one who first introduced them to theoretical computer science.

Or at least, the one who first systematically introduced them. As I soon realized, there was no topic I could mention—not the halting problem or the Busy Beaver function, not NP-completeness or Diffie-Hellman encryption—that some of these 11-year-olds hadn’t previously seen, and that they didn’t want to interrupt me to share everything they already knew about. Rather than fighting that tendency, I smiled and let them do this. While their knowledge was stunningly precocious, it was also fragmentary and disjointed and weirdly overindexed on examples rather than general principles. So fine, I still had something to teach them!

Coming to Epsilon Camp was also an emotional experience for me. When I was 15, I attended Canada/USA Mathcamp 1996, the first year that that camp operated. I might not have gone into research otherwise. Coming from a public high school—from the world of English teachers who mainly cared whether you adhered to the Five Paragraph Format, and chemistry teachers who’d give 0 on right answers if you didn’t write “1 mol / 1 mol” and then cross off both of the moles—I was suddenly thrust, sink or swim, into a course on elliptic curves taught by Ken Ribet, who’d played a major role in the proof of Fermat’s Last Theorem that had just been completed, and a talk on algorithms and complexity by Richard Karp himself, and lectures on number theory by Richard Guy, who had stories from when he knew G. H. Hardy.

Back when I was 15, I got to know George Rubin Thomas, the founding director of Mathcamp … and then, after 29 years, there he was again at Epsilon Camp—the patriarch of a whole ecosystem of math camps—and not only there, but sitting in on my course. Also at Epsilon Camp, unexpectedly, was a woman who I knew well back when we were undergrads at Cornell, both of us taking the theoretical computer science graduate sequence, but who I’d barely seen since. She, as it turned out, was accompanying her 8-year-old son, who got to know my 8-year-old. They played together every day and traded math facts.


It occurred to me that the course I taught, on theoretical computer science, was one of the most accessible I’ve ever done, and therefore more people might be interested. So I advertised on this blog for someone to help me LaTeX up the notes for wider distribution. I was thrilled to find a talented student to volunteer. Alas, because of where that student lives, he needs to stay anonymous for now. I thank him, pray for his safety, and hope to be able to reveal his name in the future. I’m also thrilled to have gotten three great high school students—Ian Ko, Tzak Lau, and Sunetra Rao—to help with the figures. Thanks to them as well.

You can read the notes here [59 pages, PDF].

If you’re curious, here’s the table of contents:

  • Lecture 1: Bits
  • Lecture 2: Gates
  • Lecture 3: Finite Automata
  • Lecture 4: Turing Machines
  • Lecture 5: Big Numbers
  • Lecture 6: Complexity, or Number of Operations
  • Lecture 7: Polynomial vs. Exponential
  • Lecture 8: The P vs. NP Problem
  • Lecture 9: NP-completeness
  • Lecture 10: Foundations of Cryptography
  • Lecture 11: Public-Key Cryptography and Quantum Computing

Happy as always to receive comments and corrections. Enjoy!