2026-04-19 14:30:48
Sir Charles Antony Richard Hoare (1934-2026) won the 1980 Turing Award for numerous contributions to computer science, including foundational work on concurrency and formal verification, and the invention (with Dijkstra) of the dining philosophers problem. But he’s perhaps best known, to pretty much everyone who’s ever studied CS, as the inventor of the Quicksort algorithm. I’m sorry that I never got to meet him.
Michael O. Rabin (1931-2026), of Harvard University, was one of the founders of theoretical computer science and winner of the 1976 Turing Award. In 1959, he and Dana Scott introduced the concept of a “nondeterministic machine”—that is, a machine with exponentially many possible computation paths, which accepts if and only if there exists an accepting path—which would of course later play a central role in the formulation of the P vs. NP problem. He’s also known for the Miller-Rabin primality test, which helped to establish randomness as a central concept in algorithms, and for many other things. He’s survived by his daughter Tal Rabin, also a distinguished theoretical computer scientist. I was privileged to meet the elder Rabin on several visits to Harvard, where he showed me great kindness.
Sir Anthony Leggett (1938-2026), of the University of Illinois Urbana-Champaign, was one of the great quantum physicists of the late 20th century, and recipient of the 2003 Nobel Prize for his work on superfluidity. When I knew him, he was a sort of elder statesman of quantum computing and information, who helped remind the rest of us of why we got into the field in the first place—not to solve Element Distinctness moderately faster, but to learn the truth of quantum mechanics itself. Tony insisted, over and over, that the validity of quantum mechanics on the scale of everyday life is an open empirical problem, to be settled by better experiments and not by a-priori principles. I first met Tony at a Gordon Research Conference in southern California. Even though I was then a nobody and he a recent Nobel laureate, he took the time to listen to my ideas about Sure/Shor separators, and to suggest (correctly) what we now call 2D cluster states as an excellent candidate for what I wanted. In all my later interactions with Tony, at both the University of Waterloo (where he was visiting faculty for a while) and at UIUC (where my wife Dana and I considered taking jobs), he was basically the friendliest, funniest guy you could possibly meet at his level of achievement and renown. I was bummed to hear about his passing.
2026-04-07 15:51:11
Imagine that every week for twenty years, people message you asking you to comment on the latest wolf sighting, and every week you have to tell them: I haven’t seen a wolf, I haven’t heard a wolf, I believe wolves exist but I don’t yet see evidence of them anywhere near our town.
Then one evening, you hear a howl in the distance, and sure enough, on a hill overlooking the town is the clear silhouette of a large wolf. So you point to it — and all the same people laugh and accuse you of “crying wolf.”
Now you know how it’s been for me with cryptographically relevant quantum computing.
I’ve been writing about QC on this blog for a while, and have done hundreds of public lectures and interviews and podcasts on the subject. By now, I can almost always predict where a non-expert’s QC question is going from its first few words, and have a well-rehearsed answer ready to go the moment they stop talking. Yet sometimes I feel like it’s all for naught.
Only today did it occur to me that I should write about something more basic. Not quantum computing itself, but the habits of mind that seem to prevent some listeners from hearing whatever I or other researchers have to tell them about QC. The stuff that, unless we get past it, means we’re wasting our breath.
Which habits of mind am I talking about? Sorry for yet another post of venting and ranting, but maybe the following will help:
The wise child asks, “what are the main classes of problems that are currently known to admit superpolynomial quantum speedups?” To this child, you can talk about quantum simulation and finding hidden structures in abelian and occasionally nonabelian groups, as well as Forrelation, glued trees, HHL, and DQI—explaining how the central challenge has been to find end-to-end speedups for non-oracular tasks.
The wicked child asks, “so can I buy a quantum computer right now to help me pick stocks and search for oil and turbocharge LLMs, or is this entire thing basically a fraud?” To this child you answer: “the quantum computing people who seek you as their audience are frauds.”
The simple child asks, “what is quantum computing?” You answer: “it’s a strange new way of harnessing nature to do computation, one that dramatically speeds up certain tasks, but doesn’t really help with others.”
And to the child who doesn’t know how to ask—well, to that child you don’t need to bring up quantum computing at all. That child probably has plenty to be fascinated by in classical computation first.
2026-04-02 05:26:47
For those of you who haven’t seen, there were actually two “bombshell” QC announcements this week. One, from Caltech, including friend-of-the-blog John Preskill, showed how to do quantum fault-tolerance with lower overhead than was previously known, by using high-rate codes, which could work for example in neutral-atom architectures (or possibly other architectures that allow nonlocal operations, like trapped ions). The second bombshell, from Google, gave a lower-overhead implementation of Shor’s algorithm to break 256-bit elliptic curve cryptography.
Notably, out of an abundance of caution, the Google team chose to “publish” its result via a cryptographic zero-knowledge proof that their circuit exists (so, without revealing the details to attackers). This is the first time I’ve ever seen a new mathematical result actually announced that way, although I understand that there’s precedent in the 1500s, when mathematicians would (for example) prove their ability to solve cubic equations by challenging their rivals to public problem-solving duels. I’m not sure how much it will actually help, as once other groups know that a smaller circuit exists, it might be only a short time until they’re able to find it as well.
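(For readers wondering what it even means to “publish” a result this way: I haven’t seen Google’s actual protocol, but the baby version of the idea is an ordinary cryptographic commitment. Here’s a toy Python sketch, mine for illustration only: you publish a hash digest of your secret document today, then reveal the document and salt whenever it’s safe, and anyone can verify you had it all along. Unlike a genuine zero-knowledge proof, a bare commitment establishes nothing about the document until the reveal.)

```python
import hashlib
import os

def commit(document: bytes) -> tuple[bytes, bytes]:
    """Publish the digest now; keep the document and the nonce secret."""
    nonce = os.urandom(32)  # random salt, so the digest leaks nothing
    digest = hashlib.sha256(nonce + document).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, document: bytes) -> bool:
    """Later: reveal (nonce, document) and let anyone recheck the digest."""
    return hashlib.sha256(nonce + document).digest() == digest

# Hypothetical usage: commit to the circuit description today, reveal later.
digest, nonce = commit(b"full description of the smaller Shor circuit")
assert verify(digest, nonce, b"full description of the smaller Shor circuit")
```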
Neither of these results changes the basic principles of QC that we’ve known for decades, but they do change the numbers.
When you put the two together, Bitcoin signatures, for example, certainly look vulnerable to quantum attack sooner than was previously believed! In particular, the Caltech group estimates that a mere 25,000 physical qubits might suffice for this, whereas a year ago the best estimates were in the millions. How much time will this save — maybe a year? Subtracted, of course, from a number of years that no one knows.
In any case, these results provide an even stronger impetus for people to upgrade now to quantum-resistant cryptography. They—meaning you, if relevant—should really get on that!
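(For the programmers asking what “getting on that” actually looks like: here’s a minimal sketch of a post-quantum key exchange, assuming the open-source liboqs-python bindings, where ML-KEM-768 is the NIST-standardized lattice-based KEM formerly known as Kyber. The algorithm name varies by liboqs version, and this illustrates the primitive rather than giving deployment advice; in practice you’d enable your TLS library’s hybrid post-quantum modes.)

```python
# Minimal post-quantum key encapsulation sketch, assuming the liboqs-python
# bindings (pip install liboqs-python). Algorithm names vary by version:
# "ML-KEM-768" in recent releases, "Kyber768" in older ones.
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()      # receiver publishes this
    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext for the receiver.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver           # both sides share a key
```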
When I got an early heads-up about these results—especially the Google team’s choice to “publish” via a zero-knowledge proof—I thought of Frisch and Peierls, calculating how much U-235 was needed for a chain reaction in 1940, but not publishing it, even though the latest results on nuclear fission had been openly published just the year prior. Will we, in quantum computing, also soon cross that threshold? But I got strong pushback on that analogy from the cryptography and cybersecurity people who I most respect. They said: we have decades of experience with this, and the answer is that you publish. And, they said, if publishing causes people still using quantum-vulnerable systems to crap their pants … well, maybe that’s what needs to happen right now.
Naturally, journalists have been hounding me for comments, though it was the worst possible week, when I needed to host like four separate visitors in Austin. I hope this post helps! Please feel free to ask questions or post further details in the comments.
And now, with no time for this blog post to leaven and rise, I need to go home for my family’s Seder. Happy Passover!
2026-03-30 05:56:12
Yesterday Dana, the kids, and I went to the theater to watch The AI Doc: Or How I Became An Apocaloptimist, the well-reviewed new documentary about whether AGI will destroy the world. This was surely the weirdest family movie night we’ve ever done. Firstly, because I personally know probably half of the many people interviewed in the film, from Eliezer Yudkowsky to Ajeya Cotra to Liv Boeree to Daniel Kokotajlo to Ilya Sutskever to Jan Leike to Yoshua Bengio to Shane Legg to Sam Altman and Dario Amodei. But more importantly, because this is a documentary that repeatedly, explicitly, earnestly raises the question of whether children now alive will make it to adulthood, before unaligned AI kills them and everyone else. So pass the popcorn, kiddos!
(We did have popcorn. And if the kids were scared — well, I figured we can’t shield them forever from the great questions of the world they’re entering. But actually they didn’t seem especially scared.)
I thought that the filmmaker, Daniel Roher, did about as good a job as can be done, in fitting into a 100-minute film a question that honestly seems too gargantuan for any film — the question of the future of life on earth. He tries to hear out every faction: first the AI existential risk people, then the AI optimists and accelerationists like “Beff Jezos,” then the “stochastic parrot” / “current harms” people like Emily Bender and Timnit Gebru, and finally the AI company CEOs (Altman, Amodei, and Hassabis were the three who agreed to be interviewed), with Yuval Noah Harari showing up from time to time to insert deepities.
Roher plays the part of an anxious, curious, uninformed everyman, who finds each stance to be plausible enough while he’s listening to it, and who mostly just wants to know what kind of world his soon-to-be-born son (about whom we get regular updates) will grow up in.
I didn’t think all the interviewees were equally cogent or equally deserved a hearing. But for viewers actually new to AI discourse, rather than marinated in it like me, the film would serve as an excellent introduction to the parameters of the current debate (for better or worse) and to some of the leading representatives of each camp.
If I had to summarize Roher’s conclusion, it would be something like: go ahead, enjoy your life, have children if you want, but understand that now is a time of world-historical promise and peril much like the early nuclear age, so pay attention, and demand of your elected leaders that they ensure that AGI is developed in a pro-human direction, because tech leaders (even the relatively well-intentioned ones) are trapped in a race to the bottom and can’t get out on their own. Honestly, I’d have a pretty hard time improving on that message.
The main thing that gave me pause about the film was not on the screen but in the theater, which was nearly empty. For the film to serve its purpose, a significant fraction of the world will need to see and discuss it, either in the theater or on streaming. So, y’know, go see it while it’s still playing.
For whatever it’s worth, here were my wife Dana’s comments: “The biggest flaw of this movie is that Daniel Roher never breaks out of his ‘clueless everyman’ character, even when he’s talking to the most important people in AI. He wastes an opportunity to ask them non-superficial questions, questions deeper than ‘so, uh, are we all gonna die or not?'”
And here were my 13-year-old daughter’s comments: “So many of the people they interviewed seemed like hippies, who don’t know what AI will do any more than I know!” Also, after Daniel Roher wishes Sam Altman mazel tov on his forthcoming baby: “Sam Altman is Jewish?!”
And here were my 9-year-old son’s comments: “I thought this would be a movie, where AI would try to take over and the humans would fight back! I had no idea it would just be people talking about it. The documentary kind of movie is so, so, so boring.”
2026-03-29 13:17:35
Last summer, I was privileged to teach a two-week course on theoretical computer science to exceptional 11- and 12-year-olds at Epsilon Camp, held at Washington University in St. Louis. The course was basically a shorter version of the 6.045 course that I used to teach to undergrads at MIT.
I was at Epsilon Camp to accompany my son Daniel, who attended a different course there, for the 7- and 8-year-olds. So they got me to teach while I was there.
Teaching at Epsilon was some of the hardest work I’ve done in years: I taught two classes, held office hours, and interacted with or supervised students for 6-7 hours per day (compared to ~4 hours per week as a professor), on top of being Daniel’s sole caregiver, on top of email and all my other normal responsibilities. It was also by far the rowdiest teaching I’ve ever done: during “lecture,” the kids were throwing paper planes, talking over and interrupting me every ten seconds, and sometimes getting into physical fights with each other. In my ~20 years as a professor, this was the first time that I ever needed to worry about classroom discipline (!). It gave me newfound respect for what elementary school teachers handle every day.
But then, when I did have the kids’ attention, they would often ask questions or make observations that I would’ve been thrilled to hear from undergrads at UT Austin or MIT. Some of these kids, I felt certain, could grow up, if they wanted, to be world-leading mathematicians and physicists and computer scientists, the Terry Taos and Ed Wittens of their generation. Or at least, that’ll be true if AI isn’t soon going to outperform the top human scientists at their own game, a prospect that of course casts a giant shadow not only over Epsilon Camp but over our entire enterprise. But enough about the future. For now I can say: it was the privilege of a lifetime to teach these kids, to be the one who first introduced them to theoretical computer science.
Or at least, the one who first systematically introduced them. As I soon realized, there was no topic I could mention—not the halting problem or the Busy Beaver function, not NP-completeness or Diffie-Hellman key exchange—that some of these 11-year-olds hadn’t previously seen, and that they didn’t want to interrupt me to share everything they already knew about. Rather than fighting that tendency, I smiled and let them do it. While their knowledge was stunningly precocious, it was also fragmentary and disjointed and weirdly overindexed on examples rather than general principles. So fine, I still had something to teach them!
Coming to Epsilon Camp was also an emotional experience for me. When I was 15, I attended Canada/USA Mathcamp 1996. I might not have gone into research otherwise. Coming from a public high school—from the world of English teachers who mainly cared whether you adhered to the Five Paragraph Format, and chemistry teachers who’d give 0 on right answers if you didn’t write “1 mol / 1 mol” and then cross off both of the moles—I was suddenly thrust, sink or swim, into a course on elliptic curves taught by Ken Ribet, who’d played a major role in the proof of Fermat’s Last Theorem that had just been completed, and a talk on algorithms and complexity by Richard Karp himself, and lectures on number theory by Richard Guy, who had stories from when he knew G. H. Hardy.
Back when I was 15, I got to know George Rubin Thomas, the founding director of Mathcamp … and then, after 29 years, there he was again at Epsilon Camp—the patriarch of a whole ecosystem of math camps—and not only there, but sitting in on my course. Also at Epsilon Camp, unexpectedly, was a woman who I knew well back when we were undergrads at Cornell, both of us taking the theoretical computer science graduate sequence, but who I’d barely seen since. She, as it turned out, was accompanying her 8-year-old son, who got to know my 8-year-old. They played together every day and traded math facts.
It occurred to me that the course I taught, on theoretical computer science, was one of the most accessible I’ve ever done, and therefore more people might be interested. So I advertised on this blog for someone to help me LaTeX up the notes for wider distribution, and was thrilled when a talented student volunteered. Alas, because of where that student lives, he needs to stay anonymous for now. I thank him, pray for his safety, and hope to be able to reveal his name in the future. I’m also thrilled to have gotten three great high school students—Ian Ko, Tzak Lau, and Sunetra Rao—to help with the figures. Thanks to them as well.
You can read the notes here [59 pages, PDF].
If you’re curious, here’s the table of contents:
Happy as always to receive comments and corrections. Enjoy!
2026-03-19 00:10:13
I’m on a spring break vacation-plus-lecture-tour with Dana and the kids in Mexico City this week, and wasn’t planning to blog, but I see that I need to make an exception. Charles Bennett and Gilles Brassard have won the Turing Award, for their seminal contributions to quantum computing and information including the BB84 quantum key distribution scheme. This is the first-ever Turing Award specifically for quantum stuff (though previous Turing Award winners, including Andy Yao, Leslie Valiant, and Avi Wigderson, have had quantum among their interests).
As a practical proposal, BB84 is already technologically feasible but has struggled to find an economic niche, in a world where conventional public-key encryption already solves much the same problem using only the standard Internet—and where, even after scalable quantum computers become able to break many of our current encryption schemes, post-quantum encryption (again running on the standard Internet) stands ready to replace those schemes. Nevertheless, as an idea, BB84 has already been transformative, playing a central role in the birth of quantum information science itself. Beyond BB84, Bennett and Brassard have made dozens of other major contributions to quantum information science, with a personal favorite of mine being the 1994 BBBV (Bennett-Bernstein-Brassard-Vazirani) paper, which first established the limitations of quantum computers at solving unstructured search problems (and indeed proved the optimality of Grover’s algorithm before Grover’s algorithm even existed).
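(For the curious, the √N behavior that BBBV proved optimal is easy to see numerically. Below is a toy classical simulation of Grover’s algorithm, my own illustration rather than anything from the BBBV paper: for N = 2^n items with one marked, the success probability peaks after roughly (π/4)√N iterations, and BBBV showed that no quantum algorithm can do asymptotically better for unstructured search.)

```python
import numpy as np

def grover_success_curve(n_qubits: int, marked: int) -> list[float]:
    """Simulate Grover search over N = 2**n_qubits items with one marked
    item, returning the success probability after each iteration."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))    # uniform superposition over N items
    probs = []
    for _ in range(int(np.ceil(np.pi / 4 * np.sqrt(N)))):
        state[marked] *= -1.0                # oracle: phase-flip the marked item
        state = 2.0 * state.mean() - state   # diffusion: reflect about the mean
        probs.append(float(state[marked] ** 2))
    return probs

probs = grover_success_curve(10, marked=42)  # N = 1024, sqrt(N) = 32
best = probs.index(max(probs)) + 1
print(f"success probability peaks at {max(probs):.3f} "
      f"after {best} iterations, close to (pi/4)*32 = 25.1")
```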
While I take my kids to see Aztec artifacts, you can learn much more from Ben Brubaker’s Quanta article, to which I contributed without even knowing that it would be about Bennett and Brassard winning the Turing Award (info that was strictly embargoed before today). It’s an honor to have known Charlie and Gilles as well as I have for decades, and to have been able to celebrate one of their previous honors, the Wolf Prize, with them in Jerusalem. Huge congrats to two of the founders of our field!