2026-03-15 09:42:54
Scott’s foreword: I’ve known fellow quantum computing theorist Daniel Gottesman, now at the University of Maryland, for a quarter-century at this point. Daniel has been a friend, colleague, coauthor, and one of the people from whom I’ve learned the most in my career. Today he writes about a topic close to my heart, and one to which I’ve regularly lent this blog over the decades: namely, the struggle to protect enrichment and acceleration in the United States (in this case, the public magnet programs in Montgomery County, Maryland) from the constant attempts to weaken or dismantle them. Thanks so much to Daniel for doing this, and please help out if you can!
Without further ado, Daniel Gottesman:
Scott has kindly let me write this guest post because I’d like to ask the readers of Shtetl-Optimized for help. I live in Montgomery County, Maryland, and the county is getting ready to replace our current handful of great magnet programs with a plethora of mediocre ones.
Montgomery County has a generally quite good school system, but its gifted education programs are really inadequate at the elementary and middle school level. Montgomery County Public Schools (MCPS) offers nothing at all for gifted children until 4th grade. Starting in 4th grade, magnet programs are available, but there are not enough spaces for everyone who meets the minimum qualifications. A few years ago, the elementary and middle school magnets were switched to a lottery system, meaning the highest-achieving students, who most need special programming, might or might not get in, based purely on luck of the draw.
The remaining bright spot has been the high school magnets. Montgomery County has two well-known and high-performing magnets, a STEM magnet at Montgomery Blair High School and an International Baccalaureate (IB) program at Richard Montgomery. The Richard Montgomery IB program draws students from the whole county and the Blair Magnet draws from 2/3 of the county (with the remaining 1/3 eligible to go to another successful but less well-known magnet at Poolesville). And these programs have so far resisted the lottery: They pick the best students from the application pool.
So with inadequate magnets in the lower grades and stellar magnets in high school, you can guess which one is up for a change.
MCPS now wants to reconfigure the high school magnet programs by splitting the county up into 6 regions. Students will only be allowed to apply to programs in their home region. Each region will have its own STEM magnet and its own IB program, as well as programs in the arts, medicine, and leadership. And actually there are multiple program strands in each of these subjects, sometimes in different schools. The whole plan is big and complicated, with close to 100 different programs around the county, more than half of them new.
The stated purpose of this plan is to expand access to these programs by admitting more students and reducing travel times to the programs. And who could object to that? There are definitely places in the county that are far from the current magnets and there are certainly more students that can benefit from high-quality magnets than there is currently space for.
The problem is that making high-quality magnets has not been a priority in the design process. The last time MCPS tried adding regional magnets was about 7 years ago, when they added 3 regional IB programs while keeping Richard Montgomery available to students all over the county. It was a failure: Test scores at the regional IB programs are far below those at Richard Montgomery (the worst-performing regional IB had only 24% getting a passing grade in even one subject in 2024, compared to 99% at Richard Montgomery) and all 3 are underenrolled. Now MCPS has decided they can solve this problem by preventing students from going to Richard Montgomery to try to force them to go to the regional IBs. In addition, they want to repeat the same mistakes with the STEM and other magnets. The best programs in the county will shrink and only be accessible to a small fraction of students, leaving everyone else with new programs of likely highly-varying quality.
And if that were not enough, they want to do this revamp on a ridiculously short timeline. The new programs are supposed to start in the 2027-28 school year, and between now and then, they need to recruit and train teachers for these 100 programs, create all the curricula for the first year of the programs (they are only planning to do one year at a time), and much, much more. The probability of a train wreck in the early years of the new system seems high.
Equity is certainly a concern driving this change. And let me be clear: I am totally in favor of improving equity in the school system. But I agree with Scott on this point: strong magnet programs in the public schools are pro-equity and weakening magnet programs is anti-equity. Magnet programs are pro-equity even if the magnets are disproportionately populated by more affluent students, which is admittedly the case in MCPS: Affluent families will always have access to enrichment outside school (and, for the most affluent, to private schools), whereas the public magnet programs are the only source of enrichment for everyone else.
If MCPS really wants to address the difference in achievement between richer and poorer students, the way to do that is to create gifted programming starting from kindergarten. If you wait until high school, it is unreasonable to expect even brilliant students to catch up to their also highly-capable peers who have been doing math and science camps and extracurriculars and contests and whatnot since they were little. Some can manage it, but it is certainly not easy. Unfortunately, MCPS’s notion of equity seems more focused on optimizing the demographic breakdown of magnet programs, which is most easily achieved by techniques which don’t improve — and usually degrade — the quality of the education provided.
So how can you help? The Board of Education (BOE) is supposed to vote on this plan on Mar. 26. Those of us opposed to it are hoping to sway enough members to vote to tell MCPS to investigate alternatives. For instance, I have proposed a model with only 3 regions, which could also substantially improve access while preserving the strong existing magnets.
If you live in Montgomery County, write to BOE members telling them you oppose this change. You can also sign a petition — there are many, but my favorite is here.
If you are an alumnus of one of the MCPS magnets, write to the BOE telling them how your education there was valuable to you and how a smaller program would not have served you as well.
If you are unconnected to Montgomery County, you can still spread the word. If the BOE gets enough press inquiries asking about the many things that don’t add up in the MCPS proposal, perhaps they will recognize that this is a bad idea.
If you are really really interested in this topic and want to learn more: Last fall, I put together a long analysis of some of the flaws in MCPS’s plan and their claims, and of the alternative 3-region model. You can find it here.
2026-03-11 04:47:30
Last Thursday, my friend and colleague Sam Baker, in UT Austin’s English department, convened an “emergency panel” here about the developing Pentagon/Anthropic situation, and asked me to speak at it. Even though the situation has continued to develop since then, I thought my prepared remarks for the panel might be of interest. At the bottom, I include a few additional thoughts.
Hi! I’m Scott Aaronson! I teach CS here at UT. While my background is in quantum computing, I’ve spent the past four years dabbling in AI alignment. I did a two-year leave at OpenAI, in their now-defunct Superalignment team. I joined back when OpenAI’s line was “we’re a little nonprofit, doing all this in the greater interest of humanity, and we’d dissolve ourselves before we raced to build an AI that we thought would be dangerous.” I know Sam Altman, and many other current and former OpenAI people. I also know Dario Amodei—in fact, I knew Dario well before Anthropic existed. Despite that, I don’t actually feel like I have deep insight into the current situation with Anthropic and the Pentagon that you wouldn’t get by reading the news, or (especially) reading commentators like Zvi Mowshowitz, Kelsey Piper, Scott Alexander, and Dean Ball. But since I was asked to comment, I’ll try.
The first point I’ll make: the administration’s line, to the extent they’ve had a consistent line, is basically that they needed to cut off Anthropic because Anthropic is a bunch of woke, America-hating, leftist radicals. I think that, if you actually know the Anthropic people, that characterization is pretty laughable. Unless by “woke,” what the administration meant was “having any principles at all, beyond blind deference to authority, and sticking to them.”
I mean, Anthropic only got into this situation in the first place because it was more eager than the other AI companies to support US national security, by providing a version of Claude that could be used on classified networks. So they signed a contract with the Pentagon, and that contract had certain restrictions in it, which the Pentagon read and agreed to … until they decided that they no longer agreed.
That brings me to my second point. The Pentagon regularly signs contracts with private firms that limit what the Pentagon can do in various ways. That’s why they’re called military contract-ors. So anyone who claims it’s totally unprecedented for Anthropic to try to restrict what the government can do with Anthropic’s private property—I think that person is either misinformed or else trying to misinform.
The third point. If the Pentagon felt that it couldn’t abide a private company telling it what is or isn’t an appropriate military use of current AI, then the Pentagon was totally within its rights to cancel its contract with Anthropic, and find a different contractor (like OpenAI…) that would play ball. So it’s crucial for everyone here to understand that that’s not all that the Pentagon did. Instead they said: because Anthropic dared to stand up to us, we’re going to designate them a Supply Chain Risk—a designation that was previously reserved for foreign nation-state adversaries, and that, incredibly, hasn’t been applied to DeepSeek or other Chinese AI companies that arguably do present such risks. So basically, they threatened to destroy Anthropic, by making it horrendously complicated for any companies that do business with the government—i.e., just about all companies—also to do business with Anthropic.
Either that, the Pentagon threatened, or we’ll invoke the Defense Production Act to effectively nationalize Anthropic—i.e., we’ll just commandeer their intellectual property, use it for whatever we want despite Anthropic’s refusal. You get that? Claude is both a supply chain risk that’s too dangerous for the military to use, and somehow also so crucial to the supply chain that we, the military, need to commandeer it.
To me, this is the authoritarian part of what the Pentagon is doing (with the inconsistency being part of the authoritarianism; who but a dictator gets to impose his will on two directly contradictory grounds?). It’s the part that goes against the free-market principles that our whole economy is built on, and the freedom of speech and conscience that our whole civilization is built on. And I think this will ultimately damage US national security, by preventing other American AI companies from wanting to work on defense going forward.
That brings me to the fourth point, about OpenAI. While this was going down, Sam Altman posted online that he agreed with Anthropic’s red lines: LLMs should not be used for killing people with no human in the kill chain, and they also shouldn’t be used for mass surveillance of US citizens. I thought, that’s great! The frontier AI labs are sticking together when the chips are down, rather than infighting.
But then, just a few hours after the Pentagon designated Anthropic a supply chain risk, OpenAI announced that it had reached a deal with the Pentagon. Huh?!? If they have the same red lines, then why can one of them reach a deal while the other can’t?
The experts’ best guess seems to be this: Anthropic said, yes, using AI to kill people autonomously or to surveil US citizens should already be illegal, but we insist on putting those things in the contract to be extra-double-sure. Whereas OpenAI said, the Pentagon can use our models for “all lawful purposes”—this was the language that the Pentagon had insisted on. And, continued OpenAI, we interpret “all lawful purposes” to mean that they can’t cross these red lines. But if it turns out we’re wrong about that … well, that’s not our problem! That’s between the Pentagon and the courts, or whatever.
Again, we don’t fully know, because most of the relevant contracts haven’t been made public, but that’s an inference from reading between the lines of what has been made public.
Back in 2023-2024, when there was the Battle of the Board, then the battle over changing OpenAI’s governance structure, etc., some people formed a certain view of Sam, that he would say all the good and prosocial and responsible things even while he did whichever thing maximized revenue. I’ll leave it to you whether last week’s events are consistent with that view.
OK, fifth and final point. I remember 15-20 years ago, talking to Eliezer Yudkowsky and others terrified about AI. They said, this is the biggest issue facing the world. It’s not safe for anyone to build because it could turn against us, or even before that, the military could commandeer it or whatever. And I and others were like, dude, you guys obviously read too much science fiction!
And now here we are. Not only are we living in a science-fiction story, I’d say we’re living in a particularly hackneyed one. I mean, the military brass marching into a top AI lab and telling the nerds, “tough luck, we own your AI now”? Couldn’t reality have been a little more creative than that?
The point is, given the developments of the past couple weeks, I think we now need to retire forever the argument against future AI scenarios that goes, “sorry, that sounds too much like a science-fiction plot.” As has been said, you’d best get used to science fiction because you’re living in one!
Updates and Further Thoughts: Of course I’ve seen that Anthropic has now filed a lawsuit to block the Pentagon from designating it a supply chain risk, arguing that both its free speech and due process rights were violated. I hope their lawsuit succeeds; it’s hard for me to imagine how it wouldn’t.
The fact that I’m, obviously, on Anthropic’s side of this particular dispute doesn’t mean that I’ll always be on Anthropic’s side. Here as elsewhere, it’s crucial not to outsource your conscience to anyone.
Zvi makes an extremely pertinent comparison:
[In shutting down Starlink over Ukraine,] Elon Musk actively did the exact thing [the Pentagon is] accusing Anthropic of maybe doing. He made a strategic decision of national security at the highest level as a private citizen, in the middle of an active military operation in an existential defensive shooting war, based on his own read of the situation. Like, seriously, what the actual fuck.
Eventually we bought those services in a contract. We didn’t seize them. We didn’t arrest Musk. Because a contract is a contract is a contract, and your private property is your private property, until Musk decides yours don’t count.
Another key quote in Zvi’s piece, from Gregory Allen:
And here’s the thing. I spent so much of my life in the Department of Defense trying to convince Silicon Valley companies, “Hey, come on in, the water is fine, the defense contracting market, you know, you can have a good life here, just dip your toe in the water”.
And what the Department of Defense has just said is, “Any company that dips their toe in the water, we reserve the right to grab their ankle, pull them all the way in at any time”. And that is such a disincentive to even getting started in working with the DoD.
Lastly, I’d like to address the most common counterargument against Anthropic’s position—as expressed for example by Noah Smith, or in the comments of my previous post on this. The argument goes roughly like so:
You, nerds, are the ones who’ve been screaming for years about AI being potentially existentially dangerous! So then, did you seriously expect to stay in control of the technology? If it’s really as dangerous and important as you say, then of course the military was going to step in at some point and commandeer your new toy, just like it would if you were building a nuclear weapon.
Two immediate responses:
2026-03-08 11:06:05
Sorry to interrupt your regular programming about the AI apocalypse, etc., and return to the traditional beat of this blog’s very earliest years … but I’ve now gotten multiple messages asking me to comment on something called the “JVG (Jesse–Victor–Gharabaghi) algorithm” (yes, the authors named it after themselves). This is presented as a massive improvement over Shor’s factoring algorithm, which could (according to popular articles) allow RSA-2048 to be broken using only 5,000 physical qubits.
On inspection, the paper’s big new idea is that, in the key step of Shor’s algorithm where you compute x^r mod N in a superposition over all r’s, you instead precompute the values x^r mod N on a classical computer and then load them all into the quantum state.
Alright kids, why does this not work? Shall we call on someone in the back of the class—like, any undergrad quantum computing class in the world? Yes, class, that’s right! There are exponentially many r’s. Computing them all takes exponential time, and loading them into the quantum computer also takes exponential time. We’re out of the n^2-time frying pan and into the 2^n-time fire. This can only look like it wins on tiny numbers; on large numbers it’s hopeless.
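To make the counting argument concrete, here’s a toy sketch (my own illustration, with made-up toy numbers; real RSA moduli have ~2048 bits). Precomputing x^r mod N for every n-bit exponent r means building a table with 2^n entries, while repeated squaring—what Shor’s algorithm does coherently—computes any single x^r mod N with only O(n) multiplications:

```python
def all_powers(x, N, n_bits):
    """Precompute x^r mod N for every n_bits-bit exponent r.

    This is the JVG-style step: the table has 2**n_bits entries,
    so both the classical precomputation and any attempt to load
    the table into a quantum state take exponential time.
    """
    table = []
    val = 1
    for _ in range(2 ** n_bits):  # exponentially many entries
        table.append(val)
        val = (val * x) % N
    return table


def one_power(x, r, N):
    """Repeated squaring: O(n_bits) multiplications for a single r."""
    return pow(x, r, N)  # Python's built-in 3-argument pow does this


# Toy parameters (hypothetical, for illustration only):
n_bits = 16            # already 65,536 table entries; doubles per extra bit
x, N = 7, 15 * 17

table = all_powers(x, N, n_bits)
assert table[12345] == one_power(x, 12345, N)
print(len(table))      # 65536 = 2**16
```

The same repeated-squaring trick runs in polynomial time on a quantum computer over a superposition of all r’s at once, which is exactly what the “shortcut” throws away.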
If you want to see people explaining the same point more politely and at greater length, try this from Hacker News or this from Postquantum.com.
Even for those who know nothing about quantum algorithms, is there anything that could’ve raised suspicion here?
Often, when something is this bad, the merciful answer is to let it die in obscurity. In this case, I feel like there was a sufficient level of intellectual hooliganism, just total lack of concern for what’s true, that those involved deserve to have this Shtetl-Optimized post as a tiny bit of egg on their faces forever.
2026-03-06 07:36:31
To start on a somber note: those of us at UT Austin are in mourning this week for Savitha Shan, an undergrad double major here in economics and information systems, who was murdered over the weekend by an Islamist terrorist who started randomly shooting people on Sixth Street, apparently angry about the war in Iran. Two other innocents were also killed.
As it happens, these murders happened just a few hours after the end of my daughter’s bat mitzvah, and in walking distance from the venue. The bat mitzvah itself was an incredibly joyful and successful event that consumed most of my time lately, and which I might or might not say more about—the nastier the online trolls get, the more I need to think about my family’s privacy.
Of all the many quantum computing podcasts/interviews I’ve done recently, I’m probably happiest with this one, with Yuval Boger of QuEra. It covers all the main points about where the hardware currently is, the threat to public-key cryptography, my decades-long battle against quantum applications hype, etc. etc., and there’s even an AI-created transcript that eliminates my verbal infelicities!
A month ago, I blogged about “The Time I Didn’t Meet Jeffrey Epstein” (basically, because my mom warned me not to). Now the story has been written up in Science magazine, under the clickbaity headline “Meet Three Scientists Who Said No to Epstein.” (Besides yours truly, the other two scientists are friend-of-the-blog Sean Carroll, whose not-meeting-Epstein story I’d already heard directly from him, and David Agus, whose story I hadn’t heard.)
To be clear: as I explained in my post, I never actually said “no” to Epstein. Instead, based on my mom’s advice, I simply failed to follow up with his emissary, to the point where no meeting ever happened.
Anyway, ever since Science ran this story and it started making the rounds on social media, my mom has been getting congratulatory messages from friends of hers who saw it!
I’ve been a huge fan of the philosopher-novelist Rebecca Newberger Goldstein ever since I read her celebrated debut work, The Mind-Body Problem, back in 2005. Getting to know Rebecca and her husband, Steven Pinker, was a highlight of my last years at MIT. So I’m thrilled that Rebecca will be visiting UT Austin next week to give a talk on Spinoza, related to her latest book The Mattering Instinct (which I’m reading right now), and hosted by me and my colleague Galen Strawson in UT’s philosophy department. More info is in the poster below. If you’re in Austin, I hope to see you there!
The 88-year-old Donald Knuth has published a 5-page document about how Claude was able to solve a tricky graph theory problem that arose while he was working on the latest volume of The Art of Computer Programming—a series that Knuth is still writing after half a century. As you’d expect from Knuth, the document is almost entirely about the graph theory problem itself and Claude’s solution to it, eschewing broader questions about the nature of machine intelligence and how LLMs are changing life on Earth. To anyone who’s been following AI-for-math lately, the fact that Claude now can help with this sort of problem won’t come as a great shock. The virality is presumably because Knuth is such a legend that to watch him interact productively with an LLM is sort of like watching Leibniz, Babbage, or Turing do the same.
John Baez is a brilliant mathematical physicist and writer, who was blogging about science before the concept of “blogging” even existed, and from whom I’ve learned an enormous amount. But regarding John’s quest for the past 15 years — namely, to use category theory to help solve the climate crisis (!) — I always felt like the Cookie Monster would, with equal intellectual justification, say that the key to arresting climate change was for him to eat more Oreos. Then I read this Quanta article on the details of Baez’s project, and … uh … I confess it failed to change my view. Maybe someday I’ll understand why it’s better to say using category theory what I would’ve said in a 100x simpler way without category theory, but I fear that day is not today.
2026-02-28 03:37:50
I don’t have time to write a full post right now, but hopefully this is self-explanatory.
Regardless of their broader views on the AI industry, the eventual risks from AI, or American politics, right now every person of conscience needs to stand behind Anthropic, as they stand up for their right to [checks notes] not be effectively nationalized by the Trump administration and forced to build murderbots and to help surveil American citizens. No, I wouldn’t have believed this either in a science-fiction movie, but it’s now just the straightforward reality of our world, years ahead of schedule. In particular, I call on all other AI companies, in the strongest possible terms, to do the right thing and stand behind Anthropic, in this make-or-break moment for the AI industry and the entire world.
2026-02-20 14:31:16