The New Yorker

A weekly magazine since 1925, The New Yorker blends insightful journalism, witty cartoons, and literary fiction into a cultural landmark.

Trump and the Iran Deal That Wasn’t

2026-04-24 04:06:01


So how, exactly, does America’s war with Iran, the one that Donald Trump said would probably be over in a couple of days, or four to six weeks, nearly eight weeks ago, end?

Since Trump has thus far failed to achieve the peace deal that he—and the world’s financial markets—had anticipated by the end of his two-week ceasefire with the hard-line Iranian regime, the conflict has entered into a liminal state that Gideon Rachman, of the Financial Times, called “the fog of peace.” It’s a murkiness befitting a President who has conducted this conflict in the Middle East as a one-man smoke machine obscuring reality behind such a cloud of lies and disinformation that it’s difficult to imagine that even Trump himself could keep straight what is real and what is fiction. On Monday, he told the New York Post that Vice-President J. D. Vance was in the air, en route to Pakistan, to seal an agreement with Iran. But Vance had never left, and days later he still hadn’t. By Tuesday, after variously threatening to bomb all of Iran to smithereens and claiming that he was on the brink of a “FAR BETTER” deal to halt Iran’s nuclear program than any of his predecessors, Trump unilaterally announced an indefinite ceasefire.

As of Thursday morning, Trump was publicly demanding that the U.S. Navy “shoot and kill” any Iranian boat dropping mines in the Strait of Hormuz and then, half an hour later, insisting that “we have total control” over the “Sealed up Tight” strait. Also, the President wanted Iran’s leadership to know that he doesn’t need a deal; however, he might kill any Iranian negotiator who did not give him what he wanted. The bottom line appears to be that more negotiations may or may not take place in Pakistan soon and that there may or may not be an unofficial new Trump deadline of this weekend for Iran to come back to the table. Got that?

One safe conclusion amid the confusion is that it remains, a decade into the Trump era, extremely difficult to distinguish between Trump in dealmaking mode and Trump in meltdown mode. Was the President lying when he said that Vance was on a plane to Islamabad? Out of the loop? Playing some clever game of head-fakery with his adversaries? At one point earlier this week, Trump gave interviews to four different publications suggesting that he had a deal with Iran and listing specifics, including that the regime had agreed to an “unlimited” suspension of its nuclear program and to hand over all its enriched uranium. Not only was this not true but it was almost impossible to believe that it could ever be true with this Iranian government, as experts quickly pointed out. “They’re running into the same fundamental hurdle that shaped the long decade-plus of negotiations” that led to Barack Obama’s nuclear deal with Iran, Suzanne Maloney, of the Brookings Institution, told the Washington Post, “which is that the Iranians are completely immovable on the question of enrichment.”

In foreign-policy circles, there tends to be a lot of learned discussion of just how one is meant to differentiate the signal from the noise at such fraught geopolitical moments. But the wise men, so far as I know, have historically been silent on what to do about a situation in which it’s the President himself who is responsible for so much of the noise while at the same time seeming oblivious to any of the signals.

Trump’s instability and inability to read his adversaries correctly aren’t the only reasons to wonder: Why would anyone make a deal with this man?

Top of mind for Iran’s negotiators, no doubt, is that Trump could hardly be counted on to keep his word even if they were to reach an agreement. There’s also the very real possibility that a future American President would reject Trump’s deal, just as Trump, in the course of his two terms in office, has rejected so many deals made by his predecessors. The long list includes Trump ordering the U.S. to pull out of the Paris climate agreement (twice), the Trans-Pacific Partnership, the World Health Organization, the Intermediate-Range Nuclear Forces Treaty, the Open Skies Treaty, and, of course, the original Iran nuclear deal negotiated under Obama. In January, he issued an executive order to exit sixty-six different organizations that the U.S. had agreed to participate in during past Administrations, including groups ranging from the U.N. Alliance of Civilizations to the Regional Cooperation Agreement on Combating Piracy and Armed Robbery Against Ships in Asia.

The list raises many unanswered questions, not least of which is the possibility that the United States is now in favor of piracy on the high seas. But the general point is relevant not only to the Iranian government, which has plenty of credibility issues of its own after decades of pursuing a nuclear program while publicly disclaiming any interest in doing so, but more broadly to whether any U.S. guarantees—to anyone—will still be valid beyond the limited time horizon of Trump’s erratic Presidency.

Unfortunately, the events of the past few months have underscored an unpleasant reality of Trump’s character: he is constant only in his faithlessness, whether to his own supposed friends and partners or to the nation’s. Ask the women who have served in his Cabinet, three of whom have been ousted in recent weeks despite their collective self-abasement in Trump’s name, which included calling him “the greatest President in American history” (Pam Bondi) and “the greatest President of my lifetime” (Lori Chavez-DeRemer), as well as extolling Trumpian powers so vast that they apparently involved the ability to personally stop hurricanes (Kristi Noem). Ask the other thirty-one members of NATO, who are no longer certain that the alliance’s fundamental guarantee of mutual defense, enshrined in Article 5 of the treaty that was created in the aftermath of the Second World War, still applies to the Trump-led United States.

Just this week came the news that Trump, having already suspended the resettlement program for Afghans who had helped the United States over the course of its twenty-year war in their country, is now looking to send more than a thousand refugees—whose fates have been in limbo since they fled Kabul—to the Democratic Republic of the Congo. These are people who were forced to leave their homeland because of the work that they did for the United States. When the U.S. military’s withdrawal in 2021 precipitated the Taliban’s return to power, Joe Biden gave America’s word to those who had aided us. Trump has no problem breaking it. He is, after all, a President who has already suspended the entirety of America’s program to allow refugees into the United States—with the exception of a few thousand white Afrikaners who, he claims, are victims of a nonexistent genocide. It is only a measure of the extra cruelty that he and his Administration seem to reserve for the world’s neediest that they now propose to send these Afghans to Congo at a time when it is already suffering one of the world’s worst ongoing humanitarian crises.

Countries—like companies and like people—rely on credibility to get things done. Trump and his allies insist that he has gone to war because Iran “cannot be trusted” to have a nuclear weapon. Fair enough. But can anyone trust him, either? No wonder there’s an impasse. Expect high gas prices and more low, low, low Trump approval ratings. ♦

Daily Cartoon: Thursday, April 23rd

2026-04-24 00:06:01

A man and a woman sit around the table solving a crossword puzzle.
“O.K., if twenty-down is ‘no cap,’ and nine-down is ‘six seven,’ then fifteen-across has to be ‘drip.’ ”
Cartoon by Harriet Burbeck

What Jesus Meant

2026-04-23 19:06:02


[Vice-President J. D.] Vance, who is Catholic, told a conservative audience at the University of Georgia that the pope was wrong to say that disciples of Christ are “never on the side of those who once wielded the sword and today drop bombs.”

—New York Times.

To: The Pope

From: The Offices of the President and Vice-President

Dear Pope,

We have noticed that you said some things about Jesus recently that show you have no idea who Jesus was or what Jesus meant or what He did for a living. We say this not to be mean or critical but simply to point out that you are of very low intelligence about Jesus and should literally maybe get a new job, as we heard yours pays very poorly (not at all?).

Some things we know that might be news to you:

Did Jesus like war? No one knows for certain which side He was really on, as He never spoke much about it. But He definitely wasn’t against war and, as one story goes, once said to “bomb the shit out of them,” in regard to Herod’s army. You can find all of this in a very famous biography of Him called “Bible.”

What Jesus did say—and this is quoted in “Bible”—was that “vengeance is mine but it can also be yours if the price is right” (which is likely where the game show got its name). What did Jesus mean by that? Scholars who have studied “Bible” say that this was largely about arms dealing. Jesus hated communism and liberals and the A.C.L.U., which existed in a form back then, and felt that, if necessary, it was “O.K.” (His word) to take up arms against them and kill them. “Go ahead,” He apparently once said. “What do I care?”

Some other things you might not know about Jesus that we share here in the spirit of friendship and also so you don’t get lippy with the Jesus-talk and make a fool of yourself again:

Jesus said that the last shall be first and the first shall be last. Again, what did He mean? The meaning here is crystal clear: Be first. Get to the head of the line. Except, “Wait,” you might say. “What about the first being last?” Use your head, please. If the last shall be first and the first shall be last, what are the last supposed to do? Be first. Run up there, get to the front. Cut the line. Oh, but there’s an old woman and a baby up there. “How is that my problem?” Jesus might have said. Jesus never talked about children or the elderly and wisely remained a bachelor with no kids but did date casually and was apparently very good-looking.

Do you know how Jesus died? It wasn’t in a traffic accident, as most people think. It was Crucifixion, which is unpleasant, but the upside is that it made Him very famous (likely a four or higher Nielsen rating, were it televised today), so not all bad.

“Why did Jesus have to die?” a lot of people ask. The answer is because some people sin and vote and use the wrong bathroom and criticize others who are the President or Vice-President, which they shouldn’t do, and that’s why Jesus likely died. For other people’s sins.

Have you ever heard the story of Jesus turning water into wine? O.K., well, many people here in America say that a certain someone reminds them of Jesus. We won’t name names, but suffice it to say that this person recently turned wine into Diet Coke—which, granted, was done by accident when a certain someone poured a Diet Coke into a glass of wine. But many in attendance said it was a miracle.

What were Jesus’ thoughts about money? Again—“Bible,” which tells us that He walked into a bank one day to cash a check but the bank happened to be doing business in a church because the main branch was being renovated and apparently Jesus went crazy and turned over the desks of bankers who were trying to close a very big real-estate deal. Which begs the question: Was Jesus rude? Yes, at times. What was His message in turning over the tables, besides showing His rudeness? The message was: Don’t be stupid and rent. Buy. Renting is for saps. That’s what pissed Jesus off. “Cash is king,” He said at one point in “Bible,” toward the end, where they kill the shark and Quint dies.

Our point is this: Get to know Jesus better, Pope. Ask yourself the question each day: “What would Jesus do?” Isn’t that the question we are all trying to answer? And the answer, we think, is that He would go into tech or possibly private equity. ♦

LIV Golf Is Dying of Boredom

2026-04-23 19:06:02


You shouldn’t count other people’s money, but I can’t help thinking that the kingdom of Saudi Arabia could’ve found better uses for five billion dollars than sinking it into the upstart golf league LIV. Several news outlets have reported that the Saudi Public Investment Fund, which has been pumping about a hundred million dollars a month into LIV since the league’s launch, in 2022, will be pulling its funding at the end of the year. The league isn’t officially dead, but it doesn’t seem long for this world. What happens, then, to LIV’s players, who received absurdly lucrative contracts to defect from the P.G.A. Tour, and may be banned from returning? Was this all just a waste of time? What do I do now with all of my team merch for Cleeks Golf Club and the HyFlyers?

In the end, LIV was less evil, as a geopolitical project, than many people claimed, and more soulless, as a sport, than I thought possible. Since LIV’s inception, the press has widely labelled it an exercise in “sportswashing”—image rehabilitation for Mohammed bin Salman, Saudi Arabia’s crown prince, after the murder and dismemberment of the Washington Post journalist Jamal Khashoggi. In LIV’s first season, I spoke with Saudi and U.S. officials, some of whom did business with M.B.S., who said that the sportswashing idea made no sense. Starting an expensive professional golf league was a roundabout way to launder the reputation of a violent autocrat.

LIV was one element of Vision 2030, M.B.S.’s effort to transform and diversify Saudi Arabia into a post-oil economy. The Saudis wanted to attract Western business, and to have Riyadh supplant Abu Dhabi and Dubai as the Middle East’s economic capital. The project was outrageously ambitious; it called for, among other things, building a city from scratch, called Neom, whose plans, at one point, included an artificial moon. Golf was part of the plan’s goal to make Saudi Arabia a sports-and-entertainment destination; the leaders of the PIF, the Saudi sovereign-wealth fund, also thought that golf would attract business executives, and the league ended up being a useful tool with which to curry favor with President Donald Trump, who would go on to host LIV tournaments at his golf clubs. It didn’t hurt that Yasir Al-Rumayyan, the PIF’s governor, was an avid golfer. If LIV was a vanity project, it would have been Rumayyan’s. M.B.S. is a video-game guy. Although he has loomed large in the golf world’s imagination, there have been no indications that, to any disproportionate degree, the business mattered to him.

Vision 2030, as a whole, has let some air into the repressive Saudi state. It has been a liberalizing force, moderating religious rule, neutering the religious police, and expanding the rights of women. The country, for the first time in decades, has cinemas. There are concerts, comedy shows, malls, sports. For the golfers, this didn’t make the moral choice of joining LIV any less fraught. Some gestured, comically, at their excitement at helping to reform Saudi society. (“We’ve all made mistakes,” Greg Norman, LIV’s first C.E.O., said, of Khashoggi’s murder.) Many golfers, frustrated by the P.G.A. Tour’s status as a near-monopoly, in which players had little negotiating power, simply used LIV as leverage to get paid. Phil Mickelson told the golf writer Alan Shipnuck, “We know they killed Khashoggi and have a horrible record on human rights.” He went on, “They execute people over there for being gay. Knowing all of this, why would I even consider it? Because this is a once-in-a-lifetime opportunity to reshape how the P.G.A. Tour operates.” One golf agent told Shipnuck, “What you have to understand about professional golfers is that they are all whores.”

More than fifty golfers were willing to debase themselves in order to grab some of the Saudis’ cash. Mickelson, who ultimately joined, and was paid some two hundred million dollars, played a round with Rumayyan, and could be heard gushing, “Great shot, Your Excellency!” Jon Rahm was reportedly paid at least three hundred million. The money bitterly divided the golf world. Loyalists to the P.G.A. Tour, the dominant existing league, viewed the defectors as sellouts who gave up on real competition. The LIV defectors saw the loyalists as hypocrites who were jealous of their deals. Both were probably right.

It wasn’t that the money ruined the sanctity of the sport. If money did that, we’d have no sports at all; Juan Soto makes nearly eight hundred million dollars to play for the Mets. Rory McIlroy, perhaps the P.G.A. Tour golfer most vocally opposed to LIV, told me he wasn’t offended by the greed: “I’m gonna make a shit ton of money here, that’s the thing!” LIV’s problem was that it was a sport run by people who seemingly misunderstood something fundamental about sports: you can’t manufacture attachment. LIV’s most innovative idea was to have golfers play on teams, rather than only as individuals. Teams were assembled via a draft, which, it turned out, was a sham—some players had agreed, beforehand, on who’d play where. The team names (Crushers! Fireballs!) and branding (“Just like a majestic performance on the golf course, the Majesticks Golf Club team identity aims to sparkle and excite!”) had the charm of a consultant’s pitch deck. Rooting for them felt like rooting for a brand of toothpaste. Sports franchises everywhere can be tacky, rapacious, incompetent, extortionate, and otherwise exploitative, but only because their customers, the fans, are essentially captives. Your team is like your family; the new owner may be an oligarch or a war criminal, but what are you going to do, leave? LIV was as if the biggest boor you know tried to pay you to become your uncle. It didn’t work. I attended a LIV tournament at Trump Bedminster, in New Jersey, where people were doing almost everything other than watching the golf: drinking to excess, gawking at Tucker Carlson hanging out with Donald Trump. I saw one guy, next to a green, watching porn on his phone.

LIV lost a staggering amount of money. In 2024, LIV’s U.K. entity alone reported revenues of sixty-five million, and expenses of around five hundred and twenty-five million. Many assume its American branch is a similar money pit. This surprised no one, except, apparently, the Saudis. LIV incinerated so much money that one can understand why it was taken to be a sportswashing exercise. It was, however, a business proposition, albeit one with metrics in addition to just profit and loss—proximity to Trump, wooing foreign executives, the knock-on effects of building up a domestic sports-and-entertainment sector. Still, it was supposed to make money. Eventually, the Saudis found better means of achieving their strategic goals. A two-billion-dollar investment in Jared Kushner’s private-equity firm helped court Trump. Executives, it turned out, would come to Riyadh to beg for money whether they could do so on a golf course or not. (Also, the five billion and counting spent on LIV could’ve been used to build actual golf courses; the country still has only around a dozen.) M.B.S. has recently been more interested in investing in movie studios. The PIF contributed twelve billion dollars to Paramount’s takeover bid for Warner Bros. Discovery. (Films and CNN, incidentally, are more effective means of shaping perceptions.)

LIV’s imminent demise comes now for two reasons: the war in Iran stressed the Saudi economy, and the PIF has refocussed after years of underperformance. The Wall Street Journal reported that the PIF, which holds more than nine hundred billion dollars in assets, is “strapped for cash.” It’s pulling back on Neom, the city from scratch, which means they’ll probably be stuck with just the regular old moon. It has stopped work on a hundred-mile-long “horizontal skyscraper” called the Line; all that’s left is a seventy-five-mile-long trench in the desert. One expert estimates that the fund had a return near zero in 2024; the S. & P. was up twenty-five per cent that year. Between 2017 and 2025, the PIF’s annual return has averaged seven per cent. The S. & P. averaged double that. A hedge-fund manager presiding over such failure would be out of a job.

When the reports surfaced that the Saudis were cutting off LIV’s funding, the league was playing a tournament in Mexico City. The TV broadcast cut out—technical issues, LIV said—but it was ominous. Players, who are paid in quarterly installments, didn’t get their paychecks when they expected to. According to the golf writer Eamon Lynch, they debated whether they should refuse to play. The PIF ultimately agreed to keep the league funded for the rest of the season, and the checks went out, but the league appears to be on a death march. Earlier this week, the hosts of one golf podcast began recounting the results of the Mexico City tournament, when they recognized the futility of the exercise: “This just feels like talking about who’s winning the dominos game on the Titanic.” Scott O’Neil, LIV’s C.E.O., is trying to find more investors. Maybe he will, but the odds aren’t great. In February, O’Neil told the Financial Times that LIV was five to ten years away from turning a profit. A few months ago, the league was working with Citibank to try to sell stakes in the individual teams. Earlier this year, the league said it had hoped the franchises would eventually be worth a billion dollars each. Now the league says it hopes to sell the teams for three hundred million dollars each. With seemingly little revenue, it is unclear where such a valuation comes from. As McIlroy explained to me, a few years ago, the value of the franchises is “all tied to the economics of the league, and right now that league doesn’t have any economics.”

What did LIV accomplish? M.B.S. is now a normalized leader and source of funds, but that was hardly LIV’s doing. A lot of golfers got paid. But over four years, I never got a convincing sense that the competition mattered to anyone. The stakes felt empty. For all of the P.G.A. Tour’s problems, at least you knew the players were desperately trying to win. For a sport to work, you need the players to care. Absent that, once you got past LIV’s business drama, and the sniping and backbiting, what you were left with was watching sensationally wealthy, morally compromised middle-aged men go to work. The market for such a product is saturated. If you find yourself missing it, next season, might I suggest tuning into a Cabinet meeting? ♦

Why Earnestness Is Everywhere

2026-04-23 19:06:02




Cynicism is widely considered a defining quality of our conspiracy-addled, irony-poisoned age. But audiences and creatives alike now seem ready to cast it aside in favor of an attitude that’s long been out of style: earnestness. On this episode of Critics at Large, Vinson Cunningham, Naomi Fry, and Alexandra Schwartz trace this trend from the outer-space buddy comedy “Project Hail Mary” to the real-life Artemis II mission, whose crew has spoken movingly about Earth as a “lifeboat” in the middle of a vast, mysterious universe. The hosts also consider two buzzy new books—Lena Dunham’s “Famesick,” and “Transcription,” by Ben Lerner—which find their authors turning to earnestness in midlife, after precocious beginnings. In this era of political, economic, and environmental precarity, younger generations, too, have come to celebrate big feelings, rather than living in fear of seeming cringe. “We’ve just seen too much awful stuff, and it’s impossible to ironize,” Cunningham says. “The only sane response to that is to kind of sober up and say, ‘All right, what resources do humans still have?’ ”

Read, watch, and listen with the critics:

“Project Hail Mary” (2026)
“The Pitt” (2025-)
“Love on the Spectrum” (2022-)
“Heated Rivalry” (2025-)
“Famesick,” by Lena Dunham
“Girls” (2012-17)
“Transcription,” by Ben Lerner
“Climbing Cringe Mountain with Gen Z” (The New York Times)
“Amos & Boris,” by William Steig
László Krasznahorkai’s Nobel Prize lecture

New episodes drop every Thursday. Follow Critics at Large wherever you get your podcasts.

What Will It Take to Get A.I. Out of Schools?

2026-04-23 19:06:02


I don’t like A.I., and I am raising my children not to like it. I’ve been telling them for years now that chatbots are manipulative and dangerous, that A.I.-image generators are loosening our collective grip on reality, that large language models are built atop industrial-scale intellectual-property theft. At times, I find myself speaking with my kids about A.I. in the same terms that we might discuss a creepy neighbor who lives down the block: avoid eye contact, cross the street when you walk past his house, and, when in doubt, call on a trusted adult. Yes, I, too, have suspected that the creepy neighbor walks on cloven hooves inside his Yeezy Boosts, but he probably isn’t going anywhere—in fact, he keeps buying up properties around town—so just try your best not to engage.

Somehow, I was not prepared for the creepy neighbor to start hanging around my kids’ schools; somehow, I thought we had until high school. In February, my son, who is in third grade at a public K-5 in Massachusetts, came home with a piece of paper in his backpack which read “Certificate of Completion,” for “demonstrating an understanding of the basic concepts of Artificial Intelligence.” He and his classmates had earned this honor, I learned, by playing a computer game produced by the nonprofit Code.org in partnership with Amazon Future Engineer, called Mix & Move with AI, in which the student “designs” a cartoon dancer and “remixes” a popular song—available, needless to say, on Amazon Music. The game is an inane drag-and-drop affair that has little to do with A.I.; the certificate, it turned out, was merely a memento of a pointless and deceptive branding exercise.

Then, in March, students at my eleven-year-old daughter’s public middle school began receiving new Google Chromebooks, and that is when I heard the tap-tap of the cloven hooves approaching our doorstep. The Chromebooks, which the students use in every class and for homework, came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: “Help me write.” If she is starting work on a slide-show presentation, the prompt is “Help me visualize.” She shoos away these interruptions, but they persist: “Help me edit.” “Beautify this slide.” The image generator is there, if she’d ever wish to pull the plug on her imagination. The Gemini chatbot is there, if she ever wants to talk to no one.

So many times, so many times, I warned her about the creepy neighbor. Now he reads her poems and knows her passwords. He’s always watching through the screen.

No single company has a monopoly on A.I. in K-8 education. In Boston’s public schools, sixth graders used chatbots powered by OpenAI’s ChatGPT and Anthropic’s Claude to prepare for this year’s statewide standardized tests. In New York’s and Los Angeles’s school districts, among others, kindergartners talk to a gamified reading bot called Amira, which records children’s voices in order to provide A.I.-driven feedback. A public-school parent in Brooklyn told me about a second-grade art class in which the students can cook up A.I. slop using Adobe Express for Education. When a group of fourth graders in Los Angeles used the same Adobe program to design a Pippi Longstocking book cover, it spat out highly sexualized images.

Google has an institutional advantage over its A.I. competitors in the form of the Chromebook and its built-in “learning management system,” Google Classroom. During the COVID-19 pandemic, as school districts scrambled to set up remote learning, many of them found a cheap and easy option in the Chromebook, which strikes me as little more than a slow browser connected to a janky trackpad. A report by the U.S. Public Interest Research Group noted that, by the last quarter of 2020, year-on-year sales of the device were up by two hundred and eighty-seven per cent. In a national survey conducted by the Times last November, about eighty per cent of K-12 teachers said that their districts use Chromebooks, which has created a vast captive market for Gemini and helped make A.I. in schools a near-universal prospect.

Support for generative A.I. in elementary and middle schools clusters around the belief that early exposure to the technology will foster digital-media literacy, give students a foundation in engineering concepts, and prepare them for a future in which most professions are steeped in A.I. Proponents say that teachers can use A.I. to save time on grading papers and tedious administrative tasks; they also tout the adaptive-learning aspects of A.I. tools, which adjust in real time to a child’s progress and, by producing troves of data, help teachers give individualized attention to each student. “One of the core things that we think about when we bring A.I. to education institutions is: How do you put the educator at the center of that experience?” Shantanu Sinha, who is one of the V.P.s of Google for Education, told me. Gemini’s aim, Sinha went on, is to “empower the educators” in “creating richer experiences. We are not the pedagogical experts.”

Other advocates suggest that A.I. might eliminate the need for pedagogical expertise altogether. Alpha, a fast-growing private-school chain that employs “guides” instead of teachers and serves children as young as four, claims that it “harnesses the power of AI technology to provide each student with personalized 1:1 learning,” allowing kids to “crush academics in just 2 hours” per day, according to its website. At a recent White House summit on children and tech, Melania Trump appeared alongside Figure 03, a humanoid contraption by the robotics company Figure AI, which looks, sounds, and moves as if Eve from “WALL-E” had mated with an arthritic Imperial Stormtrooper. The First Lady asked her audience to imagine such an A.I.-powered robot as a teacher, one who is “always patient and always available” to its student. This lucky pupil will learn more quickly and have more time for friends and sports, Trump said; he or she will become “a more complete person.” Figure 03’s face is literally a black screen: a robotic balaclava.

The message from the White House—and, often, from tech companies and public schools—is that Figure 03 and its A.I. militia are irreversibly here, and belong everywhere, and we should feel terrified but also “empowered,” and that the more time and resources we hand over to them the less they will hurt us, hopefully, maybe. Last month, New York City’s Department of Education began soliciting public feedback on its preliminary guidelines for using A.I. in K-12 classrooms, which include this admonishment: “The question is not whether AI belongs in schools. The question is whether we will collectively build a system that governs AI to serve every student and every stakeholder.”

It’s quite the rhetorical suplex—opening a debate by declaring its central premise off limits. But, as we know from hallucinating chatbots, saying something doesn’t make it so. Countless studies have sown doubt about the place of A.I. in pedagogical settings. “The integration of LLMs into learning environments,” a 2025 study out of M.I.T. cautioned, “may inadvertently contribute to cognitive atrophy.” (The authors appended an F.A.Q. to the paper with instructions on how to discuss its findings: “Please do not use the words like ‘stupid’, ‘dumb’, ‘brain rot’, ‘harm’, ‘damage’, ‘brain damage’, ‘passivity’, ‘trimming’ and so on.”)

More recently, Education Week published findings from an analysis of data from some thirteen hundred U.S. school districts, which found that about one in five student interactions with generative A.I. “involved cheating, self-harm, bullying, and other problematic behaviors.” This month, a study by researchers from M.I.T., Carnegie Mellon, U.C.L.A., and the University of Oxford showed that people who used L.L.M.s to solve math problems involving fractions and then lost access to A.I. assistance “perform significantly worse without AI and are more likely to give up. . . . These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.” (This research has not yet been peer-reviewed or published in a scientific journal.) And, at the start of the year, the Brookings Institution released a “premortem on AI and children’s education,” which paired analysis of about four hundred research studies with hundreds of interviews with students, parents, educators, and technologists, and concluded that A.I. tools “undermine children’s foundational development.”

The main arguments against the use of generative A.I. in children’s education are threefold. The first is that L.L.M.s encourage cognitive offloading before kids have done much cognitive onloading—that is, if these tools cause atrophy of thought in adults, then we can scarcely overestimate the potential effects on a brain that has not developed those cognitive muscles in the first place.

The second is that chatbots, which mimic emotional intimacy and tend toward sycophancy, warp how children forge their selfhood and relationships. Around age ten or eleven, kids are “suddenly developing more sophisticated relationships and social hierarchies,” Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, told me. “A lot of that can be traced back to surging oxytocin and dopamine receptors. Oxytocin makes us want to bond with peers, and dopamine makes it feel good when we get positive feedback.” When a fawning L.L.M. enters the chat, “it’s hijacking the biological tendency to want peer feedback,” Prinstein said. Tweens do a lot of mutual emotional disclosure in the normal course of growing up, he went on, “but if they’re going to a chatbot, they miss out on practicing skills that we use for the rest of our lives.”

The third complaint against the use of A.I. in schools is that it confuses ends and means, privileging the most efficient route to the correct answer, the crispest thesis statement, or the neatest drawing over the messier and less quantifiable process of building a thinking, feeling person. “We are potentially undermining complex thinking, changing the development of sociality, and mistaking the learning goal,” Mary Helen Immordino-Yang, who is a professor of education, psychology, and neuroscience at the University of Southern California, told me. “We are cutting off learning at the knees.”

Even some pro-A.I. education advocates concede that A.I. poses significant cognitive and social-emotional risks to young people. Amanda Bickerstaff is the co-founder and C.E.O. of the organization AI for Education, which provides training for educators and students on generative A.I. literacy. “Children should not be using chatbots under age ten,” Bickerstaff told me. “These tools require expertise and evaluation skills that even many adults don’t have.” Google’s decision to make Gemini available to all ages, she said, marked one of the few times in her career that she has lost sleep over a work-related matter; she recalled thinking, “They so clearly know that this is going to be bad for kids, and yet they’re still going to do it.” Bickerstaff went on, “I don’t think they’re asking really basic questions like, ‘If a kid can immediately make a picture instead of draw one, what will happen to that kid’s ability to think on their own and draw?’ ”

Drew Bent, who leads education research at Anthropic, told me, “It’s not our place as a company to say, ‘O.K., use A.I. at this age—don’t use it at that age.’ ” Like Sinha, at Google, Bent emphasized that his team has focussed more intently on how teachers engage with A.I., through tools such as Amira and MagicSchool, both of which are partly powered by Claude. “You have to already be at a certain level of critical thinking which you develop over childhood,” Bent said. “Before teachers even put an A.I. tool in the classroom, they have to get to those skills of ‘When can you trust the source?’ A.I. models can come off as very authoritative, very confident.” Case in point: Bent was one of two Anthropic employees to tell me that the Claude chatbot is intended for users who are at least eighteen years old. But, when I mentioned this in passing to Claude, it replied with a “small correction,” saying that the cutoff is actually thirteen.

Some of my daughter’s old schoolwork is stored on her new Chromebook, including a slide show she made last year, in fifth grade, about the history of the printing press. I recall gently encouraging her, before the project was due, to rearrange some of the pictures and to reconsider her choice of black-on-dark-blue type design; she just as gently rebuffed me. The other day, for the purposes of this article, we ran the slide show through Gemini’s beautifying and editing process in Google Slides. Gemini scrubbed and buffed the captions; inside thirty seconds, it had symmetrically shuffled the pictures, added a bunch of its own, and revamped the typography, which was now bigger and easy to read, evocative of fifteenth-century movable type, and set against a contrasting backdrop of aged vellum.

Comparing the two slide shows felt, for me, like the mother-daughter pool race in “Mommie Dearest,” with Gemini playing the role of Joan Crawford: I’m bigger and I’m faster; I will always beat you. My daughter was unmoved. “I like mine better, because it’s original and I worked really hard on it,” she said. “I like mine better because it didn’t take thirty seconds.”

Immordino-Yang told me that the ultimate goal of any school assignment is not the finished project itself but the experience of having done it—an experience that A.I. tools are intended to abbreviate or obviate. With their prettifying intrusions and impatient, lurking presence, they block and reroute a young person’s natural, gradual progression toward cognitive maturity, “especially one who is still developing the neuropsychological substrate for creating narratives and thinking through arguments over time,” Immordino-Yang said. “It’s a fragile process, and it’s being interrupted.” Put another way, she said, “We don’t say to the parents of an eight-month-old, ‘Don’t encourage your child to crawl—that’s a useless skill.’ ” (A fixation on what is “useful” and not has also led to the decline of handwriting in school, despite its proven role in the development of motor skills, language-processing, and working memory.)

Amy Finn, an associate professor of psychology at the University of Toronto, told me that “part of the magic of how kids learn is that they have less knowledge of what they’re going to experience and fewer expectations about what’s going to be relevant. They don’t have that adult filter of strategically extracting things from their experience, and so they retain a lot of unexpected details that adults would find irrelevant. That allows them to be creative in ways that adults are not creative.” The child brain’s tendency toward delightful non sequitur and unpredictable meanderings is not aligned with an L.L.M.’s orientation toward speed and sleekness and summary, toward frictionless, rational outcomes. (An obsession with outcomes-over-process is also a hallmark of the universally loathed style of instruction known as “teaching to the test,” which began taking hold in American classrooms in the early two-thousands, after the No Child Left Behind Act tied federal funding to standardized assessments.)

The question of what a child finds relevant or irrelevant also arose in my conversation with Sinha, of Google for Education. I asked him for a few A.I. best-use cases that an elementary-school teacher might consider. “You could use Gemini to create a children’s story that isn’t just an arbitrary children’s story,” he said, “but you could bring in context of your classroom, or even pictures, and work with Gemini then to say, ‘Hey, here’s a storybook that we can all read together that makes it a little bit more relevant, a little bit more personalized.’ ” He offered another example. “Maybe a child had a drawing that they were proud of, and the teacher can select one, and put that into Google Vids”—the company’s A.I. video-generation and editing app—“and animate it into a really interesting video of that drawing, which immediately engages and hooks students in a very different way.” He added that, by using A.I. tools, students are “able to create much more impressive projects that you could have never done before.”

But why and in what ways does a child’s story or drawing need to be “impressive”? Impressive to whom? And should it leave the impression that it was made with A.I.? “This is where I could go back to an educator,” Sinha said. “Like, what do you want from this?”

In the nineteen-twenties, the American psychologist Sidney Pressey invented a “teaching machine,” about the size of a typewriter, that could administer a multiple-choice test and grade it in real time. As Audrey Watters writes in her 2021 book, “Teaching Machines,” the ed-tech innovators of yore—including Pressey’s better-known rival B. F. Skinner—spoke about their devices “in ways almost identical to those who push for personalized learning today, all so that, as Pressey put it, a teacher could focus on her ‘real function’ in the classroom: ‘inspirational and thought-stimulating activities,’ including giving each student individualized attention.” (Skinner once proclaimed that the act of grading papers was “beneath the dignity of any intelligent person.”)

Over a century of technological change, the ideology of ed-tech has remained constant: that the latest innovation—whether teaching machines, Khan Academy video tutorials, or chatbots—is perpetually on the verge of launching a new era in personalized learning, one that will prove liberatory both to overworked teachers and underengaged students. This durable belief was evident in my conversation with Bent, of Anthropic’s education-research team, who spoke of A.I. tools “giving teachers more one-on-one time with students.” He went on, “When a teacher has thirty students, it becomes very hard to track where all the students are at, to create custom activities for all of those students.” But, with Claude, “we see a teacher who has thirty or thirty-five students in their class doing what a teacher would do if they had five students in their class, but just doing it better.”

The feasibility of such a scenario is yet to be established. But a new educator training program called the National Academy for AI Instruction may offer teachers a chance to stress-test some of the many promises that the A.I. industry has made to their profession. The academy, which is headquartered at the United Federation of Teachers’ office in Manhattan, is a joint project of the U.F.T. and the American Federation of Teachers, and is funded via a twenty-three-million-dollar partnership with Microsoft, OpenAI, and Anthropic. The in-person and online classes offered by the academy are intended to help educators “not accept the inevitable but navigate it,” Randi Weingarten, the president of the A.F.T., told me.

On first appraisal, the National Academy for AI Instruction might sound like manufactured consent in the form of a webinar, bought and paid for by a cabal of tech giants. Yet it’s difficult to come away from conversations with Weingarten thinking she’s an A.I. booster, or, for that matter, a supporter of ubiquitous Chromebooks in classrooms. “The more people rely on A.I., the more people are not thinking,” she said. “We need more paper and pencil, more hands-on learning, and fewer screens.”

And if union members disagree with their district’s pro-A.I. policy, or if they don’t want Gemini barging into their students’ workspaces? “We will defend them,” Weingarten said. “This is all going so fast, and part of my goal is to give teachers permission to object.” The teachers’ unions declined to partner with Google, Weingarten said, because the company “would not make the representations about protecting students and staff safety and privacy that we were looking for.” (Sinha disputed this, saying that Gemini complies with federal regulations, that student data is not leveraged for profit, and that chats with students are never seen by humans or used to train A.I. models. Additionally, a Google representative said, in an e-mail, “Based on conversations with our teams internally, we have no knowledge of AFT raising privacy concerns with us before launching” the A.I. academy.)

Other teacher- and parent-led organizations are likewise trying to build permission structures for limiting A.I. use in schools. Craig Garrett, whose child attends a public school in Brooklyn, told me that he started a WhatsApp group of concerned parents in June, now called District 14 Families for Human Learning, after he discovered that his then kindergartener had been reading to the Amira bot in class all year. (Activists have questioned whether classroom use of Amira, by recording students’ voices, violates a New York State education law forbidding the “unauthorized release of personally identifiable information.”) Garrett is also part of the Coalition for an A.I. Moratorium, a citywide group of educators, parents, and students that is petitioning the New York City mayor, Zohran Mamdani, and Kamar Samuels, the schools chancellor, for a two-year pause on A.I. in K-12 classrooms.

Also part of the coalition is Naveed Hasan, a public-school parent in Manhattan who serves on a citywide advisory committee on education, and who, as a computer scientist, has worked in A.I. for more than twenty years. “I have a philosophical problem with private companies trying to make intelligence into a utility,” Hasan told me. “They tell us not to worry about intelligence—we will let you subscribe to it, and you will be free to do other things.” He went on, “We need to influence the mayor, and to influence everyone who works for the mayor, to get him to order a stop to all this.”

Members of the Coalition for an A.I. Moratorium maintain that few teachers or parents appeared to have been consulted on New York’s preliminary A.I. guidelines, which do little to address privacy concerns or the potential negative effects of A.I. use on students’ brain development and mental health. The city D.O.E. official overseeing the guidelines, Miatheresa Pate, is a current recipient of a fellowship jointly offered by Google and GSV Ventures, an ed-tech venture-capital firm whose portfolio includes Amira and MagicSchool. (Other names on the current Google-GSV fellowship roster include top school officials in Berkeley, Dallas, Los Angeles, and Newark, and statewide officials in Colorado and Maryland.) “If you ask tobacco companies to help write your school’s policy on cigarettes,” Garrett quipped, “you’re going to end up with guidance on how to smoke responsibly in school.” (In an e-mail, a D.O.E. spokesperson said that more than a thousand “stakeholders,” including families and educators, were “engaged” in drafting New York’s preliminary guidance, and added that, while Amira and MagicSchool are used in some schools, the city “has no centralized contract for either product and use is determined at the school level—not by Dr. Pate.”)

A kindred group, Schools Beyond Screens, was formed last year among parents in the Los Angeles Unified School District, where the superintendent, Alberto Carvalho, is currently on administrative leave following F.B.I. raids of his home and office, in February, allegedly over his ties to a bankrupt ed-tech company that was developing an A.I. chatbot for kids. (Carvalho, who has denied any wrongdoing, is also on the board of Code.org, purveyors of Mix & Move with AI.) Among the goals of Schools Beyond Screens is to enforce closer scrutiny of the lucrative contracts that urban districts enter into with tech companies. “The money spent on tech platforms and replacement Chromebooks is money that could be going to teachers,” Kate Brody, the mother of a first grader in an L.A.U.S.D. school, told me. The group also wants districts to establish clearer consent guidelines around the use of digital platforms and to adopt a Student Tech Bill of Rights, which includes the right to “read whole books,” to “regularly read and write on paper,” and to “a low-stimulation learning environment.”

“It still feels like there’s no place to say, ‘As a family, we don’t believe in this. We don’t think it’s right,’ ” Brody said. “My primary concern with my kids using A.I. is cognitive, but for other parents it’s moral, it’s ethical, it’s environmental. These things were rolled out so quickly, with no consent, and now we are trying to dismantle them.”

What Brody and others are trying to dismantle is already part of a daunting corporate and technological superstructure. Yet there is nothing eternal or canonical or irreversible about this system. Gemini is new, but the spectacle of children hunched all day over a median-nerve-shredding computer manqué is itself a relatively recent and, it would seem, plausibly impermanent phenomenon. Chromebooks in classrooms are not inevitable; we could choose to see them as a stubborn but eminently killable weed of the pandemic, like QR-code menus in restaurants. (The Times recently published an excellent story on how “Chromebook remorse” is taking hold in many U.S. school districts.) Nowhere is it written that a multinational conglomerate with a market cap of roughly four trillion dollars is fated to command our public schools, or to grant fellowships to the leaders of those schools, or to monetize the inefficient children who attend them. Another item in the Student Tech Bill of Rights, in fact, is the “right to a learning environment that is free from undue corporate influence.”

Brody told me that anti-A.I. advocacy in education is tricky because screens have become virtually synonymous with school, and A.I. is increasingly synonymous with screens. “You have to be more surgical about it than with a lot of other problems,” she said, “unless you’re going to, like, take the computers and chuck them into the sea.” But why not? I thought back to what Sinha had asked me: “What do you want from this?” What if the answer is nothing? ♦