Mother Jones

Graham Platner Apologizes for Using the R-Word

2026-04-16 22:37:00

On Wednesday, Maine Democratic Senate candidate Graham Platner apologized for using the r-word in an article in the Maine Monitor, saying that he is “sorry that I said it” and “I am endeavoring to improve every single day.”

Platner used the r-word while dismissing concerns people had raised about his tattoo, a Totenkopf, a symbol that continues to be embraced by neo-Nazis.

On Monday, I was the first reporter to highlight the problem with Platner using the r-word to dismiss concerns about the tattoo in a separate article published over the weekend. As I noted, the term is deeply offensive to many disabled people, and just because President Donald Trump has an affinity for it doesn’t mean other politicians should use it as well. Platner also previously used the r-word on Reddit, along with making racist comments.

Considering how the Trump administration has targeted disabled people—including enacting brutal cuts to Medicaid that will alter the services some disabled people receive—some may argue that critics are overreacting to Platner’s use of the r-word. But holding himself to a higher standard than President Donald Trump is a good thing for a candidate hoping to help flip the Senate to Democratic control.

I asked Platner’s campaign on Monday before publication about why Platner was still using the r-word, despite disabled people calling out the offensiveness of using the term for years. I have still not received a response.

In his apology at a press gaggle, Platner did not specify what work he has done to engage with disabled Mainers or what work he plans to do. He did, however, say that he continues “to try to be better.”

Inside That Weird Anti-Science Conference Where Trump’s EPA Chief Delivered the Keynote

2026-04-16 19:30:00

This story was originally published by the Guardian and is reproduced here as part of the Climate Desk collaboration.

As scientists confirmed that March was the United States’ most abnormally hot month in recorded history, dozens of climate deniers gathered to promote misinformation and tout their newfound influence on federal policy.

At a conference hosted by the prominent science-denying think tank the Heartland Institute last week, a crowd of mostly middle-aged men in suits claimed the world is finally waking up to the idea that the climate crisis does not exist. “I feel wonderful,” James Taylor, president of the Heartland Institute, said in an interview. “The truth is winning out.”

The clearest sign of the crowd’s rising power was the gathering’s keynote speaker: Lee Zeldin, the administrator of the Environmental Protection Agency (EPA), whom President Donald Trump is also reportedly considering for attorney general. “It is a day to celebrate vindication,” Zeldin said on Wednesday morning.

In previous administrations, Zeldin said, a “cabal” of elites promoted climate science to further their agenda. Now, “we aren’t just following blind obedience to whatever the dire, doom-and-gloom prediction of the day is,” he said.

There is scientific consensus that global warming is real and urgent, and caused primarily by the burning of fossil fuels.


As people entered the event, held in the basement of a hotel near the White House, they were greeted by wares promoting climate denial. “Good news,” read a banner outside the main ballroom, erected by the CO2 Coalition, a climate-denying nonprofit that co-sponsored the conference. “There is no climate crisis.”

A table overflowed with displays reading “CO2 is a lifesaver,” pamphlets titled “Fossil fuels are the greenest energy source” and “Challenging ‘net zero’ with science,” and children’s books falsely claiming the acceleration of sea level rise is insignificant. Baskets held buttons proclaiming “Unashamed about my carbon footprint,” as well as stress balls resembling tiny Earths that read: “Don’t stress. There is no climate crisis.”

The event convened climate skeptics and outright deniers alike. While some incorrectly claimed global warming did not exist, others conceded that it was happening but falsely said it was not known to be human-caused—or an emergency.

“I believe humans have played a role in climate change. That is a far cry from saying I believe in a ‘climate crisis’,” said Taylor, the Heartland president, in an emailed response to a question about the scientific consensus around global warming. “It is important not to conflate two very different assertions.”

But presenters seemed to agree on some common false themes: Carbon emissions are harmless or even beneficial, renewable energy is destroying the planet, big tech and the financial sector are collaborating to undermine fossil fuels, and climate science and policy were pushed by powerful “leftist” politicians and media figures.

Naomi Oreskes, a historian of science at Harvard University who has studied climate denialism for 20 years, said rightwing think tanks like the Heartland Institute have long painted themselves as underdogs being squashed by the elite.

“Part of the mentality of these folks is that they present themselves as victims,” she said. “Of course, that’s completely preposterous, because they’re not victims, and in fact many of these people are affiliated with very powerful groups and have been supported by Fortune 500 companies.”

She noted that the Heartland Institute has received funding from Big Oil companies including Shell and ExxonMobil. It has also taken contributions from the Mercers, a family of Republican megadonors.

When the Guardian asked Taylor about where Heartland currently obtains funding, he said the question was “curious and disappointing.” “We are funded by individuals who believe in what we advocate for: We believe in freedom, we believe in affordable energy,” he said.

In an email, Taylor added: “It has been nearly 20 years since Heartland received any money from oil companies. Even then, it was only a tiny percentage of our funding. I would gladly accept oil company funding again.”

He added that Big Oil “openly supports the UN climate agenda and gives far more to climate activist causes than they ever gave to Heartland,” and claimed green groups’ funding was “shady.”

With Trump in the White House, groups like the Heartland Institute, the CO2 Coalition, and the Committee for a Constructive Tomorrow (CFACT)—a rightwing group that complains about “climate exaggeration” and which also co-sponsored the event—are enjoying unprecedented influence.


“Twenty years ago it would have been shocking…for the EPA administrator to take seriously a group of people whose positions are so patently at odds with all of the scientific evidence,” said Oreskes. “But essentially, climate deniers are in charge now.”

During the president’s first term, a founder of the Heartland Institute met with Trump at the White House to advise him on the withdrawal from the Paris climate accord. Last year, a representative said the group had “very strong affiliations” with Trump officials, DeSmog reported.

The group also contributed to Project 2025, an ultra-conservative guidebook for Trump’s second term, and the president has made good on some of the organization’s top priorities. Among them: the repeal of the “endangerment finding,” the legal determination that serves as the basis for virtually all US climate regulations. CFACT’s president, Craig Rucker, mentioned the rollback while introducing Lee Zeldin on Wednesday, and the crowd erupted in cheers.

CFACT, too, has had apparent influence on the Trump White House. Last year, the Trump administration cancelled funding for a California offshore wind project after receiving a request from the group. The CO2 Coalition’s founder also helped form a White House committee to question climate science during Trump’s first term. And last month, the group successfully nominated an ophthalmologist with no background in air pollution science to serve on a crucial air pollution committee, the New York Times reported.

Though conference attendees widely claimed their star was rising, polls indicate that the vast majority of Americans believe in climate change. That is especially true of young people, including 42 percent of young Republicans, according to one recent survey.

Asked about polls showing most Americans believe in the climate crisis, Taylor pointed to a 2019 survey showing most Americans were unwilling to pay even $10 per month in higher electric bills to fight global warming. “Americans lose very little sleep over global warming,” he said. But a Thursday panel, “Bringing Youth into the Climate Realist Fold,” indicated deniers have anxiety about young people’s climate concerns.

“My suggestion is to capitalize on the popularity of climate realism influencers to engineer a hashtag movement, like ‘Me Too’, but for truth,” said CO2 Coalition member Anika Sweetland, who obtained a bachelor of science in climate studies and claims to be a climate scientist, and who has little discernible presence on Instagram or TikTok. “Something like ‘hashtag fact check’ or ‘hashtag my climate wake up’.”

Another panelist, Lucy Biggers, 36, who claimed she made the Dakota Access Pipeline fight at Standing Rock “go viral,” explained that she once considered herself a climate activist because she was “indoctrinated into the groupthink.”

“Young people have been so misled,” said Biggers, who serves as head of social media at the Free Press.

The youth-focused panel was disrupted by activists with Climate Defiance. “Yo, how’s it going my fellow youths,” one disrupter, sporting a suit and a backwards hat, shouted sarcastically before being shoved out of the ballroom. “There’s no such thing as fossil fuel-caused climate change!”

In an interview, an organizer of the protest who requested anonymity for fear of retaliation said the action was intended to ensure the panel was not “allowed to go undisrupted,” especially because the audience “was almost entirely geriatric white men who will not live to see the effects of climate change the way that my generation will.”

“The message that we wanted to bring was that climate change denial is not just a matter of a difference of opinions,” said the organizer, adding that they do not believe efforts to spread climate denial to youth will be effective. “These people think that they are untouchable and that they can spread this kind of misinformation entirely unchecked? No.”

How the American Oligarchy Went Hyperscale

2026-04-16 19:00:00

Two years ago, we devoted an entire issue to the rise of the American oligarchy. Since then, our oligarchic system has become more entrenched and pervasive, revolving around a small crew of tech titans whose quest for wealth and power—in all of its forms—is destabilizing our democracy and reshaping our society. In the May + June 2026 issue, we investigate our new AI overlords and the world they are striving to create, whether we like it or not. Read the rest of the package here.

One of the largest data center projects ever proposed covers a roughly 5.7-square-mile stretch of farmland in the Louisiana Delta hamlet of Holly Ridge. When it is finally completed at a cost of $27 billion—if it is finally completed—it will house 11 buildings and hundreds of thousands of GPUs and consume enough electricity to power New Orleans three times over. The project, named Hyperion after a Titan from Greek mythology, will “unlock historic innovation, and extend American technology leadership,” Meta’s Mark Zuckerberg declared in a Facebook post after returning from President Donald Trump’s inauguration in January 2025—which is another way of saying it may someday power chatbots. The site, he boasted, “is so large it would cover a significant part of Manhattan.” To underscore the point, Zuckerberg helpfully attached an illustration: a jagged lavender rectangle, stretching from the bottom of Harlem to the top of SoHo.

But the thing about building a data center this big is you cannot simply build a data center—you must build a world to go with it. You need three new power plants and transmission lines to connect them to the grid. You need hundreds of millions of gallons of water and miles of pipes. You must pave roads and build new ones, clear fields, and build ponds. You need a port to bring in gravel and dirt from wherever you can get it. You need stoplights and sheriff’s deputies and laundromats. You need thousands of workers and places to house them—executive lodging, cheap motels, and man camps with movie theaters and gyms. Pecan groves will become RV lots; homes will become parking lots or a Dollar General or food truck parks.

When I visited Holly Ridge last November, nearly a year after the project was first announced, the surrounding parish was experiencing a speculative frenzy. It felt like everything that had not already flipped was on the market—and everyone who had not sold out or been priced out was cashing in or thinking about it.

If the economic story of the last two decades was the consolidation of wealth and power in Silicon Valley, the story of the last few years is what tech billionaires want to do with it. Since the summer of 2024, America’s richest men have been on a building spree with few precedents in recent history. The scale of the investment, in the hopes of winning the race to create artificial general intelligence (or AGI), is so vast that proponents have turned to previous eras of reckless extraction and technological advancement to describe it. Energy Secretary Chris Wright called it “Manhattan Project 2.” It is one of the largest investments of private capital since the transcontinental railroads. Expenditures for AI data centers amounted to about a quarter of all GDP growth in the first half of 2025, with the largest companies collectively spending $400 billion on construction projects—many of which won’t be fully operational for years.

Data centers have replaced megayachts as the preferred theater of oligarchic status signaling. Instead of submarines and retractable dance floors, these billionaires tout their compute, their gigawatts, and their acreage. The largest of the new facilities, the so-called “hyperscale” sites where AI models are to be trained, come with names that reflect the pathologies of their founders. Sam Altman’s Stargate in Abilene, Texas, will be “roughly the size of New York’s Central Park,” according to Bloomberg—while OpenAI’s Project Jupiter site in New Mexico could be larger still. Amazon and Anthropic are developing Project Rainier on 1,200 acres outside South Bend, Indiana. Elon Musk trained his Mein Kampf–loving chatbot, Grok, at Memphis’ Colossus 1. Colossus 2 is on the other side of town. The names evoke both ancient and contemporary mythology; in D.F. Jones’ science fiction trilogy, Colossus is the rogue AI that enslaves mankind. (Grok, for its part, has described itself as “MechaHitler.”) Both Zuckerberg and Jeff Bezos have AI projects called Prometheus. There are at least five AI companies named for Icarus.

A housing development near data centers in Sterling, Virginia. (Stephen Voss)

These futuristic fantasies are being planted on the ruins of the past. OpenAI is sourcing data center parts from the Ohio plant where union autoworkers once made Pontiac Firebirds. (Full disclosure: The Center for Investigative Reporting is currently suing OpenAI and Microsoft for copyright infringement.) Meta is building another hyperscale campus in the master-planned community where Jeffrey Epstein once lived. A company called Patmos installed a data center in the building where the Kansas City Star was once printed. Microsoft is reopening Three Mile Island, and developers are renovating robber baron–era steel mills for server farms. A mock-up of a rebuilt Gaza City pitched to the Trump administration by a group of Israeli businessmen envisioned an “Elon Musk Smart Manufacturing Zone” next to a cluster of data centers, tailored to meet US AI regulations—of which, I’m pleased to report, there are vanishingly few. Then Jared Kushner unveiled a similar plan at Trump’s Board of Peace signing ceremony at Davos.

Across the country, third-party agents are stalking bean fields on behalf of anonymous buyers, making big promises about tax revenue and jobs in exchange for a still bigger quantity of water and power. Utilities are keeping coal plants online. The White House is slashing regulations on nuclear safety. Demand for gas turbines to power these facilities is so high that there is a backlog until 2030, and people like Musk are importing power plants from overseas piece by piece, like Italian relics in the Gilded Age. The gold rush is driving a relentless demand for energy (which will nearly triple within the industry by 2030), real estate (nearly 2 billion square feet and counting), and investor cash ($1.6 trillion by 2030). When did you suspect it was a bubble? Maybe it was when former Energy Secretary Rick Perry became the face of a $15 billion project in Amarillo named for Enrico Fermi. Maybe it was when Altman literally said it was.

The AI boom has ushered oligarchy onto a new plane by uniting the monopolistic ambitions of the world’s richest men with the nationalist ambitions of their political champions. In the process, it has sparked a reckoning, in big towns and small and across the political spectrum, over the demand for resources and tax dollars and over what kind of future we might build—about who gets to decide to bet the house and whose chips are simply fodder for the pot. The data centers have, in a sense, transformed opaque structures of inequality and power into literal ones. Oligarchy is now more than an idea; it is a place. Across the country, the empire builders of AI have sold themselves as the gateway to the future you’ve always dreamed of, and the solution to the problems they helped bring about. I hit the road because I wanted to see what this historic disruption was doing to the communities it was purporting to level up. The only thing more disruptive than if the oligarchs are right might be what happens if they’re wrong.


Land is cleared for the Stargate data center project in Abilene, Texas. (Stephen Voss)

When I rolled into Abilene one evening in August, a few weeks before the first machines powered on at OpenAI and Oracle’s joint venture, the parking lot of the Super 8 motel outside of town was filled with trucks, splattered with red clay. The woman at the front desk just laughed when I asked if they’d gotten much business from the data center’s construction. A smoke detector was going off next door and my room smelled like cigarettes; the nightly rate had nearly doubled. Signs advertising short-term housing and RV rentals lined the roads. On a Sunday afternoon, when the rest of the city shut down, crew after crew, in black boots and blue jeans, emptied out of pickups and four-wheelers to grab energy drinks and snacks at the nearest gas station. It was 99 degrees.

The shock of what Trump calls “big, beautiful buildings” is not that their footprint is so unlike anything you have ever seen—from the outside, data centers resemble nothing more futuristic than souped-up fulfillment centers or a series of airplane hangars—but that all these plants are simply there, where nothing once was. “This is the unicorn that comes, like, once in a billion years,” the then–city manager explained when a plan to build data centers on the site first came up for a vote in 2021. “I feel like it was an invasion,” a neighbor told local officials a few years after that—as though a “monstrosity” of “concrete palaces” had simply risen out of the plains.

Altman, who has a propensity for delivering grandiose statements about his industry in the soft-spoken and reflective tone of a philosophy student confessing to a murder, once told the New Yorker, “If I weren’t in on this, I’d be, like, ‘Why do these fuckers get to decide what happens to me?’” This is a pretty good description of what it was like to attend a local planning meeting in America in 2025. According to Data Center Watch, a newsletter published by an industry intelligence firm, an estimated $98 billion in projects were paused or canceled in the face of community opposition in the second quarter of last year alone. Opponents have blocked major deals in Tucson, Arizona; Indianapolis; New Brunswick, New Jersey; and Prince William County, Virginia. In February, protesters fed up with data centers filled the rotunda of the Minnesota State Capitol.

Hyperscale projects have galvanized everyone from singer SZA (“AI is killing and polluting black and brown cities. None of you care cause your [sic] codependent on a machine. Have a great life”) to Dale Earnhardt’s son Kerry, who helped defeat a proposal to turn the Intimidator’s North Carolina land into a technology park, and then-Rep. Marjorie Taylor Greene, who argued that their spread would hasten the arrival of Skynet. Angry ratepayers helped power Democratic sweeps in Virginia and New Jersey last fall.

There is a whole lot of NIMBYism packed into these local fights, often in the most literal kind of way. Arguments about traffic and the character of the community are not unique to the AI boom. But I’ve listened to hours and hours of community meetings, in towns across the country, and you can hear in this opposition a reckoning with something more profound, too. At a county commissioners meeting in Indiana, an attorney for an anonymous developer promised that a $12 billion data center in the town of New Carlisle would be “laid out in a way to be bucolic.” Speaker after speaker threw the word back in his face. New Carlisle already had an $11 billion data center. They knew what it looked like. What New Carlisle didn’t need, one Hoosier told her county commissioners, was to give away its power and water for a technology that would “radicalize our teenagers to be hateful and dangerous or suicidal.” This is not the kind of person who can be swayed by Altman’s promise of erotica on demand.

The facelessness of the buildings is a symbol for the coldness of the corporations themselves. “Why should I trust this company that doesn’t trust themselves enough to let me know who you are?” asked a woman at a meeting in Menomonie, Wisconsin, where a $1.6 billion, 300-plus-acre data center was proposed. Another speaker found it suspicious that the anonymous buyer was headquartered in the “tax shelter” of Delaware.

When I asked Timothy Accola about the proposed project in his Menomonie backyard, he quickly set me straight: “Front yard, really.” Accola, a 38-year-old microbiologist with bushy sideburns like a Civil War general, lives with a Great Dane named Hamlet on the edge of town. He recently installed solar panels and tends a small orchard—peaches, cherries, apples, plums. “I was planning,” he told me, “on staying there the rest of my life.”

But in July, he got back from a work trip to find a letter in his mailbox from the city, alerting him that his neighbors’ farm was poised to become a data center. Accola’s opposition, he admits, carries a strong whiff of self-interest. He dreads the light and noise from a facility that must operate 24 hours a day and believes the proposal has “ruined any sort of value that my property has on a domestic sale market.” But his concerns went deeper.

At a community meeting later that summer, Accola told the city council that he had been reading up about data centers on Reddit and listening to a lot of Ed Zitron—the acerbic tech podcaster who has emerged as perhaps the foremost chronicler of the flimsy finances and false promises of the AI bubble.

“This thing is going to pop. Is it going to pop before or after they finish building this place? Anybody have a realistic answer on that?” he asked. “What are we going to do then when there’s a 5-million-square-foot facility in this field that is absolutely empty?”

Other residents at the meeting invoked Foxconn, whose boondoggle on the other side of the state had produced a tenth of its promised 13,000 jobs after the state offered $3 billion in incentives. (Recently, a new tenant set up shop in the industrial footprint: Microsoft data centers.) Politicians in Oregon have shelled out billions of dollars in incentives to lure data centers. But at one Google site in The Dalles, according to the Oregonian, an offer of $260 million in incentives and a third of the city’s 2024 water supply had resulted in just 200 full-time jobs—many of them off-site.

The same narratives pop up again and again. Communities are essentially given a choice: Approve it fast or watch some other town reap the rewards—and miss out on future investments. (In the end, local officials decided to block both the Menomonie and New Carlisle projects, at least for now.) The anxiety on display in public meetings across the country is over not just what happens if all of these get built, but the very real possibility that many of them will not. There is a long-standing fear, in big towns and small, of a giant company coming in and swallowing up everything else, because that’s what so much of their experience of American capitalism has been. But there’s something even worse than getting another Walmart, and it’s being promised a Walmart and getting only a Spirit Halloween.

A data center under construction near a baseball field in Herndon, Virginia. (Stephen Voss)

The rise of the American oligarchy happened slowly and then all at once. If you were plotting inequality on a chart, you’d see a steady upward slope from Clinton-era deregulation to Bush-era tax cuts to Obama-era techno-optimism and on through Trump’s first term. Then it spiked. Between 2000 and 2020, as tech monopolies consolidated power, the share of the nation’s wealth held by the top 0.00001 percent roughly doubled. By the end of 2025, it had nearly doubled again. It is no longer novel or even particularly accurate to note that the richest Americans control a greater share of resources than they did during the Gilded Age; they are, according to French economist Gabriel Zucman, about three times wealthier. That wealth is increasingly concentrated in a single industry, consumed by a singular purpose.

The American oligarchy is an AI oligarchy. Musk’s net worth has tripled since the 2024 election, to well north of $800 billion, according to Forbes, with a new pay package approved by Tesla shareholders poised to make him the first-ever trillionaire. The world’s six richest men as of early March were all actively involved in AI development; the seventh was Nvidia’s Jensen Huang, whose chips prop up this entire system. There are 10 trillion-dollar companies, and nine of them are in AI.

You don’t have to be an AI hater to think critically about where this is headed. My reporting process for this piece was aided by an AI transcription product that saved weeks of my life. My colleagues have used AI to decipher thousands of Reconstruction-era Freedmen’s Bureau records. Maybe you like vibe coding. Maybe you’ve fallen in love with Claude. But even if the programs being trained at these big, beautiful buildings overcome a propensity for hallucinations and abuse and elevate us to a new level of consciousness, the greatest disruption of this era may be the trade-off it took to get here.

Once the realm of Marxists, madmen, and the French, the manifesto has found new purchase in recent years as the preferred artistic medium of AI’s emperor class, surpassing even the product launch. Billionaire venture capitalist Marc Andreessen’s “Techno-Optimist Manifesto” presaged Big Tech’s great schism with liberalism. Dario Amodei’s “Machines of Loving Grace” signposted Anthropic as the wokest of the would-be world builders—a company that would still do business with an autocratic Gulf state, but without sounding quite so enthusiastic about it. Zuckerberg’s “Personal Superintelligence,” published in the plain-text style of an early 2000s blog post, begins with a line that sounds like the last known transmission from the crew of a spaceship: “Over the last few months we have begun to see glimpses of our AI systems improving themselves.” Altman’s first effort was titled “Moore’s Law for Everything,” and it holds up five years later as a cipher for all that’s followed.

The title refers to the proposition—more a catechism than a tenet of science—that the processing power of cutting-edge computer chips will double every two years. Altman extended that concept to society more broadly, arguing that the compounding progress of AI could, if channeled properly, raise the global standard of living and usher in a new age of abundance. We were entering the Burj Khalifa part of the exponential curve. “‘Moore’s Law for everything’ should be the rallying cry of a generation whose members can’t afford what they want,” Altman wrote at the dawn of this new era. “It sounds utopian, but it’s something technology can deliver.” To which I would simply ask: Does all of this feel utopian?

Altman drew inspiration from the work of Henry George, an economist who operated in an era of industrial transformation to which the data center boom is frequently analogized. George wrote in 1868 that the railroad “kills little towns and builds up cities, and in the same way kills little businesses and builds up great ones.” The first transcontinental route, he predicted, would produce staggering inequality unless new mechanisms were introduced to redistribute its riches. George’s solution, which nearly got him elected mayor of New York, was to tax the value of land instead of labor. Altman proposed a land value tax and a universal basic income—disbursed as annual dividends from the government’s stake in AI companies.

But George did not get what he wanted, and Altman has mostly moved on. In July, not long after publishing a new manifesto called “The Gentle Singularity,” he told podcaster Theo Von (they met at Trump’s inauguration) that “I used to be really excited about things like UBI—I still am kind of excited,” but that simply collecting a check was not going to “feel good.” Instead, Altman now proposed giving everybody on Earth “a slice of the world’s AI capacity,” because “I think what people really want is the agency to kind of co-create the future together.”

In the recent past, the agency to kind of co-create the future was called politics. But there, too, Altman’s thoughts have evolved, in tandem with those of his fellow titans. Although OpenAI was founded on the promise of developing the nascent technology in a thoughtful and sort of utilitarian way—and Altman still goes around saying things like, “We need to level up humans” and “We don’t really know what role money will play in a post-AGI world”—the conversations that AI proponents are having in public right now tell a simpler and less idealistic story: The people who claim to be building the future traded the dream of democratic abundance for a strongman who will make them money.

In September, a few dozen tech luminaries gathered at the White House to promote first lady Melania Trump’s initiative to encourage children to use AI. The ulterior motive for the summit came into focus at a dinner in the residence later that evening, when the president called on the AI moguls one by one to say a few words about their work. You could hear a dull clattering of silverware in the background.

“I just wanted to say thank you,” said OpenAI President Greg Brockman, for the administration’s “optimism.”

“Thank you for being such a pro-business, pro-innovation president—it’s a very refreshing change,” said Altman.

“Thank you for incredible leadership,” said Bill Gates.

“Thank you so much for bringing us all together, and the policies that you have put in place for the United States to lead,” said Microsoft’s Satya Nadella.

“Thank you for setting the tone,” said Apple’s Tim Cook.

It’s not just the chatbots, it turns out, that tend toward sycophancy. This was the kind of scene that a lesser autocrat might have kept off camera but that Trump found value in showing to the world. The AI executives offered him ballroom donations, settlement checks, and legitimacy. He offered them deals, deregulation, and deference. It was important that everyone understand the arrangement.

This alliance stemmed not just from convenience, but a shared sense of purpose. For years, Silicon Valley harbored a nagging insecurity about its Social Network era. There was a belief (not at all wrong) that so much venture capital cash had been wasted on frivolous things and that the industry promoted cosmopolitan decadence in place of nationalism. Google did business in China, but not with the Pentagon. The manifesto for the Founders Fund, the venture capital group Peter Thiel helped found, lamented, “We wanted flying cars, instead we got 140 Characters”—an argument that so radicalized Vice President JD Vance, it led to a religious conversion.

Wealth is increasingly concentrated in a single industry, consumed by a singular purpose. The American oligarchy is an AI oligarchy.

Now the pendulum has swung the other way. When Vance, in a speech to tech investors last year, declared, “We are a nation of builders,” he was deliberately echoing yet another manifesto—Andreessen’s pandemic-era “It’s Time to Build,” which urged conservatives to offer “uncompromised political support…for aggressive investment in new products, in new industries, in new factories, in new science, in big leaps forward.” It’s not as if the Biden administration had been especially hostile to AI development. It had pushed semiconductor manufacturing onshore and cheered on the hyperscale age. The data center boom started on Biden’s watch. But Trump offered the embrace of nationalism and the stench of carbon.

The AI boom fused the administration’s desire to protect its “civilization” with Big Tech’s desire to build a better one. Zuckerberg—sporting Caesar curls and Meta glasses—has styled himself as an Augustan world builder out to “advance the frontier.” He teamed up with Palmer Luckey, the Facebook exile turned AI weapons developer, on a Pentagon project to “turn warfighters into technomancers.” Amazon Web Services secured its largest-ever contract with Customs and Border Protection. The new ambassador to Denmark, tasked with acquiring Greenland in the service of Trump’s expansionism, helped launch the Founders Fund. Thiel, of course, helped pick Vance. The administration’s AI czar, David Sacks, was once Thiel’s chief operating officer at PayPal and is now an AI investor. When Rep. Marjorie Taylor Greene complained about Skynet, she was responding to an amendment that would restrict states’ ability to regulate AI; Sacks wrote an executive order that did just that.

This embrace of technological supremacy in the service of the “homeland” is at once a new vision of politics and a very old one. In July, as the Department of Homeland Security deployed Silicon Valley’s finest surveillance tools against immigrants and their defenders, the agency took a break from posting AI slop and white nationalist lyrics to share a 150-year-old artwork by John Gast. The painting, titled American Progress, is a celebration of manifest destiny. The left side of the image shows Native Americans and bison retreating toward the edge of the canvas. The right side reveals the technological advancement displacing them: three railroads, steaming inland across the continent.

A person is dwarfed standing amid tall concrete walls being erected for a data center.
A data center under construction in Gainesville, Virginia. Stephen Voss

The implicit promise of the AI revolution is that all the things made worse by AI will eventually be fixed by AI. Some oligarchs are even pitching AI as a solution to the problems oligarchy has helped bring about. Zuckerberg recently suggested that his chatbots would alleviate a crisis of digital disconnection. The average American has “fewer than three friends,” he said, but has demand for “like, 15”—a remarkable statement from the protagonist of The Social Network. Prompted by Trump to explain how Stargate would “help us with the fight against the various problems,” Altman suggested that the AI trained in Abilene would “cure diseases at a rapid rate.” But Trump’s second term has been one long crash course on the difference between can and will. Millions of people will die of things we already know how to prevent because Musk threw global health funding “into the wood chipper.” We have entered the Burj Khalifa part of the measles curve. If there’s one thing the lords of the algorithm ought to understand, it’s that outputs are a product of the values you put in.

Whereas Russia’s kleptocrats started off in extractive mineral industries and then branched out into tech and finance, America’s oligarchs made their fortunes online before pivoting to natural resources. Musk built a lithium refinery in Texas and has talked about starting a mining company. An Altman startup is partnering with the US government on a plutonium project. Tech billionaires were early investors in a venture to harvest rare-earth minerals in Greenland. Above all else, these AI barons are gobbling up land and industrial sites and hoarding fossil fuels. Facebook did not become a metaverse company; it has become an energy company.

The amount of power needed to keep the data center boom going is astounding. An analysis by Accenture projected that by 2030, energy demand for data centers would be equivalent to that of the entire country of Canada, and the industry’s share of global emissions could jump elevenfold. Another study projected that data center use would account for roughly 12 percent of all US electricity consumption in that same time frame, up threefold from the start of the boom. In Virginia, where 42 percent of all government incentives over a 10-year period went to server farms, data centers already account for a quarter of the state’s power demand, using an amount of energy equivalent to about 2 million households. In Indiana, the drive for energy sources on a grid stressed by Google and Amazon facilities has kept coal-fired power plants online and put a strain on everything else.

Timothy Accola’s neighbors in Menomonie who worried about their electric bills were not just speculating aimlessly. A study by two Harvard Law School researchers found that consumers would end up paying billions to underwrite tech companies’ power infrastructure—to say nothing of billions of dollars in tax breaks. A CNBC analysis found that 16 states had doled out $6 billion worth of such breaks over the last five years. In New Jersey, the AI expansion drove a 22 percent hike in electric rates year over year.

All this dirty energy means that, as with public health, oligarchy is slowing our progress on climate. While Trump touts the promise of an AI that cures cancer, his agencies are rolling back clean air regulations designed to help prevent people from getting it. (Trump’s Environmental Protection Agency helpfully no longer considers the impact on human lives when setting air pollution limits.) One recent study from researchers at the California Institute of Technology and the University of California, Riverside, found that the increased power use by the AI industry would produce about $20 billion in public health damages by 2028, equivalent to the entire car and truck emissions of California and “double that of US coal-based steelmaking.”

Companies that once branded themselves as the vanguard of the climate revolution have rediscovered the wonders of carbon. Ketan Joshi, an independent energy analyst who has tracked how leading tech companies are talking about their climate targets now, drew my attention to a recent update from Microsoft on the status of its sustainability “moonshot.”

“The moon,” the company announced, “has gotten further away.”

Gas turbines stand in a row. The sky above is distorted by rising heat haze.
The xAI data center in Memphis. It attracted controversy after it was reported that the company was using portable gas turbines without proper permits. Stephen Voss

It’s possible, if you look in the right place, to actually see these visions of progress burning up before your eyes. On a Sunday morning last fall, I drove my rental car through the west side of Memphis, past a Valero refinery and a Tennessee Valley Authority plant and so many railyards, until I reached a long gray building with earthmovers scattered around the exterior and rows of identical Cybertrucks in the parking lot. This is MechaHitler’s bunker.

Colossus 1, the first of three xAI facilities in and around Memphis, embodied the build-at-any-cost mindset that was propelling the hyperscale boom and the commingling of corporate and political power it was building toward. In 2024, desperately playing catch-up to OpenAI and Meta, Musk struck a deal with the chamber of commerce to construct what he has marketed as the “world’s largest and most powerful supercomputer,” in furtherance of xAI’s mission to “understand the true nature of the universe.” It was up and running in 122 days—a remarkable feat that he pulled off by signing a ton of NDAs and treating the Clean Air Act like a CVS receipt. Although Musk built his fortune by collecting federal subsidies for green energy, xAI initially powered the site with dozens of old gas turbines, which it claimed—when pressed, months later—were exempt from permit requirements because they were temporary.

The Clean Air Act’s exemptions were meant for things like lawn mowers, said Patrick Anderson, an attorney at the Southern Environmental Law Center, which threatened to sue xAI last year. Musk’s turbines were “emitting the amount of pollution you might see from a power plant 10 times larger,” Anderson said.

The EPA appeared to accept the law center’s argument in a regulatory decision in January. Two months later, Mississippi regulators approved 41 gas turbines at an xAI plant across the state line. The xAI experiment was clarifying in its brazenness. Musk hardly pitched his new neighbors at all. The deal was hammered out before residents of nearby Boxtown, a largely Black neighborhood in a city with one of the highest asthma rates in the country, were aware it was happening.

To Justin Pearson, the area’s Democratic state representative, Musk’s project embodies the historic relationship between large corporations and neighborhoods like Boxtown. His community has “been treated like an extractive colony”—a place where physical and economic health are “sacrificed” for tax revenue and oligarchic profits. A visit to the Colossi would dispel any starry-eyed mythmaking about what you are getting when a hyperscaler moves in next door. The site sounds like a jet engine from 100 yards away.

“The pride and joy of Grok is that it can create a racist Mickey Mouse,” Pearson said. “It hurts my stomach every time that I see Grok in the news, because I know that’s being powered by the pollution that we’re experiencing in our community.” (xAI did not respond to a request for comment. An analysis conducted by researchers at the University of Tennessee for Time found that nitrogen dioxide levels had spiked in the vicinity since the project was launched.)

The racist posts were one facet of the problem. All the sexual abuse material was another. According to the New York Times, 41 percent of all images generated by the Colossus-trained Grok over a nine-day period starting in late December were sexualized images of women, while an analysis from the Center for Countering Digital Hate estimated that Grok had produced 23,000 “sexualized images of children.” (X recently stated that it had a “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.”)

In February, a few weeks after Defense Secretary Pete Hegseth announced that the Pentagon would begin giving Grok access to classified networks, the French government’s cybercrimes prosecutions unit raided Musk’s Paris office over the alleged enabling of sexual abuse materials and unlawful data extraction.

But if xAI and Musk represent an edge case, it is nonetheless a revealing one. The Colossus of Memphis sends a signal about what the oligarchy is that all the whirring servers of the world can’t drown out—shimmering hot air, a reckless consolidation of power, a new extractive machine built on the foundations of old ones.

On my way out of town, just before I reached the state line, I passed a pair of lost-looking National Guard members walking along an empty stretch of road. This, too, was a story about the commingling of money and power in Trump’s second term. The president had deployed the Guard to Memphis following a conversation with a representative of one of the companies underwriting the new White House ballroom. The meeting was with the CEO of Union Pacific, and the project he hoped to bend Trump’s ear over that day was a new transcontinental railroad.

A large data center stands directly behind a baseball field. Three players in uniform stand in the outfield.
A data center under construction next to Sully Highlands Park, a large community area that includes a baseball field and soccer fields, in Herndon, Virginia. Stephen Voss

Zuckerberg’s project in Louisiana makes Colossus look puny. The Hyperion campus covers about three times the area of Musk’s Memphis operations. Once cotton land, the spread had been sold to the state decades ago, with an eye toward landing a major corporate client, but efforts to lure a car manufacturer had fallen through. According to the Baton Rouge Advocate, state officials first caught wind of Meta’s interest at a Mardi Gras party hosted by Shell. A few months and countless NDAs later, Louisiana had closed a deal, offering billions in incentives in the hopes of landing 300 to 500 full-time jobs. Gov. Jeff Landry hailed it as “a once-in-a-lifetime transformational opportunity.”

The frenzy of activity in Richland Parish was all-encompassing. At a food truck park filled with out-of-state plates, I met a duo slinging pizzas who’d worked in the North Dakota oil fields and a pit master from Mississippi who’d come from a liquefied natural gas project near the Gulf. A few months ago, the gravel lot had been someone’s home; the owner of the house next door had recently turned down an offer for $1.5 million. I took to counting the number of dump trucks I passed and learned that everyone else had been doing that, too. Diane Cobb once counted 94 on the 8-mile drive between the parish seat of Rayville and her house down the street from Hyperion. Her friend Robin Williams counted 96.

I met Cobb and Williams at a community meeting hosted by the Sierra Club’s Delta Chapter. The impetus for the meeting, held at a pizza shop, was an upcoming permitting hearing. Meta is adding more solar capacity to the Louisiana grid as part of the project and, in a statement, emphasized that it “matches 100% of our electricity usage with clean and renewable energy.” But although Meta pledged to reach net-zero carbon emissions by 2030 and procure “renewable energy and developing technology to support a climate resilient global community”—and although Hyperion was literally a sun god—the Louisiana supercluster will draw much of its power from natural gas. The two plants being built across the street from Cobb and Williams’ church would be capable of producing 5.2 million tons of carbon dioxide a year.

Not all the attendees were as critical. Curtis Harrison, a small-time real estate investor in his 60s in a Las Vegas Raiders hat, sat attentively during the town hall, notebook in hand. He told me he respected the Sierra Club’s advocacy and wanted to hear them out. But Harrison also considered the project a lifeline that the area could not afford to turn down. He had spent much of his career doing film work in California until he lost his job and much of his savings in the Great Recession. He’d moved back to his family’s home base and begun painstakingly rebuilding a nest egg.

Hyperion, he believed, was only the beginning—more companies were sure to come, and they would turn the parish into a place people moved to, rather than away from. His days were a frenzy of networking and dealing. He owned eight units and had just met with his banker to discuss buying a duplex. He was trying, he said, to bring in the future without turning his back on the old-timers. For Meta workers, he might charge $800 per month for a room. For Section 8 tenants and senior citizens, he’d charge half that. Still, he admitted, there was a note of Dr. Frankenstein to the boom.

“Be careful what you wish for, because when it comes true, then what?” he said. “We’re at the ‘then what’ part right now.”


A data center in Ashburn, Virginia. Stephen Voss

Not every landlord is as reflective about the growing pains. Before I left town, I checked in with Karen Taylor, who was holding down the fort at one of the new drop-off laundry and home cleaning services in a wood-frame garage near the interstate. About a half-dozen washers and dryers shared space in the interior next to a gleaming new boat, which Taylor said belonged to the owner. She recalled being called in to clean a $125,000 home that had just flipped for $300,000. The new owners, hoping to attract Meta workers, had kicked out the old tenants and raised the rent. Taylor showed up to find kids’ toys, books, and photographs soaking in the rain.

“You know how long it took me to spend that money? One day,” Taylor said of the job. “And these people’s memories and lives, these kids—where are they? Where did they go?”

There were suspicions about Zuckerberg and Meta, and about AI more broadly. But the takeover had also crystallized people’s thoughts about power structures closer to home. The NDAs had sown distrust, and the gold rush feel made people suspicious of their neighbors. They whispered conspiratorially about well-connected residents who had bought or sold property near the site.

“It’s not Holly Ridge anymore, but Meta Ridge,” one resident complained.

It was hard to ignore that all this talk of a beautiful new future was unfolding in a place where basic services were unreliable. Only two-thirds of parish residents have access to the internet, well below the national average. Although Meta had chosen the site, in part, because there was ample water for a cooling system that can go through hundreds of thousands of gallons a day, the stuff that came out of the tap could be brown and milky and ruin your clothes in the wash. A resident who lives across the street from the site pulled out his phone to read the “boil advisory” notices he’d received from the parish—six in October alone, some lasting for days.

Louisiana is supposed to be one of the winners of the data center boom, and for some of the people who live there, it already is. There are thousands of temporary jobs for people willing to travel, and a sense of possibility, as Harrison put it, where there was once stagnation. Construction comes with ancillary benefits in addition to a whole lot of dust: Meta, in a statement, boasted that the company would fund $300 million worth of infrastructure improvements in Louisiana (including upgrades to roads and water systems) and provide $800 million in property taxes alone. The company was “dedicated to building lasting relationships and creating opportunities that strengthen the fabric of north Louisiana.” But lotteries are an extension of inequality, not an answer to it. Every time I passed by the elementary school, its playground all but enveloped by big trucks and construction equipment, it was hard not to think back to Memphis’ Pearson and his talk of “sacrifice” zones. The data centers may usher us onto a higher plane of existence and cure cancer, but what they won’t do is reset the fundamental dynamic between the people who live in their shadows and oligarchs who reap their rewards.

A headstone bearing the name "Ewing" stands just in front of a chain-link fence and the bare, tangled branches of trees. An industrial building is visible in the background.
A data center overlooks Tippett’s Hill Cemetery, one of the largest and oldest African American cemeteries in Loudoun County, Virginia, with gravestones dating back to the 1700s. Stephen Voss

For all their talk of science fiction and the great unknown that lies ahead, these empire builders are following a familiar script. “There [was] a sorcerer’s apprentice quality” to the aspiring tycoons of the 19th century, Stanford historian Richard White wrote in Railroaded: The Transcontinentals and the Making of Modern America. “They laid their hands on a technology they did not fully understand, initiated sweeping changes, and saw these changes often take on purposes they did not intend.”

When I spoke with White last fall, he suggested the transcontinental age offered a cautionary tale for today’s moguls. The Golden Spike was an exception; transcontinentals were more often a story of hubris and a driver of societal unrest. People weren’t opposed to new technology; they objected to the way it was being advanced. They rebelled against the monopolization of “private power”—the sense of subservience and dependency that railroad barons imposed on everyone else and the immense influence they wielded over democratic institutions and their lives.

“What people forget is most of those railroads went bankrupt,” White said. “They also forget that when going bankrupt, they crash the whole economy—again, not once, but at least three times in the railroad depressions in the late 19th century. They tend to be speculative enterprises, which are building not for an existing market, but for a market they think they’re going to create by the very fact that they are building it. It becomes ‘Build it and they will come.’ And they’re building into a lot of areas where they never come.”

It’s not hysterical to wonder if we’re doing the same. Altman still says the Abilene branch of his ChatGPT “Death Star”—all lit up like a secret base when I first saw it—will be fully operational by the end of the year, but he recently abandoned his plans for an expansion. Bezos has already begun to talk of big-box data centers as yesterday’s news; in two decades, he predicted recently, it would be cheaper to build them in space than in Indiana. Meta, in a move the Wall Street Journal described as “Frankenstein financing,” transferred most of the ownership of its Richland Parish operation in October to a third party—raising concerns that it could more easily exit the deal down the road. The company says the deal allowed for “strategic optionality and flexibility” to “effectively meet future infrastructure capacity needs,” and that it is committed to the $27 billion project. But fears of a false start abound. Everyone wants to be the Prometheus who stole fire. But sometimes, you’re the Prometheus whose liver got pecked out by an eagle.

The backlash to big, beautiful buildings has grown so intense that even Trump shifted his public stance, promising during the State of the Union that tech companies would henceforth supply their own power needs. (The pledge, characteristically, is nonbinding.) In the meantime, a new kind of hyperscale project was popping up in Americans’ backyards: a $38 billion plan to convert empty warehouses (including one former Amazon facility) into an archipelago of immigrant detention centers. In news stories, the gray, boxy buildings that house children and those that provide agentic workflow solutions seem to blur together. The responses from local communities were starting to sound the same, too.

It was amid the rising skepticism of oligarchs and their machines that Altman published “The Gentle Singularity” last year. The industry had sold its growth with a promise of mass disruption. The elimination of whole industries was part of the pitch. The OpenAI CEO’s manifesto reads like an attempt to recalibrate a hype cycle that had, thanks to evangelists like Altman, gotten a little out of hand. This idea that people would be losing control of their lives to robots, or to a class of people who talked like them, was unfounded. There might be some growing pains, but “with abundant intelligence and energy (and good governance), we can theoretically have anything.”

But even as the president took steps to separate his administration from the tech titans’ unpopular ventures, the political project and the technological one could not be so easily disentangled. A few days after Trump’s big address, he and Hegseth set out to demonstrate their dominance over Amodei and Anthropic, demanding that the “RADICAL LEFT WOKE COMPANY” give the Pentagon unfettered use of its technology. When Amodei refused, citing insufficient guardrails against fully autonomous weapons systems and data collection on American citizens, the administration labeled his company a “supply chain risk” and banned any defense contractors from working with the firm. But the so-called Department of War had already lined up a replacement vendor, eager for the cash flow and ready to change the world.

The “DoW displayed a deep respect for safety,” wrote the author of “The Gentle Singularity” on the night of February 27—three hours before the start of a military campaign that killed 175 people at a school for girls, assassinated the supreme leader of Iran, triggered missile attacks in 13 countries, destroyed oil refineries, shuttered the Strait of Hormuz, and threw the global economy into crisis—and shared “a desire to partner to achieve the best possible outcome.”


Tech Billionaires Want Christians to Believe in AI


Two years ago, we devoted an entire issue to the rise of the American oligarchy. Since then, our oligarchic system has become more entrenched and pervasive, revolving around a small crew of tech titans whose quest for wealth and power—in all of its forms—is destabilizing our democracy and reshaping our society. In the May + June 2026 issue, we investigate our new AI overlords and the world they are striving to create, whether we like it or not. Read the rest of the package here

In early January, a short essay by a little-known AI entrepreneur turned internet philosopher named Will Manidis went viral on X. The post was mostly an attempt to explain why Boston, where Manidis lived before relocating to New York a few years ago, had failed as a tech hub. He pointed to a suite of reasons for the slow decline of the city’s once-crackling biotech scene, mainly the usual culprits of overregulation and overtaxation. But at the core of Manidis’ argument was something much deeper: The heart of the problem was the growing consensus among Boston’s stodgy elites that there was something unsettling and possibly even dangerous about the rapid pace of technological development. That mounting uneasiness about tech—and especially artificial intelligence—lay beneath the decisions that sealed the fate of Boston’s tech scene.

“The average American understands AI is a thing that wastes water, skyrockets power costs, and scams their grandparents in exchange for exposing children to deviant sexual content, sports gambling, and all other manner of sin,” he writes. “If we cannot articulate why innovation is a moral imperative, we can expect the entire technology industry to end up like Boston. First taxed, then looted, then exhausted. And we’ll be stuck wondering where it all went.”

Manidis, who describes himself as a Christian, writes about religious matters on X and his Substack. When I called to talk with him about this idea of tech as a “moral imperative,” he used a theological metaphor: “The mix of oligarchs and tech people and tech money and tech politics and the tech right,” he told me, “they’ve just been unable to communicate a coherent apologetic.”

His term—apologetic—refers to the project of defending the mysteries of faith to nonbelievers. The Christian tradition of apologetics is rich. Its brightest lights include St. Paul, Thomas Aquinas, and C.S. Lewis—all of whom made the case for their faith not by biblical invocation or surrender to the divine, but rather through engagement, rational arguments, and evidence. Manidis believes AI needs those kinds of defenders, because the public appears to be losing faith in it.

Last summer, right-wing luminaries converged at the annual National Conservatism Conference, a gathering that has emerged as a strong influence on the Trump administration’s policy decisions. The speaker lineup included some of MAGA’s most trusted interlocutors—for example, Director of National Intelligence Tulsi Gabbard, Missouri Sen. Josh Hawley, and White House budget director and Project 2025 architect Russell Vought. But lesser-known conservative thinkers appeared as well.

University of New Mexico psychology professor Geoffrey Miller, for instance, confronted Palantir CTO Shyam Sankar during a heated exchange reported by The Verge. The AI industry, Miller told Sankar, is “globalist, secular, liberal, feminized transhumanists. They explicitly want mass unemployment, they plan for UBI-based communism, and they view the human species as a biological ‘bootloader,’ as they say, for artificial superintelligence.”

Many aspects of Miller’s position are extreme, but his discomfort with AI is broadly shared. A Pew Research Center survey last November found that more than half of Americans say they are “more concerned than excited” about the technology, up from 37 percent in 2021, the year before ChatGPT launched. Historically, Republicans share this opinion slightly more than Democrats, but Manidis doesn’t think the messengers of the tech world are doing AI any favors in bolstering support on either end of the political spectrum. Take, for example, the time in 2015 when Sam Altman, a co-founder of OpenAI, famously opined that AI “will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”


“Why?” Manidis lamented to me on the phone. “Why would you say that? Like, come on, buddy.”

As if in response to Altman’s overblown rhetoric, some Silicon Valley oligarchs are attempting to run interference between two emerging camps in the religious right: AI’s cheerleaders on one side and its skeptics on the other. The likes of Palantir’s Peter Thiel and other religious techies such as Andreessen Horowitz’s Katherine Boyle and Anduril’s Trae Stephens are spearheading an effort to create the “apologetic” that Manidis called for. Bolstered by their own Christian zealotry, they argue that far from being the demonic force described by Miller, technology is more comparable to a savior—even a Christlike messiah. Not only are Christians called to embrace technology, but they have an obligation to do so, because progress itself is a moral good.

Culturally speaking, these tech elites are coded very differently from charismatic Holy Rollers who have had a long tradition of promising their followers that adherence to Christian faith and practices will yield material wealth. But essentially, they are offering a similar, though slightly inverted proposition: Tech can make you rich and a good Christian. Call it the prosperity gospel of technology. Much in the way they have shaped culture with social media algorithms, tech evangelists now are attempting to normalize the use and acceptance of AI by wrapping it in a spiritual message. They also have explicit policy goals, and the Trump administration appears to be heeding their call, with new federal efforts aimed at unshackling AI from safety regulations.

Greg Epstein is a humanist chaplain at Harvard University and MIT who has spent the last two decades building ethical communities for nonreligious people and, more recently, writing about the similarities between Silicon Valley and faith groups. In 2024, he published the book Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation. Epstein laments that while many have written about the cultlike aspects of the tech world, few have examined the motivations that lie behind making it that way. “What becomes incredibly useful for these people to do is to present their products as the answer to the meaning of life,” Epstein told me.

In Silicon Valley’s embrace of Christianity, he sees a marriage of convenience: “They’re trying to imbue wealth with meaning,” he said. “But they’re also trying to imbue a certain kind of meaning with wealth.” In other words, Christianity gets an elite, luxury-set rebrand, and in return, the tech titans get to sanctify their vast fortunes.

In an illustration designed to look like a painting, the liquid in a glass of water appears to be changing into red wine. A metallic robot hand points at the glass, with a light shining from its extended index finger.
Nicolás Ortega/”Water Glass And Jug,” Jean-Baptiste-Siméon Chardin/Web Gallery of Art

If one were to name a spokesperson for the anti-AI right, it would be hard to imagine someone more perfectly suited for the role than British writer Paul Kingsnorth. In his 2025 book, Against the Machine: On the Unmaking of Humanity, Kingsnorth, an erstwhile lefty environmental activist turned Orthodox Christian crusader, makes the case that technology, especially AI, is a semi-sentient being with its own anti-human, anti-Christian agenda. In prose so entertaining that you hardly notice how frantic and conspiratorial it all is, Kingsnorth conjures an ominous vision that implicates “the Machine”—or technology—in all manner of the political right’s favorite bêtes noires. He describes “progressive leftism and the Machine” as a “usefully snug fit” because they are both “suspicious of the past, impatient with borders and boundaries, and hostile to religion.” Both progressive leftism and the Machine, he concludes, “are in pursuit of a global utopia where, in the dreams of both Lenin and Lennon, the world will live as one.”

For example, Kingsnorth considers this technology demon to be the true culprit behind “mass gender confusion.” Moreover, the struggle for transgender acceptance is actually a step on the path toward permanently abandoning our bodies. “A young generation of hyper-urbanized, always-on young people, increasingly divorced from nature and growing up in a psychologised, inward-looking anticulture,” he writes, “is being led toward the conclusion that biology is a problem to be overcome.” Young people learn that the “body is a form of oppression and that the solution to their pain may go beyond a new set of pronouns, or even invasive surgery, towards ­nanotechnology, ‘cyberconsciousness software,’ and perhaps, ultimately, the end of their physical embodiment altogether.”

Those ominous predictions apparently struck a chord: Kingsnorth’s book was a New York Times bestseller and widely reviewed, especially among Christian critics. In Christianity Today, Justin Ariel Bailey was rhapsodic, calling the work “a trenchant and terrifying account of what modern people have sacrificed in exchange for technology’s promise of power and autonomy.”

To say that Silicon Valley’s Christian power players see things differently is an understatement—and they’re working hard to spread the countervailing message of technology’s godly promise. Leading this charge is Boyle, the Andreessen Horowitz partner who is an ally of Vice President JD Vance. Boyle, who shares thoughts about her Catholic faith openly on social media, runs a fund called American Dynamism, which, its website says, aims to back tech companies—in aerospace, defense, education, public safety, and other sectors—whose success “supports the flourishing of all Americans.” For her, the efforts to set guardrails around AI are nefarious, camouflaged, as she co-wrote with her colleague Martin Casado in a 2024 Wall Street Journal op-ed, as efforts “to promote safety.” In fact, they insisted, “We believe the true purpose is to suppress open-source innovation and deter competitive startups.”

Boyle, an ex–Washington Post journalist whose former colleagues recall her as pleasant, a bit distant, and always impeccably dressed, argues that tech not only is not evil, but also perfectly embodies the family-first values of many Christians. In a keynote address (PDF) at the American Enterprise Institute last year, she argued for a coming together between the tech sector and the American family so they could become allies against an overzealous government. “Much has been written about this nascent alliance between the tech right, or the so-called tech right, and this administration, and how weird it is for the transhumanists of Silicon Valley to find common ground with a MAHA mother in Missouri,” she said. “Except that they have identified a common evil. They know that the gravest threat to their businesses, their industry, their family’s health, and their freedom is a censorious and authoritarian state.”

Boyle highlighted the many ways in which technology could be a boon for families. Mothers could spend more time at home with their children through tech-enabled remote work. Tech could also make both parents “more entrepreneurial” by allowing them to start businesses on platforms like Etsy. “This means a mother can now earn income while her child naps from the school parking lot,” she said. AI could be harnessed “to build infinitely patient and extremely knowledgeable tutors for every child in this country.”

But the biggest tech win of all for families, Boyle said, was that it could “help reshape the culture” to make motherhood high status. “Meme it, and we will be it,” she said, concluding that “a single influencer on Instagram can have a greater effect on behavior than the smartest tax policies.”

One of Boyle’s most successful projects appears to support that hypothesis. Before she joined Andreessen Horowitz, Boyle was with another venture firm, General Catalyst. There, she invested in Hallow, which, with 24 million downloads across 150 countries, claims to be the world’s most popular prayer app. There is a free version that includes features such as chats with Magisterium, “an AI-powered tool designed to provide answers based on the teachings of the Catholic Church.” But for $69.99 a year, users can “choose from 10,000+ sessions, 5-60 minute lengths, 100+ guides, and 1,000s of music options to lead you deeper into relationship with Christ,” and have access to celebrity spokespeople (“pray a rosary with Mark Wahlberg”). Boyle sees Hallow’s success as evidence that people are hungry for religion. “What I think Hallow is showing is…this desperate consumer need that is manifesting itself,” she told Tablet magazine in 2021. But it also provides a wholesome experience for Christian users, who are deepening their relationship not only with God, but also with technology. (When I reached out to Hallow for comment, I received an email back from Hallow’s AI agent, promising a real person would get back to me. They never did. Boyle also did not respond to a request for comment.)

Boyle is not the only captain of Silicon Valley industry attempting to give AI’s reputation a Christian-friendly makeover. Trae Stephens, the billionaire in charge of the autonomous weapons company Anduril, has been a vocal proponent of tech apologetics in San Francisco. A leader at the nondenominational Epic Church in San Francisco, Stephens delivered a lecture in 2024 titled “God and Technology,” in which he argued that humans, like God, are creators and that “what our soul deeply longs for is progress in building a better future.” He assured listeners that if they chose “good quests” rather than empty or destructive ones, they would be fulfilling God’s plan. (Stephens did not respond when I reached out to him for comment.)

Stephens invoked a historical precedent to make his point. Some of the scientists who worked on the Manhattan Project, which created the atomic bomb, “were tormented by what they were doing,” he said. “And you could make a really rational argument in either direction. Was it a good thing to do? Was it a bad thing to do?” Stephens didn’t give any hint as to which he believed, though his professional life suggests the former.

His career and immense fortune were created by harnessing the power of AI to build “smart battlefields”—think of Anduril as the Waymo of drones and bombs. In a 2024 Wired interview, for example, Stephens spoke of “a classification of drones called loiter munitions, which are aircraft that search for targets and then have the ability to go kinetic on those targets, kind of as a kamikaze.” Since 2019, Anduril has been awarded more than $1.8 billion in government contracts.

As an answer to the classic “What would Jesus do?” question, “start robot wars” would be an unconventional response, to put it mildly. And yet, Stephens appears to endorse surrendering to tech. As he put it in his Wired interview, “The call that I have been trying to make to the tech community is that we have a moral obligation to do things to benefit humanity, to draw us closer to God’s plan for his people.”

“It’s almost as if [other AI companies] kind of think they’re creating God or something.” —Mark Zuckerberg, CEO of Meta

To wit: In 2024, his wife, health care tech executive Michelle Stephens, co-founded ACTS 17 Collective, a Bay Area group for “thinkers, builders, artists, and leaders who are wrestling with what it means to live with purpose and conviction.” The name is a reference to a New Testament book that focuses on Christian apologetics and is also, conveniently, an acronym: Acknowledging Christ in Technology and Society. Garry Tan, the Christian president and CEO of tech startup incubator Y Combinator, has hosted ACTS 17 events at his home—which used to be a church—and Pat Gelsinger, former Intel CEO, also a Christian, has been a speaker.

Last year, ACTS 17 sponsored a series of four lectures by PayPal and Palantir co-founder Peter Thiel, who is Trae Stephens’ former boss, JD Vance’s mentor, Gawker’s murderer, and President Donald Trump’s megadonor. His subject? The Antichrist.

The event was private, with tickets reportedly costing $200, but transcripts were leaked to reporters. While Kingsnorth argues in his book that technology itself is the devil incarnate causing a one-world government, Thiel appears to believe the exact opposite: Anything preventing unbridled technological development—from overbearing government regulation to climate activist Greta Thunberg—is the Antichrist. “In the 17th, 18th century, the Antichrist would have been a Dr. Strangelove, a scientist who did all this sort of evil crazy science,” he said, according to the Washington Post. “In the 21st century, the Antichrist is a Luddite who wants to stop all science.” AI’s detractors, he reportedly claimed, were part of a plot to install a global government. “There are a lot of rational reasons I can give why the one-world state’s a bad idea,” he said. “But I think if you strip it from the biblical context, you will never find it scary enough. You will never really resist.”

Of course, exceedingly wealthy Silicon Valley dreamers with weird ideas are nothing new. (Juicero, anyone?) But for most Americans, these fever dreams may be a little too weird, says tech journalist Gil Durán, host of the Nerd Reich podcast and author of a forthcoming book by the same name. “If you read anything by Michelle [Stephens] or by Katherine Boyle—these things are pretty far out there,” he told me. He gave the example of Boyle’s American Enterprise Institute keynote in which she argued that the state was the enemy of the family. “That is an extremely bizarre thing for her to say, especially since American Dynamism is all about partnering with an authoritarian government,” he added in an email, in reference to the Trump administration. They have “no sense of calibrating for a mass audience,” he told me, “so as long as those are the people in charge of it, I’d say that chances are they’re going to fail.”

Still, there is some indication that Christian tech apologetics are working their way into the highest realms of political influence. Vice President Vance, in a sprawling 2020 essay titled “How I Joined the Resistance,” published in the Catholic publication The Lamp, chronicled his conversion to Catholicism. In 2011, Vance writes, he attended a lecture by Thiel that he describes as “the most significant moment of my time at Yale Law School.” In the talk, Thiel, who would later become Vance’s employer and then close friend, expressed frustration with the slow pace of technological progress. He argued that professional striving was a fundamentally empty quest for prestige and status and posited that he saw “these two trends—elite professionals trapped in hyper-competitive jobs, and the technological stagnation of society—as connected,” Vance writes. “If technological innovation were actually driving real prosperity, our elites wouldn’t feel increasingly competitive with one another over a dwindling number of prestigious outcomes.”

That notion of endless empty striving is what Thiel’s Stanford professor, the late French Catholic academic René Girard, called “mimetic desire.” This phenomenon causes immense human anguish—and, according to Thiel, technological innovation can deliver us from it, and hence from suffering.

Vance, who does not seem to have ceased striving, now describes technology as a net good not only for American economic prosperity, but also for the human condition. In a speech at Boyle’s 2025 American Dynamism Summit, Vance quoted Pope John Paul II’s 1981 encyclical, Laborem exercens. Focusing on work and the individual, the pope made two fundamental points: Labor should be a greater priority than capital, and individuals should be more important than things. These decades-old teachings received an update from Vance, who factored in technology and AI. “In a healthy economy, technology should be something that enhances rather than supplants the value of labor, and I think there’s too much fear that AI will simply replace jobs rather than augmenting so many of the things that we do,” he said. “Real innovation makes us more productive, but it also, I think, dignifies our workers.”

Vance, whose views have been publicly criticized by both the current pope and the previous one, was obviously putting his own spin on the teachings—and he didn’t mention the decidedly un-Christlike fact that replacing workers with robots would further line the pockets of tech oligarchs. Nevertheless, his interpretation that AI promotes human dignity appears to be spreading. Last July, the Trump administration released “Winning the Race: America’s AI Action Plan.” The report promises that AI will “usher in a new golden age of human flourishing” and will “increase the standard of living for all Americans.”

This rhetoric, of course, is precisely what Kingsnorth considers to be most dangerous: a hubristic quest to replace God with progress—and maybe even to become gods willing robots into sentient beings. “We will always seek some greater meaning, some transcendent truth, and if we can’t or won’t find the real thing we will attempt to create it,” he writes in Against the Machine. “This attempt is the story of modernity; the Machine is what we have created to fulfill it.”

But Kingsnorth appears to be shouting into a headwind of mimetic desire. In the past two years, as the most recent Pew poll shows, conservatives have become less skeptical of AI. In 2023, 59 percent said they were “more concerned than excited” about AI, but by late 2025, that number had fallen to 50 percent. Manidis, it seems, may not have to worry about the Boston scenario repeating itself after all.


Creating Baby Geniuses to Thwart the AI Threat? (Yes, Really.)

2026-04-16 19:00:00

Additional reporting by Anna Rogers

Two years ago, we devoted an entire issue to the rise of the American oligarchy. Since then, our oligarchic system has become more entrenched and pervasive, revolving around a small crew of tech titans whose quest for wealth and power—in all of its forms—is destabilizing our democracy and reshaping our society. In the May + June 2026 issue, we investigate our new AI overlords and the world they are striving to create, whether we like it or not. Read the rest of the package here

Mathematician Tsvi Benson-Tilsen once worked at the Peter Thiel–funded Machine Intelligence Research Institute, where he was one of many experts tasked with figuring out how to ensure AI doesn’t eventually destroy humankind. After seven years, he concluded that he’s not smart enough to figure it out. As of today, he doesn’t think anybody is.

The 33-year-old is racing against a threat known as “the singularity,” the moment when superintelligent machines, having surpassed the feeble cognitive abilities of humans, begin to act in ways contrary to the interests of humanity. “If it’s smarter than you, you cannot tell what’s dangerous necessarily, and you cannot tell what it’s thinking, because it could hide its thoughts,” Benson-Tilsen explains.

Even the sector’s leading thinkers don’t really comprehend how their systems work and thus cannot guarantee their models won’t try to deceive, overthrow, or even kill us. “Our inability to understand models’ internal mechanisms means that we cannot meaningfully predict such behaviors and therefore struggle to rule them out,” Anthropic CEO Dario Amodei wrote in April 2025, citing the possibility of AI-contrived cyber and biological weapons.

The report released a couple of months later by Amodei’s firm—the $380 billion behemoth behind Claude—didn’t exactly quell these concerns. Anthropic had presented the leading AI systems, including Claude, Gemini, ChatGPT, and DeepSeek, with an extreme stress test: What would they do if a hypothetical corporate executive made a business decision the models didn’t like? In more than 75 percent of simulations across five of the tested models, they attempted to blackmail or trick the executive. Occasionally, they even trapped their imaginary boss, Kyle, in a control room with insufficient oxygen and extreme temperatures. That is, they killed him.

Such scenarios may seem remote to the roughly 50 percent of Americans who say they use large language models, a subset of the broader category of “generative AI” designed for content creation and tasks like composing grocery lists or designing birthday party invites. But the industry’s momentum is toward “agentic AI,” which lets the machines conceive and execute plans without human input. Useful for booking a multidestination vacation, perhaps. But a fully autonomous AI system with a generalized, non-task-specific mission—a.k.a. artificial general intelligence (AGI)—might simply decide humans are in the way.

Benson-Tilsen is optimistic about how long it will take AGI to reach that conclusion—he puts the odds at around 20 percent by 2050, a timeline he believes gives humanity time to come up with a solution: namely, advancing technologies that enable parents to optimize their offspring, including for superior intelligence, with the hope that some of these smarter humans will understand the logic of AGI and ensure that its goals do not interfere with the continuance of, well, us. And this notion of creating superbabies to stop the rise of something akin to Skynet from The Terminator is capturing the fancy—and the wallets—of the same billionaires who bankrolled the AI revolution.

In late 2024, Benson-Tilsen founded the Berkeley Genomics Project—no relation to the University of California, Berkeley, where he was a PhD candidate—to build a case for editing the genes of human embryos. This is prohibited or highly restricted in every developed country, hence Benson-Tilsen’s effort to spur dialogue about how it could theoretically be done safely and ethically.

Scientists have tinkered plenty with nonreproductive, or somatic, cells. The first successful use of gene therapy—two young girls were injected with modified cells to treat a rare genetic disorder—occurred in the early 1990s, and the first attempt to edit a patient’s genes inside their body happened in 2017. No approved genetic therapy to date has involved germline editing (that is, modifying reproductive cells). But screening embryos for genetic traits prior to implantation has grown increasingly popular—even among couples who lack any of the genetic variance known to cause disorders like cystic fibrosis, Huntington’s disease, or sickle cell disease. And now a host of startups is working toward genetically optimizing children, including for intelligence, with at least one that said it was committed to using germline editing to get there.

The primary argument for genetic optimization is that it could revolutionize disease prevention. Based on the capital flowing into these startups—$36.5 billion in 2024, according to Astute Analytica—investors are bullish on the industry’s future. Backers include Thiel; Coinbase co-founder Brian Armstrong; OpenAI’s Sam Altman and his husband, Oliver Mulherin; venture capitalist Marc Andreessen; and Ethereum founder Vitalik Buterin, all of whom are heavily invested in AI, too.

Investors follow the money, of course, but part of the dual appeal of genetic optimization and AI is that both are central to transhumanism. This futurist philosophy, popular among the tech elite, aims to marry advancements in biology and technology to accomplish things today’s humans cannot—like extending our lives (perhaps forever!) or circumventing climate change (by colonizing other planets). While it may seem odd that these billionaires are constructing one technology some of them admit could bring about human extinction, even as they back another one to save us from what they’re building, there is, in fact, a unifying theme: “the rejection of limitation,” explains Alexander Thomas, author of The Politics and Ethics of Transhumanism. “That colonial impulse of ‘I want more.’”

Sure, there are less extreme ways to extend life expectancy and clean up the planet. But those solutions—like expanding health care access and slashing carbon emissions—would force the AI moguls to acknowledge their culpability and perhaps commit some of their vast financial resources to the cause. And putting the brakes on AI would leave too many trillions on the table, so instead they fantasize about a future in which they are celebrated for building the ark that saves humanity from the next great flood. Never mind that they opened the floodgates.

If you’ve ridden the New York subway lately, you’ve seen the signs: “Have your best baby.” The ads direct people to the website PickYourBaby.com, which belongs to a startup called Nucleus that caters to couples undergoing in vitro fertilization. In 44 percent of IVF cycles, patients screen their viable embryos for extra chromosomes and easily detectable disorders, such as Tay-Sachs and Huntington’s, which are caused by single genes. But Nucleus will screen pre-implantation embryos for hundreds of traits, many of which are controlled by multiple genes in a delicate, poorly understood balancing act.

How important is IQ or hair color? Is a marginally lower risk of developing Alzheimer’s worth giving up a few inches in height? These are some of the questions Nucleus asks prospective parents to consider as it claims to give them a better-than-winging-it chance at having babies with or without certain features. The pitch is that these so-called polygenic risk scores increase the likelihood of passing down dad’s blue eyes and mom’s smarts, should the parents choose to, or decrease the chance of having a child who develops cancer.

When I spoke with 26-year-old Nucleus founder Kian Sadeghi in February, his demeanor was gentler than his company’s brash marketing tactics suggest. Sadeghi, who dropped out of college before launching his startup, explained that a family tragedy had propelled his interest in genetic optimization: His cousin died in her sleep at age 15 from complications that doctors suspected were related to long QT syndrome, a serious but generally treatable heart disorder nobody knew she had. “How does this happen?” Sadeghi, then a second grader, recalls asking. “Bad genetics,” answered his dad, a physician.

He would later have an epiphany when his biology professor at the University of Pennsylvania presented a chart depicting the plummeting cost of gene sequencing. In 2003, when an international team of scientists completed the Human Genome Project, decoding a genome cost $3 billion and took 13 years. By 2019, when Sadeghi was a college freshman, a person’s full DNA could be sequenced for about $1,000 in just a few days. “Obviously, the price is going to keep decreasing,” he remembers thinking. “Someone needs to build the kind of interpretation layer to stop what happened to my cousin from happening to anybody.”


When Covid hit, he unenrolled from school and began scouring for investors to create such a company, eventually landing $3.5 million in a seed funding round led by Thiel’s Founders Fund. Five years and thousands of embryo analyses later, Sadeghi says Nucleus can screen embryos for IQ and hundreds of possible health conditions for a few thousand dollars on top of the cost of IVF. Simulations published by the company, which have not been peer-reviewed, purport to lower the risk of several common conditions by 27 to 67 percent.

Had his aunt and uncle had access to the tool, they might have known to treat their daughter. Or, had she been conceived via IVF, they might have simply chosen a different embryo. Sadeghi emphasizes these are personal choices that only prospective parents can make. “There’s no universal ideal, because the way parents define that is so different,” he says.

Still, the concept of establishing preferences for heritable traits makes many people uneasy. The United States has a dark history of eugenics, justifying racism on the basis of perceived genetic differences and forcing the sterilization of mentally disabled people. It was less than a century ago that Nazi Germany predicated the murder of millions on ethnic and physical characteristics. Even Elon Musk, who revels in controversy, has said he personally avoided working in the field of genetic optimization because of what he called “the Hitler problem.”

Fans of genetic optimization are quick to point out that Hitler’s atrocities were state directed, whereas nobody is forcing parents to screen their embryos. Israel even covers the costs of IVF and genetic testing for up to two live births per family, largely to prevent Tay-Sachs, a fatal genetic mutation prevalent among people of Ashkenazi Jewish descent. “Whereas in Nazi Germany Jewish life was systematically destroyed in the name of eugenics, Zionists in the Land of Israel conceived of eugenics as part of their mission to restore the Jewish people,” Raphael Falk, a Jewish geneticist, wrote in 2010.

But the argument—and the science—becomes hazier when couples seek to optimize embryos around complex traits. Take mental illness. Conditions like schizophrenia and bipolar disorder run in some families, though other risk factors, such as prenatal exposure to viruses and childhood trauma, make it impractical to predict whether a baby will ultimately face either diagnosis. “Calculating a ‘polygenic risk score’ for, say, schizophrenia is near impossible,” says Fyodor Urnov, director of therapeutic R&D at UC Berkeley’s Innovative Genomics Institute. If vendors of genetic screening promise otherwise, “they are lying.”

The genetic basis for intelligence is similarly elusive, “confounded by environmental factors” including nutrition, family stability, and primary school quality, according to a 2024 meta-analysis. There’s also the scale issue: Polygenic risk scores are based on tens of thousands of genomes. But there’s far less variability when parents are choosing from a small collection of their own embryos. One 2019 research article in the journal Cell calculated that the average IQ gain parents could expect from screening five embryos was just 2.5 points. (A typical IQ score is around 100.) “We’re talking about a very minimal gain,” says Sophie von Stumm, a University of York psychology professor and cognitive development expert. “I know companies are selling this…but right now, selecting embryos for polygenic scores to get smart kids is pretty impossible.”

Kaitlyn Gallacher, communications director at Nucleus, says the science has “advanced significantly,” particularly regarding IQ, but she notes: “No genetic model determines a child’s life…Environment, upbringing, education, and many other factors shape outcomes. That nuance is built directly into how we present genetic insights.”

Nucleus isn’t alone in qualifying the usefulness of its IQ predictions. Looking at an embryo’s predicted score in a vacuum is “borderline nonsense,” says Jonathan Anomaly, communications director for the embryo selection company Herasight. After clients review their embryo options for serious health risks, particularly hereditary conditions, “then, fine,” he says. “If you care about IQ, maybe you kind of look at it.”

“Some don’t care about it at all and some care a lot,” adds Anomaly, who seems to be among the latter. A former Duke University philosophy lecturer, he wrote in a 2018 paper, titled “Defending Eugenics,” that he was concerned about successful, well-educated women “substituting cats for kids,” which would result in “bad effects on the gene pool” over time. “The current demographics of Western countries are troubling,” he wrote, “as people with a higher IQ, more education, and greater income reproduce at relatively low levels.”

Musk shares a similar perspective, which helps explain why he’s had at least 14 children, including four with Shivon Zilis, an executive at his company Neuralink. “He really wants smart people to have kids,” Zilis, a Yale-educated AI specialist, told Musk biographer Walter Isaacson.

Especially him. “To reach legion-level before the apocalypse we will need to use surrogates,” Musk told the mother of one of his children in a text reviewed by the Wall Street Journal. The Washington Post, meanwhile, reported that Musk has been a client of Orchid, an embryo screening company founded by Noor Siddiqui, whose career was kickstarted via a fellowship funded by Thiel. (Orchid says it does not screen embryos for IQ, though it does screen for “intellectual disabilities” and autism.)

Embryo selection is just the start of what some tech billionaires envision. At the “IVF clinic of the future,” Coinbase’s Armstrong mused in a post on X, multiple technologies will be combined for absolute offspring optimization. Using in vitro gametogenesis, he wrote, technicians will be able to generate thousands of eggs from a client’s blood or skin cells. Once those eggs are fertilized, the client will be able to screen the embryos and pick a favorite, whose DNA can then be edited as desired. Even surrogacy won’t be necessary, Armstrong predicted, because artificial wombs will allow fetuses to develop without “the risk/burden of pregnancy.” He dubbed this medley the “Gattaca stack,” perhaps forgetting that the protagonist of the 1997 dystopian classic was a nongenetically optimized “in-valid,” gestated in his mother’s womb.

[Illustration: a figure evocative of a baby in a womb, rendered in red, black, and white from the 0s and 1s of binary code and the genetic letters A, T, G, and C, rotating clockwise. Credit: Outlanders Design]

Twenty-month-old KJ Muldoon of eastern Pennsylvania represents the marvel of gene editing. Without it, he probably wouldn’t be alive. Born without functional copies of CPS1, a gene needed to produce a protein that enables the liver to clear out ammonia, he spent most of his first year of life in the hospital.

Doctors and scientists at Children’s Hospital of Philadelphia and the University of Pennsylvania developed a custom gene-editing protocol for KJ. It was the first use of a new form of CRISPR, a biological tool that allows scientists to modify DNA with great accuracy. In the absence of other options, the decision to move forward was easy for KJ’s mother, Nicole. “We would do anything for our kids,” she said at the time. Proponents of embryonic gene editing have long used that same rationale: What good parent wouldn’t?

But to look at KJ’s case and conclude we’re ready to program pre-implantation embryos is like declaring you’re fluent in a new language because you’ve tried Duolingo. His treatment, in early 2025, required altering just two DNA base pairs among the 6 billion that make up a person’s genome. And the mutated CPS1 gene that caused his condition was no mystery—it had been thoroughly researched since the 1970s.

That’s not the case for the vast majority of the 20,000 or so genes within our chromosomes. And even those genes, the little blueprints our bodies use to make specific proteins, are far better understood than the remaining 98 percent of our genetic material, which scientists refer to as our cells’ “dark matter.” That’s because research suggests it serves important biological functions we haven’t quite grasped yet.

Tweaking a single well-known gene is one thing. But trying to edit an embryo for more complex traits or conditions would mean meddling with dozens to thousands of sequences scattered widely throughout our chromosomes. It’s a bit like playing with the dials on an unlabeled control panel, a level of unknown that gives many scientists and bioethicists pause.

KJ’s therapy was not particularly controversial because he was already born, so his treatment targeted only the cells of his liver. But embryos are clumps of cells that haven’t decided what they are yet. Some will give rise to the liver and the brain, while others will spawn sperm or egg cells, passing any genetic changes along to future generations. The consequences of embryonic gene editing are impossible to predict, which is why Australia, the European Union, the United Kingdom, and the United States all prohibit doing it for reproductive purposes.

“I cannot imagine…figuring out how to raise a newborn without ChatGPT.” —Sam Altman, CEO of OpenAI

China too. Yet Chinese biophysicist He Jiankui defied the prohibition and shocked the world in 2018, announcing that twin girls had been born from embryos he’d edited pre-implantation. By deleting part of the CCR5 gene, which produces a protein docking site for HIV, he claimed he made the twins immune to the virus, though there’s been no independent confirmation of his work.

In his 2021 book, CRISPR People, Stanford University law professor Henry Greely described He’s experiment as “criminally reckless” and “deeply unethical.” The Chinese government agreed, sentencing He to three years in prison. But after his release in 2022, He returned to work on embryo-editing research, now at a private lab with secretive financing. Then, in 2025, the Chinese government issued new regulations opening the door to “manipulating human reproductive cells” under the oversight of the State Council health department. He says his work is aimed solely at curing disease, and he predicts embryonic gene editing will soon be legal for reproductive purposes in both China and the United States.

He’s ex-wife, Cathy Tie, a Chinese-born Canadian, says she is pursuing the same goal of editing embryos for heritable conditions. She, like Orchid’s Siddiqui, was awarded a large grant from Thiel, which enabled her to drop out of college at 18 and build a company geared toward analyzing rare mutations and common cancer genes. Last year, Tie launched Manhattan Genomics, which has not disclosed its investors. It is part of a wave of embryo-editing startups that includes Preventive, launched in 2025 with $30 million from techies including Armstrong and Altman and his husband.

Both Manhattan Genomics and Preventive stress that their goal isn’t optimizing babies for brilliance. “We draw the line at disease prevention,” Tie told NPR. But it might not be entirely up to them. Federal funds cannot be used for research on embryo editing—Congress has seen to that—which leaves private investors in charge of the direction of the science. “I wouldn’t take them at their word,” Jennifer Denbow, a California Polytechnic State University professor who researches reproductive technologies, says of the entrepreneurs. “There’s some very powerful influences and a lot of money that is interested, ultimately, in intelligence.”

Berkeley’s Urnov is also pessimistic. In his opinion, he said via email, “The ‘embryo editors’ are deceiving themselves and the public when they speak of using this technology to address the public health challenge of genetic disease.” According to him, “their sole purpose is ‘baby improvement.’”

Altman has previously said that some form of genetic engineering is inevitable. “Superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen. It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves,” he wrote in 2017. “My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”

That was effectively the mission a company called Bootstrap Bio touted openly. “Of particular interest to me is whether we could modify intelligence,” co-founders Chase Denecke and Ben Korpan noted in a viral 2023 essay.

Like Benson-Tilsen, they framed the concept as an imperative, noting there’s currently a “very limited number of people” who have the intellect required to protect the world from a dangerously capable AGI. “It is not an exaggeration to say that the lives of literally everyone depend on whether a few hundred engineers and mathematicians can figure out how to control the machines built by the mad scientists in the office next door,” Denecke and Korpan wrote.

When I emailed Denecke, he told me Bootstrap Bio had ceased operations due to lack of funding. (The company’s former chief science officer, Qichen Yuan, was indicted for attempted sex trafficking of a child in September 2025. His case is ongoing.) But before the company unraveled, its early investors included Malcolm and Simone Collins, whom you may have heard of. They’re the media-hungry face of the pronatalist movement, which contends the world is headed for population collapse if birth rates don’t rise substantially.

Malcolm, a quirky, fast-talking former venture capitalist, envisions two possibilities: Either the cognitive capabilities of humans improve significantly or we’re all goners. “For a long time, humanity has had a synthetic component to it that has evolved alongside our organic component,” he told me. “We finally reached a stage where the synthetic component is sort of turning around and putting a gun to the organic component and saying, ‘Okay, now prove your worth.’”

Malcolm laments that the people he considers the most brilliant are prioritizing careers over child-rearing, ceding ground to those he finds genetically lacking and thus risking a future in which there are not enough smart people to save us. “The dumb ones,” he said during a 2024 episode of Based Camp, the podcast he hosts with Simone, “are going to be more and more of the general population as time goes on. And so they will be electing and building bureaucracies that make it harder and harder for the geniuses to do their jobs.”

The couple’s message contains echoes of The Bell Curve, the controversial 1994 book that claimed people with less “cognitive capital” were having more children. It also theorized that white people had higher IQs than Black people on account of genetic differences. Right-wing nationalists use the same thoroughly debunked theory, which they call dysgenics, to rationalize racism and xenophobia. Malcolm does not say any race is inherently smarter than another, but he emphasizes that genetic differences matter. “Noticing genes exist and caring about them in terms of future populations is not eugenics,” he says, “but it is definitely playing in a spicy territory.”

While it would be tempting to discount the Collinses’ views as fringe, their worldview extends far beyond their farmhouse. They have a growing online platform and run in powerful circles. Malcolm says he’s attended meetings focused on fertility at the White House, whose gene-obsessed occupant, Donald Trump, routinely disparages people of color (including literally all Somalis) as “low IQ.” The Collinses reportedly have spent time, too, with Musk, who also links intelligence to race, having endorsed statements theorizing that students from predominantly Black universities have IQs approaching “borderline intellectual impairment” and shouldn’t be allowed to become pilots.

The obsession with intelligence and its genetic components suggests an inherent sense of superiority, Cal Poly’s Denbow says. The idea that IQ can be genetically optimized only reinforces the false notion that some people are “more worthwhile.”

Among the numerous ethical questions raised by genetic engineering is whether its use will effectively create a new hereditary caste system, not unlike the dystopian pecking order in Aldous Huxley’s Brave New World, where “Alpha” elites rule over the lesser classes. The future that Malcolm Collins describes sounds very much like a society stratified by genetic haves and have-nots: “I think it will be both dramatically less equitable, but dramatically better for the poor individuals in the same way that the United States right now might be less equitable than it would have been at the time of the Revolution,” he says. “But right now, the poorest Americans still have cellphones and computers and refrigerators, right? They’re not dying of cholera in the streets.”

In Benson-Tilsen’s ideal tomorrow, there would be no genotocracy—some of the wunderkinds optimized for superior intelligence will have quashed the threat of advanced AI, and the technology needed to have healthier and smarter babies will be widely accessible and affordable. But current trends—a small group of Silicon Valley titans holding a vast amount of our nation’s technological, political, and financial power—don’t seem to point in that direction. What, I ask him, will stop billionaire investors from hijacking the tech of even the most well-intentioned embryo-editing entrepreneur? After a long pause, he concedes he doesn’t have a great answer. He follows up in an email later the same evening: “To a very large extent, I simply personally don’t know.”

Well, perhaps the superhumans will figure it out.

Read more of our coverage of the roots and rise of the American oligarchy

Geek Tragedy

Two years ago, we devoted an entire issue to the rise of the American oligarchy. Since then, our oligarchic system has become more entrenched and pervasive, revolving around a small crew of tech titans whose quest for wealth and power—in all of its forms—is destabilizing our democracy and reshaping our society. In the May + June 2026 issue, we investigate our new AI overlords and the world they are striving to create, whether we like it or not. Read the rest of the package here.

The AI bubble has been a boon to the portfolios and prospects of the tech world’s biggest players. Their companies are vying for hegemony and their net worths are trending toward Mount Olympus. “It almost feels like you guys are the new Formula 1 drivers,” Theo Von told Sam Altman last year. But the race for market dominance has been anything but smooth. Beyond the hype, you can find a litany of false promises, questionable investments, and just plain decadence, complicating both their predictions for the future and claims to come in peace. Here are the numbers you won’t find in the quarterly earnings report.



ELON MUSK

10
Years since Musk said he would send a rocket to Mars within two years

20,237
Cybertrucks sold in the US in 2025, about 230,000 fewer than Musk predicted would sell each year

518,428
Preventable child deaths in the year since Musk dismantled USAID, saying, “Time for it to die”

679,584
Antisemitic posts on X over a one-year period between 2024 and 2025

1.8 million
Sexually explicit images of women generated by X’s AI agent Grok during a nine-day period last winter


SAM ALTMAN

100,000
Jobs Donald Trump claimed would be created by OpenAI’s Stargate data centers “almost immediately”

100
Employees needed to operate Stargate’s Abilene, Texas, campus once construction is completed

630,000
Weekly ChatGPT users that OpenAI said showed signs of psychosis or mania

$1.4 trillion
Amount OpenAI said it plans to invest in AI infrastructure over the next eight years

$20 billion
The company’s reported revenues in 2025

JEFF BEZOS

$75 million
Amount Amazon spent to acquire and promote Melania

$75 million
Cost of the 246-foot yacht Bezos bought to follow his 417-foot yacht

$100 million
2025 loss by the Washington Post, which this year cut 40-plus percent of its staff

48
Years until Amazon pays sales taxes on its New Carlisle, Indiana, data center

$4 billion
Amazon’s potential tax savings during that period


MARK ZUCKERBERG

$77 billion
Amount lost by Meta on the “Metaverse” before declaring AI the next big thing

$250 million
What Meta offered to hire one AI researcher last year

10%
Portion of Meta’s 2024 revenue the company estimated came from ads for scams

$6.4 million
Amount Meta spent on TV ads to convince Americans data centers are good

4,500
Square footage of Zuckerberg’s underground bunker in Hawaii

All illustrations by Yann Legendre.