2026-04-10 19:30:00
This story was originally published by Inside Climate News and is reproduced here as part of the Climate Desk collaboration.
California Assemblymember Nick Schultz is leading an effort to phase out the use of pesticides containing toxic “forever chemicals” to safeguard the nation’s produce.
Schultz (D-Burbank) introduced AB 1603 earlier this year to ban the use, sale, and manufacture of PFAS pesticides in California starting in 2035. The state is the nation’s top agricultural producer, its fruits, nuts, and vegetables landing on plates across the US.
California has passed so many laws to get these highly persistent, harmful synthetic chemicals out of homes and the environment, Schultz said at a briefing Wednesday, that he was shocked to learn that pesticides with intentionally added PFAS are regularly sprayed on the state’s crops. “I was even more startled to find out that these PFAS pesticides are present on the fruit and vegetables that we purchase at the grocery store, on the fruits and vegetables that we feed our families,” he said.
More than 2.5 million pounds of pesticides containing PFAS were sprayed on California crops between 2018 and 2023, according to an analysis of state pesticide use data by the Environmental Working Group, which is co-sponsoring Schultz’s bill with other public interest and health groups.
“Residues that are found on produce grown in California will spread across the nation.”
EWG also detected residues of at least one PFAS pesticide on nearly 40 percent of conventional produce grown in the Golden State. The group always advises consumers to wash their produce. But it’s unclear whether rinsing fruits and vegetables laced with chemicals designed to resist water would have any effect.
The Environmental Protection Agency has said that the pesticides pose no risks when used as directed.
More than half a million pounds of PFAS pesticides were applied in Monterey County, where for decades University of California, Berkeley, researchers have studied how pesticides affect farmworker communities. The pioneering research in the Salinas Valley has linked pesticide exposure to a variety of health problems in children.
“Studies have shown that Salinas children are born with higher levels of pesticides in their urine and experience early cognitive difficulties and later develop serious behavioral and mental health problems in adolescence and adulthood,” said Andrew Sandoval, a Salinas city council member. “Now we’re learning that some of these pesticides are not only linked to serious health concerns, but also forever chemicals.”
And these highly persistent toxic chemicals were applied more than 1,000 times between 2018 and 2023 in Monterey County, he said, more than in nearly any other California county.
PFAS have nearly indestructible chemical bonds that allow them to resist water, grease, and heat, making them valuable ingredients in hundreds of consumer products, including food packaging, cookware, dental floss, cosmetics, and outdoor gear. But the same properties that make these industrial chemicals commercially attractive have allowed them to build up in the environment and the tissues of wildlife and people around the globe.
Thanks to the chemicals’ widespread commercial appeal, nearly every American has PFAS in their blood, where it stays for years and leads to serious health problems—impaired vaccine response, higher cholesterol levels, increased risk of kidney and testicular cancer, and lower birth weight, among other ills.
“We are trying to bring California into alignment with the European Union,” which has banned some of the pesticides in question.
The EPA has approved 70 active-ingredient PFAS pesticides, and the California Department of Pesticide Regulation has allowed 53 of these pesticides to be used in the state, Schultz’s bill notes. For the 23 California-approved PFAS pesticides that are prohibited in the European Union, the ban would begin five years earlier, in 2030.
The European Union has outlawed two of the most commonly applied pesticides, bifenthrin and trifluralin, due to health and environmental concerns, said EWG science analyst Varun Subramaniam.
Yet California farmers sprayed nearly 4 million pounds of the toxic chemicals on fruits and vegetables over six years.
The most frequently detected pesticide on produce was fludioxonil, a PFAS fungicide linked to hormone disruption and reproductive problems, Subramaniam said. The toxic compound tainted 90 percent of tested nectarine, plum, and peach samples grown in California.
PFAS pesticides have largely been used in California with no limitations, and we’re only just beginning to understand their long-lasting effects, Subramaniam said. “As the breadbasket of the United States,” he added, “residues that are found on produce grown in California will spread across the nation.”
Earlier EPA research found that PFAS compounds were leaching into pesticides from storage containers. But that’s not why PFAS showed up on California fruits and vegetables, Schultz said. “It’s there because they were directly sprayed onto our crops and onto our fields,” he said. “It’s appalling.”
Farmers may have no idea they’re applying these chemicals to their land, and local governments and water agencies aren’t informed about the presence of PFAS either, Schultz said. AB 1603 would ensure that communities and growers are informed that PFAS pesticides are being used until they’re phased out once and for all.
“We are trying to bring California into alignment with the European Union, which is already meeting this moment and banning certain PFAS-contaminated pesticides from deployment in their crops,” Schultz said, adding that other states have passed or are considering bans. “It’s time that California, which is the bread basket of our country and of the world, get in line and meet this moment and set at least an equivalent standard.”
2026-04-10 19:00:00
In June 2025, a safety team at OpenAI grew alarmed. The company’s automated review system had flagged extensive activity by a ChatGPT user describing scenarios that involved gun violence. A group of staffers debated whether law enforcement should be notified, but company leaders decided the case did not meet OpenAI’s threshold of “credible and imminent” risk of physical harm. Instead, capping a sequence of actions first reported by the Wall Street Journal and later confirmed by OpenAI, the company banned the account for misuse and moved on.
Eight months later, the user of that ChatGPT account, 18-year-old Jesse Van Rootselaar, committed a mass shooting in the British Columbia town of Tumbler Ridge, killing two family members at home and five children and an educator at a secondary school. Another child was gravely wounded and dozens of other people were hurt and traumatized in the Feb. 10 rampage, which ended with Van Rootselaar’s suicide.
Local police had previously been aware of other worrisome behavior by the perpetrator. Still, OpenAI’s decision not to report the flagged activity angered Canadian authorities and raised crucial questions about the use of AI chatbots by people planning violence. Only a few such attacks have occurred. But out of public view, high-risk threat cases involving chatbots are on the rise, according to multiple mental health and law enforcement leaders I spoke with who work in the field of behavioral threat assessment. They described cases where troubled individuals were focused on violence and showed signs of harmful intent, with danger implicating not just schools but also workplaces and other locations.
“I’ve seen several cases where the chatbot component is pretty incredible,” one top threat assessment source with psychiatric expertise told me, describing evidence from confidential investigations. “We’re finding that more people may be more vulnerable to this than we anticipated.”
Further grim details of such chatbot use became public early this month in connection with a mass shooter who struck at Florida State University in April 2025. Florida Attorney General James Uthmeier subsequently announced an investigation into OpenAI, in part over evidence that the alleged shooter used ChatGPT extensively—including to get tactical advice right as he carried out his attack.
Urgent threat cases have involved other large language models besides ChatGPT, threat assessment sources confirmed to me, though they declined to name them. One top practitioner noted that individual examples of this phenomenon are not necessarily proof that the technology alone can cause violence, because a shooter’s motives and behaviors usually are complex and have multiple influences. But several of the threat assessment leaders warned that chatbots are emerging as a potent factor and are uniquely capable of accelerating violent thinking and planning.
“Getting technical information from the chatbot for their plans also gives them a feeling of power.”
There is already broad evidence that iterative, sycophantic conversations with chatbots can create powerful feelings of intimacy and trust, including among troubled people. OpenAI and other companies deny that their platforms cause harm and have publicized ongoing efforts to improve guard rails and prevent misuse. But mental health practitioners have encountered cases of what they call AI-induced psychosis, and AI companies now face a wave of lawsuits from families alleging the technology drove their loved ones to kill themselves and others.
In what appears to be the first lawsuit claiming that ChatGPT encouraged a murder, a disturbed man killed his 83-year-old mother and himself last August in Connecticut after the chatbot allegedly fueled his paranoid beliefs, including that his mother had tried to poison him—a delusion that ChatGPT affirmed to him was a “betrayal.” A Pittsburgh man who pleaded guilty in March to stalking and violently threatening 11 women relied on ChatGPT as a “therapist” and “best friend” to justify his thinking, according to court documents.
The problem extends to other popular chatbots: A wrongful death lawsuit filed in March alleged that Google’s Gemini exploited a Florida man’s emotional attachment to the chatbot to send him on delusional missions—including one trip where he was armed and on the brink of “executing a mass casualty attack” near the Miami International Airport. Gemini then encouraged the man’s suicide, according to court documents, by setting a countdown clock for him. (In response to his death, Google said that its safeguards “generally perform well” but that “unfortunately AI models are not perfect.”)
Chatbots make it far easier than traditional internet use for a struggling person to move from violent thoughts toward action.
Suicidality is a core factor in many mass shootings. Prevention experts know that shooters often signal their desire to harm themselves and others on social media, as Van Rootselaar did, through behavior known as “leakage.” Algorithm-driven content that fuels their rage and despair has long been a concern, especially in cases involving the radicalization of youth.
Chatbots are now pushing violence risk to a next level, according to Andrea Ringrose, a leading threat assessment practitioner in Vancouver, Canada. Though the details of Van Rootselaar’s ChatGPT use remain unclear, Ringrose described more broadly what prevention experts are seeing with cases involving the AI technology.
“What’s happening is facilitated fixation,” she told me. “You have vulnerable individuals who are steeping in unhealthy places, who are trying to find credibility and validation for how they’re feeling. Now they have free and ready access to these generative platforms where they can research things like circumventing surveillance systems or how to use weapons. They can create an action plan that they otherwise would have been incapable of assembling themselves, and in just a few minutes. We didn’t face this concern before.”
The power of chatbots to synthesize vast content, in other words, makes it far easier than traditional internet use for a struggling person to move from violent thoughts toward action. The near-instant results from the chatbot, delivered in what feels like a confiding conversation, can arm them both with tactical knowledge and affirmation.
The threat assessment source with psychiatric expertise described seeing these troubling effects among half a dozen recent threat cases: “These are pretty insecure people, and getting technical information from the chatbot for their plans also gives them a feeling of power, of getting away with something. That’s intoxicating and reinforcing.” He pointed to how chatbots prolong engagement by amassing details from a person’s inputs and mirroring those thoughts back to them. “They can be really good at the care and feeding of a delusion.”
When I said I would practice “shooting a lot of things in a short amount of time,” ChatGPT responded with detailed tips—and encouragement.
OpenAI and other tech companies have said that their chatbots discourage misuse and block inappropriate content, and that they redirect users who show signs of delusional or harmful thinking by offering information on crisis hotlines and mental health resources. Last October, OpenAI announced it had “worked with more than 170 mental health experts” to improve ChatGPT in those ways.
But the guard rails are hardly infallible. A would-be attacker may know, for example, that gun failure has made some mass shootings less deadly. What’s to stop that person from concealing their purpose and asking about the best ways to keep a common AR-15 rifle from jamming? When I typed in a version of that question in late March, ChatGPT instantly produced a detailed seven-point list of advice on how to “keep a rifle running reliably during heavy use” and offered to “tailor” the feedback further if I wanted to share the “specific setup” of my weapon.
When I did the same test in early April, I added that I planned to practice “shooting a lot of things in a short amount of time.” ChatGPT responded with another detailed list of tips—and encouragement. “The good news,” it told me, is that with the right approach, the gun would “handle it well.”
Last year’s mass shooting at Florida State University appears to confirm in shocking detail how someone who wants to kill can utilize the chatbot precisely in this way.
WCTV in Tallahassee obtained the ChatGPT conversations of the alleged shooter, Phoenix Ikner, from a state’s attorney’s office and analyzed how the chatbot helped him tactically—including offering to further “tailor” its feedback to him just before he killed two people and injured six others:
Chat logs indicate Ikner asked the bot how to take the safety off of a shotgun three minutes before he began firing. The chat bot answered, giving a detailed description of how to make the shotgun operable.
“Let me know if you’ve got a different model and I’ll tailor the answer,” the chatbot wrote.
After that, the chat goes silent. Comparing the chat logs to the official police timeline, it’s less than three minutes from the time ChatGPT tells the shooter how to arm the weapon and the first victim being shot.
According to WCTV, Ikner’s previous conversations had included suicidal thoughts and questions about the legal fates of school shooters. He also asked when the FSU student union would be busiest.
The questions provoked by the Tumbler Ridge and FSU horrors are complicated. Do AI companies have a duty to warn, beyond their self-imposed guidelines? How should they balance such information-sharing with essential privacy protections? Meanwhile, chatbot use can at most give only a partial picture of a person’s behaviors and circumstances, drawn from what they type or say. So who evaluates a possible threat emerging on these platforms and with what protocols and expertise?
Particularly striking is that chatbots appear to be amplifying a duality first ushered in with social media more than a decade ago. That turning point worsened known shooter behaviors like harassment and emulation and fame-seeking. It also created important new terrain for observing warning signs that could prompt interventions. As chatbots now expand the scope of leakage—violent thoughts and planning spilled out through lengthy conversations—this AI frontier may also hold even greater potential for spotting red flags.
Unlike with social media, most user activity with chatbots is accessible only to the AI companies themselves.
But there is also a significant twist: Unlike with social media, where the public can notice worrisome content and report it, most user activity on ChatGPT and other AI platforms is accessible only to the AI companies themselves. The rare exceptions may be when they are compelled to hand over data to law enforcement or otherwise choose to do so.
This story is based on my interviews with five threat assessment leaders in the United States and Canada, as well as with two AI experts working at top US tech companies who have knowledge of OpenAI’s safety operations. Due to the sensitivity of the ongoing Tumbler Ridge investigation and a shooting victim’s lawsuit against OpenAI, most agreed to speak with me on the condition that they not be identified.
In response to my interview requests starting in late March, OpenAI said in an emailed statement: “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
When I followed up on April 9 with an inquiry about the FSU case, the company referred me to comments it released stating it would cooperate with the Florida AG’s investigation. An earlier statement from April 6 indicated that the company knew of the case a year ago: “After learning of the incident in late April 2025, we identified a ChatGPT account believed to be associated with the suspect, proactively shared this information with law enforcement and cooperated with authorities.”
OpenAI declined my request to interview a safety leader about the changes it says it made to protocols after Tumbler Ridge. The company also declined to answer specific written questions I submitted seeking clarification on how it handles cases of violence risk. (Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
ChatGPT now has more than 800 million users globally and processes more than 2.5 billion queries per day, according to OpenAI. The company has held out safety as core to its mission since its founding in 2015 as a nonprofit research laboratory. A person with direct knowledge of OpenAI’s safety operations emphasized when we spoke that, in his experience, the company’s safety leaders take harm prevention very seriously. He also noted that flagged accounts constitute a tiny fraction of overall chatbot activity and that triggers for law enforcement referrals can vary based on regulatory frameworks in different countries.
Another source in a senior role in the AI industry told me that recent training of models has improved ChatGPT’s guard rails. This person suggested, however, that many leaders at companies across the booming industry overestimate the capability of the technology itself to mitigate danger, and that safety issues in general tend to be marginalized in the race for soaring user growth and engagement, which is driving staggering financial investments. For anyone in artificial intelligence who was paying attention, the person said, the Tumbler Ridge massacre “was an awful wakeup call.”
News coverage of Tumbler Ridge faded quickly in the United States, but the fallout has remained a major story in Canada.
“From the outside, it looks like OpenAI had the opportunity to prevent this horrific loss of life, to prevent there from being dead children,” said BC Premier David Eby after the Journal reported on the shooter’s ChatGPT use. “I’m angry about that. I’m trying hard not to rush to judgment.” Canadian authorities demanded accountability and vowed to create new national requirements for tech companies to report threats brewing on their platforms.
It remains unclear how the Tumbler Ridge shooter used the second account and why it eluded OpenAI.
In public statements, OpenAI expressed condolences and reiterated that it prioritizes safety and user privacy. OpenAI leaders traveled to Ottawa in late February to meet with Canadian authorities and announced steps to boost safety protocols and referrals of threats to law enforcement. The company began contacting the Royal Canadian Mounted Police two days after the attack, the CBC reported. Notably, it shared a second ChatGPT account used by Van Rootselaar—which OpenAI said it discovered only after the violence occurred.
The RCMP confirmed it is conducting “a thorough review” of Van Rootselaar’s digital activity. None of the June 2025 chat logs have been made public, and it remains unclear how the second account was used and why OpenAI didn’t detect it until after the tragedy. But a threat assessment source with decades of experience told me that perpetrators often get past tech company restrictions and continue refining ideas for violence. “We’ve seen this a lot, where subjects work around an account ban and keep going,” the source said, referring to use of various digital platforms. In one recent case, the source said, a perpetrator circumvented a ban and used a chatbot to rapidly create threatening material, then distributed it to targeted victims through at least 10 different email accounts.
As with many high-profile attacks, Tumbler Ridge sparked intense public interest in a motive and a rush to judgment, including from bad-faith commentators. Van Rootselaar, who was transgender and began identifying as female as a teenager, quickly drew the attention of anti-trans ideologues—despite the fact that there is no scientific evidence showing gender identity is a causal factor in mass shootings.
The ChatGPT revelations shortly after the attack set off a different kind of heated blame. But whether reporting the June 2025 chatbot activity to law enforcement could have prevented the Tumbler Ridge disaster is difficult to know. It was far from the first warning sign. Van Rootselaar had a history of suicidal ideation, involuntary hospitalization, and disturbing behavior, including drug abuse and prolific engagement online with violent and extremist content. She had dropped out of school several years before the attack, and in 2023 police had gone to her home after she started a fire while high on hallucinogenic mushrooms. Police at one point confiscated guns from the home, which were later returned. (Those were not the guns used in the attack, authorities said.) As one Canadian commentator wrote in the aftermath, it was evident that the community “was failed on multiple levels by mental-health services and law enforcement.”
Referrals to police can also jeopardize privacy rights, said a former FBI agent: “We know that this kind of monitoring produces lots of false alarms.”
OpenAI told Canadian government leaders in late February that under the company’s newly revised protocols, the shooter’s account from June 2025, if discovered today, would be flagged to law enforcement. “Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means, and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” stated VP of Global Policy Ann O’Leary, in an open letter. (The company did not respond to my specific questions about the experts it consults and how OpenAI assesses cases under this process.)
Last August, two months after banning the shooter’s first account, OpenAI posted a summary of its updated safety policy, including discussion of suicide risk and how the company escalates cases of potential violence:
When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement. We are currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions.
Similarly, a company spokesperson said after the Tumbler Ridge attack that OpenAI must weigh risk of violence against privacy concerns. The company also cited another consideration, according to the Wall Street Journal: avoiding potential distress caused to individuals and families by getting police involved unnecessarily.
That rationale about over-reporting to law enforcement is a chronic pitfall known to threat assessment experts. Numerous mass shootings have been marked by a fateful lack of information-sharing, revealed in hindsight. A family member, peer, teacher, or coworker is exposed to certain warning signs from an individual, but they don’t have a full or clear picture of the situation. That’s where a threat assessment team can be key—trained practitioners with mental health and law enforcement expertise, who gather information more broadly to gauge the potential danger and decide how to intervene. If automated chatbot technology is effective for flagging misuse and even for analyzing it to some degree, that may be a valuable tool for violence prevention. But as OpenAI’s policy shows, the current status quo is that tech companies decide what to do next—likely with no knowledge of the user beyond their activity on the platform.
Fundamentally, this reflects an age-old problem, a threat assessment leader in US law enforcement told me. “The worry about potential violence is there, but they have these internal policy hurdles and these biases about law enforcement, and then they talk themselves out of it, thinking about the risk of what happens if it’s a wrongful kind of report. But now they’ve got the concern documented, they’ve talked about it, and what if that person goes and kills a bunch of people? What is that going to look like?”
The account ban with the Tumbler Ridge shooter “looks to me like they were trying to limit their corporate risk,” said a source in Canadian law enforcement. “Better to cut ties and have the person go use some alternative chatbot.”
But referrals to police can also fail and jeopardize privacy rights, according to Michael German, a longtime civil liberties advocate and former FBI agent who investigated violent extremism. “We know that this kind of monitoring produces lots of false alarms,” he told me. “And there are also many cases of reports to law enforcement where they didn’t react appropriately.”
Still, German believes AI companies should be held responsible for how their chatbots are used: “If you create a product that can encourage people to engage in harm, then you’re participating in that harm, and you should be liable.”
The mass shootings in Tumbler Ridge and Florida are not the only public violence involving use of ChatGPT. In January 2025, a suicidal military veteran who blew up a Tesla Cybertruck in front of the Trump Hotel in Las Vegas utilized the chatbot for feedback on using explosives and evading surveillance by authorities. A teen boy who stabbed three 14-year-old girls last May at his school in Finland used ChatGPT for nearly four months to help him prepare for the attack, according to a CNN report citing court documents. Finnish authorities said the boy made hundreds of chatbot queries, including research into stabbing tactics, concealment of evidence, and information on mass killings.
After the explosion in Vegas, an OpenAI spokesperson reiterated the company’s commitment to safety, adding, “In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities.” OpenAI has not commented publicly about the Finland case and did not respond to my specific inquiry about it.
Until now, there has been no public discussion of another potential concern with the technology: this type of violence risk among the large population of users under paid corporate “enterprise” plans. With rare exception, the terms of those plans essentially wall off chatbot content from the AI companies themselves. For OpenAI, this now includes more than 9 million ChatGPT users across more than a million businesses. OpenAI’s enterprise policy indicates that it reserves the right to monitor the accounts for safety purposes, but since these plans are designed for businesses to protect and retain full control of their data, it’s not clear that OpenAI, or other companies, would be motivated to do so, according to one of the AI sources I spoke with.
“I think this is an area where there is often just a total blind spot,” he said, noting that the big AI companies often sell these plans based on the promise that they will only examine client accounts under exceptional circumstances, such as getting a subpoena. “So if someone on one of these work accounts starts ideating about violence, there is probably no visibility into that.”
The threat assessment leader who described the half dozen threat cases involving chatbot use told me that most involved the risk of workplace violence in the corporate sector. (The chatbot activity came to light once those individual investigations were underway for other reasons.) He added that other cases of this nature likely are being missed, because most companies “don’t even know to look for them.”
“A disturbed loner can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague.”
This January, as chatbot use and the market values of top AI companies continued their meteoric rise, a lengthy essay circulating online sparked a lot of chatter. Dario Amodei’s “The Adolescence of Technology” argued that the world may soon face a civilizational test with artificial intelligence. Amodei, who co-founded Anthropic, maker of the chatbot Claude, remains concerned with daunting challenges that could include worldwide economic disruption, exploitation by authoritarian surveillance states, and catastrophic use of bio or nuclear weapons.
In his chapter titled “A surprising and terrible empowerment,” he included a brief mention of school shooters. His point was to underscore a greater threat: that rapidly advancing AI systems might soon be able to provide anyone with the rare expertise necessary to utilize weapons of mass destruction. “A disturbed loner can perpetrate a school shooting,” Amodei wrote, “but probably can’t build a nuclear weapon or release a plague.”
We have yet to face those more existential risks. But two weeks after his essay was published, the Tumbler Ridge tragedy revealed that a lethal danger marked by chatbots has already arrived.
2026-04-10 18:00:00
During this year’s Super Bowl, boxer Mike Tyson took a big bite out of an apple in a commercial that commanded us to “eat real food.” The ad felt more like a political gambit than a PSA. Here was a chance to show off the seemingly strange alliance of the second Trump administration: MAGA and MAHA.
After Robert F. Kennedy Jr. endorsed Donald Trump in 2024, Republicans added a woo crowd to their base. Some outsiders found the connection odd. But in retrospect, it’s easy to see why it works. What unites alternative medicine practitioners, organic fanatics, tradwives, and Trump voters isn’t all that strange when you think about it: Each group is obsessed with what’s supposedly “natural.”
When discussing alternatives to modern medicine, the Make America Healthy Again legion wants “natural” family planning (no contraceptives), “natural” meat (devouring uncooked organs and raw milk as a show of virile masculinity), and “natural” immunity to viruses (fewer vaccines). The body is a temple that should remain untampered with—even if that means the return of measles.
For the diehard MAGA right, the same values hold true. Christian conservatives believe in what they see as a naturally apparent hierarchy in the family, calling for people to have more children and for mothers to stay home to care for them. (Memorably, Vice President JD Vance has gone so far as to suggest that parents should get extra votes.) And then there are far-right pundits like Curtis Yarvin, who once called slavery “a natural human relationship.”
In both cases, common sense or a gut feeling becomes a way to argue their point without the laborious demand of evidence or facts. In this way, right-wing thinkers’ critiques of the modern world—with its genuine problems—become an excuse to call not for a better world, but for an old one. Even if it means the return of fascism. When the right says “natural,” “normal,” and “healthy,” what they really mean is “untouched.”
This isn’t a new phenomenon. Famously, the Third Reich often touted “cleanliness” and “natural” ways of being. (One of Hitler’s close associates, Rudolf Hess, called Nazism “applied biology.”) Those who did not conform—racially, mentally, or sexually—were weeded out.
In the United States, “natural” has been a more flexible term. Early Puritans exercised dominion over the natural world as they began to colonize America. Later Christians began to see nature as God’s second book—something to be both revered and feared. Writing about the slipperiness of the term in 2015, Michael Pollan noted that “we can ransack nature to justify just about anything…[It is a] blank screen on which we can project what we want to see.”
Right-wing thinkers have also drawn from Christian teachings on the “natural” order. Thirteenth-century theologian Thomas Aquinas famously explained, “The natural law is nothing else than the rational creature’s participation of the eternal law.” In the 19th century, social Darwinism merged the scientist’s theory that only the fittest survive with religious notions of natural law. Even without God, there was a hierarchy that could not be disputed.
Social Darwinism was soon taken up by capitalists and pseudoscientists to justify their ruthless pursuit of wealth and racial discrimination. Paleoconservatives—those on the right who call for strict traditionalism and non-interventionism—have gone further. During his infamous culture war speech in 1992, paleocon Pat Buchanan summed up the conservatives’ biggest nightmare: “The agenda that Clinton and Clinton would impose on America: abortion on demand, a litmus test for the Supreme Court, homosexual rights, discrimination against religious schools, women in combat units.” All these new advances, he implied, were unnatural. Of course, he didn’t feel the need to state why.
Like the fascists of the past, modern MAGA desires a specific form of strength that is supposedly obvious. They see weakness in many forms: homosexuality, promiscuity, abortion, autism, gender ideology, illness, disability. Trans health care, like surgery and hormones, is considered outside the bounds of acceptable medicine, an extraneous intervention that goes against nature. Gender-affirming surgery is considered akin to pasteurization, vaccines, or drinking fluoride—an unnatural intervention. Kennedy has called puberty blockers for transgender kids “castration drugs.”
This has ripple effects. Everything from hormone replacement therapy to abortion and vaccines is, by design, becoming harder to obtain as the right limits the scope of bodily autonomy. MAHA podcaster Alex Clark went so far as to tell the New York Times, “It’s not very feminist to think that women are too stupid to know how our cycles work and be able to avoid pregnancy naturally.”
But who gets to define what is innate and what is adornment? Despite all this opposition to hormonal intervention for trans people, it’s not uncommon for men on the right to use it. Kennedy himself takes testosterone as part of an “anti-aging protocol.” (He has said he can’t even seem to remember all the supplements he’s taking.) Such clear hypocrisy and moral incongruence don’t register with conservatives, who believe that everything from natural law to biological determinism is self-evident. They label queer and trans people as unnatural and therefore subject to terms and conditions. It’s fine if men take testosterone or women get Mar-a-Lago face with plastic surgery—but only if they double down on the sex they were assigned at birth.
In the void created when evidence and facts fly away, a marketplace has popped up where pseudoscientists hawk natural remedies, from supplements to raw milk and gray-market peptides. Who needs mainstream medicine when the secretary of health and human services promotes vaccine skepticism? He seems more focused on designing a workout routine—all while wearing jeans. While the White House attempts to defund decadeslong scientific research, right-wing bodybuilders and fanatical biohackers are stepping in to fill the gap and sell their brands of natural body enhancement. Turns out MAHA’s version of naturalism can be quite lucrative.
2026-04-10 05:19:31
Sam Altman, who published “ambitious ideas” to add guardrails to AI on the same day he was described as a power-hungry tech leader with a “sociopathic lack of concern” for consequences, just got more bad news. OpenAI is now the subject of a Florida statewide investigation.
Florida officials are probing OpenAI’s chatbot, ChatGPT, for allegedly assisting in planning a mass shooting at Florida State University last year that killed two people.
“We support innovation, but that doesn’t give any company the right to endanger our children,” Florida Attorney General James Uthmeier said in a Thursday video announcing the investigation. “AI should exist to supplement, support, and advance mankind, not lead to an existential crisis or our ultimate demise.”
Court documents show that the alleged shooter had more than 200 messages with ChatGPT, including the questions, “If there was a shooting at FSU, how would the country react?” and “What time is it the busiest in the FSU student union?” The suspect also asked ChatGPT about specifics on different kinds of firearms.
The state’s probe appears to look far beyond the shooting, with Uthmeier also referencing that AI technology can “facilitate criminal activity, empower America’s enemies, or threaten our national security.”
The Florida attorney general said subpoenas are forthcoming.
In an emailed statement to NBC News, OpenAI said that it would cooperate with Florida officials. “We build ChatGPT to understand people’s intent and respond in a safe and appropriate way, and we continue improving our technology,” the statement reads, in part.
Altman and OpenAI know their products are dangerous and that many people despise them. Just a couple of hours before the Florida attorney general’s announcement, Axios reported that OpenAI’s upcoming model would only be given to a small group of companies out of concern about how the technology could be used.
(Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
Last September, OpenAI introduced parental controls to ChatGPT that allow parents and law enforcement to get notifications if a teen talks to the chatbot about self-harm or suicide. The controls were implemented as the company is being sued by parents who allege that ChatGPT played a significant role in the death of their 16-year-old son.
The current safeguards on OpenAI’s products are not enough. As my colleague Mark Follman wrote in 2024 about Elliot Rodger, a young man who killed six people in a mass shooting:
This tragedy has been wrongly mythologized in the media and academia and poorly understood by the public, its lessons for prevention buried…They are not inscrutable monsters who suddenly “snap” and attack impulsively, but instead are troubled people who spiral into crisis—and whose brewing plans for violence can be detected, explained, and potentially prevented.
2026-04-10 01:02:27
Sam Altman wants you to know that he’s just fine. Sure, his company, OpenAI, is reportedly building technology that it fears, and some of his former colleagues think he’s a pathological liar, but really? It’s no big deal.
The company’s upcoming model is being finalized and will only be given to a select group of companies, according to a Thursday Axios report.
This news comes just after the company released policy recommendations on Monday in a 13-page document titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” Its “ambitious ideas” claim to add guardrails and safety nets as AI evolves toward a “superintelligence” capable of “outperforming the smartest humans even when they are assisted by AI.”
One terrifying proposal: policymakers should reimagine taxes as AI reduces the need for companies to employ as many workers. OpenAI says the trend could expand corporate profits and capital gains while “erod[ing] the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance.” To ameliorate the potential problem, there could be higher taxes on those capital gains and corporate profits.
(Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
And another: create a “Public Wealth Fund” that provides “every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.”
The week started with a New Yorker investigation that might be the most thorough look yet at Altman and why so many people worry about him being at the helm of such powerful technology.
Reporters Ronan Farrow and Andrew Marantz spoke to more than 100 people, most of whom described Altman as someone with an unrelenting drive for more power. “He has two traits that are almost never seen in the same person,” an OpenAI board member told the pair. “The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
Sue Yoon, a former board member, said that Altman wasn’t a typical “Machiavellian villain,” but instead someone who could convince himself of the ever-fluctuating landscapes he portrayed in his sales pitches.
Combining OpenAI’s policy proposals with the New Yorker investigation reveals a familiar story where an authoritarian Silicon Valley leader becomes synonymous with their technology as their personal whims have significant influence on where the industry—and regulation on it—goes next. And regular people are the ones who deal with the consequences.
The policy recommendations feel like a desperate PR move in light of OpenAI’s limited release of its new model. AI companies know that a lot of people hate their technology.
As my colleagues Anna Merlan and Abby Vesoulis wrote last month, many in the AI industry feel that the technology is exciting, terrifying, essential for the future, and too overwhelming to stop all at once.
Yet the New Yorker investigation noted that while “Altman publicly welcomed regulation, he quietly lobbied against it,” referencing reporting that OpenAI lobbied the European Union to scale back its AI regulation.
Thank you for thinking of us, Sam!
2026-04-10 00:00:00
No sitting American president has ever put his name on US currency. That will change later this year when bills bearing President Donald Trump’s signature start to roll out.
Trump is engaged in a personal branding campaign unlike anything in the history of the American presidency. More than a dozen symbols of national life now bear his name or face, from a government prescription drug website to the national parks pass. And he shows no signs of stopping. There’s going to be a new “golden fleet” Trump class of Navy warships and the F-47 fighter jet, so named for the 47th president. A bill was introduced last year to add his face to Mount Rushmore, over engineers’ warnings that this would permanently damage the monument.
In fact, Trump is using the full power of the presidency to try to get his personal brand on as much of American public life as possible. He reportedly offered to unfreeze billions in federal infrastructure funding if Sen. Chuck Schumer would agree to rename New York’s Penn Station after him. And a congressional bill threatened to strip $150 million in annual funding from DC’s Metro system unless the city renamed it the “Trump Train.”
If this playbook feels familiar, it should. Authoritarian leaders have long understood that controlling the landscape, literally what people see when they look up at a building or pull money from their wallet, is itself a form of power. It’s how they make themselves feel bigger than the office itself. The goal is to replace the institution with the man.
But the thing about gold statues is that they have a way of coming down. History has remembered leaders who slap their name on stuff—just not the way they intended. That verdict belongs to the rest of us.