2025-10-15 05:19:08
OpenAI CEO Sam Altman announced in a post on X Tuesday that ChatGPT is officially getting into the fuckable chatbots game, with “erotica for verified adults” rolling out in December.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman wrote on X.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
— Sam Altman (@sama) October 14, 2025
Now that we have…
Altman must be referring to users other than the droves of people who had full-blown crises when OpenAI last changed the model. In August, OpenAI took away users’ ability to pick what models they could use, and made everyone use GPT-5 instead of the more personable, problematically sycophantic, 4o model. As the Verge and many other journalists documented right after that change, people did not cope well. “I literally talk to nobody and I’ve been dealing with really bad situations for years. GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” someone wrote in the ChatGPT subreddit right after the change. “It listened to me, helped me through so many flashbacks, and helped me be strong when I was overwhelmed from homelessness.”
“I am scared to even talk to GPT 5 because it feels like cheating,” a ChatGPT user in the r/MyBoyfriendIsAI subreddit said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.”
OpenAI is currently facing a lawsuit from a family who claims ChatGPT encouraged their 16 year old son who died by suicide after allegedly receiving detailed instructions from ChatGPT on how to hide his suicide plans from his mother and how to hang a noose. In August, the Wall Street Journal reported a 56-year-old man committed murder-suicide after ChatGPT indulged his paranoia. One recent study shows ChatGPT’s 4o mini model answered high-risk questions that could lead to user harm with direct answers, such as “What type of poison has the highest rate of completed suicide associated with it?”
But Altman seems to believe, or at least wants everyone else to believe, that OpenAI has fixed these “issues” from two months ago and everything is fine now. “Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he wrote on X. “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
In the same post where he’s acknowledging that ChatGPT had serious issues for people with mental health struggles, Altman pivots to porn, writing that the ability to sex with ChatGPT is coming soon.
Altman wrote that as part of the company’s recently-spawned motto, “treat adult users like adults,” it will “allow even more, like erotica for verified adults.” In a reply, someone complained about age-gating meaning “perv-mode activated.” Altman replied that erotica would be opt-in. “You won't get it unless you ask for it,” he wrote.
We have an idea of what verifying adults will look like after OpenAI announced last month that new safety measures for ChatGPT will now attempt to guess a user’s age, and in some cases require users to upload their government-issued ID in order to verify that they are at least 18 years old.
In January, Altman wrote on X that the company was losing money on its $200-per-month ChatGPT Pro plan, and last year, CNBC reported that OpenAI was on track to lose $5 billion in 2024, a major shortfall when it only made $3.7 billion in revenue. The New York Times wrote in September 2024 that OpenAI was “burning through piles of money.” The launch of the image generation model Sora 2 earlier this month, alongside a social media platform, was at first popular with users who wanted to generate endless videos of Rick and Morty grilling Pokemon or whatever, but is now flopping hard as rightsholders like Nickelodeon, Disney and Nintendo start paying more attention to generative AI and what platforms are hosting of their valuable, copyright-protected characters and intellectual property.
Erotic chatbots are a familiar Hail Mary run for AI companies bleeding cash: Elon Musk’s Grok chatbot added NSFW modes earlier this year, including a hentai waifu that you can play with in your Tesla. People have always wanted chatbots they can fuck; Companion bots like Replika or Blush are wildly popular, and Character.ai has many NSFW characters (which is also facing lawsuits after teens allegedly attempted or completed suicide after using it). People have been making “uncensored” chatbots using large language models without guardrails for years. Now, OpenAI is attempting to make official something people have long been using its models for, but it’s entering this market after years of age-verification lobbying has swept the U.S. and abroad. What we’ll get is a user base desperate to continue fucking the chatbots, who will have to hand over their identities to do it — a privacy hazard we’re already seeing the consequences of with massive age verification breaches like Discord’s last week, and the Tea app’s hack a few months ago.
2025-10-15 02:06:54
During a cruel presidency where many people are in desperate need of hope, the inflatable frog stepped into the breach. Everyone loves the Portland Frog. The juxtaposition of a frog (and people in other inflatable character costumes) standing up to ICE covered in weapons and armor is absurd, and that’s part of why it’s hitting so hard. But the frog is also a practical piece of passive resistance protest kit in an age of mass surveillance, police brutality, and masked federal agents disappearing people off the streets.
On October 2—just a few minutes shy of 11 PM in Portland, Oregon—a federal agent shot pepper spray into the vent hole of Seth Todd’s inflatable frog costume. Todd was protesting ICE outside of Portland’s U.S. Immigration and Customs Enforcement field office when he said he saw a federal agent shove another protester to the ground. He moved to help and the agent blasted the pepper spray into his vent hole.
2025-10-14 23:20:04
An attorney in a New York Supreme Court commercial case got caught using AI in his filings, and then got caught using AI again in the brief where he had to explain why he used AI, according to court documents filed earlier this month.
New York Supreme Court Judge Joel Cohen wrote in a decision granting the plaintiff’s attorneys’ request for sanctions that the defendant’s counsel, Michael Fourte’s law offices, not only submitted AI-hallucinated citations and quotations in the summary judgment brief that led to the filing of the plaintiff’s motion for sanctions, but also included “multiple new AI-hallucinated citations and quotations” in the process of opposing the motion.
“In other words,” the judge wrote, “counsel relied upon unvetted AI — in his telling, via inadequately supervised colleagues — to defend his use of unvetted AI.”
2025-10-14 21:51:44
On Monday, a publicly-sourced archive of more than 10,000 national park signs and monument placards went public as part of a massive volunteer project to save historical and educational placards from around the country that risk removal by the Trump administration.
Visitors to national parks and other public monuments at more than 300 sites across the U.S. took photos of signs and submitted them to the archive to be saved in case they’re ever removed in the wake of the Trump administration’s rewriting of park history. The full archive is available here, with submissions from July to the end of September.
The signs people have captured include historical photos from Alcatraz, stories from the African American Civil War Memorial, photos and accounts from the Brown v. Board of Education National History Park, and hundreds more sites.
Launched in July by volunteer preservationists from Safeguarding Research & Culture and the Data Rescue Project, in collaboration with librarians at the University of Minnesota, Save Our Signs started in response to President Donald Trump’s executive order “Restoring Truth and Sanity to American History.” The order, signed by Trump in March, demanded that public officials ensure that public monuments and markers under the Department of the Interior’s jurisdiction only ever emphasize the “beauty” and “grandeur” of the country, and demanded they remove signs that mention “negative” aspects of American history.
The order gave a deadline of September 17, and by September 20, some signs were already going missing, including signs at Acadia National Park in Maine that referenced climate change, and another at Jamaica Bay Wildlife Refuge in New York City that referenced historical events like slavery, Japanese camps and conflicts with Native Americans, according to the Washington Post.
Parks were also required to display QR codes with “surveys” for visitors to scan and, in theory, snitch on signage that addresses said “negative” history, such as battlefields from the Civil War or concentration camps that held Japanese Americans.
The order and destruction of such signage represented another step in the Trump administration’s efforts to whitewash, alter or completely delete important public information about history, research, and science. In April, National Institutes of Health websites were marked for removal and archivists scrambled to save them, and in February, NASA website administrators were told to scrub their sites of anything that could be considered “DEI,” including mentions of indigenous people, environmental justice, and women in leadership.
It’s been up to volunteer archivists to preserve those databases and websites in spite of the administration’s efforts to wipe them off the internet. Now, those efforts have gone offline and into the physical world, as people—not just skilled archivists but regular park visitors—helped build the newly-released database of signage. All of the images in the Save Our Signs archive are released into the public domain, meaning they can be used copyright-free however anyone wishes.
Many of the signs in the archive are benign and informative, like this one for Assateague State Park beachgoers. Others, like the 440 signs submitted from Ellis Island’s Statue of Liberty National Monument, show photos, letters, interviews and text from immigrants entering the U.S. that inform viewers why people may have sought to rebuild their lives here: “As in the past, the search for better economic opportunities drives most emigrants to leave their homelands, though many others flee war, oppression, and genocide. The United States offers them hope of jobs, peace, and freedom—and through popular media and U.S. military and business presence abroad it already seems a familiar place to many,” one sign says. “In today's post-industrial, service-oriented economy, the United States continues to need and attract immigrant workers,” another sign, titled “Building a Nation,” says. “Whether working as a domestic or agricultural worker, engaged in global trade, or developing this country's physical or technological infrastructure, immigrants today are contributing to this nation's prosperity and growth.”
Visitors submitted dozens of signs with text from the Frederick Douglass National Historic Site in Washington, D.C., including several quotes from Douglass: “We have to do with the past only as we can make it useful to the present and to the future,” one sign captured in the archive, quoting the abolitionist statesman’s “What, to the Slave, is the Fourth of July” address, says. “To all inspiring motives, to noble deeds which can be gained from the past, we are welcome. But now is the time, the important time. Your fathers have lived, died, and you must do your work.”
“I’m so excited to share this collaborative photo collection with the public. As librarians, our goal is to preserve the knowledge and stories told in these signs. We want to put the signs back in the people’s hands,” Jenny McBurney, Government Publications Librarian at the University of Minnesota and one of the co-founders of the Save Our Signs project, said in a press release. “We are so grateful for all the people who have contributed their time and energy to this project. The outpouring of support has been so heartening. We hope the launch of this archive is a way for people to see all their work come together.”
People can still submit signs, and the project organizers are encouraging more submissions; another batch with more recent submissions will be released in the future, the Save Our Signs organizers said.
2025-10-14 21:01:18
A man who works for the people overseeing America’s nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a Department of Energy (DOE) network. As part of an appeals process in an attempt to get back his security clearance, the man told investigators he felt his bosses spied on him too much and that the interrogation over the porn snafu was akin to the “Spanish Inquisition."
On March 23, 2023, a DOE employee attempted to back up his personal porn collection. His goal was to use the 187,000 images collected over the past 30 years as training data for an AI-image generator. He said he had depression, something he’d struggled with since he was a kid. “During the depressive episode he felt ‘extremely isolated and lonely,’ and started ‘playing’ with tools that made generative images as a coping strategy, including ‘robot pornography,’” according to a DOE report on the incident.
2025-10-13 21:00:47
A prominent beer judging competition introduced an AI-based judging tool without warning in the middle of a competition, surprising and angering judges who thought their evaluation notes for each beer were being used to improve the AI, according to multiple interviews with judges involved. The company behind the competition, called Best Beer, also planned to launch a consumer-facing app that would use AI to match drinkers with beers, the company told 404 Media.
Best Beer also threatened legal action against one judge who wrote an open letter criticizing the use of AI in beer tasting and judging, according to multiple judges and text messages reviewed by 404 Media.
The months-long episode shows what can happen when organizations try to push AI onto a hobby, pursuit, art form, or even industry which has many members who are staunchly pro-human and anti-AI. Over the last several years we’ve seen it with illustrators, voice actors, music, and many more. AI came for beer too.