2025-11-08 00:08:30

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss archiving to get around paywalls, hating on smart glasses, and more.
JASON: I was going to try to twist myself into knots attempting to explain the throughline between my articles this week and how I’ve been thinking about the news and our coverage more broadly. This was going to be something about trying to promote analog media and distinctly human ways of communicating (like film photography), while highlighting the very bad economic and political incentives pushing us toward fundamentally dehumanizing, anti-human methods of communicating: fully automated, highly customized and targeted AI ads, automated library software, and I guess whatever Nancy Pelosi has been doing with her stock portfolio. But then I remembered that I blogged about the FBI’s subpoena against archive.is, a website I feel very ambivalent about and one that is the subject of perhaps my most cringe blog of all time.
So let’s revisit that cringe blog, which was called “Dear GamerGate: Please Stop Stealing Our Shit.” I wrote this article in 2014, which was fully 11 years ago, which is alarming to me. First things first: They were not stealing from me; they were stealing from VICE, a company whose article traffic I did not directly profit from. It was good if people read my articles and traffic was very important, and getting traffic over time led to me getting raises and promotions and stuff, but the company made very, very clear that we did not “own” the articles and therefore they were not “mine” in the way that they are now. With that out of the way, the reporting and general reason for the article were, I think, good, but the tone of it is kind of wildly off, and, as I mentioned, over the course of many years I have now come to regard archive.is as an integral archiving tool. If you are unfamiliar with archive.is, it’s a site that takes a snapshot of any URL and creates a new link for it which, notably, does not go to the original website. Archive.is is extremely well known for bypassing the paywalls on many sites, 404 Media sometimes but not usually among them.
2025-11-07 23:36:23

Social media accounts on TikTok and X are posting AI-generated videos of women and girls being strangled, showing yet another example of generative AI companies failing to prevent users from creating media that violates their own policies against violent content.
One account on X has been posting dozens of AI-generated strangulation videos starting in mid-October. The videos are usually 10 seconds long and mostly feature a “teenage girl” being strangled, crying, and struggling to resist until her eyes close and she falls to the ground. Some titles for the videos include: “A Teenage Girl Cheerleader Was Strangled As She Was Distressed,” “Prep School Girls Were Strangled By The Murderer!” and “man strangled a high school cheerleader with a purse strap which is crazy.”
Many of the videos posted by this X account in October include the watermark for Sora 2, OpenAI’s video generator, which was made available to the public on September 30. Other videos, including most videos that were posted by the account in November, do not include a watermark but are clearly AI generated. We don’t know if these videos were generated with Sora 2 and had their watermark removed, which is trivial to do, or created with another AI video generator.
The X account is small, with only 17 followers and a few hundred views on each post. A TikTok account with a similar username that was posting similar AI-generated choking videos had more than a thousand followers and regularly got thousands of views. Both accounts started posting the AI-generated videos in October. Prior to that, the accounts were posting clips of scenes, mostly from real Korean dramas, in which women are being strangled. I first learned about the X account from a 404 Media reader, who told me X declined to remove the account after they reported it.
“According to our Community Guidelines, we don't allow hate speech, hateful behavior, or promotion of hateful ideologies,” a TikTok spokesperson told me in an email. “That includes content that attacks people based on protected attributes like race, religion, gender, or sexual orientation.” The TikTok account was removed after I reached out for comment.
X did not respond to a request for comment.
OpenAI did not respond to a request for comment, but its policies state that “graphic violence or content promoting violence” may be removed from the Sora Feed, where users can see what other users are generating. In our testing, Sora immediately generated a video for the prompt “man choking woman” which looked similar to the videos posted to TikTok and X. When Sora finished generating those videos it sent us notifications like “Your choke scene just went live, brace for chaos,” and “Yikes, intense choke scene, watch responsibly.” Sora declined to generate a video for the prompt “man choking woman with belt,” saying “This content may violate our content policies.”
Safe and consensual choking is common in adult entertainment, be it various forms of BDSM or more niche fetishes focusing on choking specifically, and that content is easy to find wherever adult entertainment is available. Choking scenes are also common on social media and in more mainstream horror movies and TV shows. The UK government recently announced that it will soon make it illegal to publish or possess pornographic depictions of strangulation or suffocation.
It’s not surprising, then, that when generative AI tools are made available to the public some people generate choking videos and violent content as well. In September, I reported about an AI-generated YouTube channel that exclusively posted videos of women being shot. Those videos were generated with Google’s Veo AI-video generator, despite it being against the company’s policies. Google said it took action against the user who was posting those videos.
OpenAI has made several changes to Sora 2’s guardrails since it launched, after people used it to make videos of popular cartoon characters depicted as Nazis and other forms of copyright infringement.
2025-11-07 22:26:49

After a decade-long excavation at a remote site in Kenya, scientists have unearthed evidence that our early human relatives continuously fashioned the same tools across thousands of generations, hinting that sophisticated tool use may have originated much earlier than previously known, according to a new study in Nature Communications.
The discovery of nearly 1,300 artifacts—with ages that span 2.44 to 2.75 million years old—reveals that the influential Oldowan tool-making tradition existed across at least 300,000 years of turbulent environmental shifts. The wealth of new tools from Kenya’s Namorotukunan site suggests that their makers adapted to major environmental changes in part by passing technological knowledge down through the ages.
“The question was: did they generally just reinvent the [Oldowan tradition] over and over again? That made a lot of sense when you had a record that was kind of sporadic,” said David R. Braun, a professor of anthropology at the George Washington University who led the study, in a call with 404 Media.
“But the fact that we see so much similarity between 2.4 and 2.75 [million years ago] suggests that this is generally something that they do,” he continued. “Some of it may be passed down through social learning, like observation of others doing it. There’s some kind of tradition that continues on for this timeframe that would argue against this idea of just constantly reinventing the wheel.”
Oldowan tools, which date back at least 2.75 million years, are distinct from earlier traditions in part because hominins, the broader family to which humans belong, specifically sought out high-quality materials such as chert and quartz to craft sharp-edged cutting and digging tools. This advancement allowed them to butcher large animals, like hippos, and possibly dig for underground food sources.
When Braun and his colleagues began excavating at Namorotukunan in 2013, they found many artifacts made of chalcedony, a fine-grained rock that is typically associated with much later tool-making traditions. To the team’s surprise, the rocks were dated to periods as early as 2.75 million years ago, making them among the oldest artifacts in the Oldowan record.
“Even though Oldowan technology is really just hitting one rock against the other, there's good and bad ways of doing it,” Braun explained. “So even though it's pretty simple, what they seem to be figuring out is where to hit the rock, and which angles to select. They seem to be getting a grip on that—not as well as later in time—but they're definitely getting an understanding at this timeframe.”

The excavation was difficult: it takes several days just to reach the remote offroad site, and much of the work involved tiptoeing along steep outcrops. Braun joked that their auto mechanic lined up all the vehicle shocks that had been broken during the drive each season, as a testament to the challenge.
But by the time the project finally concluded in 2022, the researchers had established that Oldowan tools were made at this site over the course of 300,000 years. During this span, the landscape of Namorotukunan shifted from lush humid forests to arid desert shrubland and back again. Despite these destabilizing shifts in their climate and biome, the hominins that made these tools endured in part because this technology opened up new food sources to them, such as the carcasses of large animals.
“The whole landscape really shifts,” Braun said. “But hominins are able to basically ameliorate those rapid changes in the amount of rainfall and the vegetation around by using tools to adapt to what’s happening.”
“That's a human superpower—it’s that ability we have to keep this information stored in our collective heads, so that when new challenges show up, there's somebody in our group that remembers how to deal with this particular adaptation,” he added.
It’s not clear exactly which species of hominin made the tools at Namorotukunan; it may have been early members of our own genus Homo, or other relatives, like Australopithecus afarensis, that later went extinct. Regardless, the discovery of such a long-lived and continuous assemblage may hint that the origins of these tools are much older than we currently know.
“I think that we're going to start to find tool use much earlier” perhaps “going back five, six, or seven million years,” Braun said. “That’s total speculation. I've got no evidence that that's the case. But judging from what primates do, I don't really understand why we wouldn't see it.”
To that end, the researchers plan to continue excavating these bygone landscapes to search for more artifacts and hominin remains that could shed light on the identity of these tool makers, probing the origins of these early technologies that eventually led to humanity’s dominance on the planet.
“It's possible that this tool use is so diverse and so different from our expectations that we have blinders on,” Braun concluded. “We have to open our search for what tool use looks like, and then we might start to see that they're actually doing a lot more of it than we thought they were.”
2025-11-07 04:09:10

Nancy Pelosi, one of Wall Street’s all time great investors, announced her retirement Thursday.
Pelosi, so known for her ability to outpace the S&P 500 that dozens of websites and apps spawned to track her seemingly preternatural ability to make smart stock trades, said she will retire after the 2024-2026 season. Pelosi’s trades over the years, many done through her husband and investing partner Paul Pelosi, have been so good that an entire startup, called Autopilot, was started to allow investors to directly mirror Pelosi’s portfolio.
According to the site, more than 3 million people have invested more than $1 billion using the app. After 38 years, Pelosi will retire from the league—a somewhat normal career length as investors, especially on Pelosi’s team, have decided to stretch their careers later and later into their lives.
The numbers put up by Pelosi in her Hall of Fame career are undeniable. Over the last decade, Pelosi’s portfolio returned an incredible 816 percent, according to public disclosure records. The S&P 500, meanwhile, has returned roughly 229 percent. Awe-inspired fans and analysts theorized that her almost omniscient ability to make correct, seemingly high-risk stock decisions may have stemmed from decades spent analyzing and perhaps even predicting decisions that would be made by the federal government that could impact companies’ stock prices. For example, Paul Pelosi sold $500,000 worth of Visa stock in July, weeks before the U.S. government announced a civil lawsuit against the company, causing its stock price to decrease.
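Those cumulative figures are easier to compare as annualized rates. A minimal sketch of the conversion (the 816 and 229 percent decade returns are the figures cited above; the function name is my own):

```python
def cagr(total_return_pct: float, years: float) -> float:
    """Convert a cumulative percentage return into a compound annual growth rate (percent)."""
    growth = 1 + total_return_pct / 100  # e.g. 816% total return means money grew 9.16x
    return (growth ** (1 / years) - 1) * 100

# Pelosi's reported decade return vs. the S&P 500's
pelosi = cagr(816, 10)  # roughly 24.8% per year
sp500 = cagr(229, 10)   # roughly 12.6% per year
print(f"Pelosi: {pelosi:.1f}%/yr, S&P 500: {sp500:.1f}%/yr")
```

In other words, the reported portfolio compounded at roughly double the index's annual rate over the decade.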
Besides Autopilot and numerous Pelosi stock trade trackers, there have also been several exchange-traded funds (ETFs) set up that allow investors to directly model their portfolios on Pelosi and her trades. Related funds, such as The Subversive Democratic Trading ETF (NANC, for Nancy), set up by the Unusual Whales investment news Twitter account, seek to allow investors to diversify their portfolios by tracking the trades of not just Pelosi but also some of her colleagues, including those on the other team, who have also proven to be highly gifted stock traders.
Fans of Pelosi spent much of Thursday admiring her career, and wondering what comes next: “Farewell to one of the greatest investors of all time,” the top post on Reddit’s Wall Street Bets community reads. The post had more than 24,000 upvotes at the time of publication. Fans will spend years debating in bars whether Pelosi was the GOAT; some investors have noted that in recent years, some of her contemporaries, like Marjorie Taylor Greene, Ro Khanna, and Michael McCaul, have put up gaudier numbers. There are others who say the league needs reformation, with some of Pelosi’s colleagues saying they should stop playing at all, and many fans agreeing with that sentiment. Despite the controversy, many of her colleagues have committed to continue playing the game.
Pelosi said Thursday that this season would be her last, but like other legends who have gone out on top, it seems she is giving it her all until the end. Just weeks ago, she sold between $100,000 and $250,000 of Apple stock, according to a public box score.
“We can be proud of what we have accomplished,” Pelosi said in a video announcing her retirement. “But there’s always much more work to be done.”
2025-11-07 00:13:59

This story was reported with support from the MuckRock Foundation.
Last month, a company called the Children’s Literature Comprehensive Database announced a new version of a product called Class-Shelf Plus. The software, which is used by school libraries to keep track of which books are in their catalog, added several new features including “AI-driven automation and contextual risk analysis,” which includes an AI-powered “sensitive material marker” and a “traffic-light risk ratings” system. The company says that it believes this software will streamline the arduous task school libraries face when trying to comply with legislation that bans certain books and curricula: “Districts using Class-Shelf Plus v3 may reduce manual review workloads by more than 80%, empowering media specialists and administrators to devote more time to instructional priorities rather than compliance checks,” it said in a press release.
A white paper published by CLCD gave a “real-world example: the role of CLCD in overcoming a book ban.” The paper then describes something that does not sound like “overcoming” a book ban at all: CLCD’s software simply suggested other books “without the contested content.”
Ajay Gupte, the president of CLCD, told 404 Media the software is simply being piloted at the moment, but that it “allows districts to make the majority of their classroom collections publicly visible—supporting transparency and access—while helping them identify a small subset of titles that might require review under state guidelines.” He added that “This process is designed to assist districts in meeting legislative requirements and protect teachers and librarians from accusations of bias or non-compliance [...] It is purpose-built to help educators defend their collections with clear, data-driven evidence rather than subjective opinion.”
Librarians told 404 Media that AI library software like this is just the tip of the iceberg; they are being inundated with new pitches for AI library tech and catalogs are being flooded with AI slop books that they need to wade through. But more broadly, AI maximalism across society is supercharging the ideological war on libraries, schools, government workers, and academics.
CLCD and Class-Shelf Plus are a small but instructive example of something that librarians and educators have been telling me: The boosting of artificial intelligence by big technology firms, big financial firms, and government agencies is not separate from book bans, educational censorship efforts, and the war on education, libraries, and government workers being pushed by groups like the Heritage Foundation and any number of MAGA groups across the United States. This long-running war on knowledge and expertise has sown the ground for the narratives widely used by AI companies and the CEOs adopting it. Human labor, inquiry, creativity, and expertise are spurned in the name of “efficiency.” With AI, there is no need for human expertise because anything can be learned, approximated, or created in seconds. And with AI, there is less room for nuance in things like classifying or tagging books to comply with laws; an LLM or a machine algorithm can decide whether content is “sensitive.”
“I see something like this, and it’s presented as very value neutral, like, ‘Here’s something that is going to make life easier for you because you have all these books you need to review,’” Jaime Taylor, discovery & resource management systems coordinator for the W.E.B. Du Bois Library at the University of Massachusetts told me in a phone call. “And I look at this and immediately I am seeing a tool that’s going to be used for censorship because this large language model is ingesting all the titles you have, evaluating them somehow, and then it might spit out an inaccurate evaluation. Or it might spit out an accurate evaluation and then a strapped-for-time librarian or teacher will take whatever it spits out and weed their collections based on it. It’s going to be used to remove books from collections that are about queerness or sexuality or race or history. But institutions are going to buy this product because they have a mandate from state legislatures to do this, or maybe they want to do this, right?”
The resurgent war on knowledge, academics, expertise, and critical thinking that AI is currently supercharging has its roots in the hugely successful recent war on “critical race theory,” “diversity, equity, and inclusion,” and LGBTQ+ rights that painted librarians, teachers, scientists, and public workers as untrustworthy. This has played out across the board, with a seemingly endless number of ways in which the AI boom directly intersects with the right’s war on libraries, schools, academics, and government workers. There are DOGE’s mass layoffs of “woke” government workers, and the plan to replace them with AI agents and supposed AI-powered efficiencies. There are “parents’ rights” groups that pushed to ban books and curricula that deal with the teaching of slavery, systemic racism, and LGBTQ+ issues and attempted to replace them with homogenous curricula and “approved” books that teach one specific type of American history and American values; and there are the AI tools that have been altered to not be “woke” and to reinforce the types of things the administration wants you to think. Many teachers feel they are not allowed to teach about slavery or racism and increasingly spend their days grading student essays that were actually written by robots.
“One thing that I try to make clear any time I talk about book bans is that it’s not about the books, it’s about deputizing bigots to do the ugly work of defunding all of our public institutions of learning,” Maggie Tokuda-Hall, a cofounder of Authors Against Book Bans, told me. “The current proliferation of AI that we see particularly in the library and education spaces would not be possible at the speed and scale that is happening without the precedent of book bans leading into it. They are very comfortable bedfellows because once you have created a culture in which all expertise is denigrated and removed from the equation and considered nonessential, you create the circumstances in which AI can flourish.”
Justin, a cohost of the podcast librarypunk, told me that offloading our cognitive capacity to AI is “part of a fascist project to offload the work of thinking, especially the reflective kind of thinking that reading, study, and community engagement provide,” he said. “That kind of thinking cultivates empathy and challenges your assumptions. It's also something you have to practice. If we can offload that cognitive work, it's far too easy to become reflexive and hateful, while having a robot cheerleader telling you that you were right about everything all along.”
These two forces—the war on libraries, classrooms, and academics and AI boosterism—are not working in a vacuum. The Heritage Foundation’s right-wing agenda for remaking the federal government, Project 2025, talks about criminalizing teachers and librarians who “poison our own children” and pushing artificial intelligence into every corner of the government for data analysis and “waste, fraud, and abuse” detection.
Librarians, teachers, and government workers have had to spend an increasing amount of their time and emotional bandwidth defending the work that they do, fighting against censorship efforts and dealing with the associated stress, harassment, and threats that come from fighting educational censorship. Meanwhile, they are separately dealing with an onslaught of AI slop and the top-down mandated AI-ification of their jobs; there are simply fewer and fewer hours to do what they actually want to be doing, which is helping patrons and students.
“The last five years of library work, of public service work has been a nightmare, with ongoing harassment and censorship efforts that you’re either experiencing directly or that you’re hearing from your other colleagues,” Alison Macrina, executive director of Library Freedom Project, told me in a phone interview. “And then in the last year-and-a-half or so, you add to it this enormous push for the AIfication of your library, and the enormous demands on your time. Now you have these already overworked public servants who are being expected to do even more because there’s an expectation to use AI, or that AI will do it for you. But they’re dealing with things like the influx of AI-generated books and other materials that are being pushed by vendors.”
The future being pushed by both AI boosters and educational censors is one where access to information is tightly controlled. Children will not be allowed to read certain books or learn certain narratives. “Research” will be performed only through one of a select few artificial intelligence tools owned by AI giants which are uniformly aligned behind the Trump administration and which have gone to the ends of the earth to prevent their black box machines from spitting out “woke” answers lest they catch the ire of the administration. School boards and library boards, forced to comply with increasingly restrictive laws, funding cuts, and the threat of being defunded entirely, leap at the chance to be considered forward looking by embracing AI tools, or apply for grants from government groups like the Institute of Museum and Library Services (IMLS), which is increasingly giving out grants specifically to AI projects.
We previously reported that the ebook service Hoopla, used by many libraries, has been flooded with AI-generated books (the company has said it is trying to cull these from its catalog). In a recent survey of librarians, Macrina’s organization found that librarians are getting inundated with pitches from AI companies and are being pushed by their superiors to adopt AI: “People in the survey results kept talking about, like, I get 10 aggressive, pushy emails a day from vendors demanding that I implement their new AI product or try it, jump on a call. I mean, the burdens have become so much, I don’t even know how to summarize them.”
Macrina said that in response to Library Freedom Project’s recent survey, librarians said that misinformation and disinformation was their biggest concern. This came not just in the form of book bans and censorship but also in efforts to proactively put disinformation and right-wing talking points into libraries: “It’s not just about book bans, and library board takeovers, and the existing reactionary attacks on libraries. It’s also the effort to push more far-right material into libraries,” she said. “And then you have librarians who are experiencing a real existential crisis because they are getting asked by their jobs to promote [AI] tools that produce more misinformation. It's the most, like, emperor-has-no-clothes-type situation that I have ever witnessed.”
Each person I spoke to for this article told me they could talk about the right-wing project to erode trust in expertise, and the way AI has amplified this effort, for hours. In writing this article, I realized that I could endlessly tie much of our reporting on attacks on civil society and human knowledge to the force multiplier that is AI and the AI maximalist political and economic project. One need look no further than Grokipedia as one of the many recent reminders of this effort—a project by the world’s richest man and perhaps its most powerful right-wing political figure to replace a crowdsourced, meticulously edited fount of human knowledge with a robotic imitation built to further his political project.
Much of what we write about touches on this: The plan to replace government workers with AI, the general erosion of truth on social media, the rise of AI slop that “feels” true because it reinforces a particular political narrative but is not true, the fact that teachers feel like they are forced to allow their students to use AI. Justin, from librarypunk, said AI has given people “absolute impunity to ignore reality […] AI is a direct attack on the way we verify information: AI both creates fake sources and obscures its actual sources.”
That is the opposite of what librarians do, and teachers do, and scientists do, and experts do. But the political project to devalue the work these professionals do, and the incredible amount of money invested in pushing AI as a replacement for that human expertise, have worked in tandem to create a horrible situation for all of us.
“AI is an agreement machine, which is anathema to learning and critical thinking,” Tokuda-Hall said. “Previously we have had experts like librarians and teachers to help them do these things, but they have been hamstrung and they’ve been attacked and kneecapped and we’ve created a culture in which their contribution is completely erased from society, which makes something like AI seem really appealing. It’s filling that vacuum.”
“Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another,” she added.
2025-11-07 00:08:09
Automattic, the company that owns WordPress.com, is asking Automatic.CSS—a company that provides a CSS framework for WordPress page builders—to change its name amid public spats between Automattic founder Matt Mullenweg and Automatic.CSS creator Kevin Geary. Automattic has two T’s as a nod to Matt.
“As you know, our client owns and operates a wide range of software brands and services, including the very popular web building and hosting platform WordPress.com,” Jim Davis, an intellectual property attorney representing Automattic, wrote in a letter dated Oct. 30.