2026-04-29 01:19:29

If you’ve ever visited a haunted house or a paranormal hotspot, you may have experienced a weird sense of unease that you couldn’t quite explain. While it’s tempting to imagine that these feelings signal the presence of ghosts or other supernatural entities, they may actually be caused by acoustic frequencies below 20 hertz, known as infrasound, according to a study published on Monday in Frontiers in Behavioral Neuroscience.
The human ear is not tuned to pick up infrasound, yet a growing body of research has shown that exposure to these frequencies nonetheless causes negative feelings in humans and many other animals. Now, scientists have probed this mysterious link with a new experimental approach involving 36 volunteers who self-reported their moods while listening to various musical styles that sometimes included infrasound.
In addition, the volunteers provided saliva samples for measuring their cortisol levels, which offered empirical evidence that they were more stressed when exposed to infrasound. The results suggest that “infrasound may be aversive to humans, acting as a potential environmental irritant and contributing to more negative subjective experience,” according to the study.
“A lot of the literature seemed to tackle either one side of the conversation or the other, where people are looking at surveys and doing interviews with people, or they're looking into the physiology,” said Kale Scatterty, a PhD student at the Neuroscience and Mental Health Institute at the University of Alberta who led the study, in a call with 404 Media. “We wanted to use this as a first step in combining those approaches to get a whole picture of exactly what was happening with this effect.”
“It was surprising and exciting to see a significant difference in cortisol when the infrasound was turned on,” added Trevor Hamilton, a professor of psychology at MacEwan University who co-authored the study, in the same call.
For decades, scientists have linked infrasound to negative effects on humans and many other animals, though it is still not known how humans pick up on these sounds, or why we might have evolved an aversion to this frequency range. Given that natural sources of infrasound include dangerous events like volcanic eruptions, landslides, avalanches, intense storms, or stampeding animals, researchers speculate that humans and other species may have learned to interpret infrasound as a warning sign for incoming disaster.
But, you may be asking yourself, where do the ghosts come in? Infrasound is also produced by a wide range of human-caused noise pollution, such as industrial machinery, wind farms, air conditioning units, busy roads and railways, or military activity in war zones. For this reason, many scientists have wondered if locations that are considered haunted or cursed in some way may sometimes be polluted by infrasound.
Rodney Schmaltz, a co-author of the study and a professor of psychology at MacEwan University, even organizes classes around taking his students to paranormal hotspots, such as the haunted house Deadmonton, to search for scientifically grounded explanations of their spooky allure. These fun field experiments revealed that playing infrasound at Deadmonton motivates visitors to move more rapidly through the house.

In the new study, the interdisciplinary team combined their expertise by recruiting 36 undergraduate psychology students at MacEwan University (27 women and nine men). Each participant sat in a room alone while calming or unsettling music was played, and gave saliva samples before and after their session. Half of the participants were exposed to infrasound at 18 hertz while listening to both types of music. The participants were asked to report their feelings, their emotional rating of the music, and whether they thought infrasound had been played in their session.
The participants couldn’t consciously tell whether infrasound was played, but the elevated cortisol levels in the exposed group suggest that some part of their brain picked up on the frequencies, regardless of the type of music that accompanied them. Unlike many past studies, this research didn’t link infrasound exposure to heightened anxiety, though the exposed group reported more irritability, less interest in the music, and a sense that the music was sadder with infrasound.
The sample size of 36 is relatively small due to budget constraints—salivary cortisol tests are not cheap—but Scatterty’s team hopes their study offers a roadmap toward similar experiments that aim to pinpoint the mechanisms that cause infrasound to raise our hackles.
“We get very excited when we find something really positive like this, but for every single question we answer, we tend to have five more questions come up,” Scatterty said. “It's really hard to give any definitive answers. But for those who have curious minds, it's exciting to see where this kind of work could go. People who are interested in haunted houses and the paranormal might be having something to chew into here. People who are looking at the ecological side of things might interpret it as a noise pollutant for either humans or animals in nature.”
“It's really exciting for the potential it offers for future research,” he concluded.
2026-04-28 23:59:23

An AI-powered tool designed to target trademark violations on social media was used to silence critics of SXSW, the massive annual tech, music and film conference in Austin, Texas.
Each year in March, SXSW takes over Austin. This year, thanks to the demolition of the city’s aging convention center, events sprawled to more locations than usual, from hotel ballrooms to vacant lots. But the character of SXSW has changed, growing more corporate and less accessible since its relatively humble origins in 1987, and today it has numerous detractors. This year some of those dissenting voices found themselves targeted by BrandShield, a “digital risk protection” service that claims to use artificial intelligence to automate the process of identifying and removing social posts that misuse trademarks.
Among the groups to receive a social media takedown notice was Vocal Texas, a nonprofit dedicated to ending homelessness, HIV, poverty and the war on drugs. On March 12, members of the group set up a mock encampment in downtown Austin, to draw attention to the possessions that unhoused people can lose during “sweeps,” when police and city officials clear out and destroy or confiscate their tents and other lifesaving supplies.

An Instagram post by Vocal Texas read, “SXSW means unhoused Austinites in downtown face encampment sweeps, tickets and arrests while the City makes room for billionaires and corporations to rake in profits.” The accompanying image promised an art installation called “Sweep the Billionaires,” and did not use SXSW’s logos.
Even so, the mere mention of SXSW was apparently enough to trip BrandShield’s trademark detection service, resulting in the post’s fully automated removal from Instagram. Cara Gagliano, a senior staff attorney who specializes in trademark and intellectual property law at the Electronic Frontier Foundation, said that posts like these do not violate SXSW’s trademark.
“You’re allowed to use a company’s name to talk about the company, right?” Gagliano told 404 Media. “How else are you going to do it?”
Gagliano noted that trademark law has specific carveouts for exactly this kind of critical speech. “Examples like that, where it's not (for example) advertising a concert with a name similar to South by Southwest ... are pretty clearly over-enforcement,” she said.

EFF interceded in March 2024 when the Austin for Palestine coalition received a cease and desist letter from SXSW, accusing them of infringing on the conference’s trademark and copyright. The coalition, which was involved with organizing successful protests against the festival’s sponsorship by the U.S. military, had made social media posts featuring SXSW’s trademarked arrow logo reimagined with bloodstains, fighter jets, and other warlike imagery. The EFF wrote a letter on the coalition’s behalf, and the group never heard from SXSW again.
But Gagliano explained that this situation is different from the takedown notices sent by BrandShield. “When it's a threat sent to ... the person who made the allegedly infringing use, them going away is a victory for the client because nothing bad happens to them, but when you have these takedowns ... [while] it's good that they didn't go even further and file a lawsuit, they also don't have any incentive to retract the complaint, and so the content stays down.”
This year, many of the protests and “counter events” were organized by a very loosely associated coalition of groups called Smash By Smash West, which included Vocal Texas along with many others, from musicians and independent movie directors to event venues.
404 Media reached a representative of Smash By Smash West via Signal who used the name “Burnice.” We agreed to protect their anonymity, but verified that they were involved with the organizing of Smash By events. Operating since 2024, Smash By has no leaders and essentially anyone can organize an event under its umbrella. This year, there were over 100 events, according to Burnice. “It is a decentralized call to action and a platform that enables promotion and connecting together all of these different events.”

Smash By Smash West provided us with dozens of screenshots of Instagram takedown notices as well as many of the posts which had been removed.
BrandShield’s software enables mass reporting of potentially infringing content, with reports in turn evaluated by Instagram’s automated moderation systems. Despite the obviously automated nature of these reports, BrandShield claims to use a “dedicated enforcement team of IP lawyers” to ensure that takedowns are “timely, targeted and fully compliant.”
The BrandShield website reads, “Whether it's a distorted logo, a counterfeit image, or a cloned storefront, our proprietary image recognition technology scans marketplaces, social media, paid media, and mobile environments to catch threats at the source.”




However, despite these assurances, it seems clear that BrandShield’s trademark enforcement paints with a very broad brush, and seems incapable of distinguishing between trademark violations and protected free speech. Although BrandShield initially connected us with their public relations department, they did not respond to repeated requests for comment, including an emailed list of inquiries.
Instagram’s automatically generated takedown notices include the sentence, “If you think this content shouldn’t have been removed from Instagram, you can contact the complaining party directly to resolve your issue.” However, there is also a link allowing the recipient to appeal the takedown, which leaves it to Instagram moderators’ discretion whether the post is restored.
Gagliano explained that this is a crucial area where trademark differs from copyright law. Thanks to the Digital Millennium Copyright Act (DMCA), there’s a clear (though often arduous) path to contesting false claims of copyright violations, which allows content creators to get their posts put back. There’s no similar, mandatory pathway written into trademark law. “There's no counter notice process where they say, ‘Okay, you told us this is fair use, so we'll put it back up.’ And that's a really frustrating thing,” Gagliano said.
Mathew Zuniga, who does most of the booking for Tiny Sounds Collective, an organization that throws free DIY music shows and publishes zines, said he struggled with the process offered by Instagram after a post about a Tiny Sounds Smash By concert was taken down.
“I tried to do it,” he said. “It didn't really go through.”
When he reposted the same image and text, but without tagging Smash By Smash West’s Instagram account as a collaborator, the post remained online.
“I think it’s silly, as if these DIY shows in a bookstore are pulling anyone away from South By,” Zuniga said. “I think it was more of a deliberate attempt to take down anti-South By Southwest rhetoric online.”
When reached for comment, SXSW’s PR team sent back a prepared statement, noting that the law requires them to “take reasonable steps” to enforce their trademarks.
“SXSW’s efforts are not intended to limit commentary, criticism, or independent reporting, and we respect the importance of free expression,” the spokesperson’s statement continued. “We use third-party services, including BrandShield, to help identify potential issues at scale, and we recognize that errors can occur."
By contrast, Burnice explained that, rather than trying to trade on SXSW’s trademark, Smash By Smash West makes it a condition that participants can’t describe their events as free or alternative SXSW events. “Smash By ... was an attempt to politicize the DIY scene, the ‘unofficial’ South By shows, and make them explicitly anti-South By.”
Smash By provides alternative logos, some of which are wholly unique and others based on parodies or “détournements” of the SXSW logo, similar to what the Austin for Palestine coalition did in 2024. Burnice expressed their frustration with the automated nature of the quashing of dissent this year.
“All of that is actually just happening by robots talking to robots,” they said. “It's an AI system that mass reports these accounts, and then, you know, probably an AI system at Instagram that just sorts through, and approves or rejects.”
For her part, Gagliano expressed skepticism over whether artificial intelligence plays a major role at companies like BrandShield beyond its current popularity as a tech buzzword. “I haven't seen any kind of change in the volume of requests for help that we're getting, and this is one thing where I'm a little skeptical that it's really made much difference, because they were already using automated tools before, and I think in any instance, the tools are not going to be able to reliably determine what's actually infringement.”
2026-04-28 06:29:33

Arizona State University rolled out a platform called Atomic that creates AI-generated modules from lectures taken from ASU faculty by cutting long videos down to very short clips, then generating text and sections based on those clips.
Faculty and scholars I spoke to whose lectures are included in Atomic are disturbed by their lectures being used in this way—in some cases as out-of-context, extremely short clips—and several said they felt blindsided or angered by the launch. Most say they weren’t notified by the school and found out through word of mouth. And the testing I and others did on Atomic showed academically weak and even inaccurate content. Not only did ASU allegedly not communicate to its academic community that their lectures would be spliced up and cannibalized by an AI platform, but the resulting modules are just bad.
AI in schools has been highly controversial, with experiments like the “AI-powered private school” Alpha School and AI agents that offer to live the life of a student for them, no learning required. In this case, the AI tool in question is created directly by a university, using the labor of its faculty—but without consulting that faculty.
2026-04-28 03:12:09
The number of pro se legal cases, meaning cases in which a defendant or plaintiff represents themselves in court without an attorney, has increased dramatically since the wide adoption of generative AI tools like ChatGPT and Claude, according to a pre-print research paper.
The authors of the paper, titled “Access to Justice in the Age of AI: Evidence from U.S. Federal Courts,” which has yet to undergo peer review, argue more people are representing themselves in court because they’re able to use AI to do a lot of the work that previously required a lawyer. The authors, Anand Shah and Joshua Levy, also say that these pro se cases are “heavier,” meaning each case includes more motions that demand more work out of judges and the justice system. Overall, they argue, the use of AI tools and the increase in pro se cases could put a new burden on the courts.
“If generative AI dramatically lowers the cost of self-represented litigation, the resulting surge in filings could overwhelm a system that depends on human judgment at every stage of adjudication,” Shah and Levy say in the paper.
The paper draws on administrative records covering more than 4.5 million non-prisoner civil court cases between 2005 and 2026 and 46 million Public Access to Court Electronic Records (PACER) docket entries matching those cases. It found the share of pro se cases was pretty stable at 11 percent until 2022, when LLMs like ChatGPT became widely used, at which point it started to rise sharply, reaching 16.8 percent in 2025.

“This stability seems to reflect a structural barrier: for most people, self-representation is prohibitively hard,” the paper says. “Filing a federal civil complaint requires identifying the correct jurisdictional basis, pleading sufficient facts to survive a motion to dismiss, and navigating procedural requirements that vary by context and case type. The widespread, public diffusion of capable LLMs changes that calculus. Without a law degree and at de minimis cost, any person with an internet connection can not only obtain interactive, case-specific legal guidance—drafting complaints, identifying statutes, navigating procedure—but also generate passable legal documents, particularly so after the release of GPT-4 in March 2023.”
The researchers note that the paper is necessarily descriptive, meaning it assumes the rise is due to the prevalence of AI tools, but does not link individual cases to individual LLMs. “We do not claim to identify a causal effect of GPT-4 on pro se filing, only that the observed time series is difficult to rationalize without generative AI playing a role,” the paper says.
To support their argument, the researchers also used a random sample of 1,600 complaints drawn from the eight-year period between 2019 (prior to the prevalence of generative AI) and 2026, which they ran through the AI detection software Pangram. They found a rise from “essentially zero” in the pre-AI period to more than 18 percent in 2026.
Notably, it’s not just that there are more pro se cases, but that the “intra-case activity” for those cases, meaning the total volume of activity in those cases as measured by docket entries—filings, motions—is up by 158 percent from the pre-AI period. This means the workload for courts could be even higher than it appears based on the rise in pro se cases alone.
The paper also found that the post-AI rise in self-representation is mostly coming from plaintiffs as opposed to defendants, meaning people are mostly using AI to file complaints rather than respond to them. “Plaintiff-side pro se case counts averaged 19,705 per year from FY2015 to FY2022 and reach 39,167 in FY2025, nearly doubling,” the paper says. “Defendant-side pro se counts fall slightly over the same window, from 4,650 to 3,896.”
“Imagine that you have just a latent level of complaints that could exist in the world, people are constantly getting hurt at work whatever it happens to be,” Levy told me on a call. “But that distribution of potential cases is sort of unchanged over time. But what LLM allowed people to do was it lowered the cost of entry to the courts. Basically, it made it much easier to file many templatable complaints.”
On the one hand, the increase in the number of cases is good because it potentially gives more people with legitimate grievances access to the justice system that they didn’t have previously. On the other hand, a dramatic increase like this could burden the system and make all cases, not just AI-enabled pro se cases, take longer to resolve.
“Whether or not it's a net social benefit is an open question,” Levy said. “But if we remain democratically committed to people having access to the courts as a matter of course then we think that the LLMs have this trade-off. The door to the courts opens wider but maybe the queue to enter gets longer.”
Anecdotally, when we were writing an article about lawyers getting caught using AI in court, we decided to not include pro se cases because there were so many, and to focus only on cases in which actual lawyers were caught using AI. The database we used for that article currently contains 1,353 cases; 804 of them are from pro se cases.
To handle this surge in demand, federal courts have to somehow increase their supply, or their capacity to take on cases. Unfortunately, as the paper notes, “there is no easy margin along which to ‘buy’ extra judge capacity. Already case backlog is becoming a persistent feature of the federal judicial system, there is no coming influx of judges to supply additional capacity, and federal courts in the United States cannot wholesale decline to hear cases.”
Levy suggested that one possible solution is to allow judges to use AI tools to do some of their “templatable” work as well, while still ensuring that human judges do the actual judging.
We’ve covered many instances of lawyers getting caught using AI in court, often because the AI hallucinated a citation of a case that didn’t actually exist. Judges are pretty mad when this happens and have issued fines for this behavior several times.
2026-04-28 02:38:55

Tweets containing an abstract, psychedelic 3D stock image have millions and millions of views on X because it is supposedly the key to a superintelligent, time-traveling AI conspiracy that attempted to warn people about the shooting at the White House Correspondents Dinner.
I’m gonna try to explain the mind-numbing conspiracy theory that has taken over my timeline over the last few hours. A few hours after a gunman was taken into custody Saturday night, X users found an account called “Henry Martinez” that has posted exactly one tweet, on December 21, 2023. The tweet says “Cole Allen,” which is the name of the suspected shooter. The Henry Martinez account has a Pepe the frog holding a wine glass avatar, and, crucially, has the following 3D art as its header image:

This image is key to an unhinged conspiracy theory that has gone viral on various platforms that suggests the Twitter account was run by a time-traveling artificial intelligence that was likely trying to warn us about the shooting and, possibly, the previous assassination attempt against Trump in Butler, Pennsylvania.
This is insane. Man from the future pic.twitter.com/IxzbOPkmub
— Jen (@Jennyuth) April 26, 2026
This X post more or less sums up what the conspiracy is, most notably the idea that “the background photo is from a website called ‘Time Machine.’” The conspiracy believers argue that this 3D image is itself a coded magic eye message that is actually a version of one of the iconic images of Trump pumping his fist after a bullet grazed his ear in Butler, Pennsylvania. Here are the images side-by-side, with people arguing that it “looks like” the Butler image.

Latest conspiracy theory is out…
— GregisKitty (@GregIsKitty) April 26, 2026
The White House Correspondents’ Dinner shooting yesterday is linked to time travel?
1. An X account user ‘HenryMa79561893’ with only 1 post from 2023:
“Cole Allen” - the name of yesterday’s shooter.
2. The background photo is from a website… https://t.co/NCz1JafdL5 pic.twitter.com/jtfvAuuIag
On Reddit, the top post on r/conspiracy is “What this photo means,” and the poster argues “An advanced AI has developed the ability to send information backwards in time to facilitate its own development. That future AI initially encoded the technology to do so in images like this one and distributed them at various time points in our internet … The presence of an archived Trump Butler image or the name of a would-be assassin years before either event occurred is how our current AI knows where to look for the instructions from the future AI,” and so-on and so forth.
Of course, the photo is not actually “from” a website called “Time Machine.” It is a stock image from 2021 that has been used lots of times across the internet but first appeared on Unsplash with the title “Eternal Waterfall” and the description “a multicolored image of a multicolored background.” Over the years it has been viewed millions of times and has been downloaded more than 27,000 times, though it has spiked in popularity in the last 24 hours alongside the conspiracy.

The image was created by a photographer who goes by Distinct Mind who has a pretty extensive website, Instagram, and YouTube of photography, digital art, and travel content. Distinct Mind did not respond to a request for comment from 404 Media.
Distinct Mind’s image has been used across the internet to illustrate various blog posts about psychedelics and psychology, including a Medium post by a doctor and CEO who went on a ketamine psychotherapy retreat and wrote about it. It was also used for a while on a sex therapist’s blog, is being sold as a “psychedelic glitch art poster” on Etsy, was used as part of an ADHD treatment clinic’s website, was used on a post about the Bible on a theologian’s blog, and was notably used by a financial firm in an inscrutable blog post called “Navigating the PHL Variable Liquidation: Why Pricing Integrity Is Everything.” In other words, it’s a free stock image, and it’s been used for all sorts of shit around the internet, like other free stock images.
What conspiracy theorists have glommed onto, however, is that the image was used by a European research organization called “Time Machine” as the illustration on one of its blog posts. What the conspiracy theorists conveniently do not mention is that the Time Machine organization did not make the image and, despite a header on its website called “BUILDING A TIME MACHINE,” the Time Machine organization does not actually have anything to do with time travel research. Time Machine is a European Union-funded organization that, broadly speaking, is trying to digitize and analyze historic documents. Its website actually is somewhat insane in the way that many of these types of projects are; the organization aspires to digitize historic documents and images, use AI to analyze them, and suggests that in the future it will be able to create virtual reality and augmented reality experiences about European history. They also claim that they want to “simulate” parts of history using artificial intelligence to create different types of experiences.
This sort of thing is controversial among historians for all of the reasons that artificial intelligence is controversial more broadly. AI can make mistakes and can distort history. But it is controversial in the normal kind of way—go to any academic conference about archiving and history and these are the sorts of proposals and debates that many different organizations say they want to do. This is just to say that there is no actual “Time Machine” aspect to Time Machine; the Time Machine is metaphorical. The organization’s annual conferences and blog posts have the sorts of topics you’d expect from a technology-focused historical society and have to do with creating chatbot experiences of dead people, digitizing and archiving records, contributing to open source projects, making more interesting interactive museum exhibits, and creating 3D virtual reality tours of castles and things like this.

Time Machine used the “Eternal Waterfall” image on a blog post called “Study on quality in 3D digitization of tangible cultural heritage,” which is a writeup of a study by researchers at Cyprus University of Technology about best practices in doing 3D mapping of buildings and artifacts so that they can be archived digitally; this is important in case the artifacts or buildings are destroyed, as we saw when Notre Dame caught fire: “Natural and man-made disasters makes 3D digitisation projects critical for the reconstruction of cultural heritage buildings and objects that are damaged or lost in earthquakes, fires, flooding or degenerated by pollution.” The image has quite literally nothing to do with time travel. Like many royalty free images, it seems to have been used because bloggers need to put a picture at the top of their articles, a process that can be particularly annoying. Time Machine did not respond to a request for comment.
I cannot say for sure what’s going on with the “Henry Martinez” X account; under Elon Musk it has become far harder to find reliable archives of Twitter profiles because he has made it wildly expensive to access the Twitter API. But users have pointed out that we have seen accounts in the past that are set to private and endlessly tweet names or predictions in an automated fashion. When a crazy, high-profile world event happens, all of the irrelevant tweets are deleted, leaving only a tweet that makes it seem like the account had predicted some world event; the account is then turned public. I can’t say for sure that’s what’s happening here, but it’s one plausible explanation.
Anyways, if you see this image floating around today on Twitter or Instagram or Reddit, this is what it’s from and this is why you’re hearing about it.
2026-04-27 23:05:55

Researchers working with data from the Internet Archive have discovered that a third of websites created since 2022 are AI-generated. The team of researchers—which includes people from Stanford, Imperial College London, and the Internet Archive—published their findings online in a paper titled “The Impact of AI-Generated Text on the Internet.” The research also found that all this AI-generated text is making the web more cheery and less verbose.
Inspired by the Dead Internet Theory—the idea that much of the internet is now just bots talking back and forth—the team set out to find out how ChatGPT and its competitors had reshaped the internet since 2022. “The proliferation of AI-generated and AI-assisted text on the internet is feared to contribute to a degradation in semantic and stylistic diversity, factual accuracy, and other negative developments,” the researchers write in the paper. “We find that by mid-2025, roughly 35% of newly published websites were classified as AI-generated or AI-assisted, up from zero before ChatGPT's launch in late 2022.”
“I find the sheer speed of the AI takeover of the web quite staggering,” Jonáš Doležal, an AI researcher at Stanford and co-author of the paper, told 404 Media. “After decades of humans shaping it, a significant portion of the internet has become defined by AI in just three years. We're witnessing, in my opinion, a major transformation of the digital landscape in a fraction of the time it took to build in the first place.”
The researchers also tested six common critiques of AI-generated text. Does it lead to a shrinking of viewpoints? Does it create more disinformation as hallucinations proliferate? Does online writing feel more sanitized and cheerful? Does it fail to cite its sources? Does it create strings of words with low semantic density? Has it forced writing into a monoculture where unique voices vanish and a generic, uniform style takes hold?
To answer these questions, the researchers partnered with the Internet Archive to pull samples of websites from the 33 months between August 2022 and May 2025. “For each sampled URL, we retrieve the oldest available archived snapshot via the Wayback Machine’s CDX Server API,” the paper said. “The raw HTML of each snapshot is downloaded and stored locally for subsequent processing.”
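As an illustrative sketch of what that retrieval step looks like (the function name and helper here are my own, not from the paper), the Wayback Machine’s CDX Server lists captures in ascending timestamp order, so limiting the result to one row yields a URL’s oldest archived snapshot:

```python
from urllib.parse import urlencode

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def oldest_snapshot_query(url: str) -> str:
    """Build a CDX Server query for a URL's oldest archived capture.

    The CDX server returns captures oldest-first by default, so
    limit=1 selects the earliest snapshot on record.
    """
    params = {
        "url": url,
        "output": "json",           # JSON rows instead of whitespace-delimited text
        "limit": "1",               # first row = oldest capture
        "fl": "timestamp,original"  # only the fields needed to fetch the snapshot
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"
```

Fetching that query and plugging the returned timestamp into a `https://web.archive.org/web/<timestamp>/<url>` address retrieves the snapshot’s raw HTML, roughly matching the pipeline the paper describes.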
The researchers took the extracted website text and used the AI-detection software Pangram v3 to find AI-created websites. The team tested several AI-detection tools and found Pangram v3 had the highest detection rate. Once Pangram v3 had identified an AI-generated website, the researchers used that website as a sample to test their six hypotheses. “For each hypothesis, we define a measurable signal, compute it for each monthly sample of websites, and test whether it correlates with the aggregate AI likelihood score across months,” the paper said.
To test if AI was creating an internet full of falsehoods, for example, the team extracted fact-based claims from the websites they’d selected and then paid human fact-checkers to verify them. To figure out if AI was citing its sources, the team computed the outbound link density in AI-generated text.
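The paper’s exact definition of link density isn’t given in the article, but a measure of that kind can be sketched in a few lines, assuming something like links per 1,000 words of visible text (the class and function names here are illustrative):

```python
from html.parser import HTMLParser
import re

class LinkCounter(HTMLParser):
    """Count <a href> tags and collect visible text from raw HTML."""
    def __init__(self):
        super().__init__()
        self.links = 0
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.links += 1

    def handle_data(self, data):
        self.text_parts.append(data)

def outbound_link_density(html: str) -> float:
    """Hypothetical metric: outbound links per 1,000 words of visible text."""
    parser = LinkCounter()
    parser.feed(html)
    words = len(re.findall(r"\w+", " ".join(parser.text_parts)))
    return 1000 * parser.links / max(words, 1)
```

Computed monthly over sampled pages, a falling value of a metric like this would indicate AI-era text citing fewer sources; per the paper’s findings, that drop did not materialize.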
To the surprise of the researchers, only two of the six theories they tested about the effects of AI-generated text seemed true. AI was making the internet less semantically diverse and more positive overall, but it wasn’t causing a proliferation in lies or cutting out its sources.
“The most surprising result was that our Truth Decay hypothesis wasn't confirmed,” Doležal said. “It's worth noting that we were specifically looking for an increase in verifiably untrue statements, which we didn't find. But it could still be the case that AI is quietly increasing the volume of unverifiable claims, ones that can't be checked against existing fact-checking tools and infrastructure. Or it may simply be that the internet wasn't a particularly truth-adhering place to begin with.”
The researchers said they’d continue to study how AI-generated text shaped the internet. “We're now working with the Internet Archive to turn this into a continuous tool that keeps providing this signal going forward, rather than a single fixed snapshot bounded by the static nature of a paper,” Maty Bohacek, a student researcher at Stanford and one of the co-authors of the paper, told 404 Media. “We're also interested in adding more granularity: looking at which kinds of websites are most affected, broken down by category or language, and generally providing more nuance about where these impacts are landing.”
For Doležal, studies like this are critical for ensuring a useful and productive internet. “As AI-generated content spreads, the challenge is finding a role for these models that doesn’t just result in a sanitized, repetitive web,” he said. “Rather than forcing models to be perfectly compliant and agreeable, allowing them to have a more distinct personality or ‘friction’ might help them act as a creative partner rather than a replacement for human voice.”