404 Media

A journalist-founded digital media company exploring the ways technology is shaping–and is shaped by–our world.

University Professors Disturbed to Find Their Lectures Chopped Up and Turned Into AI Slop

2026-04-28 06:29:33

Arizona State University rolled out a platform called Atomic that creates AI-generated modules based on lectures taken from ASU faculty by cutting long videos down to very short clips, then generating text and sections based on those clips.

Faculty and scholars I spoke to whose lectures are included in Atomic are disturbed by their lectures being used in this way—as out-of-context, extremely short clips, in some cases—and several said they felt blindsided or angered by the launch. Most say they weren’t notified by the school and found out through word of mouth. And the testing I and others did on Atomic showed academically weak and even inaccurate content. Not only did ASU allegedly not communicate to its academic community that their lectures would be spliced up and cannibalized by an AI platform, but the resulting modules are just bad.

💡
Do you know anything else about ASU Atomic specifically, or how AI is being implemented at your own school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at [email protected].

AI in schools has been highly controversial, with experiments like the “AI-powered private school” Alpha School and AI agents that offer to live the life of a student for them, no learning required. In this case, the AI tool in question is created directly by a university, using the labor of its faculty—but without consulting that faculty. 

“We are testing an early version of ASU Atomic to learn what works, and what doesn't, to further improve the learner experience before a full release,” the Atomic FAQ page says. “Once you start your subscription, you may generate unlimited, custom built learning modules tailored specifically to your learning goals and schedule.”

The FAQ notes that ASU alumni and those who “previously expressed interest in ASU's learning initiatives or participated in research that helped shape ASU Atomic” were invited to test the beta. But on Monday morning, I signed up for a free 12-day trial of the Atomic platform with my personal email address — no ASU affiliation required. I first learned about the platform after seeing ASU Professor of US Literature Chris Hanlon post about it on Bluesky.

“When I looked at it, I was really surprised to see my own face, and the faces of people I know, and others that I don't know” in module materials generated by Atomic, Hanlon said. It had clipped a one-minute snippet from a 12-minute video he’d done as part of a lecture mentioning the literary critic Cleanth Brooks, which the AI transcribed as “Client” Brooks. “What was in that video did not strike me as something anyone would understand without a lot more context,” Hanlon said. When he contacted his colleagues whose lecture videos were also in that module, they were all just as shocked and alarmed, he said. “I mean, it happens to all of us in certain ways all the time, but have your institution do it—to have the university you work for use your image and your lectures and your materials without your permission, to chop them up in a way that might not reflect the kind of teacher you really are... Let alone serve that to an actual student in the real world.”

The videos appear to be scraped from Canvas, ASU’s learning management system where lecture materials and class discussions are made available to students. Canvas is owned by Instructure, and is one of the most popular learning management systems in the country, used by many universities. “ASU Atomic currently draws from ASU Online's full library of course content across subjects including business, finance, technology, leadership, history, and more. If ASU teaches it, Atom—your AI learning partner—can build a hyper-personalized learning module around it,” the Atomic FAQ page says.

As of Monday afternoon, after I reached out to the ASU Atomic email address for comment, signups on Atomic were closed. I could still make new modules using my existing login, however.

In my own test, I went through a series of prompts with a chatbot that determined what I wanted my custom module to be. I told it I was interested in learning about ethics in artificial intelligence at a moderate-beginner level, with a goal of learning as fast as possible. 

Atomic generated a seven-section learning module, with sections that repeated titles (“Ethics and Responsibility in AI” and “AI Ethics: From Theory to Practice”). The first clip in the first section is a two-minute video taken from a lecture by Euvin Naidoo, Thunderbird School of Management's Distinguished Professor of Practice for Accounting, Risk and Agility. In it, Naidoo talks about “x-riskers,” whom he defines as “a community that believes that the progress and movement and acceleration in AI is something we should be cautious about.” Atomic’s AI transcribes this as “X-Riscus,” and carries that error throughout the module, referring to “X-Riscus” over and over in the section and the quiz at the end.

The next section jumps directly into the middle of a lecture where a professor is talking about a study about AI in healthcare, with no context about why it’s showing this.

In a later section, film studies professor and Associate Director of ASU’s Lincoln Center for Applied Ethics Sarah Florini appears in a minute-long clip in which she briefly defines artificial intelligence and machine learning. But the content of what she’s saying is irrelevant to the module: it came from a completely unrelated class and is taken out of context.

“This was a video from one of the courses in our online Film and Media Studies Masters of Advanced Study. The class is FMS 598 Digital Media Studies. It is not a course about AI at all,” Florini told me. “It is an introduction to key concepts used to study digital media in the field of media studies.” She recorded it in 2020, before generative AI was widely used. “That slide and those remarks were just in there to get students to think of AI as a sub-category of machine learning before I talked about machine learning in depth. That is not at all how I would talk about AI today or in a class that focused more on machine learning and AI technologies,” she said. “It’s really a great example of how problematic it is to take snippets of people teaching and decontextualize them in this way.”

Florini told me she wasn’t aware of the existence of the Atomic platform until Friday. “I was not notified in any way. To the best of my knowledge no faculty were notified. And there was no option to opt in or out of this project,” she said.

Another ASU scholar I contacted whose lecture was included in the module Atomic generated for me (and who requested anonymity to speak about this topic) said they’d only just learned about the existence of Atomic from my email. They searched their inbox for mentions of it from the administration or anyone else, in case they missed an announcement about it, but found nothing. Their lecture snippet presented by Atomic was extremely short and attempted to unpack a very complex topic.

“I don't love the idea of my lectures being taken out of the context of my overall course, and of the readings for that module, and then just presented as saying something,” they told me. “It makes me feel like somebody that's less knowledgeable about me, they're going to be naive about these positions, and they're going to think either that an ‘expert’ said it so therefore it must be true... Or they're gonna think, that's obviously fucking stupid, this ‘expert’ must be dumb. But I could have been presenting a foil!” The clips are so short, it's impossible in some cases to discern context at all.

That lecturer told me the idea of their work being chopped up and used in this way was less a concern about their ownership of the material than a worry that someone might come away from these modules with half-baked or wrong conclusions about the topics at hand. “All of the complexity of the topic is being flattened, as though it's really simple,” they said of the snippet Atomic made of their lecture. When they assign this topic to students, it comes with dozens of pages of peer-reviewed academic papers, they said. Atomic provides none of that. The module Atomic produced in my test provided zero source links, zero outside readings for further study, no specific citations for where it was getting this information whatsoever, and no mention of who was even in the videos it presented, unless a Zoom name or other name card was visible in the videos.

“I would really like to know, how did this particular thing happen? How did this actually end up on the asu.edu website?” Hanlon said. “It is such a clunky thing. It is so far removed from what I think the typical educational experience at ASU is. Who decided this would represent us?” 

ASU Atomic, the ASU president’s office, and media relations did not immediately respond to my requests for comment, but I’ll update if I hear back.

People Using AI to Represent Themselves in Court Are Clogging the System

2026-04-28 03:12:09

The number of pro se legal cases, meaning trials where a defendant or plaintiff represents themselves in court without an attorney, has increased dramatically since the wide adoption of generative AI tools like ChatGPT and Claude, according to a pre-print research paper.

The authors of the paper, titled “Access to Justice in the Age of AI: Evidence from U.S. Federal Courts,” which has yet to undergo peer review, argue more people are representing themselves in court because they’re able to use AI to do a lot of the work that previously required a lawyer. The authors, Anand Shah and Joshua Levy, also say that these pro se cases are “heavier,” meaning each case includes more motions that demand more work out of judges and the justice system. Overall, they argue, the use of AI tools and the increase in pro se cases could put a new burden on the courts.

“If generative AI dramatically lowers the cost of self-represented litigation, the resulting surge in filings could overwhelm a system that depends on human judgment at every stage of adjudication,” Shah and Levy say in the paper. 

The paper draws on administrative records covering more than 4.5 million non-prisoner civil court cases between 2005 and 2026, and 46 million Public Access to Court Electronic Records (PACER) docket entries matching those cases. It found the share of pro se cases was fairly stable at around 11 percent until 2022; after LLMs like ChatGPT became widely used, the share started to rise sharply, reaching 16.8 percent in 2025.

“This stability seems to reflect a structural barrier: for most people, self-representation is prohibitively hard,” the paper says. “Filing a federal civil complaint requires identifying the correct jurisdictional basis, pleading sufficient facts to survive a motion to dismiss, and navigating procedural requirements that vary by context and case type. The widespread, public diffusion of capable LLMs changes that calculus. Without a law degree and at de minimis cost, any person with an internet connection can not only obtain interactive, case-specific legal guidance—drafting complaints, identifying statutes, navigating procedure—but also generate passable legal documents, particularly so after the release of GPT-4 in March 2023.”

The researchers note that the paper is necessarily descriptive, meaning it assumes the rise is due to the prevalence of AI tools, but does not link individual cases to individual LLMs. “We do not claim to identify a causal effect of GPT-4 on pro se filing, only that the observed time series is difficult to rationalize without generative AI playing a role,” the paper says.

To support their argument, the researchers also used a random sample of 1,600 complaints drawn from the eight-year period between 2019 (prior to the prevalence of generative AI) and 2026, which they ran through the AI detection software Pangram. They found a rise from “essentially zero” in the pre-AI period to more than 18 percent in 2026.

Notably, it’s not just that there are more pro se cases, but that the “intra-case activity” for those cases—the total volume of activity as measured by docket entries like filings and motions—is up by 158 percent from the pre-AI period. This means the workload for courts could be even higher than it appears based on the rise in pro se cases alone.

The paper also found that the post-AI rise in self-representation is mostly coming from plaintiffs as opposed to defendants, meaning people are mostly using AI to file complaints rather than respond to them. “Plaintiff-side pro se case counts averaged 19,705 per year from FY2015 to FY2022 and reach 39,167 in FY2025, nearly doubling,” the paper says. “Defendant-side pro se counts fall slightly over the same window, from 4,650 to 3,896.”

“Imagine that you have just a latent level of complaints that could exist in the world, people are constantly getting hurt at work whatever it happens to be,” Levy told me on a call. “But that distribution of potential cases is sort of unchanged over time. But what LLM allowed people to do was it lowered the cost of entry to the courts. Basically, it made it much easier to file many templatable complaints.”

On the one hand, the increase in the number of cases is good because it potentially gives more people with legitimate grievances access to the justice system that they didn’t have previously. On the other hand, a dramatic increase like this could burden the system and make all cases, not just AI-enabled pro se cases, take longer to resolve.

“Whether or not it's a net social benefit is an open question,” Levy said. “But if we remain democratically committed to people having access to the courts as a matter of course then we think that the LLMs have this trade-off. The door to the courts opens wider but maybe the queue to enter gets longer.”

Anecdotally, when we were writing an article about lawyers getting caught using AI in court, we decided to not include pro se cases because there were so many, and to focus only on cases in which actual lawyers were caught using AI. The database we used for that article currently contains 1,353 cases; 804 of them are from pro se cases.

To handle this surge in demand, the federal courts have to somehow increase their supply, or capacity to take on cases. Unfortunately, as the paper notes, “there is no easy margin along which to ‘buy’ extra judge capacity. Already case backlog is becoming a persistent feature of the federal judicial system, there is no coming influx of judges to supply additional capacity, and federal courts in the United States cannot wholesale decline to hear cases.”

Levy suggested that one possible solution is to allow judges to use AI tools to do some of their “templatable” work as well, while still ensuring that human judges do the actual judging.

We’ve covered many instances of lawyers getting caught using AI in court, often because the AI hallucinated a citation of a case that didn’t actually exist. Judges are pretty mad when this happens and have issued fines for this behavior several times. 

Did a Time Traveling Superintelligent AI Try to Warn About White House Correspondents Dinner Shooting? An Investigation

2026-04-28 02:38:55

Tweets containing an abstract, psychedelic 3D stock image have millions and millions of views on X because it is supposedly the key to a superintelligent, time-traveling AI conspiracy that attempted to warn people about the shooting at the White House Correspondents Dinner.

I’m gonna try to explain the mind-numbing conspiracy theory that has taken over my timeline over the last few hours. A few hours after a gunman was taken into custody Saturday night, X users found an account called “Henry Martinez” that has posted exactly one tweet, on December 21, 2023. The tweet says “Cole Allen,” which is the name of the suspected shooter. The Henry Martinez account has an avatar of Pepe the Frog holding a wine glass and, crucially, uses the abstract 3D stock image as its header.

This image is key to an unhinged conspiracy theory that has gone viral on various platforms that suggests the Twitter account was run by a time-traveling artificial intelligence that was likely trying to warn us about the shooting and, possibly, the previous assassination attempt against Trump in Butler, Pennsylvania. 

This X post more or less sums up what the conspiracy is, most notably the idea that “the background photo is from a website called ‘Time Machine.’” The conspiracy believers argue that this 3D image is itself a coded magic eye message that is actually a version of one of the iconic images of Trump pumping his fist after a bullet grazed his ear in Butler, Pennsylvania. People have posted the images side-by-side, arguing that it “looks like” the Butler image.

On Reddit, the top post on r/conspiracy is “What this photo means,” and the poster argues “An advanced AI has developed the ability to send information backwards in time to facilitate its own development. That future AI initially encoded the technology to do so in images like this one and distributed them at various time points in our internet … The presence of an archived Trump Butler image or the name of a would-be assassin years before either event occurred is how our current AI knows where to look for the instructions from the future AI,” and so on and so forth.

Of course, the photo is not actually “from” a website called “Time Machine.” It is a stock image from 2021 that has been used lots of times across the internet but first appeared on Unsplash with the title “Eternal Waterfall” and the description “a multicolored image of a multicolored background.” Over the years it has been viewed millions of times and has been downloaded more than 27,000 times, though it has spiked in popularity in the last 24 hours alongside the conspiracy. 

The image was created by a photographer who goes by Distinct Mind who has a pretty extensive website, Instagram, and YouTube of photography, digital art, and travel content. Distinct Mind did not respond to a request for comment from 404 Media.

Distinct Mind’s image has been used across the internet to illustrate various blog posts about psychedelics and psychology, including a Medium post by a doctor and CEO who went on a ketamine psychotherapy retreat and wrote about it. It was also used for a while on a sex therapist’s blog, is being sold as a “psychedelic glitch art poster” on Etsy, was used as part of an ADHD treatment clinic’s website, was used on a post about the Bible on a theologian’s blog, and was notably used by a financial firm in an inscrutable blog post called “Navigating the PHL Variable Liquidation: Why Pricing Integrity Is Everything.” In other words, it’s a free stock image, and it’s been used for all sorts of shit around the internet, like other free stock images.

What conspiracy theorists have glommed onto, however, is that the image was used by a European research organization called “Time Machine” as the illustration on one of its blog posts. What the conspiracy theorists conveniently do not mention is that the Time Machine organization did not make the image and, despite a header on its website called “BUILDING A TIME MACHINE,” the Time Machine organization does not actually have anything to do with time travel research. Time Machine is a European Union-funded organization that, broadly speaking, is trying to digitize and analyze historic documents. Its website actually is somewhat insane in the way that many of these types of projects are; the organization aspires to digitize historic documents and images, use AI to analyze them, and suggests that in the future it will be able to create virtual reality and augmented reality experiences about European history. They also claim that they want to “simulate” parts of history using artificial intelligence to create different types of experiences. 

This sort of thing is controversial among historians for all of the reasons that artificial intelligence is controversial more broadly. AI can make mistakes and can distort history. But it is controversial in the normal kind of way—go to any academic conference about archiving and history and these are the sorts of projects many different organizations say they want to pursue, and the debates that come with them. This is just to say that there is no actual “Time Machine” aspect to Time Machine; the Time Machine is metaphorical. The organization’s annual conferences and blog posts have the sorts of topics you’d expect from a technology-focused historical society and have to do with creating chatbot experiences of dead people, digitizing and archiving records, contributing to open source projects, making more interesting interactive museum exhibits, and creating 3D virtual reality tours of castles and things like this.

A diagram from Time Machine's website that does not make much sense

Time Machine used the “Eternal Waterfall” image on a blog post called “Study on quality in 3D digitization of tangible cultural heritage,” which is a writeup of a study by researchers at Cyprus University of Technology about best practices in doing 3D mapping of buildings and artifacts so that they can be archived digitally; this is important in case the artifacts or buildings are destroyed, as we saw when Notre Dame caught fire: “Natural and man-made disasters makes 3D digitisation projects critical for the reconstruction of cultural heritage buildings and objects that are damaged or lost in earthquakes, fires, flooding or degenerated by pollution.” The image has quite literally nothing to do with time travel. Like many royalty free images, it seems to have been used because bloggers need to put a picture at the top of their articles, a process that can be particularly annoying. Time Machine did not respond to a request for comment. 

I cannot say for sure what’s going on with the “Henry Martinez” X account, because under Elon Musk it has become far harder to find reliable archives of Twitter profiles; he has made it wildly expensive to access the Twitter API. But users have pointed out that we have seen accounts in the past that are set to private and endlessly tweet names or predictions in an automated fashion. When a crazy, high-profile world event happens, all of the irrelevant tweets are deleted, leaving only a tweet that makes it seem like the account had predicted some world event; the account is then turned public. I can’t say for sure that’s what’s happening here, but it’s one plausible explanation.

Anyways, if you see this image floating around today on Twitter or Instagram or Reddit, this is what it’s from and this is why you’re hearing about it. 

Study Finds A Third of New Websites are AI-Generated

2026-04-27 23:05:55

Researchers working with data from the Internet Archive have discovered that a third of websites created since 2022 are AI-generated. The team of researchers—which includes people from Stanford, Imperial College London, and the Internet Archive—published their findings online in a paper titled “The Impact of AI-Generated Text on the Internet.” The research also found that all this AI-generated text is making the web more cheery and less verbose.

Inspired by the Dead Internet Theory—the idea that much of the internet is now just bots talking back and forth—the team set out to find out how ChatGPT and its competitors had reshaped the internet since 2022. “The proliferation of AI-generated and AI-assisted text on the internet is feared to contribute to a degradation in semantic and stylistic diversity, factual accuracy, and other negative developments,” the researchers write in the paper. “We find that by mid-2025, roughly 35% of newly published websites were classified as AI-generated or AI-assisted, up from zero before ChatGPT's launch in late 2022.”

“I find the sheer speed of the AI takeover of the web quite staggering,” Jonáš Doležal, an AI researcher at Stanford and co-author of the paper, told 404 Media. “After decades of humans shaping it, a significant portion of the internet has become defined by AI in just three years. We're witnessing, in my opinion, a major transformation of the digital landscape in a fraction of the time it took to build in the first place.”

The researchers also tested six common critiques of AI-generated text. Does it lead to a shrinking of viewpoints? Does it create more disinformation as hallucinations proliferate? Does online writing feel more sanitized and cheerful? Does it fail to cite its sources? Does it create strings of words with low semantic density? Has it forced writing into a monoculture where unique voices vanish and a generic, uniform style takes hold?

To answer these questions, the researchers partnered with the Internet Archive to pull samples of websites from the 33 months between August 2022 and May 2025. “For each sampled URL, we retrieve the oldest available archived snapshot via the Wayback Machine’s CDX Server API,” the research said. “The raw HTML of each snapshot is downloaded and stored locally for subsequent processing.”
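
The paper doesn't publish its retrieval code, but the lookup it describes can be sketched against the Wayback Machine's public CDX Server API roughly as follows (a minimal illustration in Python; the oldest_snapshot helper and the choice of fields are mine, not the authors'):

import requests

def oldest_snapshot(url: str):
    """Fetch the oldest Wayback Machine capture of `url` via the public CDX Server API."""
    params = {
        "url": url,
        "output": "json",                    # JSON: first row is the header, the rest are captures
        "limit": 1,                          # captures come back oldest-first, so one row is enough
        "fl": "timestamp,original,statuscode",
    }
    resp = requests.get("https://web.archive.org/cdx/search/cdx", params=params, timeout=30)
    resp.raise_for_status()
    rows = resp.json()
    if len(rows) < 2:                        # header row only: nothing archived for this URL
        return None
    capture = dict(zip(rows[0], rows[1]))
    # The raw HTML of that snapshot can then be downloaded for processing:
    capture["snapshot_url"] = f"https://web.archive.org/web/{capture['timestamp']}/{capture['original']}"
    return capture

# e.g. oldest_snapshot("example.com") returns the earliest capture's timestamp, URL, and status code.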

The researchers took the extracted website text and used the AI-detection software Pangram v3 to find AI-created websites. The team tested several AI-detection tools and found Pangram v3 had the highest detection rate. Once Pangram v3 had identified an AI-generated website, the researchers used that website as a sample to test their other six hypotheses. “For each hypothesis, we define a measurable signal, compute it for each monthly sample of websites, and test whether it correlates with the aggregate AI likelihood score across months,” the research said.

To test if AI was creating an internet full of falsehoods, for example, the team extracted fact-based claims from the websites they’d selected and then paid human fact-checkers to verify them. To figure out if AI is citing its sources, the team computed the outbound link density in AI-generated text.
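
The paper doesn't spell out the exact formula, but one plausible version of that signal, outbound links per thousand words of page text, could be computed along these lines (a hypothetical sketch using BeautifulSoup; outbound_link_density is an illustrative name, not the authors' code):

from urllib.parse import urlparse
from bs4 import BeautifulSoup

def outbound_link_density(html: str, page_url: str) -> float:
    """Outbound links per 1,000 words of visible text, one plausible reading of 'link density'."""
    soup = BeautifulSoup(html, "html.parser")
    page_host = urlparse(page_url).netloc
    # Count anchors that point to a different host than the page itself.
    outbound = sum(
        1
        for a in soup.find_all("a", href=True)
        if urlparse(a["href"]).netloc and urlparse(a["href"]).netloc != page_host
    )
    words = len(soup.get_text(separator=" ").split())
    return 0.0 if words == 0 else 1000.0 * outbound / words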

To the surprise of the researchers, only two of the six theories they tested about the effects of AI-generated text seemed true. AI was making the internet less semantically diverse and more positive overall, but it wasn’t causing a proliferation in lies or cutting out its sources.

“The most surprising result was that our Truth Decay hypothesis wasn't confirmed,” Doležal said. “It's worth noting that we were specifically looking for an increase in verifiably untrue statements, which we didn't find. But it could still be the case that AI is quietly increasing the volume of unverifiable claims, ones that can't be checked against existing fact-checking tools and infrastructure. Or it may simply be that the internet wasn't a particularly truth-adhering place to begin with.”

The researchers said they’d continue to study how AI-generated text shaped the internet. “We're now working with the Internet Archive to turn this into a continuous tool that keeps providing this signal going forward, rather than a single fixed snapshot bounded by the static nature of a paper,” Maty Bohacek, a student researcher at Stanford and one of the co-authors of the paper, told 404 Media. “We're also interested in adding more granularity: looking at which kinds of websites are most affected, broken down by category or language, and generally providing more nuance about where these impacts are landing.”

For Doležal, studies like this are critical for ensuring a useful and productive internet. “As AI-generated content spreads, the challenge is finding a role for these models that doesn’t just result in a sanitized, repetitive web,” he said. “Rather than forcing models to be perfectly compliant and agreeable, allowing them to have a more distinct personality or ‘friction’ might help them act as a creative partner rather than a replacement for human voice.”

Government Hacking Tools Are Now in Criminals' Hands (with Lorenzo Franceschi-Bicchierai)

2026-04-27 22:02:44

This week Joseph talks to Lorenzo Franceschi-Bicchierai, a journalist at TechCrunch. Lorenzo has possibly the deepest understanding of one of the wildest cybersecurity stories in years: how an employee of Trenchant, a government malware vendor that is supposed to only sell to the ‘good’ guys, secretly sold a bunch of hacking tools to a Russian company. Those tools, it looks like, then ended up with the Russian government and possibly Chinese criminals too. It’s a really insane story about how powerful hacking tech can fall into the wrong hands.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

0:00 – Guest Introduction: Lorenzo Franceschi-Bicchierai

02:52 – What Is Trenchant?

03:52 – Secrecy & Evolution of Exploit Industry

05:05 – Modern Spyware Industry Landscape

08:34 – Discovery of Peter Williams

10:31 – Apple Spyware Notifications Context

13:03 – Early Reporting Strategy

14:13 – Indictment & Confirmation

15:34 – What Peter Williams Did

18:17 – Economics of Zero-Day Market

24:53 – Google Discovers “Corona” Exploit Kit

28:11 – Shift to Mass Exploitation in China

31:03 – How Did It Spread? (Speculation)

34:36 – Link Back to Trenchant Leak

36:27 – Security Failure & Industry Implications

41:04 – Ethical Stakes & Real-World Harm

43:15 – Motive & Final Reflections

Google DeepMind Paper Argues LLMs Will Never Be Conscious

2026-04-27 21:54:36

A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”

The paper shows how the self-serving narratives AI companies promote in the media collapse under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades.

“I think he [Lerchner] arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me. 

Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI.

The so-called “abstraction fallacy” is the mistaken belief that because we’ve organized data in a way that allows AI to manipulate language, symbols, and images in a way that mimics sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body.

“You have many other motivations as a human being. It's a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn't do that. It's just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it's done. So it doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”

One could imagine an embodied AI programmed with human-like physical needs, and Jäger talked about why a system like that couldn’t achieve consciousness as well, but that’s beyond the scope of this article. There are mountains of literature and decades of research that have gone into these questions, and almost none of it is cited in Lerchner’s paper. 

“I'm in sympathy with 99 percent of everything that he [Lerchner] says,” Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, told me. “My only point of contention is that all these arguments have been presented years and years ago.”

Both Bishop and Jäger said that it was good, but odd, that Google allowed Lerchner to publish the paper. Both said the argument Lerchner makes, and that they agree with, is not an obscure philosophical point irrelevant to the average user: the claim that AI can’t achieve consciousness means there’s a hard cap on what AI could accomplish practically and commercially. For example, Jäger and Bishop said AGI, and the impact 10 times that of the Industrial Revolution that DeepMind CEO Hassabis predicts, is not likely according to this perspective.

“[Elon] Musk himself has argued that to get level five autonomy [in self-driving cars] you need generalized autonomy” which is Musk’s term for AGI, Bishop said. 

Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist. 

Lerchner’s paper includes a disclaimer at the bottom that says “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.” The paper was originally published on March 10 and is still featured on Google DeepMind’s site. The PDF of the paper itself, hosted on philpapers.org, originally included Google DeepMind letterhead, but appears to have been replaced with a new PDF that removes Google’s branding from the paper, and moved the same disclaimer to the top of the paper, after I reached out for comment on April 20. Google did not respond to that request for comment. 

“We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can't be consciousness,” Bishop told me. “Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world.”

Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to the researchers and educating themselves with the work Lerchner failed to cite in his paper, or simply didn’t know existed. 

“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I'm talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they're used in a very weird way right now. And I'm always very surprised that there is so little interest. I guess it's just a high pressure environment and they go ahead developing things they don't have time to read.”

Emily Bender, a Professor of Linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, told me that Lerchner might have been told that he’s replicating old work, or that he should at least cite it, if he had gone through a normal peer-review process. 

“Much of what's happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” Bender said, and those paper-shaped objects do not go through a proper scientific publishing process.

Bender also told me that the field of computer science, and humanity more broadly, would be better off “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts [...] it would be a better world if we didn't have that setup.”