404 Media

A journalist-founded digital media company exploring the ways technology is shaping, and is shaped by, our world.

CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court

2026-03-17 06:40:53

A judge ordered the reinstatement of a video game developer after he was fired as part of a scheme cooked up by a CEO using ChatGPT. Facing the possibility of paying out a massive bonus to the developer of Subnautica 2, the CEO of publisher Krafton used ChatGPT to create a plan to take over the development studio and force out its founder, according to court records.

The Monday ruling details the bizarre story. Unknown Worlds Entertainment is the studio behind the 2018 underwater survival game Subnautica. The company has since been working on the sequel, Subnautica 2. In 2021, South Korean publisher Krafton bought Unknown Worlds Entertainment for $500 million and promised to pay out another $250 million if Subnautica 2 sold well enough.

Krafton’s internal sales projections for Subnautica 2 looked great, which meant the company would likely be on the hook for the additional $250 million. Hoping to avoid that payout, Krafton CEO Changhan Kim turned to ChatGPT for help. “As Unknown Worlds prepared to release its hotly anticipated sequel, Subnautica 2, the parties’ relationship fractured,” the court decision said. “Fearing he had agreed to a ‘pushover’ contract, Krafton’s CEO consulted an artificial intelligence chatbot to contrive a corporate ‘takeover’ strategy.”

Kim partnered with Krafton Head of Corporate Development Maria Park and the company’s legal team to work out options. He toyed with finding a reason to fire the founders. According to court records, Park pinged Kim on Slack and told him that attempting to avoid paying the bonus would be legally risky. “Hi CEO . . . it seems to be highly likely that the earn-out will still be paid if the sales goal is achieved regardless of the dismissal with cause,” the Slack message said according to court records. “Therefore, there isn’t much that we can practically gain other than punishment with a simple dismissal alone, whereas I am worried that we may be exposed to lawsuit and reputation risk.”

But the CEO would not accept defeat. “And so Kim turned to ChatGPT for help,” court records said. “When the AI chatbot responded that the earnout would be ‘difficult to cancel,’ Kim complained to Park that the [payout] was a ‘contract under which we can only be dragged around.’”

Kim pressed the chatbot for an answer. “At ChatGPT’s suggestion, Kim formed an internal task force, dubbed ‘Project X.’ The task force’s mandate was to either negotiate a ‘deal’ on the earnout or execute a ‘Take Over’ of Unknown Worlds. They looked to buy time,” court records said. “Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a ‘Response Strategy’ to a ‘No-Deal’ Scenario.”

This was a piece of ChatGPT’s “Project X” for Krafton:

“a. Preemptive Framing - Repeat that protecting quality and fan trust is the highest priority, undermine the ‘Large Corporation VS. Indie’ framing

b. Securing Control Points -

* Lock down Steam/console publishing rights and access rights over code/build pipeline through both legal and technical aspects.

* For the earn-out freeze, keep room for negotiations through provision stating ‘immediate removal if specific development results are achieved’

a. Systematic materials for legal defense - Prepare contract interpretation memorandums, log all communications, seek external consultation
b. Team retention - Operation of retention packages for key personnel and rapid backfill pipelines in anticipation of resignation/departure scenarios
c. Two handed strategy - Create a structure that allows for both hardball (Legal+ Finance) and softball (Support/Incentives) approaches so moderate factions within Unknown Worlds can push for compromise.”

Kim followed ChatGPT’s advice rather than his lawyers’, according to the court records. The first step was posting a message on Subnautica’s website to get fans on his side. According to court documents, Kim said the goal of the message was to “secure public support from fans and legal validation of our legitimacy.” He then suggested that ChatGPT write it for him. It achieved the opposite of his intended goal: fans found the message bizarre and worried about the future of the game. Those fears were compounded when Kim fired the game’s original creators and entered into a legal battle with them.

The legal battle is ongoing, but Kim looks set to lose. The judge has ordered him to reinstate the fired developers, and the ruling exposed the CEO’s flailing use of ChatGPT. Krafton told Kotaku that it was “evaluating its options” regarding the ruling and that it “puts players at the heart of every decision.”

Texting a Random Stranger Better for Loneliness Than Talking to a Chatbot, Study Shows

2026-03-17 00:48:49

Lonely young people are likely better off texting a random stranger than talking to a chatbot, according to a new study.

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling.

The research included 300 first-semester college students who were either randomly paired with another student, given a daily solo writing task, or put into a Discord server with a chatbot running on GPT-4o mini.

The students were instructed to have at least one interaction per day in each of the groups. The human-human pairs were instructed to message each other however they wanted, while the researchers instructed the bot to “listen actively and show empathy,” and to be a “friendly, positive, and supportive AI friend to help the student navigate their new college experience.” The human participants ultimately acted pretty similarly in both types of chat, sending between eight and 10 messages a day in both their human text chains and their Discord conversations with the large language model (LLM).

However, participants who were paired with a human partner reported significantly lower loneliness after the study, while those paired with the chatbot did not. “This is just such a low tech, simple intervention, and can make people feel significantly less lonely,” Ruo-Ning Li, a PhD candidate at UBC and one of the authors of the paper, told 404 Media.

The research looked at college students specifically, to try to understand whether LLMs could be a scalable tool to help with the isolation that people can feel when going through a big change. The transition to college can be overwhelming: new classmates, new places, new rules. Young people are often away from parents or familiar structure for the first time, building out new social networks among others who are doing the same. This is a particularly vulnerable time: if chatbots could really cure loneliness for a group of people like this, “then it would be great,” said Li. But only human-to-human interaction, even with a random stranger over text, had a significant effect.

The research is part of a movement to understand the effects of LLM interactions over periods of time. Another paper from the same lab, published this week in Psychological Science, looks at the experiences of more than 2,000 people over twelve months, checking in with them once a quarter. The study found that higher reported chatbot use was linked with higher loneliness later on — and vice versa. “Changes in chatbot use have a small effect on emotional isolation in the future. And emotional isolation has a similarly sized effect on your likelihood to use chatbots in the future,” Dr. Dunigan Folk, one of the study’s authors, told 404 Media. He cautioned against calling it a “spiral”, since other things could be changing in peoples’ lives to make them use chatbots and be lonelier. But, he said “it’s suggestive of a negative feedback loop because it’s a reciprocal relationship.” Chatbots, he said, could be something like “social junk food.” They might make people feel good in the moment, “but over time, they might not nourish us the same way that human relationships do.”

He said this finding would be consistent with people replacing human relationships with LLMs. “I think it’s a trade-off thing where you talk to AI instead of a person,” Folk said. “The person would have been a lot more rewarding.”

And there is evidence to show that AI does have some short-term effects on mood. “If you measure their feeling of loneliness or social connection right after the interaction, people do feel better,” said Li. However, she added, “making people feel momentarily happy is not that hard.” It is not clear that a single positive experience is scalable or persistent longer term. “We eat candy, we feel happy. But if we eat a lot of candy over a long time, it could be harmful for our health,” Li said. 

That positive short term effect is often reflected in public reports of chatbot usage. For example, two weeks ago, the Guardian published a column where a reporter trialled using an LLM as a therapist, described their validating interaction with it, and concluded that the “experience of being therapised by a chatbot has been wonderful.” While this isn’t necessarily a robust study design, there is empirical research that “one-shot” interactions with bots do make people feel better in the short term. 

However, human interactions also have positive effects that chatbot use could be distracting people from. Li considers it important to consider the side effects of chatbot interactions, including their potential for replacing the incentive to seek out the positive effects of human connection. “AI can help mitigate negative feelings, but obviously, it cannot replace humans to build connections,” she said. “That shouldn’t be the goal of the AI design.”

A four-week March 2025 study from the MIT Media Lab and OpenAI explored how different types of LLM interaction and conversation impacted users’ mental wellbeing. The paper found that while some instances of chatbot use “initially appeared beneficial in mitigating loneliness,” higher daily LLM usage was associated with “higher loneliness, dependence, and problematic use, and lower socialization.”

Witness Caught Using Smartglasses in Court Blames it all on ChatGPT

2026-03-16 22:41:09

An insolvency judge in England tossed out testimony after discovering a witness was being coached on what to say in real time through a pair of smartglasses. When the voice of the coach started coming through the cellphone after it was disconnected from the glasses, the witness blamed the whole thing on ChatGPT.

Insolvency and Companies Court (ICC) Judge Agnello KC in Britain wrote up the incident after it happened in January, and the UK-based legal research blog Legal Futures was first to report it. The case concerned the liquidation of a Lithuanian company co-owned by a man named Laimonas Jakštys. Jakštys was in court to get his business off an insolvency list and to put himself back in charge of it. It didn’t go well.

“Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked,” Judge Agnello wrote. “These questions were interpreted and then there was a pause before there was a reply. After several questions, [defense lawyer Sarah Walker] then informed me that she could hear an interference coming from around Mr. Jakštys and asked if Mr. Jakštys could take his glasses off for a period as she was aware smart glasses existed.”

There was a Lithuanian interpreter on hand to help Jakštys talk to the court and she, too, said she could hear voices from Jakštys’s glasses. The judge pointed out they were smart glasses and asked him to take them off. “After a few further questions, when the interpreter was in the process of translating a question, Mr Jakštys’ mobile phone started broadcasting out loud with the voice of someone talking,” Judge Agnello wrote. “There was clearly someone on the mobile phone talking to Mr. Jakštys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”

Jakštys showed up the next day in the glasses again and the judge told him to turn them off. “Jakštys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked,” the judgement said. “He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me.”

During the court appearance, Jakštys claimed his mobile phone had been stolen but couldn’t provide a police report for the incident. He also repeatedly received calls on his smartglasses-connected phone from a number listed as “abra kadabra.” The call log showed that many of the calls occurred when he was on the witness stand. The judge asked him about the identity of “abra kadabra” and Jakštys said it was a taxi driver.

“When he was pressed as to why all these calls were made…Mr. Jakštys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence,” Judge Agnello said.

In the end, the Judge tossed out all of Jakštys’ testimony. “He was untruthful in relation to his use about the smart glasses and in being coached through the smart glasses,” the judgement said. “In my judgment, from what occurred in court, it is clear that call was made, connected to his smart glasses and continued during his evidence until his mobile phone was removed from him. When asked about this, his explanation was that he thought it was ChatGPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”

This incident in the London court is just another in a long line of bad behavior from people wearing smartglasses. CBP agents have been spotted wearing them during immigration raids and Harvard students have loaded them with facial recognition tech to instantly dox strangers.

The Removed DOGE Deposition Videos Have Already Been Backed Up Across the Internet

2026-03-15 00:00:39

The DOGE deposition videos a judge ordered removed from YouTube on Friday, after they had gone massively viral, have since been backed up across the internet, including as a torrent and to the Internet Archive. The videos included DOGE members unable or unwilling to define DEI; discussions of how they used ChatGPT and terms such as “black” and “homosexual,” but not “white” or “caucasian,” to flag grants for termination; and acknowledgements that, despite their aggressive cuts, they failed to achieve the stated goal of lowering the government deficit.

The news shows the difficulty in trying to remove material from the internet, especially that which has a high public interest and has already been viewed likely millions of times. It’s also an example of the “Streisand Effect,” a phenomenon where trying to suppress information often results in the information spreading further.

💡
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

Alien Life Might Exist on the Starless Moons of Rogue Planets, Scientists Say

2026-03-14 21:00:25

Welcome back to the Abstract! These are the studies this week that searched for life in the dark, stood up for hedgehogs, dropped some wisdom, and died in an inexplicably epic explosion.

First, aliens might be riding around interstellar space on exomoons, just in case that’s of interest to you. Then: an ultrasonic solution to roadkill, the limits of metrification, and an answer to a cosmic mystery.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens or subscribe to my personal newsletter, the BeX Files.

The view from a rogue exomoon

Dahlbüdding, David et al. “Habitability of Tidally Heated H2-Dominated Exomoons around Free-Floating Planets.” 

Living on a planet with a boring old Sun is for normies. In a new study, astronomers suggest that alien life could potentially emerge in a much more unexpected place—”exomoons” that orbit free-floating planets in interstellar space. 

There are likely trillions of rogue planets wandering through the Milky Way, untethered to any star, raising the tantalizing mystery of whether any of them could be habitable. Now, researchers led by David Dahlbüdding of the Max Planck Institute for Extraterrestrial Physics (MPE) extend this question to exomoons that were dragged out into interstellar space with their planets.  

“The search for exomoons within conventional stellar systems continues with no confirmed detection to date,” the team said. “Thus, free-floating planets might offer an alternative pathway for the first discovery of an exomoon.”

In other words, astronomers have never clearly seen an exomoon. But new techniques for spying free-floating worlds—such as microlensing, which reveals objects through the warped light of their gravity—could provide the sensitivity that is required for this long-sought detection.

With regard to potential habitability, Dahlbüdding and his colleagues focused specifically on exomoons that orbit planets with thick hydrogen atmospheres. If such a pair were to be kicked out of a star system, the exomoon’s orbit could become stretched out into a far more elliptical shape. This shift would cause the planet to exert more intense tidal forces onto its satellite, generating heat that could keep liquid water flowing on the moon over vast timescales.

“Close encounters before the final ejection even increase the ellipticity of the moon’s orbit, boosting tidal heating over millions to billions of years, depending on the moon’s and free-floating planet’s properties,” the team said. The tidal forces and atmospheric components could also “create favourable conditions for RNA polymerisation and thus support the emergence of life.”
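For readers curious why a stretched orbit means more heat: in standard tidal theory (this is textbook background, not a formula from the paper itself), the heating rate of a synchronously rotating moon scales with the square of its orbital eccentricity $e$:

$$\dot{E}_{\mathrm{tidal}} = \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G M_p^2 R_m^5\, n\, e^2}{a^6}$$

where $M_p$ is the planet’s mass, $R_m$ the moon’s radius, $a$ and $n$ the orbit’s semi-major axis and mean motion, and $k_2/Q$ the moon’s tidal response. A close encounter that pumps up $e$ therefore boosts internal heating quadratically, which is the mechanism the authors invoke for keeping water liquid.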

“These potentially habitable moons could be detected through a variety of techniques,” including microlensing, the researchers added, though they noted that actually analyzing their atmospheres “may not be feasible with any instruments currently in operation.”

While we may not be able to spot signs of life on these worlds anytime soon, it would be exciting just to discover a planet and a moon bound together, but unbound from any star, which is a genuine near-term possibility.

In other news…

Ultra-sonic the hedgehog

Rasmussen, Sophie Lund et al. “Hearing and anatomy of the ear of the European hedgehog Erinaceus europaeus.” Biology Letters.

Hedgehogs have long been ubiquitous in Europe, but cars now kill up to one-third of their population each year. Even more nightmarish, the advent of robotic lawn mowers has led to an uptick in hedgehog deaths.

To help protect these iconic critters, scientists suggest testing out acoustic repellents. A series of experiments with 20 hedgehogs from a wildlife rescue established that “hedgehogs can perceive a broad ultrasonic range,” with peak sensitivity around 40 kHz.

Rasmussen, who goes by Dr. Hedgehog, with a hedgehog. Image: Joan Ostenfeldt

The results “show a potential for the development of targeted ultrasonic sound repellents to deter hedgehogs temporarily from potential dangers such as the particular models of robotic lawn mowers found to be hazardous to hedgehog survival, and more importantly, cars,” said researchers led by Sophie Lund Rasmussen of the University of Oxford.

“Designing sound repellents for cars to reduce the high number of road-killed hedgehogs enhances animal welfare and supports conservation of this declining flagship species,” the team concluded.

To channel the old joke, why did the hedgehog cross the road? Answer: Ideally it didn’t, due to scientific intervention. (I’ll be here all night).

Dropping in on science history

Cornu, Armel et al. “The drop and the metric system: how an unruly unit survived revolutions.” Annals of Science.

The metric system has been adopted by every country except Liberia, Myanmar, and the United States. But even as metrication spread from the late 18th century onward, a far more imprecise system—the drop—refused to drop out.

People have measured liquids in drop form for thousands of years, and still do in many contexts today. Researchers led by Armel Cornu of Uppsala University have now explored how such “non-standard units survive lengthy waves of standardization.” The paper is worth a read for its many interesting asides, like how acids were tested “by counting the number of drops…that could be placed on the skin before one witnessed the effects.” Gnarly. 

It also gets into the political dimensions of metrication, including this proto-populist justification for standardizing units: “Numerous complaints about the diversity of measurements and their lack of cross-readability” were directed with “a special ire at powerful lords who abused standards in order to extort the population,” Cornu’s team said. The metric system was one response to “the discontent of peasants and the little people against the powerful.”

Anyway, a little bit of drop-related science history never hurt anyone—unless you volunteered to be an acid tester.

A (dead) star is born 

Farah, Joseph et al. “Lense–Thirring precessing magnetar engine drives a superluminous supernova.” Nature.

Astronomers have discovered the mysterious power source of rare and radiant stellar explosions called “Type I superluminous supernovae,” which are ten times brighter than regular supernovae.

The secret superluminous sauce, as it turns out, is the birth of a magnetar, a highly magnetized stellar remnant, according to a supernova first observed in December 2024. The light from this stellar explosion contained imprints of the Lense–Thirring effect, in which spacetime is dragged around by massive and rapidly rotating objects, a key sign of a magnetar origin. 

Artist’s conception of a magnetar surrounded by an accretion disk exhibiting Lense-Thirring precession. Image: Joseph Farah and Curtis McCully

“Our observations are consistent with a magnetar centrally located within the expanding supernova ejecta,” said researchers led by Joseph Farah of Las Cumbres Observatory. “These results provide the first observational evidence of the Lense–Thirring effect in the environment of a magnetar and confirm the magnetar spin-down model as an explanation for the extreme luminosity observed in Type I superluminous supernovae.” 

“We anticipate that this discovery will create avenues for testing general relativity in a new regime—the violent centres of young supernovae,” the team concluded. 

Forget “stellar” as slang for great; we have graduated to “superluminous.” 

Thanks for reading! See you next week.

DOGE Deposition Videos Taken Down After Judge Order and Widespread Mockery

2026-03-14 08:19:58

A judge on Friday ordered the immediate removal of a series of depositions of members of DOGE, but not before clips of the depositions, including one in which a member was largely unable to define DEI, went viral and were covered widely, including by 404 Media.

At the time of writing, the depositions are not available on YouTube, where the Modern Language Association had uploaded them. The MLA, the American Council of Learned Societies, and the American Historical Association are suing the National Endowment for the Humanities (NEH) and others over DOGE’s cuts of hundreds of millions of dollars’ worth of grants. Neither the plaintiffs nor the government immediately responded to a request for comment.