2026-03-04 22:00:52
Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations introduced “hallucinations,” or errors, into the resulting articles.
The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how Wikipedia’s open governance model remedies those errors.
The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.
“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”
The problem is that editors started to notice that some of these translations introduced errors to articles. For example, a draft translation for a Wikipedia article about the French royal La Bourdonnaye family cites a book and specific page number when discussing the origin of the family. A Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia, checked that source and found that the specific page of that book “doesn't talk about the La Bourdonnaye family at all.”
“To measure the rate of error, I actually decided to do a spot-check, during the discussion, of the first few translations that were listed, and already spotted a few errors there, so it isn't just a matter of cherry-picked cases,” Lebleu told me. “Some of the articles had swapped sources or added unsourced sentences with no explanation, while 1879 French Senate election added paragraphs sourced from material completely unrelated to what was written!”
As Wikipedia editors looked at more OKA-translated articles, they found more issues.
“Many of the results are very problematic, with a large number of [...] editors who clearly have very poor English, don't read through their work (or are incapable of seeing problems) and don't add links and so on,” a Wikipedia page discussing the OKA translation said. The same Wikipedia page also notes that in some cases the copy/paste nature of OKA translators’ work breaks the formatting on some articles.
Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.
For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”
Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.
“The use of Grok proved controversial, notably given the reasons for which Grok has been in the news recently, and a recent in-house study showed ChatGPT and Claude perform more accurately, leading them to switch a few days ago, although they still recommend Grok as ‘valuable for experienced editors handling complex, template-heavy articles,’” Lebleu told me.
Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule.
“OKA translators who have received, within six months, four (correctly applied) warnings about content that fails verification will be blocked without further warning if another example is found,” the Wikipedia editors wrote. “Content added by an OKA translator who is subsequently blocked for failing verification may be presumptively deleted [...] unless an editor in good standing is willing to take responsibility for it.”
A job posting for a “Wikipedia Translator” from OKA offers $397 a month for working up to 40 hours per week. The job listing says translators are expected to publish “5-20 articles per week (depending on size).”
“They leverage machine translation to accelerate the process. We have published over 1500 articles and the number grows every day,” the job posting says.
“Given this precarious status, I am worried that more uncertainty in the translator duties may lead to an overloading of responsibilities, which is worrying as independent contractors do not necessarily have the same protections as paid employees,” Lebleu wrote in the public Wikipedia discussion about OKA.
Jonathan Zimmerman, the founder and president of OKA, who goes by 7804j on Wikipedia, told me that translators are paid hourly, not per article, and that there is no fixed article quota.
“We emphasize quality over speed,” Zimmerman told me in an email. “In fact, some of the problematic cases involved unusually high output relative to time spent — which in retrospect was a warning sign. Those cases were driven by individual enthusiasm and speed rather than institutional pressure.”
Zimmerman told me that “errors absolutely do occur,” but that OKA’s process includes human review, requires translators to check their content against cited sources, and that “senior editors periodically review samples, especially from newer translators.”
“Following the recent discussion, we have strengthened our safeguards,” Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
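The cross-check workflow Zimmerman describes can be sketched in code. OKA’s actual prompt and tooling are not public, so everything below — the prompt wording, `build_comparison_prompt`, `cross_check`, and the caller-supplied `query_model` function — is a hypothetical illustration of the described process, not OKA’s implementation:

```python
# Hypothetical sketch of a second-model translation review: the completed
# draft is sent to a separate LLM with a comparison prompt asking it to
# flag discrepancies against the source text. The prompt text here is an
# assumption for illustration only.

def build_comparison_prompt(source_text: str, translation: str) -> str:
    """Assemble a prompt asking a second model to report problems only."""
    return (
        "You are reviewing a translation of a Wikipedia article.\n"
        "Compare the translation against the source text and list any "
        "discrepancies, omissions, or inaccuracies. Do not rewrite the "
        "translation; only report problems.\n\n"
        f"SOURCE TEXT:\n{source_text}\n\n"
        f"TRANSLATION:\n{translation}\n"
    )

def cross_check(source_text: str, translation: str, query_model) -> str:
    """Run the comparison prompt through a second, independent model.

    query_model is a caller-supplied function wrapping whichever LLM API
    is in use; its report goes to a human reviewer, not straight to the wiki.
    """
    return query_model(build_comparison_prompt(source_text, translation))
```

As Zimmerman notes, a step like this only complements manual review; the second model’s report is itself unverified output that a human must act on.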
Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.
Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.
“I agree that using AI to check AI can absolutely fail — and in some contexts it can fail at very high rates. We’re not assuming the secondary model is reliable in isolation,” Zimmerman said. “The key point is that we’re not replacing human verification with automated verification. The second model is a complement to manual review, not a substitute for it.”
“When a coordinated project uses AI tools and operates at scale, it’s going to attract attention. I understand why editors would examine that closely. Ultimately, the outcome of the discussion formalized expectations that are largely aligned with our existing internal policies,” Zimmerman added. “However, these restrictions apply specifically to OKA translators. I would prefer that standards apply equally to everyone, but I also recognize that organized, funded efforts are often held to a higher bar.”
2026-03-04 08:01:04

Scientists have peered inside the Sun and observed subtle shifts and “glitches” that have occurred over four decades, shedding light on the enigmatic long-term vibrations of our star, reports a study published on Tuesday in Monthly Notices of the Royal Astronomical Society.
The Sun goes through a roughly 11-year cycle that includes periods of high and low activity, known as solar maximum and solar minimum. The past few cycles have revealed changes in solar behavior that could have implications for predicting space weather and unraveling the internal dynamics of our Sun, along with other Sun-like stars.
To drill down on this mystery, researchers with the Birmingham Solar-Oscillations Network (BiSON), a network of telescopes that has monitored the Sun since the 1970s, compared the last four solar minima using this unique 40-year dataset and focused on internal vibrations that make the Sun subtly oscillate.
“The entire Sun oscillates in a globally coherent way, and the oscillations are formed by sound waves trapped inside the Sun that make it resonate just like a musical instrument,” said Bill Chaplin, a professor of astrophysics at the University of Birmingham who co-authored the study, in a call with 404 Media.
“For this particular study, we were interested in seeing whether there are differences in what the Sun is doing in its structure when you focus on the periods or epochs when the Sun is very quiet,” he continued. “The last few cycles have seen some quite marked changes in behavior.”
For example, scientists have been perplexed for years by an unusually long and quiet solar minimum between cycles 23 and 24, which occurred from 2008 to 2009. Chaplin and his colleagues were able to use BiSON’s long record of asteroseismology—the study of stellar interiors—to directly contrast the interior vibrations of the Sun during this minimum with others.
“There were hints that there were things that were different” about this cycle, said Chaplin. “But now that we have the cycle 24-25 minimum—the last one in about 2019—in the bag, then we thought, ‘okay, now's the time to actually go back and look at this.’”
The team specifically looked for an acoustic wave “glitch” caused by an interior layer in which helium atoms lose electrons, producing a detectable change in the Sun’s internal structure. This glitch was significantly stronger during the 2008–2009 minimum, suggesting that the Sun’s outer interior was slightly hotter and allowed sound waves to travel faster at that time of magnetic weakness.
“The ionizing helium affects the speed at which the sound waves move through that region,” explained Chaplin. “It leaves a characteristic imprint.”
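The mechanism Chaplin describes can be stated with the standard adiabatic sound-speed relation from stellar physics (a textbook aside, not taken from the study itself):

```latex
% Adiabatic sound speed in a stellar interior:
%   P = pressure, \rho = density, \Gamma_1 = first adiabatic exponent.
% In the He II ionization zone, energy goes into ionizing helium rather
% than raising temperature, so \Gamma_1 dips below its ideal-gas value
% of 5/3. That local dip in c is the "glitch" imprinted on the
% oscillation frequencies.
c^{2} = \frac{\Gamma_{1} P}{\rho},
\qquad
\Gamma_{1} \equiv \left(\frac{\partial \ln P}{\partial \ln \rho}\right)_{\!S}
```

A hotter outer interior shifts where and how strongly this dip occurs, which is how the BiSON team can read temperature changes out of frequency data.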
“It's not just that there is a difference with the other cycles, but it's starting to tell us about what physically has really changed beneath the surface,” he added. “They're quite subtle changes, but it's nevertheless giving us clues as to what is actually happening beneath the Sun during this very quiet period.”
The results confirm that the Sun doesn’t return to the same minimum baseline at the end of every cycle, and that its activity varies on timescales of decades and centuries. For example, Chaplin pointed to one bizarrely long quiet period from 1645 to 1715, known as the Maunder Minimum.
Astronomers during this time marveled at the prolonged lack of visible sunspots on the Sun’s surface, a sign of extremely low solar activity. Centuries later, BiSON and other solar observatories are allowing scientists to study the interior dynamics behind these shifts in depth for the first time.
“This is the first step in actually demonstrating that there are changes,” Chaplin said. “Does this mean that there are systematic changes in the way that the Sun is generating its field? It's really only now, because we have this long dataset, that we can start to ask questions like that. Previously, we just didn't have enough data to say.”
Scientists hope to keep recording the long-term behavior of the Sun with projects like BiSON so that we can better understand its mercurial nature over time. This is interesting work on its own merits, but it is also useful for refining forecasts of space storms that can wreak havoc on power grids and space assets (while also producing pretty auroras).
Chaplin also nodded to the European space telescope PLAnetary Transits and Oscillations of stars (PLATO), due for launch in 2027. This mission will search for analogous oscillations in stars beyond the Sun, building on similar work conducted by NASA’s retired Kepler space telescope.
Studying the vibrations of the Sun and other similar stars is not only important for life here on Earth; it also has implications in the search for extraterrestrial life, because local solar activity is one key to assessing the habitability of star systems similar to our own.
“The data that we have on other stars from Kepler has really helped to understand and get a better picture of the cyclic variability of other stars, like the Sun,” Chaplin concluded. “But it's still not an entirely clear picture; let's put it that way. Seismology now enables you to do really detailed analysis of stars that you can't do by other means.”
2026-03-04 03:57:20

Update: after this article was published, the FBI’s national press office said in a statement that Hemmen “was discussing hypothetical FBI application of AI technology in the context of positive and negative outcomes resulting from the technology's development.” For clarity, 404 Media has updated the headline and included the FBI’s full statement below, but left the original article intact so readers can see the comments made at the conference. The full statement: “DAD Hemmen was discussing hypothetical FBI application of AI technology in the context of positive and negative outcomes resulting from the technology's development. FBI's current deployment of AI is inventoried, reviewed, and reported per Executive Order requirements, OMB guidance, and guidance from other relevant authorities. All FBI operations are conducted in accordance with the Constitution, applicable statutes, executive orders, Department of Justice regulations and policies, and Attorney General guidelines.”
The FBI is using artificial intelligence in what it describes as “remote access operations,” FBI parlance for hacking, according to an FBI official.
The comments, given at a national security and AI conference 404 Media was attending, give an unusually candid admission of the FBI’s use of hacking tools, which are often shrouded in secrecy.
“My team, one of the parts of our capabilities mission is our computer network operations program, where we're doing on-network or remote access operations,” Todd Hemmen, the deputy assistant director of the FBI’s Cyber Division, said on Tuesday. Remote access operations is a turn of phrase for when the FBI remotely enters a computer network; in other words, when the agency hacks into a target.
Specifically for those sorts of operations, “AI has tremendous benefits, not entirely different than the benefits that are being enjoyed by some of our adversarial nation-state actors,” he continued. He pointed to “the speed at which we are able to conduct—autonomous isn't the right word—but AI enabled attacks.”
Hemmen was speaking on a panel about how criminals and nation-states are using AI to power scams and fraud. When 404 Media asked a follow-up question for more details on how the FBI is using AI for its remote access operations, Hemmen said he wouldn’t give any case-specific examples, but spoke more broadly about the benefits.
He pointed to reconnaissance, when a hacker scopes out a target network to find potential ways to break into it. “You have very large attack surfaces; AI can scan those surfaces very, very efficiently. So it's that initial scanning in terms of where are the vulnerabilities, how can I exploit and gain access,” he said. He added that AI can then be used for moving laterally through the network, which is when a hacker moves from one position to another to access more data or capabilities. While a threat actor—a cybercriminal or an adversarial nation-state—may then steal data, “we have a different mission obviously, but I see AI as having applicability across, again, every single tactic that would be relevant to those on-network operations. So it's a game changer in that sense.”
In his role, Hemmen oversees the division’s technical tools. The FBI did not respond to a request for additional comment.
Relatively little is known about what hacking tools the FBI deploys, what sort of cases it decides to deploy them in, and for what exact purpose. Over the years journalists have pieced together parts, though. Previously, the FBI used a “non-public” vulnerability to hack suspected visitors of a dark web child abuse site. The FBI’s Remote Operations Unit (ROU) used classified hacking tools—which are typically reserved for intelligence gathering operations—in ordinary criminal investigations, potentially complicating criminal defendants’ opportunity to scrutinize the evidence collected against them. The FBI has also used hacking tools, euphemistically called network investigative techniques, to investigate bomb threats and the users of a privacy-focused email service. The FBI also purchased hacking tools from the notorious spyware vendor NSO Group and explored using them against phones in the U.S., The New York Times previously reported.
2026-03-04 03:42:41

X said it will temporarily demonetize accounts that share AI-generated war footage without a label. The decision comes days after the US and Israel launched airstrikes in Iran and AI-slop war footage flooded social media timelines across the internet.
“Today we are revising our Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program. During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Nikita Bier, X’s head of product, said in a post on X.
Many of the AI-generated videos currently on X purport to show Iranian ballistic missiles hitting sites in Israel. One video shared thousands of times on X showed missiles slamming into the ground near the Dome of the Rock in Jerusalem while a computer-generated voice said “Oh my god, hear they come.” X users Community Noted the video, but the account that shared it has a Bluecheck and is eligible for a financial payout for engagement as part of X’s content creator program.
“Up to now, the Iranians have been deliberately firing their older missiles and drones, using them as expendable bait to drain US and Israeli air defenses. That strategy clearly worked. Now they’re escalating, rolling out their more advanced ballistic missiles and drones. So… pic.twitter.com/0w1RiT0guC” — Richard (@ricwe123) March 3, 2026
“Tel Aviv, stripped of illusion, as you have never witnessed it. pic.twitter.com/HE3ckjBMti” — Abdulruhman Ismail (@a_abdulruhman) March 3, 2026
Bier said today that X will stop people from making money on unlabeled AI war footage, but won’t stop accounts from sharing it.
“Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days. Subsequent violations will result in a permanent suspension from the program,” he added. “This will be flagged to us by any post with a Community Note or if the content contains meta data (or other signals) from generative AI tools. We will continue to refine our policies and product to ensure X can be trusted during these critical moments.”
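Bier’s mention of “meta data (or other signals) from generative AI tools” likely refers to provenance markers such as C2PA Content Credentials, which some AI video tools embed in their output. X’s actual detection pipeline is not public; the following is a deliberately naive sketch (all function names are illustrative assumptions) that simply scans a file’s raw bytes for a C2PA label:

```python
# Naive provenance check: look for a C2PA marker in a media file's raw
# bytes. Real detection would parse the JUMBF/C2PA manifest structure and
# verify its signatures; this heuristic is illustrative only, and — as the
# article notes — any such metadata can be trivially stripped.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the byte stream contains a C2PA label."""
    return b"c2pa" in data.lower()

def check_file(path: str) -> bool:
    """Read a media file and scan it for the marker."""
    with open(path, "rb") as f:
        return has_c2pa_marker(f.read())
```

Because the metadata is so easy to strip, the Community Note signal is doing most of the work here, which is exactly the after-the-fact, crowd-dependent enforcement the article goes on to criticize.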
Fake war footage shared on social media isn’t a new problem. For years, every new conflict has been met with a flood of fake videos. Old war footage passed off as coming from the current conflict was popular, but so were recordings of video games run through filters to make them look low-resolution. The same three clips from the milsim video game Arma 3 were shared at the outbreak of every new conflict for a decade. The government of Pakistan even shared Arma 3 footage once in a post that’s still live on X.
What is new is the proliferation of easy-to-use AI video-generation tools. AI image and video generation has come a long way in the past few years, and it’s trivially easy to remove the watermark that’s supposed to distinguish generated clips from the real thing. X’s verification system—which rewards accounts for engagement—has also created incentives for Bluecheck accounts to publish fast, verify later (if ever), and rake in the cash. So in the hours and days after the war with Iran began, fake footage of airstrikes and conflict spread on X.
The way X is handling the problem gives the game away. According to Bier, the site will rely on the community to police itself, and the punishment is a 90-day suspension not from the site but from the monetization program.
2026-03-04 03:22:26

Mr. Deepfakes was the biggest website in the world for sharing AI-generated abuse imagery, swapping tips and tricks for more realistic results, and posting endless, fake, nonconsensual videos of everyone from celebrities to everyday people. In a new podcast by the CBC, I got to tell the tale of how deepfakes started, what targets go through, and where we go next.
It's called Understood: Deepfake Porn Empire. It's about the decades-long rise of non-consensual deepfake porn, the targets who are fighting back, and what it takes to stop its proliferation. Check it out here and listen wherever you get your podcasts.
The first three episodes are already up, so you can binge them all before the finale next Tuesday.
In the first episode, "The Dawn of Fake Porn," you’ll get a fascinating history of the decades of cultural and technological standards that set the stage for AI-generated nonconsensual imagery as we know it today. I learned a lot in this episode myself, including about a guy who went by “Lux Lucre” who ran two Usenet groups dedicated to fake nudes of celebrities in the 90s. This stuff goes so much farther back than you might realize.
In episode two, “So You’ve Been Deepfaked,” I got the chance to talk to Taylor, who discovered she’d been targeted by AI images while at university, working in a male-dominated field. Instead of hoping it’d go away, she set out to find her harasser, and found his other targets in the process. It all led back to one place: the biggest deepfake site in the world, Mr. Deepfakes.
Episode three just came out today: “The Notorious D.P.F.K.S.” is a romp through the investigative highs and lows that led a team of journalists scattered around the world to the door of Mr. Deepfakes himself. I was so thrilled to talk to investigative journalist Ida Herskind, OSINT specialist Zakaria Hameed, and Bellingcat’s Ross Higgins in this episode. Come for the How I Met Your Mother references, stay for the gripping chase.
Episode four, the series finale, launches next week. It’s a true crime story with CBC reporters on stakeouts and infiltrating hospitals, and legal and social experts breaking down what it all means now that we’re in a post-Mr. Deepfakes world—but far from a post-AI abuse landscape. Follow the Understood feed wherever you listen to get it when it comes out on Tuesday.
If you liked this season, head back to catch up on another series I hosted with the CBC: Pornhub Empire, on the rise and fall of the porn monolith.
Tune in and let me know what you think!
2026-03-03 22:03:26

Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track people’s precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.
The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that ICE purchase, on Tuesday a group of around 70 lawmakers urged the DHS oversight body to conduct a new investigation into ICE’s location data buying.
This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.