2026-02-27 01:02:16

Tea App Green Flags, a service that claims it can “protect your digital reputation,” removes negative posts about men from private online groups where women share “red flags” about men they’ve dated in order to help other women.
The service is another escalation in the age of online dating: women attempting to protect each other from dangerous men in the dating pool, and men fighting back against those efforts. It also shows how some of these allegedly private women’s groups, especially the Tea app, are regularly infiltrated and manipulated by men.
When I reached out to an email listed on Tea App Green Flags’s site, I got a call from a man behind the operation who identified himself only as Jay. He said he started the service about two years ago, and that he initially focused on the Are We Dating the Same Guy Facebook groups. For the past year, he’s been offering services specifically for the Tea app, a “dating safety” app for women that suffered a devastating breach last year and which, my investigation revealed, was founded by a man who wanted to monetize the Are We Dating the Same Guy phenomenon. The site also claims it can remove posts from TeaOnHer, a Tea app copycat for men, as well as posts on Instagram.
Jay declined to say how much revenue the site generates, but claims he gets about 50 to 60 calls a day and currently has six employees. On its website, Tea App Green Flags claims it has removed more than 2,500 posts on the Tea app for 759 clients. Jay said that most of his clients are men, but that some are women who are trying to take down posts about their husbands or boyfriends.
Potential clients can pay $1.99 to report one account and up to $79.99 to report 25 accounts.
“We just want to take down posts about people who are being defamed,” Jay told me. “And when I say defamed, it means like, ‘this guy has a small penis,’ or ‘this guy smells.’ That doesn't fit the mission statement of what the Tea app was for, which is to warn women against people who are harmful, who are abusive, who are cheaters. We've noticed that a lot of the individuals that come to us, almost all of them, come to us for little stupid things.”
Clients interested in Tea App Green Flags’s services go to the site and fill out a form with their information and information about the posts they want removed. The company reviews the case and then starts the “takedown process,” which can take between 21 and 30 days. Tea App Green Flags says it will then continue to monitor posts about the client and remove them for three months.
When I asked Jay how this “takedown process” works he said “I can’t give that info. That’s the business.”
Jay told me that he would not work with clients who have been accused of sexual assault by multiple people on the Tea app, or by one person in one of the Are We Dating the Same Guy Facebook groups who used their real name and face in a profile picture.
“Sometimes we find along the process that there are pedophiles or people who actually did what they did, and they're very bad,” Jay said. “So we say, we're not doing this. We can't take a rap for that. We're ethical. We just want to take down people who are being defamed.”
Jay told me he understands why Facebook groups like Are We Dating the Same Guy are necessary and thinks they are a good idea, but the anonymous nature of the Tea app "causes a cesspool of defamation.”
When I asked Jay what he thinks about the fact that some women don’t feel safe sharing information about some dangerous men unless they can do so anonymously, he said it would be better if women showed their face, or if the Tea app at least gave women that option.
“I have a Tea app account. I'm a dude. All my reps have Tea app accounts. They're men,” Jay said. “How much can you trust these people and what they're doing?”
One reason the Tea app hack was so dangerous is because the app used to ask women to upload a picture of their face in order to verify that they are women. Those images were posted all over the internet because of the hack, putting those women at risk and leading to more harassment.
Tea App Green Flags is far from the first attempt by men to fight back against these types of groups. In 2024, for example, we wrote about a man who tried to sue women who posted about him in Are We Dating the Same Guy Facebook groups. His first case was dismissed, and he refiled days later as a class action lawsuit; later that year, he was sent to prison for tax fraud.
Tea did not immediately respond to a request for comment.
2026-02-26 23:48:40

It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.
Or rather, it was an incredible tool, and the basis for countless investigations, my own and others’. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.
“FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together,” Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.
I’ve used FPDS to reveal that ICE paid Palantir tens of millions of dollars for work on “complete target analysis of known populations” (which then led to a leak from inside Palantir describing the company’s new work for ICE); to figure out that Customs and Border Protection (CBP) spent millions of dollars on software that uses AI to detect “sentiment and emotion” in online posts; and to identify the multiple agencies that bought access to a massive, and warrantless, database of people’s travel histories.
FPDS was very basic, in a very good way. You could type in something like “Clearview AI,” for example, and it would show all the government contracts that mentioned the facial recognition company. That included both contracts with Clearview AI itself and ones with larger government contractors that were reselling the technology and included “Clearview AI” in the item description. Often when digging through government purchasing data you’ll find some surveillance technology is not sold to agencies by the company directly, but by firms that have ongoing relationships with the government.

Then when FPDS displayed the results, it was incredibly easy to get the information you wanted at a glance. Each result was a single rectangle which showed the company that the contract was with, the agency buying the product, and, importantly for me, the broad category of product. This often included things like computer-related services, letting me very quickly figure out whether, as a technology journalist, that is something I should look into. FPDS also displayed new contracts before they appeared in SAM.gov.
The General Services Administration ran FPDS. The idea was to bring FPDS into SAM.gov, so there aren’t a bunch of different sites but a single platform for contractors or the public to explore.
I do use SAM.gov a lot too. But for a singular purpose: to find what agencies might buy in the future. On that site, agencies often post Requests for Information in which they signal the sort of spy tech they are interested in. It’s not a contract or sale, but an indication of what they want to get their hands on.
The thing is, SAM.gov is awful for finding what agencies have actually bought. Searches that would return clear results in FPDS don’t reliably surface them in SAM.gov. You may have to tweak some obscure setting to get them to display. You might need to be logged in for some results (FPDS didn’t require this); for other results, it seems better to not be logged in at all. The results do not immediately show the category of the purchase, such as whether it was technology related or not. You have to filter the results by a specific agency if you don’t want a bunch of noise, but the filters appear finicky and sometimes don’t work. And all of that is only if the data you’re searching for is surfaceable at all through SAM.gov.
As one site that connects agencies and contractors wrote recently, FPDS “has long been the master repository of federal contract activity, containing millions of contract actions that NEVER hit SAM.gov.” Now, maybe they will, but that doesn’t solve SAM’s search issues.
Also, whenever someone pastes a SAM.gov link into 404 Media’s Slack channel, co-founder and journalist Sam Cole gets a notification. “I get excited… someone wants to talk to me. Then it’s SAM.gov 😞,” she told me on Thursday.
The work of journalists and researchers certainly won’t be impossible with SAM.gov. But it is absolutely a less transparent system than the perfectly good one we had until this week.
2026-02-26 23:45:29

The Islamic State’s online warriors are still posting. It’s been almost a decade since the group lost the Battle of Raqqa and saw its IRL territorial ambitions thwarted. Unable to hold territory in the real world, the group renewed its focus on posting and has started using AI to resurrect dead leaders. And, because social media platforms have gutted their content moderation operations, the terror group’s strategy is working.
The Islamic State’s online success is detailed in a new report from the Institute for Strategic Dialogue (ISD), an independent research institution that studies extremist movements. For the study, researchers tracked IS accounts on Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX. They found videos posted in Discord channels dedicated to video games and tracked how the group has modified old content to fit on new platforms.
Like many others posting online in 2026, the Islamic State has found success by talking about the Epstein Files, using AI to create new videos of dead leaders, and taking its message to video games like Minecraft and Roblox.
“They are very adept at exploiting platforms [and] spreading messages,” Moustafa Ayad, a researcher at ISD and author of this study, told 404 Media. He noted that the group has been active online for 10 years and that part of their success is a willingness to experiment.
Ayad said that Facebook remains a central hub for IS, despite its push into new spaces. His research discovered 350 IS accounts on Facebook that generated tens of thousands of views. One video of an IS fighter talking to the camera had more than 77,000 views and 101 shares. The Islamic State branding in the video is blurred to defeat the site’s automated moderation.
According to Ayad, Islamic State’s engagement numbers are up across the board. “Trust and safety teams have been rolled back over the past few years…a lot of this is outsourced to third party companies who aren’t necessarily experts in understanding if a piece of content came from the Islamic State,” he said.
Social media companies like Meta used the election of Donald Trump as an excuse to cut back on moderating their platforms. Meta said this would mean “more speech and fewer mistakes.” No policies around terrorism have changed, but broadly speaking the largest social media platforms are doing a worse job at moderating their sites. In practice it’s turned Facebook into a place where a group like the Islamic State can spread its message without falling afoul of content moderation teams. Even three years ago, IS influencers wouldn’t have lasted long on the site.
This rollback of moderation has coincided with a spike in views for IS accounts, the report argues. “Individual IS ‘influencer’ accounts are experiencing higher engagement rates on terrorist content than previously recorded by ISD analysts,” the report said. “It is unclear if this uptick is due to moderation gaps, platform mechanics or specific tactical adjustments by IS supporters and support outlets and groups.”
“We’re not talking about content where there’s a gray area,” Ayad said. “It’s very clearly branded Islamic State…supports violence, supports the killing of minorities, the celebration of bombings, the pillaging that is happening in Sub Saharan Africa.”
One new development is the use of AI to resurrect dead leaders. Ayad described a video in which the deceased IS leader Abu Bakr al-Baghdadi delivers new speeches.
“It’s a sanctioned version of using AI for a ‘beloved leader’ or taking him out of context and placing him in a meadow, surrounded by beautiful flowers, paying homage,” he said. “Some of these circles are strange.”
Another popular topic in current IS propaganda is the Epstein Files. According to Ayad, an AI-generated photo of Donald Trump and Bill Clinton canoodling in bed makes frequent appearances on IS accounts across platforms. The picture is, supposedly, pulled from the Epstein files but it’s a popular fake. Ayad said Epstein has been a perfect springboard for IS to talk about “western degeneracy.”
Ayad has also seen Islamic State videos created using Minecraft and Roblox. “They’re creating these virtual worlds that mimic the Islamic State’s caliphate, literally calling it something like Wilayat Roblox [the Province of Roblox] … and they’ll completely mimic the video styles of well-known Islamic State Videos using Roblox characters. This includes faux executions. It includes Arabic and English voiceover in the same cadence as an Islamic State narrator.”
One of the most famous pieces of Islamic State propaganda is a film called Flames of War: The Fighting Has Just Begun. Ayad has seen multiple one-for-one recreations of the film using Roblox characters. “They’re often tied to Discords where a number of users are creating this content. They always claim it’s fake or a LARP,” he said. “To see them in this video game skin is odd, to say the least.”
What drives an Islamic State poster? “It’s done very much for the love of the game,” Ayad said. “It’s done for the fact that, as a user, ‘I might not be able to participate in physical Jihad but I can participate in electronic Jihad.’”
Keeping Islamic State off of major social media platforms is a constant battle, but one frustrating finding of the study is that the tactics for avoiding moderation haven’t changed much.
“Techniques included the use of alternative news outlets to rebrand IS news, as well as purchasing or hijacking channels with high subscriber bases. These were then repurposed to share IS content. IS supporters, groups and outlets also use coded language: they sometimes referred to the group as ‘black hole’ or the ‘righteous few’ to confound moderation efforts.”
To fight back against IS online, Ayad said that platforms needed to be better at coordination. Often a group is kicked off of Facebook so it moves to TikTok or another platform where it flourishes. He also said that all the companies need to be more transparent about who they’re kicking off their platform and why.
“Europol does these big takedown days and they’re effective to a certain degree but the fact of the matter is that the Islamic State is spread across an expanse of different platforms and messaging applications,” he said. “They’re able to shift operations to another place, wait it out and regenerate on that platform…it’s not like you’re dealing with an average user, you’re dealing with a user that’s determined to spread their ideology and exploit your platform to their own ends.”
And then there’s the old problem of language. “There needs to just be better moderation of under-moderated languages,” Ayad said. Facebook and other platforms have long been terrible at moderating non-English languages. A lot of rancid content online gets a pass because it’s in Arabic or Bengali.
2026-02-26 04:08:57

Amazon is telling people who use its wishlists feature to switch to post office boxes or non-residential delivery addresses if they want to ensure their home addresses remain private, as part of a change in how it processes gifts bought from third-party sellers. The change is especially concerning to many sex workers, influencers and public figures who use Amazon wishlists to receive gifts from fans and clients.
First spotted by adult content creators raising the alarm on social media, the change exposes anyone who uses wishlists publicly to increased privacy risk unless they change how they receive packages.
In an email sent to list holders, Amazon said beginning March 25, it will reveal users’ shipping addresses to third-party sellers. The platform added that gift purchasers might end up seeing your address as part of this process, too.
Before this change, the only information visible to sellers and gift purchasers was the recipient’s city and state.
“We're writing to inform you about an upcoming change to Amazon Lists. Starting March 25, 2026, we will remove the option to restrict purchases from third-party sellers for list items. When this change takes effect, gift purchasers will be able to purchase items sold by third-party sellers from your lists and your delivery address will be shared with the seller for fulfillment. This change will provide gift purchasers with access to a wider selection of items when shopping from your lists,” Amazon said in the email. “Important note: When gifts are purchased from your shared or public lists, Amazon needs to provide your shipping address to sellers and delivery partners to fulfill these orders. During the delivery process, your address may become visible to gift purchasers through delivery updates and tracking information. To help protect your privacy, we recommend using a PO Box or non-residential address for any list you share with public audiences.”
If you have public wishlists, you can manage individual list settings here and select "manage list." From there you can change your list privacy settings to private or shared to limit who has access, or remove your shipping address entirely by selecting "none" from the dropdown menu.
Most of the popular shipping services in the US, including UPS, FedEx, and the USPS, don’t show full addresses as part of package tracking. But if a third-party seller shares a gift recipient’s home address with a buyer as part of the tracking process, Amazon is saying that’s out of the platform’s control. And some of those delivery services send photos as part of the tracking process for proof of delivery, which could include more information about a recipient’s home or location than they would want a gift sender to see.
“Those who do a range of work where privacy concerns are top of mind would be left to wonder what problem Amazon is solving with this change,” Krystal Davis, an adult content creator who posted about receiving the email from Amazon, told 404 Media. “Those who use these lists as an opportunity to allow fans to show support and offset expenses will lose that option. The alternatives to Amazon wishlist are significantly lacking.”
Many online sex workers use Amazon wishlists to receive gifts from subscribers and fans. It’s a practice that’s gone on for years. Revealing one’s full address to buyers — especially if they don’t realize this change has gone into effect, or missed the email sent by Amazon with the warning to switch to a P.O. box — puts their safety at serious risk. And like so many privacy and security issues that affect sex workers first, anyone could potentially be affected; lots of people use public wishlists who might want to keep their location private, and should consider checking their settings or switching to a non-residential address if they want to maintain that privacy.

Amazon provides conflicting information on when and how this change will go into effect. The email sent to wishlist holders says it will start on March 25, 2026, but as of writing, a notice on the “Manage List” settings page said starting February 25, third party sellers will see users’ shipping addresses. Amazon confirmed to 404 Media that the option to restrict purchases from third-party sellers for list items is being removed on March 25, one month from today.
2026-02-25 23:19:36

This week we start with Jason’s follow-up to Ring launching its ‘Search Party’ feature. It turns out, according to a leaked email he got, the feature is only starting with finding lost dogs. After the break, Emanuel explains why we’ve learned nothing about amplification when it comes to the recent looksmaxxing trend. In the subscribers-only section, Sam explains how Grok produced the real name of a sex worker who performs pseudonymously.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
1:11 - Leaked Email Suggests Ring Plans to Expand ‘Search Party’ Surveillance Beyond Dogs
30:26 - We Have Learned Nothing About Amplifying Morons
Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked
2026-02-25 23:16:52

This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.
The FBI got a search warrant for X to provide details on the Grok prompts a man allegedly used to create more than 200 nonconsensual sexual videos of a woman he knew in real life, according to court records.
The details of the investigation are contained in an FBI affidavit about the alleged actions of Simon Tuck, who is accused of extensively harassing and threatening the woman’s husband. Tuck regularly worked out and texted with the woman and, according to the affidavit, secretly filmed her while she was working out in his garage. Over the course of the last several months, Tuck allegedly swatted their home, made a series of anonymous reports to the man’s employer claiming that he was a child abuser and a drug addict, and posed as the man to make mass shooting and suicide threats. Tuck also made other threats and engaged in bizarre behavior, including reaching out to a funeral home to say that the man would be dead soon and sending threats to the man while posing as a member of Sector 16, a Russian hacking crew.
The affidavit notes that, in January, the FBI got a search warrant for the man’s conversations with Grok. The FBI says that it received “prompts provided to GrokAI that generated approximately 200 pornographic videos of a woman who closely resembled VICTIM’s wife’s physical appearance.”
“For example, in one prompt, TUCK queried: ‘In a sensual sports style, a confident blonde woman playfully undresses on a tennis court, starting with her white crop top pulled up to expose her bare breasts. She has long wavy hair, a toned athletic body, and a flirtatious smile, wearing a short navy pleated skirt and holding a racket. She slowly lowers her top, revealing full nudity, tosses her hair, and swings the racket teasingly, with a surprising clumsy spin like a comedic twirl,’” the affidavit says.

The FBI says that Tuck also allegedly used Grok to create a complaint about the woman’s husband that was then filed to the company he works for.
The actions described in the affidavit are extreme and horrifying, but are not terribly out of the ordinary for harassment cases that we have reported on before. What’s notable here is that this case shows that law enforcement is looking at chats with AI bots as potential sources of evidence and that X is complying with these requests.
Most importantly, it highlights X’s role in allowing Grok to create nonconsensual sexual material in a criminal case that involves extreme cyberstalking and real life harm. According to the affidavit, Tuck used Grok to create this nonconsensual sexual material at the same time that Grok was being heavily criticized for creating child sexual abuse material. This all happened during the “undress her” phenomenon, which showed just how terrible Grok’s content moderation is. Last week, we also reported that Grok was used to reveal the real name of an adult performer.
Correction: This piece originally said the FBI issued Grok with a subpoena. It was a search warrant.