Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts carry over key features from the service's standard accounts, with an eye toward teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.
As part of the "expanded Cash App Families experience," eligible legal guardians and parents can create managed accounts that offer "a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence," Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child's account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won't be able to access Cash App itself.
"Today, we're launching Cash App accounts for kids age 6-12. Parents manage the accounts. Kids get to learn about safety, start saving for goals, and design and use their own debit card. Next generation banking never looked so good. Proud of the team for this one."
— Kristen Anderson (@FintechKristen), April 21, 2026
Cash App says managed accounts are designed for kids 6 through 12. Once a child turns 13, parents can choose to convert the account to a "sponsored account" to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.
A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens ages 13 to 17 in 2023. Separately, Apple and Google offer their own kids accounts through Apple Cash Family and Google Wallet, respectively.
This article originally appeared on Engadget at https://www.engadget.com/apps/cash-app-now-supports-accounts-for-kids-6-12-210651025.html?src=rss
YouTube notifications can get messy fast, particularly if you’re subscribed to a lot of different channels. To address that, today the company will begin muting push notifications from creators that you haven’t engaged with in the last month.
The change to YouTube notifications began as a small trial the company ran earlier this year. The idea behind it is that if a viewer keeps receiving notifications about content they don't engage with, they may eventually disable YouTube notifications altogether. That's obviously bad for YouTube: turning off notifications means people use the platform less, which means lower revenue. But it's also bad for content creators, especially the ones you do like, who lose one avenue for keeping you updated about new and upcoming videos.
So starting today, even for channels you've subscribed to with notifications set to "all," YouTube will stop sending push notifications to your mobile devices if you haven't interacted with that creator for one month. That said, these notifications will still be available inside the YouTube app in your inbox (the little bell icon in the top right).
Notably, for those who are clicking on notifications and watching related videos, nothing will change. Additionally, based on info from the test earlier this year, YouTube said "channels that upload infrequently will not have their notifications affected." This is a good thing, especially for creators who post long-form content that takes extra time to make, as people probably don't want notifications to go away in case they happen to miss a once-a-month upload.
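Based only on what YouTube has described, the rule boils down to a simple predicate. Here's a minimal sketch in Python; to be clear, the Subscription model, the 30-day window and the cutoff for "uploads infrequently" are illustrative assumptions, not YouTube's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical data model for illustration only; YouTube hasn't
# published how it represents subscriptions internally.
@dataclass
class Subscription:
    notifications: str          # "all", "personalized" or "none"
    last_interaction: datetime  # last time the viewer clicked/watched this channel
    uploads_per_month: float    # rough upload cadence

def should_send_push(sub: Subscription, now: datetime) -> bool:
    """Decide whether a new upload triggers a mobile push notification."""
    if sub.notifications != "all":
        return False  # the muting rule only concerns "all" subscriptions
    if sub.uploads_per_month < 1:  # assumed cutoff for "uploads infrequently"
        return True   # infrequent uploaders are exempt, per YouTube's test
    # Mute pushes after a month without engagement; the notification
    # still lands in the in-app inbox (not modeled here).
    return now - sub.last_interaction <= timedelta(days=30)

now = datetime(2026, 4, 22)
# Engaged a week ago: push still goes out
print(should_send_push(Subscription("all", datetime(2026, 4, 15), 8.0), now))  # True
# Untouched for six weeks: push is muted
print(should_send_push(Subscription("all", datetime(2026, 3, 1), 8.0), now))   # False
```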
The one thing that's unclear is whether YouTube will automatically resume push notifications if you start watching a channel you haven't interacted with in a while. Still, as a way to prevent too many alerts from clogging up your phone, YouTube's new policy seems like a good way to cut down on the clutter.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-is-muting-push-notifications-from-channels-you-dont-watch-205119228.html?src=rss
When online platforms violate their own privacy policies to sell your photos, have no fear: They just might have to pay an undisclosed settlement fee 12 years later. (Who says justice is dead?) According to Reuters, AI company Clarifai says it has deleted 3 million profile photos taken from dating site OkCupid in 2014. The deletion follows a settlement reached last month between the FTC and Match Group, OkCupid's owner.
The Delaware-based Clarifai reportedly certified the data deletion to the FTC on April 7. The company also confirmed to US Representative Lori Trahan (D-MA) that it deleted any models that trained on the data. Clarifai told the representative's office that it hadn't shared the data with third parties.
The FTC opened the investigation in 2019, after The New York Times reported that Clarifai had built a training database using OkCupid dating profile photos. The behavior was a direct violation of OkCupid’s privacy policy. Court documents reviewed by Reuters reveal that Clarifai asked OkCupid executives for the data in 2014. Apparently, they obliged.

"We're collecting data now and just realized that OkCupid must have a HUGE amount of awesome data for this," Clarifai founder Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn. The AI startup used the dating site's images to build a facial recognition service that can identify a person's age, gender and race. (Another brilliant and totally ethical idea from Clarifai, tapping into unsecured city surveillance cameras without authorization, was reportedly shuttered.)
Zeiler suggested to The New York Times in 2019 that people needed to, well, get over it. "There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that," the AI founder declared. Some of OkCupid's founders were reportedly investors in Clarifai.
As part of the settlement, the FTC "permanently prohibited" OkCupid from misrepresenting its data collection and privacy controls. TechCrunch notes how strange it is to use that as a penalty, given that FTC rules already bar that behavior.
This article originally appeared on Engadget at https://www.engadget.com/ai/ai-company-deletes-the-3-million-okcupid-photos-it-used-for-facial-recognition-training-195223996.html?src=rss
Meta is facing a new lawsuit over its advertising practices. The nonprofit group Consumer Federation of America (CFA) has filed a proposed class-action suit against Meta for "failing to protect users" from scam ads on Facebook and Instagram.
The lawsuit, which was first reported by Wired, alleges that Meta has run afoul of consumer protection laws in Washington, D.C. by misleading Facebook and Instagram users about scams on its apps, and that the company has "chased profits rather than protecting its users." The filing includes numerous examples of alleged scam ads that CFA says it found in Meta's ad library. These include ads promoting a "free government iPhone," as well as those claiming to offer $1,400 checks to people born in certain years. Many of the ads use AI videos, according to CFA.

Meta's advertising practices have been in the spotlight since last year when Reuters reported on internal documents that indicated the company was making billions of dollars from ads promoting scams and banned goods. The report also highlighted how Meta's own processes have at times made it harder for its own employees to fight malicious advertisers.
"Meta claims it is doing all it can to crack down on scam advertising on its platforms," CFA's lawsuit states. "But in reality, Meta has knowingly taken steps and adopted policies that pad its bottom line at the expense of its users’ safety and well-being. In fact, rather than prohibiting advertisers who the company itself has determined pose a higher risk to its users (as other tech companies like Google have), Meta just charges these advertisers more. The perverse result is that the riskier the advertiser, the more money Meta makes."
CFA's allegations "misrepresent the reality of our work and we will fight them," a Meta spokesperson said in a statement. "We aggressively combat scams across our platforms to protect people and businesses — last year alone, we removed over 159 million scam ads, 92% of which we took down before anyone reported them, and took down 10.9 million accounts on Facebook and Instagram associated with criminal scam centers. We fight scams because they are bad for business — people don't want them, advertisers don't want them, and we don't want them either.”
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-misled-users-about-scam-ads-on-facebook-and-instagram-lawsuit-says-193220235.html?src=rss
New York is the latest state to take a stand against prediction markets. Attorney General Letitia James has sued Coinbase Financial Markets and Gemini Titan on charges that both are illegally running unlicensed gambling operations. The suit also claims that these prediction markets violate state laws that prevent betting on games involving New York college sports teams.
"Gambling by another name is still gambling, and it is not exempt from regulation under our state laws and Constitution," James said. "Gemini and Coinbase’s so-called prediction markets are just illegal gambling operations, exposing young people to addictive platforms that lack the necessary guardrails."
Multiple states have taken similar actions over the proliferation of prediction markets, but they may face a new roadblock at the federal level. Earlier this month, the US Commodity Futures Trading Commission sued three of the states that have charged prediction markets with running unlicensed gambling. The CFTC claimed that it should be the sole regulator for prediction markets and called the efforts by Arizona, Connecticut and Illinois an overreach of authority.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/new-york-attorney-general-sues-two-prediction-markets-on-illegal-gambling-allegations-192012225.html?src=rss
Florida Attorney General James Uthmeier has announced that the state's Office of Statewide Prosecution has opened a criminal investigation into OpenAI and ChatGPT. The investigation was opened because the suspect in a 2025 mass shooting at Florida State University reportedly used ChatGPT in the lead-up to the attack.
Per Uthmeier, "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, and that crime is committed or attempted, may be considered a principal to the crime." That means that the responses provided by ChatGPT to the shooter could be interpreted as the AI assistant aiding and abetting his actions. Or at least that's what Florida seems interested in arguing.
OpenAI provided the following statement when asked to comment on the Florida investigation:
Last year's mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime. After learning of the incident, we identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement. We continue to cooperate with authorities. In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity. ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.
As part of the investigation, Florida has subpoenaed OpenAI for information on "all policies and internal training materials" related to how the company handles things like users threatening to harm others, threatening to harm themselves and how OpenAI responds to law enforcement. The state is also asking OpenAI to share its organizational chart and any publicly released statements on the shooting.
"Florida is leading the way in cracking down on AI's use in criminal behavior, and if ChatGPT were a person, it would be facing charges for murder," Attorney General James Uthmeier said. "This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT's actions in the shooting at Florida State University last year."
Florida’s investigation isn’t the first time OpenAI has been connected to a mass shooting. Canadian regulators called for OpenAI to change how it approaches threats of harm after a Wall Street Journal report claimed the company flagged the account of a Canadian shooting suspect in 2025 but failed to bring their threats to law enforcement; OpenAI agreed to new policies around how it works with Canadian law enforcement in March. Separately, the company is still in the midst of a wrongful death lawsuit filed in 2025 over the role it may have played in the suicide of a teenage user.
This article originally appeared on Engadget at https://www.engadget.com/ai/florida-ag-opens-criminal-investigation-into-openai-and-chatgpt-190200227.html?src=rss