2026-04-22 07:21:31
SpaceX and AI company Cursor have struck a new partnership that could see the owner of X buy the AI company for $60 billion later this year. "SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI," SpaceX wrote in a post on X.
SpaceXAI and @cursor_ai are now working closely together to create the world’s best coding and knowledge work AI.
— SpaceX (@SpaceX) April 21, 2026
The combination of Cursor’s leading product and distribution to expert software engineers with SpaceX’s million H100 equivalent Colossus training supercomputer will…
According to SpaceX, the deal allows it to either invest $10 billion in the company known for its AI coding tool or acquire it entirely "later this year" for $60 billion. If an acquisition were to happen, it's not clear at what point Cursor could officially join the fold of Elon Musk's rapidly expanding and increasingly enmeshed web of companies. SpaceX bought xAI, the billionaire's AI company that also controls X, earlier this year. SpaceX is currently getting ready to go public this summer in what will likely be the biggest initial public offering (IPO) in history.
Cursor, which has reportedly been in talks to raise its own $2 billion round of funding, is known for its AI coding tool of the same name that's become the vibe coding platform of choice for many developers. It allows people to use either its own models or those from other leading AI companies, including OpenAI, Google, Anthropic and xAI.
In a statement, Cursor said its partnership with SpaceX will "accelerate our model training efforts" while addressing infrastructure-related issues that have slowed it down in the past. "We've wanted to push our training efforts much further, but we've been bottlenecked by compute," the company said. "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models for coding and beyond."
This article originally appeared on Engadget at https://www.engadget.com/ai/spacex-and-cursor-strike-partnership-that-might-end-in-a-60-billion-acquisition-232131487.html?src=rss
2026-04-22 06:43:30
Anthropic's buzzy announcement earlier this month about using AI to improve cybersecurity was met with plenty of skepticism. However, Mozilla has shared some details that support the use of the company's special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla's team find and patch 271 vulnerabilities in the latest release of the Firefox browser. "So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t," the foundation said.
The blog post from Mozilla feels like a positive sign for Anthropic's Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there's something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI wasn't able to turn up any bugs that a human wouldn't have been able to find, given enough time and resources, which indicates that AI isn't presently able to do more to crack cybersecurity protections than a person can.
An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for Firefox users who aren't personally interested in applying any generative AI to their browsing, Mozilla has offered the option to turn it all off for the past several months.
This article originally appeared on Engadget at https://www.engadget.com/ai/mozilla-says-it-patched-271-firefox-vulnerabilities-thanks-to-anthropics-claude-mythos-224330023.html?src=rss
2026-04-22 05:06:51
Cash App, the banking and payments app run by Block, has added support for parent-managed kids accounts. The new accounts include key benefits from the service's normal account, with an eye towards teaching financial literacy to younger users ages 6 to 12. Cash App first allowed teenage users on its platform in 2021.
As part of the "expanded Cash App Families experience," eligible legal guardians and parents can create managed accounts that offer "a dedicated place on the platform to send allowances, set aside savings, and track spending for their child, kickstarting their path to financial independence," Cash App says. Adults managing these accounts will be able to set up recurring transfers, see how their child is spending and do things like lock their child's account to prevent transactions. Kids will get a custom debit card and the ability to receive payments from up to five trusted accounts, though notably they won't be able to access Cash App itself.
Today, we're launching Cash App accounts for kids age 6-12. Parents manage the accounts. Kids get to learn about safety, start saving for goals, and design and use their own debit card.
— Kristen Anderson (@FintechKristen) April 21, 2026
Next generation banking never looked so good.
Proud of the team for this one. pic.twitter.com/jIAcbvsfB9
Cash App says managed accounts are designed for kids 6 through 12. Once those kids turn 13, Cash App says parents will be able to choose to convert their account to a "sponsored account" to unlock more features, like the ability to send and receive payments, invest in stocks or trade crypto. Those sponsored accounts are technically still monitored and controlled by a parent or legal guardian, but they do give 13-year-olds more control over how they use their money.
A parent-managed account for kids is not a new idea in the fintech space, though Cash App is trying to reach a younger audience than some of its competitors. Venmo rolled out access to its payment platform to teens between the ages of 13 and 17 in 2023. Separately, Apple and Google also offer their own kids accounts through Apple Cash Family and Google Wallet, respectively.
This article originally appeared on Engadget at https://www.engadget.com/apps/cash-app-now-supports-accounts-for-kids-6-12-210651025.html?src=rss
2026-04-22 04:51:19
YouTube notifications can get messy fast, particularly if you’re subscribed to a lot of different channels. To address that, today the company will begin muting push notifications from creators that you haven’t engaged with in the last month.
The change to YouTube notifications began as a small trial the company tested out earlier this year. The idea behind it is that if a viewer continually receives notifications about content they don't engage with, this may eventually cause the user to disable YouTube notifications altogether. Now obviously, this is bad for YouTube. Turning off notifications means people will use the platform less, thereby resulting in lower revenue. However, it's also bad for content creators, especially the ones you do like, who will have one fewer avenue to keep you updated about new and upcoming videos.
So starting today, for channels that you have subscribed to and have notifications set to "all," YouTube will no longer send out push notifications to mobile devices from creators that you haven't interacted with for one month. That said, these notifications will continue to be available inside the YouTube app in your inbox (the little bell icon in the top right).
Notably, for those who are clicking on notifications and watching related videos, nothing will change. Additionally, based on info from the test earlier this year, YouTube said "channels that upload infrequently will not have their notifications affected." This is a good thing, especially for creators who post long-form content that takes extra time to make, as people probably don't want notifications to go away in case they happen to miss a once-a-month upload.
One thing that's unclear is whether YouTube will automatically restart push notifications if you start watching a channel you haven't interacted with in a while. Even so, YouTube's new protocol seems like a good way to cut down on the alerts clogging up your phone.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-is-muting-push-notifications-from-channels-you-dont-watch-205119228.html?src=rss
2026-04-22 03:52:23
When online platforms violate their own privacy policies to sell your photos, have no fear: They just might have to pay an undisclosed settlement fee 12 years later. (Who says justice is dead?) According to Reuters, AI company Clarifai says it has deleted 3 million profile photos taken from dating site OkCupid in 2014. It follows a settlement reached last month between the FTC and Match Group, OkCupid's owner.
The Delaware-based Clarifai reportedly certified the data deletion to the FTC on April 7. The company also confirmed to US Representative Lori Trahan (D-MA) that it deleted any models that trained on the data. Clarifai told the representative's office that it hadn't shared the data with third parties.
The FTC opened the investigation in 2019, after The New York Times reported that Clarifai had built a training database using OkCupid dating profile photos. The behavior was a direct violation of OkCupid’s privacy policy. Court documents reviewed by Reuters reveal that Clarifai asked OkCupid executives for the data in 2014. Apparently, they obliged.

"We're collecting data now and just realized that OkCupid must have a HUGE amount of awesome data for this," Clarifai founder Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn. The AI startup used the dating site's images to build a facial recognition service that can identify a person's age, gender and race. (Another brilliant and totally ethical idea from Clarifai, tapping into unsecured city surveillance cameras without authorization, was reportedly shuttered.)
Zeiler suggested to The New York Times in 2019 that people needed to, well, get over it. "There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that," the AI founder declared. Some of OkCupid's founders were reportedly investors in Clarifai.
As part of the settlement, the FTC "permanently prohibited" OkCupid from misrepresenting its data collection and privacy controls. TechCrunch notes how strange it is to use that as a penalty, given that FTC rules already bar that behavior.
This article originally appeared on Engadget at https://www.engadget.com/ai/ai-company-deletes-the-3-million-okcupid-photos-it-used-for-facial-recognition-training-195223996.html?src=rss
2026-04-22 03:32:20
Meta is facing a new lawsuit over its advertising practices. The nonprofit group Consumer Federation of America (CFA) has filed a proposed class-action suit against Meta for "failing to protect users" from scam ads on Facebook and Instagram.
The lawsuit, which was first reported by Wired, alleges that Meta has run afoul of consumer protection laws in Washington, D.C. by misleading Facebook and Instagram users about scams on its apps, and that the company has "chased profits rather than protecting its users." The filing includes numerous examples of alleged scam ads that CFA says it found in Meta's ad library. These include ads promoting a "free government iPhone," as well as ads claiming to offer $1,400 checks to people born in certain years. Many of the ads use AI-generated videos, according to CFA.

Meta's advertising practices have been in the spotlight since last year when Reuters reported on internal documents that indicated the company was making billions of dollars from ads promoting scams and banned goods. The report also highlighted how Meta's own processes have at times made it harder for its own employees to fight malicious advertisers.
"Meta claims it is doing all it can to crack down on scam advertising on its platforms," CFA's lawsuit states. "But in reality, Meta has knowingly taken steps and adopted policies that pad its bottom line at the expense of its users’ safety and well-being. In fact, rather than prohibiting advertisers who the company itself has determined pose a higher risk to its users (as other tech companies like Google have), Meta just charges these advertisers more. The perverse result is that the riskier the advertiser, the more money Meta makes."
CFA's allegations "misrepresent the reality of our work and we will fight them," a Meta spokesperson said in a statement. "We aggressively combat scams across our platforms to protect people and businesses — last year alone, we removed over 159 million scam ads, 92% of which we took down before anyone reported them, and took down 10.9 million accounts on Facebook and Instagram associated with criminal scam centers. We fight scams because they are bad for business — people don't want them, advertisers don't want them, and we don't want them either.”
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-has-misled-users-about-scam-ads-on-facebook-and-instagram-lawsuit-says-193220235.html?src=rss