2026-03-21 07:20:33
A group of former Twitter investors has prevailed at a federal civil trial over Elon Musk's conduct during his $44 billion acquisition of the social platform in 2022. A jury in San Francisco found Friday that Musk's tweets about fake accounts on the platform had defrauded investors in the company. The jury sided with Musk on the other allegations in the case.
It's not yet clear how much Musk will owe in damages as a result of the case but, as the Associated Press reports, it could amount to billions of dollars. Jurors calculated that shareholders should get "between about $3 and $8 per stock per day."
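To see how a per-share, per-day figure can balloon into billions, here's a minimal back-of-the-envelope sketch. Only the $3 to $8 range comes from the verdict; the share count and day count below are hypothetical placeholders, not figures from the case:

```python
def estimate_damages(per_share_per_day: float, shares: int, days: int) -> float:
    """Total damages = per-share daily rate x shares held x days in the class period."""
    return per_share_per_day * shares * days

# Hypothetical inputs: 100 million class-member shares over 10 trading days.
low = estimate_damages(3.0, 100_000_000, 10)   # jury's $3/share/day floor
high = estimate_damages(8.0, 100_000_000, 10)  # jury's $8/share/day ceiling

print(f"${low:,.0f} to ${high:,.0f}")  # → $3,000,000,000 to $8,000,000,000
```

Even with modest assumed inputs, the total lands in the billions, which is why the AP's estimate is plausible despite the small-sounding per-share numbers.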
The class action lawsuit, one of several brought against Musk in the months following his takeover of the company, cited Musk’s tweets about fake accounts on the platform. Facing a sinking Tesla share price in the days after announcing he would buy Twitter for $54.20 a share, the suit said Musk made tweets and statements that were intentionally meant to drive down Twitter's share price in an attempt to renegotiate or exit the deal.
The suit called out Musk's May 13, 2022, tweet that claimed the Twitter deal was "temporarily on hold" due to the number of fake accounts and bots on the platform, as well as one a few days later that suggested fake accounts might account for more than 20 percent of users. Twitter's stock dropped significantly following the May 13 tweet.
During the trial, Musk said the tweets were him "speaking his mind" and maintained that Twitter executives had "lied" about the number of bots on the platform, according to KQED. Former Twitter shareholders, on the other hand, said "they sold shares at deflated prices amid Musk’s public waffling."
Musk faced several lawsuits during and after his $44 billion takeover of the company. These included other shareholder suits related to his delayed disclosure of his stake in the company, as well as one from former executives over unpaid severance benefits (Musk later settled those claims). He also narrowly avoided a trial over his attempts to back out of the deal.
2026-03-21 05:16:30
Pinterest CEO Bill Ready has thrown his support behind an Australian measure banning social media for younger teens and is calling on governments around the world to implement similar bans. "Social media, as it’s configured today, is not safe for young people under 16," Ready writes in a piece published by Time. "We need a clear standard: no social media for teens under 16, backed by real enforcement, and accountability for mobile phone operating systems and the apps that run on them."
Ready is one of the highest-profile tech CEOs to come out in favor of a broad ban on social media for teens. That may seem like a bold stance for someone who runs a platform with a user base that's more than 50 percent Gen Z, but Ready doesn't think the ban should apply to Pinterest. As he notes, Pinterest already bars teens under 16 from messaging and other social features, and it makes teen accounts private by default.
A spokesperson for Pinterest confirmed the company has no plans to change its own policies regarding users under 16, and said Pinterest considers itself a "visual search platform" not social media. Pinterest, like most social media and social media-adjacent companies, doesn't allow users under 13 to sign up.
Social media or not, Pinterest has encountered child safety-related issues in the past. In 2023, NBC News reported that Pinterest's recommendation algorithm was surfacing photos and videos of young girls to adults who were "seeking" such content. Some of those users had created Pinterest boards featuring images of young girls with titles like "sexy little girls," the investigation found. Six months later, the company made profiles for teens under 16 private and "not discoverable."
According to Ready, Pinterest's popularity with younger users is proof its policies are also good for the company's business. "Our experience shows that prioritizing safety and well-being doesn’t push young people away; it builds trust," he writes.
This article originally appeared on Engadget at https://www.engadget.com/social-media/pinterest-ceo-says-teens-under-16-should-be-banned-from-social-media-but-not-pinterest-211630443.html?src=rss
2026-03-21 05:03:36
The White House has announced a new AI policy framework that calls on Congress to craft federal regulation overriding state AI laws. The Trump administration has made multiple attempts to override more restrictive state-level AI regulation, but has so far failed, most notably during the passage of the “One Big Beautiful Bill.”
The framework focuses on a variety of topics, covering everything from child privacy to the use of AI in the workforce. “Importantly, this framework can succeed only if it is applied uniformly across the United States,” The White House writes. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
In terms of child privacy protections, the framework asks Congress to require companies to provide tools like “screen time, content exposure and account controls” while also affirming that “existing child privacy protections apply to AI systems,” including limits on how data is collected and used for AI training. The framework also includes a carveout: states should be allowed to enforce “their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.”
The energy use and environmental impact of AI infrastructure is an ongoing concern, but the White House’s policy proposals are primarily worried about the cost of data centers. The framework suggests federal AI regulation should ensure that higher electricity costs aren’t passed on to people living near data centers, while streamlining the permitting process for AI infrastructure construction so companies can pursue “on-site and behind-the-meter power generation.” The framework also calls for fewer restrictions on the software side of AI development, proposing “regulatory sandboxes for AI applications” and asking Congress to “provide resources to make federal datasets accessible to industry and academia in AI-ready formats.”
While a recent AI bill from Senator Marsha Blackburn (R-Tenn.) attempts to eliminate Section 230, a piece of a larger law that says platforms can’t be held responsible for the speech they host, the framework appears to propose the opposite. “Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel or alter content based on partisan or ideological agendas,” the White House writes. The framework is similarly hands-off when it comes to copyright and the use of intellectual property to train AI. “Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws,” the White House writes, it supports the issue being settled in court rather than by legislation. The White House does, though, think Congress should “consider enabling licensing frameworks” so IP holders can bargain for compensation from AI providers.
The clincher in the White House’s proposal is the idea that federal regulation should preempt state law, specifically so that states don’t “regulate AI development,” don’t “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI” and don’t punish AI companies “for a third party’s unlawful conduct involving their models.” The idea that AI companies aren’t liable for the illegal or harmful uses of their products is particularly problematic because it lies at the heart of multiple intersecting issues with AI right now, including its use to generate sexually explicit images of children and its alleged role in the suicides of users.
Ultimately, though, the framework might be too contradictory to be useful, Samir Jain, the Vice President of Policy for the Center for Democracy and Technology, writes in a statement to Engadget:
The White House’s high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids’ online safety. It rightly says that the government should not coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that. On preemption, the framework asserts that states should not be permitted to regulate AI development, but at the same time rightly notes that federal law should not undermine states’ traditional powers to enforce their own laws against AI developers. States are currently leading the fight to protect Americans from harms that AI systems can create, and Congress has twice correctly decided not to pursue broad preemption.
President Donald Trump has attempted to take an active role in how AI is developed and regulated in the US, with mixed results, primarily because, as Jain notes, Congress has been unwilling to strip states of the right to regulate the technology on their own terms. Without that, it's hard to say how much of the framework will actually make it into federal law.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-white-house-proposes-new-ai-policy-framework-that-supersedes-state-laws-192251995.html?src=rss
2026-03-21 04:44:37
After one too many of you threatened to switch to Linux, Microsoft has published a long list of changes it plans to make to Windows 11. In a lengthy blog post titled "Our commitment to Windows quality," Pavan Davuluri, the executive vice president of Windows and Devices, said the company has spent a "great deal" of time in recent months reading feedback from users. "What came through was the voice of people who care deeply about Windows and want it to be better," he said. To that end, Windows Insiders can expect to see some of the changes Microsoft plans in response to all that criticism begin rolling out this month.
Most notably, Microsoft will ease up on the AI pedal. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well-crafted," writes Davuluri. As a first step, Microsoft says it will remove "unnecessary Copilot entry points," starting with apps like the Snipping Tool, Photos, Widgets and Notepad.
Elsewhere, users can look forward to additional taskbar customization, allowing them to position the interface element at the top or sides of the screen; less disruptive updates, with the option to shut down or restart your device without being forced to install a new patch; and a faster, less janky File Explorer. "Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks," said Davuluri.
Looking beyond the next two months, Microsoft notes it will work to improve performance across Windows, with “lowering the baseline memory footprint” of the operating system a key area of focus. Presumably, this plan of action is as much a response to the global memory shortage as it is user feedback. PC manufacturers are struggling right now, with a recent estimate warning the market could shrink as much as 8.9 percent year-over-year in 2026 due to the high cost of RAM and SSDs. On the subject of reliability, the company says reducing OS-level crashes and releasing higher quality drivers is a priority, as is making Bluetooth and USB connections less prone to errors and disconnects.
Microsoft's promise to fix Windows 11 is long overdue. In January, the company released a couple of emergency updates after what should have been a routine security patch caused bugs that left some PCs unable to shut down and broke Outlook. The general state of the operating system has led many to explore Linux alternatives like Bazzite. With Apple also recently releasing the $600 MacBook Neo, a laptop that few Windows manufacturers can match right now, Microsoft’s dominance in the PC market is looking vulnerable for the first time in more than a decade.
This article originally appeared on Engadget at https://www.engadget.com/computing/microsoft-will-yank-copilot-from-some-windows-apps-and-let-you-move-the-taskbar-again-202857203.html?src=rss
2026-03-21 02:49:28
The US Attorney's Office for the Southern District of New York has charged three people with illegally exporting NVIDIA GPUs to China in violation of the Export Control Reform Act. NVIDIA's chips have become a critical component in the rush to train and run increasingly complex artificial intelligence models, a market the US has sought to control through export controls and profit-sharing schemes with NVIDIA.
The three people, Yih-Shyan "Wally" Liaw, Ruei-Tsang "Steven" Chang and Ting-Wei "Willy" Sun (two employees and one contractor at US IT company Super Micro Computer), allegedly circumvented export control laws via a multi-step scheme that involved creating fake orders for NVIDIA-equipped servers from Southeast Asian companies, which were then secretly rerouted to China. The plan involved paying a logistics company to repackage the servers in Taiwan, staging dummy servers for inspection by Super Micro Computer's compliance team and falsifying records so that the defendants' employer was unaware of where the servers were actually being sent.
The DOJ claims Liaw, Chang and Sun facilitated the illegal purchase of $2.5 billion worth of servers between 2024 and 2025 in direct violation of US export laws. Super Micro Computer is not named as a defendant in the US Attorney's indictment, but the company's stock price has been impacted by the scheme, CNBC writes. In a statement released on Thursday, Super Micro Computer announced that it's distancing itself from Liaw, Chang and Sun. "The individuals charged are Yih-Shyan 'Wally' Liaw, Senior Vice President of Business Development and a member of the Company's Board of Directors; Ruei-Tsang 'Steven' Chang, a sales manager in Taiwan; and Ting-Wei 'Willy' Sun, a contractor," the company writes. "Supermicro has placed the two employees on administrative leave and terminated its relationship with the contractor, effective immediately."
This isn't the first time people have attempted to smuggle NVIDIA's products out of the US illegally, and it likely won't be the last. An estimated $1 billion worth of NVIDIA's AI chips were reportedly sold illegally in the three months after the Trump administration tightened export controls, and in December 2025, Texas authorities seized more than $50 million worth of NVIDIA GPUs bound for China. As long as there's demand for AI, there'll be demand for the hardware that makes it possible.
This article originally appeared on Engadget at https://www.engadget.com/ai/three-people-have-been-charged-with-illegally-exporting-nvidia-gpus-to-china-184928430.html?src=rss2026-03-21 02:36:35
A French officer recently leaked the location of an aircraft carrier by logging a run on the fitness app Strava, which tracks and shares location data. It's not the first time the app has caused a breach like this: Strava data was used to reveal the locations of US military bases back in 2018, and members of the Secret Service accidentally shared their whereabouts while protecting then-US President Joe Biden. The same has happened to President Trump and other world leaders.
🚨🇫🇷 NEW: The location of the French aircraft carrier, FS Charles de Gaulle, has been given away by a sailor using Strava whilst jogging on the ship deck
— Politics Global (@PolitlcsGlobal) March 19, 2026
[@lemondefr] pic.twitter.com/FuoKMAs06w
In other words, the use of Strava to track runs is becoming a global security risk, but it doesn't have to be. If you happen to find yourself in an undisclosed location as part of a military entourage, here are a few ways to keep things private.
Don't want to give up those Strava runs? Just change the settings. On the web, click on "Do Not Share My Personal Information" on the feed page and then look for "Opt Out."
This is also fairly easy for smartphone users. Just head to "Privacy Controls" for the app and follow the prompts on both iOS and Android. Both versions include an option to disable the sharing of personal information, including location data.
Most sports apps track location data, but they don't all share Strava's spotty history. There are plenty of apps out there to choose from, and some are quite good. No matter which one you download, be sure to take steps to change the privacy settings.
Believe it or not, people still jogged before smartphones. Just lace up a pair of shoes and get out there. For extra protection, leave your phone and smartwatch at home.
Are you stuck on an aircraft carrier somewhere in the middle of the ocean? It could be tough to get your steps in, so consider bothering the top brass for a treadmill.
This article originally appeared on Engadget at https://www.engadget.com/apps/heres-how-not-to-leak-military-information-with-your-strava-run-183635879.html?src=rss