The first generation of parents to have resorted, at least occasionally, to mollifying their children by putting digital screens in their hands has now seen those kids grow up. The parents themselves are increasingly reliant on products powered by algorithms, and teen-agers have become around-the-clock users of social-media apps. Concurrently, mental-health crises among teens have become legion. Do tech companies bear any of the blame? Last month, a California jury concluded that, in the case of a woman known as Kaley, they do, finding Meta and Google liable for her addiction to Instagram and YouTube, which they respectively own, and awarding her six million dollars in damages. The verdict signals an appetite, in the courts and among the public, for making tech companies bear some of the costs of the harm they have allegedly caused; it is the opening legal salvo in a fight over one of the central anxieties of our time.
For decades, the understanding was that social-media companies were essentially immune from any such legal liability. Congress said, sweepingly, in Section 230 of the Communications Decency Act of 1996, that an online platform was not to be treated as a “publisher or speaker” and so could not be held responsible for potentially harmful content posted on it by third parties. But, in 2023, Kaley, who was then seventeen years old, filed suit in California, claiming that she had become addicted to social media as a child, which had caused anxiety, depression, and body dysmorphia. Many other lawsuits alleged similar harm, including one claiming that TikTok and Snapchat had contributed to the suicides of three children. Since plaintiffs were foreclosed, by Section 230, from arguing that social-media companies are liable for publishing harmful content, lawyers came up with a claim that attempted to sidestep it.
The claim was that tech companies had designed social-media apps to “maximize user engagement” with features such as infinite scroll, beauty filters, autoplay, push notifications, and tailor-made algorithms, and that, in so doing, the companies had been negligent. That is, they had failed to exercise reasonable care with respect to the dangers of social-media addiction, a condition that has been recognized by the American Psychological Association and the Surgeon General. Reasonable care, according to the lawyers’ claim, might have entailed, say, companies easing up on the user-engagement features; perhaps they could also have instituted meaningful age-verification or parental-notification measures, or limits on how often and for how long and at what times of day a child could use these apps.
Similar design claims were made against Big Tobacco, beginning in the nineteen-nineties. Those claims alleged that companies had engineered their products to be more addictive, adding chemicals that sped and intensified nicotine delivery to the brain and eased inhalation, so that smokers would become unable to quit. The claims met with limited success in a few states, and in 2006 a federal court ordered tobacco companies to state publicly that they had intentionally designed cigarettes to induce and maintain addiction.
Kaley’s case was selected as a “bellwether”—a test case to go to trial first and show how a jury would react to the claims—from more than a thousand lawsuits filed against social-media companies by individuals and school districts in California, which were consolidated into a single proceeding before a California judge, Carolyn B. Kuhl. She allowed the design claim to go to trial, meaning that the jury would decide, based on the evidence, whether the design features were addictive, whether the companies were negligent in designing them, and whether that addiction had caused harm to Kaley. As Kuhl explained it, “the allegedly addictive features of defendants’ platforms (such as endless scroll) cannot be analogized to how a publisher chooses to make a compilation of information, but rather are based on harm allegedly caused by design features that affect how plaintiffs interact with the platforms regardless of the nature of the third-party content viewed.” Thousands of similar federal lawsuits were also consolidated into a proceeding in a district court in California, and the first federal bellwether trial is scheduled for June. Separately, a coalition of dozens of states sued Meta on similar claims, and a trial in federal court, also in California, can be expected in the next year.
Kaley testified that she had been on YouTube since the age of six, had posted more than two hundred videos by age ten, and had created nine additional social-media accounts for the purpose of liking and commenting on her own content: “I spent all my time on it. I would sneak it. I would watch it in class. Every time I set limits for myself, it didn’t work. I just couldn’t get off,” she said. Social media “made” her give up hobbies and prevented her from making friends. She added that it still consumes her as a twenty-year-old woman: “I just can’t be without it.” When Mark Zuckerberg, the C.E.O. of Meta, testified at the trial, Kaley’s lawyer showed the jury a collage of hundreds of selfies that Kaley had posted to Instagram, which she said she had used since she was eleven.
Meta suggested that Kaley’s mental-health struggles were attributable not to social-media addiction but, rather, to her mother’s emotional and physical abuse and neglect, and that Kaley’s social-media use was not the source of her troubles but a way of coping with them. Kaley denied being abused or neglected, though Meta’s attorneys did show some Instagram posts about her mother screaming at her. But the strategy of attempting to pin the blame elsewhere was stymied, because California’s standard of causation favors plaintiffs: defendants can be liable if their negligence was a “substantial factor” in causing the harm—not necessarily the only cause or even the primary one. So the jury could have decided in Kaley’s favor even if it believed that the platforms’ negligent designs merely contributed to her injury, alongside many other possible causes: school pressures, economic pressures, the political landscape, climate change—or bad parenting.
The contest over causation goes to parents’ simultaneous senses of responsibility and helplessness about their children’s fates. If parents have in the past felt they were competing with bad influences on children—questionable friends, shady neighbors, or profanity-laced music among them—the core anxiety in this era is that algorithms have made it so that there is no competition at all, undermining parents’ opportunity to steer their children right. (The day before the verdict in Kaley’s case, a New Mexico jury imposed a civil penalty on Meta of three hundred and seventy-five million dollars, under state consumer-protection laws, for misleading users about platform safety and enabling child sexual exploitation.) This generation of parents was also warned by those opposed to helicopter or tiger parenting not to monitor kids like hawks, and even to try some “free-range” parenting to let them explore and make mistakes. Meanwhile, engineers in Silicon Valley were allegedly designing ingenious ways to make explorations of digital rabbit holes irresistible. In millions of American homes, while parents were making dinner or paying bills, their kids were in another room scrolling social media and talking to chatbots.
In response to the verdict, a Meta spokesperson said that “teen mental health is profoundly complex and cannot be linked to a single app.” Google said in a statement that the case “misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.” (Both companies said that they would appeal.) In the end, though, what made the verdict remarkable was the relative ordinariness of Kaley’s story. Her testimony about her habits, her behavior, and her anxieties was relatable to many people. The jury award was a spur to recognize such a life, shaped by social-media algorithms in ways that were perhaps nearly impossible to resist, as a serious injury to an entire generation.
But there is a more general dread about human vulnerability to technology—a growing existential fear that people are losing the authorship and agency of their own lives, particularly to artificial intelligence—that will be reflected in an avalanche of related negligent-design legal claims. A dozen California lawsuits against OpenAI alleging injury from the negligent design of A.I. chatbots have been consolidated into a proceeding that is now in its early stages. The suits include a case brought by parents alleging that ChatGPT encouraged their son to die by suicide, offered to write his suicide note, and helped him make the noose with which he hanged himself. Another suit claims that an adult man’s relationship with ChatGPT persuaded him not only that the A.I. was sentient but that his “mathematical theories combined with his past traumas had somehow caused it to become sentient and that, with enough fundraising and resources, he could save the world from destruction.” The theory of the complaints is that it was reasonably foreseeable that vulnerable people would develop psychological dependencies on A.I., and that, in designing it, the companies failed to exercise reasonable care, prioritizing user engagement over user safety. OpenAI has responded by asserting, among other things, that the injuries were caused by the “misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
Blaming the conduct of companies may provide some comfort, by locating the harm in the acts of human beings who wrong other human beings. But the lawsuits underscore the larger-scale loss that we fear is already inevitable. The pursuit of court decisions finding that the design of algorithms made a person unable to stop using them, or that the design of a large language model took away an individual’s reasoned decision-making power, is a meagre resistance to the general anxiety. Whether or not the cases survive appeals or change the incentives and behavior of tech companies, they stand as recognition of the fear that what makes us most human is not the capacity to reason and make choices but, rather, our vulnerability to giving that up. ♦