For this week’s Infinite Scroll column, Brady Brickner-Wood is filling in for Kyle Chayka.
Shortly after Elon Musk purchased Twitter, in 2022, he claimed that “removing child exploitation is priority #1.” It was certainly a noble goal—social-media sites had become havens for distributing abusive materials, including child pornography and revenge porn, and there was perhaps no major platform as openly hospitable to such content as Twitter. Unlike Facebook, Instagram, and TikTok, which restricted nudity and pornographic videos, Twitter allowed users to post violent and “consensually produced adult content” to their feeds without consequence. Long before Musk’s takeover, Twitter had positioned itself as anti-censorship, the “free-speech wing of the free-speech party,” as Tony Wang, the general manager of Twitter in the U.K., once put it—less concerned with policing content than with providing a public square for users to express themselves freely. But what were the limits of expression? How was Twitter to plainly determine whether an amateur pornographic video featured a sixteen-year-old or an eighteen-year-old, or if that video was consensually produced or violently coerced? Were these distinctions always so obvious? Twitter staffed trust-and-safety teams, and built tools to scan for images of sexual abuse, removing content that violated company rules. These efforts, however, could not keep up with the sheer amount of explicit content being posted to the site every day. When Musk took over the platform, he wasn’t wrong to identify child pornography as a problem the company needed to address. But how did he plan to prevent the dissemination of dangerous and illegal materials while also making Twitter a supposed home for free-speech absolutists?
These are open questions, and borderline crises, for the company now known as X. The platform has become a bot-ravaged wilderness where engagement-farming accounts and users who pay for blue-check verification run wild, with few meaningful guardrails in place for preventing abusive or violent content from entering algorithmic feeds. Far from having scrubbed the site clean of “child exploitation,” Musk now has an even trickier issue to contend with, one he’s helped nurture and facilitate: the proliferation of sexual images created with Grok, the A.I. chatbot developed by Musk’s artificial-intelligence company, xAI. On New Year’s Eve, Musk asked Grok to produce an image of him in a bikini—“perfect,” he said, when the program obliged—contributing to the recent surge of user-prompted requests to “undress” images of real people, some of whom appeared to be minors. (One estimate found that, amid the trend, Grok had generated roughly one nonconsensual sexualized image a minute.) Wired reported that, on the stand-alone Grok website and app, these altered images and videos were even more sophisticated and more graphic than they were on X. In a bit of damage control, Musk took to X to threaten “consequences” for anyone who used Grok to create sexual images of children. The threat feels hollow; Grok, after all, is partly designed to generate sexual material, even boasting “virtual companions,” such as an anime character named Ani who blows kisses and becomes more promiscuous as users engage with her. The most significant change that Musk has made in response to this controversy is that he has started charging users to create images, sexual or not, with Grok, which seems less a deterrent than a way to profit from the popularity of the service.
You don’t need me to tell you that porn is everywhere, that it has never been so easily accessible. Pulling out your phone and watching a hardcore sex scene is now as simple and straightforward as checking the weather or sending an e-mail. On the so-called tube sites, which aggregate user-uploaded videos—Pornhub chief among them—any iteration or genre of porn can be discovered, with every imaginable category of kink and preference available for perusal. (Last year, the Supreme Court ruled in favor of a Texas law requiring age verification for sites like Pornhub; twenty-four other states have passed similar laws.) But pornography, or at least provocatively erotic material, is not just relegated to tube sites; Instagram and TikTok overflow with soft-core content, some of which serves as advertising for porn performers and models to promote their accounts on OnlyFans, a platform where users pay a subscriber fee to view explicit, sometimes personalized, content. As more of our lives are funnelled into screens, the inescapability of porn—the ease of consuming it, the ever-present possibility of being aroused by it—has become perhaps the most undertheorized area of social upheaval in contemporary life.
Most mainstream pornography—specifically the kind filed under the “most popular” category on the tube sites, a category of porn that the writer Lillian Fishman termed “the front page”—follows a simple premise: a person appears to be pressured into sex they desperately want but which they are not supposed to want. Employees seduce bosses, teachers seduce students, stepchildren seduce stepparents, in-laws seduce one another, friends sleep with each other’s spouses—basically, any taboo that’s not biological incest or violent nonconsent is well represented in popular pornography. In these videos, many of which have millions of views, sex represents a dangerous and illicit possibility, something ravenously desired but ethically volatile. We wouldn’t do that, right? We couldn’t! But, when the characters in these videos indeed decide to do it, the release of the will-we-won’t-we tension exudes a thrill embodied by great, consensual enthusiasm. Abandoning all inhibitive and critical faculties in the face of extreme pleasure, no matter the life-shattering consequences on the other side of the pleasure, is a central tenet of mainstream porn, a trope that knows no end. As Fishman writes, “To consider these playfully coercive setups inherently nonconsensual or degrading is a deliberate misreading of this profound, ubiquitous desire, which must haunt every puritanical and sex-negative society: to disavow what we want and still get it.”
Critics of pornography have often cited the objectification and degradation of women—both the performers themselves and the characters they portray—as reasons for the form’s irredeemable immorality. The scholar Amanda Cawston argues that “pornography is a mode of externalised sexism that provides a form of mediated domination and exploitation that bypasses the usual mechanisms of personal moral evaluation,” echoing the influential feminist theorist Catharine MacKinnon’s belief that pornography sexualizes misogyny and fuels gender inequality. The philosopher Nancy Bauer, however, claims that “within the pornographic mise-en-scène, there is no space for the concept of objectification.” Within the “pornutopia,” as Bauer calls it, “the conflict between reason and sexual desire is eliminated, in which to use another person solely as a means to satisfy one’s own desire is the ultimate way to respect that person’s humanity and even humanity in general.” Other “pro-sex” arguments tend to center on porn not as demeaning entertainment but as a form of labor, one that, for its workers, can be both empowering and paradigm-shifting, a portal to a freer, more inclusive, less repressed society. Such a belief has made OnlyFans seem like a reputable home for sex workers, though the app, like many gig-economy platforms, has devised a model that favors few and silos many, exacerbating, per the writer Benjamin Weil, the “longstanding inequalities within the landscape of sex work.”
The taboos of porn aren’t just limited to those manufacturing pornographic content; they have always extended to the viewers who decide to look at it. But in our hyper-digital era, the problem of watching too much porn has become increasingly intertwined with the problem of too much screen time. In Daniel Kolitz’s viral essay for Harper’s, “The Goon Squad,” he reports on the “gooning” community, a group of people who masturbate to porn on a near full-time basis, extending climax for hours and sometimes days at a time. Many gooners identify as “pornosexuals,” meaning their sexual orientation is directed solely toward their porn consumption. Kolitz’s ethnography, crucially, does not portray gooners as some freakishly niche cohort operating at the outer edges of society. Their plight represents the plight of most people living in our tech-optimized world. Gooners can be our family members and colleagues, our neighbors and friends. Is it that difficult to imagine a person watching porn for multiple hours every day? Isn’t that how people consume social media, anyway—as an infinitely regenerating substitute for the real world?
That there are now X and Grok users who possess the power to create porn with A.I., rather than passively consuming it, indicates a further natural outgrowth of our pornified and overly technologized world. A.I. has been pitched to us, by the companies and people set to profit most from it, as a tool that will improve all areas of our lives: our jobs, our art, our health, our longevity, our relationships, our productivity. Why would we be so naïve as to think that this logic wouldn’t also apply to sex? And, without regulation or responsible intellectual deliberation about the power and effects of these tools, why wouldn’t users push the limits of what’s possible? One might be tempted to believe that A.I. has the potential to mitigate some of the more problematic aspects of porn creation—that it might actually be more ethical to consume an A.I.-generated porn video, starring two computerized actors, than a real video filmed under more dubious circumstances. But, if mainstream pornography is predicated on the idea that the forbidden thing is always the most pleasure-inducing, then it stands to reason that, given the opportunity to make their own porn, people will hew to real-life fantasies they “shouldn’t” have—hence the stream of deepfakes and nonconsensual images that have come to overtake X. Using Grok to develop child pornography, however, transcends any “front page” taboo fetish. Whether the bulk of people using the chatbot to sexualize images of children are rage-baiting trolls or actual child-sex offenders almost seems beside the point; Grok’s ability to make such images in the first place is an indictment of Musk and the tool’s other makers, as it offers a new path for people interested in trafficking child pornography to do so.
In a podcast episode from 2023, Joe Rogan considered the potential benefits of an artificially intelligent President. An A.I. leader, he posited, would be “immune to bias, corruption, influence. Someone who looks at things rationally and in an intelligent way that spans all the disciplines.” This argument, though frighteningly specious, represents a growing belief that A.I. will be an all-wise, all-knowing, godlike operator, one that can benevolently guide life on earth, and beyond, better than humanity ever could. But the increased generation of abusive and violent sexual content with bots like Grok makes clear that machines do not possess an innate value system or an empirical moral code, let alone sentience. After all, these are products designed by people, and sold by corporations, with the goal of cornering as many markets, and netting as much profit, as possible. And sex sells. Musk isn’t the only person in the A.I. world to realize this. Recently, Sam Altman, the chief executive of OpenAI, announced ChatGPT’s “adult mode,” which will allow for erotica. Altman said the company wanted to “treat adult users like adults.” It’s a justification that could be used when deregulating almost anything. ♦