Blog of Anil Dash

A tech entrepreneur and writer trying to make the technology world more thoughtful, creative and humane.

What do coders do after AI?

2026-03-13 08:00:00

For the New York Times Magazine this Sunday, I talked to Clive Thompson about one of the conversations that I'm having most often these days: What happens to coders in this current moment of extraordinarily rapid evolution in AI? LLMs are now quickly advancing to where they can virtually become entire software factories, radically changing both the economics and the power dynamics of software creation — which has so far mostly been used to displace massive numbers of tech workers.

But it's not as simple as "bosses are firing coders now that AI can write code".

For one thing, though there are certainly a lot of companies where executives are forcing teams to churn out slop code, and using that as an excuse to carry out mass layoffs, there are plenty of companies where "AI" is just a buzzword being used as a pretense for layoffs that owners have wanted to do anyway. And more importantly, there are a growing number of coders who are having a very different experience with the tools than those bosses may have expected — and a very different outcome than the Big AI labs may have intended. As I said in the story:

“The reason that tech generally — and coders in particular — see LLMs differently than everyone else is that in the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, LLMs take away the drudgery and leave the human, soulful parts to you.”

This is a point that's hard for a lot of my artist friends to understand: how come so many coders don't just hate LLMs for stealing their work the way that most writers and photographers and musicians do? The answer boils down to three things:

  • Coders have long had a history of openly sharing code with each other, as part of an open source, collaborative culture that goes back more than half a century.
  • Tools for writing and creating code have almost always offered a certain degree of automation and reuse of work, so generating code doesn't feel like as radical a departure from past practices.
  • Software development is one of the fields with the least-advanced cultures around labor: workers have almost no history of organizing, and many coders side much more with management, because they've been conditioned to think of themselves as "future founders" rather than being in solidarity with other workers.

What this means is that attitudes about automation and worker displacement in tech are radically different than they would be in something like the auto industry. In many cases, I've found that coder workforces have a shockingly low level of literacy about past labor movements, even though their technical knowledge is obviously extremely high.

Coders, in their heads and hearts

To be somewhat reductive about it, there are two main cohorts of coders. There's a larger, less vocal group who see coding as a stable, well-paying career that they got into in order to support themselves and their families, and to partake in the upward economic mobility that the tech sector has represented for the last few decades. Then there is the smaller, more visible group who have seen coding as an avocation, which they were drawn to as a form of creative expression and problem-solving just as much as a career opportunity. They certainly haven't been reluctant to capitalize on the huge economic potential of working in tech (this is the group that most startup founders come from), but coding isn't simply something they do from 9 to 5 and then put away at the end of the day. For those of us in this group (yeah... I'm one of these folks), we usually started coding when we were kids, and we have usually kept doing it on nights and weekends ever since, even if it's no longer part of our jobs.

Both cohorts of coders are in for a hard time thanks to the new AI tools, but for completely different reasons.

For the 9 to 5

The people who started to write software just because it represented a stable job, but who don't see it as part of their own personal identity, are going to be devastated by the ruthlessness with which their bosses will swing the ax. These new LLM-powered software factories can generate orders of magnitude more of the standardized business code that tends to be the bread-and-butter work for these journeyman coders, and it's not the kind of displacement that can be solved by learning a new programming language on nights and weekends, or by getting a new professional certification. Much of the "working class" of the tech industry (speaking of the roles they perform functionally within the system; these are obviously jobs that pay far more than working-class salaries today) is seen as a ripe target for deskilling, where lower-paid product roles delegate coding tasks to AI coding systems, or for outright automation, with management giving orders directly to those AI systems.

One of the hardest parts of reckoning with this change is not just the speed with which it is happening, but the level of cultural change that it reflects. Coders are generally very amenable to learning new skills; it's a necessary part of the work, and the mindset is almost never change-averse. But this transition cuts closer to people's sense of self-worth and identity than to their sense of simply having to acquire new knowledge or skills. It doesn't help that the change is being catalyzed by some of the most venal and irresponsible leaders in the history of business, brazenly acting without any moral boundaries whatsoever.

For the nights and weekends

For the coders who see being a coder as part of their identity, the LLM transformation is going to represent an entirely different set of challenges. They may well survive the transition that is coming, but find themselves in an unrecognizable place on the other side of it. These new LLM-based tools work as virtual software factories that churn out nearly all of the code for you. The actual work of writing the code is abstracted away; the creator focuses instead on describing the desired end results, and on testing that everything works correctly. You're the conductor of the symphony rather than someone holding a violin.

But there are people who have spent decades honing their craft, committing to memory the most obscure vagaries of this computer processor or that web browser or that one gaming console, all in service of creating code that was particularly elegant or especially high-performing, or just really satisfying to write. There's a real art to it. When you get your code to run just so, you feel a quiet pride in yourself, and a sense of relief that there are still things in the world that work as they should. It's a little box that you can type in where things are fair. It's the same reason so many coders like to bake, or knit, or do woodworking — they're all hobbies where precisely doing the right thing is rewarded with a delightful result.

And now that's going away. You won't see the code yourself anymore; the robots will write it for you, flailing around and clanking as they go. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work. Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way.

Your job changes into describing software. Now, if you're the kind of person who only ever wanted to have the end result, maybe this is a liberation. Sometimes, that's what mattered — we wanted to fast-forward to the end result, elegance be damned. But if you were one of those crafters? The people who wrote idiomatic code that made that programming language sing? There's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either.

If ... Then?

What do we do about it? This horse is not going back in the barn. The billionaires wouldn't let it, anyway.

I've come to the personal conclusion that the only way forward is for more of the hackers with soul to seize this moment of flux and use these tools to build. The economics of creating code are changing, and it can't just be the worst billionaires in the world who benefit. The latest count is 700,000 people laid off in the last few years in the tech industry. We'll be at a million soon, at the rate things are accelerating. Each new layoff announcement is now in the thousands.

It's not going to be a panacea for all the jobs lost, and it's not the only solution we're going to need, but one part of the answer can be coders who still give a damn looking out for each other, and building independent efforts without being reliant on the economics — or ethics — of the people who are laying off their colleagues by the hundreds of thousands.

I've spent my whole career working with communities of coders, building tools for the people who build with code. I don't imagine I'll ever stop doing it. This is the hardest moment that I've ever seen this community go through, and it makes me heartsick to see so many people enduring such stress and anxiety about what's to come. More than anything else, what I hope people can remember is that all of the great things that people love about technology weren't created by the money guys, or the bosses who make HR decisions — they were created by the people who actually build things. That's still an incredible superpower, and it will remain one no matter how much the actual tools of creation continue to change.

The Neo solves Apple’s embarrassment

2026-03-08 08:00:00

Last week, Apple released a parade of hardware announcements, and the one that captured the most attention across the industry was the $600 ($500 if you’re in education!) MacBook Neo, the brightly-colored low-end laptop that they launched to great fanfare. The conventional wisdom is that this product opens the low end of the laptop market to Apple for the first time, radically changing the dynamics of the entire market, throwing down the gauntlet to garbage Windows laptops, and challenging the huge swath of Chromebooks that dominate in education. This is incorrect.

Apple has, in fact, sold a MacBook Air with an M1 chip at Walmart for years, which it has intermittently discounted to $499 at key times like Black Friday and Cyber Monday. The single-core performance of that laptop (meaning, how it works for most normal tasks that people do, like browsing the web or writing email or watching YouTube videos) is very nearly equivalent to the newly-released MacBook Neo.

But. A laptop with an old design, using a chip that has an old number (the M1 chip came out six years ago!), sold exclusively through a mass-market retailer that is perceived as anything but premium, presents an enormous brand challenge for Apple. It is, to put it simply, embarrassing. Apple can have low-end products in its range. They invest lots of effort in that segment of the product line, as shown by the new iPhone 17e, a basic new entrant in their most recent series of phones. But Apple can’t have old, basic-looking products that people aren’t even able to buy at an Apple Store.

And that’s what Neo solves. It’s a smart reframing of a product that is nearly the same offering as the old M1 Air: the Neo and that old M1 machine both have 13” screens, both weigh just under 3 pounds, both have 8GB of RAM, both start at 256GB of storage, both have about 16 hours of battery life, are both about 8”x12”, both have 2 USB ports and a headphone jack, and both of course cost almost exactly the same. They did add a new yellow (citrus!) color for the Neo, though.

Wake up, Neo

What was more striking to me was Apple’s introductory video, which clearly seems aimed at people who are new to Apple computers, or maybe new to laptop computers entirely. They’re imagining a user base that has only ever had smartphones and is buying a computer for the first time, which might describe a lot of students. There’s no discussion here of the chamfers of the aluminum, or the pipelines in the GPU cores, and there’s barely even the slightest mention of AI; instead, they describe the basics of what the laptop includes, and even go out of their way to explain how it interoperates with an iPhone.

There’s also a very clear attempt to distinguish Neo’s branding from the rest of Apple’s design language. The type for the “MacBook Neo” name in the launch video, and the “Hello, Neo” text on the product homepage, use a rounded typeface so new that it isn’t even shipping as an actual font; Apple has rendered it as an image rather than as a variation of the “San Francisco” font it uses for everything else in its standard marketing materials. The throwback to 2000s-era design (terminal green, the word “Neo”: are we entering the Matrix?) couldn’t be more different from the “it looks expensive” vibes of something like the Apple Watch Hermès branding.

In all, it’s pretty impressive to see Apple use its marketing strengths to take a product that is remarkably similar to something they’ve had for sale for years at the largest retailer in the world, and position it as a brand-new, category-defining entry into a space. To me, the biggest thing this shows is the blind spot that the traditional tech trade press has about the actual buying patterns and lived experience of normal people who shop at Walmart all the time; it would be pretty hard to see the Neo as particularly novel if you had walked by a Walmart tech section any time in the last three years.

At a time when Apple has lost whatever moral compass it had, even though its machines still say “privacy is a human right” when you turn them on, we still want to see positive signs from the company. And a good one is that Apple is engaging with the reality that the current moment calls for products that are far more affordable. It is a good thing indeed when affordable products are presented as desirable, when most of a product’s enclosure is made of recycled material, and when the lifespan of a product can be expected to be significantly longer than most in its category, instead of simply being treated as disposable. All it took was removing the stigma around the existing affordable laptop that Apple’s been selling for years.

Why Apple’s move to video could endanger podcasting's greatest power

2026-02-28 08:00:00

TL;DR:

  • Apple is adding support for video podcasts to their podcast app
  • Podcasts are built on an open standard, which is why they aren’t controlled by a bad algorithm and don’t have ads that spy on you
  • Apple’s new system for video podcasts breaks with the old podcast standard, and forces creators to host their video clips with a few selected companies
  • The stakes are even higher because all the indie video infrastructure companies have been bought by private equity, while Trump’s goons go after TV and consolidate the big studios
  • If Apple doesn’t open this up, it could lead to podcasts getting enshittified like all the other media

Podcasts are a radical gift

As I noted back in 2024, the common phrase “wherever you get your podcasts” masks a subtle point, which is that podcasts are built on an open technology, a design which has radical implications for today’s internet. This is the reason that the podcasts most people consume aren’t skewed by creators chasing an algorithm that dictates what content they should create, aren’t full of surveillance-based advertising, and aren’t locked down to one app or platform that traps both creators and their audience within the walled garden of a single giant tech company.

Many of those merits of the contemporary podcast ecosystem are possible because of choices Apple made almost two decades ago, when it embraced open standards while adding podcasting features to iTunes. Their outsized market influence (the term “podcast” itself came from the name iPod) pushed everyone else in the ecosystem to follow their lead, and as a result, we have a major media format that isn’t as poisoned, in some ways, as the rest of social media or even mainstream media.

Sure, there are individual podcast creators one might object to, but notice how you don’t see bad actors like FCC chairman Brendan Carr illegally throwing his weight around to try to censor and persecute podcasters in the same way that he’s been silencing television broadcasters, and you don’t see MAGA legislators trying to work the refs about the algorithm the way they have with Facebook and Twitter. Even the Elon Musks of the world can’t just buy up the whole of podcasting the way Musk bought Twitter, because the ecosystem is decentralized and not controlled by any one player. This is how the Internet was supposed to work. As early Internet advocates were fond of saying, the architecture of the Internet was designed to see censorship as damage, and route around it.

The move to video

All of this is at much higher risk now due to the technical decisions Apple has made with its move to support video podcasts in its latest software versions that are about to launch. The motivations for the move are obvious: in recent years, many podcasters have embraced new platforms to increase their distribution, reach, engagement and sponsorship dollars, and that has driven them to add video, which has meant moving to YouTube and, more recently, platforms like Netflix. That is also typically accompanied by putting out promotional clips of the video portion of the podcast on platforms like TikTok and Instagram. Combine that with Spotify’s acquisition of multiple studios in order to produce proprietary shows that are not podcasts, but exclusive content locked into their apps, and Apple has faced a significant number of threats to its once-dominant position in the space.

So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses. For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.
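
To make the distinction concrete, the general idea of “grab only the snippet you need” looks something like the sketch below, which fetches just the first megabyte of a video file with a plain HTTP range request. This is an illustration of streaming-style fetching in general, with a placeholder URL; it isn’t a description of Apple’s exact mechanism:

```python
import urllib.request

# Ask the server for only the first 1 MiB of a (hypothetical) episode
# file, rather than downloading the whole thing up front.
req = urllib.request.Request(
    "https://example.com/podcast/episode1.mp4",  # placeholder URL
    headers={"Range": "bytes=0-1048575"},
)
with urllib.request.urlopen(req) as resp:
    chunk = resp.read()
    # A server that supports ranges replies 206 Partial Content.
    print(resp.status, len(chunk))
```

A player can keep issuing requests like this as playback proceeds, which is why streaming wastes so much less bandwidth than downloading every episode in full.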

The problem, though, is that Apple is only allowing these new video streams to be served by a small number of pre-approved commercial providers that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on anildash.com and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast. I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.
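
In case it’s not obvious just how low that barrier is, here’s a minimal sketch of what such a feed can look like. The show name and example.com URLs are placeholders, and real feeds usually add more metadata, but a static XML file like this plus an MP3 on any web host is genuinely all that’s required:

```python
# A minimal podcast feed is just a static RSS file whose <enclosure>
# points at an audio file. All names and URLs here are placeholders.
FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Independent Show</title>
    <link>https://example.com/podcast/</link>
    <description>A show served entirely from my own site.</description>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/podcast/ep1.mp3"
                 length="12345678" type="audio/mpeg"/>
      <guid>https://example.com/podcast/ep1</guid>
    </item>
  </channel>
</rss>
"""

# Upload feed.xml and ep1.mp3 to any web host, and the show is
# subscribable from any podcast app. No permission required.
with open("feed.xml", "w", encoding="utf-8") as f:
    f.write(FEED)
```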

If I want to publish a video podcast to Apple’s new system, though, I can’t just put up a video file on my site and tell people to subscribe to my podcast. I have to sign up for one of the approved partner services, agree to their terms of service, pay their monthly fee, watch them get acquired by Facebook, wait for the stupid corporate battle between Facebook and Apple, endure the service being enshittified, have them put their thumb on the scale about which content they want to promote, deal with my subscribers being spied on when they watch my show, see Brendan Carr make up a pretense to attack the platform I’m on, watch the service use my show to cross-promote violent attacks on vulnerable people, and the entire rest of that broken tech/content culture cycle.

We don’t have to do this, Apple!

How this plays out

What will happen by default, if Apple doesn’t change course and add support for open video hosting for podcasts, is a land grab for control of the infrastructure of the new, closed video podcast platform. Some of the bidders may be players that want to own podcasting (Spotify, Netflix, maybe legacy media companies like Disney and Paramount), or a roll-up from a cloud provider like AWS or Google Cloud. Either way, the services will get far more expensive for creators, and far more conservative about what content they allow, while being far more consumer-hostile in terms of privacy and monetization. We’ve seen this play out already: video shows on YouTube give advertisers massive amounts of data about viewers, while podcasts can be delivered in a way that almost totally preserves the audience’s privacy, if a creator wants to help them preserve their anonymity. The reason you always hear podcasters saying “use our promo code” in their sponsor reads is that advertisers can’t track you going from the show to their website.

This will also start to impact content. You don’t hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification. The closest thing that podcasters have to those kinds of games is when they ask you to rate them in Apple’s Podcasts app, because that has an algorithm for making recommendations, but even that is mediated by real humans making actual choices.

But once we’ve got a layer of paid intermediaries distributing video content, and Apple leans more heavily into the visual aspects of its podcast app, incentives are going to start to shift rapidly. Today, other than on laptops, phones and tablets, the Apple Podcasts app only exists on Apple TV hardware, and doesn’t even have a video playback feature. By contrast, a lot of video podcast consumption happens in YouTube’s TV apps in the living room. Apple Podcasts will soon have to be on every set-top device like Roku sticks and Amazon Fire TVs and Google’s Chromecasts, as well as on smart TVs like Samsungs and LGs, with a robust video playback feature that can compete with YouTube’s own capabilities. Once that’s happened (which will take at least a year, if not multiple years), creators will immediately begin jockeying for ways to get promoted or amplified within that ecosystem. Even if Apple has allowed independent publishers to make their own video podcast feeds, it’s easy to imagine Apple treating those feeds as second-class citizens when distributing podcasts to all of the Apple Podcasts users across all of these platforms.

The stakes for all of this are even higher because nearly all of the independent online platforms for video creation outside of YouTube have been bought up by a single private equity firm. In short: even if you don’t know it, if you’re trying to do video off of YouTube, all of your eggs are in one very precarious basket.

What to do

Apple can mitigate the risks of closing up podcasts by moving as quickly as possible to reassure the entire podcasting ecosystem that they’ll allow creators to use any source for hosting video. Right now, there’s a “fallback” video system where creators can deliver video through the traditional podcast standard, and other podcasting apps will show that video to audiences, but Apple’s apps don’t recognize it. If Apple said they’d support that specification as a second option for those who don’t want to, or can’t, use their video hosting partners, that would go a long way towards mitigating the ecosystem risk that they’re introducing with this new shift.

If Apple can engage with a wide swath of creators and understand the concerns that are bubbling up, and articulate that they’re aware of the real, significant risks that can arise from the path that they’re currently on, they still have a chance to course-correct.

Some of these decisions can seem like arcane technical discussions. It’s easy to roll your eyes when people talk about specifications and formats and the minutiae of what happens behind the scenes when we click on a link. But the history of the Internet has shown us that, sometimes, even some of what seem like the most inconsequential choices end up leading to massive shifts in a larger ecosystem, or even in culture overall.

A generation ago, a few people at Apple made a choice to embrace an open ecosystem that was in its infancy, and in so doing, they enabled an entire culture of creators to flourish for decades. Podcasting is perhaps the last major media format that is open, free, and not easily able to be captured by authoritarians. The stakes couldn’t be higher. All it takes now is a few decision makers pushing to do the right thing, not just the easy thing, to protect an entire vital medium.

A Cookie for Dario? — Anthropic and selling death

2026-02-28 08:00:00

A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support his commission of war crimes. As has become clear this week, Anthropic CEO Dario Amodei has declined to do so. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.

Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.

To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.

We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that this is what we should expect. It’s also good to note that companies may have many reasons for not wanting to sell to the Pentagon, in addition to the obvious moral qualms about enabling an unqualified TV host who’s drunkenly stumbling his way through playacting as Secretary of Defense (of what they insist on dressing up as the “Department of War”, another lie).

Selling to the Pentagon sucks

Being on any federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the types of procedures that a fast-moving technology startup is likely to be particularly bad at completing, with very few staff members having any prior familiarity with such challenges. Right now, Anthropic handles most of the worst parts of these issues through partners like Amazon and Palantir. Addressing more of these unique and tedious needs in-house, for a customer as demanding as the Pentagon, would almost certainly require blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to build these features, it could exhaust or antagonize a significant percentage of the company’s very expensive, very finicky employees.

This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been massive waves of rolling layoffs over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that threaten kids’ lives or well-being. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has laid off people at least 17 times in the last three years alone.

So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, and you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.

How did we get here?

We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in enabling the Rohingya genocide served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know today didn’t yet aspire to that level of political and social control. But there are deeper precedents: IBM provided technology that helped enable the horrors of the Holocaust in Germany in the 1940s, and that work served as the template for its implementation of apartheid in South Africa in the 1970s. IBM actually bid for the contract to build these products for the South African government.

And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks and a number of other Silicon Valley tycoons all lived there during their formative years. Later, when those men became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers was taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech came when many across the industry stood up against contracts with ICE. That moment was also one of the catalyzing events that drove the tech tycoons into their group chats, where they collectively decided that they needed to bring their workers to heel.

And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks, and a major defense vendor to the United States government, has been openly inciting civil war for years on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison because they’re only enabling mass violence indirectly. That’s shifted the public conversation in such an extreme direction that we treat it as a debate whether companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.

We don’t have to set the bar this low. We have to remind each other that this isn’t normal for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable.

In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.

Talking through the tech reckoning

2026-02-26 08:00:00

Many of the topics we’ve all been discussing in technology these days seem to matter so much more, and the stakes have never been higher. So, I’ve been trying to engage with more conversations out in the world, in hopes of communicating some of the ideas that might not get shared by more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll give them a listen when you have a moment.

Galaxy Brain

First, it was nice to sit down with Charlie Warzel, who invited me to speak with him on Galaxy Brain (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.

In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.

The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:

Revolution Social

Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast Revolution.Social. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think the current practice of wholesale appropriation of content from creators, without consent or compensation, by the AI companies is untenable. If nothing else, as normal companies start using data and content, they’re going to want to pay for it, both so they don’t get sued and so that the quality of the content they’re using is of a known reliability. That will start to change things from the current Wild West “steal all the stuff and sort it out later” mentality.

It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.

What’s next?

As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been some more opportunities to guest on podcasts, or invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was at, and also because I didn’t think I had anything in particular to say.

In all honesty, these days it feels like the stakes are too high, and there are too few people who are addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value to these kinds of things, let me know, or if you think there are places where I should be getting the message out, do let them know, and I’ll try to do my best to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating in these kinds of platforms, your critique and comments are always welcome!

Taking action against AI harms

2026-02-24 08:00:00

In my last piece, I talked about the harms that AI is visiting on children through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.

Often, people tell me they feel overwhelmed at the idea of trying to get laws passed, or of fighting a big political campaign to rein in the giant tech companies that are causing so much harm. Yet grassroots, local organizing can be extraordinarily effective in standing up for the values of your community against the agenda of the Big AI companies.

But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So, I wanted to share some direct, personal actions that you can take to respond to the threats that Big AI has made against kids. Each of these tactics has been proven effective by others who have used the same strategies, so you can feel confident when adapting them for your own use.

Get your company off of Twitter / X

If your company or organization maintains a presence on Twitter (or X, as they have tried to rename themselves), it is important to protect yourself, your coworkers, and your employer from the risks of being on the platform. Many times, leadership in an organization has an outdated view of the platform that is uninformed about the current level of danger and harm presented by participating on the social network, and an accurate description of the problem can often be effective in driving a decision to make a change.

Here is some dialogue you can use or modify to catalyze a productive conversation at work:

Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and then monetized that account.

Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any of the child abuse material that I encounter in the course of carrying out the duties of my role to HR or legal, or both? And what is our process for reporting this kind of material to the authorities, as I haven’t been trained in any procedures around these kinds of sensitive materials?

That should be enough to trigger a useful conversation at your workplace. (You can share this link if they want a credible, business-minded link to reference.) If they need more context about the burden on workers, you can also mention the fact that content moderators who have to interact with this kind of content have had serious issues with trauma, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As some articles have noted, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like the victims of violence, including targets who are underage.

As a result, your emails to your manager should CC your HR team, and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free, in case you need someone to CC on an email while discussing these topics. Make your employers say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, and that they indemnify you against those risks. You should ask that they take on accountability for burdens like legal costs, or even psychological counseling for the real and severe impacts of enduring the harms that crimes like those enabled by Twitter can cause.

All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, for sharing content or posting tweets, or to technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, who is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above, and again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.

Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.

Stop your school from using ChatGPT

An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about which AI platforms to use often comes down to which ones people have heard about the most. For most people, that means ChatGPT, since it’s gotten the most free hype from media.

As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something you can take action on at your kid’s school.

First, you can begin simply by gathering resources. There are many credible stories which you can share to illustrate the risk to administrators, and to other parents. Typically, apologists for this product will raise a few objections, which you can respond to in a thoughtful way:

  • “Maybe those kids were already depressed?” Several of the children who have been impacted by these tools were introduced to them as homework assistants, and only came to use them as emotional crutches at the prompting of the tool’s own responses. Also: your school has children in it who are depressed, so why are you willing to endanger them?
  • “Doesn’t every tool cause this?” No, this is extreme and unusual behavior. Your email software or word processor has never incited your children to commit violence against anyone, let alone themselves. Not even other LLMs prompt this behavior. And again, even if this did happen with every tool in this category, why would that make it okay? If every pill in a bottle is poisonous, does that make it okay to give the bottle of pills to our kids?
  • “They’ll be missing out on the future.” Ask the parents of the children impacted in these stories about their kids’ futures.
  • “We should just roll it out as a test.” Who will pay for monitoring all usage by all students in the test?
  • “It’s a parent’s responsibility.” Forcing a parent to invest hours of time into learning a cutting-edge technology that is being constantly updated is a full-time job. If you are going to burden them with that level of responsibility, how will you provide resources to support them? What is your plan to communicate this responsibility to them and get their consent so they can agree to take on this responsibility?
  • “The company said it’s working on the problem.” They can change their technology so that it only incites violence against their executives, or publish a notice when it has gone a full year without costing any children their lives. At that point, they may be considered for re-evaluation.

With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.

If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.

From here, you can begin a conversation that re-evaluates the goals of the initiative from first principles. "Everyone else is doing it" is not a valid way of advocating for technology, and even if they feel that LLMs are a technology that students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.

You have power

The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.

We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss the things they are doing in polite company. The kind of activity that takes place on these platforms used to mean that simply accessing such sites during one’s workday would be a firing offense. Now we have employers and schools trying to require people to use these things.

The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.

Finally, use your voice and your courage, and trust in your sense of basic decency. It might only take you a few minutes to draft an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! But these things that feel small can be quite enormous when they all add up together. And that’s exactly what our kids deserve.