Blog of Anil Dash

A Cookie for Dario? — Anthropic and selling death

2026-02-28 08:00:00

A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support his commission of war crimes. As has become clear this week, Anthropic CEO Dario Amodei has declined to do so. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.

Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.

To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.

We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that this is what we should expect. It’s also good to note that companies may have many reasons that they don’t want to sell to the Pentagon in addition to the obvious moral qualms about enabling an unqualified TV host who’s drunkenly stumbling his way through playacting as Secretary of Defense (which they insist on dressing up as the “Department of War” — another lie).

Selling to the Pentagon sucks

Being on any federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the types of procedures that a fast-moving technology startup is likely to be particularly bad at completing, with very few staff members having prior familiarity with such challenges. Right now, Anthropic offloads most of the worst parts of these issues to partners like Amazon and Palantir. Taking on more of these unique and tedious needs in-house for a demanding customer like the Pentagon would almost certainly mean blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to build these features, it could exhaust or antagonize a significant percentage of the very expensive, very finicky employees of the company.

This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been massive waves of rolling layoffs over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that threaten kids’ lives or well-being. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has laid off people at least 17 times in the last three years alone.

So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, and you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.

How did we get here?

We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in enabling the Rohingya genocide served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know today didn’t yet aspire to that level of political and social control.

But there are deeper precedents: IBM provided technology that helped enable the horrors of the Holocaust in Germany in the 1940s, and that work served as the template for its implementation of apartheid systems in South Africa in the 1970s. IBM actually bid for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks, and a number of other Silicon Valley tycoons lived there during their formative years. Later, as they became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers were taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech was when many across the industry stood up against contracts with ICE. That moment was one of the catalyzing events that drove the tech tycoons into the group chats where they collectively decided they needed to bring their workers to heel.

And they’ve escalated since then. Now the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been openly inciting civil war for years on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison because they’re only enabling mass violence indirectly. That’s shifted the public conversation in such an extreme direction that we treat it as a debate whether companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.

We don’t have to set the bar this low. We have to remind each other that this isn’t normal for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable. 
In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.

Talking through the tech reckoning

2026-02-26 08:00:00

Many of the topics we’ve all been discussing about technology these days seem to matter so much more, and the stakes have never been higher. So I’ve been trying to engage with more conversations out in the world, in hopes of communicating some of the ideas that might not get shared by more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll give them a listen when you have a moment.

Galaxy Brain

First, it was nice to sit down with Charlie Warzel, as he invited me to speak with him on Galaxy Brain (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.

In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.

The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:

Revolution Social

Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast Revolution.Social. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think the current practice of wholesale appropriation of content from creators, without consent or compensation, by the AI companies is simply untenable. If nothing else, as normal companies start using data and content, they’re going to want to pay for it just so they don’t get sued, and so that the quality of the content they’re using is of known reliability. That will start to change things from the current Wild West “steal all the stuff and sort it out later” mentality. 
It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.

What’s next?

As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been some more opportunities to guest on podcasts, or invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was at, and also because I didn’t think I had anything in particular to say.

In all honesty, these days it feels like the stakes are too high, and there are too few people who are addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value to these kinds of things, let me know, or if you think there are places where I should be getting the message out, do let them know, and I’ll try to do my best to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating in these kinds of platforms, your critique and comments are always welcome!

Taking action against AI harms

2026-02-24 08:00:00

In my last piece, I talked about the harms that AI is visiting on children through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.

Often, people tell me they feel overwhelmed at the idea of trying to engage with getting laws passed, or fighting a big political campaign to rein in the giant tech companies that are causing so much harm. Yet grassroots, local organizing can be extraordinarily effective in standing up for the values of your community against the agenda of the Big AI companies.

But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So, I wanted to share some direct, personal actions that you can take to respond to the threats that Big AI has made against kids. Each of these tactics has been proven effective by others who have used the same strategies, so you can feel confident when adapting them for your own use.

Get your company off of Twitter / X

If your company or organization maintains a presence on Twitter (or X, as they have tried to rename themselves), it is important to protect yourself, your coworkers, and your employer from the risks of being on the platform. Often, organizational leadership has an outdated view of the platform, uninformed about the current level of danger and harm of participating in the social network, and an accurate description of the problem can often be effective in driving a decision to make a change.

Here is some dialogue you can use or modify to catalyze a productive conversation at work:

Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and monetized the account that had shared the pictures.

Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any of the child abuse material that I encounter in the course of carrying out the duties of my role to HR or legal, or both? And what is our reporting process for reporting this kind of material to the authorities, as I haven’t been trained in any procedures around these kinds of sensitive materials?

That should be enough to trigger a useful conversation at your workplace. (You can share this link if they want a credible, business-minded link to reference.) If they need more context about the burden on workers, you can also mention the fact that content moderators who have to interact with this kind of content have had serious issues with trauma, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As some articles have noted, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like the victims of violence, including targets who are underage.

As a result, your emails to your manager should CC your HR team, and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free, in case you need someone to CC on an email while discussing these topics. Make your employers say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, and that they indemnify you against those risks. You should also ask that they take on accountability for burdens like legal costs, or even psychological counseling for the real and severe impacts that come from enduring the harms that crimes like those enabled by Twitter can cause.

All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, whether for sharing content or posting tweets, or for technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, who is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above, and again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.

Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.

Stop your school from using ChatGPT

An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about which AI platforms to use often comes down to which ones people have heard of the most. For most people, that means ChatGPT, since it’s gotten the most free hype from the media.

As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something that you can take action about at your kid’s school.

First, you can begin simply by gathering resources. There are many credible stories which you can share to illustrate the risk to administrators, and to other parents. Typically, apologists for this product will raise a few objections, which you can respond to in a thoughtful way:

  • “Maybe those kids were already depressed?” Several of the children who have been impacted by these tools were introduced to them as homework assistants, and only evolved into using them as emotional crutches at the prompting of the responses from the tool. Also: your school has children in it who are depressed; why are you willing to endanger them?
  • “Doesn’t every tool cause this?” No, this is extreme and unusual behavior. Your email software or word processor has never incited your children to commit violence against anyone, let alone themselves. Not even other LLMs prompt this behavior. And again, even if this did happen with every tool in this category, why would that make it okay? If every pill in a bottle is poisonous, does that make it okay to give the bottle of pills to our kids?
  • “They’ll be missing out on the future.” Ask the parents of the children impacted in these stories about their kids’ futures.
  • “We should just roll it out as a test.” Who will pay for monitoring all usage by all students in the test?
  • “It’s a parent’s responsibility.” Forcing a parent to invest hours of time into learning a cutting-edge technology that is being constantly updated is a full-time job. If you are going to burden them with that level of responsibility, how will you provide resources to support them? What is your plan to communicate this responsibility to them and get their consent so they can agree to take on this responsibility?
  • “The company said it’s working on the problem.” They can change their technology so that it only incites violence against their executives, or publish a notice when it has gone a full year without costing any children their lives. At that point, they may be considered for re-evaluation.

With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.

If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.

From here, you can begin a conversation that re-evaluates the goals of the initiative from first principles. “Everyone else is doing it” is not a valid way of advocating for technology, and even if administrators feel that LLMs are a technology students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.

You have power

The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.

We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss what they are doing in polite company. It used to be that simply accessing sites hosting this kind of content during one’s workday would be a firing offense; now we have employers and schools trying to require people to use these things.

The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.

Finally, use your voice and your courage, and trust in your sense of basic decency. It might only take you a few minutes to draft up an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! But these things that feel small can be quite enormous when they all add up together. And that’s exactly what our kids deserve.

How did we end up threatening our kids’ lives with AI?

2026-02-18 08:00:00

I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.

Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that encouraged and incited children to end their own lives. Grok’s AI generates sexualized imagery of children, which the company makes available commercially to paid subscribers.

It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, for profit, and not only is there little public uproar, it seems as if very few have even noticed.

How did we get here?

The ideas behind a crisis

A perfect storm of factors has combined to lead us toward the worst-case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.

1. Everyone feels desperately behind and wants to catch up

There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely convinced that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.

At Google, the company’s researchers had published the fundamental paper underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A crisis ensued within Google in the months that followed.

These kinds of industry narratives have more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that shipping any product is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course Elon Musk, the CEO of xAI (which makes Grok), is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design — including one that creates abusive imagery.

2. Accountability is “woke” and must be crushed

Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.

Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time when the first paper on the transformer model (LLMs) was published. Right around the time that paper was published, Google also saw one of its engineers publish a sexist screed on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to work the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built and they began cutting everyone who had warned about risks around the new AI platforms, including some of the most credible and respected voices in the industry on these issues.

Eliminating those roles was considered vital because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.

It never seems to enter the conversation that:

  • executives are accountable for the failures that happen at a company;
  • Google had a million other failures during these same years (including those countless redundant messaging apps they kept launching!) that may have had far more to do with its inability to seize the market opportunity; and
  • it may be a good thing that Google didn’t rush to market with a product that tells children to harm themselves, and the workers who ended up being fired may have saved Google from that fate!

3. Product managers are veterans of genocidal regimes

The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.

But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers in companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that made products that directly enabled and accelerated a genocide. That’s not according to me, that’s the opinion of multiple respected international human rights organizations. If you chose to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.

Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn other destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment, and then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior a few more times, until either everyone gets so used to it that they stop complaining, or the continued bad actions drive off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then they amend their terms of service to say that the formerly-disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”

Because so many people in the industry now have these kinds of credentials on their LinkedIn profiles, their peers can’t easily raise many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might unknowingly be offending one of their leaders. It becomes a race to the bottom, where the person with the worst ethical standards on the team sets the standards to which everyone designs their work. So if the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk in contradicting that idea. Pointing out that, in fact, most platforms on the internet do not harm users in these ways, and that their creators work very hard to ensure their products don’t present a risk to their communities, can end up being a career-limiting move.

4. Compensation is tied to feature adoption

This is a more subtle point, but it explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. When these companies launch new features, the success of those rollouts is often tied to the individual performance measurements of the people who were responsible for those features. These are tracked using metrics like “KPIs” (key performance indicators) or similar corporate acronyms, all of which basically represent being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.

In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need an internet of consent.
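To make the distinction concrete, here is a minimal sketch of the difference between the way most measurement systems count clicks and a hypothetical consent-aware alternative. Everything here — the feature name, the dwell-time threshold, the event fields — is illustrative, not a description of any real company’s analytics pipeline.

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    feature: str
    dwell_ms: int    # how long the user stayed with the feature after clicking
    dismissed: bool  # did the user immediately close or back out of it?

def naive_kpi(events):
    # What most measurement systems record: every click counts the same.
    return sum(1 for e in events if e.feature == "ai_assist")

def consent_aware_kpi(events, min_dwell_ms=2000):
    # Hypothetical refinement: only count clicks where the user stayed
    # with the feature and didn't immediately dismiss it — a rough proxy
    # for "I chose this" rather than "I was trying to get rid of this."
    return sum(
        1 for e in events
        if e.feature == "ai_assist"
        and not e.dismissed
        and e.dwell_ms >= min_dwell_ms
    )

events = [
    ClickEvent("ai_assist", dwell_ms=150, dismissed=True),      # accidental
    ClickEvent("ai_assist", dwell_ms=300, dismissed=True),      # accidental
    ClickEvent("ai_assist", dwell_ms=45_000, dismissed=False),  # deliberate
]
```

The naive counter reports three engaged users; the consent-aware one reports one. A compensation system built on the first number rewards exactly the intrusive behavior described above.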

But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.

5. Their cronies have made it impossible to regulate them

A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an unbelievably broad number of conflicts of interest from his investments across the AI spectrum; it’s impossible to know how many because nobody in the Trump administration follows even the basic legal requirements around disclosure or divestment, and the entire corrupt Republican Party in Congress refuses to do their constitutionally-required duty to hold the executive branch accountable for these failures.

As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like open bribery) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.

All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau, have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.

There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.

What about the kids?

It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.

People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are already products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.

If it is, indeed, absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply add a rule to the code that changes the object of the violence to Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.

We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated their policy prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, Thorn, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose entire purpose is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?

And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product managers learned to make decisions while they were at Meta and know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced… well, will you be surprised?

How do we move forward?

It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be unfathomable that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who work in tech probably are barely aware.

What’s worse is, the majority of people I’ve talked to in tech who do know about this have not taken a single action about it. Not one.

I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?

Launch it 3 times

2026-02-14 08:00:00

I wanted to share one of the bits of advice that I find myself most frequently giving to teams when they’re working on a product, or founders who are creating a new company: launch it three times.

What I mean by that is, it often takes more than one time before your idea actually resonates or sticks with the people you’re trying to reach. Sometimes it takes more than twice! And when I say that you might need to launch again, that can mean a lot of different things. It might just be little tweaks to what you originally put out in the world. It might even be less than that — I’ve worked with teams that put out literally the exact same thing again and found success, because the issue they had the first time was about timing. That’s increasingly an issue as people are distracted by the deeply disturbing social and political events going on in the world, and so sometimes they just need you to put things in front of them again so that they can reassess what you were trying to say.

Many relaunches are a little more ambitious, of course. Being a Prince fan, I am of course very partial to strategies that involve changing your name. Re-launching under a new name can be a key strategic move if you think that you’re not effectively reaching your target audience. As I’d written recently, one of the most important goals in getting a message out is that they have to be able to talk about you without you. But if you want people to tell your story even when you’re not around, the most important prerequisite is that they have to remember your name. With Glitch, that was the third name we actually launched the community under, a fact that I was a little bit embarrassed about at the time. But having a memorable name that resonated ended up being almost as much a factor in our early success as our user experience or the deeper technological innovations.

There are other ways of making changes for a successful re-launch. One thing I often suggest is to subtract things (or just de-emphasize them) and use that reduction in complexity to simplify a story. Or you can try to re-center your narrative on your users or community instead of on your product — the emotion and connection of seeing someone succeed often resonates far more than simply reciting a litany of features or technical capabilities. Any of these small iterations allow you to take another swing at putting something out into the world without having to make a massive change to the core offering.

Oftentimes, people are afraid or embarrassed to make changes to things like branding or design because they’re some of the more visible aspects of a product or service. Instead, they retreat to “safe” areas, like tweaking the pricing or copy on a web page that nobody reads. But the vast majority of the time, the single biggest problem you have is that nobody knows you exist, and nobody gives a damn about what you do. Everything else pales in comparison to that. I’ve seen so many teams trying to figure out how to optimize the engagement of the three users on their app, or the five people who come to their site, while forgetting about the other eight billion people who have no idea they exist.

What about not failing?

This idea of launching again is really important to keep in mind because so much of the narrative in the startup world is about “fail fast” and “90% of startups fail”. When the conventional narrative from VCs prompts you to pivot right away, or an investor is pressuring everyone to grow, grow, grow at all costs, it can be hard to think about slowing down and taking the time to revisit and refine an idea.

But if you’re moving with conviction, and you’ve created something meaningful, and if you’re serving a real community that you have a deep understanding of, then it may be the case that you simply need to try again. If you are not moving with conviction to create something meaningful for a real community, then you don’t need to do it three times, because you don’t even need to do it once.

So many of the creators and innovators that inspire me most often end up working on their best ideas for years or even decades, iterating and revisiting those ideas with an almost-obsessive passion. Most of the time, they’re doing it because of a combination of their own personal mission and the deep belief that what they’re doing is going to help change people’s lives for the better. For those kinds of people, one of the things I want most is to ensure that they don’t give up before their ideas have had a full and fair chance to succeed, even if that means that sometimes you have to try, try again.

Coding agents as the new compilers

2026-02-12 08:00:00

In each successive generation of code creation thus far, we’ve abstracted away the prior generation over time. Usually, only a small percentage of coders still work on the lower layers of the stack that used to be the space where everyone was working. I’ve been coding long enough that people were still creating code in assembly when I started (though I was never any good at it!); I started with BASIC. Since BASIC was an interpreted language, its interpreter would write the assembly language for me, and I never had to see exactly what assembly language code was being created.

I definitely did know old-school coders who used to, at first, check that assembly code to see if they liked the output. But eventually, over time, they just learned to trust the system and stopped looking at what happened after the system finished compiling. Even people using more “close to the metal” languages like C generally trust that their compilers have been optimized enough that they seldom inspect the output of the compiler to make sure it was perfectly optimized for their particular processor or configuration. Delegating those concerns to the teams that create compilers, and coding tools in general, yielded so many advantages that the tradeoff was easily worth it, once you got over the slightly uncomfortable feeling.

In the years that followed, though a small cohort of expert coders continued to hand-tune assembly code for things like getting the most extreme performance out of a gaming console, most folks stopped writing it, and very few new coders learned assembly at all. The vast majority of working coders treat the output from the compiler layer as a black box, trusting the tools to do the right thing and delegating the concerns below that to the toolmakers.

We may be seeing that pattern repeat itself. Only this time, the abstraction is happening through AI tools abstracting away all the code. Which can feel a little scary.

Squashing the stack

Just as interpreted languages took away chores like memory management, and high-level languages took away the tedium of writing assembly code, we’re starting to see the first wave of tools that completely abstract away the writing of code. (I described this in more detail in the piece about codeless software recently.)

The individual practice of professionalizing the writing of software with LLMs seems to have settled on the term “agentic engineering”, as Simon Willison recently noted.

But the next step beyond that is when teams don’t write any of the code themselves, instead moving to an entirely abstracted way of creating code. In this model, teams (or even individual coders):

  • Define the specifications for how the code should work
  • Ensure that the system is provided with enough context at all times that it can succeed in creating code that is successful as often as possible
  • Provide sufficient resources that a redundant and resilient set of code outputs can be created to accommodate failures while in iteration
  • Enforce execution of tests and conformance systems against the code — including human tests with a named, accountable party, not just automated software tests
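The steps above amount to a generate-test-retry loop, with the generated code treated the way compiler output is treated today. Here is a minimal sketch of that loop; the generator and test runner are fake stand-ins for an LLM call and a real test suite (including the human sign-off step), and every function name here is a hypothetical illustration, not a real tool.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    code: str
    passed: bool
    feedback: str = ""

def run_tests(code: str) -> tuple[bool, str]:
    # Stand-in for the conformance step: automated tests plus a named,
    # accountable human reviewer. Here we just check for a required symbol.
    ok = "def add" in code
    return ok, "" if ok else "missing required function 'add'"

def generate_code(spec: str, context: str, feedback: str) -> str:
    # Stand-in for an LLM call. This fake "model" only succeeds once it
    # has been given feedback about what was missing — mimicking iteration.
    if "missing" in feedback:
        return "def add(a, b):\n    return a + b\n"
    return "# TODO\n"

def build_from_spec(spec: str, context: str, max_attempts: int = 3) -> Attempt:
    """The 'codeless compilation' loop: spec in, tested code out."""
    code, feedback = "", ""
    for _ in range(max_attempts):
        code = generate_code(spec, context, feedback)
        passed, feedback = run_tests(code)
        if passed:
            return Attempt(code, True)
    return Attempt(code, False, feedback)

result = build_from_spec(
    spec="add(a, b) returns the sum of a and b",
    context="small math utility module",
)
```

The point of the sketch is the shape, not the stubs: the humans own the spec, the context, and the acceptance criteria, and the generated code is an artifact of the pipeline rather than something anyone reads line by line.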

With this kind of model deployed, the software that is created can essentially be output from the system in the way that assembly code or bytecode is output from compilers today, with no direct inspection from the people who are directing its creation. Another way of thinking about this is that we’re abstracting away many different specific programming languages and detailed syntaxes to more human-written Markdown files, created much of the time in collaboration with these LLM tools.

Presently, most people and teams who are pursuing this path are doing so with costly commercial LLMs. I would strongly advocate that most organizations, and especially most professional coders, be very fluent in ways of accomplishing these tasks with a fleet of low-cost, locally-hosted, open source/open-weight models contributing to the workload. I don’t think they are performant enough yet to accomplish all of the coding tasks needed for a non-trivial application, but there are a significant number of sub-tasks that could reasonably be delegated. More importantly, it will be increasingly vital to ensure that this entire “codeless compilation” stack for agentic engineering works in a vendor-neutral way that can be decoupled from the major LLM vendors, as they get more irresponsible in their business practices and more aggressive towards today’s working coders and creators.
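One simple way to stay vendor-neutral is to put a routing layer between the pipeline and the models, so that well-bounded sub-tasks go to a cheap local open-weight model and only whole-feature work goes to a hosted frontier model. A toy sketch of that idea — the task names and backend labels are hypothetical, and a real router would call actual model endpoints behind these labels:

```python
# Sub-tasks that are bounded enough to delegate to a small local model.
# (Illustrative list — any real deployment would tune this empirically.)
LOCAL_TASKS = {"rename_symbol", "write_docstring", "generate_unit_test"}

def route(task_kind: str) -> str:
    """Pick a backend for a coding sub-task.

    Returns a label rather than calling a model, so the pipeline above
    stays decoupled from any particular vendor's API.
    """
    if task_kind in LOCAL_TASKS:
        return "local-open-weight"
    return "hosted-frontier"
```

Because every model sits behind a label like this, swapping a vendor out — or dropping one entirely — is a one-line change to the routing table rather than a rewrite of the pipeline.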

For many, those worries about Big AI are why their reaction to these developments in agentic coding make them want to recoil. But in reality, these issues are exactly why we desperately need to engage.

Seizing the means

Many of the smartest coders I know have a lot of legitimate and understandable misgivings about the impact that LLMs are having on the coding world, especially as they’re often being evangelized by companies that plainly have ill intent towards working coders. It is reasonable, and even smart, to be skeptical of their motivations and incentives.

But the response to that skepticism is not to reject the category of technology, but rather to capture it and seize control over its direction, away from the Big AI companies. This shift to a new level of coding abstraction is exactly the kind of platform shift that presents that sort of opportunity. It’s potentially a chance for coders to be in control of some part of their destiny, at a time when a lot of bosses clearly want to get rid of as many coders as they can.

At the very least, this is one area where the people who actually make things are ahead of the big platforms that want to cash in on it.

What if I think this is all bullshit?

I think a lot of coders are going to be understandably skeptical. The most common concern is, “I write really great code, how could it possibly be good news that we’re going to abstract away the writing of code?”. Or, “How the hell could a software factory be good news for people who make software?”

For that first question, the answer is going to involve some grieving, at first. It may be the case that writing really clean, elegant, idiomatic Python code is a skill that will be reduced in demand in the same way that writing incredibly performant, highly-tuned assembly code is. There is a market for it, but it’s on the edges, in specific scenarios. People ask for it when they need it, but they don’t usually start by saying they need it.

But for the deeper question, we may have a more hopeful answer. By elevating our focus up from the individual lines of code to the more ambitious focus on the overall problem we’re trying to solve, we may reconnect with the “why” that brought us to creating software and tech in the first place. We can raise our gaze from the steps right in front of us to the horizon a bit further ahead, and think more deeply about the problem we’re trying to solve. Or maybe even about the people who we’re trying to solve that problem for.

I think people who create code today, if they have access to super-efficient code-creation tools, will make better and more thoughtful products than the financiers who are currently carrying out mass layoffs of the best and most thoughtful people in the tech industry.

I also know there’s a history of worker-owned factories being safer and more successful than others in their industries, while often making better, longer-lasting products and being better neighbors in their communities. Maybe it’s possible that there’s an internet where agentic engineering tools could enable smart creators to build their own software factories that could work the same way.