2026-04-10 08:00:00
In just the last few weeks, we’ve seen a series of software security vulnerabilities that, until recently, would each have been the biggest exploit of the year in which they were discovered. Now, they’ve become nearly routine. There’s a new one almost every day.
The reason for this rising wave of massively impactful software vulnerabilities is that LLMs are rapidly improving at writing code, which also rapidly improves their ability to analyze code for security weaknesses. These smarter coding agents can detect flaws in commonly used code, and then create tools that exploit those bugs to get access to people’s systems or data almost effortlessly. These powerful new LLMs can find hundreds of times more vulnerabilities than previous generations of AI tools, and can chain together multiple different vulnerabilities in ways that humans would never think of when probing a system’s weaknesses. They’ve already found vulnerabilities that had been lurking for decades in code for platforms that were widely considered to be extremely secure.
The rapidly decreasing cost of code generation has effectively democratized access to attacks that used to be impossible to pull off at scale. And when exploits are cheaper to create, attackers can do things like crafting precisely targeted phishing scams, or elaborate social engineering attacks, against a larger number of people, each custom-tailored to play on a specific combination of software flaws and human weaknesses. In the past, everybody got the same security exploit attacking their computer or system; now each company or individual can get a personalized attack designed to exploit their specific configuration and situation.
To be clear, we’ve already seen some of these kinds of exploits, to a limited degree, with the current generation of LLMs. So what’s changed? We’ve been told that the new generation of AI tools, currently in limited release to industry insiders and security experts, is an order of magnitude more capable of discovering — and thus, exploiting — security vulnerabilities in every part of the world’s digital infrastructure.
This leaves us in a situation akin to the Y2K bug around the turn of the century, where every organization around the world has to scramble to update their systems all at once, to accommodate an unexpected new technical requirement. Only this time, we don’t know which of our systems are still using two digits to store the date.
And we don’t know what date the new millennium starts.
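For anyone who never had to live through the remediation effort, it’s worth remembering just how small the underlying flaw was. A minimal sketch in Python (the function names are mine, purely illustrative):

```python
# The essence of the Y2K bug: storing years as two digits means
# "00" (meaning 2000) is numerically *less than* "99" (meaning 1999),
# so any interval arithmetic that spans the rollover silently goes negative.

def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Interval math the way many pre-2000 systems did it: YY minus YY."""
    return end_yy - start_yy

def years_elapsed_four_digit(start_yyyy: int, end_yyyy: int) -> int:
    """The fix: store and subtract full four-digit years."""
    return end_yyyy - start_yyyy

# A loan opened in 1998 (stored as 98), evaluated in 2001 (stored as 01):
print(years_elapsed_two_digit(98, 1))        # -97: nonsense
print(years_elapsed_four_digit(1998, 2001))  # 3: correct
```

The fix was trivial once you found the pattern; the crisis was in not knowing which of billions of lines of code contained it. That’s roughly the situation described here, except this time without a known deadline.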
A core assumption of software development since the turn of the century, especially with the rise of open source software in the early 2000s, was that organizations could use more shared code from third parties to speed up their development. The adoption of code sharing through services like GitHub, knowledge sharing in communities like Stack Overflow, and the easy discovery and integration of shared code libraries through platforms like npm (which, like GitHub, is owned by Microsoft) all rapidly accelerated the trend of openly sharing code. Today, tens of millions of developers begin their coding process by gathering a large amount of code from the internet to reuse as the basis for their work. The assumption is that someone else who uses that code has probably checked it to make sure it’s secure.
For the most part, this style of working from shared code has been the right choice. Shared, community-maintained code amortized the cost of development across a large number of people or organizations, and spread the responsibilities for things like security reviews across a larger community of developers. Often, part of the calculation about whether sharing code was worth it was that you might get new features or bug fixes “for free” when others made improvements to the code that they were sharing with you. But now, all of this shared code is also being examined by bad actors who have access to the same advanced LLMs that everyone else does. And those bad actors are finding vulnerabilities in every version of every single bit of shared code. Every single major platform, whether it’s the web browser on your desktop computer, or the operating systems that run powerful cloud computing infrastructure for companies like Amazon, has been found to have security vulnerabilities when these new LLMs try to pick them apart.
In years past, when major software security issues like Heartbleed or the xz backdoor were discovered, the global security community would generally follow responsible disclosure practices, and the big tech vendors and open source developers would work together to provide updates and to patch critical infrastructure. Then, there would be deliberate communication to the broader public, with detailed information for technical audiences, usually followed by some more semi-sensationalistic coverage in the general press. But the recent spate of similarly impactful security vulnerabilities has come at such a rapid clip that the leisurely pace and careful rituals of the past are already starting to break down. It’s a bit like the acceleration of the climate crisis; nobody knows how to build a system resilient enough to handle a “storm of the century” every year. Nobody knows how to properly communicate about, and respond to, the “exploit of the year” if it’s happening every six hours.
So, how is this going to play out? In society at large, we’re very likely to see a lot of disruption. Everything runs on software, even things we don’t think of as computers, and upgrading systems is really expensive. The harder a system is to upgrade, the more likely it is that organizations will either resist doing so or try to assign the responsibility to others.
In much of the West we’re in a particularly weak state because the United States has voluntarily gutted much of its regulatory and research capabilities in the relevant security disciplines. The agencies that might lead a response to this kind of urgent effort are largely led by incompetent cronies, or are captured by corrupt industry sycophants. We shouldn’t expect to see competent, coordinated execution at the federal level; this is the administration that had unvetted DOGE workers hand your personal data over to AI platforms that were not approved for federal use or verified to comply with federal privacy standards. The most basic security practices aren’t a consideration for leadership in this regime, and policymakers like the “AI Czar” are brazenly conflicted by being direct investors in major AI players, making it impossible for them to be disinterested parties in regulating the market fairly.
So who will respond? In the United States, the response will have to come from the people themselves, with more directly coordinated actions across the private sector, academia, individual technical subject matter experts, and governments and NGOs at the local level. In the rest of the world, strategically aligned government responses will likely work with those in other sectors to anticipate, and react to, the threats that arise. We’ll probably see some weird and unlikely alliances pop up, because many of the processes that used to rely on there being adults in the room can no longer make that assumption.
Within the tech industry, it’s been disclosed that companies like Anthropic are letting major platform vendors like Google and Microsoft and Apple test out the impacts of their new tools right now, in anticipation of finding widespread vulnerabilities in their platforms. This means that other AI companies are either doing the same already, or likely to be doing so shortly. It’s likely there will be a patchwork of disclosures and information sharing, as each of the major AI platforms gains different levels of capability to assess (and exploit) security vulnerabilities, and makes different decisions about who they share their next-generation LLM technology with, and how and when they do it. Security decisions this serious should be made in the public interest by public servants with no profit motive, informed by subject matter experts. That will almost certainly not be the case.
At the same time, in the rest of the tech industry, the rumors around the next versions of Apple’s Mac and iPhone operating systems are that the focus is less on shiny new features and more on “under the hood” improvements. We should expect a lot of other phone and laptop vendors to make similar announcements, as nearly every big platform will likely have to deliver some fairly sizable security updates in the coming months. That means constantly being nagged to update our phones and apps and browsers and even our hardware — everything from our video game consoles to our Wi-Fi routers to our smart TVs.
But of course, millions and millions of apps and devices won’t get updated. The obvious result there will be people getting their data hijacked, their accounts taken over, maybe even their money or identities stolen. The more subtle and insidious effects will be in the systems that get taken over, but where the bad actors quietly lie in wait, not taking advantage of their access right away. Because of the breadth of new security vulnerabilities that are about to be discovered, it will be increasingly likely that hackers can find more than one vulnerability on a person’s machine or on a company’s technical infrastructure once they get initial access. Someone who’s running an old version of one app has likely not upgraded their other apps, either.
Open source projects are really going to get devastated by this new world of attacks. Already, as I’ve noted, open source projects are under attack as part of the broader trend of the open internet being under siege. Maintainers are being flooded by AI slop code submissions that waste their time and infuriate and exhaust people who are largely volunteering their energy for free. Now, on top of that, the same LLMs that buried maintainers in slop are enabling bad actors to find security issues and exploit them, or, in the best case, enabling researchers to find new security issues that have to be fixed. But even when new security issues are responsibly reported, maintainers still need to sift through all of the code submissions to find the legitimate security patches amongst the slop! When combined with the decline in participation in open source projects as people increasingly have their AI agents just generate code for them on demand, a lot of open source projects may simply choose to throw in the towel.
Finally, there are a few clear changes that will happen quickly within the professional security world. Security practitioners whose work consists of functions like code review for classic security shortcomings such as buffer overflows and backdoors are going to see their work transformed relatively quickly. I don’t think the work goes away, so much as it continues the trend of the last few years in moving up to a more strategic level, but at a much more accelerated pace. Similarly, this new rush of vulnerabilities will be disruptive for security vendors who sell signature-based scanning tools or platforms that use simple heuristics. In many cases, though, these companies have been coasting on the fact that they’re selling to customers too lazy to choose a new security vendor, so they may have some time to adapt or evolve before a new cohort of companies comes along selling more modern tools.
Back in 2000, a lot of folks thought the Y2K bug wasn’t “real” because they didn’t see planes falling from the sky, or a global financial meltdown. In truth, the mobilization of capable technical experts around the world served to protect everyone from the worst effects of the Y2K bug, to the point where ordinary people didn’t face any real disruptions to their day at all.
I don’t know if it’s possible for history to repeat itself here with the series of security challenges that it seems like everyone is going to be facing in the weeks and months to come. There have been pledges of some resources and some money (relatively small amounts, compared to the immense sums invested in the giant AI companies) to help open source projects and open source infrastructure organizations deal with the problems they’re going to have to tackle. A lot of the big players in the tech space are at least starting to collaborate, building on the long history of security practitioners being very thoughtful and disciplined about not letting corporate rivalries get in the way of best practices in protecting the greater good.
But it’s simply luck of the draw that Anthropic is the player that seems to be the furthest ahead in this space at the current time, and that’s the only reason we’re seeing a relatively thoughtful and careful approach to rolling out these technologies. Virtually every other frontier-level player in the LLM space, especially in the United States, will be far more reckless when their platforms gain similar capabilities. And they’ll be far more likely to play favorites about which other companies and organizations they permit to protect themselves from the coming risks.
Platforms whose funders, board members, and CEOs have openly talked about the need to destroy major journalistic institutions, or to gut civil society organizations, are certainly not going to suddenly protect those same organizations when their own platforms uncover vulnerabilities that pose an existential threat to their continued function. These aren’t just security issues — in the wrong hands, these are weapons. And that’s not to mention the global context, where the irresponsible actions of the United States’ government, which has generally had the backing of many of the big AI players’ leadership, will also incentivize the weaponization of these new security vulnerabilities.
It seems unlikely that merely keeping up with the latest software updates is going to be enough to protect everyone who needs to be protected. In the fullness of time, we’re going to have to change how we make software, how we share our code, how we evaluate trust in the entire supply chain of creating technology. Our assumptions about risk and vulnerability will have to radically shift. We should assume that every single substantial collection of code that’s in production today is exploitable.
That means some of the deeper assumptions will start to fall as well. Does that device need to be online? Do we need to be connected in this context? Does this process have to happen on this platform? Does this need to be done with software at all? The cost/benefit analysis for many actions and routines is likely to shift, maybe just for a while, or maybe for a long time to come.
The very best we can hope for is that we come out the other side of this reckoning with a new set of practices that leave us more secure than we were before. I think it’s going to be a long time until we get to that place where things start to feel more secure. Right now, it looks like it’s about ten minutes until the new millennium.
2026-04-08 08:00:00
These days, we’re all living in a constant state of crisis, foisted upon us by a world where those who are meant to keep things stable are the least stable factors in our lives. The chaos and stress of that reality makes it difficult to make any plans, let alone to make decisions if you have responsibilities for a team or organization that you’re meant to be leading. It’s easy to imagine there’s nothing we can do, or to feel hopeless. But a resource that just arrived served as a timely reminder for me that a crisis doesn’t have to be paralyzing, and we don’t have to feel overwhelmed when trying to plan how we’ll respond as leaders.
The topic of crisis has been on my mind again as I’ve been looking at the work of some friends who are the most fluent experts on the topic of crisis that I know, prompted by the release of Marina Nitze, Mikey Dickerson and Matthew Weaver's new book, Crisis Engineering.
There’s nothing more valuable than people who can step in during a moment of crisis and provide clarity, not just on how to make it through that moment, but how to seize that opportunity to actually make better things possible. A few years ago, at some of the most stressful and harrowing moments I’ve had as a leader in my business career, I got to connect with a remarkable team who ran towards the crisis that our organization was in, and helped our team get through that moment and not just persevere, but thrive. I thought a bit about the famous Mr. Rogers line about “look for the helpers”, and Matthew, Marina, and Mikey’s team at their company Layer Aleph really were the equivalent of the helpers when it comes to the place where technology meets the real world.
I’d first heard the legend of their way of working in the days and weeks after the notoriously rough launch of Healthcare.gov. (This was back when the federal government aspired to competency, when inability to deliver was considered a scandal, and when media would accurately describe something that didn’t function as a failure.) A small, scrappy, multifunctional team had been able to transform the culture of this hidebound segment of the federal government, and deliver a set of services that are saving American lives to this day. That story is detailed well in the book, but at the time, the conventional wisdom was that this was a catastrophe so impossibly complex, in a bureaucracy so hopelessly broken, that nobody could possibly fix it. And then they did. (With the help of a lot of brilliant and motivated colleagues.)
As it turns out, this was just one of many such efforts that the team would be a part of, and helped define the overall approach that they, and their collaborators, would take in addressing these highly public crises. There are so many situations where a combination of cultural and technical challenges conspire to cause extremely visible failures or disruptions that seem intractable. But over time, a set of practices and principles emerged from their work that took the response out of the realm of superstition and guesswork and into something that was almost a science. These techniques work when systems are crashing, when machines get hacked, when data are leaked, when business models are crumbling, when leadership is in disarray, when customers are angry, when users are leaving, when competitors are attacking, when funders are fleeing. In short, when the crisis is at your door.
It was years after their evolution from those early post-Healthcare.gov days into a mature practice that I reconnected with the Layer Aleph team. By then, I was running a company, and a team, that was under an extreme amount of stress, and in a situation that could easily have amounted to an existential crisis. They were able to engage with conviction and compassion, but importantly, they weren’t making it up as they went along. I think this is an idea that’s important to understand in the current moment, too — there is such a thing as expertise. We do not have to settle for incompetence and cronyism. Good people of good character with real credentials and relevant experience can bring that expertise to bear on even the most challenging situations, and when they do, even the most intractable problems are solvable.
And now, that expertise is something they’ve captured and shared.
I don’t often unabashedly endorse books about business and technology; too often I find them to be based on thin premises, padded out with clichés. But what the team here has done with their new book Crisis Engineering is something special — they documented their own experiences of turning real crises into a chance to design new, resilient systems.
Even better, they talk about how other organizations can do the same thing. The reason that I can testify that it works is because I have seen it, and I’ve seen my own team benefit from their work. In fact, I think it was during the conversations after the dust had settled from some of that work that the very phrase “crisis engineering” first emerged as a description of this way of thinking about complex problems. I’m thrilled that it’s become a useful shorthand for naming and discussing this powerful and unique way of tackling some of the most intimidating situations that companies or organizations might take on. It’s built confidence for me, and for my whole leadership team from that era, that we’ll be ready when the next challenge arrives. With apologies to Rihanna, I do want people to text me in a crisis.
The more confidence we can build in our teams that a crisis is an ordinary event that we can plan for, the more ready they will be for that moment when it arrives. That’s why I can’t recommend the book highly enough. Set aside some time to read it, and to make notes on how you might put it into practice when crisis inevitably comes to visit. You’ll be lucky to have had this resource before you need it.
You can read more about the book on their site. (And, as always, nothing I post on my site is sponsored content — I’m enthusiastically endorsing this book because I believe in what these folks have written and genuinely believe it’s worth your time to read if you lead an organization or team.)
2026-04-07 08:00:00
One of the most infuriating tropes that I see repeated in media is executives (usually from boring old companies) insisting that their employees don’t want to work hard. Media outlets dutifully repeat this pernicious lie, despite there being no evidence to back it up, and then cultural commentators either credulously amplify it, or actively take part in advancing the narrative as part of their agenda, even though they know it’s false. There is an apparently infinite appetite for commentators who troll for attention by saying how “kids these days” don’t want to work hard.
As has often been documented, the hoary chestnut of saying “nobody wants to work anymore” dates back decades, if not centuries, and it’s never been true in all those years of repetition. It is, firstly, a tactic that bosses use for negging workers in a vain attempt to drive down wages (and to successfully get media to blame people for their own underemployment), but it also serves as an effective demonstration of just how little society understands about what actually motivates people.
I’ve helped found six companies in my life, and been involved in the start of a handful of other startups and nonprofits, and literally every single one was full of people who love to work hard. The simple reason for that shared trait is that all of those teams were made up of people with a few key things in common:
If people have these things, and believe in what they’re doing together, they will joyfully work their asses off.
It is genuinely one of the best feelings in life to be completely exhausted while sitting next to someone who’s been right beside you, shoulder to shoulder, fighting to accomplish the same goal. I’ve known that to be true whether we were launching a new company into the world, campaigning to get a candidate we believed in elected, organizing to rally people around an issue, raising funds for an important cause, or even just trying to get people together for a big event or party.
Every time, the feeling of being soul-tired next to folks who you know you can trust because they showed up and worked their asses off just like you did is among the most motivating and inspiring things you can experience. Nobody who’s ever been lucky enough to have had a moment like that could ever think that people “don’t want to work”.
What people face too often is being ground down by systems, institutions, and unjust leaders who insist on creating roles where people are forced to do dehumanizing, isolated, meaningless work, while not being given the agency to make smart and empowered decisions about how the work gets done. Or worse, they’re forced to do work in service of goals that are actively harmful and destructive, and contrary to their own values, or just contrary to basic human decency. It’s not that people are unwilling to work, it is that they are working — to balance their own humanity with the crushing burdens of having to provide for themselves and their families. It is exhausting for a good person to have to do bad work or harmful work or pointless work, just to pay the bills. Being less “productive” in those situations isn’t a shortcoming, it’s a measure of still having an immune system that’s resistant to these moral injuries.
Preserving your soul and sanity in an organization with no morals is very hard work. If you think your workers aren’t working hard, maybe you’re ignoring the toughest part of their job.
And even in more moderate organizations, where things aren’t overtly evil so much as frequently frustrating and burdensome and stressful, there are still plenty of reasons that people aren’t as “productive” (as defined by bosses). Many of these reasons could be addressed by leadership taking accountability for the context and communication provided to workers for their responsibilities. Empowered workers who are given high levels of trust and autonomy tend to be extremely productive, and don’t need babysitting from management. If you treat adults like idiots, they will respond in kind.
There’s also the issue of what people are provided beyond their paychecks. Ideally, everyone on a team will have enough resources to do the job properly, but in a mission-aligned organization even that can be optional at first, because scrappy teams are pretty adept at making something out of nothing if they really have to. There just needs to be a point where they’re not starved of appropriate resources anymore, and it’s a leader’s ethical responsibility to provide everything people need to thrive and be healthy and happy in the long term. The key point here is that people are not driven by greedy, selfish motivations in organizations that accomplish meaningful things; if there’s trust that they’ll be taken care of, and that leaders are worthy of that trust, people will over-deliver in service of the common goal.
But in many organizations, people are given crappy tools, miserable working environments, overbearing surveillance of their workplaces and digital workspaces, meaningless and abstract metrics to achieve, and all of these are delivered with corporate communications that don’t sound like any human being ever. The executives who inflict all of this on the workers hope that they don’t notice that none of the execs are expected to endure any of this.
Finally, fundamentally, there is pay. Compensation and real wages have been plummeting for decades; the growing chasm of wealth inequality has been well-documented for many years. But the quiet indignities around that degradation in standard of living have increased as well, with always-accessible digital tools chipping away at leisure time by keeping people on call for their jobs during every waking hour.
The erosion of social norms around employment has been so complete over the last few decades that people born in this century don’t even believe that there was a time when it was not only routine for Americans to be union members, but for private sector companies to provide, and honor, pensions for their employees to benefit from in retirement. The mere suggestion of the idea would get a public company CEO fired in the current era.
Why would someone work for an institution that is actively working to undermine their well-being? Most large companies are spending more time strategizing against their employees than against their competitors. Too many nonprofits and other ostensibly non-corporate institutions have gotten the same idea. But it is management that does not want those workers to work — or they would act like it. If your workers aren’t massively motivated to do great work, it’s your fault. Because all you have to do is provide a worthy mission and get the fuck out of the way.
How do I know? Because I’ve gotten it right, and I’ve gotten it wrong. When I’ve taken my eye off the ball, either for unavoidable business reasons, or because I made mistakes due to inexperience or ego or distraction or competition or bad luck or whatever else, the people on my team showed it. Work stopped, quality dropped, frustration and tension increased, and all of a sudden my managers were telling me that “these folks don’t want to work”. Eventually I learned: the right thing to do is to tell those managers that we should be asking, “How are we failing?” Because, short of personal emergencies or life situations that keep them from being able to do their best work, people want to feel proud about the work they’re doing, and to feel like they’re not wasting their time every day when they go into the office. They don’t want to resent their bosses or be annoyed at their coworkers.
The few times I’ve been lucky enough to get it right have been the most satisfying times in my career. Once or twice, I’ve gotten to work for great bosses. They really inspired me to do great work, and taught me a lot that I didn’t know how to do before, or motivated me to want to learn on my own. But more importantly, they made an environment where I could collaborate with my coworkers to do more than I thought was possible, both by myself and especially together with others. I hope that at my best, the teams I’ve led have had a bit of that same feeling; I know I’ve been so proud of what I’ve seen them create and accomplish that they certainly have inspired me over the years.
But perhaps the most important lesson I’ve learned from watching great teams work is that the cynical, toxic view of people’s intrinsic motivations and work ethic that we hear so often is a damnable lie. Most people are tireless and brave and brilliant in the work they do, when it’s work that has purpose and passion. Anyone who tells you otherwise is telling on themselves, and revealing their own lack of imagination and vision about what it’s possible for people to create together.
2026-04-01 08:00:00
I finally got the chance to drop by one of my favorite podcasts, The Vergecast, where David Pierce had me on to talk about the recent conversation around Apple's moves on video podcasts, as well as the much broader big-picture considerations around keeping podcasts open. We started by grounding the conversation in the idea that "Wherever you get your podcasts" is a radical statement.
The episode also starts with a wonderful look back at Apple's first half-century as they celebrate their 50th anniversary, courtesy of Jason Snell, whose Six Colors is one of my favorite tech sites, and whose annual survey of tech expert sentiment on Apple is indispensable. He's completely fluent in Apple's culture and history, and minces no words about their recent moral failures. Definitely worth the watch! I hope you'll check out the entire episode and let me know what you think; I'm really glad to get to continue conversations that start on my site and bring them to a broader audience.
2026-03-31 08:00:00
Yesterday, I had the chance to witness someone who's one of the most dedicated, competent advocates for privacy and digital rights bring that message to a whole new platform. It turns out, it's pretty delightful, especially in a moment when our civil liberties and rights online couldn't matter more!
Cindy Cohn, the executive director of the Electronic Frontier Foundation, has been a tireless fighter for protecting everyone's digital civil liberties, and I was lucky enough to get to tag along as she took the story of that work to The Daily Show yesterday. It was no surprise that the conversation was so fluent and insightful on the topic, but I think a lot of people in the audience didn't expect that it would be such a fun and even delightful conversation about a topic that is, too often, confusing or complicated or boring.
Six years ago, when I first joined the board of the EFF, I was already a believer in the core principles the organization stood for, but one of my biggest hopes was that the messages and mission of the entire team could be brought to a larger audience. That couldn't have been more perfectly accomplished than seeing Cindy take topics that were fairly technical, or which involved fairly arcane legal concerns, and make them very accessible. And this work is vital because both an overreaching, authoritarian government and the irresponsible, unaccountable forces of big tech are threatening our rights more than ever.

I gotta admit, it was pretty fun to watch Cindy hand Jon a "Let's Sue the Government!" t-shirt. You can get one just like his if you donate to EFF or become a member!
More broadly, though, the interview was also just a wonderful milestone to see at a personal level. Part of the story that Cindy was telling on the show is the broader narrative she captures in her book, Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, out from MIT Press. (And full disclosure there, I recently joined their management board as well, more on that soon.) The book captures so many of the lessons that can only come from decades of fighting in the trenches, which are lessons that so many organizations are going to need in order to be resilient in the years to come, even if they're not working in the exact same disciplines. In addition to being something of a valedictory for Cindy's tenure at the EFF, the lessons of the book seem to set the stage for the new chapter that promises to unfold under the new executive director Nicole Ozer, as she carries forward this work.
But if it isn't clear enough, I'll say it directly: as happy as I am to celebrate good people getting the word out about vital work, these are dangerous and trying times. The most powerful people and companies in the world, along with the most authoritarian administration we've ever seen, are all working to try to roll back all of the digital rights that we rely on every day to benefit from the power of the Internet. The issues that EFF helps protect for us couldn't matter more. So, if you can, support the EFF with your donation (you can even get a copy of Cindy's book if you become a Gold-level member!) and take action in your own community to help push back the onslaught of bad policy and corporate overreach that threatens us all.
And finally, for those of you in NYC: If you liked the conversation above, and want to dig in even further, come out and join us on April 23, where I'll be sitting down with Cindy at the Brooklyn Public Library's Central Library. It promises to be an engaging conversation, and I hope to see some of you there!
2026-03-27 08:00:00
You must imagine Sam Altman holding a knife to Tim Berners-Lee's throat.
It's not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But, all the signs are pointing to the fact that we might be in endgame for "open" as we've known it on the Internet over the last few decades.
The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.
Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.
Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count while, not incidentally, also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that's not good enough.
Now, the hectobillionaires have begun their final assault on the last, best parts of what's still open, and likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether they succeed will be determined by the choices that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.
Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don't say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is.
Calling this threat "existential" is a strong statement, so we should back it up with evidence. The point I want to make here is that this is a lot broader than just one or two isolated examples of trying to win in one market. What we are seeing is the application of the same market-crushing techniques that were used to displace entire industries during the rise of social media and the gig economy, now being deployed against the very open infrastructure that made the modern internet possible.
The big tech financiers and venture capitalists who are enabling these attacks are intimately familiar with these platforms, so they know the power and influence that they have — and are deeply experienced at dismantling any systems that have cultural or political power that they can't control. And since they have virtually infinite resources, they're able to carry out these campaigns simultaneously on as many fronts as they need to. The result is an overwhelming wave of threats. It's not a coordinated conspiracy, because it doesn't need to be; they just all have the same end goals in mind.
Some examples:
robots.txt functioned for decades to describe the way that tools like search engines ought to behave when accessing content on websites, but now it is effectively dead as Big AI companies unilaterally decided to ignore more than a generation of precedent, and do whatever they want with the entirety of the web, completely without consent. Similarly, long-running efforts like Creative Commons and other community-driven attempts at creating shared declarations or definitions for content use are increasingly just ignored.

The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They're hardly getting rich; that's thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there's no fortune or fame in it.
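To make the robots.txt point concrete: the convention (now formalized as RFC 9309) is just a plain-text file at a site's root that politely asks crawlers what to leave alone. A minimal sketch, with hypothetical paths, might look like this:

```
# robots.txt, served at the root of a site (e.g. /robots.txt)
# Compliance has always been voluntary; there is no technical enforcement.

User-agent: *          # applies to every crawler
Disallow: /private/    # please don't fetch anything under /private/

User-agent: GPTBot     # OpenAI's crawler token, as one example
Disallow: /            # asks this particular crawler to stay out entirely
```

The entire system rests on crawlers choosing to honor the file, which is exactly why it collapses the moment well-funded companies decide the norms don't apply to them.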
Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it's the right way to connect with an audience. Publishers who've survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they're trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.
So, we're in endgame now. They see their chance to run the playbook again, and do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps like they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, and see that we're all in one fight together, and push back with the same ferocity with which we're being attacked, then we do have a shot at stopping them.
At one time, it was considered impossibly unlikely that anybody would create open technologies that would succeed in being useful for people, let alone that those technologies would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don't think it's any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.
Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight, either those that are at risk or those that are protecting them. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues, could use your support, and provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member. That’s because I’m trying to make sure my deeds match my words!) These are the people whom I've seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web's defenders.
Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. Very few platforms in history have helped more people achieve economic mobility: countless people got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll that extractive LLMs took when they exploited that community, training models on the generosity of the site's members without any consideration for the impact, and without reciprocating in kind.
The good of the web only exists because of the openness of the web. They can't just keep on taking and taking without expecting people to finally draw a line and say "enough". And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick's recent piece where he argued that one of the things that might enable a resurgence of the open web might be... AI. It would seem counterintuitive to anyone who's read everything I've shared here to imagine that anything good could come of these same technologies that have caused so much harm.
But ultimately what matters is power. It is precisely because technologies like LLMs are powerful that the authoritarians have rushed to try to take them over and wield them as effectively as they can. I don't think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use that same innovative spirit to build what could be, for lack of a better term, called "good AI". It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.
Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time.