Anil Dash

A tech entrepreneur and writer trying to make the technology world more thoughtful, creative and humane. 

Why are the Artemis II photos on Flickr?

2026-04-30 08:00:00

If you followed along with the recent joyful celebrations of the Artemis cruise around the moon, and took a moment to dive into the photographic archives of the mission, you might have noticed that all of the original images were shared by NASA on the venerable photo sharing service Flickr. What you might not know is… why?

Here’s the TL;DR:

  • Flickr comes from (and helped start!) the Web 2.0 era, which was based on users having control over their data
  • Tools at that time began giving creators the power to decide what license they wanted to release their content under, including permissions about how it could be shared, used, or remixed
  • Because the people who made platforms back then were users and creators themselves, they thought about the long term and wanted to be able to preserve people’s work
  • After lots of corporate shuffling, Flickr ended up in the hands of a family-owned company, SmugMug, and they made the Flickr Foundation to preserve public photos for the next 100 years
  • NASA’s images should only be on a service where they can be stored in full resolution, for the long term, dedicated to the public domain — which the other social media apps of today can’t do

The Photographic Record

First, some background for folks who might not know what Flickr is, or who may have forgotten. Flickr is a social sharing site for photography which was founded in 2004, and these days people might say that it shares some of its cofounders with Slack, though back when Slack started, everybody said that the company was started by some of the founders of Flickr. That’s because Flickr was arguably the most influential site of the Web 2.0 era, helping define everything from the user interface design to the bright colors to the easy way that developers could access data from the platform. A lot of the things that we take for granted on the modern social internet, like a friendly “voice” used to communicate to users, were pioneered by Flickr, and then quickly came to be considered standard expectations for the apps and sites that followed. It’s hard to imagine that sites from Tumblr to Grindr would have omitted their final “e”s without Flickr’s precedent.

Flickr spun out of a Canadian gaming company called Ludicorp, founded by Stewart Butterfield (later CEO/co-founder of Slack) and Caterina Fake (later an investor and chair of Etsy). The photo-sharing service was extracted from the pieces of a somewhat unsuccessful attempt at multiplayer gaming called “Game Neverending”, but it retained the playfulness of that game even as it became a social app. Flickr also inherited the fine-grained privacy controls and thoughtful community features of earlier social platforms like LiveJournal — along with being actively, intentionally moderated by actual humans who worked diligently to prevent destructive behaviors on the platform. This meant that, more than 20 years ago, this early photo sharing community typically had better social norms than people see on today’s social media apps. (A little side note: Part of Flickr/Ludicorp’s initial funding came from public money. What a remarkable way to fund lasting innovation!)

With all of these groundbreaking features, Flickr didn’t just inspire lots of other entrepreneurs to create a new wave of Web 2.0 startups, it also attracted millions of users who, for the first time, began taking photos with the primary goal of sharing them online. Prior to this moment, the earliest phones with decent cameras were coming to market (it would be years until the iPhone came out), and other photo services of the time were still often oriented towards taking film to processing facilities, and then having the professionals at those facilities scan the resulting images and post them to a clunky online service where you could tediously click through them in a virtual album. Until Flickr, photo sharing online was essentially still analog, even if the experience was technically happening online.

In Focus

Flickr wasn't a social platform first — it was a photography platform first. That means it was designed to store high-resolution versions of every image, and didn't distort pictures with things like filters. Every image showed details like what kind of camera had taken the photo, and even what specific settings were used to take the shot. People started building communities around the then-new idea of using tags to help them find content by topics online — an idea that would directly influence the creation of hashtags on Twitter a few years later.

Another core idea of the time was a firm belief in open data: people should own and control their own work. Eventually, some experts (including a then-teenage Aaron Swartz, who we'd later talk about in the early days of Markdown) created a set of standards called Creative Commons licenses, now maintained by an organization of the same name. Flickr made it easy for users to describe what permissions people had for reusing or remixing any photos they posted. (I was helping out with a blogging platform back then, and I think we were the first tool to support this stuff. It felt like a big deal at the time!)
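One of the lasting consequences of that licensing work is that permissions became machine-readable: Flickr's public API lets you filter photo searches by numeric license ID. As a small sketch (the `YOUR_API_KEY` value is a placeholder, and the real numeric license IDs should be confirmed via the `flickr.photos.licenses.getInfo` method), here's roughly how a developer might construct such a query:

```python
from urllib.parse import urlencode

# Build a Flickr REST API request for photos carrying a specific license.
# Each photo's license is exposed as a numeric ID, which is what made
# programmatic discovery and reuse of openly-licensed work possible.
def flickr_license_search_url(api_key, license_id, tags=""):
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,       # placeholder; register for a real key with Flickr
        "license": license_id,    # numeric ID; see flickr.photos.licenses.getInfo
        "tags": tags,
        "format": "json",
        "nojsoncallback": 1,
    }
    return "https://api.flickr.com/services/rest/?" + urlencode(params)

url = flickr_license_search_url("YOUR_API_KEY", 4, tags="moon")
```

The point isn't the specific endpoint; it's that the license travels with the photo as structured data rather than as fine print.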

People's Flickr images started popping up in corporate PowerPoint presentations or commercial advertising almost immediately. A little sidebar: the incredibly positive and generous intent of these open licenses has since been exploited by extractive Big AI companies, who ransacked all of the images on Flickr that had permissive licenses without any consent from, or compensation to, the creators. That might be legal by most readings of the licenses, but if you have hundreds of billions of dollars and don't think you should at least have a conversation with the photographers whose work you're using, you're probably an asshole.

Archival Prints

Our close-knit community of people building the new era of web apps was keenly aware that our users were creating culture. This realization brought a huge amount of responsibility — not just in enabling users to express themselves, but in thinking about the long term for people's ownership of their works. Public institutions had just begun to use these platforms, which meant that the content being shared wasn't just a nice picture to look at: it might be socially or even historically significant.

What happened in the years that followed was… a lot of corporate machinations. Flickr got bought by Yahoo. Flickr's founders left Yahoo. Yahoo got bought by Verizon. You can imagine how all of that went; the details aren't all that important, except to say that by the time Instagram launched, Flickr had begun to fade into obscurity. People were focused on mobile phones instead of the desktop, on sharing square images with filters instead of full-resolution photography, and on connecting socially instead of caring about photos as art or a cultural record. Nobody would post the canonical historical photo of an event with a Valencia filter on it. Most of Flickr's users moved on, rarely checking their old accounts — until a family-owned photo service named SmugMug bought the service from Yahoo. A human-scale operation with some actual heart and a love of photography was a much better home for the platform than some random division of Verizon.

Commons Sense

In 2022, the team at SmugMug, which now owned Flickr, decided to focus on Flickr’s larger place in culture. Many major institutions around the world had chosen to archive their public photos on Flickr because of its superior support for high-resolution imagery, its unique ability to declare explicit legal licenses (including public domain licenses), and its long-term reputation for reliably hosting content without any of the harms or abuses that typical social networks had inflicted on users. Museums around the world had entire catalogs on the platform, and governments routinely used it to document their public events. When I had a photo taken at an official White House event with President Obama, his team sent me the final image afterward as a Flickr link; when Zohran Mamdani met King Charles, the NYC Mayor’s Office shared those pictures on Flickr, too.

Recognizing the cultural significance of these public works, the Flickr team at SmugMug did something special with that responsibility. They made the Flickr Commons, and brought in a team with expertise in digital archiving and community. This is a project of The Flickr Foundation, designed to preserve digital legacies, and begun in collaboration with no less than the U.S. Library of Congress (back before that was an institution under siege). They are developing a hundred-year plan for how to care for these works, which is virtually unheard-of in the digital world. (You should absolutely donate to support the Flickr Foundation in their mission to preserve these vital public resources for many years to come.)

It’s in this context that NASA has long been sharing its imagery on Flickr, for all of its missions — not just Artemis II. There’s even a special section for NASA on The Commons. And since everything is provided in incredibly high resolution and includes every single detail about the photo and how it was taken, it’s possible to combine the information about the photo with other data to create amazing resources like this beautiful timeline of the entire mission. You can see Hank Green’s wonderful narration of his inspiration and creative journey behind the timeline right here:

Why Not With Us?

Anybody who’s read my site for a while knows that I’m a huge proponent of owning your own website, and having your own content live there. Shouldn’t NASA, of all institutions, have their photos live on their own nasa.gov website? Well, yes! But.

One complication is that many large institutions, especially ones that have developed complex processes for good reasons, like government agencies and big businesses, often have trouble maintaining public-facing web infrastructure over long timeframes. Running a website that millions of people can access requires constant updates and maintenance, along with guarding against a never-ending onslaught of security challenges (a task that’s rapidly getting more difficult!). And the internal knowledge of how a site was created in the first place often leaves when employees do.

In contrast, platforms that are run by technically fluent, well-intentioned and thoughtful technologists can be very effective in maintaining content over a timescale of decades. The SmugMug team has been very thoughtful in managing both their business and their technical infrastructure in order to sustain Flickr’s public archives for years to come. (Though, as mentioned, you should still donate to ensure they can keep doing so!)

What’s more painful are the more recent threats to public stewardship of this kind of content. The traditional authoritarian impulse to destroy or falsify the public record has not spared the digital realm under the current administration. Wide swaths of the government’s websites have been erased, taken offline, or had their content deleted or adulterated. Leaders who regularly post AI slop on their social media accounts, and who have begun posting lies and distortions on major websites like the White House’s, will of course not hesitate to modify or remove photos from public archives as well. By having the public’s images preserved in an independent archive in standard formats, we increase the likelihood that future generations will be able to access accurate copies of these historical records.

We’ll be glad to have archives like Flickr’s in the future, and people around the world will be glad for its place in archiving even the more mundane aspects of culture.

Taking off

I was honored to get to reflect on my long history with Flickr, and with online community, in an interview with my old friend Jessamyn West, for the Flickr Foundation’s blog. In a conversation that unspooled over a few months, I think we covered so many of the themes that resonated in what I’ve mentioned here, and what struck me most was how much I wanted a new generation of people on the internet to have their own version of the communities and experiences that we got to have when sites like Flickr were first being made. People still cherish those values!

The beautiful thing about communities and platforms like Flickr is that they remind us that not everything on the internet has to be ephemeral, not everything on the web has to be hyper-commercial. Sometimes a bunch of decent people can do a good thing for the right reasons, and the result of that work can persevere for decades. Then, others who do some of the most ambitious and astounding things imaginable can build on that work to inspire us. And then, some more regular folks can build on top of that and help us waste a little bit of time just clicking around on something fun. That’s what the internet is supposed to be about!

This isn’t just about recounting old web lore — this is about explaining the internet we have right now. Hank’s timeline site is brand new, entertaining a whole new generation, and probably the majority of the audience who are looking at it weren’t even born when Flickr was first conceived. But the reason he can build that site is because of the values and the inventiveness of the team and community who created a platform like Flickr — and because those kinds of values are durable. They might not be as loud or flashy, but they are still everywhere, quietly enabling a lot of the things we enjoy most every day.

Public dollars helped make a fascinating community, then public dollars enabled a breathtaking journey into space, and then a public commons helped a creator make a novel way to explore that journey. Lots of people chose, over and over, to be generous with their genius. These are all gifts that a bunch of strangers gave each other, over hundreds of thousands of miles, and many years. Inspiration is all around us!

A Setting Earth

(One) Good AI Is Here

2026-04-28 08:00:00

The cultural battles over AI have broken down over predictable lines in the past few years, with critics rightfully calling out the big AI platforms for training on content without consent, recklessly building without considering environmental impact, and designing platforms that are unaccountable because their code and weights (the parameters that describe how an AI model works) aren’t open for third-parties to evaluate. The AI zealots have done themselves no favors, by not only dismissing all of these valid criticisms, but by also making increasingly outlandish and extreme claims about the capabilities of the Big AI platforms, while simultaneously scaremongering about the brutal effect they’ll have on people’s lives and careers. It’s no wonder the public sentiment about AI has become so negative.

But a small cohort of us who are curious about LLMs as a technology, yet deeply critical of Big AI companies for their impact on society, have been asking what would “good” AI look like? Is it possible to make versions of these technologies that provide real benefits, and actually help people, without all of the attendant harms? We’ve had prior eras of machine learning tools that were useful technologies without being massively destructive — are the negative externalities intrinsic to LLMs in general?

We might have just gotten our first glimpse at an AI that’s actually good.

This is just one small example that I saw recently, in a very unexpected place, but I can’t get it out of my mind. It’s not a tool that every person in the world is going to use, but it feels a bit like the famous William Gibson quote, “The future is already here — it's just not very evenly distributed.” This might be a little tiny bit of a good AI future, and now we just need to distribute the same kind of thing to a lot more people.

What’s good? Something that checks every box I can think of for our most immediately positive goals: it’s trained entirely with data that were consensually gathered; it’s completely open source and open weights, so anybody can examine it to know exactly how it works and what biases or flaws it might have; and it’s designed to run on ordinary computers that normal people have access to, including machines that can run entirely on renewable and responsible energy sources. And it is controlled by creators, not extractors: people who are inarguably on the side of artists and those who make art and culture in the world, and who designed it to support, enable, and empower their expression. No billionaires or guests of Epstein’s island were involved in the creation of this technology.

Going Green

Let’s back up a little bit. Corridor Digital is a video production shop and content studio that has been popular on YouTube since the earliest days of its independent filmmaking community. They’ve stayed relevant through many changing trends and format shifts, most recently becoming wildly popular for their ongoing series of video reactions to the visual effects and stunt sequences in popular films and TV shows. Over time, the series has earned a ton of respect from many of the top practitioners in the industry in areas like VFX, stunt work, animation, and more. They even went direct to their fans with a nice subscription service, helping support their work directly.

But still, this was basically a bunch of (mostly) guys making videos. Until something interesting happened recently.

Niko Pueringer, one of the cofounders of Corridor Digital, and one of the more prominent on-screen characters in their filmed content, is not a software developer. But a few weeks ago, he decided he had reached a breaking point with one of the challenges that effects artists regularly have to deal with: green screen keying. (That’s the process in which an artist extracts a foreground image from the green background when creating a clip that will be composited together for an effects shot.) Basically, the current tools were crude enough that keying felt like an almost manual process, requiring artists to painstakingly cut out images as if they were snipping pictures from a magazine with a dull pair of scissors.
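For readers who've never touched a keyer: the classic approach is just thresholding on the green channel. Here's a deliberately minimal sketch of that idea (this is the crude baseline technique, not Niko's neural network; the function name and margin value are illustrative):

```python
import numpy as np

# A naive chroma key: mark a pixel as background when its green channel
# dominates red and blue by a fixed margin. Real footage (shadows, green
# spill, motion blur, hair) breaks this immediately, which is why keying
# has traditionally required so much manual cleanup by artists.
def naive_green_key(frame, margin=40):
    """frame: HxWx3 uint8 RGB array -> boolean foreground mask."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    background = (g - np.maximum(r, b)) > margin
    return ~background  # True where the pixel is kept as foreground

# Tiny 1x2 "frame": one pure-green pixel, one gray pixel.
frame = np.array([[[0, 255, 0], [128, 128, 128]]], dtype=np.uint8)
mask = naive_green_key(frame)  # keeps the gray pixel, drops the green one
```

A learned model replaces that single hard threshold with per-pixel judgments conditioned on context, which is exactly where the hand-tuned approach falls apart.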

So, Niko created a set of his own videos using CGI to simulate a green screen, and began training an AI model — in this case, a neural network — to learn how to key the footage that he'd generated for this purpose. (He was able to build the tools that carried out this training by asking one of the current popular commercial AI tools to help.) After a good bit of time, trial and error, and heavy computation, the end result was a system that was extremely effective at green screen keying. He even sent an early version of the system to other professionals in the industry to compare its results to their own commercial-grade tools, and they confirmed that it often performed comparably to some of the best tools on the market.

Niko made a video explaining the project — and released the code that would enable others to run the same tool for themselves. (Do check out the clip — the team have become very gifted storytellers, and the narrative does a wonderful job of bringing you along on the journey of the highs and lows of discovering how to try to invent something new.)

Opening up

Once the new tool, now called CorridorKey, was out in the wild, a community rapidly formed and instantly adopted the software into a full-fledged open source project — even though Niko had never led an open source project before. As is typical for such an enthusiast community, they were able to teach their leader about all the arcane processes involved in accepting code improvements from strangers around the world.

Within days, the community had made the tool significantly easier to use — especially for non-expert video editors who would struggle with the complexities of configuring conventional (super-nerdy) open source software. Other community members massively reduced the hardware requirements needed to perform the advanced video processing that the tool enables, moving from needing some of the most powerful workstations available to running on ordinary consumer desktop computers that many home filmmakers might have access to. And all of this for free. Many comparable tools would cost thousands, or even tens of thousands of dollars for video editing teams to use. As Niko said in his original video, he didn’t “want to pay rent for his paintbrush”.

In the follow-up video just two weeks later, it was clear that there had been an extraordinary response to the release of CorridorKey. And an even more extraordinary next milestone was achieved, with the announcement that Niko would be releasing all of the original training data for the creation of the tool — all of the videos and content used to create the model, so that others could replicate the work, or even create their own models if they wanted to improve upon the work itself.

For the technically-minded, CorridorKey is licensed under a modified Creative Commons license, with the intention of preventing commercial exploitation without consent. I’m sure this will prompt some hand-wringing about whether it fits everyone’s definitions of “open source”, but given that someone could certainly reimplement this approach from scratch using all of the material that Niko and his community have shared, I think that’s a distinction without a difference. The larger point is the turning point this represents in the AI and LLM ecosystem, and that is what’s transformative for creators who’ve been beleaguered by AI cheerleading for the last few years.

Importantly, using CorridorKey doesn’t impose any restrictions or obligations on people making videos. There’s no phoning home, no scraping of videos to be used for training models, not even collecting an email address for marketing purposes. It’s a stark contrast to what people are used to in the commercial software world, let alone the hyper-surveillance world of most Big AI companies.

Where does this lead?

Okay, so that’s one tool. But what if you’re not a video creator who does things with green screens? How does this help anybody else? There are a few really important breakthroughs here that start to help more people realize what’s possible.

  • The bad behaviors are a choice. The excuses of the Big AI companies that take content without consent, refuse to let people see their code, or insist they can’t give people control over how their models run or be responsible about their environmental impact can now be definitively refuted. If this small team of creators, who aren’t even a tech company, can make an AI that does the right thing, how come the biggest companies in the world can’t?
  • It’s about purpose, not one-size-fits-all. There’s no risk that CorridorKey is going to tell kids to self-harm in the way that ChatGPT does. Because CorridorKey has a specific job to do. And that’s the way AI should work — solving a specific problem for a particular community, instead of trying to be all things to all people, which is when these platforms start becoming unaccountable and start harming massive numbers of people.
  • It’s under-hyped, not over-hyped. If anything, the launch of CorridorKey was buried towards the end of a longer video that was about the creative process; the launch video doesn’t even mention the name of the product! The creator doesn’t make any claims about how great it is, or say it’s better than anything else, or say it’s going to change the world. Instead, he’s humble and hopeful that it’s of use to a specific community, and they respond with enthusiasm and connection and collaboration to that sincerity. This isn’t a tool that needs to be shoved in anybody’s face.

All of these traits are things that can be replicated in many more fields, by many more passionate people who don’t have to necessarily be experts, but who care about displacing the tech tycoons’ one-size-fits-all platforms with something that is human-scale and accountable.

For years, I’ve had this conviction that a better AI is possible, and I understand why many people have felt I was being naive, or that the way tech is today makes it impossible for such a thing to survive. But I think the tide is turning, and people are fed up with the software-brained CEOs forcing things on them that they don’t want. That doesn’t mean that people hate technology! It just means that they hate what these dudes have made technology into.

It’s nice to be reminded of what tech can be at its best. Sometimes it’s a thing that extracts exactly what we want to see from the background we’re trying to leave behind.

Discovering Prince, Ten Years Later

2026-04-20 08:00:00

It's been a decade since we lost Prince, and I wanted to take a moment to look back at some of the pieces I've written over the years and share some of the work I've done. Hopefully it will give you a chance to explore some aspect of his artistry or legacy that you haven't yet had a chance to discover!

Perhaps a good place to start: It's time to discover Prince — a set of starting points to look at Prince's musical catalog, with selected albums (with more than 40 albums to pick from, it can be overwhelming to know where to start!) and some playlists that I created specifically to help new fans find out exactly why we love his music so much.

Another comprehensive overview: Every video Prince ever made. I walked through all of the music videos Prince made over the four decades of his career, offering some info and context that might help you find which ones are most compelling (or weird!) and worth your time.

I've also gotten to guest on a number of podcasts and in other media over the years to discuss various aspects of Prince's career. Perhaps none was more exciting for me than talking about Prince's history of technological innovation for the official Prince podcast. Then, no less than the New York Times described me as a "Prince scholar" when it covered the discovery of the earliest known footage of Prince as a child. There are a bunch of other podcast appearances (see below) but these felt like the pinnacle of legitimacy for my career as a Prince fan.

Here on my site, there are some pieces I wrote to try to explain a few of Prince's masterworks. I wanted to give a sort of x-ray view into the larger cultural and even political context behind his choices when Prince created his best-known artistic expressions:

  • I Know Times Are Changing: This is the minute-by-minute story of how the song Purple Rain was created — covering everything from the background story of how conservative rock fans had hounded Prince's band off the stage at the turn of the 80s, to a glimpse into Prince's editing process where he turned a debut of his band into his signature song.
  • How Prince Won the Super Bowl: Many people know that Prince played the greatest Super Bowl halftime show of all time, but very few know that it wasn't just a scintillating musical performance. I get into why Prince didn't play his biggest hits like "When Doves Cry" and "Kiss", and how the show was a deeply personal statement on race, equity, and legacy.
  • Prince Interactive: Shortly after Prince's passing, I collaborated with several of the people who maintained Prince's (many!) websites over the years to help create the Prince Online Museum, an archive of many of Prince's digital works over the years. The earliest of these digital experiences is the Interactive CD-ROM which Prince released in 1994. I created a walkthrough video of the game which is shared as a resource on the site for those who've never gotten a chance to see the game in the years since its release.
  • Prince's Own Liner Notes On His Greatest Hits: I have worked hard to preserve Prince's extensive digital archives over the years, and this is one of the bits I'm most proud of. For the release of his first greatest hits set in 1993, Prince compiled a list of draft notes for his former manager Alan Leeds to use as the basis of the box set's liner notes. This draft was later posted on Prince's first website, and then quickly deleted — but not before I was able to archive a copy! So I was able to share the only surviving copy of Prince's first-person commentary on the biggest hits of his career, which is well worth a read.
  • Message From The Artist: This is another bit of digital archiving from Prince's original website of a letter that was briefly posted 30 years ago before being lost to history. In it, Prince explained the spiritual and artistic reasons behind his shocking decision to change his name to an unpronounceable symbol, and laid out the battle for ownership and control of his music which would come to define the second half of his career. The letter was quickly amended to be far less personal, and then deleted completely from Prince's website, but I was able to hold onto a copy that we can now read for ourselves.

Then, there are some fun artifacts and experiences about Prince that I found to be worth sharing, and other folks have found them to be pretty fun, too. One of my favorite stories is The Purple Raincheck, about the time that Prince invited me to his house, but I couldn't go. And yet somehow, in true Prince fashion, I ended up with an even better story in the end anyway. If you've ever wanted to know what it's like to roll up to Prince's Oscars party, this is the one for you.

At the other end of the nerdy spectrum, there's this piece about my favorite floppy disc of all time, a rarity I was able to track down which contained the obscure font that Prince's team sent out to publications when he had changed his name to an unpronounceable symbol, so that they could properly render his trademark icon. Later, with the help of the brilliant minds at Adafruit, I was able to recover the data from the disc after almost three decades, through some vintage technology and a little bit of good luck.

For Minnesota Public Radio's The Current, we also dug into Prince's history as a computer nerd. On Switched on Pop, we dug into Why U Love 2 Listen 2 Prince, with an incredible audio breakdown showing how Prince influenced everybody — including a direct connection to the biggest album of all time.

Dig, if u will

We've been lucky to have a global community of Prince scholars that's formed over the years, which regularly hosts academic symposia, publishes papers and books, delivers remarkable talks on every aspect of Prince's work and the impact of his legacy, and in general uses his art as the starting point for some pretty extraordinary cultural exploration. One manifestation of that tendency to take his work seriously is the spreadsheet of Prince recordings, which is a fan-created work designed to provide a canonical reference for the thousands of compositions that Prince created over his career, unifying the conversations and discussions that people have. This is genuine nerd stuff!

And finally, one of the things I'm most proud of is this talk I delivered just a few weeks after Prince passed, in Minneapolis on what would have been his 58th birthday. It covers a really broad swath of Prince's influences and both his technical innovations and fierce battle for artistic independence. But it also dives into a lot of my background and my family's personal history, and connects it to a lot of themes of immigration and the systems that govern how this country moves. A decade on, I think some of these themes resonate more than ever, and if you're willing to set aside some time for it, I'd really love for more people to watch it, as I think it speaks to so many of the things I care most deeply about.

In all, after the initial grief and shock of his loss, I've been pleased to see Prince's legacy and impact grow. It's been wonderful to see so many people be surprised and delighted at all the different ways his work and innovative ideas remain relevant and resonant years and even decades after he created them. And I never get tired of people around the world sending me links or images of Prince or Prince-related items, saying "this reminded me of you!". Whether it's from old friends or people I've never met, it's something very special to be connected to others through the art and creativity of a fiercely independent spirit.

Above all else, Prince wanted to encourage people to create and be creative, to have mastery over their work and their lives, to be their true selves, and to be loving and compassionate towards others. Like everyone, he was flawed and complicated and weird and contradictory. But unlike anyone, he was able to create new worlds that millions of people got to live in inside their imaginations, and to fight impossible battles against all the odds and still somehow prevail.

That's still an inspiring example everyone can follow, no matter who you are, or how you create in the world. And best of all, Prince has created a perfect soundtrack to help you do it.

The Power of Possibility

2026-04-16 08:00:00

It’s rare that we get the chance to contribute in a way we know will have real impact on the people who most deserve it, but I want to tell you about exactly that kind of opportunity.

I’ve been on the board of the Lower Eastside Girls Club for about a decade, with a front-row seat to what a truly community-focused and effective organization can do for those in need when things are done the right way. This is the model of what we want our public institutions to be — laser-focused on the needs of their members, extremely ambitious in their goals, and measurably effective in their outcomes.

I’m asking you to support the Girls Club in one of two ways:

  • You can donate directly to support the work that the Girls Club does (If you know what a donor-advised fund is — now’s your chance to use it!)
  • Or if you’re in NYC on May 7, join us at Webster Hall for our incredible 30th Anniversary Gala where we are going to throw down

Actually changing lives

The Girls Club serves girls who are amongst the most in-need in all of New York City, and boosts important measures like graduation rates to levels 15% higher than the district average. The way that the club does it is by providing year-round programming in the arts, STEM, civic engagement, leadership, wellness, college and career pathways, and much more — including a deep connection to a sense of community. All of this happens in a facility that is nothing short of magical, where there’s a green roof, a full recording studio, a commercial-grade kitchen, a wonderful crafting room, and even an actual planetarium. And all of these resources are made available to the girls entirely for free.

The programs and support that the team provides to the girls work. They change lives. I know this because I’ve seen it. Now that the club has been around for a generation, we’ve seen girls grow up to become incredible students, leaders in the community, entrepreneurs, activists, artists, and even a new generation of mentors in the Girls Club itself.

Then, the backlash against DEI and this kind of community support threatened the very survival of the Girls Club.

The club has always had its share of ups and downs, but there has never been as concerted an attack on its foundations as in the dark times of this past year. It’s taken a toll on the club and its staff, and threatened to put the programming and support for the girls at risk. After a decade on the board, I stepped up to become chair of the board to try to help.

Because the truth is, the team at the club does what works: specific, local action, that considers individuals as whole humans, and tends to their needs in a complete way. We’ve given out tens of thousands of free meals to the community as needed ever since COVID began, because people can’t learn when they are hungry. We’ve added multi-generational classes on things like wellness, because it takes the support of entire families to keep kids on the right track for their education, or to support them making big, ambitious choices to change their lives for the better. And of course there is support for every form of creativity from technology to sewing to DJing to, yes, exploring the stars in the planetarium. Because, for too long, those were areas of imagination that didn’t always get presented as options on Avenue D.

Here’s what I can promise you: every single penny that you give to support this organization will be used incredibly efficiently. The staff of the organization show up every single day to fight for these girls, and their families, and this community. I can personally attest to how accountable and effective their work is. If you are able to donate, I’ll give you a personal tour the next time you’re on the Lower Eastside, and take you through the amazing facility so that you can see for yourself the impact that you’ll be having on the future of our city, and these girls.

There’s always room for joy

Years ago, not long after I’d first joined the board of the Girls Club, we were trying to capture the spirit of what makes this place so special. It’s hard to articulate the energy, the brilliance, the optimism and spirit that the girls bring to the place through their sheer creativity and engagement. But eventually we settled on a few words that ended up becoming the slogan for the entire organization:

Joy. Power. Possibility.

I come back to those words a lot, even when things are hard, because I see them embodied in the alumni of the Girls Club who have gone out into the world as young women who are now leaders and innovators and fearless voices across the city and across the country. We’re going to need your help to ensure that another generation of vulnerable kids gets that same chance.

And the best part is that you can really experience the “joy” part of that motto if you join us at the Gala. Our annual fundraisers are not the usual stuffy nonprofit affairs. We’ve got a few tickets left for Webster Hall on May 7, where we’re honoring actress, writer, director, producer, activist and Lower Eastside legend Natasha Lyonne, H&M America’s Head of Inclusion and Diversity Donna Dozier Gordon, and our very own Lower Eastside Girls Club emerita Miladys Ramirez. Expect signature cocktails, an unforgettable dinner, and a dance floor you won't want to leave! I hope to see you there, or you can just give what you can and be there in spirit.

Y2K 2.0: The AI security reckoning

2026-04-10 08:00:00

In just the last few weeks, we’ve seen a series of software security vulnerabilities that, until recently, would each have been the biggest exploit of the year in which they were discovered. Now, they’ve become nearly routine. There’s a new one almost every day.

The reason for this rising wave of massively-impactful software vulnerabilities is that LLMs are rapidly increasing in their ability to write code, which also rapidly improves their ability to analyze code for security weaknesses. These smarter coding agents can detect flaws in commonly-used code, and then create tools which exploit those bugs to get access to people’s systems or data almost effortlessly. These powerful new LLMs can find hundreds of times more vulnerabilities than previous generations of AI tools, and can chain together multiple different vulnerabilities in ways that humans could never think of when trying to find a system’s weaknesses. They’ve already found vulnerabilities that were lurking for decades in code for platforms that were widely considered to be extremely secure.

The rapidly-decreasing cost of code generation has effectively democratized access to attacks that used to be impossible to pull off at scale. And when exploits are less expensive to create, that means that attackers can do things like crafting precisely-targeted phishing scams, or elaborate social engineering attacks, against a larger number of people, each custom-tailored to play on a specific combination of software flaws and human weaknesses. In the past, everybody got the same security exploit attacking their computer or system, but now each company or individual can get a personalized attack designed to exploit their specific configuration and situation.

Now, we’ve had some of these kinds of exploits happening to a limited degree with the current generation of LLMs. So what’s changed? Well, we’ve been told that the new generation of AI tools, currently in limited release to industry insiders and security experts, are an order of magnitude more capable of discovering — and thus, exploiting — security vulnerabilities in every part of the world’s digital infrastructure.

This leaves us in a situation akin to the Y2K bug around the turn of the century, where every organization around the world has to scramble to update their systems all at once, to accommodate an unexpected new technical requirement. Only this time, we don’t know which of our systems are still using two digits to store the date.

And we don’t know what date the new millennium starts.
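To make the analogy concrete, here's a minimal Python sketch of the original two-digit-date bug (the function names are mine, purely for illustration): a system that stores only the last two digits of the year can't tell 1900 from 2000, and one common Y2K remediation was a "pivot window" that guesses the century.

```python
def naive_year(two_digit: int) -> int:
    # A pre-Y2K system that assumed every year was in the 1900s.
    return 1900 + two_digit

def windowed_year(two_digit: int, pivot: int = 69) -> int:
    # A typical Y2K fix: a pivot window mapping 00..68 to the
    # 2000s and 69..99 to the 1900s.
    return (2000 if two_digit < pivot else 1900) + two_digit

print(naive_year(99))    # 1999 - fine
print(naive_year(0))     # 1900 - the Y2K bug
print(windowed_year(0))  # 2000 - patched
```

The difference today is that we knew exactly which assumption would break and exactly when; with AI-discovered vulnerabilities, neither the flawed assumption nor the deadline is known in advance.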

How we got here

A core assumption of software development since the turn of the century, especially with the rise of open source software in the early 2000s, was that organizations could use more shared code from third parties to accelerate their coding efficiency. The adoption of code sharing through services like GitHub, knowledge sharing on communities like Stack Overflow, and the easy discovery and integration of shared code libraries through platforms like npm (which, like GitHub, is owned by Microsoft) all rapidly accelerated the trend of openly sharing code. Today, tens of millions of developers begin their coding process by gathering a large amount of code from the internet that they want to reuse as the basis for their work. The assumption is that someone else who uses that code has probably checked it to make sure it’s secure.

For the most part, this style of working from shared code has been the right choice. Shared, community-maintained code amortized the cost of development across a large number of people or organizations, and spread the responsibilities for things like security reviews across a larger community of developers. Often, part of the calculation about whether sharing code was worth it was that you might get new features or bug fixes “for free” when others made improvements to the code that they were sharing with you. But now, all of this shared code is also being examined by bad actors who have access to the same advanced LLMs that everyone else does. And those bad actors are finding vulnerabilities in every version of every single bit of shared code. Every single major platform, whether it’s the web browser on your desktop computer, or the operating systems that run powerful cloud computing infrastructure for companies like Amazon, has been found to have security vulnerabilities when these new LLMs try to pick them apart.
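As a rough illustration of why shared code concentrates risk, here's a toy Python sketch of what a dependency audit does (every package name and advisory here is invented): compare what's installed against known-vulnerable versions, including transitive dependencies the developer may never have looked at directly.

```python
# Toy audit: all package names, versions, and advisories are
# invented for illustration.
installed = {
    "webframework": "2.1.0",
    "json-parser": "1.0.3",   # pulled in transitively
    "logging-lib": "4.2.1",   # pulled in transitively
}

# Hypothetical advisory database: package -> vulnerable versions.
advisories = {
    "json-parser": {"1.0.2", "1.0.3"},
    "logging-lib": {"4.2.1"},
}

def audit(installed, advisories):
    """Return installed packages whose versions have known advisories."""
    return sorted(
        name for name, version in installed.items()
        if version in advisories.get(name, set())
    )

print(audit(installed, advisories))  # ['json-parser', 'logging-lib']
```

Real tools work this way at much larger scale; the point of the sketch is that both flagged packages arrived transitively, so the application developer may not even know they're running them.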

In years past, when major software security issues like Heartbleed or xz were discovered, the global security community would generally follow responsible disclosure practices, and the big tech vendors and open source developers would work together to provide updates and to patch critical infrastructure. Then, there would be deliberate communication to the broader public, with detailed information for technical audiences, usually followed by some more semi-sensationalistic coverage in the general press. But the recent spate of similarly-impactful security vulnerabilities has come at such a rapid clip that the leisurely pace and careful rituals of the past are already starting to break down. It’s a bit like the acceleration of the climate crisis; nobody knows how to build a system resilient enough to handle a “storm of the century” every year. Nobody knows how to properly communicate about, and respond to, the “exploit of the year” if it’s happening every six hours.

The New Security Landscape

So, how is this going to play out? In society at large, we’re very likely to see a lot of disruption. Everything runs on software, even things we don’t think of as computers, and upgrading systems is really expensive. The harder a system is to upgrade, the more likely it is that organizations will either resist doing so or try to assign the responsibility to others.

In much of the West we’re in a particularly weak state because the United States has voluntarily gutted much of its regulatory and research capabilities in the relevant security disciplines. The agencies that might lead a response to this kind of urgent effort are largely led by incompetent cronies, or are captured by corrupt industry sycophants. We shouldn’t expect to see a competent coordinated execution at the federal level; this is the administration that had unvetted DOGE workers hand your personal data over to AI platforms that were not approved for federal use or verified to comply with federal privacy standards. The most basic security practices aren’t a consideration for leadership in this regime, and the policy makers like the “AI Czar” are brazenly conflicted by being direct investors in major AI players, making it impossible for them to be disinterested parties in regulating the market fairly.

So who will respond? In the United States, the response will have to happen from the people themselves, with more directly coordinated actions across the private sector, academia, individual technical subject matter experts, and governments and NGOs at the local level. In the rest of the world, strategically-aligned government responses will likely work with those in other sectors to anticipate, and react to, the threats that arise. We’ll probably see some weird and unlikely alliances pop up because many of the processes that used to rely on there being adults in the room can no longer make that assumption.

Within the tech industry, it’s been disclosed that companies like Anthropic are letting major platform vendors like Google and Microsoft and Apple test out the impacts of their new tools right now, in anticipation of finding widespread vulnerabilities in their platforms. This means that other AI companies are either doing the same already, or likely to be doing so shortly. It’s likely there will be a patchwork of disclosures and information sharing as each of the major AI platforms gets different levels of capability to assess (and exploit) security vulnerabilities, and makes different decisions about whom they share their next-generation LLM technology with, and how and when. Security decisions this serious should be made in the public interest by public servants with no profit motive, informed by subject matter experts. That will almost certainly not be the case.

At the same time, in the rest of the tech industry, the rumors around Apple’s next version of their Mac and iPhone operating systems are that the focus is less on shiny new features and more on “under the hood” improvements; we should expect that a lot of other phone or laptop vendors may be making similar announcements as nearly every big platform will likely have to deliver some fairly sizable security updates in the coming months. That means constantly being nagged to update our phones and apps and browsers and even our hardware — everything from our video game consoles to our wifi routers to our smart TVs.

But of course, millions and millions of apps and devices won’t get updated. The obvious result there will be people getting their data hijacked, their accounts taken over, maybe even their money or identities stolen. The more subtle and insidious effects will be in the systems that get taken over, but where the bad actors quietly lay in wait, not taking advantage of their access right away. Because of the breadth of new security vulnerabilities that are about to be discovered, it will increasingly be likely that hackers will be able to find more than one vulnerability on a person’s machine or on a company’s technical infrastructure once they get initial access. Someone who’s running an old version of one app has likely not upgraded their other apps, either.

Open source projects are really going to get devastated by this new world of attacks. Already, as I’ve noted, open source projects are under attack as part of the broader trend of the open internet being under siege. Open source maintainers are being flooded by AI slop code submissions that waste their time and serve to infuriate and exhaust people who are largely volunteering their time and energy for free. Now, on top of that, the same LLMs that enabled them to be overrun by slop code are enabling bad actors to find security issues and exploit them, or in the best case, to find new security issues that have to be fixed. But even if the new security issues are reported — they still need to sift through all of the code submissions to find the legitimate security patches amongst the slop! When combined with the decline in participation in open source projects as people increasingly have their AI agents just generate code for them on demand, a lot of open source projects may simply choose to throw in the towel.

Finally, there are a few clear changes that will happen quickly within the professional security world. Security practitioners whose work consists of functions like code review for classic security shortcomings such as buffer overflows and backdoors are going to see their work transformed relatively quickly. I don’t think the work goes away, so much as it continues the trend of the last few years in moving up to a more strategic level, but at a much more accelerated pace. Similarly, this new rush of vulnerabilities will be disruptive for security vendors who sell signature-based scanning tools or platforms that use simple heuristics. In many cases, though, these companies have been coasting on the fact that their customers are too lazy to choose a new security vendor, so they may have some time to adapt or evolve before a new cohort of companies comes along selling more modern tools.
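To see why signature-based tools struggle here, consider this deliberately simplified Python sketch (the "signature" and code snippets are invented): a scanner that matches a known-bad pattern catches the original exploit but misses a trivially reworded variant with identical behavior, which is exactly the gap that more capable code-analysis models close.

```python
import re

# Toy signature: flag code that calls eval() on a request parameter.
SIGNATURE = re.compile(r"eval\(request\.args")

def signature_scan(source: str) -> bool:
    """Return True if the known-bad pattern appears in the source."""
    return bool(SIGNATURE.search(source))

known_exploit = 'eval(request.args["cmd"])'
obfuscated = 'f = eval; f(request.args["cmd"])'  # same behavior, no match

print(signature_scan(known_exploit))  # True
print(signature_scan(obfuscated))     # False - slips past the signature
```

A model that reasons about what the code does, rather than what it looks like, flags both versions; that's the shift that makes pattern-matching vendors vulnerable to being leapfrogged.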

Avoiding Y2K26

Back in 2000, a lot of folks thought the Y2K bug wasn’t “real” because they didn’t see planes falling from the sky, or a global financial meltdown. In truth, the mobilization of capable technical experts around the world served to protect everyone from the worst effects of the Y2K bug, to the point where ordinary people didn’t face any real disruptions of their day at all.

I don’t know if it’s possible for history to repeat itself here with the series of security challenges that it seems like everyone is going to be facing in the weeks and months to come. There have been pledges of some resources and money (relatively small amounts, compared to the immense sums invested in the giant AI companies) to help open source projects and open infrastructure organizations deal with the problems they’re going to have to tackle. A lot of the big players in the tech space are at least starting to collaborate, building on the long history of security practitioners being very thoughtful and disciplined about not letting corporate rivalries get in the way of best practices in protecting the greater good.

But it’s simply luck of the draw that Anthropic is the player that seems to be the furthest ahead in this space at the current time, and that’s the only reason we’re seeing a relatively thoughtful and careful approach to rolling out these technologies. Virtually every other frontier-level player in the LLM space, especially in the United States, will be far more reckless when their platforms gain similar capabilities. And they’ll be far more likely to play favorites about which other companies and organizations they permit to protect themselves from the coming risks.

Platforms whose funders, board members, and CEOs have openly talked about the need to destroy major journalistic institutions, or to gut civil society organizations, are certainly not going to suddenly protect those same organizations when their own platforms uncover vulnerabilities that pose an existential threat to their continued function. These aren’t just security issues — in the wrong hands, these are weapons. And that’s not to mention the global context, where the irresponsible actions of the United States’ government, which has generally had the backing of many of the big AI players’ leadership, will also incentivize the weaponization of these new security vulnerabilities.

It seems unlikely that merely keeping up with the latest software updates is going to be enough to protect everyone who needs to be protected. In the fullness of time, we’re going to have to change how we make software, how we share our code, how we evaluate trust in the entire supply chain of creating technology. Our assumptions about risk and vulnerability will have to radically shift. We should assume that every single substantial collection of code that’s in production today is exploitable.

That means some of the deeper assumptions will start to fall as well. Does that device need to be online? Do we need to be connected in this context? Does this process have to happen on this platform? Does this need to be done with software at all? The cost/benefit analysis for many actions and routines is likely to shift, maybe just for a while, or maybe for a long time to come.

The very best we can hope for is that we come out the other side of this reckoning with a new set of practices that leave us more secure than we were before. I think it’s going to be a long time until we get to that place where things start to feel more secure. Right now, it looks like it’s about ten minutes until the new millennium.

When the crisis comes

2026-04-08 08:00:00

These days, we’re all living in a constant state of crisis, foisted upon us by a world where those who are meant to keep things stable are the least stable factors in our lives. The chaos and stress of that reality makes it difficult to make any plans, let alone to make decisions if you have responsibilities for a team or organization that you’re meant to be leading. It’s easy to imagine there’s nothing we can do, or to feel hopeless. But a resource that just arrived served as a timely reminder for me that a crisis doesn’t have to be paralyzing, and we don’t have to feel overwhelmed when trying to plan how we’ll respond as leaders.

The topic of crisis has been on my mind again as I’ve been looking at the work of some friends who are the most fluent experts on the topic of crisis that I know, prompted by the release of Marina Nitze, Mikey Dickerson and Matthew Weaver's new book, Crisis Engineering.

There’s nothing more valuable than people who can step in during a moment of crisis and provide clarity, not just on how to make it through that moment, but how to seize that opportunity to actually make better things possible. A few years ago, at some of the most stressful and harrowing moments I’ve had as a leader in my business career, I got to connect with a remarkable team who ran towards the crisis that our organization was in, and helped our team get through that moment and not just persevere, but to thrive. I thought a bit about the famous Mr. Rogers line about “look for the helpers”, and Matthew, Marina, and Mikey's team at their company Layer Aleph really were the equivalent of the helpers when it comes to the place where technology meets the real world.

I’d first heard tell of their way of working in the days and weeks after the notoriously rough launch of Healthcare.gov (This was back when the federal government aspired to competency, inability to deliver was considered a scandal, and media would accurately describe something that didn’t function as a failure.) A small, scrappy, multifunctional team had been able to transform the culture of this hidebound segment of the federal government, and deliver a set of services that are saving American lives to this day. That story is detailed well in the book, but at the time, the conventional wisdom was that this was a catastrophe so impossibly complex, in a bureaucracy so hopelessly broken, that nobody could possibly fix it. And then they did. (With the help of a lot of brilliant and motivated colleagues.)

As it turns out, this was just one of many such efforts that the team would be a part of, and helped define the overall approach that they, and their collaborators, would take in addressing these highly public crises. There are so many situations where a combination of cultural and technical challenges conspire to cause extremely visible failures or disruptions that seem intractable. But over time, a set of practices and principles emerged from their work that took the response out of the realm of superstition and guesswork and into something that was almost a science. These techniques work when systems are crashing, when machines get hacked, when data are leaked, when business models are crumbling, when leadership is in disarray, when customers are angry, when users are leaving, when competitors are attacking, when funders are fleeing. In short, when the crisis is at your door.

Putting it into practice

It was years after their evolution from those early post-Healthcare.gov days into a mature practice that I reconnected with the Layer Aleph team. By then, I was running a company, and a team, that was under an extreme amount of stress, and in a situation that could easily have amounted to an existential crisis. They were able to engage with conviction and compassion, but importantly, they weren’t making it up as they went along. I think this is an idea that’s important to understand in the current moment, too — there is such a thing as expertise. We do not have to settle for incompetence and cronyism. Good people of good character with real credentials and relevant experience can bring it to bear on even the most challenging situations, and when they do, even the most intractable problems are solvable.

And now, that expertise is something they’ve captured and shared.

I don’t often unabashedly endorse books about business and technology; too often I find them to be based on thin premises, padded out with cliches. But what the team here have done with their new book Crisis Engineering is something special — they documented their own experiences of turning real crises into a chance to design new, resilient systems.

Even better, they talk about how other organizations can do the same thing. The reason that I can testify that it works is because I have seen it, and I’ve seen my own team benefit from their work. In fact, I think it was during the conversations after the dust had settled from some of that work that the very phrase “crisis engineering” first emerged as a description of this way of thinking about complex problems. I’m thrilled that it’s become a useful shorthand for naming and discussing this powerful and unique way of tackling some of the most intimidating situations that companies or organizations might take on. It’s built confidence for me, and for my whole leadership team from that era, that we’ll be ready when the next challenge arrives. With apologies to Rihanna, I do want people to text me in a crisis.

The more confidence we can build in our teams that a crisis is an ordinary event that we can plan for, the more ready they will be for that moment when it arrives. That’s why I can’t recommend the book highly enough. Set aside some time to read it, and to make notes on how you might put it into practice when crisis inevitably comes to visit. You’ll be lucky to have had this resource before you need it.

You can read more about the book on their site. (And, as always, nothing I post on my site is sponsored content — I’m enthusiastically endorsing this book because I believe in what these folks have written and genuinely believe it’s worth your time to read if you lead an organization or team.)