2025-11-26 13:00:00
I am not allowed to set up a stand in front of my local Whole Foods and sell the eggs my chickens lay or the blueberries from our bushes. I am not allowed to sell remedies or heal the sick out of my home. I am not allowed to build an app that handles the logistics for thousands of people to do either of those things, either.
In all cases, regulations — rules enforced by government agencies and local authorities — prevent me. Every industry has a regulatory structure surrounding it, and its purpose is not primarily to restrict my actions, but to protect others: the complex economic ecosystem surrounding the cultivation, distribution, and sale of produce, the livelihood of the professionals at the “ground level,” and the well-being of consumers. The reasons for regulatory structures are as diverse and complex as the systems they govern, and the vast majority are good. I understand why they exist, and I’m glad they do.
With that in mind, when ride-sharing apps first launched, they seemed to me an affront to capitalism. From the sidelines, I felt as if I were watching a hostile takeover of an entire industry cloaked in “Innovation” propaganda. The same was true of Airbnb: a nearly overnight heist of hospitality revenue. Neither was a service innovation; both were interface innovations. The interface was the key to making this shakedown possible; it was designed to deconstruct the physical constraints and overhead of an industry and capture the profits.
Those of us who pointed out how unreasonable it was to allow this to happen — to ignore anti-regulatory activity just because it hid behind a sexy smartphone app — were labeled anti-capitalist. Who were we to interfere with the fundamental forces of market opportunity and competition? Innovation, after all, changes the game, and not everyone can win. Old things sometimes must be left behind, as painful as that can be.
But which is the real affront to capitalism: regulations that restrict it, or regulations that restrict those subject to it? What is the difference between me setting up an egg stand and Airbnb letting me treat my spare room like a hotel?
Money, of course! The difference between me setting up a roadside egg stand and my spare room listing on Airbnb isn’t the service we’re offering or the potential harm we might cause. It’s the money backing the business and the expected return. It would take the Airbnb of backyard chickens to create meaningful competition with Whole Foods, for example. One guy selling a few dozen eggs won’t do it. I’d need an app, lots of other chicken people, and a whole bunch of startup cash. With billions invested in EggApp, what’s a few hundred million more to pressure local and national legislatures to look the other way?
Well, that’s exactly what has been done. Uber pushed state legislatures to pass laws that preempted local authority over ride-hailing — by the mid-2010s, dozens of states had passed laws that limited cities’ ability to regulate prices, licensing, or worker standards. In San Francisco, Airbnb spent roughly $8 million to defeat Proposition F, a 2015 ballot measure that would have imposed stricter limits on short-term rentals. Both companies spent millions more on federal and state lobbying, litigation, and grassroots campaigns designed to blunt or eliminate regulations.
So the regulations that bind me — sensible regulations I support — are selectively enforced or discarded for those who can afford to hire the lawyers, lobbyists, and public relations people to work around them.
An army of pirates, and we call it innovation. Pressure and money paid to lawmakers who will change the rules, and we don’t call it a bribe?
These so-called innovations left entire classes of workers behind, their livelihoods devalued overnight, their expertise suddenly worthless because an app made it easy for anyone to compete with them without training, licensing, or insurance. The theft was abstract enough to be deniable. The Ubers and Airbnbs could claim they didn’t really take anything — they simply offered a better way to connect people. The fact that entire industries collapsed and workers were displaced was just the invisible hand of the market doing its work.
Each wave of this so-called innovation has taken more.
*
But AI might be the grandest heist of them all. The Ubering of everything.
This time, the theft is not abstract. The makers of AI have trained their models on the output of all of us — writers, artists, designers, programmers, photographers, musicians. Every piece of work we’ve created and shared, scraped and fed into systems that will now compete with us using our own labor as training data.
Is it stealing if it’s learning? I’ll head off a common objection at the pass: Yes, human learning works this way too. We all learn by reading, looking, hearing, and experiencing existing cultural output. But there’s a difference, and it’s scale. A person can plagiarize, and in doing so creates a direct line between their source material and their copy. But AI plagiarizes at an elemental level — its output is derived from everything it has ever trained on. At a certain scale — particularly the scale reflected by the market capitalization of the Magnificent Seven (Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, Tesla), all of which derive significant current value from AI — plagiarism takes on a completely different power. There’s a hard limit on the market value of one person’s plagiarism. Distributed plagiarism masquerading as “generative AI” constitutes trillions in market value.
And before someone argues that not all AI output is plagiarism — you’re right. But the less you put into an AI prompt, the more you’re just asking it to remix what it stole.
With AI, the source material was anything published to the web — as widely distributed a “public” corpus as you could imagine. But the profit is not distributed. It’s kept by a handful of companies. And here’s the thing: simply publishing something to the web does not forfeit intellectual property rights. There is an abundance of unprotected information online — public domain whether the creators like it or not — but there is also owned information there. AI training has consumed it all equally, making no distinction between what was freely offered and what was legally protected.
The web created an implicit social contract: you publish your work, it reaches people, maybe you get paid directly through subscriptions or ads, or indirectly through exposure, reputation, opportunities. AI broke that contract. A photographer’s work gets scraped into training data, AI generates similar images that compete with their stock photos, their sales drop — but they received nothing when their work was taken. No payment, no exposure, no attribution. The creators get nothing while the AI companies capture trillions.
When we no longer have value in the market, it will be because it was stolen from us. Not metaphorically. Not as a consequence of a market transformed by digital innovation. This time, it was literally stolen — work taken without permission, without compensation, and used to build the very tools that make the makers obsolete.
Again, those who point this out are labeled resistant to progress, anti-innovation, Luddites. We’re told that this is just how technology works, that we can’t stand in the way of change, that AI won’t take our jobs but “someone using AI will.” The same tired arguments that have justified every extraction, every displacement, every theft disguised as liberation.
I wonder if we’ve finally reached the point where the pattern becomes undeniable. When it was taxi drivers, we could tell ourselves it didn’t affect us. When it was hotel workers, we could rationalize that markets evolve. Perhaps now that it’s everyone who creates anything, we’ll finally admit that something has gone very wrong.
Or maybe not. Maybe we’ll continue to celebrate disruption right up until the moment we realize there’s nothing left to disrupt, no one left with skills valuable enough to command a living wage, because all of it has been fed into systems owned by a handful of companies who convinced us that this was something other than a war.
I keep thinking about those eggs I’m not allowed to sell and the regulations that prevent me from doing any harm, however small, to my neighbors. And then I think about companies valued in the trillions, training AI on the work of millions, displacing entire professions, and facing no meaningful regulatory resistance at all.
Where is the regulation to protect all of us from the dangers of AI? The dangers of losing our livelihoods. The dangers of losing our grip on the truth. The dangers of losing our place in this world. Like any technology, AI is useful. And it’s also dangerous. We need regulations not just to rein in the greedy from trampling over intellectual property, not just to protect the jobs of worthy workers, but to protect human beings from losing control of our own creation. I’m not against AI; I’m against unregulated AI.
Right now, it seems like the rules exist to protect established power from small threats while leaving everyone else vulnerable to large ones. I can’t sell eggs, but billion-dollar companies can take the sum total of human creative output and use it to make us obsolete. You could argue that when we installed the Uber app, we consented to its terms and conditions in full: not just to the conveniences it offered us in the moment, but also to the consequences it introduced to people and communities. But when did we consent to AI? It’s been added to our lives without our request or permission. It’s read our email, listened to our phone calls, and scraped our webpages — for years — without us really knowing it. Then we were given silly new AI toys and told “go make memes”; it showed up in our apps and accounts, its uninvited sparkle reminding us who owns what; just a few years later, we’re told “now go make another way of life.”
That escalated quickly.
For decades, the paranoid have worried about a new world order. They’ve waited for the day technology would be used by some people to subjugate others — a new world, ordered by machine. Few, if any, expected the new world to be for machines. And yet, even the AI makers openly worry now about a near-term future in which the machines they made no longer need them. It doesn’t have to be that way. If there is a future, it will be because we didn’t let those with power and greed addictions destroy it. If there isn’t, the most fitting epitaph on the tomb of civilization will be “we let them do it.”
2025-11-18 13:00:00
After three years of immersion in AI, I have come to a relatively simple conclusion: it’s a useful technology that is very likely overhyped to the point of catastrophe.
The best-case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe. This is a classic bubble scenario. We’ll all take a hit when the air is let out, and given the historic concentration of the market compared to previous bubbles, the hit will really hurt. The worst-case scenario is that the people with the most money at stake in AI know it’s not what they say it is. If this is true, we get the bubble and fraud with compound motives. I have an idea about one of them that I’ll get to toward the end of this essay. But first, let’s start with the hype.
As a designer, I’ve found the promise of AI to be seriously overblown. In fact, most of the AI use cases in design tend to feel like straw men to me. I’ve often found myself watching a video about using AI “end to end” in design only to conclude that the process would never survive real work. This is usually because the process depicted assumes total control from end to end — the way it might work when creating, say, a demonstration project for a portfolio, or inventing a brand from scratch with only yourself as a decision-maker. But inserting generative AI in the midst of existing design systems rarely benefits anyone.
It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or miss some specific aspect of what a person had created previously. I can think of many examples from my own team’s client work: difficult-to-replicate custom illustrative styles, impossible-to-replicate text and image layering, direct connections between images and text that even the most explicit prompts don’t make. A similar problem happens with layout. Generative AI can help with ideating layout, but fails to deliver efficiently within existing design systems. Yes, there are plenty of AI tools that will generate a layout and offer one-click transport to Figma, where you nearly always have to rebuild it to integrate it properly with whatever was there beforehand. When it comes to layout and UI, every competent designer I know will produce a better page or screen faster doing it themselves than involving any AI tool. No caveats.
My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain; the larger the use case, the larger the expense. In most of the larger use cases I have observed — where AI is leveraged to automate entire workflows, capture end-to-end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.
(Before I continue, know also that I am a co-founder of a completely AI-dependent venture, Magnolia. Beyond the design-specific use cases I’ve described, I know what it means to build software that uses AI in a far more complex manner. The investment is enormous, and the maintenance — the effort required to maintain a level of quality and accuracy of output that can compete with general purpose AI tools like ChatGPT or even AI research tools like Perplexity — is even more so. This directly supports my argument because the only reason to even create such a venture is to capitalize on the promise of AI and the normalization of “knowledge work” around it. That may be too steep a hill to climb.)
Much has already been made of the MIT study noting the preponderance of AI initiative failures in corporate environments. Those who expect a uniform application of AI and a uniform, generalized ROI see failure, while those who identify isolated applications with specific targets experience success. The former tends to be a reaction to hype, the latter an outworking of real understanding. There are dozens of small-scale applications that have large-scale effects, most of which I’d categorize as information synthesis — search, summarization, analysis. Magnolia (and any other new, AI-focused venture) fits right in there. But the sweeping, work-wide transformation? That’s the part that doesn’t hold up.
Of course, we should expect AI to increase its usefulness over time as adoption calibrates — this is the pattern with any new technology. But calibration doesn’t mean indefinite growth, and this is where the financial picture becomes troubling. The top seven companies by market value all have mutually dependent investments in AI and one another. The more money that gets injected into this combined venture, the more everyone expects to extract. But there has yet to be a viable model to monetize AI that gets anywhere close to the desired market capitalization. This is Ed Zitron’s whole thing.
This is also the same reckoning that a dot-com inflated market faced twenty-five years ago. It was obvious that we had a useful technology on our hands, but it wasn’t obvious to enough people that it wasn’t a magic money machine.
Looking back, another product hype cycle that came right afterward sums this bubble problem up in a much shorter timescale: the Segway was hyped by venture capitalists as a technology that would change how cities were built. People actually said that. But when everyone saw that it was a scooter, that suddenly sounded awfully silly. Today, we hear that AI will change how all work is done by everyone — a much broader pronouncement than even the design of all cities. I think AI is likely to come closer than the Segway did to delivering on its hype, but when the hype is that grand, even a “normal technology” outcome leaves a trillion-dollar gap between promise and product.
The AI bubble, as measured by the state of the financial market, is much, much bigger than any we’ve seen before. Even Sam Altman has acknowledged we’re likely in a bubble, shrugging it off like a billion-dollar miscalculation on a trillion-dollar balance sheet. The valuation numbers he is immersed in are extraordinarily large — and speculative — so, no wonder, but the market is dangerously imbalanced in its dependence upon them. A sudden burst or even a slower deflation will be a very big deal, and, unfortunately, we should expect it — even if AI doesn’t fail as a venture completely.
Meanwhile, generative AI presents a few other, broader challenges to the integrity of our society. The first is to truth. We’ve already seen how internet technologies can be used to manipulate a population’s understanding of reality. The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. Combined with a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources, with no standard by which to vet the things we see, hear, or read.
I really don’t like this, and to my mind, it represents, on its own, a good reason to back off from AI. Society is more than just a market. It’s a fabric of minds, all of which are vulnerable to losing coherence in the midst of AI output. Given the stated purpose of AI, such a thing would be collateral damage — you know, like testing a nuclear bomb in the town square.
But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
There is a vast chasm between what we, the users, are “sold” in AI and what they, the investors, are sold. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because at least they can trust themselves.
And yet, as much as I doubt what we are sold in AI, I feel the same about what they — the billionaire investors in an AI future — are sold as well. I doubt the AGI promise, not just because we keep moving the goalposts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise, and measurable pursuit. Unlike previous audacious scientific goals, such as mapping the human genome, AGI has never been defined precisely enough to be achieved. To think that with enough compute we can code consciousness is like thinking that with enough rainbows, one of them will have a pot of gold at its end.
Again, I think that AI is probably just a normal technology, riding a normal hype wave.
And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.
I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.
AI companies say they need datacenters to deliver on their ground-level, day-to-day user promises while simultaneously claiming they’re nearly at AGI. That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters? And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power. The value of AI can drop to nothing, but owning the land and the flow of water through it won’t.
When the list of people who own this property is as short as it is, you have a very peculiar imbalance of power that almost creates an independent nation within a nation. Globalism eroded borders by crossing them; this new thing — this Privatism — erodes them from within. Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community — so much so that they often measure up to municipal status without having the populace or governance that connects actual cities and towns to the systems that comprise our country.
When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.
The scale has already been tipped. I don’t worry about the end of work so much as I worry about what comes after — when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments. I know, you can picture me wildly gesticulating at my crazy board of pins and string, but I’m really just following the money and the power to their logical conclusion.
Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds humankind in all contexts — will emerge and “justify” all of this. Maybe.
I’m more than open to being wrong; I’d prefer it. But I’ve been watching technology long enough to know that when something requires this much money, this much hype, and this many contradictions to explain itself, it’s worth asking what else might be going on. The market concentration and incestuous investment shell game is real. The infrastructure is real. The land deals are real. The resulting shifts in power are real. Whether the AI lives up to its promise or not, those things won’t go away, and sooner rather than later, we will find ourselves citizens of a very new kind of place that no longer feels like home.
2025-11-10 13:00:00
I have heard it said that AI is “the last invention” — that when you invent something that itself can invent, it ends the entire phenomenon of human innovation. On its face, this seems impossible. All of human history is a sequence of invention. We have an idea, we make a thing, we use it to make the world around us different, living in that world gives us another idea, and so on. It’s almost inconceivable that this sequence could end.
But there’s a difference between something being impossible and something merely being impossible to understand. Perhaps AI could be “the last invention.” It would have to be the sort of AI that gives way to a completely “post-human” future — a world not only made by machines, but dominated by them. The very categories we use to think about innovation, progress, and invention might themselves be relics of a human-centered worldview that won’t survive contact with whatever comes next. As much as we like to think ourselves capable of thought that breaks the boundaries of experience, now is as good a time as any to humbly assess the reach — and accuracy — of our imagination, we fish, contemplating the shore from beneath the waves.
For now, though, AI is one of a series of technologies made by humans. And when we make a new technology, it tends to create a moment in time that is rife with tension: The illusion of arrival at the future’s doorstep sends a culture scrambling to gather consensus around how to go through it, only to quickly dissolve into a new present, with many more doors, none of which anyone was prepared for.
Ironically, that’s when we all begin to look backward.
People of a certain age have experienced this several times over.
When the compact disc arrived in the early 1980s, it was marketed as the perfection of sound recording and playback: No hiss, no crackle, no degradation. All the limitations of vinyl, 8-tracks, and cassette tapes had been overcome. And for a while, it felt like an arrival — like we had reached the end of the line for music technology. Of course, we hadn’t.
Just as CDs overtook cassette tapes, they gave way to MP3s and streaming. And just as that happened, people started collecting vinyl records again. What began as a niche eventually rose to a level that now exceeds its first run — when vinyl was all we had! This isn’t because vinyl sounds better; by any objective measure, it doesn’t. It’s because in a world of nearly infinite choice and access, the limits of a record have a renewed appeal. Nothing engages the imagination better than limits.
The same pattern played out with photography. Digital cameras promised perfect images — infinite shots, immediate feedback, no grain, no unpredictability. And yet, as digital photography became the norm, film photography experienced a renaissance. Not because it was more convenient or more accurate, but because its limits created room for imagination. The anticipation of not knowing what you’d captured until the roll was developed, the grain that suggested texture and depth, the way light leaked and colors shifted — these were not the gaps, but the territory.
Technological nostalgia is often written off as an affectation at best or Luddism at worst. But I think it’s neither. I think it is the memory of imagination. The limits of an outmoded technology — the static of a record, the grainy texture of film, the disconnected completeness of a single disc, tape, or cartridge — engaged our imagination. They provided a boundary past which our hearts yearned and our minds wondered. The technology wasn’t finished, and so neither were we.
But when a new boundary is reached, the excitement of the new is surprisingly fleeting. The wonder of all the music in the world stales quickly. Infinite choice begins to feel like no choice at all. We start to miss limits and grasp for the known. We curate playlists to simulate the constraints of albums. We seek out lo-fi aesthetics to recreate the imperfections we once tried to eliminate.
Achievement, it turns out, has nothing on ambition. And the new almost always falls short of imagination.
That’s why innovation will never end. The human heart cannot tolerate completion. It thrives on anticipation and constructs an economy of permanent speculation. Only in the heart is the promise worth more than the present.
This is what makes me skeptical of the “last invention” thesis. Not because I don’t think AI is powerful or transformative — it clearly is. But because the pattern suggests something else entirely. AI, like every technology before it, will create its own moment of tension. It will feel like arrival for a while. We’ll believe we’ve reached some endpoint, some final solution to the problem of making things. And then, inevitably, we’ll start to miss what came before.
I can already imagine the nostalgia for the first AI experiences — from the fluid cartoonishness of Will Smith eating spaghetti to those bits of media that already feel, today, as real as real. Tomorrow we’ll look back and think ourselves naive, if not blind, for buying any of the slop. We’ll remember when AI-generated images had that telltale glossiness, when the hands were always wrong, when the text was gibberish. Someone will love and collect and curate those early artifacts. Not because they were better, but because they were incomplete. Because they are beacons for the imagination.
And more than that, I think we’ll invent new things in response to AI. Not just improvements on AI itself, but entirely new categories of making and thinking that we can’t yet imagine. The way digital photography didn’t just replace film but created new forms of visual culture — social media, smartphone photography, real-time documentation. The way the internet didn’t just digitize existing media but created entirely new modes of communication and connection that no one predicted.
Every technology creates the conditions for the next technology, yes. But it also creates conditions for things that aren’t technologies at all. New forms of art, new social arrangements, new ways of being human in response to the non-human. The more powerful our tools become, the more we seem to want to assert what makes us distinct from them. The more we automate, the more we value the handmade. The more we optimize, the more we crave the inefficient and the imperfect.
The real threat of AI as “the last invention” isn’t that it will actually end human innovation and put us all out to pasture in a withering culture of leisure, but that we’ll believe it will. That we’ll internalize the narrative of our own obsolescence and stop trying. That we’ll mistake the tool for the maker and forget that the heart that yearns past the boundary is what drives everything forward.
If there’s one thing human history has shown us, it’s that we are spectacularly bad at staying satisfied. We reach the summit and immediately start looking for the next mountain. We solve a problem and invent ten new ones in its place. We create tools that do exactly what we wanted, and then we start wanting something else entirely. As a believer in the wisdom of finding contentment, I don’t always see this as a good thing. But it is who we are, for better and for worse.
2025-11-03 13:00:00
A few months ago, a client was reviewing a landing page design with my team. They had created it themselves using a page builder tool — one of those platforms that integrate content management and visual design in a single interface. They wanted feedback as we prepared to take over their design systems.
The page was a mess, though not in the way you might expect. It wasn’t ugly, nor was it broken in any meaningful way. But because the tool gave them so many options for constructing the page and made it so easy to just drop them in, drop them in they did — all of them. This page was as loaded as a continental breakfast buffet plate: Multiple calls to action competed with one another across the entire layout. Buttons, forms, links — all demanding attention, none of them working together.
I asked them a simple question: “What do you want the people who land on this page to do?”
They had a hard time answering. After some discussion, they eventually came to a consensus. The problem was that none of the calls to action crowding the page would lead to the outcome they had settled upon. This is because they had not asked themselves this question before creating the page. When you don’t let strategy shape how you use a tool, the tool becomes the strategy. The result was a page with confused and unclear objectives, and a design that was impossible to scan and respond to.
This is the first of what I consider the two biggest challenges facing design today.
Despite a continually growing awareness and experience of design in the marketplace, and despite the proliferation of design tools that put sophisticated capabilities in the hands of anyone with a web browser, designers and buyers of design services still lack a fundamental knowledge of what design is, how it works to capitalize on human attention, and how to apply it alongside brand expression. More than twenty years into my career, I still spend as much time teaching the fundamentals as I do helping people navigate new things. I could view this as a frustration, but I shouldn’t: it is what has made my career viable.
The landing page example illustrates this perfectly. The tools have made execution accessible. Anyone can now create something that looks professional, that uses modern layouts and typography, that feels designed. But producing something that feels designed does not mean that any design has happened. Most tools don’t ask you what you want someone to do. They don’t force you to make hard choices about hierarchy and priority. They offer you options, and if you don’t already understand the fundamentals of how design guides attention and serves purpose, you’ll end up using too many of them to no end.
This gap between execution and understanding has only widened as tools have become more powerful. In some ways, the democratization of design tools has made the problem worse. When design required technical skill — when you had to know your way around Photoshop or understand HTML and CSS — there was at least a barrier to entry that ensured some level of engagement with the craft. Now, the barriers are gone, but the understanding hasn’t rushed in to fill the space. And why would it? The reason these tools exist is not just to make “design” more accessible (a good thing), but to make “design” happen faster (not often a good thing). We can all make things faster than ever before, but because we’ve forgotten that friction is often a feature, not a bug, we don’t always make them better. When things took longer to make, we had time to think more about the relationship between form and function, intent and experience, input and output.
This brings me to the second challenge: the pace of change is only accelerating, and it is a serious challenge to designers to determine how much time to spend keeping up.
There is always a new tool or technology to try or learn. But then there is the work you were already doing before that thing came along. AI is the big one right now — generative tools that promise to speed up ideation, automate production, transform workflows. Before AI, it was design systems and component libraries. Before that, it was responsive design and mobile-first thinking. Before that, it was social media and the shift to user-generated content. Before that, it was Web 2.0 and the sophistication of content management systems. Before that, it was simply the web itself, and the question of how design would translate from print to screen. Before that, it was the computer itself.
I’ve been doing this for over twenty years, and this pattern has never stopped. New tools and technology have always cast a shadow on the fundamentals of design and strategy. Each wave promises to change everything, and each wave demands that designers pay attention, evaluate, learn, or risk being left behind.
The vast majority of the new things are rightly rejected. Most tools don’t solve real problems — they solve problems that the tool itself created, or they offer marginal improvements on workflows that already function well enough. But it takes time to discern the useful from the distraction, and the one thing we haven’t gotten more of over the course of my career is time.
I was an early adopter of Figma, which turned out well for me and my team. But in hindsight, it was a risk. I didn’t know then that Figma would turn out to be the dominant design tool. I got lucky. The time I invested in learning it could have been wasted on any number of tools that came and went without leaving a trace. These days, I regularly bookmark new tools in a folder labeled “tools to review,” and I try to make some time each week to look back over them, assess their features and costs, and try out the ones that actually look like they could solve a problem I have now. Most don’t end up passing. Even though that means there is a time cost to simply keeping up, it seems worthwhile in case the next Figma comes along.
But here’s the tension: these two challenges work against each other.
Teaching fundamentals takes time. Understanding how design captures attention, how it serves purpose, how it operates within the constraints of brand and strategy — these things can’t be rushed. They require conversation, iteration, examples, reflection. But just as you might carve out time to help a client understand why their landing page isn’t working, you’re being pulled away to evaluate the next tool, the next technology, the next promise of transformation, and so are they. Are they listening to your explanations, or are they already in another tab, trying another new thing? One can only hope.
You can’t build foundational knowledge while chasing the new. But you can’t ignore the new entirely, or you’ll fall behind. So you split your time, and both efforts can suffer. The fundamentals remain elusive because you’re too busy keeping up. The tools remain half-learned because you’re too busy teaching.
And the irony is that the fundamentals are what actually help you discern which new tools matter. When you understand what design is for, when you know how attention works, when you can articulate what a page or interface or system should accomplish, you can quickly assess whether a new tool serves that purpose or distracts from it. The fundamentals are the filter. But to develop that filter, you need time and space that the acceleration won’t give you.
I don’t have a solution to this. I don’t think there is one, really. This is just the reality of working in a field that sits at the intersection of human behavior and technological change. Both move, but at different speeds. Human attention, cognition, emotion — these things change slowly, if at all. Technology changes constantly. Design has to navigate both.
What I do know is that the fundamentals don’t go away. The question of “what do you want people to do?” doesn’t become obsolete. The principles of hierarchy, contrast, rhythm, flow — these persist across every tool and platform. The new things come and go, but the old questions remain. Maybe that’s the only real answer: keep asking the old questions, even when the new tools make it seem like you don’t have to.
For whatever reason, we’re living at this time, and these are our design challenges. A new tool isn’t going to solve them, and we can’t change that. But we do have some control over how we spend our time and what we prioritize. For me, I have prioritized the fundamentals of design and made them the focus of how I use any new technology. I always look for the ways a tool makes it easy to produce something flawed, and that perspective always makes me a better user of that technology and a better advisor to those who haven’t done the same.
2025-08-27 12:00:00
President Trump’s appointment of Airbnb co-founder Joe Gebbia as “Chief Design Officer” of the United States is a sickening travesty. It reveals not only a fundamental misunderstanding of both design and governance, but also an unbounded commitment to corruption.
Gebbia’s directive to make government services “as satisfying to use as the Apple Store” within three years might serve as an appealing soundbite, but it quickly collapses under the slightest scrutiny: why? how? with what design army? The creation of the so-called National Design Studio and Gebbia’s appointment as its chief should raise serious questions about credentials, institutional destruction, and continued corruption.
Gebbia’s reputation in design rests on a shaky foundation. Airbnb’s present dominance isn’t the product of real innovation. He and his friends stumbled upon an idea after listing their apartment on Craigslist for an under-the-table sublease during a popular conference. They realized money could be made, and built a website to let other people do the same thing, through them, not Craig. Good on them, but renting a space is regulated differently than selling a used couch, for many good reasons. What you could once attribute to the same quaint naivety of setting up a lemonade stand, you can no longer. The people who funded them knew better, and eventually, so did they.
What Airbnb does today is no different, other than the legitimization that comes with lots of capital and a slick app. But let’s be clear: the innovation was never about design; it was about collusion — spend enough to ensure that regulatory enforcement costs more; spin enough to make theft look heroic. With Y Combinator as a launchpad, the company rapidly built its business by systematically ignoring well-established regulations in hospitality and real estate. This sounds like a perfect match for the Trump Administration, and it’s why I cannot take any of Gebbia’s commitments now at face value. His formative business experience taught him to break existing systems rather than design better ones, and for that he was rewarded beyond anyone’s wildest imagination.
True design requires understanding constraints, working within complex systems, and serving users’ actual needs rather than exploiting regulatory gaps. Gebbia’s track record suggests a fundamentally different approach — one that prioritizes disruption over responsibility and profit over genuine public service. I’m not sure he can differentiate between entitlement and expertise, self and service, commerce and civics.
The hubris of this appointment becomes clearer when viewed alongside the recent dismantling of 18F, the federal government’s existing design services office. Less than a year ago, Trump and Elon Musk’s DOGE initiative completely eviscerated this team, which was modeled after the UK’s Government Digital Service and comprised hundreds of design practitioners with deep expertise in government systems. Many of us likely knew someone at 18F. We knew how much value they offered the country. The people in charge didn’t understand what they did and didn’t care.
In other words, we were already doing what Gebbia claims he’ll accomplish in three years. The 18F team had years of experience navigating federal bureaucracy, understanding regulatory constraints, and working within existing governmental structures — precisely the institutional knowledge required for meaningful reform.
Now we’re expected to believe that dismantling this expertise and starting over with political appointees represents progress. Will Gebbia simply rehire the 18F professionals who were just laid off? If so, why destroy the institutional knowledge in the first place? If not, how does beginning from scratch improve upon what already existed? It doesn’t and it won’t. This appointment has more in common with Trump’s previous appointment of his son-in-law to “solve the conflict in the Middle East,” which resulted in no such thing, unless meetings about hotels and real estate count.
Gebbia knows as much about this job as Kushner did about diplomacy, which is nothing. Despite years in “design,” I suspect Gebbia knows little, if anything, about it, or user experience, or public service. His expertise is in drawing attention while letting robbers in the back door.
The timeline alone reveals the proposal’s fundamental unseriousness. Gebbia promises to reform not just the often-cited 26,000 federal websites, but all government services — physical and digital — within three years. Anyone with experience in government systems or even just run-of-the-mill website design knows this is absurd. The UK’s Government Digital Service, working with a much smaller governmental structure, required over a decade to achieve significant results.
But three years is plenty of time for something else entirely: securing contracts, regulatory concessions, and other agreements that benefit private interests. Gebbia may no longer run AirBnB day-to-day, but his wealth remains tied to the company. His conspicuous emergence as a Trump supporter just before the 2024 election suggests motivations beyond public service.
Trump has consistently demonstrated his willingness to use government power to benefit his businesses and those of his collaborators. There’s a growing list of “business-minded” men granted unfettered access and authority over sweeping government initiatives under Trump who have achieved nothing other than self-enrichment. AirBnB has already disrupted hospitality; their next expansion will likely require the kind of regulatory flexibility that only comes from having allies in high government positions. Now they’ve got a man on the inside.
This appointment fits a broader pattern of regulatory capture, where industries gain control over the agencies meant to oversee them. Gebbia’s role ostensibly focuses on improving government services, but it also positions him to influence regulations that could significantly impact AirBnB’s business model and expansion plans.
The company has spent years fighting local zoning laws, housing regulations, and taxation requirements. Having a co-founder in a high-level government design role — with access to federal agencies and regulatory processes — creates obvious conflicts of interest that extend far beyond website optimization.
Full disclosure: I attended college with Joe Gebbia and quickly formed negative impressions of his character that subsequent events have only reinforced.
While personal history colors perspective, the substantive concerns about this appointment stand independently: the mismatch between promised expertise and demonstrated capabilities, the destruction of existing institutional knowledge, the unrealistic timeline claims, and the predictable potential for conflicts of interest.
Government design reform is important work that requires deep expertise, institutional knowledge, and genuine commitment to public service. It deserves leaders with proven track records in complex systems design, not entrepreneurs whose primary experience involves circumventing existing regulations for private gain.
The American people deserve government services that work better. But interacting with government could not — and should not — be more different from buying something at the Apple Store. One is an interface layer upon society — an ecosystem of its own that is irreducible to a point and inextricable from the physical and philosophical world in which it exists. The other is a store. To model one after the other is the sort of idiocy we should expect from people who either understand little to nothing about how either thing should work or just don’t care. I suspect it’s both.
2025-08-15 12:00:00
Every good piece of design has at least one detail that is the “key” to unlocking an understanding of how it works. Good designers will notice that detail right away, while most people will respond to it subconsciously, sometimes never recognizing it for what it is or what it does.
These key details are the organizing principles that make everything else possible. They’re rarely the most obvious elements — not the largest headline or the brightest color — but rather the subtle choices that create hierarchy, guide attention, and establish the invisible structure that holds a design together.
Sometimes those key details fall into place right away; they may be essential components of how an idea takes its form, or how function shapes a thing. But just as often, these keys are discovered as a designer works through iterations with extremely subtle differences. Sometimes moving elements around in a layout, perhaps even by a matter of pixels, enables a key to do its work, if not reveal itself entirely.
Without these organizing details, even technically proficient design falls flat. Elements feel arbitrary rather than purposeful. Visual hierarchy becomes muddy. The viewer’s eye wanders without direction. What separates good design from mediocre design is often nothing more than recognizing which detail needs to be the key — and having the skill to execute it properly and the discipline to clear its path.
Recently, a designer on my team and I reviewed layouts for a series of advertisements in a digital campaign. We’ve enjoyed working with this particular client — an industrial design firm specializing in audio equipment — because their design team is sophisticated and their high standards not only challenge us, but inspire us. (It may seem counter-intuitive, but it’s easier to produce good design for good designers. When your client understands what you do, they may push you harder, but they’ll also know what you need in order to deliver what they want.)
The designer had produced a set of ads that visually articulated the idea of choice — an essential psychological element for the customer profile of high-end audio technology — in a simple and elegant way. Two arrows ran in parallel until they diverged, curving in different directions. They bisected the ad space asymmetrically, with one arrow rendered in color veering off toward the left and the other, rendered in white, passing it before turning toward the right.
This white arrow was the key. It overpowered the bold, colored arrow by pushing further into the ad space, while creating a clear arc that drew the eye down toward the ad’s copy and call to action. It’s a perfect example of old-school graphic design; it will do its work without being understood by most viewers, but its function is unmistakable once you see it.
In reviewing this piece, I saw the key right away. I saw how it worked — what it unlocked. And I also recognized that the designer who made it saw it, too. I could tell based upon his choices of color, the way he positioned the arrows — the only shapes, other than text, in the entire ad — and even the way he had used the curve radius to subtly reference the distinct, skewed and rotated “o” in the brand’s logotype.
This kind of sophisticated thinking, where every element serves multiple purposes and connects to larger brand systems, separates competent design from exceptional design. The white arrow wasn’t just directing attention; it was reinforcing brand identity and creating a sense of forward momentum that aligned with the client’s messaging about innovation and choice.
I’ve often heard it said that as a designer’s career matures, the distance between their responsibility and functional details grows — that design leadership is wielded in service of the “big picture,” unencumbered by the travails of implementation so that it can maintain a purity of service to ideas and strategy.
I couldn’t disagree with this more.
While it’s true that senior designers must think strategically and guide teams rather than execute every detail personally, this doesn’t mean they should lose touch with the craft itself. The ability to recognize and create key details doesn’t become less important as careers advance — it becomes more crucial for developing teams and ensuring quality across projects.
A design director who can’t spot the organizing principle in a layout, or who dismisses pixel-level adjustments as beneath their concern, has lost touch with the foundation of what makes design work. They may be able to talk about brand strategy and user experience in broad strokes, but they can’t guide their teams toward the specific choices that will make those strategies successful.
My perspective is that no idea can be meaningful without being synchronized with reality — as informed by it as it is influential upon it. There is no “big picture” without detail. The grandest strategic vision fails when it’s not supported by countless small decisions made with precision and purpose.
No matter how one’s career matures, a designer must at least retain access to the details, if not a regular, direct experience of them. This doesn’t mean micromanaging or doing work that others should be doing. It means maintaining the ability to see how abstract concepts become concrete solutions, to recognize when something is working and when it isn’t, and to guide others toward the key details that will make their work succeed.
Without that connection to craft, we become blind to the keys at work — we lock ourselves out of an understanding of the work that could help us develop our teams or ourselves. We lose the ability to distinguish between design that looks impressive and design that actually functions. We can no longer teach what we once knew.
The best design leaders I’ve known maintain a hand in the craft throughout their careers. They may delegate execution, but they never lose their eye for the detail that makes everything else work. They understand that leadership in design isn’t about rising above the details — it’s about seeing them more clearly and helping others see them too.
Great design has always been about the details. The only thing that changes as we advance in our careers is our responsibility for ensuring those details exist in the work of others. That’s a responsibility we can only fulfill if we never stop looking for the keys ourselves.