Christopher Butler

Chief Design Officer at Newfangled and Magnolia.

Progress Without Disruption

2026-02-05 13:00:00

There’s nothing about progress that inherently requires disruption — except our inability to cooperate for stability.

Can there be progress without disruption?

It sometimes feels as if our culture has become addicted to doom — needing time to be marked by fearful anticipation rather than something more proactive or controlled. We’ve learned to expect that change must be chaotic, that innovation must be destructive, that the future must collide with the now whether we want it to or not.

But there’s nothing about progress that inherently requires disruption except our inability to cooperate for the sake of stability.

Consider the current conversation around AI and the future of work. Most people seem to agree there are three possible scenarios:

Scenario A: AI replaces nearly all functions provided by people so quickly that society can’t respond as it has to previous industrial revolutions. Mass unemployment destabilizes social structures supported by wage taxation. Even in a soft landing — universal basic income, increased corporate taxation — this is seen as catastrophic because it is contrary to the current capitalist paradigm and leaves humans with the existential problem of separating meaning and purpose from work.

Scenario B: AI replaces most current functions, but not as quickly. Sustained unemployment persists, but the gradual shift creates opportunities for humans to differentiate themselves from machines and derive value accordingly. Painful, but manageable. And afterward, this may even make possible a more deliberate and gentle passage to a new kind of society.

Scenario C: We recognize that AI’s current trajectory is destructive to the social fabric. We slow it down, change how it’s used, possibly reject aspects of it entirely. This would be the Amish approach — where observation and discussion about how a technology benefits the community determines its acceptance, use, and integration.

Most people assume Scenario C is impossible. We’re already too far down the path, they say. The technology exists, the investment has been made, the momentum is unstoppable. You can’t put the genie back in the bottle. Perhaps power and money are too committed now — unwilling and unable to accept regulation — untouchable by those who want something different.

But perhaps not. There are cultures that show us the way.

Contrary to common belief, the Amish aren’t technophobes. They do use technology, just not everything that comes along. They carefully evaluate tools communally, based on whether they strengthen or weaken their social fabric. They observe. They choose. They have agency. A telephone might help, but only if placed in a shared building rather than individual homes, so it doesn’t fragment family time. The Amish demonstrate that discernment does not mean rejection.

It seems we’ve lost the ability to do the same. More accurately, though, I believe we’ve been convinced we’ve lost it.

We’ve internalized technological determinism so completely that choosing not to adopt something — or choosing to adopt it slowly, carefully, with conditions — feels like naive resistance to inevitable progress. But “inevitable” is doing a lot of work in that sentence. Inevitable for whom? Inevitable according to whom?

The conflation of progress with disruption serves specific interests. It benefits those who profit from rapid, uncontrolled deployment. “You can’t stop progress” is a very convenient argument when you’re the one profiting from the chaos, when your business model depends on moving fast and breaking things before anyone can evaluate whether those things should be broken.

Disruption benefits the information economy. It makes a good story when it happens, and a seductive — if not addictive — constant drip of doom when it feels as if it’s just around the corner. I’d love to live in a world in which good future narratives outsold apocalyptic ones, but I don’t. And so the medium creates the message, and the message creates the moment.

Disruption has become such a powerful memetic force that we’ve simply forgotten it’s optional. We’ve been taught that technological change must be chaotic, uncontrolled, and socially destructive — that anything less isn’t real innovation. But this framing is itself a choice, one that’s been made for us by people with specific incentives.

Think about what we’ve accepted as inevitable in the last twenty-five years: the fragmentation of attention, the erosion of privacy, the monetization of human connection, the replacement of public spaces with corporate platforms, the optimization of everything for engagement regardless of human cost. We were told these were the price of progress, that resistance was futile, that the technology was neutral and the outcomes were just the natural evolution of how humans interact.

But none of it was inevitable. All of it was chosen. Not by us, but for us.

The doom addiction makes sense in this context. If change is inevitable and we have no agency over it, then the most we can do is anticipate its arrival with a mixture of dread and fascination. Doom is exciting. Doom is dramatic. Doom absolves us of responsibility because if catastrophe is coming regardless of what we do, why bother trying to prevent it?

But stability? Cooperation? Careful evaluation of whether a technology actually serves us? These feel boring, impossible, naive. They require something we seem to have lost: the belief that we can collectively decide how technology integrates into our lives rather than simply accepting whatever technologists and investors choose to build.

I am not anti-technology. I have always been fascinated, excited, and motivated by new things. I am, however, choosy. This is about reclaiming the capacity to say “not like this” or “not yet” or “only under these conditions.” It’s about recognizing that the speed and manner of technological adoption is itself a choice, and one that should be made collectively rather than imposed by those who stand to profit.

What would it take to choose Scenario C? Not to reject AI entirely, but to evaluate it the way the Amish evaluate technology — with the community’s wellbeing as the primary criterion rather than efficiency or profit or inevitability.

It would require cooperation. It would require prioritizing stability over disruption. It would require believing that we have agency over how our world changes, that progress doesn’t have to be chaotic, that we can choose to integrate new capabilities slowly and carefully rather than accepting whatever pace Silicon Valley sets.

It would require rejecting the narrative that technological change is a force of nature rather than a series of choices made by people with specific interests.

Maybe we’ve actually lost the ability to cooperate at that scale. Maybe the forces pushing for rapid deployment are too powerful, too entrenched, too good at framing their interests as inevitable progress. Maybe Scenario C really is impossible.

But I suspect it’s less that we’ve lost the ability and more that we’ve forgotten we ever had it. We’ve been told for so long that we can’t choose, that resistance is futile, that disruption is the price of progress, that we’ve internalized it as truth.

The question isn’t whether we can have progress without disruption. The question is whether we can remember that we’re allowed to choose, and whether enough of us can do that at the same time.

The Decision Before the Work

2026-02-04 13:00:00

What happens when the most consequential design decision is made before you even get started?

This is the reality of design work today. When you choose Webflow vs. WordPress vs. Shopify vs. custom development, you’re making decisions about what’s even possible to design, what fonts and components are available, how content will be structured, what the maintenance burden will be, what integrations are feasible, and what performance constraints you’ll live with.

These are foundational design decisions that shape everything downstream. But they’re rarely treated as design decisions at all. They’re business decisions (“what’s fastest and cheapest?”) or technical decisions (“what does the dev team already know?”) or procurement decisions (“we already pay for this platform”).

The pressure to do more with less, to ship faster, makes consolidation not just attractive but mandatory. You can’t justify custom development when you need to deliver three sites this quarter. You can’t argue for the perfect tool when the platform with everything integrated exists and cuts weeks off the timeline. So designers end up making the most consequential choice — the platform — based on constraints rather than design goals. And then they spend all their energy optimizing within those constraints, never questioning whether the platform itself was the right choice.

This creates a strange inversion. The design decisions that matter most — platform, structure, foundational constraints — are made by business stakeholders who already pay for a service, developers who know a particular system, or project managers looking at integration compatibility. Meanwhile, designers focus on decisions that matter less: typography within the platform’s limited font library, layout within the platform’s grid system, interactions within the platform’s available animations.

And there’s a compounding effect. Once a platform is chosen, it’s extremely difficult to switch. You’re locked in. Every subsequent project becomes “well, we’re already on Webflow, so…” The initial choice compounds over years. The design constraint becomes permanent.

One platform may integrate with a particular source for web fonts while another offers something different. One handles e-commerce elegantly but struggles with content management. Another excels at blogs but limits design flexibility. These trade-offs determine what you can make before you’ve made anything. We are rarely able to consider these things at the right time because the pressures we face make consolidation extremely attractive, if not entirely necessary.

Platform consolidation isn’t neutral. Each platform embodies assumptions about what design should be, what interactions should feel like, what structures make sense. When most design happens within platforms, most design loses its definition. It cannot distinguish itself from something else, whether by look, feel, features, or even purpose.

This makes designed artifacts with business objectives weaker, and it makes designers who produce them this way for years weaker thinkers.

The short-term question is whether designers can reclaim platform selection as a design decision. Can we advocate for the right platform early, making the case that choosing the right foundation matters more than any individual design decision that comes after? Or has the “do more with less, faster” pressure made that impossible?

Platform selection is a necessary design skill: becoming expert at understanding what each platform enables and constrains, being able to translate that into business language, and helping stakeholders see that a platform choice isn’t just about speed or cost — it’s about what you’ll be able to make, how it will evolve, and what constraints you’ll live with for years. All non-negotiable.

But this requires designers to think like strategists and speak the language of business constraints. It requires clients and stakeholders to trust that a designer’s platform recommendation is about design outcomes, not just designer preference. And it requires everyone to acknowledge that often the most important design decision happens before the designer has even gotten started.

But the long-term question is this: How and when do we differentiate between creativity and configuration?

When the medium determines so much that design work looks like taking a multiple-choice test, what does it mean to call yourself a designer? Are you designing, or are you configuring? Plenty of designers will think, “so what if my day job looks like that — I can be creative on my own time.” But after 20+ years of working as a designer, I know two things extremely well: creative energy is not in infinite supply, and what you do from 9-5 will shape how you think, how you see, and how you work.

Replication Is Not Innovation

2026-02-02 13:00:00

AI’s efficiency gains don’t justify trillion-dollar valuations.

The economy is both thriving and failing, depending on where you’re standing. The stock market remains strong, driven almost entirely by the valuations of seven companies. Meanwhile, most people experience a daily reality defined by inflation — everything costs more, no one is being paid more, and the gap between what the market says is happening and what life feels like keeps widening.

This fork in the economy isn’t an accident. I think it reveals something fundamental about AI that we’ve been unwilling to name: it delivers efficiency, not innovation. And the market — the combined view of currency-wielding humans on Earth — is starting to figure that out.

I consider myself an AI moderate. I think AI is a very useful technology. It can accelerate work, uncover patterns, and illuminate information a person would never find on their own. The fact that I’m using it right now to help refine this essay speaks to that utility.

To be clear: I have no issue with the kinds of AI driving scientific discoveries — protein folding, drug discovery, materials science. But those breakthroughs come from machine learning and analytical systems, not the large language model generative AI that’s driving the valuations of companies like Nvidia, Microsoft, and Alphabet. If anything, it’s bizarre that the companies using AI to advance science aren’t worth more than the ones using LLMs to force copilots on everyone.

When the internet became publicly available, it delivered immediate, ground-level transformation. Email didn’t just make it faster to send a message across the country — it made it possible to send messages virtually anywhere, at any time. No one was going to run a distributed company communicating with letters; email made that possible. E-commerce didn’t speed up shopping — it connected buyers with products no matter where they were. The internet didn’t just make existing things faster — it made entirely new things possible. That’s innovation.

AI is different. Despite being a relative triumph of research and development, it doesn’t actually deliver net-new value. It replicates human effort nearly instantaneously, but it hasn’t created anything fundamentally new in human experience. Those of us deeply immersed in it can see value in synthesis and analysis at inhuman speeds. But for most people, the hallmarks of AI are immediately perceivable: the uncanny valley imagery, the distortions, the flatness.

This is why AI is almost entirely pitched these days around gaining efficiency — doing what you’ve always done faster, with less cost. That’s valuable, certainly, but to whom? Companies implementing AI tools expect their employees to use them to do more, faster. When an employee uses AI to do what two employees used to do, will they get paid twice what they used to? Of course not. But the company will pocket the difference. That will make the company richer, but does it really make it a better investment? Is efficiency why any of the top traded stocks cost what they do?

And there’s a deeper problem. If AI-driven efficiency means one worker doing what two used to do, eventually it means no workers doing what used to require hundreds. Mass unemployment is the logical endpoint of efficiency without innovation. The fewer people working, the fewer people buying. No amount of efficiency will keep a company profitable without customers who can afford what they’re selling. We’re watching companies optimize themselves toward a future where no one can afford their products.

Maybe AI will create new kinds of work, the way every previous automation wave eventually did. But that’s in the future. Right now, AI isn’t delivering those opportunities to anyone, and yet it’s the basis for historically unprecedented corporate valuations with almost no trickle-down to the average person.

And this is where the forked economy comes into focus. The traded share-based economy is essentially betting on seven companies whose valuations depend on AI delivering transformative innovation. But the Economy — the one experienced by everyone who holds and spends money — is registering something else entirely. It’s seeing efficiency gains that benefit corporations without corresponding benefits to workers or consumers. We’re at a point of divergence between value and perceived value, and that divergence is costly. The market can sustain the fiction for a while, but eventually the gap becomes untenable. When trillions in valuation rest on the promise of transformation, and what’s actually being delivered is optimization, something has to give.

The reason the market remains strong despite most people’s experience deteriorating is that the market isn’t most people anymore. It’s a handful of massive valuations propped up by the promise that AI will eventually deliver what the internet did — a genuine expansion of human capability and economic possibility. But I don’t think currency-wielding humans making daily decisions about value are convinced that forward-moving innovation has actually happened.

They’re right to be skeptical. Innovation creates new markets, new possibilities, new forms of value that didn’t exist before. Efficiency optimizes existing markets, existing processes, existing value chains. Both matter, but they’re not the same thing. And they don’t justify the same valuations.

The internet made email possible, then social networks, then streaming media, then entirely new industries that no one predicted. AI has made… slightly better autocomplete? Faster image generation that still looks uncanny? Customer service bots that frustrate more than they help? The gap between the hype and the reality is a trillion-dollar problem.

I’m not saying AI won’t eventually deliver genuine innovation. It might. But right now, what we have is a technology that excels at replication — at doing what humans already do, faster and cheaper — without expanding what’s possible. And an economy built on the assumption that replication equals innovation is an economy built on sand.

I’d love to rely upon the market’s “self-preservation instinct” to sort this all out. Perhaps it will. The question is how much damage a correction will cause, whether we’ll learn anything from it, and who will be left alive and trading.

Fifteen Clocks

2026-01-31 13:00:00

On clocks as interfaces to time, and a decade measured in rotations of a single hand.

For our first wedding anniversary, I gave my wife a clock that measures years. It’s called The Present, created by Scott Thrift, a designer whose work I’d followed for some time. I’d invested in his Kickstarter campaign, and the clock arrived just in time for that first milestone. A timepiece that takes 365 days to complete a single rotation seemed like the perfect gift for marking an anniversary — a reminder that marriage is measured not in moments but in seasons.

The Present is beautiful: about twelve inches in diameter, with a domed glass front and a single white hand that passes over a rainbow gradient. The gradient flows from a sliver of white at the winter solstice through blue, green, yellow, orange, and red, so you watch the seasons change as the hand moves from winter’s pale beginning through spring’s blue-green to summer’s yellow-orange to autumn’s deep red. We’ve watched that hand make ten complete rotations, each one marking another year together.

That clock took its place in what has become a collection of fifteen. Each one measures time differently — some track the familiar twelve-hour cycle, others span longer periods like The Present’s year-long journey. People often ask why I collect them. The answer lies somewhere between their mechanical beauty and what they represent: they are interfaces to time itself, each offering a different way of seeing our relationship with duration and change.

The collection fills our home deliberately. With some exceptions, most rooms have a single clock, and I’ve matched each to its space in a way that feels right. Most are silent — no audible ticks or chimes — but the loudest one hangs in our downstairs bathroom. Its mechanism drives what’s marketed as a “silent sweeping” second hand, but it produces a very audible hum. In a typically silent space, that mechanical presence becomes something you can appreciate rather than tolerate.

In our downstairs hallway hangs a beautiful red Canetti clock — three stacked discs that rotate with time. You can see it across rooms, its bright color pulling you through the house, organizing space around temporal awareness.

Another Scott Thrift piece hangs in our bedroom. This one measures a 24-hour period, pulling a single hand across a gradient from white to purple, marking the progression from sunrise to sunset. It feels appropriate to see at the start and end of each day — a reminder that time follows natural rhythms, not just the arbitrary divisions of twelve hours repeated twice.

For me, and I hope for anyone who spends time in our home, this is more of a composition than a collection. Each clock creates its own temporal experience in its space, and together they create a kind of temporal architecture for how we move through our home and our days.

Most of the clocks in the house, even the vintage ones, now use battery-powered quartz movements — the variations are in how those movements drive hands across the clock face. Some have second hands that tick forward second by second, others have silently sweeping second hands, others have only minute or hour hands, and a few have other geometric forms that rotate to mark time.

There’s an illusion of perpetual motion — of course if the battery dies, the movement stops — but as long as it’s working, the feeling of constancy is a balm on the mind distracted by the churning woes of the world. The mechanics themselves are fascinating to me. These clocks don’t create digital worlds in the way that other personal machines I’ve written about do. But their machinery is almost like a key that opens the door onto an inner world — one that is quieter and protected from the volatility outside.

My phone shows 3:47 PM. My year clock shows late January. The phone’s precision can sometimes feel like a kind of tyranny — uniform, invisible, relentless. The year clock acknowledges that time has qualities, not just quantities.

There’s something profound about how mechanical clocks make time visible through motion and geometry. They don’t just display time; they embody it. Their movements create a kind of physical poetry. In an age where most of our interfaces try to hide their mechanisms behind smooth glass surfaces, these clocks celebrate their machinery, their gears, their essential nature as tools that transform abstract time into tangible movement.

Medieval scholars kept skulls on their desks — Memento Mori, reminders of death. I suppose that, in a way, my clocks serve a similar purpose. Each tick, each rotation, each completed cycle is a reminder of time’s inexorable passage and our own finite allocation of it. We only get so much; it is always moving forward; there is no rewind.

When The Present’s hand completes another full rotation, when another year has passed in what feels like a moment, I’m confronted with the speed of it all. Already, its hand has made ten complete journeys around that rainbow gradient. How many more?

Perhaps this is why I’m drawn to how these machines require attention — clocks need winding, adjusting, maintaining. This interaction creates a more intentional relationship with time. When I wind a clock, I’m not just maintaining a mechanism; I’m participating in the measurement of my own duration. When a battery dies and a clock stops, there’s a strange silence where constant motion used to be. It breaks the spell, reminds you that these systems of measurement are fragile, that they require care to continue.

We live at a time when digital technology distorts time. Perpetual connectivity, endless scrolling, and always more to consume build the illusion of forever — that it will always flow — and the illusion of never — that any one bit of information has the lifespan of a mayfly. The digital wants to be timeless despite being the output of short-lived humans operating machines.

A clock can be a tether back to reality — a physical marker of duration that provides a vital reminder that our time is bounded, textured, and follows patterns older than any machine.

Each clock in my collection is an argument for a different way of seeing time, a different way of being in time. They are personal machines in the truest sense — not because they create virtual worlds, but because their machinery unlocks contemplative ones. Each creates its own temporal world, its own rhythm, its own reminder.

Population: 1.

Each, in its own way, reminds me that a moment can be as long or as brief as we need it to be. This is the long now, and this, too, shall pass.

Disruption, or Theft?

2025-11-26 13:00:00

We consented to Uber’s terms and conditions. When did we consent to AI?

I am not allowed to set up a stand in front of my local Whole Foods and sell the eggs my chickens lay or the blueberries from our bushes. I am not allowed to sell remedies or heal the sick out of my home. I am not allowed to build an app that handles the logistics for thousands of people to do either of those things.

In all cases, regulations — rules enforced by government agencies and local authorities — prevent me. Every industry has regulatory structure surrounding it. Their purpose is not primarily to restrict my actions, but to protect others: the complex economic ecosystem surrounding the cultivation, distribution, and sale of produce, the livelihood of the professionals at the “ground level,” and the well-being of consumers. The reasons for regulatory structures are as diverse and complex as the systems they govern, and the vast majority are good. I understand why they exist, and I’m glad they do.

With that in mind, when ride-sharing apps first launched, they seemed to me an affront to capitalism. From the sidelines, I felt as if I were watching a hostile takeover of an entire industry cloaked in “Innovation” propaganda. The same was true of Airbnb: a nearly overnight heist of hospitality revenue. Neither was service innovation; it was interface innovation. The interface was the key to making this shakedown possible; it was designed to deconstruct the physical constraints and overhead of industry and capture the profits.

Those of us who pointed out how unreasonable it was to allow this to happen — to ignore anti-regulatory activity just because it hid behind a sexy smartphone app — were labeled anti-capitalist. Who were we to interfere with the fundamental forces of market opportunity and competition? Innovation, after all, changes the game, and not everyone can win. Old things sometimes must be left behind, as painful as that can be.

But which is the real affront to capitalism: regulations that restrict it, or regulations that restrict those subject to it? What is the difference between me setting up an egg stand and Airbnb letting me treat my spare room like a hotel?

Money, of course! The difference between me setting up a roadside egg stand and my spare room listing on Airbnb isn’t the service we’re offering or the potential harm we might cause. It’s the money backing the business and the expected return. It would take the Airbnb of backyard chickens to create meaningful competition with Whole Foods, for example. One guy selling a few dozen eggs won’t do it. I’d need an app, lots of other chicken people, and a whole bunch of startup cash. With billions invested in EggApp, what’s a few hundred million more to pressure local and national legislatures to look the other way?

Well, that’s exactly what has been done. Uber pushed state legislatures to pass laws that preempted local authority over ride-hailing — by the mid-2010s, dozens of states had passed laws that limited cities’ ability to regulate prices, licensing, or worker standards. In San Francisco, Airbnb spent roughly $8 million to defeat Proposition F, a 2015 ballot measure that would have imposed stricter limits on short-term rentals. Both companies spent millions more on federal and state lobbying, litigation, and grassroots campaigns designed to blunt or eliminate regulations.

So the regulations that bind me — sensible regulations I support — are selectively enforced or discarded for those who can afford to hire the lawyers, lobbyists, and public relations people to work around them.

An army of pirates, and we call it innovation. Pressure and money paid to lawmakers who will change the rules, and we don’t call it a bribe?

These so-called innovations left entire classes of workers behind, their livelihoods devalued overnight, their expertise suddenly worthless because an app made it easy for anyone to compete with them without training, licensing, or insurance. The theft was abstract enough to be deniable. The Ubers and Airbnbs could claim they didn’t really take anything — they simply offered a better way to connect people. The fact that entire industries collapsed and workers were displaced was just the invisible hand of the market doing its work.

Each wave of this so-called innovation has taken more.

 

*

 

But AI might be the grandest heist of them all. The Ubering of everything.

This time, the theft is not abstract. The makers of AI have trained their models on the output of all of us — writers, artists, designers, programmers, photographers, musicians. Every piece of work we’ve created and shared, scraped and fed into systems that will now compete with us using our own labor as training data.

Is it stealing if it’s learning? I’ll head off a common objection at the pass: Yes, human learning works this way too. We all learn by reading, looking, hearing, and experiencing existing cultural output. But there’s a difference, and it’s scale. A person can plagiarize, and in doing so creates a direct line between their source material and their copy. But AI plagiarizes at an elemental level — the output is derived from everything it has ever trained on. At a certain scale, particularly that reflected by the market capitalization of the Magnificent Seven — Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta, Tesla — all of which derive significant current value from AI, it takes on completely different power. There’s a hard limit on the market value of one person’s plagiarism. Distributed plagiarism masquerading as “generative AI” constitutes trillions in market value.

And before someone argues that not all AI output is plagiarism — you’re right. But the less you put into an AI prompt, the more you’re just asking it to remix what it stole.

With AI, the source material was anything published to the web — as widely distributed a “public” corpus as you could imagine. But the profit is not distributed. It’s kept by a handful of companies. And here’s the thing: simply publishing something to the web does not forfeit intellectual property rights. There is an abundance of unprotected information online — public domain whether the creators like it or not — but there is also owned information there. AI training has consumed it all equally, making no distinction between what was freely offered and what was legally protected.

The web created an implicit social contract: you publish your work, it reaches people, maybe you get paid directly through subscriptions or ads, or indirectly through exposure, reputation, opportunities. AI broke that contract. A photographer’s work gets scraped into training data, AI generates similar images that compete with their stock photos, their sales drop — but they received nothing when their work was taken. No payment, no exposure, no attribution. The creators get nothing while the AI companies capture trillions.

When we no longer have value in the market, it will be because it was stolen from us. Not metaphorically. Not as a consequence of a market transformed by digital innovation. This time, it was literally stolen — work taken without permission, without compensation, and used to build the very tools that make the makers obsolete.

Again, those who point this out are labeled resistant to progress, anti-innovation, Luddites. We’re told that this is just how technology works, that we can’t stand in the way of change, that AI won’t take our jobs but “someone using AI will.” The same tired arguments that have justified every extraction, every displacement, every theft disguised as liberation.

I wonder if we’ve finally reached the point where the pattern becomes undeniable. When it was taxi drivers, we could tell ourselves it didn’t affect us. When it was hotel workers, we could rationalize that markets evolve. Perhaps now that it’s everyone who creates anything, we’ll finally admit that something has gone very wrong.

Or maybe not. Maybe we’ll continue to celebrate disruption right up until the moment we realize there’s nothing left to disrupt, no one left with skills valuable enough to command a living wage, because all of it has been fed into systems owned by a handful of companies who convinced us that this was something other than a war.

I keep thinking about those eggs I’m not allowed to sell and the regulations that prevent me from doing any harm, however small, to my neighbors. And then I think about companies valued in the trillions, training AI on the work of millions, displacing entire professions, and facing no meaningful regulatory resistance at all.

Where is the regulation to protect all of us from the dangers of AI? The dangers of losing our livelihoods. The dangers of losing our grip on the truth. The dangers of losing our place in this world. Like any technology, AI is useful. And it’s also dangerous. We need regulations not just to rein in the greedy from trampling over intellectual property, not just to protect the jobs of worthy workers, but to protect human beings from losing control of our own creation. I’m not against AI; I’m against unregulated AI.

Right now, it seems like the rules exist to protect established power from small threats while leaving everyone else vulnerable to large ones. I can’t sell eggs, but billion-dollar companies can take the sum total of human creative output and use it to make us obsolete. You could argue that when we installed the Uber app, we consented to its terms and conditions in full: not just to the conveniences it offered us in the moment but the consequences it introduced to people and communities. But when did we consent to AI? It’s been added to our lives without our request or permission. It’s read our email, listened to our phone calls, and scraped our webpages — for years — without us really knowing it. Then we were given silly new AI toys and told “go make memes”; it showed up in our apps and accounts, its uninvited sparkle reminding us who owns what; just a few years later, we’re told “now go make another way of life.”

That escalated quickly.

For decades, the paranoid have worried about a new world order. They’ve waited for when technology will be used by some people to subjugate others — a new world, ordered by machine. Few, if any, expected the new world to be for machines. And yet, even the AI makers openly worry now about a near-term future in which the machines they made no longer need them. It doesn’t have to be that way. If there is a future, it will be because we didn’t let those with power and greed addictions destroy it. If there isn’t, the most fitting epitaph on the tomb of civilization will be “we let them do it.”

What AI is Really For

2025-11-18 13:00:00

Best case: we’re in a bubble. Worst case: the people profiting most know exactly what they’re doing.

After three years of immersion in AI, I have come to a relatively simple conclusion: it’s a useful technology that is very likely overhyped to the point of catastrophe.

The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe. This is a classic bubble scenario. We’ll all take a hit when the air is let out, and given the historic concentration of the market compared to previous bubbles, the hit will really hurt. The worst case scenario is that the people with the most money at stake in AI know it’s not what they say it is. If this is true, we get the bubble and fraud with compound motives. I have an idea about one of them that I’ll get to toward the end of this essay. But first, let’s start with the hype.

As a designer, I’ve found the promise of AI to be seriously overblown. In fact, most of the AI use cases in design tend to feel like straw men to me. I’ve often found myself watching a video about using AI “end to end” in design only to conclude that the process would never survive real work. This is usually because the process depicted assumes total control from end to end — the way it might work when creating, say, a demonstration project for a portfolio, or inventing a brand from scratch with only yourself as a decision-maker. But inserting generative AI in the midst of existing design systems rarely benefits anyone.

It can take enormous amounts of time to replicate existing imagery with prompt engineering, only to have your tool of choice hiccup every now and again or just not get some specific aspect of what a person had created previously. I can think of many examples from my own team’s client work: difficult-to-replicate custom illustrative styles, impossible-to-replicate text and image layering, direct connections between images and texts that even the most explicit prompts don’t make. A similar problem happens with layout. Generative AI can help with ideating layout, but fails to deliver efficiently within existing design systems. Yes, there are plenty of AI tools that will generate a layout and offer one-click transport to Figma, where you nearly always have to rebuild it to integrate it properly with whatever was there beforehand. When it comes to layout and UI, every competent designer I know will produce a better page or screen faster doing it themselves than involving any AI tool. No caveats.

My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain. The larger the use case, the larger the expense. Most of the larger use cases that I have observed — where AI is leveraged to automate entire workflows, or capture end to end operational data, or replace an entire function — the outlay of work is equal to or greater than the savings. The time we think we’ll save by using AI tends to be spent on doing something else with AI.

(Before I continue, know also that I am a co-founder of a completely AI-dependent venture, Magnolia. Beyond the design-specific use cases I’ve described, I know what it means to build software that uses AI in a far more complex manner. The investment is enormous, and the maintenance — the effort required to maintain a level of quality and accuracy of output that can compete with general purpose AI tools like ChatGPT or even AI research tools like Perplexity — is even more so. This directly supports my argument because the only reason to even create such a venture is to capitalize on the promise of AI and the normalization of “knowledge work” around it. That may be too steep a hill to climb.)

Much has already been made of the MIT study noting the preponderance of AI initiative failures in corporate environments. Those who expect a uniform application of AI and a uniform, generalized ROI see failure, while those who identify isolated applications with specific targets experience success. The former tends to be a reaction to hype, the latter an outworking of real understanding. There are dozens of small-scale applications that have large-scale effects, most of which I’d categorize as information synthesis — search, summarization, analysis. Magnolia (and any other new, AI-focused venture) fits right in there. But the sweeping, work-wide transformation? That’s the part that doesn’t hold up.

Of course, we should expect AI to increase its usefulness over time as adoption calibrates — this is the pattern with any new technology. But calibration doesn’t mean indefinite growth, and this is where the financial picture becomes troubling. The top seven companies by market value all have mutually dependent investments in AI and one another. The more money that gets injected into this combined venture, the more everyone expects to extract. But there has yet to be a viable model to monetize AI that gets anywhere close to the desired market capitalization. This is Ed Zitron’s whole thing.

This is also the same reckoning that a dot-com inflated market faced twenty-five years ago. It was obvious that we had a useful technology on our hands, but it wasn’t obvious to enough people that it wasn’t a magic money machine.

Looking back, another product hype cycle that came right afterward sums this bubble problem up in a much shorter timescale: The Segway was hyped by venture capitalists as a technology that would change how cities were built. People actually said that. But when everyone saw that it was a scooter, that suddenly sounded awfully silly. Today, we hear that AI will change how all work is done by everyone — a much broader pronouncement than even the design of all cities. I think it’s likely to come closer than the Segway to delivering on its hype, but when the hype is that grand, the delta between a normal technology and what’s been promised is, at this point, a trillion-dollar gap.

The AI bubble, as measured by the state of the financial market, is much, much bigger than any we’ve seen before. Even Sam Altman has acknowledged we’re likely in a bubble, shrugging it off like a billion-dollar miscalculation on a trillion-dollar balance sheet. The valuation numbers he is immersed in are extraordinarily large — and speculative — so the shrug is no wonder, but the market is dangerously imbalanced in its dependence upon them. A sudden burst or even a slower deflation will be a very big deal, and, unfortunately, we should expect it — even if AI doesn’t fail as a venture completely.

Meanwhile, generative AI presents a few other broader challenges to the integrity of our society. First is to truth. We’ve already seen how internet technologies can be used to manipulate a population’s understanding of reality. The last ten years have practically been defined by filter bubbles, alternative facts, and weaponized social media — without AI. AI can do all of that better, faster, and with more precision. Combined with a culture-wide degradation of trust in our major global networks, it leaves us vulnerable to lies of all kinds from all kinds of sources, with no standard by which to vet the things we see, hear, or read.

I really don’t like this, and to my mind, it represents, on its own, a good reason to back off from AI. Society is more than just a market. It’s a fabric of minds, all of which are vulnerable to losing coherence in the midst of AI output. Given the stated purpose of AI, such a thing would be collateral damage — you know, like testing a nuclear bomb in the town square.

But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

There is a vast chasm between what we, the users, and they, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust themselves.

And yet, as much as I doubt what we are sold in AI, I feel the same about what they — the billionaire investors in an AI future — are sold as well. I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise, and measurable pursuit. Unlike previous audacious scientific goals, like mapping the human genome, AGI has never been defined precisely enough to be achievable. To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

Again, I think that AI is probably just a normal technology, riding a normal hype wave.

And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.

I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.

AI companies say they need datacenters to deliver on their ground-level, day-to-day user promises while simultaneously claiming they’re nearly at AGI. That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters? And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power. The value of AI can drop to nothing, but owning the land and the flow of water through it won’t.

When the list of people who own this property is as short as it is, you have a very peculiar imbalance of power that almost creates an independent nation within a nation. Globalism eroded borders by crossing them; this new thing — this Privatism — erodes them from within. Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community — so much so that they often measure up to municipal status without having the populace or governance that connects actual cities and towns to the systems that comprise our country.

When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style, you have a sudden imbalance of power that looks like a cancer spreading within a national body.

The scale has already been tipped. I don’t worry about the end of work so much as I worry about what comes after — when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments. I know, you can picture me wildly gesticulating at my crazy board of pins and string, but I’m really just following the money and the power to their logical conclusion.

Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds humankind in all contexts — will emerge and “justify” all of this. Maybe.

I’m more than open to being wrong; I’d prefer it. But I’ve been watching technology long enough to know that when something requires this much money, this much hype, and this many contradictions to explain itself, it’s worth asking what else might be going on. The market concentration and incestuous investment shell game is real. The infrastructure is real. The land deals are real. The resulting shifts in power are real. Whether AI lives up to its promise or not, those things won’t go away, and sooner rather than later we will find ourselves citizens of a very new kind of place that no longer feels like home.