2026-04-17 12:00:00
A growing number of design firms, publications, and professional organizations have declared outright bans on AI-generated content. No generated text in our articles. No AI-assisted imagery in our portfolios. No automated code in our deliverables. These proclamations arrive with varying degrees of righteousness, positioning themselves as ethical stands against corporate exploitation, job displacement, and the degradation of craft.
I understand the impulse. The concerns driving these declarations—about labor, creativity, and corporate power—are legitimate and urgent. But I believe these blanket embargoes make the wrong point in the wrong way. By drawing hard lines against entire categories of tools, we’re mistaking the means for the problem itself, and in doing so, we’re limiting our ability to shape how these technologies integrate into creative work.
This is a complicated subject that deserves more than binary thinking. It requires us to examine what automation has historically meant for creative professions, to grapple honestly with the ethical dimensions of how we use these tools, and to distinguish between the technology itself and the systems of power and economics that surround it.
The first, and I think easiest, aspect to address is the idea of progress. It’s worth considering whether automation is the way of the future and, if so, whether this is a good thing or worth resisting. For the sake of focus, I’ll limit my take to automation in design, but the logic applies much more widely.
I have no objection to doing much of anything the “old-fashioned” way. There are many merits to slowing down and making physical contact with our materials. I write about this all the time. But I also believe that automation reduces unnecessary obstacles to creativity. Here’s just one example: Most typography is done today with computers, not letterpress. As a result, more people can publish things, and more people can read what they publish. At some point, it probably appeared that typographers were under assault by desktop publishing software. But in hindsight, it’s clear that the expertise of a typographer isn’t really about how text makes its way onto a surface, but about how text itself is formed to best communicate. If anything, the computer has enabled that expertise to flourish and spread at a scale impossible to achieve with physical presses. That is a good thing. Drawing a red line at letterpress would have been as arbitrary as drawing it at chisels and stone, and just as limiting to typography.
Similarly, layout, which is, I think, the most exciting arena of graphic design, has been completely transformed by automation. Leaving paste-up, photostat, halftone, and mechanical board processes behind for digital desktop publishing collapsed dozens of individual tasks and skills into nearly instantaneous results commanded by a single person with a keyboard and mouse. That transition was a painful one for many people; a person whose professional value had been defined by how exacting they were with a blade and glue suddenly had little to offer. And yet, no graphic designer would have put that skill, however critical at the time, at the core of any definition of graphic design. The craft of design was never about the mechanics of blades or brushes, nor is it now about keyboards and mice. Drawing a red line between the two would have limited graphic design to a form of craft that was always lesser than what the best designers can envision in their minds. Digital tools offered graphic designers the experience of layout they’d always wanted. That’s progress.
I think the closest parallel to these transitions that AI, as we have come to know it today, presents is in how it further accelerates what we’ve come to think of as the “hand-off” from design to development. Put simply, this role switch is no longer necessary. There will remain exceptions, of course, but for the most part, the translation of documented design choices into functional code can be automated. Every version of this idea that I have already experienced in my career has come with serious compromises. Dreamweaver generated lousy code and taught the designers who used it (like me) bad habits. WYSIWYG editors enraged anyone who copied and pasted from another tool. Templated website builders oversimplified in areas where more detail was always desired and overcomplicated things no one wanted to think about. But now, many options exist to allow creative teams a faster and more stable route to execution that doesn’t undermine where their focus should be—on creative expression. Apps like Anima will read and translate your design systems in Figma to usable, responsive code. Claude can do the same, and depending upon the way you use it, can go even further, reading design documentation, assembling skills, writing to content management systems, performing code reviews, debugging, and so on. What used to take several people weeks at its fastest can now be done by one person in hours. This is as profound a collapse of skills, roles, and time as the desktop publishing transition was.
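To make that translation concrete, here is a minimal sketch of the sort of mechanical work these tools absorb: a handful of design tokens, roughly as a designer might document them in Figma, mapped to CSS custom properties. The token shape and the names are mine, invented for illustration; this is not any particular tool’s format.

// Design tokens as a designer might document them (illustrative shape only).
type DesignTokens = {
  color: Record<string, string>;
  spacing: Record<string, string>;
  type: Record<string, { size: string; lineHeight: string }>;
};

const tokens: DesignTokens = {
  color: { ink: "#1a1a1a", paper: "#faf8f4", accent: "#c8361f" },
  spacing: { sm: "0.5rem", md: "1rem", lg: "2rem" },
  type: {
    body: { size: "1rem", lineHeight: "1.6" },
    display: { size: "2.25rem", lineHeight: "1.1" },
  },
};

// Translate the documented choices into CSS custom properties.
function tokensToCss(t: DesignTokens): string {
  const lines: string[] = [":root {"];
  for (const [name, value] of Object.entries(t.color)) lines.push(`  --color-${name}: ${value};`);
  for (const [name, value] of Object.entries(t.spacing)) lines.push(`  --space-${name}: ${value};`);
  for (const [name, v] of Object.entries(t.type)) {
    lines.push(`  --type-${name}-size: ${v.size};`);
    lines.push(`  --type-${name}-leading: ${v.lineHeight};`);
  }
  lines.push("}");
  return lines.join("\n");
}

console.log(tokensToCss(tokens));

None of this is design; it is transcription, and transcription is exactly the kind of work that can now be handed to a machine.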
Is it progress? I think it is. Designers who must also deliver what they imagine on a canvas will be exposed to a new level of detail that cannot be unseen. It will work backward, informing their imagination and making the fruits of it more functional. Does that mean that designers suddenly have two jobs—designing and building? I suppose that’s one way of thinking of this, just as a designer might have said that desktop publishing “forced” their one job to become many: typesetting, layout, paste-up, correction, imagery, and so on. I think integration is a better word for this, and integration is better than its opposite: keeping design and build permanently separated.
Many similar transitions are under way now. In many cases, what we are transitioning to is not fully clear. It may not be comfortable to be on the unknown side of transition, but the desire for comfort is definitely what will keep us there. To be clear, resistance to AI isn’t mere comfort-seeking—there are legitimate reasons to oppose how these tools have been developed and deployed. But blanket bans on the technology itself, rather than on the exploitative systems surrounding it, may inadvertently cede our ability to shape how integration happens.
The second issue worth considering here is the ethics of automation.
I think that in most cases, these red lines aren’t being drawn to preserve specific tasks and processes—a specific culture of work—they’re being drawn to signal objection to job loss, the unrestrained greed of a few corporations and their CEOs who have cast aside the social contract, and the capitulation of democracy to the power of money in any currency. I’m not just deeply sympathetic to these concerns. I think they’re very real and important.
Let’s look at job loss.
Text generation is good enough to cost a writer their job. Image generation can stand in for a photographer or illustrator. Code generation for an engineer.
If a team can avoid using these tools and still deliver to whoever is funding the enterprise at a price they’re willing to pay, I say more power to them. Unfortunately, I expect that to be less and less common. I recognize that my position as an established professional gives me more freedom to experiment with these tools than someone just entering the field who faces immediate competition from automated alternatives. That asymmetry is real and troubling. For all the reasons I explored in the section above, I think that automation is only getting started with knowledge work.
But that doesn’t absolve us of the obligation to consider and apply an ethical standard to how we use AI.
I first began to think deeply about this with text generation. My first reaction was to shrug it off as obviously inferior to human writing. But then I began using it and realized that the difference between good generated text and bad is the same as between good writing and bad: the writer. I spent a few months experimenting with ways of training Claude to generate text that was as close to what I would write as possible. Those experiments were very successful. What began as a long prompt I would re-use with every new chat has become a 10,000-word skill file that I continue to nurture. It makes Claude (my AI of choice) capable of producing essays that I think most readers would find indistinguishable from those I write without it. However, I don’t use it that way. Instead, it makes Claude a useful sounding board, thinking tool, fact-checker, and editor. I can have a substantive conversation with it about my ideas knowing that all the background information I think is relevant—my typical subject matter, my point of view, tone, and style, my language and structure preferences, cultural references, and my biographical details—is integrated into its processing. I still write without Claude. But I also think that some of my best writing has been the result of writing with it. Though, as I said, many readers might not be able to tell the difference between a purely generated piece of writing and one that I produced with the assistance of automation, I can. And that difference matters to me. It’s not that I would consider publishing a purely generated essay an act of deception—though I understand that perspective—it’s that I would consider it a waste. The point of text is as much to be written as it is to be read.
All that being said, a writer using automation for their own writing—however extensively—stands on decent ethical ground. But not perfect ground. Which tool a writer uses is a decision that makes them a participant in the greater agenda of the corporation that provides it. Generative AI built upon large language models has been trained on other people’s writing, which blurs, if not crosses, previously established lines defining intellectual property and plagiarism. Again, these are not easily dismissed issues. However, I do think it’s possible to use generative AI tools without directly stealing from someone else. It is a matter of how you use them, and how willing you are to interact with and shape their output. That said, I acknowledge that any use of LLM tools implicates me in the broader training data problem—the appropriation of countless writers’ work without compensation or consent. This is a structural issue I cannot individually resolve through careful prompting, though I believe there’s still a meaningful distinction between relying on that problematic foundation and actively using AI to plagiarize specific voices or works.
Of course, image generation provokes the same concerns, though perhaps even to a greater degree. In professional arenas, text is everyone’s currency. Automation can be used to write a book, essay, script, ad copy, or social media post as readily as an email or personal notes. Text is everywhere, and as any software user can attest, stuffed behind the sparkle icon in virtually every app in every industry. Not quite so with imagery. As an artist and designer, I believe that image-creation belongs to everyone, but it is a uniquely developed skillset. Choosing to generate an image rather than source one made by a human feels touchier than choosing to generate text because of how differentiated image creation has been from writing.
As with writing, I have experimented with training generative tools to reliably produce images like those I might create on my own. Unlike with writing, these experiments have been less successful. What I have experienced is that the greater the specificity of direction—prescribing exact colors, line weights, spacing rules, etc.—the clumsier and more error-ridden the output. However, the more conceptually descriptive the direction, and the more heavily it is weighted with references, the better the quality. It’s interesting how this resembles the actual creative process—where discovery is not only to be expected, but is necessary. I have found that the best output from generative tools tends to be the result of a surprising combination of qualitative inputs that can then be reverse-engineered into a reliable prompt for more focused exploration. Of course, anyone can also just ask an image-generating AI for something “in the style of” someone else, but that is as unethical as plagiarism because it is plagiarism. It’s just faster and bit-washed by sending it through the machine.
It’s long been the case that most creative teams cannot afford to employ a full-time photographer, illustrator, or artist. Stock imagery has been the solution to this. The economic model hasn’t been great—certainly not better for an artist than direct compensation for custom work—but it is more ethically sound than training a machine on an artist’s work to the point that it makes it possible for someone else to copy it with the click of a button. This is an unsolved moral problem with automated image generation. It can be mitigated by the right kind of prompting, use, and a discriminating review of output, but those who declare an unwillingness to use these tools for creative work, I think, have a point. For an artist, it’s similar to my assessment of text-generation: the point of making an image is as much to make it as it is to be seen. But for someone unhappy with the economics of art-making, sitting out image-generation is justified. Ultimately, the individual choice to use generative imagery is going to depend as much upon one’s intent to create something new as upon their ability to actually use the tool and shape its output. I believe that is possible, which is why I am comfortable using image generation tools in certain circumstances. Meanwhile, it also means that stock imagery remains relevant. One only has to estimate the time it will take to train an AI to produce a desired image and compare it with the cost of just buying an image that already exists. In fact, one of the best discoveries I have made is in using AI to generate excellent compound queries that can be pasted into stock imagery search engines to reduce the very real time-suck of scrolling grids of images to find the right one.
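Since I mentioned that last trick, here is a small sketch of what it can look like in practice: asking a model to turn a creative brief into compound queries for a stock-image search engine. It is written against the @anthropic-ai/sdk TypeScript package as I understand it; the brief, the output format, and the model name are assumptions for illustration, not a prescription.

import Anthropic from "@anthropic-ai/sdk";

// Reads ANTHROPIC_API_KEY from the environment.
const client = new Anthropic();

// Turn a creative brief into compound stock-search queries.
async function stockQueries(brief: string): Promise<string[]> {
  const message = await client.messages.create({
    model: "claude-sonnet-4-5", // assumed model name; substitute whichever is current
    max_tokens: 512,
    messages: [
      {
        role: "user",
        content:
          "Write 5 compound search queries for a stock photography site, one per line, no numbering. " +
          "Combine subject, mood, composition, and lighting terms. Brief: " + brief,
      },
    ],
  });
  // The response content is a list of blocks; keep only the text.
  const first = message.content[0];
  const text = first && first.type === "text" ? first.text : "";
  return text.split("\n").map((line) => line.trim()).filter(Boolean);
}

stockQueries("quiet morning light in a small letterpress studio, hands at work").then((queries) => {
  for (const q of queries) console.log(q);
});

The queries paste straight into the search box, which is the whole point: less scrolling, more choosing.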
Similar distinctions apply to code-generation, but if I’m being very honest here, I think to a much lesser degree. The reality is that code operates within tighter constraints than prose. That isn’t to diminish the creative thinking that a good developer uses daily—excellent code absolutely requires creativity. But a website or application must run on a system that has already defined the rules of the code, and those rules are standardized in ways that natural language is not. A developer who treats their code like poetry will produce a thing that simply does not work. When new features are created with new expressions of, say, CSS, various groups come together to vet them and then they slowly make their way into the code of web browsers. In other words, code is standardized to a degree that writing is not. It’s exactly the sort of thing we should expect to automate. In fact, the more we use machines to automate the generation of code, the more I expect we’ll come to think of it as strange that we ever did it ourselves. Not in every circumstance—but in your average website or app project, I think coding will collapse into a design task just as paste-up and typesetting did. In the meantime, I don’t object to anyone preferring to write their own code, but I don’t think that preference supports the same clear outward judgment of those who don’t as careless text or image generation does. Of course, therein lies the actual issue—care or carelessness.
Beyond those issues, which are really about how one’s choice to use generative tools affects someone else’s livelihood, there is the issue of greater social and economic complicity. If a corporation is evil, is it evil to use their product? Well, I deleted my Facebook account over a decade ago because I think Facebook is evil. But I don’t think Facebook users are evil. I just don’t want to be one. A similar line of thinking is anyone’s right when it comes to using any company’s AI product because I think the concerns we collapse under the shorthand of “evil” are very real: The biggest AI companies regularly deceive their employees, shareholders, investors, each other, and the public about the true capabilities of their technology. Their haste to win a competitive race while maintaining growth targets and market value comes at an environmental cost, introduces supply chain challenges, puts pressure on governments—domestic and international—and creates dangers to the minds and bodies of users. The trajectory of AI is unknown and intentionally obscured by these companies in order to manipulate the powers that would ordinarily create and impose regulatory structure on this new industry. In the long term, it may result in mass unemployment and put an enormous burden on governments to support their populations. In the short term, it creates a chilling effect on making plans and moving forward, with the perception of what AI is and can do prompting companies to delay contracts and sit things out as long as they can. Most companies down market can’t survive prolonged inaction and won’t. All of this rests at the feet of those personally running the AI race. I was a supporter of the various proposed “pauses” and still believe that is the right thing to do. We have more than enough to work with in current generative tools.
The various AI embargoes being declared across creative industries are understandable reactions to real threats. But drawing red lines at AI itself—rather than at careless use, exploitative corporate practices, or the abandonment of craft—risks repeating the mistakes of past technological transitions.
What matters isn’t whether we use generative tools, but how we use them and to what end. A designer who uses AI to plagiarize another artist’s style with a simple prompt is engaged in something fundamentally different from one who trains a tool to extend their own creative capacity. A writer who publishes purely generated text as their own work is making a different choice than one who uses AI as a thinking partner and editor while maintaining authorship over their ideas and voice. These distinctions matter more than blanket prohibitions. Discernment in practice means asking: Am I using this tool to extend my own capabilities or to replicate someone else’s work? Am I shaping the output or simply accepting what’s generated? Does this use serve my creative vision or just expedite a result? These aren’t always easy questions, but they’re the right ones.
The real ethical questions aren’t about the tools themselves but about the systems surrounding them: How do we ensure that automation serves human creativity rather than displacing it? How do we hold corporations accountable for training data and environmental impact? How do we create economic structures that distribute the benefits of productivity gains rather than concentrating them at the top? How do we preserve the intrinsic value of making—the importance of the creative process itself, not just its outputs?
These are questions we should be asking loudly and persistently. But we can ask them while also recognizing that thoughtful integration of AI into creative practice is both possible and, in many cases, genuinely productive. The choice isn’t between purity and complicity, between craft and automation. It’s between engagement and abdication—between shaping how these tools develop and how they’re used, or ceding that ground entirely to those with the least interest in protecting what we value about creative work.
Long-term, I believe AI will evolve toward open-source models and infrastructure rather than remaining locked inside proprietary systems controlled by a handful of corporations. The economic and technical patterns suggest this is increasingly viable, if not inevitable. If that happens, many of our current ethical concerns will resolve themselves. In the meantime, we have choices to make about how we engage with what’s already here.
Drawing red lines feels definitive and principled. But sometimes the more difficult—and more important—work is learning to navigate complexity with discernment, to make distinctions where others see only binaries, and to remain engaged with tools and systems we’re still learning to understand. That’s the work ahead of us, and blanket bans won’t help us do it.
2026-04-06 12:00:00
This is something I say in the course of nearly every conversation I have with every agency design team I consult.
I remind them, over and over again, that marketing is two kinds of persuasion: FIRST, persuading a person to actually pay attention, and THEN, persuading them of the thing you think is important. Most messages never reach people, not because they’re poorly articulated, but because they don’t get past people’s attention filters.
With that in mind, no matter how long or complicated your message may be, design it to be scanned. Our job is to communicate the most valuable information given the least attention, so if we can give a person who ONLY scans our information some value, the likelihood that they will slow down and actually read it all grows. Here’s another 80/20 framing: 80% of your audience will never do more than scan your information; 20% will go on to read it. That doesn’t make it futile to communicate; it just changes where you put the information. The better your design, the more 20-percenters you will earn.
In other words, if it’s not scannable, it might still be readable, but it probably won’t get read.
2026-03-31 12:00:00
I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.
But I don’t think it does. And understanding why matters—not just for people like me, but for anyone who cares about the difference between things that work and things that merely exist.
The most common definition of craft is “an activity involving skill in making things by hand.” And I think most people still emphasize a literal interpretation of that “by hand” clause. AI is surfacing this assumption, if not challenging it outright. But it’s certainly not the first time our notion of craft has been tested.
To me, craft isn’t necessarily about physically touching what you make. It doesn’t even have to involve physical contact at all.
Mozart was reputed to compose complex arrangements entirely in his head, only writing down the final notation as an act of transcription. But who would argue that Mozart wasn’t a master of his craft?
Beethoven, by the end of his life, was deaf. And yet it was then that he composed some of his most celebrated work. What does it mean to craft music you cannot hear?
Obviously, “craft” is a word we use in two ways—sometimes as a noun, a shorthand for “area of expertise,” and other times as a verb, the act of applying that expertise.
What I’m noticing is that our initial forays into AI seem to be challenging our notions of craft. But my experience has only validated the existence of craft as an elevated form of creation. It’s also deepened my sense of craft as verb—as disciplined practice, not manual labor.
The kneejerk reaction to AI usage, especially in design, has been to consider it an interference with thinking and making—not capable of processing ideas with the nuance of the human mind, nor capable of producing anything that a human, with enough time, couldn’t do better.
Both criticisms miss the point. AI is a tool through which ideas become things. The stronger the idea going in, the less reason to think the tool would degrade it in some fundamental way.
This is exactly how many initially responded to the synthesizer—as a sonic machine, not a musical instrument. But of course, the synthesizer didn’t eliminate musical craft. Knowledge of harmony, rhythm, arrangement, and dynamics still determined what made a piece of music good. The synthesizer only changed how it was made.
The same is true with AI and design. No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools.
Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels.
The craft migrates to a different level of abstraction. But it remains craft.
The second aspect has to do with the work that is or isn’t done when AI tools are involved. And for me, the key element here is repetition.
I’ve written before that the way to make good things is to make many things. Practice builds skill. There’s nothing about AI usage that challenges this fundamental truth.
The more I use AI to create something, the better the output becomes. And it’s not simply a matter of getting better at prompting. These cycles push further back into my process, causing me to rethink foundational aspects of how I make things, knowing that new points of processing and acceleration are now available.
I’m iterating more quickly. Testing more variations. Learning from failures faster. The feedback loops are tighter, which means I can refine my judgment more rapidly.
The craft hasn’t disappeared. It’s just happening at a higher level of abstraction.
Instead of iterating on “how do I code this CSS perfectly?” I’m iterating on “what’s the right structure? What’s the right hierarchy? How do I communicate this idea most clearly?” The answers change when the tools change.
The discipline, though, remains the same.
But here’s the danger.
I’ve seen dozens of AI-generated apps, webpages, and informational assets that have blown collective minds simply by not existing one minute and existing the next. The speed of generation is so breathtaking that it stands in for quality—even when the gap between the two is so wide it would never have been tolerated had the thing been made the old-fashioned way.
This typically happens when someone uses AI to synthesize a large amount of information and generate something to contain it. That it’s suddenly there—clickable, mobile-friendly, with animated charts and graphs—is powerful. The person who made it is immensely proud, though the work has been minuscule. There’s an intoxicating effect at work here and I worry it’s one we won’t become immune to quickly enough.
And I feel that immediate tension when I inevitably have a long list of critiques: Hold on, what is this meant to communicate? Actually, it’s pretty difficult to scan and read this. Yeah, these graphs look neat but they don’t really make any sense.
Had I not been there, accepting the role of wet blanket, this inferior thing would have shipped.
And that’s the risk with collapsing skills into tools. I won’t always be there to do the thing I do. Inferior designs will ship. That’s bad. But what’s worse—the thing that really stings most designers’ egos—is that most people won’t even notice.
Exhibit A of this premise is most of what’s on the web today: hastily made things using poorly designed templates. Any good designer can thoroughly critique them; most of the world doesn’t care.
AI accelerates this dynamic. It makes it even easier to produce outputs that look professional at first glance but fail at the level of craft—the considered structure, the clear communication, the thoughtful hierarchy that serves the user’s actual needs rather than just filling space.
A tool that accelerates craft enough also becomes the thing that lets people skip it entirely. And because the output looks finished, because it required so little effort, because most people can’t tell the difference anyway—why would anyone bother with iteration? With refinement? With developing the judgment that comes from years of practice? Easy satisfaction is dangerous, and up until this point, somewhat localized. AI could not only make it ubiquitous, but standard.
And just to be sure no one reading this draws the conclusion that I elevate designers above others, let me be clear: we designers are as easily seduced. Take this video, named “Designing with Claude Code”, as an example. I’ll ask you the same questions I asked my design team: When, exactly, did the design happen? The designer in the video prompts Claude to “design a simple marketing home page for a finance app” and lists off a few features he’d like to see on the page. Seconds later, Claude generates a pretty polished page. That’s about three minutes in. For the next 57, the designer restyles the page, prompting it piecemeal. This is where the title was, for me, ironically instructive: was this really design? I also asked my team, sincerely, did he make it better? At the core of this discussion is the ever-blurred line between aesthetics and order, between style and design.
To be fair, I don’t think the designer in this video intended to communicate that pre-design strategic work is no longer necessary. Nevertheless, he depicted a process that didn’t include any meaningful thought prior to generating a webpage, then spent the rest of the video re-styling that webpage to his tastes. I would have started with a text file to work out concepts, developed my visual language in a canvas tool, and then moved to Claude to accelerate the technical steps of translating my thinking to code. Craft at each stage.
This is why I keep returning to craft as mindset, not method.
Craft is the commitment to iteration, refinement, and accumulated knowledge applied toward increasingly excellent outcomes. It’s the refusal to accept the first result as final. It’s the understanding that quality emerges from disciplined practice, not from tools.
AI makes it easier to produce outputs. But it doesn’t eliminate the need for craft—it just reveals who’s practicing it and who isn’t.
Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience.
Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration.
The tool doesn’t determine whether you’re working with craft. Your approach does.
Beethoven crafted music he couldn’t hear because he had spent decades developing such deep understanding of musical structure that the physical instantiation—sound waves, instrumental performance—was almost incidental to the compositional craft.
AI lets us work at similar levels of abstraction. We can focus on intention, structure, and meaning while the tool handles implementation.
But that only works if we maintain the discipline. If we iterate. If we refuse to accept “good enough” when we know better is possible; if we understand craft not as what we touch, but as how we think; if we’re honest about the fact that most of the world won’t notice when we skip that discipline. The only thing keeping craft alive is our own commitment to it.
This isn’t really a technological problem so much as a symptom of choices made at scale. Individual choices that prioritize craft, after all, are much easier to defend and preserve, aren’t they? I could have paid half as much for the new roof on my house, for example, but it probably would have lasted a quarter as long. But it was my money to spend. My chosen design processes might take me more time at certain points than the designer “designing with Claude Code,” but it’s my time to spend. I think I’ve made the right choices, but only I am in a position to judge them. What happens when these choices are made for me? What happens when they’re made from a distance, where the outcome is obscured?
Craft is always threatened in the midst of technological change, not by the technology itself, but by the addictions we develop to what the technology makes possible: Simpler choices, lower costs, faster outcomes. Each is desirable and defensible in isolation, but as a foundation, the fastest path to a fragile future.
2026-03-28 12:00:00
Will what happened to the music recording industry happen to AI? That’s the main point Rick Beato is making in this video, which I find very intriguing.
Rick shows how easy it was for him to install local models on his own machine and is basically saying that locally-run LLMs will undercut (if not beat) the largest AI companies selling access to their models. His perspective is that the same thing that happened to the expense, access, and necessity of professional studio recording equipment will happen with AI, but faster.
I sure hope he’s right, and it fits with my view that AI will follow the path of infrastructure, not interface, and that the real opportunity is not in creating the fascia of AI but in creating things with it.
I also wonder about another comparison with the music industry’s technological path. Until the smartphone, music playback was driven by single-function machines, and a diversity of them. After the smartphone, music playback became a software experience. But it seems as if that phenomenon may have run its course, as interest in physical media and “analog” and single-function media playback is growing again. The comparison I have in mind is that the current threat to everyone seems to be the multifunction AI experience — the app to end all apps. But I suspect that this won’t turn out to be the total consolidation that people expect. There’s an equal opportunity, I think, in distributed local models leading to an even more diverse ecosystem of single-purpose tools and interfaces.
2026-03-25 12:00:00
While I don’t doubt for a second that images like the one at the top of this page could be easily generated with AI, the fact that they wouldn’t have been made by me predicts at least a million meaningful differences.
This is what I explained to my daughter as we were making art together over the weekend.
Yes, we could be making images much faster, but we’d be missing out on the experience that working more slowly, touching things, and discovering interesting connections among seemingly worthless scraps of paper provides.
There’s a place for both, of course—ponderous, handcraft and pragmatic production alike—but an image that is simply generated offers the viewer everything and the maker nothing.
The distinction between what the viewer gets and what the maker gets explains something essential about why craft persists even as generation becomes effortless. I feel lucky to have learned it early.
When I was very young, after my parents divorced, my sister and I would spend weekday afternoons at our grandparents’ house while our mother worked. We’d sit with my grandfather in his den while he watched the evening news. I developed a strange habit: I would fold a piece of paper into thirds like a brochure and write vertical columns of numbers along each side, starting with 1 and working my way down as far as I could go.
It’s obvious to me now, forty years later, that this was a way to bring order to a life that felt suddenly chaotic. It must have been obvious to my grandparents too, because they never mentioned it. They just kept the paper stocked.
I remember laboring over those pages. I would slowly inscribe the numbers, trying to keep them perfectly aligned and the columns perfectly straight. I wanted my hand to produce something that only a machine could.
Later, in elementary school, we began learning the BASIC programming language. One of the first things I tried was this simple command:
FOR I = 1 TO 100
PRINT I
NEXT
On my hand-written pages, it would have taken me the better part of the entire news broadcast to reach 100. When I hit ENTER, I felt a sudden thrill as a vertical column of numbers flowed in a blur from the top to the bottom of my screen. I felt its motion downward—the illusion of that building list pulling me into the seat of my chair. In seconds, it was done.
I re-typed the command, increasing 100 to 1000 and again hit ENTER. It only took a few seconds more. But there was no excitement this time. The exercise already seemed pointless. It was just a list of numbers now; there was nothing of me in it.
What was it in hours of hand-cramped writing that seconds in front of a computer couldn’t provide? It wasn’t about completing something. I have no memory of keeping those pages, after all. It was the act—the writing itself—that gave me the peace, or clarity, or tiny bit of control that I needed.
The computer gave me the output I thought I wanted. But it couldn’t give me what I was actually getting from the process.
This is what gets lost when we optimize only for outputs. When we measure value only by what gets produced, not by what gets learned or felt or understood through the act of making.
Making art by hand teaches my daughter—and me—things that cannot be learned by prompting:
Material constraints. What do we have to work with? How do these pieces fit together? What can we make from what we have?
Physical manipulation. How does this paper feel? How does it tear? How does glue behave? How do colors interact when they’re actually touching? How does the physicality of a material affect what we later see?
Aesthetic judgment. Does this feel right here? What happens if I move it? What does this composition need?
The satisfaction of making something exist that didn’t before. Of putting yourself into it. Of leaving traces of your decisions in the final object.
These fundamentals of art making are also necessary to developing the kind of systems thinking in people who can build genuinely novel things. It’s an education that I don’t think can be shortcut, even with the most powerful technology. It has to be earned by eye and hand.
AI is capable of mimicking the output of any artist, of that I am sure. But it cannot give an artist what she gets by making herself. And this—this irreducible value in the act of making—is why craft will persist.
The builders who remain in our AI future will be those who understand this. They will know the value of labor, time, constraints, and intuition because they will have experienced them and integrated the knowledge that only they can convey.
Too easy, too fast, too permissive, too systematic—it all means too little of you in it. And when there’s too little of you in a thing, you can’t learn from it. You can only consume it.
The computer gave me my column of numbers to 1,000. But my grandfather’s den, the folded paper, the careful alignment of hand-written digits—that gave me something else entirely.
My daughter will forget the specific collages we made. But she won’t forget what it feels like to make something with her hands, to work within constraints, to put herself into an object and see it become real. That’s what making gives the maker. And no amount of effortless generation can replace it.
2026-03-18 12:00:00
“There’s no viable garage startup path in AI anymore.”
When I encounter this claim—and I encounter it often—my blood runs cold. It feels so persuasive. How could any startup compete with institutions that draw more power than a town to deliver a result that is rapidly degrading from magic to expectation? The barrier to entry is billions in computational resources, massive datacenters, and chips that cost more than most companies’ entire budgets and whose supply is a matter of international trade agreements. The companies that dominate AI are already the most powerful entities on earth.
But then I remember that many garage startups of the early dot-com era would have been met with a similar perspective. How could any startup compete with Microsoft? The company had operating system dominance, infinite resources, the best engineers, relationships with every major manufacturer. It looked unassailable.
But some startups did compete. Many of the software titans of the late nineties are forgotten now. Microsoft missed mobile, social, cloud—not because they weren’t smart or well-resourced, but because they were focused on something else. Their paradigm was empowering the individual machine; the new paradigm became assembling something new from the connections between machines.
The question is whether AI will follow a similar pattern. And the answer depends on whether AI is a technology or an infrastructure.
Every dominant technology platform looks safe from within its paradigm. But there’s a difference between platform dominance and infrastructure dominance.
Platform dominance is temporary. Personal computers, social networks, even operating systems—these are vulnerable to paradigm shifts because they’re products competing with other products. The better product, the better business model, the better understanding of what people actually need—these things can win.
Infrastructure dominance is structural. Electricity, telecommunications, internet backbone—these don’t get disrupted by better versions of themselves. They get commoditized. And when they do, all the value migrates one layer up.
I think AI is becoming infrastructure. And if that’s true, it changes everything about who can compete and how.
Look at what happened to previous infrastructure layers: electricity, telecommunications, the internet backbone. The pattern is pretty consistent: infrastructure providers survive, but value migrates to whoever controls the layer above the infrastructure.
The dot-com winners didn’t try to build a better internet protocol; they created new ways of using it.
So here’s the real question about AI: will it follow this pattern? Will foundation models become commoditized infrastructure, with value flowing to whatever gets built on top? Or will AI be different—a kind of infrastructure that remains captive, where the companies that build the models also control everything built on them?
Let’s move past historical infrastructure examples to a more recent company whose navigation of the last thirty years provides a lesson in how this value layering works.
Microsoft got the web, but they missed the internet. They crushed the first popular web browser—Netscape Navigator—by doing something very smart: bundling their own browser, Internet Explorer, with Windows. Every Windows user got it for free. It was such an aggressive move that it landed Microsoft in the middle of a US antitrust case. Even though they came out of it with penalties, they remained on top until the internet expanded beyond something people accessed through a window on their home computer into something they carried everywhere on mobile devices. Microsoft’s attempts to catch up were disasters.
Microsoft treated the internet as a location and access as a feature of their operating system, not as an entirely new computing layer that eroded the relevance of the OS. On the business side, they’ve hung on and evolved very well by creating cloud-based infrastructure and tools of their own. But they made a mistake in treating a platform transition as a feature evolution.
Today’s AI companies face the same conceptual choice: treat this as a feature evolution, or recognize it as an entirely new layer.
The ones who mistake the transition for something smaller than it is—those will be today’s equivalent of pre-internet Microsoft.
But here’s what makes this different from historical feature/product/infrastructure misperceptions: Speed. AI is becoming infrastructure at unprecedented speed, with explicit goals of replacing human cognitive work across all sectors, deployed by the most powerful companies in history. (No, they’re not going to stop until they’ve done it—unless someone calls them on their loans, or unless, oh I don’t know, China surrounds Taiwan with battleships and suddenly no one can get chips. But that is another essay.)
Previous technological revolutions took decades to fully transform economies. The Industrial Revolution gave labor time to organize, governments time to regulate, new industries time to emerge. The internet and personal computing followed similar patterns—gradual adoption, clear new job categories, space for adaptation.
AI is being deployed as quickly as possible with no gentle transition in the plans and no clear “next category” of work for people to transition into.
I think this creates two possible futures.
The first is that AI becomes “normal technology”—like Excel or Photoshop. It increases capability across the board without fundamentally restructuring who has power. It enables more people to do more things. It creates new categories of work even as it automates old ones. Benefits distribute somewhat broadly. The technology serves people rather than replacing them.
In this future, foundation models become commoditized infrastructure. OpenAI, Anthropic, Google—they’re like the internet backbone providers. They survive, but the real value flows to whoever builds the right things on top. The garage startups don’t try to compete at training foundation models. They recognize infrastructure when they see it and build the “one layer up” innovations we can’t see yet. This is a better future in which innovation and creation happen as they did in the computer-to-network transition—in the new space created by connections among technologies.
The second future is that AI becomes normalizing technology. Everyone uses the same models trained on the same data. Outputs converge toward statistical means. Power concentrates in infrastructure owners who also control the application layer. The technology creates efficiency but eliminates diversity, experimentation, novelty. Benefits flow almost entirely to capital. Most people become irrelevant to the system’s functioning.
In this future, there is no garage startup path—not because the infrastructure is too expensive to build, but because the infrastructure owners also control what gets built on it. We’ve never seen infrastructure that concentrated before. Captive infrastructure on a scale that makes platform monopolies look quaint. This is a worse future, but very possible depending upon how model API pricing evolves and how desperate investors become.
Neither future scenario will be the result of technology. These things are politically determined.
We could slow deployment. Regulate heavily. Require human oversight for critical decisions. Tax AI-driven productivity and redistribute the gains. Preserve certain categories of work from automation. Build systems that augment rather than replace.
We could treat AI infrastructure the way we treated electricity and telecommunications—as something too important to leave entirely to private control, requiring public oversight and structured to serve broad social benefit.
We could even reject it entirely, and sacrifice all the imagined fruits it might bear. There are times when I think that even the entire internet has been a net loss for society, so this route isn’t exactly a non-starter for me. But I will admit that it feels pretty impossible.
And truly, all three of these options feel quite unlikely to me based upon how things are presently.
Not because humans can’t organize—we can. But because collective action requires shared understanding of stakes, political will to regulate, international cooperation, ability to resist “inevitable progress” narratives, and an alternative vision of what we’re building toward.
And all of this runs directly into players with massive resources who benefit from the current trajectory, regulatory capture and political dysfunction, race dynamics where slowing down feels like falling behind, and exhaustion from compounding crises that have already depleted our capacity for collective response.
The window for meaningful action is narrowing. The beneficiaries are extremely powerful. The coordination problem is massive. And the framing of inevitability is already dominant.
What makes this moment different from previous technological disruptions is not just the speed, or the scale, or the concentration of power. It’s the combination of all three. Together, they could push us past a tipping point where adaptation to an abruptly changed reality is no longer possible.
Then again, maybe I’m wrong. Maybe this will be like every other wave of technological disruption—painful in transition but ultimately generative. Maybe new forms of work will emerge that we can’t imagine yet. Maybe foundation models will commoditize and garage startups will build the valuable layer on top. Maybe the bubble will pop before full deployment. Maybe the feeling of impossibility is just the feeling every generation has when facing transformation they can’t control.
I hope so, but I don’t think so. I think it’s going to be harder than that.
I think the feeling of impossibility is by design. The players benefit from our sense that this is inevitable, that resistance is futile, that the only choice is to adapt to what they’re building rather than build something different.
And that’s exactly why the moment when collective choice matters most is also the moment when it feels most impossible. When the infrastructure isn’t yet fully built. When concentration isn’t yet complete. When path dependency hasn’t yet locked us in. When we can still decide whether AI infrastructure gets commoditized or remains captive. When intervention would actually matter.
When it feels impossible is when it’s most urgent.
We’re at a crossroads. One path leads to AI as normal technology—useful, widely beneficial, integrated into lives that remain fundamentally human. Infrastructure that gets commoditized, with value and opportunity flowing to the layers above it. A world where garage startups don’t try to build a better model, but a new ecosystem of transformative tools and experiences on top of the model.
The other path leads to AI as normalizing technology—efficient, concentrated, optimized for metrics that have nothing to do with human flourishing. Captive infrastructure that also controls everything built on it. A world where there truly is no garage startup path, not because the technology is too hard to build, but because the game is rigged from the infrastructure layer up.
The choice isn’t made by technology. It’s made by us. Or more accurately, it’s made by whether we choose to make it, or whether we accept the choice being made for us. And unfortunately, as much as I emotionally understand the urge to reject AI entirely on principle, the better of these two paths requires walking with it, not without it.
It feels impossible from here, for so many good reasons. But impossible and inevitable aren’t the same thing. And at the moment, holding on to that difference is what is giving me any energy to press on.