2026-03-18 12:00:00
“There’s no viable garage startup path in AI anymore.”
When I encounter this claim—and I encounter it often—my blood runs cold. It feels so persuasive. How could any startup compete with institutions that draw more power than a town to deliver a result that is rapidly degrading from magic to expectation? The barrier to entry is billions in computational resources, massive datacenters, and chips that cost more than most companies’ entire budgets, access to which is a matter of international trade agreements. The companies that dominate AI are already the most powerful entities on earth.
But then I remember that many garage startups of the early dot-com era would have been met with a similar perspective. How could any startup compete with Microsoft? The company had operating system dominance, infinite resources, the best engineers, relationships with every major manufacturer. It looked unassailable.
But some startups did compete. Many of the software titans of the late nineties are forgotten now. Microsoft missed mobile, social, cloud—not because they weren’t smart or well-resourced, but because they were focused on something else. Their paradigm was empowering the individual machine; the new paradigm became assembling something new from the connections between machines.
The question is whether AI will follow a similar pattern. And the answer depends on whether AI is a technology or an infrastructure.
Every dominant technology platform looks safe from within its paradigm. But there’s a difference between platform dominance and infrastructure dominance.
Platform dominance is temporary. Personal computers, social networks, even operating systems—these are vulnerable to paradigm shifts because they’re products competing with other products. The better product, the better business model, the better understanding of what people actually need—these things can win.
Infrastructure dominance is structural. Electricity, telecommunications, internet backbone—these don’t get disrupted by better versions of themselves. They get commoditized. And when they do, all the value migrates one layer up.
I think AI is becoming infrastructure. And if that’s true, it changes everything about who can compete and how.
Here’s what happened to previous infrastructure layers: the pattern is remarkably consistent. Infrastructure providers survive, but value migrates to whoever controls the layer above the infrastructure.
The dot-com winners didn’t try to build a better internet protocol; they created new ways of using it.
So here’s the real question about AI: will it follow this pattern? Will foundation models become commoditized infrastructure, with value flowing to whatever gets built on top? Or will AI be different—a kind of infrastructure that remains captive, where the companies that build the models also control everything built on them?
Let’s move past historical infrastructure examples and to a more recent company whose navigation of the last thirty years provides a lesson in how this value layering works.
Microsoft got the web, but they missed the internet. They crushed the first popular web browser—Netscape Navigator—by doing something very smart: bundling their own browser, Internet Explorer, with Windows. Every Windows user got it for free. It was such an aggressive move that it landed Microsoft in the middle of a US antitrust case. Even though they came out of it with penalties, they remained on top until the internet expanded beyond something people accessed through a window on their home computer into something they carried everywhere on mobile devices. Microsoft’s attempts to catch up were disasters.
Microsoft treated the internet as a location and access as a feature of their operating system, not as an entirely new computing layer that eroded the relevance of the OS. On the business side, they’ve hung on and evolved very well by creating cloud-based infrastructure and tools of their own. But their mistake was treating a platform transition as a feature evolution.
Today’s AI companies face the same conceptual choice. The ones who mistake the transition for something smaller than it is will be today’s equivalent of pre-internet Microsoft.
But here’s what makes this different from the historical feature/product/infrastructure misperception: speed. AI is becoming infrastructure at unprecedented speed, with explicit goals of replacing human cognitive work across all sectors, deployed by the most powerful companies in history. (And no, they’re not going to stop until they’ve done it—unless someone calls them on their loans, or unless, oh I don’t know, China surrounds Taiwan with battleships and suddenly no one can get chips. But that’s another essay.)
Previous technological revolutions took decades to fully transform economies. The Industrial Revolution gave labor time to organize, governments time to regulate, new industries time to emerge. The internet and personal computing followed similar patterns—gradual adoption, clear new job categories, space for adaptation.
AI is being deployed as quickly as possible with no gentle transition in the plans and no clear “next category” of work for people to transition into.
I think this creates two possible futures.
The first is that AI becomes “normal technology”—like Excel or Photoshop. It increases capability across the board without fundamentally restructuring who has power. It enables more people to do more things. It creates new categories of work even as it automates old ones. Benefits distribute somewhat broadly. The technology serves people rather than replacing them.
In this future, foundation models become commoditized infrastructure. OpenAI, Anthropic, Google—they’re like the internet backbone providers. They survive, but the real value flows to whoever builds the right things on top. The garage startups don’t try to compete at training foundation models. They recognize infrastructure when they see it and build the “one layer up” innovations we can’t see yet. This is the better future, in which innovation and creation happen as they did in the computer-to-network transition: in the new space created by connections among technologies.
The second future is that AI becomes normalizing technology. Everyone uses the same models trained on the same data. Outputs converge toward statistical means. Power concentrates in infrastructure owners who also control the application layer. The technology creates efficiency but eliminates diversity, experimentation, novelty. Benefits flow almost entirely to capital. Most people become irrelevant to the system’s functioning.
In this future, there is no garage startup path—not because the infrastructure is too expensive to build, but because the infrastructure owners also control what gets built on it. We’ve never seen infrastructure that concentrated before. Captive infrastructure on a scale that makes platform monopolies look quaint. This is a worse future, but very possible depending upon how model API pricing evolves and how desperate investors become.
Neither future will be the result of technology alone. These outcomes are politically determined.
We could slow deployment. Regulate heavily. Require human oversight for critical decisions. Tax AI-driven productivity and redistribute the gains. Preserve certain categories of work from automation. Build systems that augment rather than replace.
We could treat AI infrastructure the way we treated electricity and telecommunications—as something too important to leave entirely to private control, requiring public oversight and structured to serve broad social benefit.
We could even reject it entirely, and sacrifice all the imagined fruits it might bear. There are times when I think that even the entire internet has been a net loss for society, so this route isn’t exactly a non-starter for me. But I will admit that it feels pretty impossible.
And truly, all three of these options feel quite unlikely to me given how things presently stand.
Not because humans can’t organize—we can. But because collective action requires shared understanding of stakes, political will to regulate, international cooperation, ability to resist “inevitable progress” narratives, and an alternative vision of what we’re building toward.
And all of this runs directly into players with massive resources who benefit from the current trajectory, regulatory capture and political dysfunction, race dynamics where slowing down feels like falling behind, and exhaustion from compounding crises that have already depleted our capacity for collective response.
The window for meaningful action is narrowing. The beneficiaries are extremely powerful. The coordination problem is massive. And the framing of inevitability is already dominant.
What makes this moment different from previous technological disruptions is not just the speed, or the scale, or the concentration of power. It’s the combination of all three. Together, they could push us past a tipping point where adaptation to an abruptly changed reality is no longer possible.
Although, maybe I’m wrong. Maybe this will be like every other wave of technological disruption—painful in transition but ultimately generative. Maybe new forms of work will emerge that we can’t imagine yet. Maybe foundation models will commoditize and garage startups will build the valuable layer on top. Maybe the bubble will pop before full deployment. Maybe the feeling of impossibility is just the feeling every generation has when facing transformation they can’t control.
I hope so, but I don’t think so. I think it’s going to be harder than that.
I think the feeling of impossibility is by design. The players benefit from our sense that this is inevitable, that resistance is futile, that the only choice is to adapt to what they’re building rather than build something different.
And that’s exactly why the moment when collective choice matters most is also the moment when it feels most impossible. When the infrastructure isn’t yet fully built. When concentration isn’t yet complete. When path dependency hasn’t yet locked us in. When we can still decide whether AI infrastructure gets commoditized or remains captive. When intervention would actually matter.
When it feels impossible is when it’s most urgent.
We’re at a crossroads. One path leads to AI as normal technology—useful, widely beneficial, integrated into lives that remain fundamentally human. Infrastructure that gets commoditized, with value and opportunity flowing to the layers above it. A world where garage startups don’t try to build a better model, but instead build a new ecosystem of transformative tools and experiences on top of it.
The other path leads to AI as normalizing technology—efficient, concentrated, optimized for metrics that have nothing to do with human flourishing. Captive infrastructure that also controls everything built on it. A world where there truly is no garage startup path, not because the technology is too hard to build, but because the game is rigged from the infrastructure layer up.
The choice isn’t made by technology. It’s made by us. Or more accurately, it’s made by whether we choose to make it, or whether we accept the choice being made for us. And unfortunately, as much as I emotionally understand the urge to reject AI entirely on principle, the better of these two paths requires walking with it, not without it.
It feels impossible from here, for so many good reasons. But impossible and inevitable aren’t the same thing. And at the moment, holding on to that difference is what is giving me any energy to press on.
2026-03-11 12:00:00
Once basic needs are met—once you’re comfortable, secure, insulated from material consequence—wealth accumulation becomes divorced from survival or utility. It becomes a game. Rules, competition, scorekeeping, status. The difference between ten million and a hundred million doesn’t change your material reality. It’s just points. A way to keep track of who’s winning.
But here’s what makes it different from an actual parlour game: the gameboard is society itself.
The wealthy don’t play in some abstract financial dimension removed from the rest of us. They play on our lives, our communities, our systems. And when they make risky moves—when they place bold bets or break things for the sake of entertainment—the consequences are wildly asymmetric.
For the players, risk is temporary. For everyone else, it’s permanently deterministic, if not outright catastrophic.
When you have true wealth—defined in my mind as far more than enough—failure doesn’t threaten your position. Investments that don’t work out, business failures, and even market crashes—the things that would reduce the average person to poverty—represent little more than a pause in the game. If that.
Studied in hindsight, the most profound market crashes look less like surprises and more like strategy. Not just the perfect time for those with the most going in to use what they have to buy low, consolidate resources, and gain power while others are desperate to sell, but a deliberate campaign begun with the very moves that analysts will later classify as blunders of greed and myopia. Greed, yes. Myopia, anything but. Pumping up a market is a board-clearing strategy; crises like these are manufactured over greater numbers of years and moves than most market gameplayers even consider precisely to lull them into a sense of winning until the loss is abrupt and the take is everything.
The rest of us—we who may think of ourselves as non-players or just spectators—are playing, and probably losing. We are in no position to control the market, and only sometimes in a position to benefit from its total value. When the crashes come, the architects collect, the wealthy buy what they can, and the rest of us have our lives turned inside out: Job loss, foreclosure, bankruptcy, poverty that may echo for generations to come.
The panic of 1907, the crash of 1929, the dot-com crash of 2000, and the crisis of 2008 all have deliberate inflationary activity in common. They all demonstrate patterns of value manipulation, creditors backing debt they knew was worthless while creating mechanisms to profit when it couldn’t be repaid, new and larger institutions of control created in their aftermath.
These patterns will likely hold through the AI transition.
And these patterns are more than cultural erosion—if only they were just that bad. They’re corrosive. The angst, divisions, and conflict are part of the design, and they begin even with the language used to describe what’s happening. When things go well, we hear about people—individuals, by name, glorified for their apparent courage, determination, ingenuity, brilliance, and, of course, job-creating beneficence. They become the anecdotal cornerstones of a toxic meritocracy in which our resilience, adaptability, and collateral damage is the price for their progress. When things go poorly, we hear about abstractions—market forces, disruptions, corrections—as if we exist blindly in a vast system of grand complexity we will never fully understand or control. Market activity suddenly becomes weather. It’s a linguistic program of psychological manipulation. We’re meant to be left broken, alienated from one another, and squabbling for crumbs.
And here’s the thing: seeing the game doesn’t free you from it. Recognizing the manipulation doesn’t make you immune. You still need to eat, to house yourself, to participate in systems you didn’t design and don’t control. Righteous anger at the players feels justified—it is justified—but anger alone changes nothing. It might even be part of the design, keeping us reactive rather than generative, focused on what they’re doing rather than what we could build.
This is what happens when distance between the wealthy and everyone else becomes vast enough: risk becomes asymmetric, yes, but so does reality. The game board isn’t just the market, it’s the ground beneath our feet. The end-stages of a game are intentionally opaque to all but the very few because true control exists at the level of epistemology and ontology. We barely see and understand the moves being made because we’re all too busy arguing about what is true and how we know what is true.
Systems of organization are, in a way, logic batteries. They collapse decisions to the point of making them fundamental truths and lubricate our focus on the outcomes they power. We think about the ability to consume rather than the purpose of it. We think about what we can buy rather than whether we need it. We think about who we aspire to be because of what they did rather than what we truly want to do. The cornerstones of the financial system are these ideas; math is little more than the scaffolding.
Our world is organized by systems created by people who have very different ideas about what life is for than I do. Their power is their ability to store those ideas inside of a system in which I am forced to operate. And that is the most important thing—not my power to create and control a system, but my willingness to participate in it. I can truly accept that the world will be made in the image of those whose values I do not share; I can accept that the game is bigger than I can even comprehend, even that I cannot win. That doesn’t make me powerless. My power is in how I play, if I play at all.
The world doesn’t look like this because we are underachievers overshadowed by bigger, better people who deserve more. It doesn’t look like this because we are unwitting serfs in an enduring Feudal system updated for modernity. It looks like this because we have ceded culture to power and accepted that power is purchased. That doesn’t change by simply seeing the game for what it is, or resisting from within the game, or even sitting the game out. It’s fixed by widening our view beyond the game board, as all-encompassing as its boundaries are. It’s fixed by replacing the ideas stored in our systems batteries—thinking anew about what life is for, what makes a life well-lived.
The good news is that what this requires is simple: a better game strategy, or even a better game entirely. The bad news is that it takes longer and costs more. It takes lives and many of them, lived over and over again until their shared and repeated choices become culture. The good news is that living this way is similar to how we live now; it requires embracing the paradox of significance. Recognizing our true measure—our tininess, our impotence—so that it may be magnified for good. The bad news is that it requires us to let go of righteous anger toward those whose present control is very likely to make our entire lives worse. It’s simple, not easy.
2026-03-10 12:00:00
Second Story experimented on their own website with many things, creating an experience that defied the boundaries of the browser. I loved it!

I encourage you to follow the link above to see just how interesting their “newspaper” layout feels while scrolling horizontally and vertically. Here’s a snapshot of what the design did beyond the viewport edge:

I’d love to see more websites that play with dimensions like this one did. I know they’re out there; they’re just not that common. For the most part, I tend to think that scanning across the horizontal axis is more difficult than scanning vertically, so I typically advise clients to save horizontal scrolling for focused sub-sections. But if the entire paradigm is shifted—where the primary axis is horizontal and depth scrolling is vertical—it can work.
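If you wanted to prototype that shifted paradigm, one small design decision is how wheel input should map onto the two axes: the mouse wheel still needs to advance the primary (horizontal) axis, while focused sub-sections keep ordinary vertical “depth” scrolling. Here’s a minimal TypeScript sketch of that mapping logic; the function and names are hypothetical, not anything from Second Story’s actual site:

```typescript
// Result of remapping a wheel gesture onto the page's two scroll axes.
type ScrollDelta = { left: number; top: number };

// Map raw wheel deltas for a layout whose primary reading axis is
// horizontal and whose "depth" axis is vertical.
// - inDepthSection: true when the cursor is inside a focused sub-section
//   that scrolls vertically into deeper content.
function mapWheelToAxes(
  deltaX: number,
  deltaY: number,
  inDepthSection: boolean
): ScrollDelta {
  if (inDepthSection) {
    // Inside a depth section, leave the gesture untouched: vertical
    // wheel motion scrolls down into the section's content.
    return { left: deltaX, top: deltaY };
  }
  // Everywhere else, fold vertical wheel motion into horizontal travel
  // so an ordinary mouse wheel still advances the primary axis.
  return { left: deltaX + deltaY, top: 0 };
}
```

In a real page you’d call this from a `wheel` event listener, apply `left`/`top` to the scroll container, and call `preventDefault()` when you’ve redirected the gesture; the sketch only shows the axis-mapping decision itself.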
2026-03-02 13:00:00
Lots of art time with the kids over the past couple of weeks has filled more pages!
Unfortunately, the light here was harsher than I wanted so many of these images are losing some nuance.
















I found myself rescuing dozens of bits and scraps from the recycle bin this week. Mashed many of them up in here. ^ (Hi, Mom!)


2026-02-25 13:00:00
This, from JA Westenberg, is well-put:
“I’m going to argue that the pessimists have the best narratives and the worst track record. The doom scenarios require assumptions that don’t survive contact with economic history, and the psychological posture you bring to this moment actually matters for how it turns out.”
Westenberg’s essay is about why optimism at a time like the present makes sense. It’s exactly the sort of thing—when thoughtfully argued and laden with examples—that I need to read, as I tend to articulate optimism while internalizing the opposite. I tend to give every catastrophe a hearing, and even a momentary “what if this is right” leaves a mark on brain matter. That isn’t to say that ignoring all warnings is a good idea, nor is that what Westenberg recommends. But it is a helpful reminder that “catastrophists keep being wrong.” And one benefit of aging is that you can accumulate many directly-lived examples of just that.
I do appreciate the implicit message throughout this piece, which reminds me of something important: we do experience great changes—the status quo isn’t untouchable—but the human drive to do something is both the most reliable force in creating future technological threats to our way of life and the most reliable force in responding productively to them afterward.