2026-03-31 12:00:00
I have a vested interest in the title of this piece being true. I’ve spent decades developing craft—not just making things, but understanding systems, seeing patterns, making judgments that can’t be reduced to prompts. If AI eliminates the need for that expertise, I’m in trouble.
But I don’t think it does. And understanding why matters—not just for people like me, but for anyone who cares about the difference between things that work and things that merely exist.
The most common definition of craft is “an activity involving skill in making things by hand.” And I think most people still emphasize a literal interpretation of that “by hand” clause. AI is surfacing this assumption, if not challenging it outright. But it’s certainly not the first time our notion of craft has been tested.
To me, craft doesn’t require physically touching what you make. It doesn’t have to involve physical contact at all.
Mozart was reputed to compose complex arrangements entirely in his head, only writing down the final notation as an act of transcription. But who would argue that Mozart wasn’t a master of his craft?
Beethoven, by the end of his life, was deaf. And yet it was then that he composed some of his most celebrated work. What does it mean to craft music you cannot hear?
Obviously, “craft” is a word we use in two ways—sometimes as a noun, a shorthand for “area of expertise,” and other times as a verb, the act of applying that expertise.
What I’m noticing is that our initial forays into AI seem to be challenging our notions of craft. But my experience has only validated the existence of craft as an elevated form of creation. It’s also deepened my sense of craft as verb—as disciplined practice, not manual labor.
The kneejerk reaction to AI usage, especially in design, has been to consider it an interference with thinking and making—not capable of processing ideas with the nuance of the human mind, nor capable of producing anything that a human, with enough time, couldn’t do better.
Both criticisms miss the point. AI is a tool through which ideas become things. The stronger the idea going in, the less reason to think the tool would degrade it in some fundamental way.
This is exactly how many initially responded to the synthesizer—as a sonic machine, not a musical instrument. But of course, the synthesizer didn’t eliminate musical craft. Knowledge of harmony, rhythm, arrangement, and dynamics still determined what made a piece of music good. The synthesizer only changed how it was made.
The same is true with AI and design. No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools.
Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don’t vanish because I’m working through AI rather than directly manipulating pixels.
The craft migrates to a different level of abstraction. But it remains craft.
The second aspect has to do with the work that is or isn’t done when AI tools are involved. And for me, the key element here is repetition.
I’ve written before that the way to make good things is to make many things. Practice builds skill. There’s nothing about AI usage that challenges this fundamental truth.
The more I use AI to create something, the better the output becomes. And it’s not simply a matter of getting better at prompting. These cycles push further back into my process, causing me to rethink foundational aspects of how I make things, knowing that new points of processing and acceleration are now available.
I’m iterating more quickly. Testing more variations. Learning from failures faster. The feedback loops are tighter, which means I can refine my judgment more rapidly.
The craft hasn’t disappeared. It’s just happening at a higher level of abstraction.
Instead of iterating on “how do I code this CSS perfectly?” I’m iterating on “what’s the right structure? What’s the right hierarchy? How do I communicate this idea most clearly?” The answers change when the tools change.
The discipline, though, remains the same.
But here’s the danger.
I’ve seen dozens of AI-generated apps, webpages, and informational assets that have blown collective minds simply by not existing one minute and existing the next. The speed of generation is so breathtaking that it stands in the gap for quality—even when that gap is so wide it would never have been tolerated had the thing been made the old-fashioned way.
This typically happens when someone uses AI to synthesize a large amount of information and generate something to contain it. That it’s suddenly there—clickable, mobile-friendly, with animated charts and graphs—is powerful. The person who made it is immensely proud, though the work has been minuscule. There’s an intoxicating effect at work here and I worry it’s one we won’t become immune to quickly enough.
And I feel that immediate tension when I inevitably have a long list of critiques: Hold on, what is this meant to communicate? Actually, it’s pretty difficult to scan and read this. Yeah, these graphs look neat but they don’t really make any sense.
Had I not been there, accepting the role of wet blanket, this inferior thing would have shipped.
And that’s the risk with collapsing skills into tools. I won’t always be there to do the thing I do. Inferior designs will ship. That’s bad. But what’s worse—the thing that really stings most designers’ egos—is that most people won’t even notice.
Exhibit A of this premise is most of what’s on the web today: hastily made things using poorly designed templates. Any good designer can thoroughly critique them; most of the world doesn’t care.
AI accelerates this dynamic. It makes it even easier to produce outputs that look professional at first glance but fail at the level of craft—the considered structure, the clear communication, the thoughtful hierarchy that serves the user’s actual needs rather than just filling space.
A tool that accelerates craft enough also becomes the thing that lets people skip it entirely. And because the output looks finished, because it required so little effort, because most people can’t tell the difference anyway—why would anyone bother with iteration? With refinement? With developing the judgment that comes from years of practice? Easy satisfaction is dangerous, and up until this point, somewhat localized. AI could not only make it ubiquitous, but standard.
And just to be sure no one reading this draws the conclusion that I elevate designers above others, let me be clear: we designers are as easily seduced. Take this video, titled “Designing with Claude Code,” as an example. I’ll ask you the same questions I asked my design team: When, exactly, did the design happen? The designer in the video prompts Claude to “design a simple marketing home page for a finance app” and lists off a few features he’d like to see on the page. Seconds later, Claude generates a pretty polished page. That’s about three minutes in. For the next 57, the designer restyles the page, prompting it piecemeal. This is where the title was, for me, ironically instructive: was this really design? I also asked my team, sincerely, did he make it better? At the core of this discussion is the ever-blurred line between aesthetics and order, between style and design.
To be fair, I don’t think the designer in this video intended to communicate that pre-design strategic work is no longer necessary. Nevertheless, he depicted a process that didn’t include any meaningful thought prior to generating a webpage, then spent the rest of the video re-styling that webpage to his tastes. I would have started with a text file to work out concepts, developed my visual language in a canvas tool, and then moved to Claude to accelerate the technical steps of translating my thinking to code. Craft at each stage.
This is why I keep returning to craft as mindset, not method.
Craft is the commitment to iteration, refinement, and accumulated knowledge applied toward increasingly excellent outcomes. It’s the refusal to accept the first result as final. It’s the understanding that quality emerges from disciplined practice, not from tools.
AI makes it easier to produce outputs. But it doesn’t eliminate the need for craft—it just reveals who’s practicing it and who isn’t.
Someone who generates an interface with AI and calls it done isn’t practicing craft. They’re consuming convenience.
Someone who generates an interface, inspects it, questions what it’s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they’re practicing craft. They’re building knowledge through iteration.
The tool doesn’t determine whether you’re working with craft. Your approach does.
Beethoven crafted music he couldn’t hear because he had spent decades developing such deep understanding of musical structure that the physical instantiation—sound waves, instrumental performance—was almost incidental to the compositional craft.
AI lets us work at similar levels of abstraction. We can focus on intention, structure, and meaning while the tool handles implementation.
But that only works if we maintain the discipline. If we iterate. If we refuse to accept “good enough” when we know better is possible; if we understand craft not as what we touch, but as how we think; if we’re honest about the fact that most of the world won’t notice when we skip that discipline. The only thing keeping craft alive is our own commitment to it.
This isn’t really a technological problem so much as a symptom of choices made at scale. Individual choices that prioritize craft, after all, are much easier to defend and preserve, aren’t they? I could have paid half as much for the new roof on my house, for example, but it probably would have lasted a quarter as long. But it was my money to spend. My chosen design processes might take me more time at certain points than the designer “designing with Claude Code,” but it’s my time to spend. I think I’ve made the right choices, but only I am in a position to judge them. What happens when these choices are made for me? What happens when they’re made from a distance, where the outcome is obscured?
Craft is always threatened in the midst of technological change, not by the technology itself, but by the addictions we develop to what the technology makes possible: simpler choices, lower costs, faster outcomes. Each is desirable and defensible in isolation, but as a foundation, the fastest path to a fragile future.
2026-03-28 12:00:00
Will what happened to the music recording industry happen to AI? That’s the main point Rick Beato is making in this video, which I find very intriguing.
Rick shows how easy it was for him to install local models on his own machine and is basically saying that locally-run LLMs will undercut (if not beat) the largest AI companies selling access to their models. His perspective is that the same thing that happened to the expense, access, and necessity of professional studio recording equipment will happen with AI, but faster.
I sure hope he’s right, and it fits with my view that AI will follow the path of infrastructure, not interface, and that the real opportunity is not in creating the fascia of AI but in creating things with it.
I also wonder about another comparison with the music industry’s technological path. Until the smartphone, music playback was driven by single-function machines, and a diversity of them. After the smartphone, music playback became a software experience. But it seems as if that phenomenon may have run its course, as interest in physical media and “analog” and single-function media playback is growing again. The comparison I have in mind is that the current threat to everyone seems to be the multifunction AI experience — the app to end all apps. But I suspect that this won’t turn out to be the total consolidation that people expect. There’s an equal opportunity, I think, in distributed local models leading to an even more diverse ecosystem of single-purpose tools and interfaces.
2026-03-25 12:00:00
While I don’t doubt for a second that images like the one at the top of this page could be easily generated with AI, the fact that they wouldn’t have been made by me predicts at least a million meaningful differences.
This is what I explained to my daughter as we were making art together over the weekend.
Yes, we could be making images much faster, but we’d be missing out on the experience that working more slowly, touching things, and discovering interesting connections among seemingly worthless scraps of paper provides.
There’s a place for both, of course—ponderous handcraft and pragmatic production alike—but an image that is simply generated offers the viewer everything and the maker nothing.
The distinction between what the viewer gets and what the maker gets explains something essential about why craft persists even as generation becomes effortless. I feel lucky to have learned it early.
When I was very young, after my parents divorced, my sister and I would spend weekday afternoons at our grandparents’ house while our mother worked. We’d sit with my grandfather in his den while he watched the evening news. I developed a strange habit: I would fold a piece of paper into thirds like a brochure and write vertical columns of numbers along each side, starting with 1 and working my way down as far as I could go.
It’s obvious to me now, forty years later, that this was a way to bring order to a life that felt suddenly chaotic. It must have been obvious to my grandparents too, because they never mentioned it. They just kept the paper stocked.
I remember laboring over those pages. I would slowly inscribe the numbers, trying to keep them perfectly aligned and the columns perfectly straight. I wanted my hand to produce something that only a machine could.
Later, in elementary school, we began learning the BASIC programming language. One of the first things I tried was this simple command:
FOR I = 1 TO 100
PRINT I
NEXT
On my hand-written pages, it would have taken me the better part of the entire news broadcast to reach 100. When I hit ENTER, I felt a sudden thrill as a vertical column of numbers flowed in a blur from the top to the bottom of my screen. I felt its motion downward—the illusion of that building list pulling me into the seat of my chair. In seconds, it was done.
I re-typed the command, increasing 100 to 1000 and again hit ENTER. It only took a few seconds more. But there was no excitement this time. The exercise already seemed pointless. It was just a list of numbers now; there was nothing of me in it.
What was it in hours of hand-cramped writing that seconds in front of a computer couldn’t provide? It wasn’t about completing something. I have no memory of keeping those pages, after all. It was the act—the writing itself—that gave me the peace, or clarity, or tiny bit of control that I needed.
The computer gave me the output I thought I wanted. But it couldn’t give me what I was actually getting from the process.
This is what gets lost when we optimize only for outputs. When we measure value only by what gets produced, not by what gets learned or felt or understood through the act of making.
Making art by hand teaches my daughter—and me—things that cannot be learned by prompting:
Material constraints. What do we have to work with? How do these pieces fit together? What can we make from what we have?
Physical manipulation. How does this paper feel? How does it tear? How does glue behave? How do colors interact when they’re actually touching? How does the physicality of a material affect what we later see?
Aesthetic judgment. Does this feel right here? What happens if I move it? What does this composition need?
The satisfaction of making something exist that didn’t before. Of putting yourself into it. Of leaving traces of your decisions in the final object.
These fundamentals of art making are also necessary to developing the kind of systems thinking that lets people build genuinely novel things. It’s an education that I don’t think can be shortcut, even with the most powerful technology. It has to be earned by eye and hand.
AI is capable of mimicking the output of any artist, of that I am sure. But it cannot give an artist what she gets from the act of making it herself. And this—this irreducible value in the act of making—is why craft will persist.
The builders who remain in our AI future will be those who understand this. They will know the value of labor, time, constraints, and intuition because they will have experienced them and integrated the knowledge that only they can convey.
Too easy, too fast, too permissive, too systematic—it all means too little of you in it. And when there’s too little of you in a thing, you can’t learn from it. You can only consume it.
The computer gave me my column of numbers to 1,000. But my grandfather’s den, the folded paper, the careful alignment of hand-written digits—that gave me something else entirely.
My daughter will forget the specific collages we made. But she won’t forget what it feels like to make something with her hands, to work within constraints, to put herself into an object and see it become real. That’s what making gives the maker. And no amount of effortless generation can replace it.
2026-03-18 12:00:00
“There’s no viable garage startup path in AI anymore.”
When I encounter this claim—and I encounter it often—my blood runs cold. It feels so persuasive. How could any startup compete with institutions that draw more power than a town to deliver a result that is rapidly degrading from magic to expectation? The barrier to entry is billions in computational resources, massive datacenters, chips that cost more than most companies’ entire budgets and access to which is a matter of international trade agreement. The companies that dominate AI are already the most powerful entities on earth.
But then I remember that many garage startups of the early dot-com era would have been met with a similar perspective. How could any startup compete with Microsoft? The company had operating system dominance, infinite resources, the best engineers, relationships with every major manufacturer. It looked unassailable.
But some startups did compete. Many of the software titans of the late nineties are forgotten now. Microsoft missed mobile, social, cloud—not because they weren’t smart or well-resourced, but because they were focused on something else. Their paradigm was empowering the individual machine; the new paradigm became assembling something new from the connections between machines.
The question is whether AI will follow a similar pattern. And the answer depends on whether AI is a platform or an infrastructure.
Every dominant technology platform looks safe from within its paradigm. But there’s a difference between platform dominance and infrastructure dominance.
Platform dominance is temporary. Personal computers, social networks, even operating systems—these are vulnerable to paradigm shifts because they’re products competing with other products. The better product, the better business model, the better understanding of what people actually need—these things can win.
Infrastructure dominance is structural. Electricity, telecommunications, internet backbone—these don’t get disrupted by better versions of themselves. They get commoditized. And when they do, all the value migrates one layer up.
I think AI is becoming infrastructure. And if that’s true, it changes everything about who can compete and how.
Here’s what happened to previous infrastructure layers: the pattern is pretty consistent. Infrastructure providers survive, but value migrates to whoever controls the layer above the infrastructure.
The dot-com winners didn’t try to build a better internet protocol; they created new ways of using it.
So here’s the real question about AI: will it follow this pattern? Will foundation models become commoditized infrastructure, with value flowing to whatever gets built on top? Or will AI be different—a kind of infrastructure that remains captive, where the companies that build the models also control everything built on them?
Let’s move past historical infrastructure examples to a more recent company whose navigation of the last thirty years provides a lesson in how this value layering works.
Microsoft got the web, but they missed the internet. They crushed the first popular web browser—Netscape Navigator—by doing something very smart: bundling their own browser, Internet Explorer, with Windows. Every Windows user got it for free. It was such an aggressive move that it landed Microsoft in the middle of a US antitrust case. Even though they came out of it with penalties, they remained on top until the internet expanded beyond something people accessed through a window on their home computer and everywhere through mobile devices. Microsoft’s attempts to catch up were disasters.
Microsoft treated the internet as a location and access as a feature of their operating system, not as an entirely new computing layer that eroded the relevance of the OS. On the business side, they’ve hung on, and have evolved very well by creating cloud-based infrastructure and tools of their own. But they made a mistake in treating a platform transition as a feature evolution.
Today’s AI companies face the same conceptual choice.
The ones who mistake the transition for something smaller than it is—those will be today’s equivalent of pre-internet Microsoft.
But here’s what makes this different from past misperceptions of features, products, and infrastructure: speed. AI is becoming infrastructure at unprecedented speed, with explicit goals of replacing human cognitive work across all sectors, deployed by the most powerful companies in history. (No, they’re not going to stop—unless someone calls them on their loans, or unless, oh I don’t know, China surrounds Taiwan with battleships and suddenly no one can get chips, but this is another essay—until they’ve done it.)
Previous technological revolutions took decades to fully transform economies. The Industrial Revolution gave labor time to organize, governments time to regulate, new industries time to emerge. The internet and personal computing followed similar patterns—gradual adoption, clear new job categories, space for adaptation.
AI is being deployed as quickly as possible with no gentle transition in the plans and no clear “next category” of work for people to transition into.
I think this creates two possible futures.
The first is that AI becomes “normal technology”—like Excel or Photoshop. It increases capability across the board without fundamentally restructuring who has power. It enables more people to do more things. It creates new categories of work even as it automates old ones. Benefits distribute somewhat broadly. The technology serves people rather than replacing them.
In this future, foundation models become commoditized infrastructure. OpenAI, Anthropic, Google—they’re like the internet backbone providers. They survive, but the real value flows to whoever builds the right things on top. The garage startups don’t try to compete at training foundation models. They recognize infrastructure when they see it and build the “one layer up” innovations we can’t see yet. This is a better future, in which innovation and creation happen as they did in the computer-to-network transition—in the new space created by connections among technology.
The second future is that AI becomes normalizing technology. Everyone uses the same models trained on the same data. Outputs converge toward statistical means. Power concentrates in infrastructure owners who also control the application layer. The technology creates efficiency but eliminates diversity, experimentation, novelty. Benefits flow almost entirely to capital. Most people become irrelevant to the system’s functioning.
In this future, there is no garage startup path—not because the infrastructure is too expensive to build, but because the infrastructure owners also control what gets built on it. We’ve never seen infrastructure that concentrated before. Captive infrastructure on a scale that makes platform monopolies look quaint. This is a worse future, but very possible depending upon how model API pricing evolves and how desperate investors become.
Neither future will be the result of the technology itself. These outcomes are politically determined.
We could slow deployment. Regulate heavily. Require human oversight for critical decisions. Tax AI-driven productivity and redistribute the gains. Preserve certain categories of work from automation. Build systems that augment rather than replace.
We could treat AI infrastructure the way we treated electricity and telecommunications—as something too important to leave entirely to private control, requiring public oversight and structured to serve broad social benefit.
We could even reject it entirely, and sacrifice all the imagined fruits it might bear. There are times when I think that even the entire internet has been a net loss for society, so this route isn’t exactly a non-starter for me. But I will admit that it feels pretty impossible.
And truly, all three of these options feel quite unlikely to me based upon how things are presently.
Not because humans can’t organize—we can. But because collective action requires shared understanding of stakes, political will to regulate, international cooperation, ability to resist “inevitable progress” narratives, and an alternative vision of what we’re building toward.
And all of this runs directly into players with massive resources who benefit from the current trajectory, regulatory capture and political dysfunction, race dynamics where slowing down feels like falling behind, and exhaustion from compounding crises that have already depleted our capacity for collective response.
The window for meaningful action is narrowing. The beneficiaries are extremely powerful. The coordination problem is massive. And the framing of inevitability is already dominant.
What makes this moment different from previous technological disruptions is not just the speed, or the scale, or the concentration of power. It’s the combination of all three. Together, they could push us past a tipping point where adaptation to an abruptly changed reality is no longer possible.
Although, maybe I’m wrong. Maybe this will be like every other wave of technological disruption—painful in transition but ultimately generative. Maybe new forms of work will emerge that we can’t imagine yet. Maybe foundation models will commoditize and garage startups will build the valuable layer on top. Maybe the bubble will pop before full deployment. Maybe the feeling of impossibility is just the feeling every generation has when facing transformation they can’t control.
I hope so, but I don’t think so. I think it’s going to be harder than that.
I think the feeling of impossibility is by design. The players benefit from our sense that this is inevitable, that resistance is futile, that the only choice is to adapt to what they’re building rather than build something different.
And that’s exactly why the moment when collective choice matters most is also the moment when it feels most impossible. When the infrastructure isn’t yet fully built. When concentration isn’t yet complete. When path dependency hasn’t yet locked us in. When we can still decide whether AI infrastructure gets commoditized or remains captive. When intervention would actually matter.
When it feels impossible is when it’s most urgent.
We’re at a crossroads. One path leads to AI as normal technology—useful, widely beneficial, integrated into lives that remain fundamentally human. Infrastructure that gets commoditized, with value and opportunity flowing to the layers above it. A world where garage startups don’t try to build a better model, but a new ecosystem of transformative tools and experiences on top of the model.
The other path leads to AI as normalizing technology—efficient, concentrated, optimized for metrics that have nothing to do with human flourishing. Captive infrastructure that also controls everything built on it. A world where there truly is no garage startup path, not because the technology is too hard to build, but because the game is rigged from the infrastructure layer up.
The choice isn’t made by technology. It’s made by us. Or more accurately, it’s made by whether we choose to make it, or whether we accept the choice being made for us. And unfortunately, as much as I emotionally understand the urge to reject AI entirely on principle, the better of these two paths requires walking with it, not without it.
It feels impossible from here, for so many good reasons. But impossible and inevitable aren’t the same thing. And at the moment, holding on to that difference is what is giving me any energy to press on.
2026-03-11 12:00:00
Once basic needs are met—once you’re comfortable, secure, insulated from material consequence—wealth accumulation becomes divorced from survival or utility. It becomes a game. Rules, competition, scorekeeping, status. The difference between ten million and a hundred million doesn’t change your material reality. It’s just points. A way to keep track of who’s winning.
But here’s what makes it different from an actual parlour game: the gameboard is society itself.
The wealthy don’t play in some abstract financial dimension removed from the rest of us. They play on our lives, our communities, our systems. And when they make risky moves—when they place bold bets or break things for the sake of entertainment—the consequences are wildly asymmetric.
For the players, risk is temporary. For everyone else, it’s permanently deterministic, if not outright catastrophic.
When you have true wealth—defined in my mind as far more than enough—failure doesn’t threaten your position. Investments that don’t work out, business failures, and even market crashes—the things that would reduce the average person to poverty—represent little more than a pause in the game. If that.
Studied in hindsight, the most profound market crashes look less like surprises and more like strategy. They are not just the perfect time for those with the most going in to buy low, consolidate resources, and gain power while others are desperate to sell; they are deliberate campaigns begun with the very moves that analysts will later classify as blunders of greed and myopia. Greed, yes. Myopia, anything but. Pumping up a market is a board-clearing strategy; crises like these are manufactured over more years and moves than most market gameplayers even consider, precisely to lull them into a sense of winning until the loss is abrupt and the take is everything.
The rest of us—we who may think of ourselves as non-players or just spectators—are playing, and probably losing. We are in no position to control the market, and only sometimes in a position to benefit from its total value. When the crashes come, the architects collect, the wealthy buy what they can, and the rest of us have our lives turned inside out: Job loss, foreclosure, bankruptcy, poverty that may echo for generations to come.
The panic of 1907, the crash of 1929, the dot-com crash of 2000, and the crisis of 2008 all have deliberate inflationary activity in common. They all demonstrate patterns of value manipulation, creditors backing debt they knew was worthless while creating mechanisms to profit when it couldn’t be repaid, new and larger institutions of control created in their aftermath.
These patterns will likely hold through the AI transition.
And these patterns are more than cultural erosion—if only they were just that bad. They’re corrosive. The angst, divisions, and conflict are part of the design, and they begin even with the language used to describe what’s happening. When things go well, we hear about people—individuals, by name, glorified for their apparent courage, determination, ingenuity, brilliance, and, of course, job-creating beneficence. They become the anecdotal cornerstones of a toxic meritocracy in which our resilience, adaptability, and collateral damage are the price for their progress. When things go poorly, we hear about abstractions—market forces, disruptions, corrections—as if we exist blindly in a vast system of grand complexity we will never fully understand or control. Market activity suddenly becomes weather. It’s a linguistic program of psychological manipulation. We’re meant to be left broken, alienated from one another, and squabbling for crumbs.
And here’s the thing: seeing the game doesn’t free you from it. Recognizing the manipulation doesn’t make you immune. You still need to eat, to house yourself, to participate in systems you didn’t design and don’t control. Righteous anger at the players feels justified—it is justified—but anger alone changes nothing. It might even be part of the design, keeping us reactive rather than generative, focused on what they’re doing rather than what we could build.
This is what happens when distance between the wealthy and everyone else becomes vast enough: risk becomes asymmetric, yes, but so does reality. The game board isn’t just the market, it’s the ground beneath our feet. The end-stages of a game are intentionally opaque to all but the very few because true control exists at the level of epistemology and ontology. We barely see and understand the moves being made because we’re all too busy arguing about what is true and how we know what is true.
Systems of organization are, in a way, logic batteries. They collapse decisions to the point of making them fundamental truths and lubricate our focus on the outcomes they power. We think about the ability to consume rather than the purpose of it. We think about what we can buy rather than whether we need it. We think about who we aspire to be because of what they did rather than what we truly want to do. The cornerstones of the financial system are these ideas; math is little more than the scaffolding.
Our world is organized by systems created by people who have very different ideas about what life is for than I do. Their power is their ability to store those ideas inside of a system in which I am forced to operate. And that is the most important thing—not my power to create and control a system, but my willingness to participate in it. I can truly accept that the world will be made in the image of those whose values I do not share; I can accept that the game is bigger than I can even comprehend, even that I cannot win. That doesn’t make me powerless. My power is in how I play, if I play at all.
The world doesn’t look like this because we are underachievers overshadowed by bigger, better people who deserve more. It doesn’t look like this because we are unwitting serfs in an enduring feudal system updated for modernity. It looks like this because we have ceded culture to power and accepted that power is purchased. That doesn’t change by simply seeing the game for what it is, or resisting from within the game, or even sitting the game out. It’s fixed by widening our view beyond the game board, as all-encompassing as its boundaries are. It’s fixed by replacing the ideas stored in our systems’ batteries—thinking anew about what life is for, what makes a life well-lived.
The good news is that what this requires is simpler than better game strategy, or even a better game entirely. The bad news is that it takes longer and costs more. It takes lives, and many of them, lived over and over again until their shared and repeated choices become culture. The good news is that living this way is similar to how we live now; it requires embracing the paradox of significance: recognizing our true measure—our tininess, our impotence—so that it may be magnified for good. The bad news is that it requires us to let go of righteous anger toward those whose present control is very likely to make our entire lives worse. It’s simple, not easy.
2026-03-10 12:00:00
Second Story experimented with many things on their own website, creating an experience that defied the boundaries of the browser. I loved it!

I encourage you to follow the link above to see just how interesting their “newspaper” layout feels while scrolling horizontally and vertically. Here’s a snapshot of what the design did beyond the viewport edge:

I’d love to see more websites that play with dimensions the way this one did – I know they’re out there, but they’re just not that common. For the most part, I tend to think that scanning along the horizontal axis is more difficult than scanning the vertical, so I typically advise clients to save horizontal scrolling for focused sub-sections. But I think that if the entire paradigm is shifted — where the primary axis is horizontal, and depth scrolling is vertical — it can work.