2026-02-25 13:00:00
This, from JA Westenberg, is well-put:
“I’m going to argue that the pessimists have the best narratives and the worst track record. The doom scenarios require assumptions that don’t survive contact with economic history, and the psychological posture you bring to this moment actually matters for how it turns out.”
Westenberg’s essay is about why optimism at a time like the present makes sense. It’s exactly the sort of thing—when thoughtfully argued and laden with examples—that I need to read, as I tend to articulate optimism while internalizing the opposite. I tend to give every catastrophe a hearing, and even a momentary “what if this is right” leaves a mark. That isn’t to say that ignoring all warnings is a good idea, nor is that what Westenberg recommends. But it is a helpful reminder that “catastrophists keep being wrong.” And one benefit of aging is that you can accumulate many directly-lived examples of just that.
I do appreciate the implicit message throughout this piece, which reminds me of something important: we do experience great changes—the status quo isn’t untouchable—but the human drive to do something is both the most reliable source of future technological threats to our way of life and the most reliable engine of the productive aftermath.
2026-02-23 13:00:00
I’ve been thinking about the science fiction novels that have stayed with me across years—the ones I return to in my mind even when I’m not actively reading them. Replay. The Sparrow. The Listeners. Way Station. Earth Abides. The Lathe of Heaven. I Who Have Never Known Men.
When I look at them together, I see thematic consistencies that I didn’t recognize when reading them individually: the burdens of carrying knowledge or experience that others cannot understand or believe, of enduring helplessness amidst mystery, loss, and transformation across vast stretches of time, of accepting the limits of human understanding, of attending to a world in the midst of quiet apocalypse. They’re all about isolation and bearing witness.
I’ve been drawn to these stories my entire life because they reflect something I’ve always known about myself but rarely compiled and articulated: I have always felt isolated and that my purpose is to witness; one ensures the other.
Despite being one of seven siblings, I have nonetheless felt “in but not of.” In every group, every gathering, every community I’ve been part of, I’ve felt distance. Not exclusion—set apart. As if my role isn’t to be fully inside but to observe from the edge. Perhaps this gives it purpose, gives me the function of serving others by seeing patterns that only become visible from afar. That idea assuages the loneliness and the self-recrimination that washes over me when I wonder if I’m just doing this to myself. But beneath that is the lurking certainty that my isolation serves a different purpose, one I don’t and may never fully understand.
This distance is an interior experience, and explaining it runs the risk of painting a distorted picture of my life. I have a large family. I have many friends. I have learned to navigate many different social conditions with the appearance of comfort. From the outside, I can imagine how my experience of things must seem impossible. But amidst others, I almost always think to myself, “I should belong here, shouldn’t I? I am one of them, aren’t I?” while feeling an almost physical boundary, as if surrounded by a magnetic field naturally tuned to keep me removed. I’ve aged into a kind of acceptance of this. But acceptance doesn’t erase the pain, it just makes it familiar. I’ve learned to make a home at the edges.
The purpose of it—this notion of bearing witness—has been the harder thing to understand. I’ve said it many times, written it in various ways, but the feeling has stayed with me my entire life: the purpose of my existence isn’t to achieve something, to earn distinction, to build a legacy, to be known. It’s to observe something. To see clearly and bear what I see. It produces a perpetual feeling of heightened anticipation that has shaped so much in my life—productive sometimes, unsettling often. An anxiety that, though I medicate it, maintains a feeling that something, perhaps the thing, is about to happen. That I’m watching for a moment of significance I’ll recognize when it arrives.
But what if there is no moment?
What if the perpetual anticipation isn’t pointing toward a single future event but the recognition that I’m already in the midst of what I was positioned to witness? What if the thing I’ve been waiting to see is already happening, and my work is simply to name it clearly while others are caught up in narratives that obscure what’s actually occurring?
I think about the transformations I’ve been documenting in real time. The web, social media, new devices, and now AI reshaping work, design, economics, attention, the very nature of consistency and standardization. I’m in it—using the tools daily, feeling the effects, understanding both capabilities and costs—but not of it. I can see the gaps between what we’re told these technologies mean and what they actually require. Between stated values and actual incentives. Between promises of democratization and realities of extraction. Between narratives of progress and patterns of harm.
I’m in but not of—occupying a liminal space from which certain truths are seen more clearly. I write this with some unease that it will sound as if I see removal as elevation, that I’m claiming some kind of special, prophetic status. Not at all.
The books I love reflect this dynamic. Characters who witness things others can’t see or won’t believe. Who bear knowledge that isolates them further. Who must sit with mystery and uncertainty. Who understand that observation is not passive—it’s a form of purpose, even when it doesn’t lead to resolution or change. None of whom enjoy this place or profit from it in riches or glory. They see, and are themselves unseen.
Emilio in The Sparrow returns from first contact with trauma and knowledge no one can comprehend. Ish in Earth Abides watches civilization fade across his lifetime, powerless to prevent it but compelled to understand it. The scientists in The Listeners wait decades for signals that may never come, bearing the patient discipline of attention. George in The Lathe of Heaven witnesses reality rewrite itself while everyone else forgets what was.
Instead of following heroes whose actions drive the plot forward, these stories ask us to sit behind the eyes of those who themselves can do little else but see, and feel what they are powerless to escape feeling. Who live with difficulty rather than resolve it. Who find meaning not in changing what they see but in seeing it clearly.
I think this is what I do. What I’ve always done. It doesn’t feel like a choice, exactly, but a configuration.
The essays I write are attempts to name what is happening—to articulate patterns I’m seeing from my particular position, slightly outside the center of things. To witness clearly in a moment when narratives are thick and obscuring: Disruption presented as innovation when it’s often extraction. Efficiency sold as progress when it consolidates rather than creates value. Technological determinism framed as inevitability when it’s actually a series of choices made by people with specific interests. Consistency presented as maturity when it might be primitive constraint. Information overload treated as abundance when it functions as pollution.
I see these things because I’m close enough to be inside the systems but positioned enough outside them to notice the contradictions. I use the tools. I experience the transformation. But I’m not so invested in the narrative that I can’t see where it diverges from reality.
The perpetual anticipation I’ve felt my entire life—that sense of waiting for something significant—maybe it isn’t about a single moment. Maybe it is about recognizing that we are always in the midst of transformation, and the work is to witness it clearly, moment by moment, pattern by pattern. Not to change it, necessarily; not to stop it or accelerate it. Just to see it and say it.
This might also sound like passivity with many excuses. But witness is active. It requires discipline, attention, willingness to sit with discomfort and uncertainty. It requires being willing to name difficult truths even when they’re unpopular. To refuse easy narratives even when they’re seductive. To maintain clarity when everything around you is optimized for engagement over understanding.
It is, however, contrary to the modern individualistic paradigm which, in all corners of culture, emphasizes the power of a person to perform at a level worthy of the attention, desire, admiration, and memory of others. I don’t mean to be self-deprecating, but self-aware, when I say that will not be me. If I put additional action to this, it is in urging others to live this way, too. The senseless speed of today’s world is sublimating thought—refuse it and you will see.
The books I love taught me this. That witness is purpose. That isolation can be vantage point. That being the one who sees doesn’t mean being able to fix, but it does mean being able to articulate—and that articulation matters, even when it doesn’t change outcomes.
I don’t know what comes next. I don’t know if the transformation I’m witnessing will be remembered as progress or catastrophe or something more complex than either. I don’t know if my observations will matter to anyone beyond myself.
I do know that this is who I am, and this is what I have always done. I can be at home anywhere because everywhere is at the edge of something. And so I will remain, as a witness, perhaps to something specific that is yet to come or just to all that is while I was here. Accepting one’s place, after all, and the pain and privileges it provides, is a life’s work.
*
Replay
Ken Grimwood
1986
A man dies at 43 and awakens in his 18-year-old body, reliving his life with full memory of the future, only to repeat this cycle multiple times.
The Sparrow
Mary Doria Russell
1996
A Jesuit priest leads humanity’s first contact mission to an alien civilization, returning as the sole survivor with devastating physical and spiritual trauma.
The Listeners
James Gunn
1972
Scientists and researchers dedicate decades to listening for extraterrestrial signals, exploring the patience and dedication required for humanity’s search for contact.
The Light of Other Days
Arthur C. Clarke and Stephen Baxter
2000
A technology that allows viewing any moment in history destroys privacy and transforms human civilization as all secrets become accessible.
Way Station
Clifford D. Simak
1963
A Civil War veteran lives in isolation for over a century as the keeper of an alien transportation station, watching humanity from the outside.
Earth Abides
George R. Stewart
1949
A man survives a plague that destroys civilization and witnesses humanity’s gradual regression across his lifetime, powerless to preserve what was lost.
The Lathe of Heaven
Ursula K. Le Guin
1971
A man discovers his dreams can alter reality, and each change rewrites the past so that only he remembers what was.
I Who Have Never Known Men
Jacqueline Harpman
1995
A woman imprisoned with thirty-nine others since childhood escapes into a post-apocalyptic world, never understanding why she was captive or what happened to civilization.
2026-02-19 13:00:00
Over the last week, I’ve been setting up design-to-code systems that have had a profound effect on how our design team makes and ships things. We create files in Figma that are translatable by machine—integrated with Anima to produce HTML and CSS files, compiled and refactored by Claude, then implemented by providing Claude with contextually relevant information and code. Existing templates, stylesheets, configuration files—all fed into a system that can maintain, expand, and transform what a human team built over years, almost instantly, with one person and a machine.
It’s sped up the process at every point. Made the designs better. Enabled much greater reach for a small team.
That’s a conclusion many have reached over the past year, though I think the general awakening to this is accelerating as AI tools improve and more people integrate them into their work. But here’s what this makes me realize: if one person with AI can instantly maintain and expand software architecture that took years to build, what does this mean a few years from now?
I think it means the end of software as we know it. As radical as that sounds, it’s a clear trajectory. The popular notion of software began as a program written to a disc that runs on a machine you own. Anyone who buys that program has the same experience—bugs and all. As the corporate structures behind software development grew, they could “service” software by releasing bug fixes—service packs—and selling subsequent versions as upgrades. As the internet matured, software moved to the cloud, and the notion of everyone experiencing the same program became even more secure by centralizing it—making everyone’s machine merely a door that opens into the same room.
This is what we’ve become accustomed to: software as a service. You pay for access to a uniform experience. Everyone using Gmail sees Gmail. Everyone using Shopify works within Shopify’s constraints. We have standardized software not because it enables a good user experience—that’s a rationalization that will be difficult for designers to accept—but because it is an economic imperative. You build something once, sell it many times, support one version.
But when AI enables nearly instant software creation, the software no longer needs to be uniform across a paying population. The AI becomes infrastructure—like paying for the flow of electricity or water—and the software becomes whatever you need, as unique as that may be.
The economic system of software, a convention of packaging digital solutions to shared problems and selling them—of selling interfaces—becomes much simpler: Selling access to a mind. Software becomes a promise, not a solution. And what we use becomes as personal as a hand-made outfit.
This has implications that extend far beyond software.
Design emerged because manufacturing required standardization. You design something once, produce it many times, distribute identical units, support consistent experiences. Economies of scale. Recognizable patterns. Users learn once, apply everywhere. Design is fundamentally about creating standardized solutions to shared problems.
But when creation becomes instant and individualized, standardization stops making sense. If AI can generate unique solutions at no additional cost, why would you make them identical? Every artifact becomes bespoke because there’s no reason for it not to be.
Arthur C. Clarke’s idea that “any sufficiently advanced technology is indistinguishable from magic” is familiar to most people in technology and design. Many have picked it apart as a prompt for imagining the future. But what fascinates me now is thinking about why such technology would be indistinguishable from magic, not how.
And the why is because technology represented by artifacts—objects, tools, interfaces, surfaces—becomes wildly inconsistent.
Perhaps consistency is not the maturity we took it for, but civilizational infancy. Perhaps consistency is primitive. Consistency, after all, can be considered merely a constraint imposed by the hard costs of manufacturing, distribution, use, and maintenance. When those constraints disappear, so does consistency. Perhaps truly advanced technology isn’t standardized because it doesn’t need to be.
Imagine a world in which materials are as easy to replicate and manipulate as digital information is today. In such a world, conventional design of components becomes unnecessary. Two vehicles could look completely different from one another and operate in completely different ways. Not because they’re trying to be different, but because there’s no reason for them to be the same.
In UFO lore, one of the confounding aspects of sighting data is the diversity of objects seen—the diversity of shapes, maneuvers, behaviors—almost to the point of being nonsensical. Skeptics point to this as evidence that the reports can’t all be real: if these were actual craft from an advanced civilization, surely they’d have standardized designs, consistent technology, recognizable patterns.
But what if that’s backward? What if the diversity is precisely what makes them plausible as artifacts of advanced technology? Not standardized designs mass-produced by a civilization that has cracked mysteries of physics, but one-off, bespoke objects manifested by post-AI technology. Each unique because uniqueness costs nothing. Each inconsistent because consistency is primitive.
We’re already seeing hints of this in software. As AI systems become more capable, interfaces will stop being standardized. Your version of a tool will be different from mine because it adapted to how you work. The next time you open it, it might be different again because your needs changed. There will be no “learning the interface” because the interface is always learning you. This creates interesting challenges.
If everything is bespoke, how do we share knowledge? How do we teach someone to use a tool when their version is different from ours? How do we build on each other’s work when there are no shared reference points? How do we create a commons when nothing is shared?
If software becomes access to a “mind” rather than ownership of a product, who controls that mind? When AI infrastructure becomes as essential as electricity or water, what happens when it’s owned by a handful of companies? What leverage do users have? What happens to agency?
When you’re always describing what you want rather than making it, do you lose the capacity to imagine things you don’t know how to describe? Does the act of building—of wrestling with constraints, of solving problems, of iterating through failures—teach you something that prompting can’t?
If artifacts become trivial to create, do they lose meaning? If design becomes parameter-setting rather than making, does the discipline become vestigial? What does it mean to call yourself a designer when you’re not designing artifacts but rather describing boundaries within which a machine generates them?
Each of these questions strikes a nerve in me. I immediately find myself posturing to one side or another of a possible debate. And to be sure, these questions—and more—should be debated. But what several tumultuous years of non-stop progress in AI have taught me is to maintain observant neutrality for as long as possible. What prompts intrigue—and outrage—changes tomorrow. Regardless of how I feel from one day to the next, the trajectory is clear. My team is already living in the transition—where one person with AI can do what used to require many, where interfaces can be generated rather than designed, where the limiting factor is no longer technical capability but imagination and judgment.
Many aspects of our current experience of AI trouble me. I worry about the lack of thought, the greed motivating the speed, the ecological outcomes, the resilience of human dignity in a world turning over by machine. But I can also envision a world in which AI unlocks a future of abundance and greater human potential. It’s a technology in its relative childhood right now, and the future will be determined by how we raise it. It could go horribly wrong; I just hope it doesn’t.
Every child teaches its parent. AI is already teaching me that the future is imagined in advance but experienced in hindsight. We’re already at a point that many still think is far ahead. Our advancements are already being made primitive. And so perhaps consistency is among them.
A sufficiently advanced civilization might be one of technologically supported solipsism—a social fabric unwoven by customization, all of us proximate aliens to one another. Or, it could expand what it means to be human. The magic of advanced technology isn’t really power, is it, but imagination. To think that a thing can be as distinct as the one who makes it, and that anyone can make exactly the thing they imagine—that is about as magical a future as I can envision.
From our perspective now, that future may be indistinguishable from chaos. But in the midst of it, it could be indistinguishable from real life.
2026-02-17 13:00:00
At long last — catching up! It’s been months since my last visual-journal entry here, but I have kept up offline. Here’s a sampling.








A friend asked me recently about my collage source material. At this point, I very rarely cut something out of a book or magazine directly and fix it onto a page.
Instead, I most often collect digital images and process them a bit before printing them at home and using them like any other collage material. Sometimes, I run already printed pages several more times through the printer in order to layer images and create textures. Other times, I will build pages in digital composition programs before printing them and using them as base layers for what I add in the book.



One of my favorite things to do is create very small collages while listening to music. Once I’m done, I’ll trim them and add them to the book.







Shameless promotion of my son’s art here ^








All twenty-six letters right there on the left!



2026-02-15 13:00:00
Several years ago, I created a new design for my website and attempted to alter my Blot theme to match it. It didn’t work out especially well, and I became quickly frustrated by what I perceived to be a limitation of the theme to handle what I considered to be pretty simple design choices. I was wrong, not about how simple the implementation would be, but about how to do it.
Blot themes use Mustache, which defines templates without explicit logic, inserting content dynamically by wrapping labels in double braces ({{ }}) that end up looking like mustaches. I’d read through the support documentation a few times before but never fully understood it. It’s not especially complicated; I just never tried to learn it properly. So what changed? I made a teacher.
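To make the “logic-less” idea concrete, here’s a minimal sketch of Mustache-style variable substitution in Python. This is only an illustration, not Blot’s actual renderer: real Mustache also supports sections, partials, and HTML escaping, and the template and context names here are hypothetical.

```python
import re

# Illustrative Mustache-style renderer: replaces {{name}} tags with
# values from a context dictionary. Real Mustache implementations also
# handle sections ({{#items}}...{{/items}}), partials, and escaping.
def render(template, context):
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )

html = render(
    "<article><h1>{{title}}</h1><p>{{body}}</p></article>",
    {"title": "Hello", "body": "A first entry."},
)
print(html)  # <article><h1>Hello</h1><p>A first entry.</p></article>
```

The template itself carries no logic about where its values come from; whatever engine renders it decides how to fill the braces, which is what makes the format easy for a platform like Blot to support.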
First, of course, I wasted a bunch of time by feeding Claude several sources of information about Mustache. This was unnecessary — Claude already knows it. Then I asked Claude to explain it to me, setting the stage by reminding Claude that though I understand HTML, CSS, and JavaScript, and have done interaction design for a long time, I would benefit from it assuming I was completely green. Things started to click.
Next, I finalized new designs in Figma and translated them to code using Anima. I gave the resulting HTML and CSS files to Claude and asked it to critique, simplify, and recompile them.
Then, I gave Claude all my Blot theme files and asked it to recreate them from scratch to match what I had designed. This was where things got fun, because I was able to describe features and behaviors I wanted, and Claude was able to recommend how best to achieve them, both in my theme files and in how I structure the markdown for my entries. My entry text files were full of inefficiencies that created creative resistance. Dialoguing with Claude helped me optimize this process, which, after more than a decade of maintaining my website with a flat-file CMS, feels like a revelation on par with when I first set it up.
I just took a “warning: I’m messing with my site” banner down, but that doesn’t mean everything is perfect. There are plenty of old entries I haven’t gotten to yet, so there will be some display oddities due to how I had set them up in their text files. I’ll eventually update everything. But for now, I’m very happy with how this is taking shape. I haven’t enjoyed doing actual web development so much in a very long time.
It goes without saying that AI was a critical tool in this process. But what I think is most important to stress here is that AI didn’t redesign and rebuild my website for me. I’ve been very critical of headlines that oversimplify how AI can be used to create a website. Not because it isn’t possible to “have AI make a website for you,” but because what “AI,” “make,” and “website” mean in that statement will vary widely depending upon many factors that the very person likely to find such an idea compelling will not understand. Look, five years from now, there probably will be a magic website-maker button. But I suspect that won’t stop people from making creative and technical choices that afford them greater control over the results at the cost of more work. I couldn’t have updated my site as quickly as I did without using AI at certain points in this process, that is for sure. But I also wouldn’t have learned as much about how everything underneath this page works as quickly as I did if I hadn’t used AI. And that, for me, is the most interesting thing at the moment: culture seems inordinately fixated upon AI as a “doer,” a fixation that should be heavily debated, and, in my observation, far less interested in AI as a teacher.
Anyway, I’m going on much longer than I intended. Perhaps more on that doing vs. teaching thing another time :)
If you notice anything you like, dislike, or seems obviously broken, let me know: [email protected]!
2026-02-05 13:00:00
Can there be progress without disruption?
It sometimes feels as if our culture has become addicted to doom — needing time to be marked by fearful anticipation rather than something more proactive or controlled. We’ve learned to expect that change must be chaotic, that innovation must be destructive, that the future must collide with the now whether we want it to or not.
But there’s nothing about progress that inherently requires disruption except our inability to cooperate for the sake of stability.
Consider the current conversation around AI and the future of work. Most people seem to agree there are three possible scenarios:
Scenario A: AI replaces nearly all functions provided by people so quickly that society can’t respond as it has to previous industrial revolutions. Mass unemployment destabilizes social structures supported by wage taxation. Even in a soft landing — universal basic income, increased corporate taxation — this is seen as catastrophic because it is contrary to the current capitalist paradigm and leaves humans with the existential problem of separating meaning and purpose from work.
Scenario B: AI replaces most current functions, but not as quickly. Sustained unemployment persists, but the gradual shift creates opportunities for humans to differentiate themselves from machines and derive value accordingly. Painful, but manageable. And afterward, this may even make possible a more deliberate and gentle passage to a new kind of society.
Scenario C: We recognize that AI’s current trajectory is destructive to the social fabric. We slow it down, change how it’s used, possibly reject aspects of it entirely. This would be the Amish approach — where observation and discussion about how a technology benefits the community determines its acceptance, use, and integration.
Most people assume Scenario C is impossible. We’re already too far down the path, they say. The technology exists, the investment has been made, the momentum is unstoppable. You can’t put the genie back in the bottle. Perhaps power and money are too committed now — unwilling and unable to accept regulation — untouchable by those that want something different.
But perhaps not. There are cultures that show us the way.
Despite common understanding, the Amish aren’t technophobes. They do use technology, just not everything that comes along. They carefully evaluate tools communally, based on whether they strengthen or weaken their social fabric. They observe. They choose. They have agency. A telephone might help, but only if placed in a shared building rather than individual homes, so it doesn’t fragment family time. The Amish demonstrate that discernment does not mean rejection.
It seems we’ve lost the ability to do the same. More accurately, though, I believe we’ve been convinced we’ve lost it.
We’ve internalized technological determinism so completely that choosing not to adopt something — or choosing to adopt it slowly, carefully, with conditions — feels like naive resistance to inevitable progress. But “inevitable” is doing a lot of work in that sentence. Inevitable for whom? Inevitable according to whom?
The conflation of progress with disruption serves specific interests. It benefits those who profit from rapid, uncontrolled deployment. “You can’t stop progress” is a very convenient argument when you’re the one profiting from the chaos, when your business model depends on moving fast and breaking things before anyone can evaluate whether those things should be broken.
Disruption benefits the information economy. It makes a good story when it happens, and a seductive — if not addictive — constant drip of doom when it feels as if it’s just around the corner. I’d love to live in a world in which good future narratives outsold apocalyptic ones, but I don’t. And so the medium creates the message, and the message creates the moment.
Disruption has become such a powerful memetic force that we’ve simply forgotten it’s optional. We’ve been taught that technological change must be chaotic, uncontrolled, and socially destructive — that anything less isn’t real innovation. But this framing is itself a choice, one that’s been made for us by people with specific incentives.
Think about what we’ve accepted as inevitable in the last twenty-five years: the fragmentation of attention, the erosion of privacy, the monetization of human connection, the replacement of public spaces with corporate platforms, the optimization of everything for engagement regardless of human cost. We were told these were the price of progress, that resistance was futile, that the technology was neutral and the outcomes were just the natural evolution of how humans interact.
But none of it was inevitable. All of it was chosen. Not by us, but for us.
The doom addiction makes sense in this context. If change is inevitable and we have no agency over it, then the most we can do is anticipate its arrival with a mixture of dread and fascination. Doom is exciting. Doom is dramatic. Doom absolves us of responsibility because if catastrophe is coming regardless of what we do, why bother trying to prevent it?
But stability? Cooperation? Careful evaluation of whether a technology actually serves us? These feel boring, impossible, naive. They require something we seem to have lost: the belief that we can collectively decide how technology integrates into our lives rather than simply accepting whatever technologists and investors choose to build.
I am not anti-technology. I have always been fascinated, excited, and motivated by new things. I am, however, choosy. This is about reclaiming the capacity to say “not like this” or “not yet” or “only under these conditions.” It’s about recognizing that the speed and manner of technological adoption is itself a choice, and one that should be made collectively rather than imposed by those who stand to profit.
What would it take to choose Scenario C? Not to reject AI entirely, but to evaluate it the way the Amish evaluate technology — with the community’s wellbeing as the primary criterion rather than efficiency or profit or inevitability.
It would require cooperation. It would require prioritizing stability over disruption. It would require believing that we have agency over how our world changes, that progress doesn’t have to be chaotic, that we can choose to integrate new capabilities slowly and carefully rather than accepting whatever pace Silicon Valley sets.
It would require rejecting the narrative that technological change is a force of nature rather than a series of choices made by people with specific interests.
Maybe we’ve actually lost the ability to cooperate at that scale. Maybe the forces pushing for rapid deployment are too powerful, too entrenched, too good at framing their interests as inevitable progress. Maybe Scenario C really is impossible.
But I suspect it’s less that we’ve lost the ability and more that we’ve forgotten we ever had it. We’ve been told for so long that we can’t choose, that resistance is futile, that disruption is the price of progress, that we’ve internalized it as truth.
The question isn’t whether we can have progress without disruption. The question is whether we can remember that we’re allowed to choose, and whether enough of us can do that at the same time.