2025-05-30 12:00:00
Like most technologists of a certain age, I had many of my expectations for the future of computing set by Star Trek production designers. It’s quite easy to connect many of the devices we have today to props designed 30–50 years ago: the citizens of the Federation had communicators before we had cellphones, tricorders before we had smartphones, PADDs before we had tablets, wearables before the Humane AI Pin, and voice interfaces before we had Siri. You could easily make the case that Silicon Valley owes everything to Star Trek.
But now, there seems to be a shared notion that the computing paradigm established over the last half-century has run its course. The devices all work well, of course, but they all come with costs to culture that are worth correcting. We want to be more mobile, less distracted, less encumbered by particular modes of use in certain contexts. It’s also worth pointing out that the arrival of generative AI has made corporate shareholders ravenous for new products and new revenue streams. So whether we want them or not, we’re going to get new devices. The question is, will they be as revolutionary as they’re already hyped up to be? I’m old enough to remember the gasping “city-changing” hype around the Segway before everyone realized it was just a scooter with a gyroscope. But in fairness, there was equal hype around the iPhone, and it isn’t overreaching to say that it remade culture even more extensively than a vehicle ever could have.
So time will tell.
I do think there is room for a new approach to computing, but I don’t expect it to be a new device that renders all others obsolete. The smartphone didn’t do that to desktop or laptop computers, nor did the tablet. We shouldn’t expect a screenless, sensor-ridden device to replace anyone’s phone entirely, either. But done well, such a thing could be a welcome addition to a person’s kit. The question is whether that means just making a new thing or rethinking how the various computers in our life work together.
As I’ve been pondering that idea, I keep thinking back to Star Trek, and how the device that probably inspired the least wonder in me as a child is the one that seems most relevant now: the Federation’s wearables. Every officer wore a communicator pin — a kind of Humane Pin lite — but they also all wore smaller pins at their collars signifying rank. In hindsight, it seems like those collar pins, which were discs the size of a watch battery, could have formed some kind of wearable, personal mesh network. And that idea got me going…
The future isn’t a zero-sum game between old and new interaction modes. Rather than being defined by a single new computing paradigm, the future will be characterized by an increase in computing: more devices doing more things.
I’ve been thinking of this as a PAC — Personal Ambient Computing.
At its core is a modular component I’ve been envisioning as a small, disc-shaped computing unit roughly the diameter of a silver dollar but considerably thicker. This disc would contain processing power, storage, connectivity, sensors, and microphones.
Not exactly unlike what people are already speculating that the forthcoming io/OpenAI device will be.
The difference here is that a PAC disc could be worn as jewelry, embedded in a wristwatch with its own display, housed in a handheld device like a phone or reader, integrated into a desktop or portable (laptop or tablet) display, or embedded in household appliances. This approach could create a personal mesh network of PAC modules, each optimized for its context, rather than forcing every function in our lives through a smartphone.
There has been plenty of earned dunking on the Humane Pin and the Rabbit R1 — they just weren’t good devices — which has led to a rampage of preemptive dunking on whatever io/OpenAI will make. In all fairness, some of the ideas behind those devices weren’t bad ones; the craft just wasn’t there. But the implicit idea of a singular AI-driven device to replace your phone is a bad one. No one wants that; no one will find forcing all the interactions they’ve become accustomed to through a voice interface “delightful.”
But adding new kinds of devices to the mix is a good idea. That is what PAC should be all about.
The key to making this work lies in the standardized form factor. I imagine a magnetic edge system that allows the disc to snap into various enclosures — wristwatches, handhelds, desktop displays, wearable bands, necklaces, clips, and chargers. By getting the physical interface right from the start, the PAC hardware wouldn’t need significant redesign over time, but an entirely new ecosystem of enclosures could evolve more gradually and be created by anyone.
A worthy paradigm shift in computing is one that makes the most use of modularity, open-source software and hardware, and context. Open-sourcing hardware enclosures, especially, would offer a massive leap forward for repairability and sustainability.
In my illustration above, I even went as far as sketching a smaller handheld — exactly the sort of device I’d prefer over the typical smartphone. Mine would be proudly boxy with a larger top bezel to enable greater repair access to core components, like the camera, sensors, microphone, speakers, and a smaller, low-power screen I’d depend upon heavily for info throughout the day. Hey, a man can dream.
The point is, a PAC approach would make niche devices much more likely.
The disc itself could operate at lower power than a smartphone, while device pairings would benefit from additional power housed in larger enclosures, especially those with screens. This creates an elegant hierarchy where the disc provides your personal computing core and network connectivity, while housings add context-specific capabilities like high-resolution displays, enhanced processing, or extended battery life.
Simple housings like jewelry would provide the form factor and maybe extend battery life. More complex housings would add significant power and specialized components. People wouldn’t pay for screen-driving power in every disc they own, just in the housings that need it.
This modularity solves the chicken-and-egg problem that kills many new computing platforms. Instead of convincing people to buy an entirely new class of device with no established software ecosystem, PAC could give us familiar form factors — watches, phones, desktop accessories — powered by a new paradigm. Third-party manufacturers could create housings without rebuilding core computing components.
It’s worth saying that this is not something I particularly want Apple to create. Nor Google, Microsoft, Meta, OpenAI, or anyone else already worth a billion or more dollars.
This vision of personal ambient computing aligns with what major corporations already want to achieve, but with a crucial difference: privacy. The current trajectory toward ambient computing comes at the cost of unprecedented surveillance. Established tech corporations already envision futures where computing is everywhere, but where they monitor, control, and monetize the flow of information.
That’s a control over culture that I fundamentally reject.
PAC demands a different future — one that leaves these corporate gatekeepers behind. A personal mesh should be just that: personal. Each disc should be configurable to sense or not sense based on user preferences, allowing contextual control over privacy settings. Users could choose which sensors are active in which contexts, which data stays local versus shared across their mesh, and which capabilities are enabled in different environments. A PAC unit should be as personal as your crypto vault.
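As a thought experiment, the per-context privacy controls described above could be modeled like this. Everything here is a hypothetical sketch — the class names, sensor fields, and context labels are my own inventions, not part of any real PAC specification:

```python
from dataclasses import dataclass, field

@dataclass
class SensorPolicy:
    """Hypothetical per-context sensor permissions for one PAC disc."""
    microphone: bool = False
    location: bool = False
    motion: bool = True
    share_with_mesh: bool = False  # data stays local unless explicitly opted in

@dataclass
class PacDisc:
    policies: dict = field(default_factory=dict)
    active_context: str = "default"

    def set_policy(self, context: str, policy: SensorPolicy) -> None:
        self.policies[context] = policy

    def allows(self, sensor: str) -> bool:
        # Unknown contexts fall back to the most restrictive defaults.
        policy = self.policies.get(self.active_context, SensorPolicy())
        return getattr(policy, sensor, False)

disc = PacDisc()
disc.set_policy("home", SensorPolicy(microphone=True, share_with_mesh=True))
disc.set_policy("work", SensorPolicy(microphone=False, location=True))

disc.active_context = "home"
print(disc.allows("microphone"))  # True

disc.active_context = "work"
print(disc.allows("microphone"))  # False
```

The design choice worth noting is the fail-closed default: a context the user hasn’t configured grants nothing, which is the opposite of how most surveillance-funded platforms behave.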
Admittedly, this is an idea with a lot of technical and practical hand-waving at work. From this vantage point, it isn’t really about technical capability — I’m making a lot of assumptions about continued miniaturization, network bandwidth, storage capacities, and other things. It is about computing power returning to individuals rather than being concentrated in corporate silos. PAC represents ambient computing without ambient surveillance. And it is about computing graduating from its current form and becoming more humanely and elegantly integrated into our day-to-day lives.
The smartphone isn’t going anywhere. And we’re going to get re-dos of the AI devices that have already spectacularly failed. But we won’t get anywhere especially exciting until we look at the personal computing ecosystem holistically.
PAC offers a more distributed, contextual approach that enhances rather than replaces effective interaction modes. It’s additive rather than replacement-based, which historically tends to drive successful technology adoption. I know I’m not alone in imagining something like this. I’d just like to feel more confident that people with the right kind of resources would be willing to invest in it.
By distributing computing across multiple form factors while maintaining continuity of experience, PAC could deliver on the promise of ubiquitous computing without sacrificing the privacy, control, and interaction diversity that make technology truly personal.
The future of computing shouldn’t be about choosing between old and new paradigms. It should be about computing that adapts to us, not the other way around.
2025-05-28 12:00:00
For the past twenty to thirty years, the creative services industry has pursued a strategy of elevating the perceived value of knowledge work over production work. Strategic thinking became the premium offering, while actual making was reframed as “tactical” and “commoditized.” Creative professionals steered their careers toward decision-making roles rather than making roles. Firms adjusted their positioning to sell ideas, not assets — strategy became the product, while labor became nearly anonymous.
After twenty years in my own career, I believe this has been a fundamental mistake, especially for those who have so distanced themselves from craft that they can no longer make things.
The strategic pivot created two critical vulnerabilities that are now being exposed by AI:
For individuals: AI is already perceived as delivering ideas faster and with greater accuracy than traditional strategic processes, repositioning much of what passed for strategy as little better than educated guesswork. The consultant who built their career on frameworks and insights suddenly finds themselves competing with a tool that can generate similar outputs in seconds.
For firms: Those who focused staff on strategy and account management while “offshoring” production cannot easily pivot to new means of production, AI-assisted or otherwise. They’ve created organizations optimized for talking about work rather than doing it.
In hindsight, the homogeneity of interaction design systems should have been our warning. We became so eager to accept tools that reduced labor — style guides that eliminated design decisions, component libraries that standardized interfaces, templates that streamlined production — that we all but cleared the decks for AI replacement.
Many creative services firms now accept AI in the same way an army-less nation might surrender to an invader: they have no other choice. They’ve systematically dismantled their capacity to make things in favor of their capacity to think about things. Now they’re hoping they can simply reboot production with bots. I don’t think that will work.
AI, impressive as it is, still cannot make anything and everything. More importantly, it cannot produce things for existing systems as efficiently and effectively as a properly equipped person who understands both the tools and the context.
The real world still makes demands that AI cannot meet on its own. These aren’t strategic challenges — they’re craft challenges. They require the kind of deep, hands-on knowledge that comes only from actually making things, repeatedly, over time.
I see the evidence everywhere in my firm’s client accounts: there’s a desperate need to move as quickly as ever, motivated by the perception that AI has created about the overall pace of the market. But there’s also an acknowledgment that meaningful progress doesn’t come at the push of a button.
The value of simply doing something — competently, efficiently, and with an understanding of how it fits into larger systems — has never been higher.
This is why I still invest energy in my own craft and in communicating design fundamentals to anyone who will listen. Not because I’m nostalgic for pre-digital methods, but because I believe craft represents a sustainable competitive advantage in an AI-augmented world.
The fundamental issue is that we confused talking about work with doing work. We elevated advice-giving over action-taking. We prioritized the ability to diagnose problems over the ability to solve them.
But clients don’t ultimately pay for insights — they pay for outcomes. And outcomes require action. They require the messy, iterative, problem-solving work of actually building something that works in the real world.
The firms and individuals who will thrive in the coming years won’t be those with the best strategic frameworks or the most sophisticated AI prompts. They’ll be those who can take an idea — whether it comes from a human strategist or an AI system — and turn it into something real, functional, and valuable.
In my work, I regularly review design output from teams across the industry. I encounter both good ideas and bad ones, skillful craft and poor execution. Here’s what I’ve learned: it’s better to have a mediocre idea executed with strong craft than a brilliant idea executed poorly. When craft is solid, you know the idea can be refined — the execution capability exists, so iteration is possible. But when a promising idea is rendered poorly, it will miss its mark entirely, not because the thinking was wrong, but because no one possessed the skills to bring it to life effectively.
The pendulum that swung so far toward strategy needs to swing back toward craft. Not because technology is going away, but because technology makes the ability to actually build things more valuable, not less.
In a world where everyone can generate ideas, the people who can execute those ideas become invaluable.
2025-05-25 12:00:00
Many Grids
I’ve been making very small collages, trying to challenge myself to create new patterns and new ways of connecting form and creating space.
Well, are we?
The last page in a book I started last year.
2025-05-16 12:00:00
Great design comes from seeing — seeing something for what it truly is, what it needs, and what it can be — both up close and at a distance. A great designer can focus intently on the smallest of details while still keeping the big picture in view, perceiving both the thing itself and its surrounding context. Designers who move most fluidly between these perspectives create work that endures and inspires.
But there’s a paradox at the heart of design that’s rarely discussed: the discipline that most profoundly determines how lasting and inspiring a work of design can be is a designer’s ability to look away — not just from their own work, but from other solutions, other possibilities, other designers’ takes on similar problems.
This runs counter to conventional wisdom. We’re told to study the masters, to immerse ourselves in the history of our craft, to stay current with trends and innovations. There’s value in this, of course — foundational knowledge creates the soil from which original work can grow. But there comes a point where looking at too many existing solutions becomes not illuminating but constraining.
Design, as I’ve defined it before, is about giving form to intent. Intent is a matter shared between those with a need and those with a vision for a solution. What makes solutions truly special is when that vision is deeply personal and unique — when it emerges from within rather than being assembled from external reference points.
The most distinctive voices in design history all approached creative problems with an obsessive level of attention to detail and the highest standard for the appropriateness of their solutions. But they all also trusted that their unique sensibilities would not just set their work apart but be embraced for its humanity. Dieter Rams didn’t create his revolutionary product designs by studying how others had approached similar problems — he developed principles based on his own sense of what makes design “good.” Susan Kare didn’t design her iconic Apple interface elements by mimicking existing computer graphics — she drew inspiration from everyday symbols, folk art, and her background in fine arts to create a visual language that felt both novel and instantly familiar. Jony Ive’s groundbreaking Apple products didn’t merely iterate on existing consumer electronics and make them smoother and shinier — they emerged from his obsession with materials, manufacturing processes, and a relentless pursuit of simplicity that often meant ignoring industry conventions. All were met with instant hot takes as well as the reverence we now remember.
The most innovative solutions often come from designers who are aware of conventions but not beholden to them. They know the rules well enough to break them purposefully. They understand context but aren’t limited by precedent. They’ve cultivated the discipline to look away from existing solutions when it matters most — during the critical phases of ideation and development when uniqueness of vision is most vulnerable to external influence.
This discipline of looking away preserves the singularity that makes great design resonant. When we constantly reference existing solutions, our work inevitably gravitates toward the mean. We solve for expectations rather than needs. We optimize for recognition rather than revelation. We produce work that feels familiar and safe but lacks the distinctive character that makes design truly compelling. Looking away creates space for intuition to operate. It allows us to draw from deeper wells of experience and insight rather than responding to surface-level trends and patterns. It gives permission for the unexpected connections and novel approaches that define breakthrough work.
This is perhaps the most difficult discipline in design — harder than mastering software, harder than learning color theory, harder than understanding grids and proportions. It requires confidence to trust your own vision when countless examples of “how it’s done” are just a search away. It demands the courage to pursue a direction that hasn’t been validated by others. It necessitates comfort with uncertainty when established patterns offer the security of the proven. It requires an acceptance, if not a desire, for risk — of failure, rejection, being misunderstood, or just being overlooked. Sometimes that’s something we can learn from; sometimes it’s just a matter of creating in a very crowded world.
The most valuable thing a designer brings to any problem is not their knowledge of existing solutions but their unique perspective — their particular way of seeing and making sense of the world. This perspective is preserved and strengthened not by constant reference to what others have done, but by the discipline of looking away and trusting what emerges from within. Trust that what is truly weird this year will become next year’s standard.
Great design requires both looking and looking away — studying and ignoring, learning and forgetting, absorbing and creating. The magic happens not just in what we choose to see, but in what we deliberately choose not to see.
2025-05-14 12:00:00
Today’s metrics-obsessed design culture is too fixated on action. Clicks, conversions, and other easily quantified metrics have become our purpose. We’re so focused on outcomes that we’ve lost sight of what makes them valuable and what even makes them possible in the first place: order and understanding.
The primary function of design is not to prompt action. It’s to bring form to intent through order: arranging and prioritizing information so that those who encounter it can see it, perceive it, and understand it.
Why has action become our focus? Simple: it’s easier to measure than understanding. We can track how many people clicked a button but not how many people grasped the meaning behind it. We can measure time spent on a page but not comprehension gained during that time. And so, following the path of least resistance, we’ve collectively decided that what’s easy to measure must be what’s most important to optimize, leaving action metrics the only means by which the success of design is determined.
This is backward. Action without understanding is merely manipulation — a short-term victory that creates long-term problems. Users who take actions without fully comprehending why become confused, frustrated, and ultimately distrustful of both the design and the organization behind it. A dirty little secret of action metrics is how often the success signal — a button click or a form submission — is immediately followed by a meandering session of actions that obviously signals confusion, and possibly even regret. Often, confusion is easier to perceive in session data than almost anything else.
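That post-success confusion signal can be sketched in code. This is a hypothetical illustration, not any real analytics API — the event names, time window, and churn threshold are all assumptions chosen for the example:

```python
def post_success_confusion(events, success="purchase",
                           window_s=60, churn_threshold=5):
    """Flag a session where the 'success' event is followed by rapid,
    erratic navigation that suggests confusion or regret.

    events: list of (timestamp_seconds, event_name) tuples, sorted by time.
    """
    for i, (t, name) in enumerate(events):
        if name == success:
            # Count navigation churn shortly after the success signal.
            churn = sum(1 for t2, n2 in events[i + 1:]
                        if t2 - t <= window_s and n2 in ("pageview", "back"))
            return churn >= churn_threshold
    return False

# A satisfied session: convert, then leave.
calm = [(0, "pageview"), (30, "purchase"), (90, "exit")]

# A confused session: convert, then bounce around immediately afterward.
confused = [(0, "pageview"), (30, "purchase")] + \
           [(32 + i, "back") for i in range(6)]

print(post_success_confusion(calm))      # False
print(post_success_confusion(confused))  # True
```

Both sessions would look identical to a conversion counter; only by reading past the success event does the difference appear.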
Even when action is an appropriate goal, it’s not a guaranteed outcome. Information can be perfectly clear and remain unpersuasive because persuasion is not entirely within the designer’s control. Information is at its most persuasive when it is (1) clear, (2) truthful, and (3) aligned with the intent of the recipient. As designers, we can only directly control the first two factors.
As for alignment with user intent, we can attempt to influence this through audience targeting, but let’s be honest about the limitations. Audience targeting relies on data that we choose to believe is far more accurate than it actually is. We have geolocation, sentiment analysis, rich profiling, and nearly criminally invasive tracking, and yet most networks think I am an entirely different kind of person than I am. And even if they got the facts right, they couldn’t truly deliver intent alignment at the accuracy they promise without mind-reading.
The other dirty secret of most marketing is that we attempt to close the gap with manipulation designed to work on most people. We rationalize this by saying, “yeah, it’s cringe, but it works.” Because we prioritize action over understanding, we encourage designs that exploit psychological triggers rather than foster comprehension. Dark patterns, artificial scarcity, misleading comparisons, straight-up negging — these are the tools of action-obsessed design. They may drive short-term metrics, but they erode trust and damage relationships with users.
This misplaced emphasis also distorts our design practice. Specific tactics like button placement and styling, form design, and conventional call-to-action patterns carry disproportionate weight in our approach. These elements are important, but fixating on them distracts designers from the craft of order: information architecture, information design, typography, and layout — the foundational elements essential to clear communication.
What might design look like if we properly valued order over action?
None of this means that action isn’t important. Of course it is. A skeptic might ask: “What is the purpose of understanding if no action is taken?” In many cases, this is a fair question. The entire purpose of certain designs — like landing pages — may be to engage an audience and motivate their action. In such cases, measuring success through clicks and conversions not only makes sense, it’s really the only signal that can be quantified.
But this doesn’t diminish the foundational role that understanding plays in supporting meaningful action, or the fact that overemphasis on action metrics can undercut the effectiveness of communication. Actions built on misunderstanding are like houses built on sand — they will inevitably collapse.
When I say that order is more important than action, I don’t mean that action isn’t important. But there is no meaningful action without understanding, and there is no understanding without order. By placing order first in our design priorities, we don’t abandon action — we create the necessary foundation for it. We align our practice with our true purpose: not to trick people into doing things, but to help them see, know, and comprehend so they can make informed decisions about what to do next.