2024-02-10 04:15:39
The recent Beeper controversy briefly brought the “blue bubbles vs. green bubbles” topic back into the mainstream. Here’s a quick review for those of you who are (blessedly) unaware of this issue. Messages sent using the iMessage service appear in blue text bubbles within the Messages app. Messages sent using something other than the iMessage service (e.g., SMS, or (soon) RCS) appear in green text bubbles.
The iMessage service and the Messages app are only available on Apple devices. This is usually presented as a competitive advantage for the iPhone. If you want to use the iMessage service, the only (legitimate) way to do so is to buy an Apple device. If Apple were to make iMessage available on non-Apple platforms, that would remove one reason to buy an iPhone—or so the argument goes.
I think this popular conception of the issue is slightly wrong—or right for a different reason, at least. The iMessage service is not so good that it makes the iPhone more attractive to customers. It’s the iPhone that makes iMessage attractive. The iPhone gives iMessage its cachet, not the other way around.
This truth is plainly evident at the core of the “blue bubbles vs. green bubbles” debate. One of the biggest reasons green bubbles are looked down upon is that they indicate that the recipient doesn’t have an iPhone. iPhones are expensive, fancy, and desirable. Blue bubbles put the sender into the “in” crowd of iPhone owners.
The iMessage service itself, when considered in isolation, has considerably less draw. Here’s an internal Apple assessment from 2013, revealed during the recent Epic trial in emails discussing the idea of making iMessage work on non-Apple devices.
Eddy Cue: We have the best messaging app and we should make it the industry standard. […]
Craig Federighi: Do you have any thoughts on how we would make switching to iMessage (from WhatsApp) compelling to masses of Android users who don’t have a bunch of iOS friends? iMessage is a nice app/service, but to get users to switch social networks we’d need more than a marginally better app.
While I appreciate Eddy’s enthusiasm, I think Craig is closer to the mark: if iMessage is better than its competitors at all—and this is highly debatable—it is only marginally so.
Those Apple emails were written more than a decade ago. In the years since, iMessage has improved, but so has the competition. Today, it still feels like the iPhone is carrying iMessage. Anecdotally, both my teenage children have iPhones, but their group chats with their friends take place in WhatsApp.
Apple has almost certainly missed the most advantageous window of time to make iMessage “the industry standard” messaging service. But as the old saying goes, the best time to plant a tree is 30 years ago, and the second-best time is now. Apple has little to lose by expanding iMessage to other platforms, and there still may be something to be gained (even if it’s just making mixed Android/iPhone conversations in Messages a bit smoother).
2024-01-31 06:44:06
The graphical user interface on the original Macintosh was a revelation to me when I first used it at the tender age of 8 years old. Part of the magic was thanks to its use of “direct manipulation.” This term was coined in the 1980s to describe the ability to control a computer without using the keyboard to explain what you wanted it to do. Instead of typing a command to move a file from one place to another, the user could just grab it and drag it to a new location.
The fact that I’m able to write the phrase “grab it and drag it to a new location” and most people will understand what I mean is a testament to the decades-long success of this kind of interface. In the context of personal computers like the Mac, we all understand what it means to “grab” something on the screen and drag it somewhere using a mouse. We understand that the little pictures represent things that have meaning to both us and the computer, and we know what it means to manipulate them in certain ways. For most of us, it has become second nature.
With the advent of the iPhone and ubiquitous touchscreen interfaces, the phrase “direct manipulation” is now used to draw a contrast between touch interfaces and Mac-style GUIs. The iPhone has “direct manipulation.” The Mac does not. On an iPhone, you literally touch the thing you want to manipulate with your actual finger—no “indirect” pointing device needed.
The magic, the attractiveness, the fundamental success of both of these forms of “direct manipulation” has a lot to do with the physical reality of our existence as human beings. The ability to reason about and manipulate objects in space is a cornerstone of our success as a species. It is an essential part of every aspect of our lives. Millions of years of natural selection have made these skills a foundational component of our very being. We need these skills to survive, and so all of us survivors are the ones who have these skills.
Compare this with the things we often put under the umbrella of “knowing how to use computers”: debugging Wi-Fi problems, understanding how formulas work in Excel, splitting a bezier curve in Illustrator, converting a color image to black and white in Photoshop, etc. These are all things we must learn how to do specifically for the purpose of using the computer. There has not been millions of years of reproductive selection to help produce a modern-day population that inherently knows how to convert a PDF into a Word document. Sure, the ability to reason and learn is in our genes, but the ability to perform any specific task on a computer is not.
Given this, interfaces that leverage the innate abilities we do have are incredibly powerful. They have lower cognitive load. They feel good. “Ease of use” was what we called it in the 1980s.
The success of the GUI was driven, in large part, by the fact that our entire lives—and the lives of all our ancestors—have prepared us with many of the skills necessary to work with interfaces where we see things and then use our hands to manipulate them. The “indirection” of the GUI—icons that represent files, windows that represent documents that scroll within their frames—fades away very quickly. The mechanical functions of interaction become second nature, allowing us to concentrate on figuring out how the heck to remove the borders on a table in Google Docs1, or whatever.
The more a user interface presents a world that is understandable to us, where we can flex our millennia-old kinesthetic skills, the better it feels. The Spatial Finder, which had a simple, direct relationship between each Finder window and a location in the file hierarchy, was a defining part of the classic Macintosh interface. Decades later, the iPhone launched with a similarly relentlessly spatial home-screen interface: a grid of icons, recognizable by their position and appearance, that go where we move them and stay where we put them.
Now here we are, 40 years after the original Macintosh, and Apple is introducing what it calls its first “spatial computer.” I haven’t tried the Vision Pro yet (regular customers won’t receive theirs for at least another three days), but the early reviews and Apple’s own guided tour provide a good overview of its capabilities.
How does the Vision Pro stack up, spatially speaking? Is it the new definition of “direct manipulation,” wresting the title from touch interfaces? In one obvious way, it takes spatial interfaces to the next level by committing to the simulation of a 3D world in a much more thorough way than the Mac or iPhone. Traditional GUIs are often described as being “2D,” but they’ve all taken advantage of our ability to parse and understand objects in 3D space by layering interface elements on top of each other, often deploying visual cues like shadows to drive home the illusion.
Vision Pro’s commitment to the bit goes much further. It breaks the rigid perpendicularity and shallow overall depth of the layered windows in a traditional GUI to provide a much deeper (literally) world within which to do our work.
Where Vision Pro may stumble is in its interface to the deep, spatial world it provides. We all know how to reach out and “directly manipulate” objects in the real world, but that’s not what Vision Pro asks us to do. Instead, Vision Pro requires us to first look at the thing we want to manipulate, and then perform an “indirect” gesture with our hands to operate on it.
Is this look-then-gesture interaction any different than using a mouse to “indirectly” manipulate a pointer? Does it leverage our innate spatial abilities to the same extent? Time will tell. But I feel comfortable saying that, in some ways, this kind of Vision Pro interaction is less “direct” than the iPhone’s touch interface, where we see a thing on a screen and then literally place our fingers on it. Will there be any interaction on the Vision Pro that’s as intuitive, efficient, and satisfying as flick-scrolling on an iPhone screen? It’s a high bar to clear, that’s for sure.
As the Vision Pro finally starts to arrive in customers’ hands, I can’t help but view it through this spatial-interface lens when comparing it to the Mac and the iPhone. Both its predecessors took advantage of our abilities to recognize and manipulate objects in space to a greater extent than any of the computing platforms that came before them. In its current form, I’m not sure the same can be said of the Vision Pro.
Of course, there’s a lot more to the Vision Pro than the degree to which it taps into this specific set of human skills. Its ability to fill literally the entire space around the user with its interface is something the Mac and iPhone cannot match, and it opens the door to new experiences and new kinds of interfaces.
But I do wonder if the Vision Pro’s current interaction model will hold up as well as that of the Mac and iPhone. Perhaps there’s still at least one technological leap yet to come to round out the story. Or perhaps the tools of the past (e.g., physical keyboards and pointing devices) will end up being an essential part of a productive, efficient Vision Pro experience. No matter how it turns out, I’m happy to see that the decades-old journey of “spatial computing” continues.
Select the whole table, then click the “Border width” toolbar icon, then select 0pt. ↩
2024-01-12 02:51:57
While the utility of Generative AI is very clear at this point, the moral, ethical, and legal questions surrounding it are decidedly less so. I’m not a lawyer, and I’m not sure how the many current and future legal battles related to this topic will shake out. Right now, I’m still trying to understand the issue well enough to form a coherent opinion of how things should be. Writing this post is part of my process.
Generative AI needs to be trained on a vast amount of data that represents the kinds of things it will be asked to generate. The connection between that training data and the eventual generated output is a hotly debated topic. An AI model has no value until it’s trained. After training, how much of the model’s value is attributable to any given piece of training data? What legal rights, if any, can the owners of that training data exert on the creator of the model or its output?
A human’s creative work is inextricably linked to their life experiences: every piece of art they’ve ever seen, everything they’ve done, everyone they’ve ever met. And yet we still say the creative output of humans is worthy of legal protection (with some fairly narrow restrictions for works that are deemed insufficiently differentiated from existing works).
Some say that generative AI is no different. Its output is inextricably linked to its “life experience” (training data). Everything it creates is influenced by everything it has ever seen. It’s doing the same thing a human does, so why shouldn’t its output be treated the same as a human’s output?
And if it generates output that’s insufficiently differentiated from some existing work, well, we already have laws to handle that. But if not, then it’s in the clear. There’s no need for any sort of financial arrangement with the owners of the training data any more than an artist needs to pay every other artist whose work she’s seen each time she makes a new painting.
This argument does not sit well with me, for both practical and ethical reasons. Practically speaking, generative AI changes the economics and timescales of the market for creative works in a way that has the potential to disincentivize non-AI-generated art, both by making creative careers less viable and by narrowing the scope of creative skill that is valued by the market. Even if generative AI develops to the point where it is self-sustaining without (further) human input, the act of creation is an essential part of a life well-lived. Humans need to create, and we must foster a market that supports this.
Ethically, the argument that generative AI is “just doing what humans do” seems to draw an equivalence between computer programs and humans that doesn’t feel right to me. It was the pursuit of this feeling that led me to a key question at the center of this debate.
Computer programs don’t have rights1, but people who use computer programs do. No one is suggesting that generative AI models should somehow have the rights to the things they create. It’s the humans using these AI models that are making claims about the output—either that they, the human, should own the output, or, at the very least, that the owners of the model’s training data should not have any rights to the output.
After all, what’s the difference between using generative AI to create a picture and using Photoshop? They’re both computer programs that help humans make more, better creative works in less time, right?
We’ve always had technology that empowers human creativity: pencils, paintbrushes, rulers, compasses, quills, typewriters, word processors, bitmapped and vector drawing programs—thousands of years of technological enhancement of creativity. Is generative AI any different?
At the heart of this question is the act of creation itself. Ownership and rights hinge on that act of creation. Who owns a creative work? Not the pencil, not the typewriter, not Adobe Photoshop. It’s the human who used those tools to create the work that owns it.
There can, of course, be legal arrangements to transfer ownership of the work created by one human to another human (or a legal entity like a corporation). And in this way, value is exchanged, forming a market for creativity.
Now then, when someone uses generative AI, who is the creator? Is writing the prompt for the generative AI the act of creation, thus conferring ownership of the output to the prompt-writer without any additional legal arrangements?
Suppose Bob writes an email to Sue, who has no existing business relationship with Bob, asking her to draw a picture of a polar bear wearing a cowboy hat while riding a bicycle. If Sue draws this picture, we all agree that Sue is the creator, and that some arrangement is required to transfer ownership of this picture to Bob. But if Bob types that same email into a generative AI, has he now become the creator of the generated image? If not, then who is the creator?
Where is the act of creation?
This question is at the emotional, ethical (and possibly legal) heart of the generative AI debate. I’m reminded of the well-known web comic in which one person hands something to another and says, “I made this.” The recipient accepts the item, saying “You made this?” The recipient then holds the item silently for a moment while the person who gave them the item departs. In the final frame of the comic, the recipient stands alone holding the item and says, “I made this.”
This comic resonates with people for many reasons. To me, the key is the second frame in which the recipient holds the item alone. It’s in that moment that possession of the item convinces the person that they own it. After all, they’re holding it. It’s theirs! And if they own it, and no one else is around, then they must have created it!
This leads me back to the same question. Where is the act of creation? The person in the comic would rather not think about it. But generative AI is forcing us all to do so.
I’m not focused on this point for reasons of fairness or tradition. Technology routinely changes markets. Our job as a society is to ensure that technology changes things for the better in the long run, while mitigating the inevitable short-term harm.
Every new technology has required new laws to ensure that it becomes and remains a net good for society. It’s rare that we can successfully adapt existing laws to fully manage a new technology, especially one that has the power to radically alter the shape of an existing market like generative AI does.
In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act. And if the same act (absent any prior legal arrangements) confers ownership in one context but not in another, then perhaps it’s not the best candidate.
I’m not sure what the right answer is, but I think I’m getting closer to the right question. It’s a question I think we’re all going to encounter a lot more frequently in the future: Who made this?
Non-sentient computer programs, that is. If we ever create sentient computer programs, we’ll have a whole host of other problems to deal with. ↩
2023-10-30 04:11:46
I first read about the “blue ocean” strategy in a story (probably in Edge magazine) about the Nintendo Wii. While its competitors were fighting for supremacy in the game-console market by producing ever-more-powerful hardware capable of high-definition visuals, Nintendo chose not to join this fight. The pursuit of graphics power was a “red ocean” that was already teeming with sharks, fighting over the available fish and filling the water with blood.
Nintendo’s “blue ocean” strategy was to stake out a position where none of its competitors were present. The idea of creating a standard-definition game console in the generation when all the other consoles were moving to HD seemed ridiculous, but that’s exactly what Nintendo did. In place of impressive graphics, the Wii differentiated itself with its motion controls and a low price. It was a hit.
Lately, I’ve been thinking about the blue ocean strategy in the context of Apple. Like Nintendo, Apple has made some bold moves with its products, many of which were ridiculed at the time: a smartphone without a physical keyboard, a candy-colored desktop computer with no floppy drive and no legacy ports, a $695 (in 2023 dollars) portable music player, a digital music store in the age of ubiquitous music piracy.
Unlike Nintendo, Apple has seen its competitors move quickly to imitate its innovations, turning these oceans red and leaving Apple to compete on the basis of execution…until it finds its next blue ocean.
But what is that? It’s tempting to point to the Vision Pro. AR/VR headsets are not new, but then, neither were smartphones or portable music players. The Vision Pro hasn’t shipped yet, so the jury’s still out. Let’s keep an eye on it.
I have something else in mind. It’s actually related to one of Apple’s earlier “blue ocean” changes: the elimination of removable batteries. In the beginning, Apple’s laptops all used removable battery packs. Some even let the user pull out the floppy-drive module and replace it with a second battery.
Starting in 2009, Apple began to phase out removable batteries across its laptop line in favor of batteries that were sealed inside the case and were not user-accessible. The iPod and the iPhone arguably started this trend by never including removable batteries to begin with. (The iPhone defied so many other norms that the sealed battery was less remarked upon than it might have been, but it was still noted.)
The upsides, which Apple touted, were many: lighter weight, smaller size, better reliability, longer battery life. We are still reaping these benefits today, and we Apple fans rarely question them. Today, predictably, non-removable batteries are a red ocean in many product categories. They are the norm, not an innovation.
When thinking about Apple’s next blue ocean, it’s tempting to ignore past innovations. Technological progress seems like an arrow pointing in only one direction, never turning back. But I just can’t shake the idea that a return to removable, user-accessible batteries has now become a blue-ocean opportunity just waiting for Apple to seize it.
Follow me here. Yes, sealed batteries still offer all the same advantages they always have. And, yes, a return to removable batteries would bring back all their problems: increased size and weight, increased risk of liquid and dust ingress, decreased aesthetic elegance.
But some things have changed in the past couple of decades. Battery technology has improved, and Apple has moved its entire product line to its own silicon chips that lead the industry in power efficiency. There’s more headroom than there has ever been to accommodate a tiny bit more size and weight in Apple’s portable products.
That’s still a step backwards, right? But there are several countervailing forces, one of which is rapidly increasing in importance. The first is the fact that, as noted earlier, removable batteries are now a blue ocean. Apple would be alone among its biggest competitors if it made a wholesale change (back) to removable batteries in any of its product lines.
Second, people still crave the advantages of removable batteries that were left behind: increasing battery life by swapping batteries instead of using a cumbersome external battery pack, inexpensively and conveniently extending the life of a product by replacing a worn-out battery with a new one—without paying for someone else to perform delicate surgery on the device.
Finally, related to that last point, worn-out batteries are an extremely common reason that old tech products are traded in, recycled, or replaced. Removable batteries are an easy way to extend the useful life of a product. This leads to less e-waste, which is perfectly aligned with Apple’s environmental goals as 2030 approaches.
Of course, longer product lifetimes mean fewer product sales per unit time, which seems to run counter to Apple’s financial goals. But this is a problem that can be solved using one of Apple’s favorite financial tools: higher product margins. If Apple can actually make products that have a longer useful life, it can charge more money for the extra value they provide.
It’s easy to think of product ideas that run counter to accepted wisdom; it’s harder to think of the right one. Sometimes a blue ocean is free from sharks simply because there are no fish there. But I think this idea has merit. I am not making a prediction, but I am making a suggestion.
I know some of you remain unconvinced. How can a removable battery be easy to swap and yet also be sealed against the elements? Won’t removable batteries ruin the appearance of Apple’s existing products by adding unsightly cut lines? Won’t they become unacceptably large and heavy? How can structural integrity be maintained with a giant hole cut out of the product frame? What about the risk of fire due to faulty battery connections or battery packs coming in contact with something metal in someone’s pocket? The list of problems goes on and on.
Innovation is never easy, but since when has Apple shied away from a challenge? As the industry leader in consumer-electronics design and manufacturing, Apple is best positioned to overcome the obstacles and reap the benefits of removable batteries. There’s no question it will be difficult, but if done well, it will undoubtedly be a hit. And as the company that led the transition away from removable batteries, it’s only fitting1 for Apple to be the one to bring them back.
2023-08-19 00:44:19
“The Plumber Problem” is a phrase I coined to describe the experience of watching a movie that touches on some subject area that you know way more about than the average person, and then some inaccuracy in what’s depicted distracts you and takes you out of the movie. (This can occur in any work of fiction, of course: movies, TV, books, etc.)
Here’s an example. A plumber is watching a movie with a scene where something having to do with pipes is integral to the plot. But it’s all wrong, and the plumber’s mind rebels. No one else in the audience is bothered. They’re all still wrapped up in the narrative. But the plumber has a problem.
I’m not sure how long ago I came up with this phrase. The earliest recorded occurrence I can find is from 2021, in episode #153 of Reconcilable Differences (at 47:02) where I explain it to my cohost, Merlin, so it obviously predates that.
The Plumber Problem is loosely related to the “Gell-Mann amnesia effect” which is “the phenomenon of experts believing news articles written on topics outside of their fields of expertise, yet acknowledging that articles written in the same publication within their fields of expertise are error-ridden and full of misunderstanding.”
Anyway, I was thinking about this today thanks to some people on Mastodon sending me examples of The Plumber Problem. Here are a few (lightly edited):
Simon Orrell: My first exposure to “The Plumber Problem” was sitting in a theatre with my dad in 1973 watching “Emperor of the North” and my dad leans over to whisper, “They didn’t make culvert pipe like that back in the ’30s. It was plate, not corrugated.”
Tim Allen: In Speed 2, a plot point involves a laden oil tanker about to collide explosively. My wife, native to a major oil port city, couldn’t follow the plot because she could tell the tanker was empty just by looking at it, so she didn’t understand why everyone was saying it would explode.
Dan Morgan: Interstellar’s farming scenes were just SO BAD. I’m not going to detail them here, but this retired farmer and agronomist found it hard to watch. I’m sure the physics were fine though. 😂
Someone also mentioned that “The Plumber Problem” is not an easy phrase to look up online, so here’s hoping this post remedies that situation.
Here’s one more bonus post that I enjoyed:
magic: In Star Wars, Luke turns off his targeting computer to use the Force for his attack run on the Death Star. I’ve flown from one side of this galaxy to the other. I’ve seen a lot of strange stuff, but I’ve never seen anything to make me believe there’s one all-powerful Force controlling everything. There’s no mystical energy field that controls my destiny.