2025-03-20 21:57:43
Re-reading some of the quotes curated by Michael Tsai in the already-discussed Rotten commentary round-up, I noticed this bit by Om Malik, which had escaped my attention for some reason:
I have my own explanation, something my readers are familiar with, and it is the most obvious one. Just as Google is trapped in the 10-blue-link prison, which prevents it from doing something radical, Apple has its own golden handcuffs. It’s a company weighed down by its market capitalization and what the stock market expects from it.
They lack the moral authority of Steve Jobs to defy the markets, streamline their product lineup, and focus the company. Instead, they do what a complex business often does: they do more. Could they have done a better job with iPadOS? Should Vision Pro receive more attention?
The answer to all those is yes. Apple has become a complex entity that can’t seem to ever have enough resources to provide the real Apple experience. What you get is “good enough.” And most of the time, I think it is enough – because what others have on the market is worse. They know how to build great hardware; it’s the software where they falter.
I agree with this almost completely, save for the part I have emphasised in bold: the claim that what others have on the market is worse.
From a hardware standpoint, the gap between Apple and the competition isn’t as wide as it was when the first generation of Apple Silicon chips debuted in 2020. Intel chips have got better, and AMD especially has upped its game. The AMD Ryzen AI MAX+ 395 (codename: Strix Halo) delivers impressive performance, as shown in this video by Dave2D reviewing the Asus Flow Z13 — which is, in turn, an impressive gaming 2‑in‑1 ’Surface-like’ device.
Design-wise, there are brands like Lenovo or Asus which, on the one hand, offer home and business laptops with austere, iterative designs, but, on the other, get more creative in other lineups (Asus with their gaming laptops, Lenovo with their convertibles). Then there are brands like Framework: their laptops may not win industrial design awards, but what they’re doing on the modularity and repairability fronts is perhaps unmatched at the moment. And they, too, have recently upped their game with their new offerings.
Apple’s hardware design is still remarkable, but increasingly more on the inside than the outside of their machines. MacBooks haven’t changed much since the unibody chassis was introduced in 2008, and what seems to characterise them today is a notch on the top centre of their displays, which is among the most idiotic hardware design choices I’ve seen in more than 30 years.
Software-wise, well, I have a bit of a bias, having used Macs since 1989. I clearly know the Mac (and iOS/iPadOS) ecosystem and software selection far better than any other platform. But in recent years I’ve been familiarising myself with Linux (mainly Ubuntu and CrunchBang++) and have been using Windows 10 and 11 on my ThinkPads, my Surface Pro, and my Lenovo Legion 7i gaming laptop. Overall it’s been a pleasant experience, with hiccups here and there mostly deriving from lack of habit or familiarity. I think Windows 10 and Windows 11 have been good examples of UI improvement on Microsoft’s part. And as far as reliability goes, I’ve been using my Legion 7i for more than a year now, and I’ve had zero issues with Windows 11. No weird crashes, no instability, no misbehaving apps, nothing.
I’ve also recently discovered the probably-still-niche world of e‑ink tablets, devices that are mostly used for note-taking, sketching, and reading. These devices may not compete with iPads in terms of versatility, but the fact that they don’t want to be jacks-of-all-trades is also their core strength, in my opinion. They are more focused devices that don’t suffer from an identity crisis like the iPad, and in my experience (I own one) they actually offer a better user experience when it comes to handwriting ‘feel’; an e‑ink display also looks more natural to me when writing and reading, especially during long sessions. Other advantages are, of course, much reduced eye strain and exceptional battery life.
I know that, from a financial standpoint, suggesting you actually, extensively try different platforms may be unfeasible. I’ve done it largely by acquiring second-hand devices and computers, and even by receiving generous gifts from readers of this blog who wanted to get rid of stuff without increasing e‑waste. Of course we all ultimately have our preferences, but I think it’s healthier to have preferences without prejudices. I was guilty of this myself until circa 2016. Before that, I was using Apple devices and software 95% of the time; the rest was superficial knowledge, mostly gathered through hearsay and sporadic usage of non-Apple platforms.
And yes, coincidentally Apple also had the best UI during Jobs’s tenure, along with some striking industrial design. But it’s important to understand that the ‘good enough’ Apple offers today isn’t necessarily better than what the competition offers — everyone is ‘good enough’ in tech now, though for some this ‘good enough’ is a step up from their previous mediocrity. For Apple, it’s a step down, especially in the UI and UX departments, where they inarguably used to excel.
2025-03-16 04:27:11
As usual, Michael Tsai assembles a remarkable roundup of opinions and reactions after John Gruber’s recent piece Something is rotten in the State of Cupertino, where he finally criticises Apple for essentially over-over-promising and under-under-underdelivering on Apple Intelligence, especially regarding the announced improvements to Siri.
What I find unintentionally funny in this specific wave of criticism is that for some of these people this has been the straw that broke the camel’s back when it comes to Apple breaking trust and bullshitting their customers and user base.
Siri is the epitome of overpromising and underdelivering in Apple’s history.
The bullshitting isn’t recent either, though perhaps the smell is more pungent now. I love how, in my circles, whenever I ask for an example of vintage bullshitting on Apple’s part, the most remembered episode is the You’re holding it wrong ‘Antennagate’ affair at the time of the iPhone 4, in 2010. While Apple’s (Jobs’s) reaction was certainly defensive and in full damage-control mode, the whole iPhone 4 signal reception issue was grossly exaggerated. From the Wikipedia entry: “[…] Jobs cited figures from AppleCare which showed that only 0.55 percent of all iPhone 4 users have complained to the company about the issue, while the number of phones returned to Apple was 1.7 percent – 4.3 percentage points less than the number of iPhone 3GS models that were returned in the first month of the phone’s launch.” At the time I easily believed those statistics, based on personal experience and second-hand, third-hand, and fourth-hand accounts from other iPhone 4 users. I’ll add that the iPhone 4 was my daily driver from late 2010 to early 2015(!) and in that time frame I never had reception issues or dropped calls.
So no, I wouldn’t really put Antennagate on the bullshitting list. The subtle bullshitting can be found in all instances of design/manufacturing defects or issues where Apple blatantly downplayed the problem or the number of devices (and thus people) affected by it.
Take a look at the list of issues related to the iPhone 6, summarised in this section of its Wikipedia entry. When describing the touchscreen failure (nicknamed ‘touch disease’), the entry reads:
Initially, Apple did not officially acknowledge this issue. The issue was widely discussed on Apple’s support forum—where posts discussing the issue have been subject to censorship. The touchscreen can be repaired via microsoldering: Apple Stores are not equipped with the tools needed to perform the logic board repair, which had led to affected users sending their devices to unofficial, third-party repair services. An Apple Store employee interviewed by Apple Insider reported that, six months after they first started noticing the problem, Apple had issued guidance instructing them to tell affected users this was a hardware issue that could not be repaired and that their iPhone had to be replaced. However, some in-stock units have also been afflicted with this issue out of the box, leading to an employee stating they were “tired of pulling service stock out of the box, and seeing the exact same problem the customer has on the replacement”.
The iPhone 7’s ‘loop disease’ is also worth mentioning. Here, the related bit in the Wikipedia entry is rather terse, but I remember that at the time it was mentioned frequently, and Apple acknowledged the issue only internally. Note how long that internal memo lasted (emphasis mine):
Some iPhone 7 devices suffer from a problem that affects audio in the device. Users reported a grayed-out speaker button during calls, grayed-out voice memo icon, and occasional freezing of the device. A few users also complained that lightning EarPods failed to work with the device and that the Wi-Fi button would be grayed out after restarting the iPhone. On May 4, 2018, Apple acknowledged the issue through an internal memo. If an affected iPhone 7 was no longer covered by warranty, Apple said its service providers could request an exception for this particular issue. The exemptions abruptly ended in July 2018 when Apple deleted the internal document. Many customers have complained Apple has charged customers around $350 to fix the issue. Many customers complain the issue first appeared after a software update.
On the Mac front, the obvious reminder is the whole butterfly keyboard fiasco, plaguing different generations of MacBooks from 2015 to 2019. It took Apple an insufferably long time to acknowledge the issue and take remedial action, and despite the bad press, customer complaints, and lawsuits, the company always tried to downplay the issue, saying it only affected a relatively small percentage of users. I have a sizeable archive of email messages from friends, acquaintances, and readers of my blog telling me their horror stories and dreadful Apple Store experiences, like undergoing 3 or 4 keyboard replacements where only the first (or, in rare cases, the first two) came at no cost for the customer. This happened before the extended keyboard service program Apple finally launched in 2018. Some readers wrote me at the time telling me that their MacBook with butterfly keyboard had cost them almost as much in keyboard replacements as the laptop itself, not to mention the forced downtime during repairs.
And many of those who vented their frustration to me shared the same sentiment — they felt betrayed, cheated, and sometimes even gaslit when complaining at Apple Stores. Reader Kelly G. — after bringing her 2015 12-inch Retina MacBook to an Apple Store for the third time with unresponsive keys — told me that they made her feel as if the cause of the issue was her mishandling the MacBook rather than a design flaw.
And all these people ultimately made the same remark in their messages: if Apple had handled the whole thing with honesty, candour, and directness from the start, they in turn would have been more understanding and Apple’s reputation wouldn’t have taken the hit it took. Dieter Bohn wrote at The Verge in 2020:
More than anything else, though, the whole butterfly keyboard saga has been a huge reputation hit for Apple.
For those who thought Apple was sacrificing functionality for thinness across its entire product lineup, the butterfly keyboard looked like confirmation. For those who felt Apple was intentionally making its devices harder to repair as a way to further lock them down and also cut out third-party repair shops, it was another data point. For those who felt Apple had stopped paying attention to the Mac, here was a prime example of a problem allowed to languish for years. For those who felt Apple is still trying to create a “reality distortion field” where everything it makes is great but the truth is much more mundane, well… you get the picture.
The butterfly keyboard hurt Apple’s reputation precisely because the outlines of its problems and Apple’s response to them lined up with some of the biggest complaints people have about the company.
Earlier, I called Apple’s attempts to save the butterfly keyboard obstinate, but a less charitable way of putting it is simply to call it hubris. For some, it called Apple’s judgment into question. How could the company fail to see — or refuse to admit — that it was shipping a bad product?
In more recent times there was the display issue affecting 24-inch M1 iMacs. I already talked about it in this post from last September; essentially, customers who bought this iMac saw, about a year and a half after purchase, the appearance of persistent horizontal lines at the bottom of the display, a problem caused by a degrading display cable, for which the only solution (due to how the iMac is designed) is to replace the whole LCD. Reporting for MacRumors in October 2024, Joe Rossignol wrote: Some customers who contacted Apple about the issue said the company offered them an exemption, resulting in their iMac being repaired for free, but other customers said they had to pay for service. My brother-in-law was affected by the issue, and the service cost him around €705. In my post on the matter, I concluded:
Now, back to the iMac display issue: as the technician contacted by “Jotap62” explains, if the iMac’s display flex cable “has to sustain a very high voltage (around 50V) to power the LCD (this despite the iMac’s power supply being 15.9V)”, I find it hard to believe that the hardware gurus at Apple didn’t know that. I’m not an engineer, nor a hardware guru, but what I suspect is that those responsible for designing and assembling the innards of the 24-inch M‑series iMac were given the daunting task of fitting everything into that super-thin chassis, and something had to give. And this kind of flex cable was a compromise, the ‘okay-enough’, ‘it’ll-last-enough’ solution.
What infuriates me is that this is the kind of problem the manufacturer certainly knows about, but they also know it won’t trigger immediately. Customers are then faced with a costly out-of-warranty replacement, when the right thing to do would be to treat this as a known manufacturing issue and offer a free replacement. (Especially considering that — and this is the other infuriating bit — even after a replacement the issue is likely to recur.) Maybe it’s also a case of components that are below Apple’s standards or requirements, but the outcome is the same — customers shouldn’t pay for these mistakes.
Well, yes, but the problem — the bullshitting — has made its way into the company’s attitude since Apple’s main bullshit filter passed away in October 2011. Ever since Cook became CEO and scrambled the org chart of executives, the impression I’ve had is that a lot of other, less apparent things got scrambled inside the company. And the software side suffered as a consequence.
I have given Cook the benefit of the doubt a lot of times, and I’m not putting the blame entirely on him, but for me, that something which is now rotten in the State of Cupertino has been rotting for years under his tenure.
The trajectory of user interface design, system software quality, and first-party software production has been one of steady decline since… let’s say 2014, with the advent of Mac OS X 10.10 Yosemite. Yosemite was a visual departure from its predecessors, the Mac OS equivalent of iOS 7 on the iPhone, with many similarly controversial UI decisions — like the loss of depth and contrast in the interface, and the loss of legibility in redesigned UI elements such as buttons and text fields. Most baffling for me was the change of system font from Lucida Grande to Neue Helvetica, something that was thankfully swiftly rectified in the following release, OS X 10.11 El Capitan.
Just like the iOS releases that came after iOS 7 attempted to correct or attenuate the most radical UI changes and decisions, the same happened on Mac OS from El Capitan to Catalina. Then Big Sur was another ‘iOS 7‑like’ reset, and Monterey to Sequoia the following corrective iterations.
But this is just the visuals, the most superficial aspect. The substance — UI and software quality — has got progressively more brittle. Solid UI foundations have been weakened by constantly ‘fixing’ what was not broken, undoing consistent and well-thought-out UI decisions often simply for the sake of ‘giving the place a splash of fresh paint’. Important UI elements like buttons and scroll bars have been flattened and ‘disappeared’ in the name of a questionable ultra-minimalist approach coming from a design team seemingly confusing industrial design with haute couture, or taking inspiration from Dieter Rams simply by looking at his designs without reading about why they look the way they do.
Over the years Mac OS has become more locked-down, more dumbed-down, buggier, and trickier to troubleshoot when things don’t go as intended. Aspects that once ‘just worked’, such as wireless connections, Disk Utility, and Time Machine, now work mediocrely at best.
Three years ago, in Raw power alone is not enough, I wrote:
Apple’s first-party applications included with Mac OS are mediocre at best. Their pro apps appear to be more maintained than developed with the aim of advancement, with the possible exception of Final Cut Pro (video professionals, feel free to chime in). Apps that were previously good-quality, powerful, and versatile have been neutered and have become ‘just okay’ or ‘good enough’. The Utilities folder in Mac OS has been slowly but surely depopulated over time. iOS apps with an ingenious premise, like Music Memos, are being left behind as flashes in the pan. The consensus with iTunes was that Apple should have split it into different apps so that these could be better at handling specific tasks than the old monolithic media manager. Apple eventually did split iTunes into different apps, but forgot the second part of the assignment. The result is that I still go back to a Mac with iTunes to handle my media, and I’m not the only one.
Aperture overall was a better application than Adobe Lightroom when the two apps coexisted. Apple could have kept improving Aperture and kept making it better than Lightroom. Instead they gave up. We now have Photos as sole ‘sophisticated’ Apple photo tool. Which is neither fish (iPhoto) nor flesh (Aperture).
[…] Apple’s chip and hardware advancements have inspired the competition (Intel) to do better, and that’s a great thing. On the software side, I’ve seen very little from Apple to be considered remotely inspirational.
In that piece I also talk about iWeb and iBooks Author, two applications with great potential, which ended up basically thrown in the rubbish.
iMovie, possibly the oldest prosumer first-party Mac app (it first appeared bundled with the iMac G3 DV in 1999!), kept getting better in subsequent iterations until iMovie ’11. Everything after that was just maintenance and stagnation.
The iWork suite is another example of a series of apps that started out with the best intentions, but from a UI and functionality standpoint the earlier versions — from iWork ’05 to iWork ’09 — were the better ones, with iWork ’09 being perhaps the most mature and versatile, in my opinion. After the 2013 overhaul, things got worse. Quoting Wikipedia:
On October 22, 2013, Apple announced an overhaul of the iWork software for both the Mac and iOS. Both suites were made available via the respective App Stores. […]
The new OS X versions have been criticized for losing features such as multiple selection, linked text boxes, bookmarks, 2‑up page views, mail merge, searchable comments, ability to read/export RTF files, default zoom and page count, integration with AppleScript. Apple has provided a road-map for feature re-introduction, stating that it hopes to reintroduce some missing features within the next six months.
Some features were reintroduced later, continues the entry, but the old Apple Support document referenced by Wikipedia is (surprise surprise) no longer available.
I haven’t kept much track of these features coming and going, as I’ve been using Keynote, Pages, and Numbers very sparingly over the years, and for tasks with very limited scope. But ever since that 2013 overhaul, they have all felt pretty much the same, version after version. This is not the ‘good’ kind of consistency, just stagnation.
Yes, but it is important to understand that things don’t typically happen in a vacuum. Apple’s stance towards its hardware blunders and its negligence towards its own software are underlying currents that have been — sometimes subtly, sometimes not so subtly — shaping the trajectory the company is currently on. And I’m just an outside observer, with almost-zero knowledge of what happens inside Apple Park, but I can’t shake the feeling that the point of origin of this trajectory was the post-Jobs internal restructuring.
Under Jobs, Apple was rather selective about the markets it wanted to participate in. Under Cook, there’s a constant urge to be present everywhere, whether with a product or a service. The consequence: many more internal departments popping up, more managers micromanaging, more secrecy and fear of leaks probably leading to worse interdepartmental communication, more resource fragmentation. And we see design choices that seem more like the result of too many people having a say, or product directions dictated by teams not directly involved in the product, and so forth.
Of the excerpts reported by Tsai in his post, those that ring the truest to me are by Jesper, Tim Bray, and Pierre Igot.
Jesper absolutely nails it in his piece:
My thought after leaving this to fester a bit is that Apple today is focused on being Apple, and some might say on staying Apple. Apple before was focused on building products. […]
The things John Gruber noted, pretty much to a T, would not have been issues if Apple was all about just building the product. Most of the hot water that Apple is in, no matter what the reason, it wouldn’t be in if it was not first focused on being Apple.
Which is a charitable way of saying what I would have said — that today Apple is more focused on style and brand than on substance. And Bray’s remark, in all its bluntness, is the correct answer to the question I and many others have routinely raised: Why do Apple’s priorities seem so fucked up? — the most recent example being Apple Intelligence and the rumoured interface overhaul coming for iOS 19 and Mac OS 16.
And Pierre Igot’s observation, not mincing words either: Something IS indeed rotten in the State of Cupertino, but that rot is not new. To me, it feels like the Apple Intelligence fiasco is the accumulation of Apple’s software failures over the past 10–15 years finally coming to a head. They are just not very good at making software anymore.
Let’s remember these words at the next WWDC, when Federighi will tell us all about Mac OS’s ‘new look’ and other superficial retouches, while we’re aching for better-quality software, for fixes to what is indeed broken, and for a more usable and useful operating system.
2025-03-09 23:01:10
Some people have written to me in the past few days for different reasons, but in their communications there was also curiosity about my switch to Android and how it was going. Some asked out of genuine interest, others had a provocative tone, like, Are you missing your iPhone already?
Three months have passed since my last follow-up, so I thought it was time for an update; possibly the last on the topic, as I think that at this point there isn’t much to add. So, the short answer is: my switch to Android is going well, much better than I anticipated, in the sense that the adjustment phase happened faster than I thought it would.
However, I can’t package my experience in a single, monolithic block of advice and tell you that if you plan to switch from iOS to Android, things will work out as smoothly as they did in my case.
I generally take a couple of devices with me when I’m out and about. For a while, they were my primary iPhone and an Android phone. As I explained in a previous post, my switch has been a literal switch: the pair is now my primary Android phone (Nothing Phone 2a) and my old iPhone. The purposes have changed slightly, though. While before the secondary Android device served as a way to familiarise myself with the platform, now my secondary iPhone is basically a camera, used for taking pictures with a few photo apps I don’t feel like abandoning and for which there isn’t an Android counterpart. But to answer that question above — Are you missing your iPhone already? — well, no, I’m not.
I don’t miss it because when I take it with me as a creative tool for shooting, it’s there. And when I don’t take it with me, it’s because I don’t need it. And there have been times when I meant to take it with me but forgot to — and it wasn’t a big deal.
(Brief aside: As I’m explaining this, I realise I’m selling my Nothing Phone 2a short. It’s a great device that does everything I need. I’ve set it up and customised it the way I want, I’ve found all the apps I used the most on the iPhone or decent alternatives to them, and I haven’t had any particular hiccup in my interactions or ‘flow’ when using this phone.)
Another reason for my relative ease in switching platforms is that in all my years with the iPhone, it never became a device I heavily relied upon for my digital life. I’ve always preferred the Mac for working, writing, reading and entertainment, and for the past 13 years I’ve always added an iPad to the mix. In such a context, the role of an iPhone becomes less central, de-emphasised. It’s the device you take with you when you go out, to use as a phone, as a written communication device (Messages, Telegram, Signal, email), as a quick way to check social media and read the occasional article, as a tool to look up information (Maps, Wikipedia, the occasional Web or dictionary search), and as an instant camera. Apart from certain fun photo apps, there’s never been anything ‘specialised’ in my iPhone usage. And therefore, nothing absolutely irreplaceable.
Years ago I would have added that the iPhone’s user experience was the irreplaceable variable, but that’s not strictly true anymore. The clunkiness and awkwardness of older Android versions is now a thing of the past. When I first took a good look at Android in 2014, I would never have considered switching. When I bought my first Android phone in 2019 (essentially for work-related reasons, as I was localising Android apps at the time and needed direct experience with the UI), the situation had already significantly improved. When I shared my impressions of that Xiaomi Mi A2 in November 2019, in my final observations I wrote:
Five years ago, doing a complete platform switch and going from iOS to Android and vice-versa, implied a certain amount of friction that felt less problematic the more tech-savvy you were. The two experiences felt really different and, as far as I’m concerned, Android felt second-class. Even on more powerful handsets, basic stuff like scrolling and animations could end up being jerky and stutter with annoying frequency. The system looked more utilitarian than well-designed to provide an effortless, pleasant user experience.
Today, from my first-hand experience, I can say that this once very noticeable gap is essentially gone. Android has improved on all fronts, while iOS has for the most part rested on its laurels (and in certain areas has actually got buggier than it used to be). The overall experience is similar between the two platforms. An increasing number of operations, interactions, and UI behaviours have become barely distinguishable from one another (share sheets, for example, look and work in a similar way). For three months I’ve been carrying both my iPhone 8 and the Mi A2 with me, keeping the iPhone as primary device, but for two weeks I purposefully inverted the roles, and I noticed that — save for a few favourite iOS apps — I could have left the iPhone at home.
And this, in 2025, is still true.
A primary concern in my embracing Android as a primary platform was the friction of not having a tool as essential as AirDrop for quickly exchanging files between my Macs and my Nothing Phone 2a. But then I found LocalSend. At first glance, this service just seems too good to be true: free? open source!? cross-platform?!? But then you try it out and you immediately realise that yes, it’s that good, and the experience is that polished and seamless. There is just one additional, fractional step compared with AirDrop. When you share a file via AirDrop, you initiate the transfer on the first Apple device, and the destination Apple device automatically receives the file or displays a pop-up asking for confirmation.
When you share a file with LocalSend, you have to open the app on both devices first, then you send the file from the source device, and in the LocalSend window on the destination device you’ll have to accept the file. But that’s it. Transfer times are comparable with AirDrop’s. (Update, March 11: This is the default behaviour, but there is an option that lets you auto-accept the incoming file(s) on the destination device, streamlining the process).
And there’s an added benefit: being cross-platform means that I can now share files just as seamlessly across a multitude of different devices: for example, from my Windows 11 gaming laptop to my Mac mini, from my Nothing Phone 2a to my iPad, from my iMac to an old ThinkPad running Linux, from my iPhone SE 3 to my Surface Pro convertible, and so forth. It almost feels like having Apple’s Continuity everywhere.
If one of the major aspects preventing you from leaving iOS behind is AirDrop, I can’t recommend LocalSend enough.
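(For the technically curious: the basic mechanism behind tools like LocalSend is easy to picture. Both devices run a small service on the local network; the sender pushes the file over HTTP, and the receiver explicitly accepts or declines it. Below is a minimal sketch of that idea in Python. To be clear, this is not LocalSend’s actual protocol or API; the port number and the header name are invented for the example.)

```python
# receiver.py: run this on the destination device.
# Toy code: no sanitisation of the incoming file name, no encryption.
import http.server

class ReceiveHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        name = self.headers.get("X-Filename", "incoming.bin")
        size = int(self.headers.get("Content-Length", "0"))
        # Mirroring LocalSend's default behaviour: ask before accepting.
        if input(f"Accept '{name}' ({size} bytes)? [y/N] ").strip().lower() != "y":
            self.send_response(403)  # decline the transfer
            self.end_headers()
            return
        with open(name, "wb") as f:
            f.write(self.rfile.read(size))
        self.send_response(200)  # transfer accepted and saved
        self.end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("0.0.0.0", 53317), ReceiveHandler).serve_forever()
```

```python
# sender.py: run this on the source device.
# Usage: python sender.py photo.jpg 192.168.1.20
import sys
import urllib.error
import urllib.request

path, host = sys.argv[1], sys.argv[2]
with open(path, "rb") as f:
    data = f.read()

req = urllib.request.Request(
    f"http://{host}:53317/",  # same made-up port the receiver listens on
    data=data,
    headers={"X-Filename": path.rsplit("/", 1)[-1]},
)
try:
    urllib.request.urlopen(req)
    print("Transfer accepted.")
except urllib.error.HTTPError:
    print("Transfer declined on the other end.")
```

The real app layers automatic device discovery and a polished UI on top of this kind of exchange, but the accept step you perform in LocalSend’s window is conceptually the receiver’s confirmation above.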
There are two other Android devices in my personal ecosystem I don’t talk much about. One is the Microsoft Surface Duo I purchased about a year ago. I still mean to write a proper article about it here, but I’ll say that despite it being considered ‘yet another Microsoft blunder’ by the tech cool kids’ circle, its digital book concept is the only way that a foldable device has made sense to me so far. Reading books in the Kindle app on the Duo is very cool and very practical because it’s like holding a physical book in your hands (or rather a slim metallic notebook, but you get the idea). The ability to display and use two apps at a time, one on the left screen, one on the right, is the most organic form of multitasking for me, and I’m happy to see that Microsoft managed to realise some of what was known as the Courier project back in 2009.
I use the Surface Duo primarily as a digital Moleskine. It works well with styluses like the Surface Pen and similar third-party products (like my Renaisser Raphael 530 active stylus), and I can use apps like Bamboo Paper and Sketchbook to draw sketches, and Microsoft OneNote to take notes, even handwritten ones, which are quicker to jot down than to type.
The second device, and the most recent acquisition, is an Onyx BOOX Go 10.3, an e‑ink, Android-powered tablet which takes this experience of reading, drawing, and taking handwritten notes to a whole other level. First, because it’s e‑ink, which means an improved reading experience right away. Second, because it’s a 10.3‑inch tablet, which means an improved reading/writing/drawing experience as well. Third, because its main interface is designed around note-taking, which means having a lot of specialised tools to write, draw, select and convert text, organise notes in different folders, sync everything to the cloud, etc. Fourth, because writing and drawing with a stylus feels very natural overall. It’s not exactly like using a pencil or a pen on a paper notebook, but close enough that you tend to forget it’s a digital medium. All of this, again, in a comfortable, practical ‘digital notebook’ form factor and user experience that for me — as a writer who has used pen and paper all his life — feels better than using an iPad for the same tasks. It’s not at all like touching or scribbling on a glass surface, the way it is on an iPad.
So while I still, for the most part, take my Nothing Phone 2a and my iPhone SE 3 when I’m on the go, there are days when I leave the iPhone at home and take the Nothing Phone and the Surface Duo or the BOOX Go 10.3 as ancillary devices. It’s been a few years since I realised it’s unwise to stick to a single ecosystem or walled garden, and that it’s much more stimulating to learn what other platforms have to offer, experiment with them, take what’s useful to me, and create a sort of personal ecosystem or digital environment. It’s not a minimalistic endeavour, but it teaches you to better adapt to changes, and keeps you cognitively nimble. On a pragmatic level, it also gives you an exit strategy when the big tech company you’ve grown so dependent on ends up letting you down.
2025-02-21 20:39:36
I was going to go with reblanded in the title as a provocative wordplay, but then I was reminded of that special portion of my audience who chronically lack a sense of humour and send me messages and emails like, There’s a typo in your title, etc.
Oh well. What an intro, eh?
In 2019, Samsung launched the Galaxy S10 line; there were two flagship models, the S10 and S10+, plus a bigger premium S10 5G, and later, in 2020, Samsung introduced the S10 Lite, a midrange version of the S10. But this line also featured another model, perhaps the most interesting — the S10e. It wasn’t a ‘lite’ version of the S10, just a more compact variant which didn’t really skimp on features, apart from having a slightly lesser-quality display, a smaller battery, and no telephoto camera. It had personality; it was the S10 for those who wanted a smaller phone. The title of The Verge’s YouTube review of the S10e sums it up pretty nicely — “Smaller, cheaper, better”. It is perhaps the last good small smartphone with a headphone jack.
I don’t know whether that ‘e’ meant something for Samsung; it’s the only occurrence of such a suffix in the Galaxy S line. I don’t know why, but this kind of suffix always suggests ‘economy’ to me, in the air-travel sense. But while the Samsung S10e did cost less than the other S10 flagships, it wasn’t a ‘cheap’ phone from a hardware quality standpoint.
The iPhone 16e isn’t either.
The Samsung S10e’s essence was probably best encapsulated by Engadget’s title for their video review: Smaller, but not lesser.
What is the iPhone 16e? To me, it’s confusion. I would add ‘aimlessness’, but then I’d have to read several rebukes in my email messages, from people who would tell me that Apple has a plan, a strategy behind it, like the company always has in everything they do. And yes, of course there is strategy here somewhere. But this latest iPhone, this new ‘addition to the family’ — a family made up of many models with too little differentiation — seems rather confusing to me.
And to Luke Miani, who, in his first-impressions video on the iPhone 16e, visibly shares the same kind of puzzlement. This is what he concludes, in a breathless tour de force of exposition:
You or I [technology enthusiasts] might be able to sit here and go, “iPhone 15 has a mute switch, 16e has an action button, and 16 has an action button and a camera control. The iPhone 15 is on the A16 chip, the iPhone 16e on the A18 chip with one less GPU core than the iPhone 16 on the also A18 chip with an additional GPU core. iPhone 15 has a Dynamic Island, 16e doesn’t; the iPhone 16 does again, but the iPhone 16e without the Dynamic Island has the better battery life of all three phones, better than the 15 and the 16.”
The average consumer is going to be so freaking confused by this. Why is the cheapest phone better and worse than the newer and older more expensive phones? It’s just too much; it’s too much, dude… I don’t really know how to describe it, other than Too Much. The benefit of the iPhone SE was that it was cheap. You didn’t buy it because of features; you bought it because you wanted an iPhone that would get software updates for years to come at the lowest possible price, and the iPhone 16e is not that. It is yet another midrange iPhone with a confusing suffix and a list of features that doesn’t make sense to most people.
It’s not a back-to-basics smartphone that you buy when you don’t know what else to get — it’s just another confusing addition to the middle ground, the $500–800 smartphone range, and frankly I think that this was a bad move. I don’t know, I really want to get my hands on this phone ’cause I think [that] as a phone it will probably be very good, but as a part of Apple’s iPhone lineup, I think it just adds confusion.
Miani speaks of the now-defunct iPhone SE line in pragmatic terms, as an iPhone model targeted at pragmatic, budget-conscious customers. But what I liked about the SE line was that, conceptually, it was a standalone line with its own release schedule and its own peculiarities. Whether you liked it or not, it maintained a sort of quaint distinctiveness through its first three generations.
In my October 2024 article on the iPhone SE trajectory, I mused:
Now, imagine a hypothetical fourth-generation iPhone with an A18 Bionic chip (or perhaps a specially-designed A17 Bionic, sort of a nerfed-A18?), the single-camera setup and technology of the iPhone XR, and of course the external design of the iPhone XR, featuring a 6.1‑inch screen (maybe with a slightly updated display technology), Face ID, etc. Let’s say it would replace both the third-generation iPhone SE and the iPhone 14 in Apple’s current offering. Its trade-offs battle would be against the regular iPhone 15. And it would be a tough one. Yes, it would have a better chip, but given how recent performance gains in iPhones have become basically imperceptible in everyday use, would such an iPhone SE 4 be a better proposition over the 15 when all it had would be same or better CPU speed and a lower price? The display would have the same size, the display technology would be worse, it would feature a notch while the iPhone 15 has a dynamic island, it would feature a decidedly worse camera setup… Sure, $429 would be a bargain compared to the $699 of the iPhone 15. But its form factor is too similar and, apart from the CPU, all the rest would be the same stuff but worse in all respects. Unless Apple is planning to do some unexpected changes, like offering a single-camera setup but with a better camera than the XR’s 12-megapixel affair, to make the next iPhone SE more appealing, I don’t see anything particularly special or worth considering in it. […]
But you know what I think would make more sense? I know I come from a biased position, but to me it would make more sense if the design and form factor of the next iPhone SE would be those of the iPhone 12/13 mini. Maybe the 13 mini, since it has a smaller notch on the front and a better battery performance. […]
Overall, it would still feel like a ‘Special Edition’ phone: compared to the mainstream iPhone lineup, it would be different/special enough, appealing enough, modern enough, all the while maintaining that classic, truly iconic design that harks back to the lines of the iPhone 4 and 5. Apple could even sell it at $499 instead of $429.
What Apple managed to assemble is a sandwich of uninterestingness, and they raised its final price to $599. They discontinued a line of iPhone models that was ‘midrange with personality’ and released something that isn’t distinctive in any way, whose price positioning makes it difficult to recommend, and whose name ties it to a specific iPhone release — so you’re left wondering, Is this 16e a one-off thing, like the Samsung S10e was six years ago, or are we to expect an iPhone 17e, 18e, and so on?
I’m also left wondering, Who is it for? What was the reasoning behind this iPhone? But if there’s a device that best encapsulates the overall state of Apple today, it is, without doubt, this iPhone 16e.
2025-02-19 04:27:16
Welcome to the twelfth instalment of my annual overview of the most interesting discoveries I made during the previous year. Traditionally, the structure of this kind of post includes different categories of resources: blogs, YouTube channels, cool stuff on the Web, and so forth. That structure isn’t going to change, but if my previous instalment was perhaps unusually brief, I’m afraid the current one is going to be even briefer. There are a few reasons why:
One. For more than half of 2024, my attention was primarily focused on personal matters. Having to find a new place to live, the process of purchasing that place, the move, and finally settling into the new apartment was a time and energy sink for both my wife and me. My time online was mostly spent working and engaging in some light social media activity, and not much else.
Two. What I wrote last year about 2023 didn’t change much in 2024: I’ve often mentioned this low tide brought up by a general feeling of ‘tech fatigue’; as a consequence, [during 2023] my interest in adding technology-related sources to my reads was rather low. I even neglected to stay up-to-date with the people and blogs I was already following. That feeling of tech fatigue started receding a bit towards the end of 2024, when I received a Nothing Phone 2a as a birthday gift — an event that gave me the final push to switch to Android as my primary phone platform, leaving my iPhone SE 3 as a secondary device.
Three. Another thing I wrote in the previous instalment of this series was this:
This exhaustion stage, this tech burnout, is necessary as well. I’m more and more convinced that more people ought to reach this stage, to then try to approach tech in a different — hopefully healthier — way. Because the next stage is to focus on whatever good remains out there after the squeeze. That’s why I’m trying to approach 2024 with the goal of finding out who and what’s really worth following, who and what is truly distinctive, who and what is ultimately worth my (and your) time. Mind you, it’s what I’ve always been trying to do when compiling these yearly overviews; the only little thing that has changed is that from now on I’ll try to be even more selective.
You know what happens when you get even more selective? Maybe you follow a link to a blog article, and you like the article, but then you explore the blog further and realise that that article — and perhaps a couple more — is the only highlight of the blog, and you start wondering, Is this website worth adding to my RSS feeds, or should I just share the link to that specific article and let others decide?
In most cases, I’ve ended up bookmarking & sharing articles instead of adding blogs to my reading list. But what if it turns out to be a mistake and I miss out on some good writers/bloggers? Well, if I bookmark something, chances are I’ll return to that article and website at a later date, and if I find enough stuff I like on my subsequent visits, I may decide to recommend the whole package. Also, if the author keeps writing good stuff, it’s very likely I’ll get other recommendations about them, so I don’t really miss out on anybody. And even if I do — let’s be real for a second — time is a finite resource; I’ll never be able to read or watch everything from everyone I cross paths with.
Another thing that happens when you get more selective is that you start looking harder and harder at the resources you’ve already discovered — all those RSS feeds, all those YouTube channels, etc. — and you reassess them with a fresh pair of eyes. This is why, during 2024, I’ve been subtracting rather than adding to my resources’ reservoir, so to speak. Interests change, people change (or don’t — and that, sometimes, can be a problem), the quality of a blog or YouTube creator’s output may become less consistent or patently decline… And so it’s time for some pruning and tidying up.
As for new discoveries, this time there are just two:
I’m not typically a fan of the newsletter format; I can’t exactly explain why. The fact that, once you subscribe, the newsletter comes to you instead of you going to it should make for a convenient, preferable dynamic. Instead, I often end up treating newsletters like advertising email, and ultimately ignore them or just skim the part that’s visible in my email client. Over the years I’ve subscribed to many newsletters on a whim — genuinely interesting, well-written ones — but I’ve ultimately unsubscribed from most of them due to lack of time and engagement.
The sole exception I made in 2024 was for Ed Zitron’s Where’s Your Ed At?, which I basically treat as a long-form blog. I receive the email updates, but I’m also subscribed to the feed. If technology and the tech industry are your main interests, you should already know who Ed Zitron is. But if you don’t, well, it’s best if I link to the newsletter’s About section. You’ll find everything you need to know there. I really, really recommend Ed’s newsletter. Each instalment is generally a long read, but well worth your time.
I started following Ed on pre-Musk Twitter years ago, and was reminded of his work again in recent times when I was looking for materials and information about ‘AI’. And I found out that Ed and I share basically the same (negative) views about it, only Ed has the know-how to talk about it with much more clarity and authority than I have on the subject. A lot of people have asked me to talk more often and at more length about ‘AI’, LLMs, the industry, and why I think it’s largely bullshit. My advice is to subscribe to Ed’s newsletter if that’s a subject of particular interest to you. You’ll find a lot of information, and you’ll know that Ed and I are on the same page.
Around September 2024 I looked at my YouTube subscriptions list and was horrified to realise that I was following 136 channels. Yeah, things had got rather out of hand, and so I started unsubscribing from a lot of channels I had added simply after discovering a single video or following a recommendation for a single video. Even though YouTube is a mature platform, I’m routinely baffled by how rudimentary its tools for organising content are. For instance, I’d love to have the ability to categorise my subscriptions and put them in separate folders, like one does with RSS feeds, so that I can more easily get to those creators whose content could be filed under ‘photography’ or ‘tech’ or ‘gaming’ or ‘lifestyle’ or ‘cooking’ or ‘architecture’, and so forth. Instead, all YouTube offers is an unsorted list in the left sidebar of the home page, vaguely organised by creator activity/frequency of uploads. It gets messy, fast.
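(A half-measure for the technically inclined: Google Takeout’s YouTube export includes a CSV of your subscriptions, which means you can at least maintain your own ‘folders’ outside YouTube. Here’s a minimal sketch of the idea in Python; note that the CSV column name and the channel-to-folder mapping below are assumptions to adjust against your actual export.)

```python
# categorise_subs.py: a DIY stand-in for the subscription folders YouTube lacks.
# Assumes a subscriptions CSV exported via Google Takeout with a "Channel Title"
# column; the exact column name may differ between export versions.
import csv
from collections import defaultdict

# Your own channel-to-folder mapping; anything unlisted lands in 'uncategorised'.
FOLDERS = {
    "Howtown": "documentary",
    "Dave2D": "tech",
    # ...extend with your own channels...
}

def categorise(path):
    grouped = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            title = (row.get("Channel Title") or "").strip()
            if title:
                grouped[FOLDERS.get(title, "uncategorised")].append(title)
    return grouped

if __name__ == "__main__":
    for folder, channels in sorted(categorise("subscriptions.csv").items()):
        print(f"{folder} ({len(channels)} channels)")
        for name in sorted(channels):
            print(f"  {name}")
```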
After spending the best part of an afternoon reviewing my subscriptions and mercilessly removing a lot of unwanted or uninteresting ones, I ended up with half the initial amount — which is still a lot, but way more manageable. Again, following my self-imposed Be more selective guideline, the only discovery really worth sharing is, in my opinion, Howtown.
The channel description is perhaps a bit terse: The “How Do They Know” show from journalists Adam Cole and Joss Fong. So it’s better if you watch their short introduction video. Essentially, Cole and Fong create video essays on different subjects to answer the question How do they know or How do we know about this particular fact or topic? In their words:
We want to tell you our guiding principles so you can hold us to them. First, we approach our stories with curiosity above all. So this isn’t a commentary channel. We’re here to make sense of the evidence. We rely on primary sources and interviews, and we’ll share those sources with you with each video. If we make any factual errors, we will post corrections that explain exactly what we got wrong. Finally, we never take money in exchange for coverage. Our sponsors don’t have any control over what we make.
I find Cole and Fong to be entertaining, personable, and likeable; their videos are well researched and produced, and the fact that they don’t upload content frequently is a good sign in my book, because it means they’re taking the time to do their homework before presenting a new essay. If you’re an intellectually curious person, as I am, I think you’ll like their channel.
Another year, another round of copying-and-pasting the same quote from a few years ago:
In 2019 I unsubscribed from all the podcasts I was following, and I haven’t looked back. I know and respect many people who use podcasts as their main medium for expression. My moving away from podcasts is simply a pragmatic decision — I just don’t have the time for everything. I still listen to the odd episode, especially if it comes recommended by people I trust. You can find a more articulate observation on podcasts in my People and resources added to my reading list in 2019.
If you’re wondering why I keep the Podcast section in these overviews when I clearly have nothing to talk about, it’s because to this day I receive emails from people unironically asking me for podcast recommendations.
Yet again, nothing new to report on this front. I’m still using the same apps I’ve been using on all my devices for the past several years, and I haven’t found better RSS management tools/apps/services worth switching to. In my previous overviews, I used to list here all the apps I typically use to read feeds on my numerous devices, but ever since I broke my habit of obsessively reading feeds everywhere on whatever device, I’ll only list the apps for the devices I’ve actually used over the past year or so. If you’re curious to read the complete rundown, check past entries (see links at the bottom of this article).
In reverse chronological order:
I hope this series and my observations can be useful to you. Also, keep in mind that some links in these past articles may now be broken. And as always, if you think I’m missing out on some good writing or other kind of resource you believe might be of interest to me, let me know via email, Mastodon, or Bluesky. Thanks for reading!
2025-01-27 05:42:34
I was perusing some past issues of ACM Interactions magazine, and I stumbled on an interview with Don Norman, a figure I’ve always admired and one of the main inspirations behind my delving deeper into matters of usability, design, and human-machine interaction.
The interview, titled A conversation with Don Norman, appeared in Volume 2, Issue 2 of the magazine, published in April 1995. And of course it’s a very interesting conversation between Don Norman and John Rheinfrank, the magazine’s editor at the time. There’s really very little to add to the insights I’ve chosen to excerpt. While discovering them, my two main reactions were either, How things have changed in 30 years (especially when Norman talks about his work and experience at Apple); or, 30 years have passed, yet this is still true today. I’ll keep my observations to a minimum, because I want you to focus on Norman’s words more than mine.
Don Norman: […] John, you deserve much of the credit for making me try to understand that there are many forces that come to bear in designing. Now that I’ve been at Apple, I’ve changed my mind even more. There are no ‘dumb decisions.’ Everybody has a problem to solve. What makes for bad design is trying to solve problems in isolation, so that one particular force, like time or market or compatibility or usability, dominates. The Xerox Star is a good example of a product that was optimized based on intelligent usability principles but was a failure for lots of reasons, one of which was it was so slow as to be barely functional.
John Rheinfrank: Then your experience at Apple is giving you a chance to play out the full spectrum of actions needed to make something both good and successful?
DN: […] At Apple Computer the merging of industrial design considerations with behavior design considerations is a very positive trend. In general, these two disciplines still tend to be somewhat separate and they talk different languages. When I was at the university, I assumed that design was essentially the behavioral analysis of tasks that people do and that was all that was required. Now that I’ve been at Apple, I’ve begun to realize how wrong that approach was. Design, even just the usability, let alone the aesthetics, requires a team of people with extremely different talents. You need somebody, for example, with good visual design abilities and skills and someone who understands behavior. You need somebody who’s a good prototyper and someone who knows how to test and observe behavior. All of these skills turn out to be very different and it’s a very rare individual who has more than one or two of them. I’ve really come to appreciate the need for this kind of interdisciplinary design team. And the design team has to work closely with the marketing and engineering teams. An important factor for all the teams is the increasing need for a new product to work across international boundaries. So the number of people that have to be involved in a design is amazing.
Observation: This was 1995, so before Steve Jobs returned to Apple. But Jobs’s Apple seemed to approach design with this mixture of forces, and the results often showed the power of these synergies at play behind the scenes. Today’s Apple perhaps still works that way within the walls of Apple Park, but often the results don’t seem to reflect synergetic forces between teams or across one design team — it’s more like there were conflicts along the way, and an executive decision prevailed. (No, not like with Jobs, because he understood design and engineering better than current Apple executives do.)
JR: You just said that there may be some things about the computer industry, or any industry, that make it difficult to do good design. You said that design could only improve with industry restructuring. Can you say more?
DN: Let’s look at the personal computer, which had gotten itself into a most amazing state, one of increasing and seemingly never-ending complexity. There’s no way of getting out. Today’s personal computer has an operating system that is more complex than any of the big mainframes of a few years ago. It is so complex that the companies making the operating systems are no longer capable of really understanding them themselves. I won’t single out any one company; I believe this is true of Hewlett-Packard, Silicon Graphics, Digital Equipment Corporation, IBM, Apple, Microsoft, name your company — these operating systems are so complex they defy convention and they defy description or understanding. The machines themselves fill your desk and occupy more and more territory in your office. The displays are ever bigger, the software is ever more complex.
In addition, business has been pulled into the software subscription model. The way you make money in software is by getting people to buy the upgrade. You make more money in the upgrade than in the original item. Well, how do you sell somebody an upgrade? First, you have to convince them that it’s better than what they had before and better means it must do everything they had before plus more. That guarantees that it has to be more complicated, has to have more commands, have more instructions, be a bigger program, be more expensive, take up more memory — and probably be slower and less efficient.
DN: […] Now, how on earth do you move the software industry from here to there? The surety of the installed base really defeats us. For instance, Apple has 15,000,000 computers out there. We cannot bring out a product that would bring harm to those 15,000,000 customers. In addition, if we brought out a revolutionary new product, there’s the danger that people would say the old one is not being supported, so they’ll stop buying it. But they don’t trust this new one yet. “Apple might be right but meanwhile we better switch to a competitor.” This story is played out throughout the computer industry. It’s not just true of Apple. Look at Microsoft, which has an even worse problem, with a much larger installed base. It’s been a problem for many companies. I think the reason why a lot of companies don’t make the transition into new technologies is that they can’t get out of their installed base.
Mind you, the installed base insists upon the current technology. There’s a wonderful Harvard Business Review article on just this: Why don’t companies see the new technology coming? The answer is, they do. The best companies often are developing new technology. But look at the 8‑inch disk drive which has replaced the 14-inch Winchester drives. It was developed and checked with the most forward-looking customers, who said, “That will never work for us.” So the 8‑inch drive wasn’t pushed. Despite everything being done to analyze the market, in retrospect, the wrong decision was made. At the time, by the way, it was thought to be the correct decision.
It’s really hard to understand how you take a mature industry and change it. The model that seems to work is that young upstart companies do it. Change almost always seems to come from outside the circle of major players in the industry, not from within. There are exceptions, of course, of which IBM is an interesting one. IBM was once the dominant force in mechanical calculating machines and young Thomas Watson, Jr., the upstart, thought that digital computers were the coming thing. Thomas Watson, Sr. thought this was an idiotic decision. But actually Junior managed to get the company to create the transformation. It’s one of the better examples of change in technological direction, and it also was successful.
About Norman’s last remarks, see Wikipedia: “Watson became president of IBM in 1952 and was named as the company’s CEO shortly before the death of his father, Watson Sr., in 1956. Up to this time IBM was dedicated to electromechanical punched card systems for its commercial products. Watson Sr. had repeatedly rejected electronic computers as overpriced and unreliable, except for one-of-a-kind projects such as the IBM SSEC. Tom Jr. took the company in a new direction, hiring electrical engineers by the hundreds and putting them to work designing mainframe computers. Many of IBM’s technical experts also did not think computer products were practical since there were only about a dozen computers in the entire world at the time.”
JR: So it looks as though we have another transition to manage. It’s very strange that they call these devices ‘personal computers.’
DN: Yes. First of all they’re not personal and second, we don’t use them for computing. We’re using these things to get information, to build documents, to exchange ideas with other people. The cellular phone is actually a pretty powerful computer that is used for communication and collaboration.
Observation: This brief remark by Norman about mobile phones is rather amazing, considering that it was made back in 1995, when smartphones didn’t exist yet — the functions of what we now consider a smartphone were still split between mobile phones and Personal Digital Assistants (PDAs). Also, his point that these devices (personal computers) are not really personal still sounds especially relevant today, for different reasons. See for example this recent piece by Benj Edwards: The PC is Dead: It’s Time to Make Computing Personal Again.
JR: So in what direction do you think computer-interface design should go? Many companies are making moves to simplify entry and interaction (Packard Bell’s Navigator and Microsoft’s BOB). In the short term, how does this fit your vision?
DN: The question really is, in what direction do I see our future computers moving? Microsoft has introduced BOB as a social interface, which they think is an important new direction. Let me respond to the direction and I’ll comment later on BOB. As I’ve said before, I believe our machines have just become too complex. When one machine does everything, it in some sense does nothing especially well, although its complexity increases. My Swiss Army knife is an example: It is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or a scissors. My Swiss Army knife also has so many tools I don’t think I ever open the correct one first. Whenever I try to get the knife, I always get the nail file and whenever I try to get the scissors, I get the awl, etc. It’s not a big deal but it’s only about six parts. Imagine a computer with hundreds or thousands of ‘parts.’ I think the correct solution is to create devices that fit the needs of people better, so that the device ‘looks like’ the task. By this I just mean that, if we become expert in the task, then the device just feels natural to us. So my goal is to minimize the need for instruction and assistance and guidance.
Microsoft had another problem. Their applications are indeed very complex and their model is based on the need to have multiple applications running to do, say, a person’s correspondence, communication, checkbook, finances. How did they deal with the complexity with which they were faced? There has been some very interesting social-science research done at Stanford University by Byron Reeves and Clifford Nass, which argues that people essentially treat anthropomorphically the objects with which they interact, that is, they treat them as things with personalities. We kick our automobile and call it names. Responding to computers in fact has a tendency to go further because computers actually enter into dialogues with people, not very sociable dialogues, but dialogues nevertheless. So from their research, Reeves and Nass did some interesting analysis (somewhat controversial, by the way, in the social-science community) about the social interactions between people and inanimate objects. That’s all very fine, and you can take that research and draw interesting conclusions from it. It’s a very big step, however, to take that research and say that, because people impart devices with personalities, you should therefore build a personality into a device. That was not supported by the research. There was no research, in fact, about how you should use these results in actual device construction.
Observation: The bit I emphasised in Norman’s response made me wonder. And it made me think that maybe this is one of the reasons why most automated ‘AI’ assistants — Alexa, Siri, etc. — remain such ineffectual forms of human-machine interaction to this day. Perhaps it’s because we fundamentally want to always be the ones in charge in this kind of relationship, and we don’t like devices (or even abstract entities such as ‘AI’ chatbots) to radiate perceived personality traits that weren’t imparted by us. By the way, I hope we’ll keep holding on to that feeling, because, among other things, it’s at the root of a healthy distrust towards this overhyped ‘artificial intelligence’.
It’s very difficult to decide what is the very best way of building something which has not been studied very well. I think where Microsoft went wrong was that, first of all, they had this hard problem and they tried to solve it by what I consider a patch, that is, adding an intelligent assistant to the problem. I think the proper way would have been to make the problem less complex in the first place so the assistance wouldn’t be needed. I also think they may have misread some of the research and tried to create a character with an extra cute personality.
In his response, Norman continues with another interesting remark (emphasis mine, again). Even though it refers to a product we now know did not succeed — Microsoft BOB — I think he manages to succinctly nail the problem with digital assistants and to offer a possible, radical workaround; though I seriously doubt tech companies today would want to engage in this level of rethinking, preferring instead to keep shoving ‘AI’ and digital assistants down our throats.
JR: It seems as if substantial changes in design will take a long time to develop. Will we have something good enough for the ten-year-old with ‘Nintendo thumb’ before he or she grows up?
DN: I think for a while things aren’t going to look very different. The personal computer paradigm could be with us another decade. Maybe in a decade it will be over with. I’d like to hope it will be. But as long as it’s with us, there aren’t too many alternatives. We really haven’t thought of any better ways of getting stuff in or out besides pushing buttons, sound, voice, and video. Certainly we could do more with recognition of simple gestures; that’s been done for a very long time, but we don’t use gestures yet in front of our machines. I mean gestures like lifting my hand up in the air. We could, of course, have pen-based gestures as well, and we could have a pen and a mouse and a joystick and touch-sensitive screens. Then there is speech input, which will be a long time in coming. Simple command recognition can be done today, but true understanding is a long time away.
So in my opinion the real advance is going to be in making devices that fit the task. For instance, I really believe within five years most dictionaries will be electronic; within ten years even the pulp novel, the stuff you buy in the airport to read on the airplane, will have a reader. What you’ll do is go to the dispenser and instead of the 25 best-selling books, it will have 1,000 or 2,000 books for browsing. When you find a book that you like, you’ll put in your credit card and the book will download to your book reader. The reader will be roughly the size of a paperback book today and look more like a book than a computer. The screen will be just as readable as a real book. Then look at any professional, say a design professional. You couldn’t really do your design without a pencil. Look how many pencils good artists use. They may have 50 or 70 or 100 different kinds of drawing implements. We have to have at least that kind of fine-detail variation in input style in the world of computers. I don’t think we’ll have the power that we have today with manual instruments until we reach that level. I think the only way to get that power, though, is to have task-specific devices. That’s the direction in which I see us moving.
Observation: There was, indeed, a time when tech seemed to move in the direction envisaged by Norman, with devices designed for specific tasks. When Steve Jobs illustrated the ‘digital hub’ in the first half of the 2000s, the Mac was the central hub where we would process and work with materials coming from different, specialised devices: the digital camera, the camcorder, the MP3 player, the audio CD, the DVD, the sound-recording equipment. At the time, all these devices were the best at their designed tasks.
But then the iPhone came (and all the competing smartphones based on its model), and it turned this ‘digital hub’ inside out. Now you had a single device taking up the tasks of all those separate devices. Convenient, but also a return to the Swiss Army knife metaphor Don Norman was mentioning earlier in what I indicated as section №5: “My Swiss Army knife […] is very valuable because it does so many things, but it does none of the single things as well as a specialized knife or a screwdriver or scissors.”
If you think about it, the Swiss Army knife is also a good metaphor for a big part of the iPad’s identity crisis. The iPad is a big smartphone, a small laptop, and a smarter, more versatile graphics tablet, among other things; and yet it tends to do best at the task it ‘looks most like’: a tablet you use with a stylus to make digital artworks.
After years of fatigue with smartphones (and similar ‘everything bucket’ devices), it seems that we may be moving again towards task-specific devices, with people rediscovering digicam photography, or listening to music via specialised tools like old iPods and even portable CD and MiniDisc players. The e‑ink device market seems to be in good health, especially when it comes to e‑ink tablets for note-taking and drawing: products like the Supernote by Ratta or the BOOX line by Onyx, or the one that likely started the trend — the reMarkable. I have recently purchased one of these tablets, the BOOX Go 10.3, and it’s way, way better than an iPad for taking notes, drawing, and of course reading books and documents for long stretches of time.
Honestly, I hope we’ll keep moving in this direction, because this obsession with convenience, the insistence on eliminating any kind of friction and any little cognitive load, and the desire for single devices that ‘do everything’ are making interfaces more and more complex, and pushing tech companies to come up with debatable solutions to make such interfaces appear less complex. See for instance how Apple’s operating systems have been simplified at the surface level to appear cleaner, but in doing so have lost many UI affordances and much discoverability, burying instead of solving all the complexity these systems have inexorably accumulated over time.
Or see how digital assistants have entered the picture in exactly the same way Microsoft came up with the idea of BOB in the 1990s. As Norman says, an intelligent assistant was added to the problem, becoming part of the problem instead of solving it. So we have complex user interfaces, but instead of working on how to make these interfaces more accessible, less convoluted, more discoverable, intuitive, and user-friendly, tech companies have come up with the idea of the digital assistant as a shortcut. Too bad digital assistants have introduced yet another interface layer, riddled with the usability and human-machine interaction issues we all know and experience on a daily basis. Imagine if we could remove this layer of awkwardness from our devices and have better-designed user interfaces that eliminate the need for a digital assistant altogether.
[The full magazine article is available here.]