A collection of written works, thoughts, and analysis by M.G. Siegler, a long-time technology investor and writer.

AI Am Become Death

2026-03-03 21:22:34


In a way, it feels like the endless talk about AI over the past few years has been leading up to this moment. The discussions have always ranged from 'is AI just a silly toy?' to 'is AI the end of humanity?' but such talk has flowed from intellectual backroom gatherings to internet chat rooms, to dinner parties, to comment sections, to company town halls, to social media, and back again. Now here we are with the United States and several other countries actively engaged in armed confrontation – many would call this "war" though no one has declared it – and the AI debate swirls around it.

The timing is odd. We now know with certainty that last week the US was preparing to preemptively strike Iran. At the same time, the key cog in that machine, the Department of War (the artist formerly known as the Department of Defense), was also actively engaged in discussions with Anthropic – yes, the AI startup – over the usage of their models for purposes related to national security. I mean maybe, just maybe, punt the conversation until a better time?

But perhaps it's related, because the US knew these strikes were coming and that some level of Anthropic's technology would be used in them. And because there had been talk that Anthropic was questioning the use of their models in the raid to oust Nicolás Maduro from power in Venezuela a couple months prior, maybe the DoW wanted to get any usage squared away ahead of this new operation. Or perhaps Anthropic was pressing and the Pentagon, knowing what was in motion, got fed up. Or perhaps they viewed it as a potential point of leverage in such conversations. Who knows. Subsequent reporting will undoubtedly make all the timelines more clear once the smoke literally clears. But for now, this all just seems wild. As the US was ramping up to battle Iran, they were also battling one of their own technology companies.

Talking through the situation with Alex Kantrowitz on his Big Technology Podcast yesterday shook loose a few thoughts that I wanted to jot down. First and foremost, at the highest level, this all really might be as simple as the fact that the current administration and the leadership team at Anthropic, led by co-founder Dario Amodei, just really don't like each other. This disdain isn't a secret; there are plenty of public comments on the matter – in particular from administration officials noting their problems with Amodei's overall ideologies.

And in that light, it's even more strange that all of this is happening! Why on Earth is the DoW using Anthropic's models if they're so uncertain about the people building them – in control of them? The answer there may lie in their use of other technology from Palantir and Amazon, which integrates Claude. Or it may simply be that the technology is that good. Or, perhaps, the government has just been looking for the right excuse to shove them out the door. The timing wasn't great, but this squabbling over legal terms during a build-up to war was the final straw.

Let's put aside Occam's razor here a moment, because there's also been an escalation of the argument to a far higher level: that this really is about private company rights, government control, democracy itself, and, of course, nuclear war.

Again, I suspect this is more likely a fairly straightforward culture clash and I think the fact that the Pentagon was so quick to sign a deal with OpenAI showcases that further. Again, in the middle of war, they're hashing out new contracts with AI players. Perhaps because all they really needed was someone they deemed more palatable to sign their existing contract. In walks Sam Altman...

The fact that OpenAI felt the need to quickly amend those documents – undoubtedly after rather immense and immediate public backlash – also points to this idea. Altman said he was simply trying to de-escalate the whole situation by, um, stepping in and taking over the contract from his biggest rival, but he also admits now how bad that looked. "Sloppy," as he puts it.

Though, naturally, they'll be keeping the contract...

Anyway, everyone – including Altman and Amodei – clearly wants to make this into something more than a contract dispute and an inevitable conscious uncoupling between two parties that simply don't like each other. And there are certainly interesting debates to be had there. But I also think it doesn't serve anyone's real interest to blow this completely out of proportion.

But that keeps happening with AI because it's AI. Depending on the situation and your own vantage point and bias, it's either the answer to or the problem with everything. AI will solve productivity. AI will displace jobs. AI will cure disease. AI will lead to more suicide. AI will free us. AI will enslave us. It literally slots in everywhere in both directions depending on the argument to be made.

It's the Rorschach, um, tech.

Reading over the reactions to this latest brouhaha, it seems to me that it may be time to lay to rest one analogy that's very much top of mind and at the center of this again right now: that AI is the new nuclear weapon.

From Altman talking about the Manhattan Project. To magazines comparing him to Oppenheimer. To Amodei explicitly comparing selling NVIDIA chips to China to selling nukes to North Korea.1 The entire analogy has escalated too far. And it's clearly a big part of what is fueling this most recent debate.

But the comparison breaks down immediately at the most fundamental level. A nuclear weapon is just that, a weapon. It has one purpose – well, maybe two if you consider deterrence a purpose – and that is to destroy. Sure, we could argue that nuclear technology has other purposes, notably power, but come on, that's not the argument or comparison anyone is actually making here – aside from, historically, Iran! This is saying that AI is the biggest threat the world has faced since the advent of the atom bomb during World War II.

The difference, of course, is that AI has positive uses as well as negative ones. What is the positive usage of an atom bomb? Even if you want to say deterrence, that's decidedly the opposite of the usage of it. Ending the war with Japan? Sure, but that wasn't the initial goal and point of the project. It was simply to beat others – notably Germany – to ensure such power didn't end up in the wrong hands.

And that's exactly why the comparisons with AI keep getting drawn. Obviously, there are parallels in the build-out of AI and the race to AGI – the atom bomb in this scenario. But again, AGI would presumably have good uses as well as bad – perhaps even profoundly so. Sure, some people view the race as ensuring that America gets control of such technology first for defensive reasons (which may shift into offensive reasons just as the aforementioned Department of Defense has shifted back into the Department of War). But most of those building it view it as simply trying to move technological, and thus societal, progress forward.

Yes, many disagree with those notions. But certainly no one would say that there aren't any good, positive uses for AI. Those who are so worried about it simply view the negatives as outweighing the positives – ranging from day-to-day usage to again, all the way up to the end of humanity. But even the so-called "doomers" would not deny the potential for positive usage too. Again, what is the positive use case of a nuclear bomb?

So can we please de-escalate from that analogy? It just makes everyone crazed – on both sides of such debates. Even now, it's what's guiding a lot of the back-and-forth around whether a private company should have some say over what a government can or cannot do. The hypothetical is: what if a company controlled a nuke? Or mass produced them?

That is, of course, illegal. But we're not talking about making the creation or advancement of AI illegal. We are talking about putting some level of guardrails in place, and yes, governments are undoubtedly far behind in their ability to reliably do so, simply because they act far too slowly and AI moves far too quickly. Still, no one wants to deem work on AI illegal – well, perhaps some do, but no one serious – and that is just not the case with nuclear weapons.

So if people really want to use that analogy, they should also be saying that the government should take over control of the build-out of the technology. Many think they should be more involved – including the companies themselves, not least of which because they need money and red tape cut – but no one (again, no one serious) is suggesting the government take over full control of production.

And while some have suggested an actual Manhattan-like Project for AI, it's too late for that. Again, the technology is moving far too fast and any government would move far too slow.

I know it's tempting to use the nuclear analogy especially given the current conflicts – both actual conflicts around the world in which the US is engaged at the moment and the political ones happening within the United States itself – and the natural adjacencies. But it also just doesn't seem helpful, in part because of those actual conflicts. AI is not a nuclear weapon and we shouldn't portray it as such.

I don't know if the better analogy is electricity or the internet or some other profound breakthrough with both good and bad implications. And I want to acknowledge that it's entirely possible that AI ends up in a state where the bad does outweigh the good. I personally don't believe that will be likely, but it's certainly possible. But I do know there are no good variables with nuclear weapons. We now start wars over such beliefs...


1 Two sub-problems here. First, North Korea already has nuclear weapons, of course. Second, this splinters the analogy because here it's NVIDIA chips which are the nukes, not AI itself, which of course is the byproduct of those chips.

Increasingly, Big Tech Owns Big AI

2026-03-02 20:59:26


I feel like I'm taking crazy pills. For all the endless talk about both Big Tech and Big AI, it seems wild how little talk there is around the fact that increasingly, Big Tech owns Big AI. I mean this quite literally, as in ownership stakes. True, they're not controlling positions, but undoubtedly only because that would actually raise red flags. Instead, these are massive, growing bets that are clear hedges on their own internal AI work, and to try to counter competitors' own similar deals.

I've actually written about this a number of times. But with the latest OpenAI and Anthropic funding rounds, it's probably worth updating and spelling it out again. Perhaps a bit more clearly this time, with math...

TUDUMB

2026-02-27 19:21:01


Congratulations on saying the biggest number, Paramount. $111B for a company that a year ago had a market cap of around $20B. For a company that shrank in their most recent quarter, and in fact, for the entire year, with revenue down 5% to $37.3B. Paramount may not be buying the Titanic, but only because they already own that IP.

At the same time, kudos to Netflix for showing true discipline. Ted Sarandos kept insisting they would, but that's obviously far easier said than done when you've already talked yourself into a deal which you thought you had "won". They undoubtedly anticipated somewhat of a circus when they swooped in and stole it from under the nose of Paramount – the original bidder, remember – in December, but it turned into a full-on clown show.1 It turned into a battle against not just Paramount (as expected), but also politicians (to be expected), and Wall Street (probably should have been expected). Perhaps the real wild card though was the hatred from Hollywood itself (more on this in a minute). So it was clearly better to swallow that pride and walk away with a nearly $3B check of pure profit.

In a way, not bad for a quarter's worth of work distraction. It's almost exactly what Netflix made in actual profit last quarter.

Predictably, investors love this move. Netflix's stock has popped nearly 10% in after-hours trading. The share price had been ground down almost 25% since the deal was announced. The message was clear: you're the present and future of entertainment, why are you putting this albatross around your neck? The century-old past of entertainment stuck in perpetual decline?

At the highest level, it's why Netflix's deal was a surprise in the first place. Sure, you take a look at the deal, because why not? But nothing in Netflix's history suggested that they would take this too seriously. But Ted Sarandos surprised us. Clearly he saw a path to take such a storied legacy and shift it into the future. Netflix has proven their worth as an IP accelerant, what if they bought perhaps the best library available? It's an interesting idea, though I'm still not sure it made any business sense at $83B. At $100+B – the amount Netflix probably would have had to counter with (with a discount for the TV assets they wouldn't be buying, which Paramount will be buying) – I mean...

Of course, Netflix could have absorbed such a cost. It's a $400B company (well, before this deal, anyway) – double Disney! Paramount Skydance? They're worth $11B. Yes, they're paying almost exactly $100B more than they're worth for WBD. Yes, it's loony. But really, it's leverage.

To be clear, Netflix was going to pay for the deal with debt too, but they have a clear path to repay such debts. They have a great, growing business. They don't require the backstop of one of the world's richest men, who just so happens to be the father of the CEO. How on Earth is Paramount going to pay down this debt? I'm tempted to turn to another bit of Paramount IP for the answer:

1. Step one
2. Step two
3. ????
4. PROFIT!!!

Or maybe David Ellison should start an AI company, raise billions, then merge it with Paramount. I mean, this has worked in the past for deals that make absolutely no sense on paper!

But really, the answer will undoubtedly be a combination of huge cuts – "synergies" – mixed with the hope that rates keep going down so this can all be constantly refinanced with the buck being passed around until they figure out a way to spin out some assets and burden them with the debt. You know, just like David Zaslav was about to do!

I'm being harsh. The truth is that this is the deal that I thought Paramount should do! As Skydance was in the midst of acquiring National Amusements, and thus, Paramount, I wrote the following in July 2024:

There's been a lot of talk amidst the Paramount dealings that WBD might be a good home/partner. What if, once the Skydance/Paramount deal is closed, *they actually buy WBD*? Yes, there are debt issues, but a year from now, hopefully WBD head David Zaslav will have a better answer and path there. Ellison has spoken a few times about Paramount+ in particular. Most assume they'll either spin it off or merge it with another player, like WBD's Max or Comcast's Peacock. And perhaps they will. But again, I'm not sure they shouldn't just buy *all of* WBD to bulk up into one of the major players themselves.

Well, they took my advice. And just over a year later, with their deal finally closed, they made their bid. But that bid was $19/share – and a mixture of cash and stock. That would have valued WBD at just below $50B (before their own aforementioned sizable debt was taken into account). After eight more offers from Paramount, and yes, the one from Netflix (including switching their own offer from cash/stock to all cash), here we are.

Anyone who took issue with David Zaslav's pay package should apologize immediately. He may not have been great at running WBD's actual business, but the financial engineering required here to turn $50B into $80B into $111B in just a few short months – again, while the core business declined – is truly something.

But again, while this deal makes no sense financially, it does feel like Paramount needed it. For Netflix, Warner Bros was a nice-to-have. For Paramount, it was existential. Without the Warner Bros studio – which, before a last-minute holiday surge by Disney, was the number one movie studio for much of last year – Paramount was the distant fifth place player in a group of five. Adding WB vaults them close to or at the top with Disney.

Meanwhile, in streaming, Paramount+ is also the fifth place player, but there, it could be worse: they could be Peacock. Still, despite some decent numbers – thanks Taylor Sheridan (who subsequently bailed) and the NFL – they were far behind the "major" players. Including, yes, HBO Max. This deal vaults them near the top there too. Ahead of Disney but behind... Netflix.

So yeah, you can see why Paramount felt the need to win this deal, no matter the cost. Without it, they were a sub-scale player. With it, they're a real player.

Of course, it's in a game stuck in secular decline. Disney makes most of their money from the theme parks and cruises, not the movies. There's a reason why Parks chief Josh D'Amaro is the new CEO. Yes, the IP fuels it and keeps the flywheel going, but even Disney has to manage the fact that the actual movie business is simply not a great one anymore.

You'd think Hollywood would recognize this. But it's hard when their jobs literally rely on them not recognizing it. They're blinded by box office results that exist in a magical realm where inflation doesn't seem to. If we look at tickets sold – butts in seats – the situation looks dire. And it looks even worse if you put it through the lens of per capita moviegoing in the US over time.

Netflix carved a new path forward in the form of streaming. No, it's not as good of a business as the heyday of movie theaters – or even DVD sales – but it clearly works for Netflix. They're on their way to becoming the first $1T media company. Again, they thought Warner Bros' IP could have accelerated that, but now they'll find another way. One big question: without the need for Sarandos' endless promises of doing proper theatrical releases and windows, will Netflix still go down this path? I still think they will – unless Sarandos decides the path forward with Hollywood is more scorched Earth in light of their reaction to his deal.

I wouldn't be shocked if Netflix goes the other way: aiming to show Hollywood what they missed by moving to dominate the box office themselves. Then again, I predicted this move long before this actual deal.

Perhaps it's simply, "They won. We lost. Next."2

Here's the thing: Hollywood absolutely shot themselves in the foot here. They thought the Netflix/Warner deal signaled the end of their industry, when really it showcased the best possible path forward. To be fair, it's not like the industry loves this Paramount deal either, but what they wanted was no deal. For Warner Bros to continue on as it was, forever. That was simply not tenable and not an option. If Netflix was a path to growth, and Paramount is a path to slower decay, the status quo would have been a quicker collapse under the burden of steady, managed decline.

I'm not trying to be a dick, I'm trying to paint a realistic picture. The only studio that can survive on its own is the one that has for the past century: Disney. And again, that's thanks to their other businesses propping up the studio. For a long while this was cable. Now it's the aforementioned parks.3 There's a reason why every other studio has spent much of the past 100 years being passed around various conglomerates like trading cards. These are not great businesses! Certainly not in our modern age. And when a modern age player came calling, Hollywood freaked the fuck out and threw a tantrum until they walked away. Nice work.

I'll end by once again quoting some Warner Bros IP, fittingly from the fictional media mogul I kicked off by quoting. "Money wins."

But really, it's more like: "Debt wins." Good luck.

Disclosure: I own a relatively small amount of shares in Netflix and have for years (though not as long as I should have), for the reasons outlined above. As I've said throughout with these disclosures, it's probably better for the stock in the short term to *not* do this deal. And here we are.
👇
Previously, on Spyglass...
Hollywood Cuts Off Its Future to Spite Its Present
Netflix is obviously the best path forward for Warner Bros, you fools…
Oh No, a Tech Company is Buying a Movie Studio
This is the end of Hollywood? Come on.
The $1T Media Company
Netflix has owned Hollywood, and aims to keep doing so…
The Grand Netflix Hollywood Unification Theory
Warner Bros/HBO is phase one of Netflix’s bigger play here…
The Albanian Army Closes in on Warner Bros
In a stunning turn, Netflix enters pole position to take over Warner Bros and HBO…
Paramount Skydance’s Blockbuster Bid for Warner Bros Discovery
One good idea, so many names…
How to Scale Paramount
Can Skydance finally, actually bridge Silicon Valley and Hollywood?

1 Strange how this keeps happening with Skydance deals? Also, I'm not really going to delve into the political aspects here, but I very much look forward to future reporting on that particular aspect, which sure seems to have a strong waft of a bunch of bullshit.

2 Though "next" remains figuring out a way to combat YouTube. You know, the real competition here... To that end, probably not crazy to think that Netflix may be able to buy at least some of these assets – at far more firesale prices! – in a few years...

3 Yes, Warner Bros has a small parks business too, thanks mainly to Harry Potter and deals with Universal. Paramount has been trying to get back into the game (after selling off some amusement parks back in the day).

Apple is About to Have *Two* Toaster/Fridge Hybrids

2026-02-27 00:42:56


The year was 2012. Tim Cook had only been in place as (permanent) CEO of Apple for a handful of months. During their Q2 earnings call, when asked about Microsoft's strategy of converging the laptop and tablet with their then-forthcoming Windows 8 operating system, Cook had the quip ready:

"You can converge a toaster and a refrigerator, but those things are probably not going to be pleasing to the user."

A lot can change in 14 years. Including, it seems, kitchens...

Tasty AI

2026-02-25 23:34:11

"The only problem with Microsoft is that they just have no taste."

This Steve Jobs quote from a 1995 interview – notably before he returned to Apple – has long lingered in the back of my mind. To me, it's more than just a succinct evisceration of his rival; it speaks to a problem with a lot of technology. It's the notion that, with products in particular, there's an art alongside the science. And it sure feels like this will be a crucial component in AI.

It feels like we've been seeing this come up more and more as AI starts to permeate everything, but especially the aspects bordering on everyday life. As the technology continues to become more capable, the limitations shift from what it can do, to how it does such things. And why it makes the choices that it does. Famously, no one really knows on a granular level – not even those creating the technology. It's all simply too complex, with too many inputs, and increasingly is teaching itself. We know the high-level ways in which it works, at least with LLMs, but we're also starting to branch beyond that. Which, of course, many believe will lead to AGI.

But before we get there, it does seem like we may need to reconcile this notion of taste. I've previously written about the fear that our current AI may be incapable of truly original thought, and that anything we're seeing that may appear as such is really just algorithmic anomalies that we don't understand. The output may look the same, but it's not the same. Because it's the input that may matter most. To truly come up with new discoveries and epiphanies – to create actual iconoclastic thinking – we may be missing some ingredients as everything sort of gravitates towards an ultimate mean.

Our current AI may be able to find any needle in any haystack, but can't write Don Quixote.1 Well, it can now, but couldn't in 1605, had such technology existed then. But actually, given the knowledge of every Spanish word and the ability to run infinite combinations of those words, AI would technically write Don Quixote in one scenario. Cervantes may have been more efficient in doing so, but it's just a matter of enough compute and resources to mimic the path his mind charted.

Still, someone – or something – would be required to distill those infinite versions of Don Quixote into the definitive one. And how would it choose? Certainly a human could do this, but could AI? If forced to, it would undoubtedly make such a decision, but again, how and why? It would go back into the endless algorithms and data sets. But a human would just go with a "gut instinct" – how one version made them feel versus the others.

This is taste.

Steve Jobs, of course, meant "good taste" with his comment. But that's obviously subjective. Even Microsoft's "bad taste" or "poor taste" or technically even "no taste" is still taste. And while he levied the charge on the entire company, it was a series of decisions by individuals that led to that taste for which he had distaste.

I think you could argue that one key ingredient for taste is limitations. That is, while AI can run infinite permutations and sort through the infinite results, human beings cannot. At some point, we have to choose based on what is feasible. That includes the way we come up with words. In a way, I suppose it's similar to what AI does – we go to what we've learned before and try to string things together in a coherent manner – but again, we cannot do it to infinity. A machine technically could (given enough compute and time) and so it uses the strings found in data sets to make decisions based on probabilities.

Said another way: the data created by humans that the machines have ingested is the proxy for "taste". It's the way those decisions are made at the highest level.
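To make that mechanical version of "taste" concrete, here's a toy sketch – my own illustration, not how any real model is built – of choosing the next word by sampling in proportion to how often it followed the previous word in the ingested data. Real LLMs do something vastly more sophisticated over tokens and context windows; this is just the probability-over-learned-strings idea scaled down to a bigram counter:

```python
import random
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for humanity's writing.
corpus = "the quick brown fox jumps over the lazy dog the quick red fox".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    # Sample the next word in proportion to how often it followed `prev`.
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return rng.choices(words, weights=weights)[0]

print(next_word("the"))  # prints "quick" or "lazy" (weighted 2:1 by the corpus)
```

The point of the toy: the "decision" is nothing but frequencies inherited from the human-authored data, which is exactly the sense in which that data acts as a proxy for taste.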

And as synthetic data has entered the equation from the outputs of those decisions, the AI can output new variables that are only reliant upon humanity once-removed. As you go further afield, earlier iterations would collapse upon themselves, perhaps because they lacked the guardrails of humanity's decisions – or taste. Maybe it's not too dissimilar to how the end of the Game of Thrones television series devolved into sloppiness because it had no guardrails in the form of George R. R. Martin's books. But I digress...

With the wave of agentic AI currently sweeping into our lives, taste is shifting from a nice-to-have to a must-have for many. Personally, even if I believed that AI is now capable enough to do certain tasks for me, I still mostly wouldn't trust it to do so. This goes for things as simple as organizing folders on my computer to far more sensitive matters such as taking over control of my email inbox. It has become less about technical abilities and more about decision-making. And yes, in a way, taste.

Programmers seemingly also had a similar issue early on, but at least some now believe that tools such as Codex and Claude Code are good enough on this front. Matt Shumer even specifically called out this notion of "taste" as his "aha" moment in his recent viral post "Something Big is Happening":

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

And later on, he reiterates:

The most recent AI models make decisions that feel like judgment. They show something that looked like taste: an intuitive sense of what the right call was, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Meanwhile, the very same word came up again in this past week's viral post from Alap Shah and Citrini Research, "The 2028 Global Intelligence Crisis", noting that in the future, humans may still be needed as a sort of last line of defense around AI outputs "directing for taste".

So AI "taste" may be good enough for coding now, but still may not be for general AI usage. And even in the most optimistic (in terms of AI's near-future capabilities – decidedly pessimistic for humanity) take on where AI is heading, we may not get to AI "taste" anytime soon. The reality, of course, is probably somewhere in between the two.

Clearly the AI labs believe that they can solve for "taste" by amping up personalization. As AI learns from your responses, it is tailoring those to be more in line with what it believes you like. But beyond the sycophantic (and other) concerns, there's the notion that "taste" actually encompasses far more than your own preferences: you may also seek out the preferences of others to tell you what they like for you.

It's nuanced, but different.

So how do we get AI to do that? The answer may still be personalization, but rather than it just mirroring you and your own tastes, you sort of do rounds of "dating" – for lack of a better phrase – to see which AI is most compatible with your own tastes. Forget the Turing test, how about a simple "taste test"?

This resonates with me because I've been doing this for a while now. Currently, I'm paying to use ChatGPT, Gemini, and Claude to see which suits my own style and preferences best. Right now, I think it's Claude, but that has morphed over time and I suspect it will again as each of these models continues to evolve.

Still, as deep as I am in all of this, I have a hard time believing that I'm going to trust one of these systems to handle harder workflows anytime soon. And it's not about the tech, it's about the taste.

Take, for example, the use case everyone in tech always turns to for demos: booking travel. I fully believe these services will be able to technically do that end-to-end soon enough, but I simply don't believe that the decisions they make will align with the ones I would make. Some of that may be logistics, but I also think these systems will get past that once they have full access to your calendar and email and the like. But that's just the start. From here we quickly enter a maze of a hundred little preferences that are altered by thousands of real-world variables. If the powers that be thought the game of Go was a good, complex task to prove out AI, wait until it gets a load of trying to book travel for a family with children.

Do I believe that AI will pick the place that will be best for me and my family? Sight unseen? Of course not. So the AI will prompt me just as Claude Cowork now does to ensure it's picking the right places and services. But it's not long before that's more work than simply doing it yourself. And so that doesn't work.

We'll check off all the technical boxes but the taste ones may yet remain. Because how do you code up "there are a bunch of options, this one looks nice, let's try it"? The AI might ask what you mean by "looks nice" since all of the images on the travel site that they've ingested through an API technically "look nice". And so you'll explain to the AI that you're not sure, it's just a "vibe" you're getting. And the AI will say it understands, because those are words it has ingested, but it will not actually understand. Maybe, perhaps, because an AI hasn't lived. And, as such, doesn't have the memories forged in the real world that subconsciously alter the way an image makes you feel.

In the past, people have obviously used travel agents to make such calls. And others use personal assistants – the actual human variety. But those are human beings with taste. And you can suss out if you trust their taste to match your own.

If AI technically has no taste... I turn back to Steve Jobs:

"They have absolutely no taste. And what that means is – I don't mean that in a small way, I mean that in a big way. In the sense that they don't think of original ideas."
👇
Previously, on Spyglass...
Love It If We Made It
AI will disrupt work. We will adapt.
It’s The Thought That Counts
The diminished state of thinking could be decimated by AI…
AI Can Reproduce Writing, But Not the Process of Writing
And that’s the most important part…
AI Needs Its Steve Jobs
Everyone seemingly wants to shoot the current AI messengers…
Artificial Iconoclasts
A foil might be needed for artificial intelligence…

1 Try using AI to get to the bottom of where the "needle in a haystack" phrase originated, I dare you! Gemini believes it's Sir Thomas More. ChatGPT is sure it was Chaucer. Claude thinks it's Cervantes. Digging around myself, I believe the answer is that More used a similar (but different) metaphor (involving a meadow), while the Don Quixote reference actually stems from the English translation of the book because no one outside of Spain would know what "To go looking for Dulcinea in El Toboso like looking for Marica in Ravenna..." would mean. But the actual phrase undoubtedly predates that in ways too – it seems unlikely one translator is that clever – that book translation just helped establish it widely. When pushed on Chaucer, ChatGPT kept right on hallucinating in ways that were both fascinating and entirely unhelpful in terms of conveying certainty! Anyway, my own taste here guides me towards using the Don Quixote reference.

Exbox

2026-02-24 20:12:20


Almost exactly two years ago, I wrote a post about the muddled mistakes Microsoft seemed to be making with their "Xbox Everywhere" strategic shift. Re-reading it now, in light of the news of the major shakeup atop the Xbox division, I think it pretty much nailed the failures we're now seeing play out, which have culminated in these changes – so much so that I'm going to steal my old URL slug for this title. Because I do think this signals the end for the endeavor...