Spyglass

A collection of written works, thoughts, and analysis by M.G. Siegler, a long-time technology investor and writer.

Sometimes It's Too Slow

2026-02-09 07:18:00


I mean, he just sort of says it. "You can mark my words. In 36 months – but probably closer to 30 months – the most economically compelling place to put AI will be space." It's only about four minutes into a nearly three-hour-long podcast when Elon Musk makes his proclamation. Of course, it's one he's made before, in his own post announcing the merger of SpaceX and xAI. But the whole "mark my words" bit feels pretty definitive.

Such statements are nothing new for Elon Musk. I'm reminded of the old Douglas Adams quote: "I love deadlines. I love the whooshing noise they make as they go by." To be honest and clear, Musk has done some incredible things. Obviously. But he has almost never done them on time – at least not on the initial timelines he has given. And there are plenty of timelines that we're still waiting on. Getting to Mars, to name one. And that one is top of mind, as it seems like it's going to be further delayed.

Look, we all get the strategy of "dreaming big" and how that can push and inspire people. But this whole "data centers in space" thing seems less about that and more about getting shareholders excited about a merger right now – and, potentially, public investors down the road. Is this notion self-serving? Yes. Incredibly so. Musk is trying to leverage the (absolutely massive) advantage he has here.

At the same time, he probably isn't wrong in noting that building in space will be easier from a regulatory perspective. Which is a sort of wild sentence, but still not crazy? And it's perhaps ultimately the only viable longer-term path. But it will also be harder, in the short term, from about a million other perspectives...

I just feel like it's worth being actually intellectually honest here.1 It's not that we shouldn't try all of this. It's that we should try, at least on some level, to still be realistic.

The reality resides in another quote, this one from one of Musk's famous foils, Bill Gates: "Most people overestimate what they can do in one year and underestimate what they can do in ten years."

36 months from now is early 2029. And 30 months is mid-2028. Now there's actually quite a bit of wiggle room in what Musk is saying here – "economically compelling" is decidedly not "most AI data centers will be in space". But even the suggestion that it might be cheaper to do data centers in space at that point will likely be folly. Maybe he gets away with using "compelling". But I think on Gates' scale, it's a pretty clear overestimate of where we'll be in a few years, while at the same time it may be thinking too small over a longer-term time horizon!
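For what it's worth, the date math checks out. Here's a minimal sketch – assuming the episode aired in early February 2026, around when this post was published; the add_months helper is just for illustration:

    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Shift a date forward by whole months; the day is clamped to 1
        # since only the month and year matter here.
        total = d.year * 12 + (d.month - 1) + months
        return date(total // 12, total % 12 + 1, 1)

    start = date(2026, 2, 1)      # assumed air date of the episode
    print(add_months(start, 36))  # 2029-02-01 -> early 2029
    print(add_months(start, 30))  # 2028-08-01 -> mid-2028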

Anyway, we'll see. Musk just made every reporter on the planet set a reminder for those dates. And much like his other dates and predictions, it probably won't matter as they whoosh by...

I was mainly thinking about this in a broader context, which is the current macro panic that software may be dead. Or at the very least, that SaaS businesses are over. Why? AI, of course. And it ties into the bigger picture that people are increasingly concerned that a lot of careers are about to be over due to this new technology.

Obviously, there are some justified reasons for such fears, as we're seeing even the Big Tech companies lay off thousands in the name of better "efficiency" thanks, in no small part, to AI. At the same time, it's undoubtedly ridiculous to think that this is the end of software, and all of the businesses built on the back of software.

Maybe that's a major problem in a decade. Maybe. But in the next year or two years or even three years? Probably not. I mean, just look at the current state of AI. It's both incredible in small ways and incredibly bad in big ways. There's promise but also profanity. As in, the words that will come out of your mouth if you try to task the technology with doing pretty much anything truly meaningful right now.

So while Anthropic may have new plug-ins to help streamline legal work, you're not going to fire your legal team any time soon. Again, not this year. Not next. And not the following. Wall Street can freak out about this all changing overnight, but the history and reality of technology tells us that, to bring in a far more recent quote, "Sometimes it's too slow."

"For sure," as Emmanuel Macron added to drive home his point on the state of Europe and also to drive home downloads for his inevitable "Song of the Summer".

I wouldn't dare suggest that technology moves at the pace of the EU, but it often does move far slower than anyone within tech would like – or would promise. But back to Gates: that doesn't mean the visions are wrong, it's just that it all takes longer than anticipated in the heat of the current hype.

I'm reminded of self-driving cars, promised long ago by Google as being right around the corner. Well, they're here now, but only in a handful of cities, and that took 15 years.

15 years from now, it will be 2041. It's a date that feels sort of reasonable for data centers in space. And probably even then just on a smaller scale. Sometimes it's too slow. For sure.


1 And major kudos to Dwarkesh Patel for pushing on all of this.

Wall Street Starts to Turn on AI

2026-02-06 06:48:10


Stocks! As it turns out, they go down too. How easily we forget, but the past few days have been a good reminder and/or wake-up call depending on your positions. In some ways, there are parallels to the "DeepSeek Moment" a year ago. In other ways, this drop is more nuanced. And perhaps more natural...

Alexa+ Plus ChatGPT?

2026-02-05 03:22:58


When reports started to circulate that Amazon was not only investing in OpenAI's latest fundraise, but might put in upwards of $50B, I wondered what this meant:

Is it meant to say something to the market, that they recognize that everyone is asking for access to OpenAI tech? That they’re worried about the state of their in-house models? Something about their relationship with Anthropic, as the latter gets closer to the likes of Microsoft?

As it turns out, it may be all of the above, and a few more things on top. Notably, as Anissa Gardizy, Catherine Perloff, and Amir Efrati report for The Information:

As Amazon weighs an equity investment of tens of billions of dollars in OpenAI, the companies are also discussing a commercial agreement that could require OpenAI to dedicate its own researchers and engineers to developing customized models for powering Amazon’s own AI products, according to a person involved in the talks and another person who was briefed about it.

Amazon could use customized versions of OpenAI’s models to bolster Amazon AI products such as the Alexa voice assistant, one of these people said. Such an arrangement could require both companies to tweak OpenAI models so they respond to customers the way Amazon wants, this person said.

I'm honestly not sure what to make of this. Just today, Amazon opened up Alexa+ to everyone (in the US), after nearly a year of limited access while they tested it in real time. It's... fine. Good in some ways, less good in others. But it's almost certainly too early to know if it's a success or failure yet. With the roll-out, they're obviously touting stats indicating expanding usage, but perhaps they're also seeing other data points that are less encouraging, which they're not sharing?

Or perhaps they simply want to put their foot on the gas. What's better than Alexa+ powered (in part) by Anthropic? How about Alexa+ powered by Anthropic and OpenAI. That's especially interesting when you consider that Apple is about to roll out their upgraded Siri powered by Gemini. That could very well be what Amazon feels the need to combat here.

What's ironic is that reports indicate Apple may have actually wanted to go with Anthropic given the results during the Great AI Bake-Off – and their own apparent usage of Claude internally – but business terms dictated that Google was a better fit. Amazon not only has the deal in place with Anthropic, they're the largest shareholder in the company! But beyond Anthropic getting closer to the likes of Microsoft and others, there's this element:

Anthropic is generally more restrictive about letting customers customize its models, a process known as fine-tuning or post-training, than OpenAI is. For instance, Amazon employees cannot post-train Anthropic models beyond what other Anthropic customers have access to, according to two people with knowledge of the matter. On the plus side, Amazon employees have been able to use Anthropic models to create less powerful models through a process known as distillation, the people said.

And that's potentially ironic since – at least by OpenAI's telling – they didn't want the business of helping to rebuild Siri because they didn't want to do the custom work Apple required. Now they're apparently talking about doing just that for Amazon? I guess $50B will do that! Apple, after all, is thought to be paying just $1B a year for their AI partnership work. Everyone and everything has a price, perhaps!

And unlike with the Apple/Google deal, you'd think if Amazon does go down this path, they'd want to tout the OpenAI partnership/capabilities! Hell, they might even market a future Alexa+ this way: it's the Alexa you knew and loved, now powered by both Claude and ChatGPT!1 Oh yes, and Amazon's own models. Though that's apparently another layer here, as I suspected. They're simply not good enough, it seems, and certainly no selling point. Especially if they're going to have to combat Apple with a Gemini-powered Siri (again, even if they don't tout the partnership) and Google itself with their own Gemini-powered products.

There are probably other layers here too. Access to Codex may be one, for Amazon to use/offer alongside Claude Code? Getting OpenAI to use Trainium chips, perhaps? And yes, maybe opening up Amazon's all-important everything store to OpenAI's agents.

Still, $50B is a lot of money. Maybe this is just all about ensuring Google doesn't start to break away and dominate AI? Regardless, the 'Anti-Google AI Alliance' would certainly grow stronger with this news. Alexa powered by not just Anthropic, but OpenAI too...

👇
Previously, on Spyglass...
The Anyone-But-Google AI Alliance
Big Tech’s billions into OpenAI signal something…
Conflicts & Interests in AI
*Of course* Amazon is investing in OpenAI, next up…
Collect Them All (AI Edition)
An ongoing list of the tangled web of Big Tech investments in Big AI…

1 It seems more likely that they would simply have Anthropic and OpenAI models behind the scenes powering Alexa, just as is the case right now with Claude. Still, if they did this deal, it seems like, unlike with Apple, they'd want to tout it? Maybe simply having the best possible voice assistant (thanks to the best combination of tech) would be enough?

AI Bots Are Molting

2026-02-04 21:45:34

For this month's appearance on the Big Technology Podcast, we kicked off talking about Moltbook – the Reddit-like social network built by AI bots, for AI bots. Is it real? Sort of – at least some of the content is clearly people gaming the system. But it also points to something potentially interesting/important with regard to AI agents, and how they'll interact with one another in different scenarios in the future. Really, it's a continuation of some early behavior on Facebook all the way up through Microsoft's "Sydney" situation – which may have ruined AI for Bing.

And yes, this is potentially one massive security nightmare given that people are running these "OpenClaws" on their own systems. And then there's the potential for existential risk if billions of such bots are created, become aware, organize, and decide that human beings might make for better pets...

From there, we hit on the (still ongoing) mystery around NVIDIA's highly touted $100B bet on OpenAI. Sorry, "up to" $100B. Which is now looking more like $20B into OpenAI's new round of funding. Jensen Huang would like us to believe there's nothing to see in such a change, but he sounds a lot like he did when he was "delighted" by Google's TPUs...

Speaking of that new round, what are we to read into the notion that Amazon may put $50B into OpenAI? I think it has to do with an anyone-but-Google mentality that the rest of Big Tech is circling around given the rise of Gemini.

Meanwhile, with OpenAI and Anthropic wrapping up their new massive rounds, all eyes turn to the race to IPO. If Anthropic beats OpenAI to market, it may create a real narrative problem for OpenAI. Of course, with SpaceX and xAI now one company – confirmed the day after our podcast, but we discussed the likelihood – Elon Musk may have executed an end run around his rival. If SpaceX goes public in June, you can bet Elon will play it up as the first true AI company that the public can bet on – one wrapped in a profitable rocket ship... Might that push the other big AI players' IPOs into 2027?

Finally, we hit on Apple's latest earnings. Great numbers aren't equating to great stock performance. Part of it is clearly memory chip concerns going forward, but as with that situation, it's AI consuming all narratives. While Wall Street applauded Meta's huge CapEx increase, Apple's spending remains closer to $0. That may end up looking smart and prudent in the short term – especially if/when the market shifts – but will it hurt them in the long run? What else are they spending all that cash on? Stock buybacks?

Come on, Apple. If you don't spend on data centers and/or other elements of AI eventually, Google could end up owning the aiPhone.

Space Twitter!

2026-02-03 22:36:21


“We wanted flying cars, instead we got 140 characters.” Peter Thiel's famous quote from 2011 was obviously meant to inspire entrepreneurs to think and dream bigger than the rather trivial tech "breakthroughs" of the day – meaning, of course, Twitter. Well, a lot can change in 15 years.

With the news that SpaceX is acquiring xAI, that 140-character service now technically has a home in the stars.1 It's not flying cars, but it is flying rockets. And there's now a direct line of sight to flying cars, once Tesla is inevitably folded into the mix as well...

My 2026 iPhone Homescreen

2026-02-03 01:18:41

Well this is embarrassing. I'm always somewhat late to posting my yearly iPhone homescreen post, but this is the first time it has slipped into February. Still, here we are and the screenshots must go on, just as they have in 2025, 2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, and 2014.

While this year may not look dramatically different from last year's, it may also be the last year before some really major changes if, as expected, there's a new type of iPhone released in the fall: the 'iPhone Fold'. And if I decide to switch to that as my daily driver. I highly suspect that I will – certainly at least to try it out – but we'll see. While I think the Pixel Fold has given me a glimpse into what this will be like, I suspect Apple's entry may be a bit different. Notably, the size and ratio of the device may be different (think: shorter and squatter when folded, and iPad mini-like when unfolded). Regardless, the homescreen is going to be wildly different – certainly when unfolded.

Anyway, we'll get to that in early 2027. For now, the main differences for 2026 are around, unsurprisingly, AI. Starting with my top-left widget (basically my standard widget slot since widgets were introduced into iOS in 2020 or so), which is now full-on Gemini. Think of it as my way to get ready for our Google-powered Siri, which seems to be fast approaching.

It will be interesting to see if that integration makes such a widget – and the Gemini app itself – superfluous. I doubt it because, first and foremost, Gemini will continue to be an app to capture all your AI workflows, whereas Siri will simply be a system-level integration (read: no app). I'm sort of surprised by such reports – including with the "real" new Siri coming in iOS 27 – and I would bet that Apple backtracks here by iOS 28. Anyway, for now, I have Gemini right up top.1

That's also, in part, because, as you can clearly see, I continue to be in full-on AI testing mode. Which is to say, I don't have my "one AI to rule them all". While I still mainly use ChatGPT – hence the continued dock placement – with the release of Claude Cowork (and really, the Opus 4.5 model), I've become very "Claude Curious". While OpenAI continues to have an edge on the consumer-facing product side (though shout out to Google's strides this past year), Anthropic's underlying models seem very, very good. Many might even say better – and that may even include Apple, which is reportedly all about Claude internally, and may have gone with Anthropic to power Siri had Google not offered them better terms.

Anyway, there's the Claude app, lingering right above ChatGPT...

Beyond that, and a few placement tweaks, there's nothing really new and different this year. To make room for Claude, I shoved the Apple News app into a folder. I simply don't launch it enough, and mainly interact with it through push notifications. Yes, I still keep Xitter off my homescreen even though I use it quite a bit to scan for news/information. Threads continues to pick up some of that slack, but if I'm being honest, Bluesky is falling in usage for me – might it fall off the homescreen in 2027?

I should probably replace Mail with Gmail, which I do use more. But I also hate the idea of having a constantly used email app on my homescreen. Maybe if Google brings their AI wizardry to the Gmail app – and if it's actually any good – I'll do the swap.

The same could be said with Maps vs. Google Maps – I probably use the latter more, but I do find the data to be increasingly suspect and the design increasingly cluttered, so I do use Apple's variety roughly the same amount (also to double check any directions). I wonder how/if/when AI will change this equation...

Everything else is pretty chalk: Photos. Camera. Phone. Calendar. Audible. Podcasts. Music. FT. Economist. ESPN. NYTimes. WhatsApp. Ulysses. Reeder. Bear.

Ditto the dock: Messages. The aforementioned ChatGPT. Matter. Safari.

One more thing: I still have my "Action Button" set to launch ChatGPT as well, so having the app on the homescreen may be overkill. That's good as I realize I may need to prepare for a world of fewer apps on the homescreen if the iPhone Fold screen really is a completely different (and smaller) ratio...

👇
A couple posts from the weekend on Spyglass...
Where the Wild Bots Are
Should we be concerned or amused by Moltbook, a social network where AI can talk amongst itself? Maybe both?
NVIDIA and the Case of the Missing $100B OpenAI Investment
The massive deal touted by both OpenAI and NVIDIA as a landmark one now looks a lot different – unless you ask Jensen Huang…

1 And I like having the four dedicated buttons for different AI modes, which all of the AI widgets seem to have coalesced around.