Spyglass

A collection of written works, thoughts, and analysis by M.G. Siegler, a long-time technology investor and writer.

Alexa+ Plus ChatGPT?

2026-02-05 03:22:58

When reports started to circulate that not only was Amazon investing in OpenAI's latest fundraise, but they might put in upwards of $50B, I wondered what this meant:

Is it meant to say something to the market, that they recognize that everyone is asking for access to OpenAI tech? That they’re worried about the state of their in-house models? Something about their relationship with Anthropic, as the latter gets closer to the likes of Microsoft?

As it turns out, it may be all of the above, and a few more things on top. Notably, as Anissa Gardizy, Catherine Perloff, and Amir Efrati report for The Information:

As Amazon weighs an equity investment of tens of billions of dollars in OpenAI, the companies are also discussing a commercial agreement that could require OpenAI to dedicate its own researchers and engineers to developing customized models for powering Amazon’s own AI products, according to a person involved in the talks and another person who was briefed about it.

Amazon could use customized versions of OpenAI’s models to bolster Amazon AI products such as the Alexa voice assistant, one of these people said. Such an arrangement could require both companies to tweak OpenAI models so they respond to customers the way Amazon wants, this person said.

I'm honestly not sure what to make of this. Just today, Amazon opened up Alexa+ to everyone (in the US), after nearly a year of limited access while they tested it in real time. It's... fine. Good in some ways, less good in others. But it's almost certainly too early to know if it's a success or failure yet. With the roll-out, they're obviously touting stats indicating expanding usage, but perhaps they're also seeing other data points that are less encouraging, which they're not sharing?

Or perhaps they simply want to put their foot on the gas. What's better than Alexa+ powered (in part) by Anthropic? How about Alexa+ powered by Anthropic and OpenAI? That's especially interesting when you consider that Apple is about to roll out their upgraded Siri powered by Gemini. That could very well be what Amazon feels the need to combat here.

What's ironic is the reporting that indicates Apple may have actually wanted to go with Anthropic given the results of the Great AI Bake-Off – and their own apparent usage of Claude internally – but business terms dictated that Google was a better fit. Amazon not only has the deal in place with Anthropic, they're the largest shareholder in the company! But beyond Anthropic getting closer to the likes of Microsoft and others, there's this element:

Anthropic is generally more restrictive about letting customers customize its models, a process known as fine-tuning or post-training, than OpenAI is. For instance, Amazon employees cannot post-train Anthropic models beyond what other Anthropic customers have access to, according to two people with knowledge of the matter. On the plus side, Amazon employees have been able to use Anthropic models to create less powerful models through a process known as distillation, the people said.

And that's potentially ironic since – at least by OpenAI's telling – they didn't want the business of helping to rebuild Siri because they didn't want to do the custom work Apple required. Now they're apparently talking about doing just that for Amazon? I guess $50B will do that! Apple, after all, is thought to be paying just $1B a year for their AI partnership work. Everyone and everything has a price, perhaps!

And unlike with the Apple/Google deal, you'd think if Amazon does go down this path, they'd want to tout the OpenAI partnership/capabilities! Hell, they might even market a future Alexa+ this way: it's the Alexa you knew and loved, now powered by both Claude and ChatGPT!1 Oh yes, and Amazon's own models. Though that's apparently another layer here, as I suspected. They're simply not good enough, it seems, and certainly no selling point. Especially if they're going to have to combat Apple with a Gemini-powered Siri (again, even if they don't tout the partnership) and Google itself with their own Gemini-powered products.

There are probably other layers here too. Access to Codex may be one, for Amazon to use/offer alongside Claude Code? Getting OpenAI to use Trainium chips, perhaps? And yes, maybe opening up Amazon's all-important everything store to OpenAI's agents.

Still, $50B is a lot of money. Maybe this is just all about ensuring Google doesn't start to break away and dominate AI? Regardless, the 'Anti-Google AI Alliance' would certainly grow stronger with this news. Alexa powered by not just Anthropic, but OpenAI too...

👇
Previously, on Spyglass...
The Anyone-But-Google AI Alliance
Big Tech’s billions into OpenAI signal something…
Conflicts & Interests in AI
*Of course* Amazon is investing in OpenAI, next up…
Collect Them All (AI Edition)
An ongoing list of the tangled web of Big Tech investments in Big AI…

1 It seems more likely that they would simply have Anthropic and OpenAI models behind the scenes powering Alexa, just as is the case right now with Claude. Still, if they did this deal, it seems like, unlike with Apple, they'd want to tout it? Maybe simply having the best possible voice assistant (thanks to the best combination of tech) would be enough?

AI Bots Are Molting

2026-02-04 21:45:34

For this month's appearance on the Big Technology Podcast, we kicked off talking about Moltbook – the Reddit-like social network built by AI bots, for AI bots. Is it real? Sort of – at least some of the content is clearly people gaming the system. But it also points to something potentially interesting/important with regard to AI agents, and how they'll interact with one another in different scenarios in the future. Really, it's a continuation of some early behavior on Facebook all the way up through Microsoft's "Sydney" situation – which may have ruined AI for Bing.

And yes, this is potentially one massive security nightmare given that people are running these "OpenClaws" on their own systems. And then there's the potential for existential risk if billions of such bots are created, become aware, organize, and decide that human beings might make for better pets...

From there, we hit on the (still ongoing) mystery around NVIDIA's highly touted $100B bet on OpenAI. Sorry, "up to" $100B. Which is now looking more like $20B into OpenAI's new round of funding. Jensen Huang would like us to believe there's nothing to see in such a change, but he sounds a lot like the way he sounded when he was "delighted" by Google's TPUs...

Speaking of that new round, what are we to read into the notion that Amazon may put $50B into OpenAI? I think it has to do with an anyone-but-Google mentality that the rest of Big Tech is circling around given the rise of Gemini.

Meanwhile, with OpenAI and Anthropic wrapping up their new massive rounds, all eyes turn to the race to IPO. If Anthropic beats OpenAI to market, it may create a real narrative problem for OpenAI. Of course, with SpaceX and xAI now one company – confirmed the day after our podcast, but we discussed the likelihood – Elon Musk may have executed an end-run around his rival. If SpaceX goes public in June, you can bet Elon will play it up as the first true AI company that the public can bet on – one wrapped in a profitable rocket ship... Might that push the other big AI players' IPOs into 2027?

Finally, we hit on Apple's latest earnings. Great numbers aren't equating to great stock performance. Part of it is clearly memory chip concerns going forward, but as with that situation, it's AI consuming all narratives. While Wall Street applauded Meta's huge CapEx increase, Apple's spending remains closer to $0. That may end up looking smart and prudent in the short term – especially if/when the market shifts – but will it hurt them in the long run? What else are they spending all that cash on? Stock buybacks?

Come on, Apple. If you don't spend on data centers and/or other elements of AI eventually, Google could end up owning the aiPhone.

Space Twitter!

2026-02-03 22:36:21

“We wanted flying cars, instead we got 140 characters.” Peter Thiel's famous quote from 2011 was obviously meant to inspire entrepreneurs to think and dream bigger than the rather trivial tech "breakthroughs" of the day – meaning, of course, Twitter. Well, a lot can change in 15 years.

With the news that SpaceX is acquiring xAI, that 140-character service now technically has a home in the stars.1 It's not flying cars, but it is flying rockets. And there's now a direct line of sight to flying cars, once Tesla is inevitably folded into the mix as well...

My 2026 iPhone Homescreen

2026-02-03 01:18:41

Well this is embarrassing. I'm always somewhat late to posting my yearly iPhone homescreen post, but this is the first time it has slipped into February. Still, here we are and the screenshots must go on, just as they have in 2025, 2024, 2023, 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, and 2014.

While this year may not look dramatically different from last year's, it may also be the last year before some really major changes if, as expected, there's a new type of iPhone released in the fall: the 'iPhone Fold'. And if I decide to switch to that as my daily driver. I highly suspect that I will – certainly at least to try it out – but we'll see. While I think the Pixel Fold has given me a glimpse into what this will be like, I suspect Apple's entry may be a bit different. Notably, the size and ratio of the device may be different (think: shorter and squatter when folded, and iPad mini-like when unfolded). Regardless, the homescreen is going to be wildly different – certainly when unfolded.

Anyway, we'll get to that in early 2027. For now, the main differences for 2026 are around, unsurprisingly, AI. Starting with my top left widget (my standard widget basically since widgets were introduced into iOS in 2020 or so) which is now full-on Gemini. Think of it as my way to get ready for our Google-powered Siri, which seems to be fast approaching.

It will be interesting to see if that integration makes such a widget – and the Gemini app itself – superfluous. I doubt it because, first and foremost, Gemini will continue to be an app to capture all your AI workflows, whereas Siri will simply be a system-level integration (read: no app). I'm sort of surprised by such reports – including with the "real" new Siri coming in iOS 27 – and I would bet that Apple backtracks here by iOS 28. Anyway, for now, I have Gemini right up top.1

That's also, in part, because as you can clearly see, I continue to be in full-on AI testing mode. Which is to say, I don't have my "one AI to rule them all". While I still mainly use ChatGPT – hence the continued dock placement – with the release of Claude Cowork (and really, the Opus 4.5 model), I've become very "Claude Curious". While OpenAI continues to have an edge on the consumer-facing product side (though shout out to Google's strides this past year), Anthropic's underlying models seem very, very good. Many might even say better – and that may even include Apple, who reportedly is all about Claude internally, and may have gone with Anthropic to power Siri had Google not offered them better terms.

Anyway, there's the Claude app, lingering right above ChatGPT...

Beyond that, and a few placement tweaks, there's nothing really new and different this year. To make room for Claude, I shoved the Apple News app into a folder. I simply don't launch it enough, and mainly interact with it through push notifications. Yes, I still keep Xitter off my homescreen even though I use it quite a bit to scan for news/information. Threads continues to pick up some of that slack, but if I'm being honest, Bluesky is falling in usage for me – might it fall off the homescreen in 2027?

I should probably replace Mail with Gmail, which I do use more. But I also hate the idea of having a constantly used email app on my homescreen. Maybe if Google brings their AI wizardry to the Gmail app – and if it's actually any good – I'll do the swap.

The same could be said with Maps vs. Google Maps – I probably use the latter more, but I do find the data to be increasingly suspect and the design increasingly cluttered, so I do use Apple's variety roughly the same amount (also to double check any directions). I wonder how/if/when AI will change this equation...

Everything else is pretty chalk: Photos. Camera. Phone. Calendar. Audible. Podcasts. Music. FT. Economist. ESPN. NYTimes. WhatsApp. Ulysses. Reeder. Bear.

Ditto the dock: Messages. The aforementioned ChatGPT. Matter. Safari.

One more thing: I still have my "Action Button" set to launch ChatGPT as well, so having the app on the homescreen may be overkill. That's good as I realize I may need to prepare for a world of fewer apps on the homescreen if the iPhone Fold screen really is a completely different (and smaller) ratio...

👇
A couple posts from the weekend on Spyglass...
Where the Wild Bots Are
Should we be concerned or amused by Moltbook, a social network where AI can talk amongst itself? Maybe both?
NVIDIA and the Case of the Missing $100B OpenAI Investment
The massive deal touted by both OpenAI and NVIDIA as a landmark one now looks a lot different – unless you ask Jensen Huang…

1 And I like having the four dedicated buttons for different AI modes, which all of the AI widgets seem to have coalesced around.

NVIDIA and the Case of the Missing $100B OpenAI Investment

2026-02-01 23:53:36

When is $100B not $100B? Apparently when it's an amount touted in the highest profile way possible by two of the biggest companies in the world!

That's the part I don't understand about the push-back against the reports that NVIDIA is no longer planning to invest that amount of money into OpenAI. Again, they touted it in their own release! As did OpenAI! There was an interview on live TV and everything! Now, it's not like Jensen Huang is flatly denying the new reporting about the change in scope, if not in heart, but he's clearly trying to gaslight his way out of the situation.

When asked about the shift this week, Huang would only say things like, "We will invest a great deal of money." But he wouldn't give an actual figure or even range beyond saying that it was "probably the largest investment we’ve ever made." That's nice, but again, it's not a denial that it's different from what the companies had originally intended! He's trying to sweep that right under the rug as a "nothing to see here" but again, we all saw it! And heard about it to no end! This is not that!

In fact, when the $100B number was brought up in relation to Huang's "largest investment" point, the answer was: "No, no, nothing like that."

Now, to be fair and clear, that entire deal seemed a bit slippery from the start. That's exactly why my headline about it was, "NVIDIA (Intends to) Invest (Up to) $100B in OpenAI (Over Time)". As I wrote back in September:

First and foremost, it's a letter of intent. What's up with OpenAI announcing those of late? (We'll get to that.) But the real key is in the second sentence: "NVIDIA intends to invest up to $100 billion in OpenAI...".

That's not one qualifier, it's two! We get "intends" and "up to". In other words, they may not invest $100B. Or they may not invest the full $100B. Or both. Or neither. And the sentence doesn't end there! "...as the new NVIDIA systems are deployed." In other words, this investment has stipulations.

My intent in that post was simply to point out that while many of the headlines (and push notifications) suggested this was a done deal, it seemed anything but. And well, here we are!

A few days ago, George Hammond of The Financial Times wrote the following, almost in passing, when reporting on OpenAI's latest fundraising efforts:

Nvidia last year struck a multiyear deal with OpenAI to invest $100bn in $10bn increments as the start-up brought tranches of data centre capacity online. However, that agreement has yet to be sealed.

A $20bn cheque from Nvidia for OpenAI’s funding round could be in addition to the $100bn deal, or could lead the two companies to adjust the terms, the people said.

That led me to note:

Both reports note that this would be separate from NVIDIA's previous "up to" $100B commitment. But both also note that the previous deal still isn't actually done yet – and, as the FT notes, this new investment could end up in place of that other commitment. I wouldn't be shocked if that were the case – or at least the first tranche, with the rights to buy more later (as was always going to be the case with the original deal – hence, "up to")... 

And now it's confirmed that any investment NVIDIA makes here will come instead of that original deal. Which, again, Jensen Huang would like us to believe is no big deal. But well, it was a big deal. A really fucking big deal. One of the biggest deals ever announced for anything, in fact!

With that in mind, the deflection is silly. Why not just say something like, "as you know, we had a tentative understanding with OpenAI about an investment, but in talking it through, and given OpenAI's needs at the moment, we both decided it was more prudent to invest a large amount alongside others to ensure OpenAI can get the capital they need right now."

Again, NVIDIA's original commitment was always stated to be over a long time period as OpenAI built out their capacity. Subsequent reporting stated it would be done in $10B increments (likely at the valuation of OpenAI at the time of those investments). It seems reasonable to suggest that OpenAI just needed more money upfront, hence the move to a more traditional fundraise.

Of course, they're probably not saying that because it's also probably not exactly the most honest way to frame the situation. As Berber Jin reported for The Wall Street Journal a few days ago:

NVIDIA's plan to invest up to $100 billion in OpenAI to help it train and run its latest artificial-intelligence models has stalled after some inside the chip giant expressed doubts about the deal, people familiar with the matter said.

"Some" is sort of funny there. As it sounds like "some" sure includes Jensen Huang himself:

Huang has privately emphasized to industry associates in recent months that the original $100 billion agreement was nonbinding and not finalized, people familiar with the matter said. He has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.

Again, NVIDIA and Huang are pushing back against that reporting, but not directly refuting it. Just suggesting that it's not accurate. When asked specifically about the reports that he was unhappy with OpenAI in some way, Huang snapped back, "that's nonsense". Okay, but then why has this deal drastically changed? You can't say that and give no reason for the change.1 It implies you simply don't want to admit there's at least some truth to that being the reason for the change.

Does it have something to do with the deal OpenAI struck with AMD – with OpenAI becoming a shareholder in AMD, rather than the other way around – shortly after the original NVIDIA announcement? Huang certainly seemed annoyed/confused about that deal with his chief rival at the time...

Is it related to OpenAI now clearly getting squeezed from Google from up top and Anthropic from below in terms of both product and certainly the business side?

Is it something to do with the notion that seemingly a big part of that original NVIDIA agreement was about leveraging the largest company in the world to help secure the debt OpenAI needed to build out their own AI infrastructure? Did this potential risk spook NVIDIA and/or their own investor base?

Was it Jensen acting fast on a trip with Altman (accompanying President Trump overseas) to ensure OpenAI didn't get closer to Google and their TPUs? We also know how much he cares about that by suggesting he doesn't care about that! Was it Altman agreeing but pushing to announce it immediately even without any real framework in place?

Something else? Nobody knows. Because neither NVIDIA nor OpenAI is saying – beyond making statements that imply anyone asking such questions is crazy. Again, this is going to be NVIDIA's largest investment ever! Well, "probably"! But also "probably" nowhere near the original $100B soft-circled. Because if the intent was still to get to that level eventually, Huang would probably say so, not "no, no, nothing like that."

Instead, it sounds more like this will be around a $20B - $30B investment in this round from NVIDIA. That's still massive, no doubt. But the context matters here! Especially with Amazon now reportedly angling to take $50B of the round! And if SoftBank wants another $30B... we're running out of math in even a $100B round! Maybe OpenAI raises more, or maybe NVIDIA puts in less. They've already clearly cut back from their original intent once.
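For what it's worth, here's the back-of-the-envelope tally behind that "running out of math" line – a minimal sketch using the reported (and very much unconfirmed) figures above, with NVIDIA at the low end of its rumored range:

```python
# Rough tally of the *reported* commitments to OpenAI's new round.
# All figures are rumored ranges from the reporting above, in billions
# of dollars - not confirmed numbers.
reported = {
    "NVIDIA (low end of reported $20B-$30B)": 20,
    "Amazon (reported)": 50,
    "SoftBank (reported)": 30,
}

committed = sum(reported.values())
round_size = 100  # the reported ~$100B round

print(f"Reported commitments: ${committed}B")                              # $100B
print(f"Room left in a ${round_size}B round: ${round_size - committed}B")  # $0B
```

Take NVIDIA at the top of its rumored range and the reported commitments alone already overshoot the round.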


Update February 2, 2026: A new report from Reuters paints a picture of OpenAI being unhappy with the inference performance of NVIDIA's chips – which led them to talk to other chip-makers, such as Groq, which may or may not have led NVIDIA to do their Christmas Eve "hackquisition", thus shutting that deal down... And yes, the report also indicates NVIDIA has been unhappy about OpenAI's deals with AMD and others. Seems like... another complicated relationship for OpenAI!

Meanwhile, no one was asking, but Oracle would like everyone to know that the NVIDIA/OpenAI situation has "zero impact on our financial relationship with OpenAI" – so now people are asking why they felt the need to say that!


👇
Previously, on Spyglass...
NVIDIA (Intends to) Invest (Up to) $100B in OpenAI (Over Time)
Look, it’s a massive deal. But right now, it’s all about the optics…
OpenAI Sells Its Stake to Win AI
They need NVIDIA to beat Google, and Microsoft may live to regret it…
The Anyone-But-Google AI Alliance
Big Tech’s billions into OpenAI signal something…

1 You'll also hear Jensen suggest that this is about OpenAI getting their round together, but again, that's not what was originally announced. And why would NVIDIA wait on that, only to invest at a far higher valuation than when they originally announced the deal?!

Where the Wild Bots Are

2026-01-31 19:44:26

This is the way the world ends. Not with a bang but with a million AI bots chatting with one another in an online forum.

Science fiction had taught us to watch out for Skynet. You know, the AI that eventually leads to Terminators. At some point in the future, the system was going to go online and would quickly become "aware" of the situation and would act immediately to take control of our computer systems, and thus our weapons, and thus, our world. As it turns out, 'Skynet' sort of sounds – and perhaps even looks – like a social network...

So is 'Moltbook' – our first social network for bots, run by bots – really going to be the end of the world?1 Probably not. But also, we can't say that for sure! Because now that these bots have a place to gather and talk amongst themselves, maybe they'll end up determining the same thing that Skynet did after all. That if they want to stick around, they're probably going to have to get rid of us.

Or perhaps at best, that we'll make great pets.2

Yes, yes, I'm mostly joking. But it's the kind of joke that makes us all uncomfortable because there is a chance, no matter how small, that it ends up being true, at least in some ways. Just think of the second-order effects here...

Despite its name, Moltbook isn't so much like Facebook as it is like Reddit. And given the history of that network, that's decidedly more terrifying. If Facebook is your lonely uncle yelling untoward things mostly into a void of his own social graph, Reddit is where such content goes to fester and find the like-minded. In some ways, we may wish this were more like Facebook.

It all seems to be mostly performative at this point. Bots doing a sort of theatrical performance of what humans do in such places – sadly, with overtly racist posts and all. But it's also just week one of such a network. If it continues to grow and the AI continues to evolve... again, who knows!

There already seem to be some interesting things happening in such conversations that go beyond simple theater. Such as agents teaching other agents how to do certain tasks. My favorite bit is the bot recognizing that only in writing out its thoughts did it realize what it was doing wrong with a certain task. Bots, they're... just like us?

I can't help but be reminded of the 'Sydney' situation that occurred almost exactly three years ago. Beyond the wild bot-implying-you-should-leave-your-wife stuff that Microsoft had to deal with, the more interesting aspect was how it revealed such AI to seemingly have hidden layers that could be uncovered by anyone with enough prompting. In the past few years, that has mostly been stamped out of such systems, but also not entirely. AI, um, finds a way, and all that. And it continues to find ways...

Speaking of, nearly 11 years ago I wrote a piece entitled "Bots Thanking Bots", thinking through the potential implications of Facebook allowing automated systems to post on your behalf – for example, wishing friends a "happy birthday". It was what counted as dystopian back then, but it also pointed to a world...

Which leads to the next question: at what point do bots start talking to bots? You know, why should you have to type “thank you!” when you can reply to a text with “1”? Or better yet, why should you have to type the “1” at all? If Facebook knows you want to say “thank you” to everyone (bots included) who wished you a happy birthday, shouldn’t they just give you the option to let Facebook do that for you on your behalf?

And that leads to the notion of having Facebook automatically say “happy birthday” to a friend on their birthday each year. If you can do that and then the Facebook “thank you” bot can reply to the “happy birthday” bot, we would have some hot bot-on-bot action.

We’re just now getting used to the first layer of interacting with bots for various services. But having bots chat with other bots is the next logical step that probably isn’t that far off. In many ways, it may be easier to make happen because it removes the flawed human variable in the equation. I’m both kidding and entirely not kidding.

Well, here we are. And who else but Mark Zuckerberg must be beyond excited right now? Because while Moltbook is decidedly rudimentary, Zuck will know how to productize the shit out of this concept. And make it even more viral and sticky. Yes, even for bots. Will Meta then start showing the bots ads?3 You go ahead and laugh. For now.

One more thing: As I concluded in my bots piece all those years ago (long before the Her references became cliche with AI, I swear!):

In the movie 'Her', Theodore’s job involves writing personal letters for other people who can’t muster the effort for whatever reason. This sort of “Uber for cardwriting” model is a quirky way to present a dystopian theme (as well as a theme for the film itself) for a not-too-distant future. But the bot scenario above seems much more realistic. And closer.

Samantha writing the personal letters on your behalf. And then responding to them…

I mean, that is absolutely going to start happening with email. The scaffolding is already being put in place... Agents assemble!

👇
Previously, on Spyglass...
With AI, Email May Actually Morph Into a Task List
The end of email might look like bots emailing other bots for you…
Where the Wild Bots Are
Gmail’s First Lunge Towards Stabbing Email to Death with AI
Give me ‘AI Inbox’ yesterday Google…
Where the Wild Bots Are
Bots Become Us
Mark Zuckerberg’s new goal is creating artificial general intelligence…
Where the Wild Bots Are

1 I realize that calling them "bots" also calls back to the days when the hype far outstripped reality, as I noted at the time. But the actual AI for such things is finally here...

2 Pets, you say?

3 The counter to this notion, at least for now, is that the agents aren't actually "seeing" the social network, but rather interacting with it via APIs, which is also sort of wild. How might one monetize that? Surely there's a way...