
"Hello, Computer."

2026-01-14 01:28:32

If the vocal computing category has a boy crying wolf, I may be it. I've been writing about the notion that operating our computers through voice is "right around the corner" for almost two decades. And long before that, I was an early adopter of many a PC microphone in the 1990s and later Bluetooth earpieces in the 2000s in an attempt to run all of my computing through my voice (and ears).1 As an industry, we've made great strides over that time span. But we're still not living aboard the USS Enterprise in Star Trek, blabbing away to our machines. Not yet.

Given my track record here, I feel a bit silly writing this, but I really do believe we're at some sort of inflection point for voice and computing. Why? AI, of course.

Yes, technically "AI" is why I thought we were closing in on this future 15 years ago when I was a reporter breaking the news that Siri integration would be the marquee feature of iOS 5 and what would become the iPhone 4S (the 'S' perhaps standing for 'Siri'). Pushed by Steve Jobs, Apple was trying to jump ahead to the next paradigm in computing interaction after leveraging multitouch to revolutionize the world with the iPhone (not to mention on the Mac with the mouse, back in the day). Again, voice technology had been around for a long time, but the only place it really worked was in science fiction. Armed with Siri, a startup they had acquired the year before, Apple thought now was the time. "Now" being 2011.

It didn't exactly work out that way. To the point where Apple is actually the boy who cried wolf when it comes to Siri. After the buzzy launch in 2011, 2012 was going to be the year they made Siri work well. Then 2013. Then 2014. Then Amazon launched Alexa and thanks to a better strategy around vocal computing at the time, started to eat Apple's lunch. Millions of Echo devices later and Google entered the space and it looked like we were off to the races...

But it was all sort of a head fake. A hands-free way to set timers and play music. Maybe a few trivia games. And not much else. Amazon couldn't figure out how to get people to shop in a real way with voice. Google couldn't figure out the right ads format. Billions were burned.

All the while, Apple kept telling us that 2015 was the year of Siri. Then 2016. Then 2017. 2018. 2019... All the way up until WWDC 2024, when this time, Apple meant it. Thanks to the latest breakthroughs in AI, Siri was finally going to get grandma home from that airport using some simple voice commands. It was coming that Fall. Then the following Spring. Then never. Is never good for you?

Fast forward to today, 2026. That functionality may now actually be coming this Spring. Something I obviously would never in a million years believe given Apple's track-record here. Except that they've seemingly outsourced the key parts – the AI – to Google.

So... we'll see!

Regardless, AI was the key missing ingredient. We just didn't realize it because we thought we had that technology covered. Sure, it was early, but it would get better. But as it turns out, what powered Siri, and Alexa, and even Google's Home devices wasn't the right flavor of AI. Depending on the task, it could taste okay. But most tasks left you throwing up... your hands in frustration. By 2017, it was clear that the world was shifting again, as I wrote in an essay entitled "The Voice":

And then there’s Siri. While Apple had the foresight to acquire Siri and make it a marquee feature of the iPhone — in 2011! — before their competitors knew what was happening, Apple has treated Siri like, well, an Apple product. That is, iterate secretly behind the scenes and focus on new, big functionality only when they deem it ready to ship, usually timed with a new version of iOS. That’s great, but I’m not sure it’s the right way forward for this new computing paradigm — things are changing far too quickly.

This is where I insert buzzwords. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning…

But really: AI. Machine Learning.

In hindsight, all of this was correct. But even then, we didn't realize that "Machine Learning" – the specialty which brought John Giannandrea from Google to Apple – was closer, but still needed to evolve too. Into LLMs.

As that revolution, ushered in by OpenAI with ChatGPT, building on the back of insights shockingly discarded by Google, has washed over the entire tech industry and has started to seep into the broader population, it seems like the time may be at hand for voice to work, for real this time.

This is what I saw a glimpse of with OpenAI's GPT-4o launch a couple years ago, and wrote about at the time with "OpenAI Changes the Vocal Computing Game!":

Said another way, while this is undoubtedly a series of large breakthroughs in technology, it's just as big of a breakthrough in *presentation*. And this matters because 99% of the world are not technologists. They don't care how impressive and complicated the technology powering all this stuff may be, they just care that Siri can't understand what they're actually looking for and keeps telling them that in the most robotic, cold way possible. Which is perhaps even more infuriating.

Some of this got buried under the hoopla created when Sam Altman directly referenced the movie Her and got everyone up in arms about one of the voices that sounded perhaps a bit too much like that of Scarlett Johansson. But part of it was also that while we kept inching closer, we still weren't quite there yet with regard to voice and computing.

The voice modes across all of the different services really are pretty incredible now – certainly when compared to the old school Siri, Alexa, and the like – but they're still not quite enough to make the AI sing, perhaps quite literally. Part of that is the underlying models, which for voice are slightly inferior to the text-based models – something which OpenAI is actively working on addressing – but another part of it is simply UI. While all the services keep moving it around to spur usage, voice mode is still very secondary in most of the AI services. Because they're chatbots. The old text-based paradigm is a strength and a weakness. As I wrote:

One side of that equation: the actual "smarts" of these assistants have been getting better by leaps and bounds over the past many months. The rise of LLMs has made the corpus of data that Siri, Alexa, and the like were drawing from feel like my daughter's bookshelf compared to the entirety of the world wide web. But again, that doesn't matter without an interface to match. And ChatGPT gave us that for the first time 18 months ago. But at the end of the day, it's still just a chatbot. Something you interact with via a textbox. That's fine but it's not the end state of this.

The past 18 months have seen a lot of reports about projects trying to break outside of that textbox. While the early attempts quickly – and in some cases spectacularly – failed, undoubtedly because they were trying to be too ambitious, and do too much, a new wave is now coming to tackle the problem. This is led by none other than OpenAI itself, which acquired the hardware startup co-founded by one Jony Ive to clearly go after this space. To make an "anti-iPhone" as it were. A deceptively simple companion device powered by AI and driven by voice.

That's just a guess, of course. But it's undoubtedly a good one. And you can see all of the other startups coalescing around all of this as well. Hardware startups too! Pendants, and clips, and bracelets, and note-taking rings – not one, but two separate, similar projects – oh my. All of them clearly believe that voice is on the cusp of taking off, for real this time.

And right on cue, Alexa is back, after some fits and starts, resurrected as Alexa+ powered by LLMs. Google Home is on the verge of being reborn, powered by Gemini. Siri too! Maybe, hopefully, really for real this time!

2026 feels pretty key for all of this. The models have to be refined and perfected for voice. In some cases, perhaps even shrunken down to perform in real-time on-device. Then we need to figure out the right form-factors for said devices. Sure, the smartphone will remain key, and will probably serve as the connection for most companion tech, but we're going to get a range of purpose-built hardware for AI out in the wild which will be predominantly controlled via voice.
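For a sense of why "real-time" and "on-device" are the hard parts, here's a minimal sketch of the cascade most voice assistants still run – speech-to-text, then an LLM, then text-to-speech. The three stage functions are hypothetical stubs standing in for whatever STT/LLM/TTS services a device actually uses, not any particular vendor's API:

```python
# A minimal sketch of the "cascade" architecture behind most voice assistants:
# speech-to-text -> LLM -> text-to-speech. The stage functions are hypothetical
# stubs; real systems swap in actual models (on-device or in the cloud).

def transcribe(audio: bytes) -> str:
    return "set a timer for ten minutes"        # stub: a real STT model goes here

def generate_reply(history: list[dict]) -> str:
    return "Okay, ten minutes, starting now."   # stub: the LLM call, the slow hop

def synthesize(text: str) -> bytes:
    return text.encode()                        # stub: a real TTS model goes here

def voice_turn(audio: bytes, history: list[dict]) -> bytes:
    user_text = transcribe(audio)                               # stage 1: ears
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)                             # stage 2: brain
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply)                                    # stage 3: voice

history: list[dict] = []
spoken_reply = voice_turn(b"<microphone audio>", history)
```

Every hop in that chain adds latency, which is why smaller on-device models and more end-to-end speech approaches matter so much for making any of this feel conversational rather than transactional.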

Smart glasses too, of course. Even Apple Watch. And AirPods should continue to morph into the tiny computers that they are in your ears. Voice is the key to fully unlocking all of this.2 And, one day, the true next wave: robots. Are you going to text C-3PO what you want him to do?3 Of course not, you're going to tell him.


1 Yes, I was that guy.

2 With a special shout-out to Meta's wrist-input device (born directly out of our old GV investment in CTRL Labs!) as a wild card here...

3 And with that, I have successfully conflated Star Trek and Star Wars, you're welcome, Gandalf.

And the Winner of Apple's Great AI Bakeoff is... Google

2026-01-13 01:48:08

Apple picks Google’s Gemini to run AI-powered Siri coming this year
Google’s market value surpassed Apple for the first time since 2019 as it rolls out updated artificial intelligence features.

No surprise, but now it's official:

Apple is joining forces with Google to power its artificial intelligence features, including a major Siri upgrade later this year, the tech giants said on Monday.

The multi-year partnership will lean on Google’s Gemini models and cloud technology for future Apple foundational models, according to a statement obtained by CNBC’s Jim Cramer.

Sort of weird that they would announce such a big deal this way rather than via official releases/interviews/etc. Then again, the talk has been – at least on Apple's side – to downplay the partnership. We get it, it's sort of embarrassing to have to outsource your work in such a key aspect of technology, let alone one you believed you were at the forefront of not that long ago, at least with regard to Siri. And one you promised would get grandma home from the airport soon, only to fail to launch. So now you're stuck outsourcing that work to not just someone else, but one of your chief rivals for years and years. Ouch.

Of course, even as Android battled the iPhone, the two companies remained wedded in Search – one of the most lucrative and divisive deals of all time. And this deal undoubtedly expands upon that one. While they're declining to comment on terms, Mark Gurman of Bloomberg pegged it at around $1B a year back in November. That seems low, especially when we know the Search deal itself is $20B+ a year. But there are a bunch of details we don't yet know. Apple's actual statement on the matter is interesting:

“After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users,” Apple said in a statement.

This makes it sound as if Apple won't run Gemini straight-up, but instead will use Gemini to train (distill?) their own foundational models. Unless those models are really just a white-labeled version of Gemini, which they may be at first.

Another option may be to pipe Gemini into Siri as an option alongside the current ChatGPT partnership – which Apple said isn't changing with this news. Perhaps such integrations are even more pronounced in a future build of iOS this Spring...

If you squint, you can see a three-step strategy from Apple here:

1) Place Gemini as an option for Siri alongside ChatGPT

2) Use different Gemini flavors to help train Apple's own Foundation Models

3) Work on your own custom Foundation Models without Gemini

Again, if they go with #1, I have to imagine they make the placement much more prominent. Even if not in the UI, perhaps they'll just make Siri default to Gemini and/or ChatGPT (depending on which the user chooses?) much more often – perhaps for basically all but the system-level queries (setting timers, etc).

This would buy them some time to work on #2, getting their own new models up to speed. Perhaps for iOS 27 in the Fall, or perhaps even later. Presumably, they'll get first access to new Gemini work from Google to stay at the cutting edge with this deal. And it seems like Apple will probably become one of Google Cloud's biggest partners, if they're not already? As I wrote last November:

There are probably a few other interesting wrinkles in there – one of which may be Apple's willingness to do this because a lot of their cloud infrastructure is already running on Google Cloud. So this may not be as heavy of a lift and as big of an ask as it may seem on the surface. And while the "walled off" aspect is clearly a must for Apple here, you could also imagine that the company may be willing to share some data – fully anonymized, of course – back to help constantly improve the model. And that may speak to why Google would want to do this deal (well, that and the money). Apple has devices in the wild at a scale that basically no one can match. Maybe Samsung, but this potentially unlocks a totally new user base.

Then with #1 and #2, or some combination, #3 would give Apple even more time – years – to completely rebuild and rework their own in-house AI that's less dependent on others. Apple doesn't even have a leader in place for that work yet to replace John Giannandrea, and the remaining team has been gutted by Meta, so... they need some time. As a bonus, this deal gives access to the best technology right now while they can take their time to figure out if LLMs are fully worth doing on their own, or if other, newer types of models/tech come into favor...

Update: as Kalley Huang reminds us at The New York Times, Apple did poach Amar Subramanya from Microsoft last month, seemingly to spearhead their AI efforts. But unlike JG, who reported directly to Tim Cook, Subramanya will report to Craig Federighi, thus making Federighi the actual, de-facto head of the AI initiatives. I still wonder if they don't need someone higher up, a bigger name... but again, they have time to figure that out now. Especially since before his brief stint at Microsoft, Subramanya cut his teeth helping to build and launch... Gemini.

One more thing: The report also notes in passing that this Gemini deal is not exclusive. That's probably more of an olive branch to regulators (sorry, Elon Musk), but it technically would also allow Apple to mix and match models from others as they see fit, I suppose...

It is interesting that Apple emphasizes the "careful evaluation" aspect of the process here. Presumably that means they weighed continuing to do this on their own, but also the possibility of partnering with Anthropic, as has also been reported previously, or going deeper with OpenAI. Not surprising that Apple would go with Google – who, yes, just passed them in market cap – over a startup. Still, this must be disappointing to OpenAI given the current partnership. With Anthropic, it just seems like a deal that only would have happened if Apple made it make sense monetarily, which the two sides clearly couldn't agree upon. Back to what I wrote in November:

Apple may not have wanted to pay Anthropic $1.5B a year to use Claude but $1B a year to a partner that is paying you $20B+ for that Search deal? That can be just an in-kind deal! "Google, you know that $25B you owe us this year? Make it $24B, but we'll take a custom build of Gemini. Deal?"

Deal.


Update: There is now an official joint statement, at least on Google's blog.

Similar to the one Apple gave to CNBC, but a bit longer:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

After careful evaluation, Apple determined that Google's AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards.

That last bit seems key. The wording is vague (undoubtedly on purpose), but it seems to suggest that Apple will be able to train Apple Intelligence with Gemini to still run locally on devices – and for the non-local queries, presumably Private Cloud Compute is already running in Google Cloud, even if they don't exactly tout it.


👇
Previously, on Spyglass...
Apple Finally Agrees to Fix Siri
They’ll swap Google billions for Gemini trillions (of parameters)…
Apple’s Great AI Bake Off
Gemini enters the tent alongside Claude, ChatGPT, and Apple’s own models to see who can bake the best Siri…
Hey Siri, Time for that Lobotomy
As predicted, Apple may outsource Siri to ChatGPT (or Claude)!
Apple Should Swap Out Siri with ChatGPT
Not forever, but for now. Until a new, better Siri is actually ready to roll — which may be *years* away…
Apple and Google Are So Back
The famous frenemies seem awfully aligned again…

AI Needs Its Steve Jobs

2026-01-12 21:58:05


There are two camps. Either you're in the camp that thinks AI is absolutely the future of everything and anyone who says otherwise is a moron. Which includes the other camp: those that think that AI is the worst thing in the world and is going to ruin everything. There are varying degrees within those camps, of course – for example, the "Doomers" who think that not only is AI a problem, it could quite literally be the end of the world – but you're on some spectrum in one of those camps...

The "AI-Generated Hit Movie" Horror Story

2026-01-10 23:42:37

Roku CEO Talks New $3/Month Ad-Free Streamer, Predicts First ‘100% AI-Generated Hit Movie’ Will Be Released in Next Three Years
Roku CEO and founder Anthony Wood predicts big things for AI and Hollywood in the next three years.

As the CEO of a major streaming player, you'd think Anthony Wood would rather not get Hollywood all riled up. Maybe he's taking a page out of Ted Sarandos' former playbook?

“I have no idea if there’s an investment bubble, but I know that AI is going to be huge, and it already is,” Wood said during a headliner conversation with Variety‘s co-editor-in-chief Cynthia Littleton at the Variety Entertainment Summit at CES Wednesday. “So it’s going to affect lots of different industries, but in our industry, it’s going to lower the cost of content production. I predict within the next three years, we’ll see the first 100% AI-generated hit movie, for example.”

The first part is obviously a reasonable position. Most people think it is an investment bubble, but also that it's perhaps a needed one given the prospects of the technology, which is where I generally fall (outside of a handful of outlandish deals). But Wood is not in the business of investment advice or thoughts. Again, he is in the entertainment business and so not only talking about the prospects of "100% AI-generated" movies, but predicting that they'll be here within three years, producing hits, is, um, controversial.

I also tend to think it's silly. Will we be able to produce 100% AI-generated content in three years? For sure. But we can already do that now. It's what is flooding social network feeds as we speak. What started as AI-generated images has morphed into video. And what started as short clips is naturally morphing into longer ones. But really, all you need to do to make longer ones is stitch shorter ones together. And so the AI-generated trailers that have been flooding our feeds – ranging from fake Avengers movies to fake Zelda movies – are just one step removed from full movies. Well, maybe a few steps. But mostly, it's just a matter of time. Anyone can do that right now.

The bigger question is if there will be an audience for that content. And that's the controversial part of Wood's comment. It's not the creation – again, that will happen – but the notion that it would lead to a "hit movie". I have a hard time believing that will happen for a range of reasons, including the current powers that be.

I suppose it depends on how you define "hit". But I take it to mean a movie that millions watch via "traditional" methods, be it in theaters or now on streaming services (or on "regular" television, I suppose). Three years is a long time in AI, but in the grand scheme of entertainment, it's short. Movies are just being greenlit right now that will come out in three years. The mechanisms are already established to distribute those movies. So where is this AI hit movie going to slot in? Certainly not in a movie theater!

Okay, so what about a streaming service? Maybe. But again, the most obvious such service to create hits, Netflix, is in the midst of trying to buy a movie studio and as such, Ted Sarandos is now busy backtracking from his aforementioned previous comments. Certainly Netflix, like Roku, will use AI to enable talent (and yes, drive down costs), but I have a hard time believing they'll go forward with a full AI-generated movie in that timeframe. Also, just imagine the lawsuits?

Other services? Maybe! But it certainly won't be Disney. I don't think it will be Amazon. HBO will either be owned by Netflix or Paramount, and again, it won't be them. So it could be a smaller player, but will they be able to make a movie a true "hit"? Could Roku?

The wildcard would be YouTube. Because of the UGC underpinnings of the service, you'll certainly see movie-length AI-generated content uploaded. Especially from those outside the current systems – that's a good thing! But will any of it actually "hit"? Maybe as a novelty some will get a ton of "views", but it's hard to imagine it would be considered a true "hit" movie. More just like a proof-of-concept type thing, perhaps.

People will point to music and note that there are some AI-generated "hits" popping up. Two things there. First, I still suspect there is a novelty aspect to this. Second, a three-minute song is a different beast than a two-hour movie. Many people are willing to give up three minutes of their time. Two hours is a commitment. One other element that our AI-enabled future is going to highlight: time is the ultimate premium.

Now, go longer than three years for that AI-generated hit movie and all bets are off. But the window Wood gives seems too short. Not because of AI, but because of the way distribution currently works.

I would also just say that while such AI-generated content is coming, I continue to bet that once the novelty factor wears off, you'll be hard-pressed to find big hits there in the way you do with human-created movies and television. And, counterintuitively, I think as AI-generated content continues to flood every surface, it will raise the demand (and thus, value) for human-generated content.

I do think there will be "hybrid" content – part human-created, part AI-created – but I think people will prefer to watch work that was created by other people, because they will value the time and care spent on it. We'll slowly realize it's just as much about the input as the output.

To be fair, Wood does hit on some of this in the same chat:

“I think people underestimate how dramatic that’s going to be,” Wood said. “I mean, obviously I don’t think people are going to get replaced. Humans are still the creative force behind creating content and hit shows, but the cost is going to come down dramatically, and that’s going to change a lot of companies’ business models. So I’m focused on, how do we take advantage of that? That’s a big opportunity for us.”

Again, that's reasonable. The AI-generated movie topping the box office in 2029? Less so.

One more thing: I continue to be intrigued by Roku's "Howdy" play:

As for Howdy, Wood describes the $3-per-month offering as “not designed to replace a major streaming service like Netflix or Disney, it’s designed to be an add-on service.”

“The opportunity for Howdy was, if you just look at what’s going on in the streaming world with streaming services, they’re getting more and more expensive,” Wood said. “They keep raising prices, and they keep adding larger and larger ad loads. So the part of the market where it actually started, low cost and no ads, is gone now. There’s no streaming services that addressed that portion of the market. That’s the opportunity for Howdy. It’s three bucks a month and no ads and it’s doing extremely well. Just like we built the Roku Channel using the promotional power of our platform, that’s what we’re doing with Howdy. We’re using that to grow it. But Howdy has very broad appeal. There’s lots of people in the world that want a $3-a-month streaming service with no ads. So we’ll start on Roku, but we’ll also take it off platform as well. I think it’s going to be a really large business.”

I've been critical of Roku with regard to ads in the past, but this makes a lot of sense to me. And I suspect we'll see others start to copy this model too.

👇
Previously, on Spyglass...
Terminating the AI vs. Hollywood Tropes
James Cameron has some interesting – and some refreshing – and some controversial – thoughts about AI…
Oh No, a Tech Company is Buying a Movie Studio
This is the end of Hollywood? Come on.
Hollywood vs. AI: The Movie
My god, the open letter is full of stars -- especially Cate Blanchett
Sora’s Slop Hits Different
It’s about creative comedy creation, stupid
People at a Premium
AI will change Hollywood -- for the better

The Incredible Valuation Heights of the OpenAI "Constellations"

2026-01-09 23:53:06


Last March, I set out to map the OpenAI "Constellations" – that is, the startups not only in OpenAI's orbit, but those directly tied to it, founded by entrepreneurs that had previously been at OpenAI. No surprise, given both our moment in time with AI and the current funding environment around AI, there are a lot. So I set a threshold at the $1B valuation mark. By my math back then – again, a whole 10 months ago – those entities had a combined valuation of around $200B. And when you added in the valuation of OpenAI itself at the time, we were right at the $500B mark.

Just six months later, those numbers had changed drastically, so I checked back in on those valuations. At the end of August, that $200B aggregate valuation was past $400B. And with OpenAI's updated valuation, we were right around $1T in combined value.

Well, we're not quite six months since then, but given that Anthropic, xAI, and OpenAI itself – the three main drivers of those valuations, obviously – have all either just raised or are in the process of raising again, let's check back in...

Ursa Super Major

  • Anthropic was in the process of raising the round that would value them at $183B back then. Just months later, they're working on a new round which will value them at $350B – before the investment. And while that process may have been kicked off by NVIDIA and Microsoft – yes, the same Microsoft that owns 27% of OpenAI – committing to invest, that money will apparently be on top of this new $10B that they're likely to raise here from other investors. If history is any indication, Anthropic will raise more than $10B, but let's just keep it there for now and add it to the $15B in commitments from Microsoft and NVIDIA, and that gives them a new post-money of $375B.
  • xAI was said to be trying to raise at $170B to $200B last August, which I gave them credit for because of course Elon Musk was going to be able to raise whatever he wanted, comps be damned! Well, it took a bit longer, but they have now announced the $20B Series E, which they really, really would like you to know exceeded their target. They didn't announce the valuation, but reports put it "above $230B", which is weirdly worded, but I take to mean $235B.

Given that these two companies formed by those formerly affiliated with OpenAI are valued above $600B by themselves, it's probably worth breaking them into their own category, and we know how much AI companies love to append "Super" to everything, so let's go with "Ursa Super Major".

Ursa Major

  • Safe Superintelligence – speaking of "super" – appears to still be "stuck" at the $32B valuation from their "seed" round. I'll comically note that this seems almost prudent given the rate at which everyone else is raising. Sutskever may be giving his company some optionality, as a few of the big players would undoubtedly still be open to acquiring them – well, "hackquiring" at least – in this general price range, to get access to Sutskever if nothing else. Especially given that they're down a co-founder (and first CEO) thanks to Meta's mad scramble to catch up in AI (which included trying to acquire Safe Superintelligence, but instead they ended up investing – before they poached Daniel Gross...). Sutskever has opened up a bit more about what SSI is doing recently, including an explanation of how they can compete without needing to raise endless capital like their rivals. Have I mentioned that Sutskever's own OpenAI shares may be worth anywhere from tens of billions all the way up to $100B now? He can probably fund SSI's "return to research" himself. Still, don't be shocked if he raises again sometime soon and resets their valuation...
  • Thinking Machines Lab would provide the reason for SSI to raise again if they're able to complete a round valuing them at $50B or $60B! That was rumored a few months ago already and doesn't seem to have happened yet. So I'll keep them behind SSI for now on the list, but give them credit for the reported valuation jump from the "mere" $12B valuation "seed" round in July. They have also lost a co-founder, also to Meta, since then. But hey, at least they have a product in market now. And promises of more to come this year...
  • Perplexity would also be leapfrogged if Thinking Machines gets that round done. Their most recent round reportedly pushed their valuation to $20B, up from the $18B just a couple months prior (when I last checked in). One of my predictions for 2026 is that someone will buy Perplexity, as they may need to step off the hamster wheel of funding. Samsung is the most obvious acquirer given their partnership (and great financial situation at the moment), but they've also been going deeper with Google... just as the other would-have-been buyer, Apple, has been...

These combined numbers add up to over $100B now, just for Ursa Major, without the 'Supers'!

Ursa Minor

  • Harvey jumped up to an $8B valuation in December, up from $5B at the last check in.
  • Periodic Labs may now be a bit above the $1B mark from the last check-in, as the round seems to have grown from a rumored $200M to a confirmed $300M. Some reports suggest a valuation of $1.5B or even perhaps $2B.
  • Cresta is one I actually missed in the previous reports, probably because it was co-founded way back in 2017 by early OpenAI researcher Tim Shi. It currently sits at a $1.6B valuation, which it hit back in 2022.
  • Eureka Labs still hasn't raised any outside capital, it seems. Though Andrej Karpathy obviously could at any time, probably at any valuation.

So that's a solid haul for this group: just over $10B.


Okay, so grouped all together, we're looking at an aggregate valuation of just over $730B – a number which has nearly quadrupled in less than a year. And nearly doubled just since August. Obviously it's top-heavy, but even taking out the 'Supers', you'd still have a group valued collectively at $123B. Would that alone be the highest valued group post-PayPal Mafia? Or even if you remove Elon from the full equation, you're looking at almost exactly a $500B aggregate valuation for the OpenAI diaspora.

And if you add OpenAI itself into the mix, with their newly targeted $830B valuation, well, you're truly in the stars now: just over $1.5T.
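For those keeping score at home, the back-of-the-envelope math behind those numbers looks roughly like this. The figures are approximations pulled from the reporting above; I'm crediting Thinking Machines at the rumored $60B and Periodic Labs at roughly $2B, which is how the subtotals line up:

```python
# Rough tally of the OpenAI "Constellations" valuations cited above, in $B.
# Approximations only: Thinking Machines at the rumored $60B, Periodic Labs ~$2B.

ursa_super_major = {"Anthropic": 375, "xAI": 235}
ursa_major = {"Safe Superintelligence": 32, "Thinking Machines Lab": 60, "Perplexity": 20}
ursa_minor = {"Harvey": 8, "Periodic Labs": 2, "Cresta": 1.6, "Eureka Labs": 0}

supers = sum(ursa_super_major.values())                          # 610
total = supers + sum(ursa_major.values()) + sum(ursa_minor.values())

print(total)                              # ~734  -> "just over $730B"
print(total - supers)                     # ~124  -> ~$123B without the 'Supers'
print(total - ursa_super_major["xAI"])    # ~499  -> ~$500B with Elon removed
print(total + 830)                        # ~1564 -> just over $1.5T with OpenAI itself
```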

That's awfully close to Meta's market cap. And if/when OpenAI and Anthropic go public, both may shoot past that mark by themselves given likely investor demand (depending on timing). And if/when xAI merges with Tesla, making them an AI company as well...

Disclosure: GV, where I was a partner for over a decade, is an investor in Harvey and Thinking Machines Lab. Google, which is the LP of GV, is an investor in Anthropic and, I believe, Safe Superintelligence.
👇
Previously, on Spyglass...
Peering Back at the OpenAI “Constellations”
The aggregate valuation looks to surge past $400B…
The OpenAI Constellations
The out-of-this-world funding numbers in OpenAI’s orbit
Collect Them All (AI Edition)
An ongoing list of the tangled web of Big Tech investments in Big AI…

Gmail's First Lunge Towards Stabbing Email to Death with AI

2026-01-08 22:56:09


Do you like email?

It is, of course, a rhetorical question as no one likes email.1 It's a fact we're reminded of each and every new year after the holidays are over when you open your email service and your face is met with a fist in the form of your inbox. And while you might think that Google likes email because they run the most popular email service in Gmail, with 3 billion plus users, there's obviously a very real association risk here. If a company makes the product you most hate to use... there are negative halo effects, just ask Meta.

Anyway, the good news for Google is that they may finally have the appropriate tools to combat the email problem. While they've long been inserting AI into Gmail here and there – the auto-completion/correction features were probably some of the first AI that a lot of the world engaged with in a forward-facing manner – Gemini now seems robust enough to be inserted everywhere. They've obviously been ramping that up in Search, the better to disrupt themselves before someone else does, and now it would seem to be Gmail's turn.

In a blog post today, the company outlines what Gmail will look like in the "Gemini Era". A lot of it you've undoubtedly already seen in the form of 'AI Overviews' – though this is seemingly getting a nice expansion to your entire inbox based on queries, and not just individual overviews at the top of emails. But there's also the more standard and straightforward: 'Help Me Write', 'Suggested Replies', and, of course, 'Proofread'. But the real key is something you haven't seen to date: 'AI Inbox'.

While Google is just starting to test it now, you can see what it looks like and how it will function in their post. They describe it thusly:

Your inbox is filled with updates; some are critical, others are just noise. The new AI Inbox filters out the clutter so you can focus on what’s most important.

AI Inbox is like having a personalized briefing, highlighting to-dos and catching you up on what matters. It helps you prioritize, identifying your VIPs based on signals like people you email frequently, those in your contacts list and relationships it can infer from message content. Crucially, this analysis happens securely with the privacy protections you expect from Google, keeping your data under your control. This lets high-stakes items — like a bill due tomorrow or a dentist reminder — rise to the top. We’re giving trusted testers access to AI Inbox before making it more broadly available in the coming months.

The first part sounds like just an expansion of what Gmail has long done with algorithmic sorting. I turned this off long ago as I found it not that useful when it worked and insanely frustrating when it didn't. But the second part is the key. From the sound (and look) of it, 'AI Inbox' is going to completely blow up your inbox as you've known it to date.
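To make the "signals" idea concrete, here's a toy sketch of the kind of prioritization Google is describing. It's purely illustrative – the signals and weights are made up, and this is not Google's actual implementation:

```python
# Toy illustration of signal-based inbox prioritization, in the spirit of what
# Google describes for 'AI Inbox'. The signals and weights here are invented
# for illustration; Google's actual system is not public.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    emails_exchanged_with_sender: int   # signal: how often you email this person
    sender_in_contacts: bool            # signal: is the sender a saved contact
    mentions_deadline: bool             # signal: "due tomorrow", "reminder", etc.

def priority(e: Email) -> float:
    score = 0.0
    score += min(e.emails_exchanged_with_sender, 50) * 0.1   # frequent correspondents ("VIPs")
    score += 2.0 if e.sender_in_contacts else 0.0
    score += 3.0 if e.mentions_deadline else 0.0             # bills, appointments rise to the top
    return score

inbox = [
    Email("dentist@example.com", "Reminder: cleaning tomorrow", 2, True, True),
    Email("deals@example.com", "48 hours only!", 0, False, False),
]
briefing = sorted(inbox, key=priority, reverse=True)          # what the "briefing" surfaces first
```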

To be clear, that old inbox will still be there. Google isn't crazy enough to force 3B+ users into this new reality – he says this and then immediately remembers Google+ being shoved in the faces of billions of users – but apparently there will be a new 'AI Inbox' in the sidebar above your regular, old, hated inbox.

And in this new inbox you'll find not email, but information. Things you need to know or do, automatically surfaced for you from that dreaded old inbox. The key, of course, is Gemini. Google's AI may finally be good enough to fulfill the promise of killing your inbox. Or at least beating it into submission. A place you go from time to time when you want to remember the pain you left behind.

But let's not get ahead of ourselves. Plenty have promised such solutions in the past. For this to work, it has to be incredibly accurate and useful. It has to overcome old habits and the urge to fall back into the "I'll just reply to the email myself" mentality. Sure, it may take just a few seconds to respond to that email, but in aggregate, we're all spending hours every week and, as such, days every year on email. Days. Completely consumed by email. Gone, never to come back. Because of the scale of your inbox.

Google can use that scale to their advantage to train their AI to make this work. No, they're not reading your emails to do this, but using other signals and data at scale to construct this new type of inbox. Honestly, they may be the only ones who can do this.

And if they do, it's step one to a potential true end of email. I mean, it will always stay around as a sort of fall-back – a true cockroach of the internet – but the way you interact with it will change drastically. I wrote about this topic last June in a post entitled: With AI, Email May Actually Morph Into a Task List. A couple passages I'll highlight since it was for members of The Inner Ring:

Moving this concept over to email, if AI can write your email and read your email, it's easy now to joke that in many jobs, such as the ones I've done in my career, you could just go on vacation. Of course, you couldn't really do that for the same reason as mentioned above: at some point, someone – a human – has to be in the loop about something. I mean, honestly, for a lot of email back-and-forths, probably not – but for some, there are real-world tasks to be completed. By someone. One day, bots – robots – may be able to handle those too. But in the more immediate future, I think this looks more like emails being distilled into actual action items.

Yes, your inbox will go from being a de-facto to-do list (generated by someone else), to an actual to-do list (generated by your AI).

To some, this will sound like absolute hell. But I suspect it may lead to actual productivity gains for most people because again, the scaling of the email inbox has become untenable. Boiling email down to its essence could work.

That's what Google is creating here with 'AI Inbox'. But again, it's just step one. The next step is obviously using AI Agents to do many of these newly created tasks/to-dos,2 including responding to the messages:

Anyway, the point is that I can see a world where AI actually does lead to the end of email, in a way. It won't eliminate it, but it could eliminate your need to do it. Or, at the very least, cut back on it quite a bit because it will be abstracted into an AI layer above it, where you talk to your AI assistant about tasks and to-dos. Sure, you'll still be able to send it, but it will be more like disengaging the autopilot to fly manually. For most things, it will become something the bots do on your behalf.

Speaking of Agents:

And this antiquated technology could become the ultimate fallback for "agentic" communication when various newer protocols don't align for whatever reason. In many ways, email already is that fallback for many things today. Being the cockroach of the internet has some advantages...

Wouldn't that be fun? A sort of "have your agent email my agent"...3
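And if you wanted to play with the first half of that idea yourself today – distilling an email into an action item – a minimal sketch might look something like this, assuming an OpenAI-style chat completions API. The model name and prompt are purely illustrative, and this is obviously not how Gmail's 'AI Inbox' works under the hood:

```python
# A minimal sketch of distilling an email into a to-do item with an LLM.
# Uses the OpenAI Python client; the model name and prompt are illustrative,
# and this is not a description of Gmail's actual pipeline.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def email_to_todo(sender: str, subject: str, body: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {"role": "system",
             "content": "Distill the email into a single, short action item. "
                        "If no action is required, reply with 'No action needed.'"},
            {"role": "user",
             "content": f"From: {sender}\nSubject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content.strip()

todo = email_to_todo("billing@example.com", "Invoice #1234",
                     "Your invoice is attached. Payment is due Friday.")
# -> something like: "Pay invoice #1234 by Friday."
```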

One more thing: even beyond the whole end-of-email thing, there's another upside I see here, a broader one:

To others, this will sound downright dystopian. We're taking human-written email and turning it into bulleted action items generated by AI. But again, it feels inevitable. And, in many ways, needed.

Oddly, it may lead to a world in which letters – old school letters – make a comeback. I've long been of the notion that I think one of the second order effects of AI is that human-made creations will *increase* in value, and we might see something along these lines with handwritten notes.

Yes, this may lead to a new sub-economy where people are paid to write these personalized notes a la Theodore Twombly in 'Her'. Yes, this is a 'Her' reference without mentioning Samantha. Until now. Damnit.

Killing email to restore the value of writing to humanity. Who says no?


1 I'm sure some have hated email longer than I have, but I'm very well documented...

2 We used to call these "bots" part 1.

3 We used to call these "bots" part 2.