A collection of written works, thoughts, and analysis by M.G. Siegler, a long-time technology investor and writer.

The Vision Pro Slam Dunk

2026-01-15 00:06:43


It took a lot of work, but I finally did it. I finally booted up my Vision Pro to watch the first full NBA game shot in the "Apple Immersive" format. It was... very cool. With some very real caveats. And it seemingly points to the future of the device itself. You don't even have to really squint to see it.

Honestly, given the dearth of content over the first two years of the Vision Pro's lifespan, I'm sort of shocked that Apple pulled this off. Just going by how long it has taken Apple to release short highlight footage from other sporting events, I would have assumed we would start getting full games weeks (or months!) after they first aired. The fact that they not only turned around this Lakers vs. Bucks game right after it was played, but also showed it live to the small subset of folks in the right market (or with the right VPN setting) is rather incredible.1 Again, given Apple's previous cadence with such content, I would have thought this would be a 2027 or 2028 thing. I'm not trying to be a jerk, that's just how painfully slow they've been with releasing this type of content!

At the same time, such content is clearly – clearly – the way to move the needle when it comes to the Vision Pro. Granted, plenty of other hurdles remain – more on those in a bit – but if they want people to actually be excited about not just buying, but actually using the device, they need content people are excited about watching! It's not rocket science, it's behavioral science. It's human nature. So it's good to see Apple hustle here to at least try to drum up interest in the device.

While I obviously didn't watch it live, Jason Snell did, and in his thoughts for Six Colors on the experience, he described it as "surprisingly... normal?" I agree with that. After the initial "wow" factor of being transported to Los Angeles, with Crypto.com Arena wrapped around you, wore off, it felt like... you were watching a basketball game. It wasn't exactly like watching it on TV, but it also wasn't exactly like watching it in person. It was sort of... in-between.

Depending on the vantage point, it sort of veered between the television experience and the in-person experience. And that was the most jarring element of watching it – Apple kept cutting between those vantage points. You had no say over the matter; you were just zoomed from one area of the arena to another on the whim of the producers. It wasn't as jarring as it was in those aforementioned short highlight clips of other sporting events because you did get to linger longer in each spot, given that the entire experience (meaning, the entire game) was just over two hours long. Still, during the actual game, the cuts between the cameras behind each basket, depending on where the action was happening, were... weird. You were forced to reorient yourself constantly on the fly. I sort of got used to it as the game went on, but it still felt a bit like a brain teaser – especially the cuts between the same perspective on opposite ends of the court.

Ben Thompson clearly hated this aspect, as his fun Stratechery rant going after Apple for not understanding their product makes clear. All he wanted was a single vantage point, ideally court side, where you were planted and never left. That would, he argues, be actually immersive. Because it wouldn't make you do the constant mental calisthenics I describe above. I don't disagree, but I also don't think that's all Apple should do. I think that should be an option.2

My feedback would be an extension of this: give us options for how we want to view the game. You can have the option to watch it court side. Or the option to watch it from behind the basket. Or the option to watch it in the press booth. Or the option to have a couple other vantage points in the crowd; you know, to feel actually immersed. And the option to cut between these views as you, the viewer, see fit. Or the current and only option to have someone else make those calls!

Obviously, this wouldn't be the most immersive experience possible because it would break the wall of illusion since there are not options to immediately cut between vantage points in real life. But again, I don't view this as a replacement of going to a real life game. It's a more immersive version of television. In some ways, it can be better than either, but in others it will be worse. It's just a new, cool format. Apple should lean into that.

In a way, it's the same initial takeaway I had after getting the Vision Pro two years ago. For years, the world had wondered when Apple would create their own television set. And famously, some of Steve Jobs' last words to his biographer Walter Isaacson before he passed away pointed to him finally "cracking" the problem for Apple. As I wrote in February 2024:

Again, as noted above, Jobs clearly wasn't talking about the Apple TV set top box. It sure sounded like another product, *an actual television set*. But what if everyone was reading that too literally and he did mean something *entirely* different. A true game-changer, Apple-style. Something like a *new kind* of television. One which seamlessly syncs to all of your devices via iCloud. One with no need for remotes. One with the simplest user interface you could imagine...

One you wear, perhaps. "Headphones for video" as it were.

I know, it's a bit of a stretch. But nevertheless, that is what Apple has stumbled upon here. Well, that's not fair. I think Apple knows exactly what they have from a content perspective with the Vision Pro, I just think they're muddying the message with all of the other stuff they're trying to showcase. And in part, they may feel like they have to because the device is $3,500.

As mentioned, given how long it has taken Apple to get up-to-speed on the content front for Vision Pro, I'm no longer sure they knew what they had with the product when it launched (which it should not have, at least not fully, in its current state). But I think these NBA games, alongside the first concert footage, and the first (short) movie actually shot for Vision Pro, and even, oddly, 3D movies (which Apple keeps pushing heavily in the Vision Pro Apple TV app), makes it perfectly clear. This is a content consumption device. This is their television set.

Back to the game, I definitely preferred the mid-court, scorer's table angle. It was a bit low, but fun to see Bucks' coach Doc Rivers and Lakers' coach JJ Redick stalk the sidelines, getting awfully close at points. That vantage point also had some blind spots – such as when a player is taking a corner 3 – and that's in part why I would like a multi-vantage-point option.

The camera angle behind the basket was too high for my taste – though I assume they also don't want to destroy their undoubtedly expensive cameras! But it's from these lower angles where you can tell just how big LeBron James is. And just how much bigger Giannis Antetokounmpo is than LeBron!3

It was interesting to mostly not cut away from the floor during time-outs when the rest of those watching remotely would obviously go to commercial. This made it more like being at the arena, but augmented by some commentary at points. It was also fun to watch the halftime activities and half-court-shots-for-money this way. Though there was some pure "dead time" when you would just look around – again, just like being at the actual game, but clearly with a more voyeuristic feeling.4 Presumably if this format does take off, commercials will be shoved into such crevices eventually.

I appreciated the inclusion of a virtual scoreboard when you looked down at the court because while looking up to the actual scoreboard in the arena works, it also highlights just how heavy the Vision Pro is!

Speaking of... the biggest problem Apple actually has with the Vision Pro remains just how much of a pain it is to put on and use. As both a sports and Apple fan, I should have jumped at the opportunity to view this content the second I could. But I waited because I knew I would have to make sure the Vision Pro was charged, and that it was a time my family wasn't around. Because despite Apple's comically misguided attempts to make using such a device in public less awkward with some extremely awkward-looking eyes, it's a device best used alone.

That said, if Apple does nail this format – and yes, brings the cost of the Vision Pro down eventually – I could see this becoming an interesting way to watch a game with someone. Unlike with a movie, it's natural to talk to friends/family during a game, and wouldn't it be cool if you had two different vantage points and you could tell the other person to "check the action out from here"? Or to watch a replay from another angle they witnessed live? Etc. The fact that all of this could be done without people being in the same room – and it would probably be less awkward that way! – is potentially even more interesting.

My multi-angle idea also opens up some re-watchability options that don't typically exist for sports. Granted, most people still aren't going to rewatch a game they've already seen, but die-hard fans probably would from a different angle than the one they chose the first time if it was a good game. And again, watching replays from different angles of your own choosing would be pretty killer in-game.

Anyway, I come away after watching this Lakers/Bucks game thinking that Apple is actually closer to figuring this out, at least on the content side, than I would have thought going into it. Yes, there are obviously things they need to tweak, but those seem like relatively straightforward fixes – well, as straightforward as it can be to record and stream live from five or six different immersive cameras around a venue. Ben Thompson wants something simple – his static, fully immersive view from one vantage point – but I want something a bit more: options. And I think my desire more closely matches what the masses might want – and that includes the option for this more "packaged" telecast that's more like traditional TV too, BTW. The bigger issue remains the masses ever getting near a Vision Pro with its price point and general inconvenience.

But those things can change too over time if Apple sticks with it. Meta seems to be backing away from the space now, but I wonder if Apple can't sort of do an end run around the Quest here. Because there's no way this type of experience will be available on mere Smart Glasses any time soon. Apple has the money and patience to make this work for the Vision Pro. And so the biggest thing holding them back may be the rights to all of this content.5 But as the leagues make moves to take over more control of such rights themselves, there's a window for Apple here.6

Now they just need to make a version of the Vision Pro where I can actually drink a beer while watching a game while fully immersed. Thank god for long neck beer bottles, I guess?


1 Given the small Vision Pro user base and the restricted live markets here, how many people do we think actually watched this live? A few hundred, max?

2 Ben had similar feedback around the immersive Metallica concert, wanting just to be in the crowd like real life. But I would much prefer to be next to the band on stage, something not possible in real life! (And yes, the option to see it from the crowd too. And backstage. Etc.)

3 There's no way Giannis isn't 7-feet, right? He's listed as 6'11", but come on, these vantage points show he's much taller than 6'9" LeBron...

4 One of the more interesting elements would be when you'd see, say, a security guard just off to the side using their phone. Or a person sitting near you just drinking a beer. All things you could see at the game itself, but again, weird that they can't possibly know you're focused on them in that moment...

5 Just think of the money to be made here though. If you could sell both one-off and season "tickets" to these games. $20 a game? More? Concerts $50?

6 And that's presumably why Apple prefers more encompassing rights in their own sports rights deals. How long until all MLS games are available to stream in Apple Immersive on the Vision Pro? What about F1?

"Hello, Computer."

2026-01-14 01:28:32


If the vocal computing category has a boy crying wolf, I may be it. I've been writing about the notion that operating our computers through voice is "right around the corner" for almost two decades. And long before that, I was an early adopter of many a PC microphone in the 1990s and later Bluetooth earpieces in the 2000s in an attempt to run all of my computing through my voice (and ears).1 As an industry, we've made great strides over that time span. But we're still not living aboard the USS Enterprise in Star Trek, blabbing away to our machines. Not yet.

Given my track record here, I feel a bit silly writing this, but I really do believe we're at some sort of inflection point for voice and computing. Why? AI, of course.

Yes, technically "AI" is why I thought we were closing in on this future 15 years ago when I was a reporter breaking the news that Siri integration would be the marquee feature of iOS 5 and what would become the iPhone 4S (the "S" perhaps standing for "Siri"). Pushed by Steve Jobs, Apple was trying to jump ahead to the next paradigm in computing interaction after leveraging multitouch to revolutionize the world with the iPhone (not to mention on the Mac with the mouse, back in the day). Again, voice technology had been around for a long time, but the only place it really worked was in science fiction. Armed with Siri, a startup they had acquired the year before, Apple thought now was the time. "Now" being 2011.

It didn't exactly work out that way. To the point where Apple is actually the boy who cried wolf when it comes to Siri. After the buzzy launch in 2011, 2012 was going to be the year they made Siri work well. Then 2013. Then 2014. Then Amazon launched Alexa and thanks to a better strategy around vocal computing at the time, started to eat Apple's lunch. Millions of Echo devices later and Google entered the space and it looked like we were off to the races...

But it was all sort of a head fake. A hands-free way to set timers and play music. Maybe a few trivia games. And not much else. Amazon couldn't figure out how to get people to shop in a real way with voice. Google couldn't figure out the right ads format. Billions were burned.

All the while, Apple kept telling us that 2015 was the year of Siri. Then 2016. Then 2017. 2018. 2019... All the way up until WWDC 2024, when this time, Apple meant it. Thanks to the latest breakthroughs in AI, Siri was finally going to get grandma home from that airport using some simple voice commands. It was coming that Fall. Then the following Spring. Then never. Is never good for you?

Fast forward to today, 2026. That functionality may now actually be coming this Spring. Something I obviously would never in a million years believe given Apple's track-record here. Except that they've seemingly outsourced the key parts – the AI – to Google.

So... we'll see!

Regardless, AI was the key missing ingredient. We just didn't realize it because we thought we had that technology covered. Sure, it was early, but it would get better. But as it turns out, what powered Siri, and Alexa, and even Google's Home devices wasn't the right flavor of AI. Depending on the task, it could taste okay. But most tasks left you throwing up... your hands in frustration. By 2017, it was clear that the world was shifting again, as I wrote in an essay entitled "The Voice":

And then there’s Siri. While Apple had the foresight to acquire Siri and make it a marquee feature of the iPhone — in 2011! — before their competitors knew what was happening, Apple has treated Siri like, well, an Apple product. That is, iterate secretly behind the scenes and focus on new, big functionality only when they deem it ready to ship, usually timed with a new version of iOS. That’s great, but I’m not sure it’s the right way forward for this new computing paradigm — things are changing far too quickly.

This is where I insert buzzwords. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning…

But really: AI. Machine Learning.

In hindsight, all of this was correct. But even then, we didn't realize that "Machine Learning" – the specialty which brought John Giannandrea from Google to Apple – was closer, but still needed to evolve too. Into LLMs.

As that revolution, ushered in by OpenAI with ChatGPT, building on the back of insights shockingly discarded by Google, has washed over the entire tech industry and has started to seep into the broader population, it seems like the time may be at hand for voice to work, for real this time.

This is what I saw a glimpse of with OpenAI's GPT-4o launch a couple years ago, and wrote about at the time with "OpenAI Changes the Vocal Computing Game!":

Said another way, while this is undoubtedly a series of large breakthroughs in technology, it's just as big of a breakthrough in *presentation*. And this matters because 99% of the world are not technologists. They don't care how impressive and complicated the technology powering all this stuff may be, they just care that Siri can't understand what they're actually looking for and keeps telling them that in the most robotic, cold way possible. Which is perhaps even more infuriating.

Some of this got buried under the hoopla created when Sam Altman directly referenced the movie Her and got everyone up in arms about one of the voices that sounded perhaps a bit too much like that of Scarlett Johansson. But part of it was also that while we kept inching closer, we still weren't quite there yet with regard to voice and computing.

The voice modes across all of the different services really are pretty incredible now – certainly when compared to the old school Siri, Alexa, and the like – but it's still not quite enough to make the AI sing, perhaps quite literally. Part of that is the underlying models, which for voice are slightly inferior to the text-based models – something which OpenAI is actively working on addressing – but another part of it is simply a UI one. While all the services keep moving it around to spur usage, voice mode is still very secondary in most of the AI services. Because they're chatbots. The old text-based paradigm is a strength and a weakness. As I wrote:

One side of that equation: the actual "smarts" of these assistants have been getting better by leaps and bounds over the past many months. The rise of LLMs has made the corpus of data that Siri, Alexa, and the like were drawing from feel like my daughter's bookshelf compared to the entirety of the world wide web. But again, that doesn't matter without an interface to match. And ChatGPT gave us that for the first time 18 months ago. But at the end of the day, it's still just a chatbot. Something you interact with via a textbox. That's fine but it's not the end state of this.

The past 18 months have seen a lot of reports about projects trying to break outside of that textbox. While the early attempts quickly – and in some cases spectacularly – failed, undoubtedly because they were trying to be too ambitious, and do too much, a new wave is now coming to tackle the problem. This is led by none other than OpenAI itself, which acquired the hardware startup co-founded by one Jony Ive to clearly go after this space. To make an "anti-iPhone" as it were. A deceptively simple companion device powered by AI and driven by voice.

That's just a guess, of course. But it's undoubtedly a good one. And you can see all of the other startups coalescing around all of this as well. Hardware startups too! Pendants, and clips, and bracelets, and note-taking rings – not one, but two separate, similar projects – oh my. All of them clearly believe that voice is on the cusp of taking off, for real this time.

And right on cue, Alexa is back, after some fits and starts, resurrected as Alexa+ powered by LLMs. Google Home is on the verge of being reborn, powered by Gemini. Siri too! Maybe, hopefully, really for real this time!

2026 feels pretty key for all of this. The models have to be refined and perfected for voice. In some cases, perhaps even shrunken down to perform in real-time on-device. Then we need to figure out the right form factors for said devices. Sure, the smartphone will remain key, and will probably serve as the connection for most companion tech, but we're going to get a range of purpose-built hardware for AI out in the wild which will be predominantly controlled via voice.

Smart glasses too, of course. Even Apple Watch. And AirPods should continue to morph into the tiny computers that they are in your ears. Voice is the key to fully unlocking all of this.2 And, one day, the true next wave: robots. Are you going to text C-3PO what you want him to do?3 Of course not, you're going to tell him.


1 Yes, I was that guy.

2 With a special shout-out to Meta's wrist-input device (born directly out of our old GV investment in CTRL Labs!) as a wild card here...

3 And with that, I have successfully conflated Star Trek and Star Wars, you're welcome, Gandalf.

And the Winner of Apple's Great AI Bakeoff is... Google

2026-01-13 01:48:08

Apple picks Google’s Gemini to run AI-powered Siri coming this year
Google’s market value surpassed Apple for the first time since 2019 as it rolls out updated artificial intelligence features.

No surprise, but now it's official:

Apple is joining forces with Google to power its artificial intelligence features, including a major Siri upgrade later this year, the tech giants said on Monday.

The multi-year partnership will lean on Google’s Gemini models and cloud technology for future Apple foundational models, according to a statement obtained by CNBC’s Jim Cramer.

Sort of weird that they would announce such a big deal this way rather than via official releases, interviews, etc. Then again, the talk has been – at least on Apple's side – to downplay the partnership. We get it, it's sort of embarrassing to have to outsource your work in such a key aspect of technology, let alone one you believed you were at the forefront of not that long ago, at least with regard to Siri. And one you promised would get grandma home from the airport soon, only to fail to launch. So now you're stuck outsourcing that work to not just someone else, but one of your chief rivals for years and years. Ouch.

Of course, even as Android battled the iPhone, the two companies remained wedded in Search – one of the most lucrative and divisive deals of all time. And this deal undoubtedly expands upon that one. While they're declining to comment on terms, Mark Gurman of Bloomberg pegged it at around $1B a year back in November. That seems low, especially when we know the Search deal itself is $20B+ a year. But there are a bunch of details we don't yet know. Apple's actual statement on the matter is interesting:

“After careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users,” Apple said in a statement.

This makes it sound as if Apple won't run Gemini straight-up, but instead will use Gemini to train (distill?) their own foundational models. Unless those models are really just a white-labeled version of Gemini, which they may be at first.

Another option may be to pipe Gemini into Siri as an option alongside the current ChatGPT partnership – which Apple said isn't changing with this news. Perhaps such integrations are even more pronounced in a future build of iOS this Spring...

If you squint, you can see a three-step strategy from Apple here:

1) Place Gemini as an option for Siri alongside ChatGPT

2) Use different Gemini flavors to help train Apple's own Foundation Models

3) Work on your own custom Foundation Models without Gemini

Again, if they go with #1, I have to imagine they make the placement much more prominent. Even if not in the UI, perhaps they'll just make Siri default to Gemini and/or ChatGPT (depending on which the user chooses?) much more often – perhaps for basically all but the system-level queries (setting timers, etc).

This would buy them some time to work on #2, getting their own new models up to speed. Perhaps for iOS 27 in the Fall, or perhaps even later. Presumably, they'll get first access to new Gemini work from Google to stay at the cutting edge with this deal. And it seems like Apple will probably become one of Google Cloud's biggest partners, if they're not already? As I wrote last November:

There are probably a few other interesting wrinkles in there – one of which may be Apple's willingness to do this because a lot of their cloud infrastructure is already running on Google Cloud. So this may not be as heavy of a lift and as big of an ask as it may seem on the surface. And while the "walled off" aspect is clearly a must for Apple here, you could also imagine that the company may be willing to share some data – fully anonymized, of course – back to help constantly improve the model. And that may speak to why Google would want to do this deal (well, that and the money). Apple has devices in the wild at a scale that basically no one can match. Maybe Samsung, but this potentially unlocks a totally new user base.

Then with #1 and #2, or some combination, #3 would give Apple even more time – years – to completely rebuild and rework their own in-house AI that's less dependent on others. Apple doesn't even have a leader in place for that work yet to replace John Giannandrea and the remaining team has been gutted by Meta so... they need some time. As a bonus, this deal gives them access to the best technology right now while they take their time to figure out if LLMs are fully worth doing on their own, or if other, newer types of models/tech come into favor...

Update: As Kalley Huang reminds us at The New York Times, Apple did poach Amar Subramanya from Microsoft last month, seemingly to spearhead their AI efforts. But unlike JG, who reported directly to Tim Cook, Subramanya will report to Craig Federighi, thus making Federighi the actual, de facto head of the AI initiatives. I still wonder if they don't need someone higher up, a bigger name... but again, they have time to figure that out now. Especially since before his brief stint at Microsoft, Subramanya cut his teeth helping to build and launch... Gemini.

One more thing: The report also notes in passing that this Gemini deal is not exclusive. That's probably more of an olive branch to regulators (sorry, Elon Musk), but it technically would also allow Apple to mix and match models from others as they see fit, I suppose...

It is interesting that Apple emphasizes the "careful evaluation" aspect of the process here. Presumably that means they weighed continuing to do this on their own, but also the possibility of partnering with Anthropic, as has also been reported previously, or going deeper with OpenAI. Not surprising that Apple would go with Google – who yes, just passed them in market cap – over a startup. Still, this must be disappointing to OpenAI given the current partnership. With Anthropic, it just seems like something they only would have done if Apple made it make sense monetarily, which they clearly couldn't agree upon. Back to what I wrote in November:

Apple may not have wanted to pay Anthropic $1.5B a year to use Claude but $1B a year to a partner that is paying you $20B+ for that Search deal? That can be just an in-kind deal! "Google, you know that $25B you owe us this year? Make it $24B, but we'll take a custom build of Gemini. Deal?"

Deal.


Update: There is now an official joint statement, at least on Google's blog.

Similar to the one Apple gave to CNBC, but a bit longer:

Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year.

After careful evaluation, Apple determined that Google's AI technology provides the most capable foundation for Apple Foundation Models and is excited about the innovative new experiences it will unlock for Apple users. Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards.

That last bit seems key. The wording is vague (undoubtedly on purpose), but it seems to suggest that Apple will be able to train Apple Intelligence with Gemini to still run locally on devices – and for the non-local queries, presumably Private Cloud Compute is already running in Google Cloud, even if they don't exactly tout it.



AI Needs Its Steve Jobs

2026-01-12 21:58:05

AI Needs Its Steve Jobs

There are two camps. Either you're in the camp that thinks AI is absolutely the future of everything and anyone who says otherwise is a moron. Which includes the other camp: those that think that AI is the worst thing in the world and is going to ruin everything. There are varying degrees within those camps, of course – for example, the "Doomers" who think that not only is AI a problem, it could quite literally be the end of the world – but you're on some spectrum in one of those camps...

The "AI-Generated Hit Movie" Horror Story

2026-01-10 23:42:37

Roku CEO Talks New $3/Month Ad-Free Streamer, Predicts First ‘100% AI-Generated Hit Movie’ Will Be Released in Next Three Years
Roku CEO and founder Anthony Wood predicts big things for AI and Hollywood in the next three years.

As the CEO of a major streaming player, you'd think Anthony Wood would rather not get Hollywood all riled up. Maybe he's taking a page out of Ted Sarandos' former playbook?

“I have no idea if there’s an investment bubble, but I know that AI is going to be huge, and it already is,” Wood said during a headliner conversation with Variety‘s co-editor-in-chief Cynthia Littleton at the Variety Entertainment Summit at CES Wednesday. “So it’s going to affect lots of different industries, but in our industry, it’s going to lower the cost of content production. I predict within the next three years, we’ll see the first 100% AI-generated hit movie, for example.”

The first part is obviously a reasonable position. Most people think it is an investment bubble, but also that it's perhaps a needed one given the prospects of the technology, which is where I generally fall (outside of a handful of outlandish deals). But Wood is not in the business of offering investment advice. Again, he is in the entertainment business, and so not only talking about the prospects of "100% AI-generated" movies, but predicting that they'll be here within three years, producing hits, is, um, controversial.

I also tend to think it's silly. Will we be able to produce 100% AI-generated content in three years? For sure. But we can already do that now. It's what is flooding social network feeds as we speak. What started as AI-generated images has morphed into video. And what started as short clips is naturally morphing into longer ones. But really, all you need to do to make longer ones is stitch shorter ones together. And so the AI-generated trailers that have been flooding our feeds, ranging from fake Avengers movies to fake Zelda movies, are just one step removed from full movies. Well, maybe a few steps. But really, it's just a matter of time – anyone can do that right now.

The bigger question is if there will be an audience for that content. And that's the controversial part of Wood's comment. It's not the creation – again, that will happen – but the notion that it would lead to a "hit movie". I have a hard time believing that will happen for a range of reasons, including the current powers that be.

I suppose it depends on how you define "hit". But I take it to mean a movie that millions watch via "traditional" methods, be it in theaters or now on streaming services (or on "regular" television, I suppose). Three years is a long time in AI, but in the grand scheme of entertainment, it's short. Movies are just being greenlit right now that will come out in three years. The mechanisms are already established to distribute those movies. So where is this AI hit movie going to slot in? Certainly not in a movie theater!

Okay, so what about a streaming service? Maybe. But again, the most obvious such service to create hits, Netflix, is in the midst of trying to buy a movie studio and as such, Ted Sarandos is now busy backtracking from his aforementioned previous comments. Certainly Netflix, like Roku, will use AI to enable talent (and yes, drive down costs), but I have a hard time believing they'll go forward with a fully AI-generated movie in that timeframe. Also, just imagine the lawsuits!

Other services? Maybe! But it certainly won't be Disney. I don't think it will be Amazon. HBO will either be owned by Netflix or Paramount, and again, it won't be them. So it could be a smaller player, but will they be able to make a movie a true "hit"? Could Roku?

The wildcard would be YouTube. Because of the UGC underpinnings of the service, you'll certainly see movie-length AI-generated content uploaded. Especially from those outside the current systems – that's a good thing! But will any of it actually "hit"? Maybe as a novelty some will get a ton of "views", but it's hard to imagine it would be considered a true "hit" movie. More just like a proof-of-concept type thing, perhaps.

People will point to music and note that there are some AI-generated "hits" popping up. Two things there. First, I still suspect there is a novelty aspect to this. Second, a three-minute song is a different beast than a two-hour movie. Many people are willing to give up three minutes of their time. Two hours is a commitment. One other element that our AI-enabled future is going to highlight: time is the ultimate premium.

Now, beyond three years, all bets are off for that AI-generated hit movie. But the window Wood gives seems too short. Not because of AI, but because of the way distribution currently works.

I would also just say that I continue to bet that while such AI-generated content is coming, I think once that novelty factor wears off, you'll be hard-pressed to find big hits there in the way you do with human-created movies and television. And, counterintuitively, I think as AI-generated content continues to flood every surface, it will raise the demand (and thus, value) for human-generated content.

I do think there will be "hybrid" content – part human-created, part AI-created – but I think people will prefer to watch work that was created by other people, because they will value the time and care spent on it. We'll slowly realize it's just as much about the input, as the output.

To be fair, Wood does hit on some of this in the same chat:

“I think people underestimate how dramatic that’s going to be,” Wood said. “I mean, obviously I don’t think people are going to get replaced. Humans are still the creative force behind creating content and hit shows, but the cost is going to come down dramatically, and that’s going to change a lot of companies’ business models. So I’m focused on, how do we take advantage of that? That’s a big opportunity for us.”

Again, that's reasonable. The AI-generated movie topping the box office in 2029? Less so.

One more thing: I continue to be intrigued by Roku's "Howdy" play:

Wood describes the $3-per-month Howdy offering as "not designed to replace a major streaming service like Netflix or Disney, it's designed to be an add-on service."

“The opportunity for Howdy was, if you just look at what’s going on in the streaming world with streaming services, they’re getting more and more expensive,” Wood said. “They keep raising prices, and they keep adding larger and larger ad loads. So the part of the market where it actually started, low cost and no ads, is gone now. There’s no streaming services that addressed that portion of the market. That’s the opportunity for Howdy. It’s three bucks a month and no ads and it’s doing extremely well. Just like we built the Roku Channel using the promotional power of our platform, that’s what we’re doing with Howdy. We’re using that to grow it. But Howdy has very broad appeal. There’s lots of people in the world that want a $3-a-month streaming service with no ads. So we’ll start on Roku, but we’ll also take it off platform as well. I think it’s going to be a really large business.”

I've been critical of Roku with regard to ads in the past, but this makes a lot of sense to me. And I suspect we'll see others start to copy this model too.

👇
Previously, on Spyglass...
Terminating the AI vs. Hollywood Tropes
James Cameron has some interesting – and some refreshing – and some controversial – thoughts about AI…
Oh No, a Tech Company is Buying a Movie Studio
This is the end of Hollywood? Come on.
Hollywood vs. AI: The Movie
My god, the open letter is full of stars -- especially Cate Blanchett
Sora’s Slop Hits Different
It’s about creative comedy creation, stupid
People at a Premium
AI will change Hollywood -- for the better

The Incredible Valuation Heights of the OpenAI "Constellations"

2026-01-09 23:53:06

The Incredible Valuation Heights of the OpenAI "Constellations"

Last March, I set out to map the OpenAI "Constellations" – that is, the startups not only in OpenAI's orbit, but those directly tied to it, founded by entrepreneurs that had previously been at OpenAI. No surprise, given both our moment in time with AI and the current funding environment around AI, there are a lot. So I set a threshold at the $1B valuation mark. By my math back then – again, a whole 10 months ago – those entities had a combined valuation of around $200B. And when you added in the valuation of OpenAI itself at the time, we were right at the $500B mark.

Just six months later, those numbers had changed drastically, so I checked back in on those valuations. At the end of August, that $200B aggregate valuation was past $400B. And with OpenAI's updated valuation, we were right around $1T in combined value.

Well, we're not quite six months since then, but given that Anthropic, xAI, and OpenAI itself – the three main drivers of those valuations, obviously – have all either just raised or are in the process of raising again, let's check back in...

Ursa Super Major

  • Anthropic was in the process of raising the round that would value them at $183B back then. Just months later, they're working on a new round which will value them at $350B – before the investment. And while that process may have been kicked off by NVIDIA and Microsoft – yes, the same Microsoft that owns 27% of OpenAI – committing to invest, that money will apparently be on top of this new $10B that they're likely to raise here from other investors. If history is any indication, Anthropic will raise more than $10B, but let's just keep it there for now and add it to the $15B in commitments from Microsoft and NVIDIA, and that gives them a new post-money of $375B.
  • xAI was said to be trying to raise at $170B to $200B last August, which I gave them the credit for because of course Elon Musk was going to be able to raise whatever he wanted, comps be damned! Well, it took a bit longer, but they have now announced the $20B Series E, which they really, really would like you to know exceeded their target. They didn't announce the valuation, but reports put it "above $230B", which is weird wording, but I take it to mean $235B.

Given that these two companies formed by those formerly affiliated with OpenAI are valued above $600B by themselves, it's probably worth breaking them into their own category, and we know how much AI companies love to append "Super" to everything, so let's go with "Ursa Super Major".

Ursa Major

  • Safe Superintelligence, speaking of "super", it seems like Ilya Sutskever's startup is still "stuck" at the $32B valuation from their "seed" round. I'll comically note that this seems almost prudent given the rate at which everyone else is raising. Sutskever may be giving his company some optionality, as a few of the big players would undoubtedly still be open to acquiring them – well, "hackquiring" at least – in this general price range, to get access to Sutskever if nothing else. They're also down a co-founder (and first CEO) thanks to Meta's mad scramble to catch up in AI – which included trying to acquire Safe Superintelligence outright; instead, Meta ended up investing... before poaching Daniel Gross. Sutskever has opened up a bit more about what SSI is doing recently, and that included an explanation of how they can compete without needing to raise endless capital, like their rivals. Have I mentioned that Sutskever's own OpenAI shares may be worth tens of billions – perhaps all the way up to $100B – so he can probably fund SSI's "return to research" himself? Still, don't be shocked if he raises again sometime soon and resets their valuation...
  • Thinking Machines Lab would give SSI a reason to raise again if they're able to complete a round valuing them at $50B or $60B! That was rumored a few months ago already and doesn't seem to have happened yet. So I'll keep them behind SSI for now on the list, but give them credit for the reported valuation jump from the "mere" $12B valuation "seed" round in July. They have also lost a co-founder, also to Meta, since then. But hey, at least they have a product in market now. And promises of more to come this year...
  • Perplexity would also be leapfrogged if Thinking Machines gets that round done. Their most recent round reportedly pushed their valuation to $20B, up from the $18B just a couple months prior (when I last checked in). One of my predictions for 2026 is that someone will buy Perplexity, as they may need to step off the hamster wheel of funding. Samsung is the most obvious acquirer given their partnership (and great financial situation at the moment), but they've also been going deeper with Google... just as the other would-have-been buyer, Apple, has been...

These combined numbers add up to over $100B now, just for Ursa Major, without the 'Supers'!

Ursa Minor

  • Harvey jumped up to an $8B valuation in December, up from $5B at the last check in.
  • Periodic Labs may now be a bit above the $1B mark from the last check-in, as the round seems to have grown from a rumored $200M to a confirmed $300M. Some reports suggest $1.5B or even perhaps $2B.
  • Cresta is one I actually missed in the previous reports, probably because it was co-founded way back in 2017 by early OpenAI researcher Tim Shi. It currently sits at a $1.6B valuation, which it hit back in 2022.
  • Eureka Labs still hasn't raised any outside capital, it seems. Though Andrej Karpathy obviously could at any time, probably at any valuation.

So that's a solid haul for this group: just over $10B.


Okay, so grouping it all together, we're looking at an aggregate valuation of just over $730B – a number that has nearly quadrupled in less than a year, and nearly doubled just since August. Obviously it's top-heavy, but even taking out the 'Supers', you'd still have a group valued collectively at $123B. Would that alone be the highest-valued group post-PayPal Mafia? And even if you remove Elon from the full equation, you're looking at almost exactly a $500B aggregate valuation for the OpenAI diaspora.

And if you add OpenAI itself into the mix, with their newly targeted $830B valuation, well, you're truly in the stars now: just over $1.5T.
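For the curious, here's a quick sketch of the arithmetic, using the figures cited above – with Thinking Machines at the rumored $60B and Periodic Labs at $1.5B, both assumptions since the reports give ranges:

```python
# Tally of the cited valuations, in billions of dollars.
# Thinking Machines ($60B) and Periodic Labs ($1.5B) are assumed
# from the rumored ranges; everything else is as reported above.
ursa_super_major = {"Anthropic": 375, "xAI": 235}
ursa_major = {"Safe Superintelligence": 32, "Thinking Machines Lab": 60, "Perplexity": 20}
ursa_minor = {"Harvey": 8, "Periodic Labs": 1.5, "Cresta": 1.6, "Eureka Labs": 0}

supers = sum(ursa_super_major.values())   # 610 -- "above $600B by themselves"
majors = sum(ursa_major.values())         # 112 -- "over $100B"
minors = sum(ursa_minor.values())         # ~11 -- "just over $10B"

total = supers + majors + minors          # ~733 -- "just over $730B"
ex_supers = total - supers                # ~123 -- sans the 'Supers'
ex_elon = total - ursa_super_major["xAI"] # ~498 -- "almost exactly $500B"
with_openai = total + 830                 # ~1,563 -- "just over $1.5T"

print(f"${total:.0f}B diaspora, ${with_openai:.0f}B with OpenAI")
```

The tallies land within rounding distance of every figure in the post, which is the point: the diaspora alone is a $700B-plus cohort, and OpenAI roughly doubles it.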

That's awfully close to Meta's market cap. And if/when OpenAI and Anthropic go public, both may shoot past that mark by themselves given likely investor demand (depending on timing). And if/when xAI merges with Tesla, making them an AI company as well...

Disclosure: GV, where I was a partner for over a decade, is an investor in Harvey and Thinking Machines Lab. Google, which is the LP of GV, is an investor in Anthropic and, I believe, Safe Superintelligence.
👇
Previously, on Spyglass...
Peering Back at the OpenAI “Constellations”
The aggregate valuation looks to surge past $400B…
The OpenAI Constellations
The out-of-this-world funding numbers in OpenAI’s orbit
Collect Them All (AI Edition)
An ongoing list of the tangled web of Big Tech investments in Big AI…