2026-01-18 04:55:55

The ads are coming! The ads are coming!
Look, everything in OpenAI's blog post announcing that advertising is coming to ChatGPT seems reasonable. Fidji Simo smartly frames it as wanting to ensure they can bring their AI to more people, but at the same time, as everyone is well aware by now, it costs a lot of money to do that. Charging for more/better/faster access has worked well to date for the company, but that model will never be able to scale the way 'free' and/or 'cheap' can. It's a story as old as time – or at least, as old as advertising. Sam Altman may have said that he hoped to avoid this fate – "a last resort" – but it was always inevitable. You either die as an ad-free service or you live long enough to become an ad vessel.1
That's a bit unfair. But it's directionally true. And it's exactly why you hire someone like Simo to be the CEO reporting to the CEO. She has all the experience to make this work for OpenAI, not only thanks to her Meta days, but her Instacart days as well. She has been able to help get advertising working across several different types of businesses (and I think Instagram should be considered slightly different from Facebook itself – we'll get to that).
So, will this work for OpenAI?
I mean, the short answer is that it has to. Given the costs involved and the competition keeping pricing in check, this will be the only way to scale to billions without breaking the bank – which still might break, even with ads, by the way. The problem is that OpenAI clearly doesn't yet know how it will work. Because no one does. It's a new product and new type of experience – and my guess would be that to truly work, it will require a new type of advertising.
Google is the Google you know today because they figured this out. Meta is Meta because Facebook – and again, later Instagram – figured this out.
Conventional wisdom right now would suggest that the right ad model is going to be closer to that of Google because much like Search, what you type into a text box is key. And much like with Google Search, such text often indicates an intent of some type. But whereas Google aimed to match this intent with a website – well, at first, at least – ChatGPT tries to get you right to an answer. This is a problem for the Google style of advertising, because that model was predicated on the notion of a click. Famously, because it's trying to get you the information directly rather than linking to a site that has the information, there are far fewer clicks on ChatGPT.
But Google's model was even better than that. Because a user was trained to know they'd be getting those famous "10 blue links" back as a result, they were also trained to quickly scan them. Sure, Google would try to surface the one it found most relevant to your query first, but with millions then billions of web pages and an infinite pool of queries, it wasn't reasonable to think that the top link would always be the correct one. That naturally let Google "take over" one of those link slots for an ad unit, knowing the "impression" would be there.
Even better, they could move such a unit to the top of the results without too much damage done to the product. Is the ad not what you were looking for? Then just don't click. But even better still, for certain types of queries, the ad often was what you were looking for, because the real genius and killer feature of the model was getting basically all businesses to advertise against the keyword queries related to their businesses – and to those of their rivals! As a result, when I search for "iPhone" the top result is often an Apple ad that Google gets paid to display. Because if not, Samsung would try to control that top slot and may divert a customer with clear smartphone interest to their Galaxy lineup.
Everyone knows all of this, of course, but it's worth spelling out just how brilliant this is. It's not an exaggeration to say that it may be the best business model in history. But it's also one that breaks down quickly without those clicks.
Now, ChatGPT is clearly making product tweaks to lead to more clicks. Conversations around shopping are the clearest example of this. Because at the end of the day, you can talk all you want about buying something, but if you actually want to buy it, you're going to have to click somewhere to go do that – even if it's within ChatGPT itself (which is what they're working on, obviously).2
Anyway, anything with some sort of purchase intent is clearly going to get more click-inducing UI tweaks. This isn't necessarily a bad thing, but it will also help the advertising push. Yet it still feels like that won't be the big breakthrough here. And certainly it will never convert as Google Search ads have. I'm more intrigued by OpenAI's notion that you might want to chat with an ad. On the surface, this sounds gross, but it's perhaps a clever way to integrate ads more natively into the product. Want to know more about the iPhone's specs versus that Galaxy device? Maybe Apple pays to sponsor ChatGPT's answer (without any oversight of the response aside from ensuring it's accurate?) with a link to buy. And maybe Samsung pays too for their own link, just in case you deem that the better path.
Travel. Tickets. Etc. The categories will undoubtedly be the same, but the key will be not destroying user trust in the chat by letting advertisers turn said chat into a giant advertisement. ChatGPT is saying all the right things now, but... such things have a way of morphing over time. Bending towards the arc of more monetization...
But all of the above is too obvious. The more interesting ads may be simply about information, the native content of ChatGPT. Can they figure out a way to get advertisers to pay to sponsor relevant information – again, not spammy/pitchy – in response to "regular" chats? Is there some sort of new "cost-per-information" model?3 Or if a user engages with it, "cost-per-interaction"? There obviously needs to be some sort of signal to showcase that it was useful to a user and again, it may not be a click. Otherwise, we're just back in the less interesting, and probably less useful, not to mention less lucrative, impression game.
Speaking of, Meta has historically played that game better because their breakthrough, the feed, is naturally more visual in nature. And the natural interaction there, the scroll, allows for easy ad insertion without annoying the user too much – if they don't like an ad, they just keep scrolling. But Meta, even more so than Google, at least in the early days, was able to leverage key knowledge about you thanks to their social graph. So while they didn't have the same search intent Google did, they had better targeting capabilities.
That's interesting as it relates to OpenAI because they should have sort of a hybrid of these two worlds and models (and, to be clear, Google undoubtedly knows more about you these days than Meta thanks to Gmail and Photos and Maps and everything else). ChatGPT can glean intent from what you type, but also better target ads based on all the knowledge it has about you from previous conversations. Google had this too with previous searches, but you're typing so much more, about so many more things, into ChatGPT than you would into a Search box.
Everyone – even Sam Altman – highlights how good Instagram ads are. It's obviously a combination of Meta's data and targeting mixed with the visual nature of that product. In a way, they're the new glossy magazine ads versus Google's ads, which were more like old classified ads (on steroids, of course). Is there a way ChatGPT could leverage those types of ads too? Maybe if they continue to fully build out their image generation tools and can make it a true hub people visit. Meta is obviously trying to do this with their AI products. But it's not clear that will be a natural user behavior with AI content; we'll see.
Sora and AI video would be the next holy grail after that, of course. If OpenAI can break into the video advertising game – aka, commercials – well, that's another massive business opportunity. It's one YouTube is still in the process of siphoning away from TV, with TikTok and yes, Instagram pushing hard as well with some success. And then there's Netflix, Amazon, and everyone in streaming making their moves too.
But we're getting way ahead of ourselves here. First, OpenAI needs to find the right chat-based model for ads inside of ChatGPT. Again, it may be more similar to the Google Search model, but it won't be exactly like it, and new metrics will probably be needed to work at scale.
And there's a privacy angle, which OpenAI doesn't shy away from in their post, which makes it even more of a challenge. Many people are clearly already telling ChatGPT things that they don't necessarily want to be advertised against. And yet, an area like health is one of the most lucrative areas of advertising! So that's going to be a fine line to walk... For now, ChatGPT is probably wisely steering clear.
At the end of the day, OpenAI undoubtedly still wants to believe that they can have a more hybrid model than either Google or Meta have, in that they have millions of paying users from the get-go as they start this ads push – those other companies did not, at least not for the core products: search and social feeds. Can they take their existing model and layer in advertising, versus it consuming the entire product and ensuring "enshittification"? In that way, it's almost more like Netflix.
But again, the key will be finding the right mix of where/when to serve ads and what type of ads to serve. Oh, that's all? The company is confident enough to think that the business will bring in "low billions" this year, but can they do that without ruining the product experience? I think it's going to take thinking differently about ads.

1 Just ask Apple. ↩
2 Or, I guess, you could have an AI agent go do this for you, which every AI player is also rushing to enable. But this is more complicated, with major players like Amazon quickly moving to block such bots. Anyway, that's a different topic. Undoubtedly related, but still also in the future... ↩
3 And does this eventually complement a way to pay publishers/content producers for such information that is surfaced by ChatGPT? The one-off data deals also aren't scalable. ↩
2026-01-16 23:38:49
Ted Sarandos is done fucking around. Well, nearly.
With the endless swirl around the fight for Warner Bros, he's been making the rounds trying to make the case for why Netflix should be the winner. Well, they already are technically the winner, so it's really why they should remain the winner – both against Paramount and against any regulatory push-back. And, deal terms aside – this interview doesn't really hit on those – this is probably his best pitch yet. But I still might suggest altering his approach slightly.
Right off the bat, there's a problem:
I think it was a lot of loud voices, but not necessarily a lot of them. I think a lot of it was folks who questioned, rightfully so, our intent with theatrical because we hadn’t said anything about it. A lot of it was the emotions around that more than anything else.
Yes, that is four "a lots" in three short sentences. That's a lot of "a lots". Also, I'm not sure what to make of the brain twister of how there can be both "a lot of loud voices" but also "not necessarily a lot of them". Regardless, he's both too defensive and deflecting too much here. He sort of addresses it later on when asked more directly, but Netflix – and he in particular – has talked about the movie theater business quite a lot. He can argue semantics (as he does in a later question), but there's far too much smoke over the years for there to be no fire here. He should just own it and say something like:
He does mention the idea of seeing Warner's actual theatrical numbers changing his mind later on, and it's a great, compelling answer. People are allowed to change their minds – and it's especially good and reassuring if they do so when presented with new facts. That's the sign of a good leader, so he should tie that into the broader question about his past statements on the matter, not be defensive about such statements.
I do love that he doesn't shy away from talking about Paramount directly:
If you take a beat and think about who’s been building and who’s been collapsing, it’s the best news possible. When we buy the studio, we’ll be releasing more movies together than we were separately. Our forecast is to grow the content spend of the combined companies several years out. So it’s really good news for the town that we’re going to continue to grow the business.
On the Paramount side, between the $3 billion that they’ve already cut and the $6 billion they’re proposing, those are real jobs. That’s cutting back on production. That endless search for profit by cutting people, jobs and making fewer movies — that’s not our intention at all. We need all those movies. We need all those TV shows.
This is probably Netflix's single greatest strength, even more so than the money they're offering, in their pitch – especially to outsiders. Essentially: "We're not going to cut jobs, Skydance is. How do I know that? Look at what they've done with Paramount, mere months ago." His second best argument?
I honestly think there would have been reactions like that from anyone who was going to do the deal. What people would like to see is no deal. But that’s not possible. There are two outcomes of this deal, and we have a signed deal done.
This isn't Netflix vs. Paramount – I mean, it is – but the bigger battle is this deal vs. reality. If no one does this deal, Warner Bros is going to die. Not today, not tomorrow, but eventually, slowly. It has been decades since the major movie studios could stand on their own as businesses, which is why all but Disney are owned by larger entities (and even Disney is saved by their other businesses, like theme parks and cruise ships, and for a time, cable). A stand-alone Warner Bros would not last long, and WBD is going to split off the cable division, which is still bringing in money, so... There needs to be some deal done here. Hollywood seems to think Netflix is the worst option, but, plot twist: they might actually be the best option. By far.
After two great answers as to why they decided to do this deal – again, essentially admitting they were wrong in some of their assumptions and realizing that actually their business could augment and accelerate Warner's in ways – we get another small setback. When Sperling (rightfully) pushes back, noting that people are still skeptical of Netflix's actual commitment to movie theaters here, Sarandos gets weak again:
I understand that folks are emotional about it because they love it and they don’t want it to go away. And they think that we’ve been doing things to make it go away. We haven’t.
Again, I wouldn't be defensive there, it seems disingenuous. I would say something more along the lines of:
But I'm also slightly taking this bit out of context, because his very next line is meant to be the knockout punch:
When this deal closes, we will own a theatrical distribution engine that is phenomenal and produces billions of dollars of theatrical revenue that we don’t want to put at risk. We will run that business largely like it is today, with 45-day windows. I’m giving you a hard number. If we’re going to be in the theatrical business, and we are, we’re competitive people — we want to win. I want to win opening weekend. I want to win box office.
There's obviously still a slight equivocation with "largely", but that seems fair; there are going to have to be some things Netflix tweaks – and there are obviously some things that Netflix should tweak – but one of them is not the 45-day theatrical commitment for Warner Bros movies,1 as he makes clear: "I'm giving you a hard number."
That is his updated pitch directly to Hollywood and theater owners. It's less an olive branch and more just a full-on offering: Netflix will commit to a 45-day window. Not 17 days, as has been reported by Deadline and others. 45 days. Exactly what the theater owners are asking for, obviously not coincidentally.
Sperling pushes back again, noting Sarandos' "outmoded idea" quote from not even a year ago. This seems to annoy him, as he thinks everyone has read his quote the wrong way. Again, it's far too defensive. He just should have answered with some variation of the above quotes I've already laid out, or simply said he should have given a better answer at the time – a tactic he's used in the past – rather than arguing semantics.
When pushed again on the notion that Netflix has led fewer people to leave their houses to go to theaters, Sarandos gives a good answer:
You have to give them something to watch. And I think we’ve got to take ownership of the idea that when people are excited to go out and see something, they go. You’ve seen it in some really nice upside at the box office this year. You’ve seen it in our “Stranger Things” finale experience. You saw it in our “KPop Demon Hunters” experience with people. You give people a reason to leave the house, they will gladly leave the house.
This is good because yes, Netflix saw unquestionable success with those efforts. And it shows that they've been evolving in their thinking, even ahead of this Warner Bros deal – yes, something which someone – cough, cough – predicted... Beyond the Warner Bros movies continuing in theaters, Netflix could actually help the industry by continuing to push these outside-the-box experiences.
I would say one of the other myths about all this is that we thought of going to the theaters as competition for Netflix. It absolutely is not. When you go out to see a movie in the theater, if it was a good movie, when you come home, the first thing you want to do is watch another movie. If anything, I think it helps, you know, encourage the love of films.
I did not get in this business to hurt the theatrical business. I got into this business to help consumers, to help movie fans.
Yes, this is a good response. Less good:
Do you think theater owners believe that?
I’ve got a great relationship with theater owners.
Way, way, way, way too flippant. Come on, Ted. Keep your head in the game. This response gives Sperling the rope to put around his neck with the fact that the theater owners group just went to Congress to ask them to stop this deal. "Great relationship" alright! But Sarandos comes back strong:
Like I said, there’s only two outcomes of this deal. We’re going to be the buyer who keeps Warner Bros. running, releasing movies in theaters the way they always have. That keeps HBO completely intact. It keeps Warner Bros. television, producing television, and it creates jobs.
At the end of the day, that may be the argument that actually wins over Hollywood and allows this deal to happen. It's still very much up in the air, and they're going to have to start moving the messaging heavily around the actual competition being YouTube – which it is – but they're nearly there on the talking points now. They just need Ted to fully lock in and stop being defensive. Funny is fine though:
Will you relocate Netflix from its base in Hollywood to the Warner Bros. lot in Burbank? Will you be sitting at the famous desk used by Jack Warner, a founder of Warner Bros.?
I’ll probably have a space in both. Probably neither of them will be Jack Warner’s desk. But it is a beautiful desk.
Tell the truth, you’re doing all this for the desk?
It’s mostly for the desk.








1 I might just point out there's wiggle room here in that he's committing to this window for Warner Bros movies and not necessarily all Netflix movies. But I think that's fair. If it works out well for Warner Bros movies, Netflix may want to use the same window for some of their own. And if not, well... not everything has to be a Warner Bros movie! ↩
2026-01-16 18:40:56
I was back on the Big Technology Podcast this week to discuss some recent posts with Alex Kantrowitz. First and foremost, AI's perception problem, and if a pitchman like Steve Jobs is needed to sell this technology to the public. While Sam Altman has largely been serving in that role to date, it's clearly causing a lot of backlash, fairly or not. Could someone like Demis Hassabis or Panos Panay have better luck resonating with the masses? It feels like Jensen Huang is the closest, but NVIDIA is obviously playing in a different part of the stack and not selling directly to consumers (beyond their graphics cards, of course), at least not yet. And it's just harder to pitch what is essentially software on the user-facing side...
Will this year's slate of Super Bowl commercials help? Probably not, but it will be interesting to see what angles OpenAI and undoubtedly Google and probably Anthropic take in their ads. Microsoft too? Amazon? Meta? It feels like pushing more towards science and discovery should help with messaging, but again, it's not as day-to-day consumer focused. Health certainly is, but there are myriad issues when it comes to marketing such features...
Meanwhile, The Chaos Ladder – where the various players in AI stand ranging from relatively stable to chaotic – saw a lot of movement in 2025. Meta and Apple reset their teams. Amazon reset their tech. Microsoft reset their deal with OpenAI. Google, after being downtrodden from a stock perspective for the first half of last year, came roaring back to vault into the number two market cap position, joining the $4T club. Anthropic is sneakily stable – a fact which is now being highlighted due to the current chaos swirling around another OpenAI "Constellation": Thinking Machines Lab.
With Big Tech at least, it feels like things are starting to stabilize a bit more as we start 2026. But does OpenAI and Anthropic going for IPOs change that? What about Elon doing Elon things? Does any larger player that's not a part of Big Tech leverage AI in a way to step into those big leagues?
Finally we hit on a few broader predictions for 2026. I'm feeling like the iPhone Fold will be a hit – which I know sounds sort of obvious; an Apple product, a hit? – but Apple has a tendency to start slow out of the gates with such products, certainly recently. But I think the Fold, despite what will undoubtedly be a big price, could excite a lot of people around Apple hardware again. And what if it's touted as one of the first real AiPhones?
There is also the question of who will be presenting such a device to the world. Will it be Tim Cook, or will he be retired (though undoubtedly in a new role as Chairman of Apple's board) by the end of 2026? Feels like there's way too much smoke for it not to happen at some point this year. Though one wild card could be if Apple does make a bigger acquisition, perhaps to bring on more AI talent. Safe Superintelligence is probably a bridge too far, but what about Thinking Machines in their aforementioned state of chaos?
I no longer think it will be Perplexity – feels like they moved on from that idea. But it also feels like someone will buy Perplexity. Samsung spending some of their surging cash thanks to memory chips? Microsoft, still trying to make Google dance? Someone else?
2026-01-15 17:40:59
I mean...
Thinking Machines cofounders Barret Zoph and Luke Metz are leaving the fledgling AI lab and rejoining OpenAI, the ChatGPT-maker announced on Wednesday. OpenAI’s CEO of applications, Fidji Simo, shared the news in a memo to staff this afternoon.
And it's not just those two, a third key employee – I would say "early" but this company is not even a year old, so they're all early – Sam Schoenholz, is also bolting back to OpenAI. And this is after they had already lost another co-founder, Andrew Tulloch, in October. At least he went to Meta, finally giving in to one of Mark Zuckerberg's "Godfather" offers. That somehow looks better than this situation...
Two narratives are already forming about what prompted the departures. The news was first reported on X by technology reporter Kylie Robison, who wrote that Zoph was fired for “unethical conduct.”
A source close to Thinking Machines alleged that Zoph had shared confidential company information with competitors. WIRED was unable to verify this information with Zoph, who did not immediately respond to WIRED’s request for comment.
According to the memo from Simo, Zoph told Thinking Machines CEO Mira Murati on Monday he was considering leaving. He was then fired on Wednesday. Simo went on to write that OpenAI doesn’t share the same concerns about Zoph as Murati.
My god, the drama! We would seem to have a literal he said/she said situation here. Though it seems neither hard nor a stretch to connect the dots: Zoph told Murati he might jump ship back to OpenAI (from where they both came, of course), and so Thinking Machines implemented the old "you can't quit, you're fired!" maneuver. And as a kick in the ass out the door, perhaps there was a "for cause" wrapper – the allegation of "unethical conduct" in sharing "confidential company information with competitors" – which reads a lot like an accusation that Zoph, who was clearly in contact with OpenAI, may have told them things about Thinking Machines Lab. Perhaps even just the notion that he would be willing to leave could be framed as "confidential company information", if you stretch, I suppose! [Update: more on this below.]
All speculation, of course. But it's an easy picture to paint, especially given what Fidji Simo is explicitly putting out there in not having the "concerns" that Thinking Machines Lab does in their move to fire Zoph. And while it's still being sorted out, they're clearly going to give Zoph some lofty new title, as he'll be reporting directly to Simo, the CEO behind the CEO of OpenAI.
Mix all of this with the reports around Thinking Machines' seemingly outlandish fundraising efforts – which is certainly saying something in our current environment – at first reportedly refusing to share much of anything with would-be investors, and perhaps demanding (and getting) a problematic level of rights and control for Murati, and now supposedly trying to raise at a $50B (or $60B!) valuation on the back of, I guess, a single, smaller product in market,1 but mainly promises and vibes. Oh, and talent. A lot of great OpenAI talent, no doubt.
Of course, they've just lost half of that talent at the co-founder level, so... are investors going to get half off their investments?
Speaking of, in a way, I suppose you could say that Thinking Machines Lab is just following in the footsteps of OpenAI when it comes to co-founder departures. And actually, all of the AI research labs seem to suffer from this co-founder departure affliction right now. Well, except Anthropic. Read into that what you will...
So where does this leave Thinking Machines Lab? Unclear. One guess would be that either we'll hear about that new fundraise soon in an effort to combat this extremely problematic narrative (for recruiting, if nothing else), or perhaps that the team gets "hackquired". Zuck apparently tried, and failed, once here. Apple, which has a relationship with Murati – her departure from OpenAI may have contributed to the Apple/OpenAI funding discussions going off the rails – may have also kicked the tires back in the day, and could use some AI talent, I hear...
Update January 17, 2026: The drama continues with multiple reports now claiming Zoph's alleged misconduct may be tied to an inner-office relationship – which he may have lied about when confronted. Per this telling, this may have led Zoph to look for other opportunities, including talking to Meta, before landing back at OpenAI...
At the same time, others continue to leave Thinking Machines Lab, which may or may not be related to the Zoph situation (presumably the company losing a key technical co-founder, not the relationship part). Most damning are probably the sources stating that "the startup lacks a clear product or business strategy", and as such, has been struggling to raise that massive new round of funding. None of this will help...



1 To be fair, that's more than, say, Safe Superintelligence can say at $32B. A startup which has also lost a co-founder (and CEO no less!) to Meta. Then again, maybe Sutskever is all you need... ↩
2026-01-15 00:06:43

It took a lot of work, but I finally did it. I finally booted up my Vision Pro to watch the first full NBA game shot in the "Apple Immersive" format. It was... very cool. With some very real caveats. And it seemingly points to the future of the device itself. You don't even have to really squint to see it.
Honestly, given the dearth of content over the first two years of the Vision Pro's lifespan, I'm sort of shocked that Apple pulled this off. Just going by how long it has taken Apple to release short highlight footage from other sporting events, I would have assumed we would start getting full games weeks (or months!) after they first aired. The fact that they not only turned around this Lakers vs. Bucks game right after it was played, but showed it live to the small subset of folks in the right market (or with the right VPN setting), is rather incredible.1 Again, given Apple's previous cadence with such content, I would have thought this would be a 2027 or 2028 thing. I'm not trying to be a jerk, that's just how painfully slow they've been with releasing this type of content!
At the same time, such content is clearly – clearly – the way to move the needle when it comes to the Vision Pro. Granted, plenty of other hurdles remain – more on those in a bit – but if they want people to actually be excited about not just buying, but actually using the device, they need content people are excited about watching! It's not rocket science, it's behavioral science. It's human nature. So it's good to see Apple hustle here to at least try to drum up interest in the device.
While I obviously didn't watch it live, Jason Snell did, and in his thoughts for Six Colors on the experience, he described it as "surprisingly... normal?" I agree with that. After the initial "wow" factor wore off of being transported to Los Angeles with Crypto.com Arena wrapped around you, it felt like... you were watching a basketball game. It wasn't exactly like watching it on TV, but it also wasn't exactly like watching it in person. It was sort of... in-between.
Depending on the vantage point, it sort of veered between the television experience and the in-person experience. And that was the most jarring element of watching it – Apple kept cutting between those vantage points. You had no say over the matter, you were just zoomed from one area of the arena to another on the whim of the producers. It wasn't as jarring as it was in those aforementioned short highlight clips of other sporting events because you did get to linger longer in each spot given that the entire experience (meaning, the entire game) was just over two hours long. Still, during the actual game, the cuts between cameras behind the basket depending on where the action was happening were... weird. You were forced to reorient yourself constantly on the fly. I sort of got used to it as the game went on, but it still felt a bit like a brain teaser – especially the cuts between the same perspective just on opposite ends of the court.
Ben Thompson clearly hated this aspect, as his fun Stratechery rant going after Apple for not understanding their product makes clear. All he wanted was a single vantage point, ideally court side, where you were planted and never left. That would, he argues, be actually immersive. Because it wouldn't make you do the constant mental calisthenics I describe above. I don't disagree, but I also don't think that's all Apple should do. I think that should be an option.2
My feedback would be an extension of this: give us options for how we want to view the game. You can have the option to watch it court side. Or the option to watch it from behind the basket. Or the option to watch it in the press booth. Or the option to have a couple other vantage points in the crowd; you know, to feel actually immersed. And the option to cut between these views as you, the viewer, see fit. Or the current and only option to have someone else make those calls!
Obviously, this wouldn't be the most immersive experience possible because it would break the wall of illusion – there's no way to instantly cut between vantage points in real life. But again, I don't view this as a replacement for going to a real-life game. It's a more immersive version of television. In some ways it can be better than either, but in others it will be worse. It's just a new, cool format. Apple should lean into that.
In a way, it's the same initial takeaway I had after getting the Vision Pro two years ago. For years, the world had wondered when Apple would create their own television set. And famously, some of Steve Jobs' last words to his biographer Walter Isaacson before he passed away pointed to him finally "cracking" the problem for Apple. As I wrote in February 2024:
Again, as noted above, Jobs clearly wasn't talking about the Apple TV set top box. It sure sounded like another product, *an actual television set*. But what if everyone was reading that too literally and he did mean something *entirely* different? A true game-changer, Apple-style. Something like a *new kind* of television. One which seamlessly syncs to all of your devices via iCloud. One with no need for remotes. One with the simplest user interface you could imagine...
One you wear, perhaps. "Headphones for video" as it were.
I know, it's a bit of a stretch. But nevertheless, that is what Apple has stumbled upon here. Well, that's not fair. I think Apple knows exactly what they have from a content perspective with the Vision Pro, I just think they're muddying the message with all of the other stuff they're trying to showcase. And in part, they may feel like they have to because the device is $3,500.
As mentioned, given how long it has taken Apple to get up to speed on the content front for Vision Pro, I'm no longer sure they knew what they had with the product when it launched (which it should not have, at least not fully, in its current state). But I think these NBA games, alongside the first concert footage, and the first (short) movie actually shot for Vision Pro, and even, oddly, 3D movies (which Apple keeps pushing heavily in the Vision Pro Apple TV app), make it perfectly clear. This is a content consumption device. This is their television set.
Back to the game: I definitely preferred the mid-court, scorer's table angle. It was a bit low, but fun to see Bucks' coach Doc Rivers and Lakers' coach JJ Redick stalk the sidelines, getting awfully close at points. That vantage point also had some blind spots – such as when a player is taking a corner 3 – and that's in part why I would like a multi-vantage-point option.
The camera angle behind the basket was too high for my taste. Though I assume they also don't want to destroy their undoubtedly expensive cameras! Still, it's from these lower angles where you can tell just how big LeBron James is. And just how much bigger Giannis Antetokounmpo is than LeBron!3
It was interesting to mostly not cut away from the floor during timeouts, when the rest of those watching remotely would obviously go to commercial. This made it more like being at the arena, but augmented by some commentary at points. It was also fun to watch the halftime activities and half-court-shots-for-money this way. Though there was some pure "dead time" when you would just look around – again, just like being at the actual game, but with a clearly more voyeuristic feeling.4 Presumably, if this format does take off, commercials will be shoved into such crevices eventually.
I appreciated the inclusion of a virtual scoreboard when you looked down at the court, because while looking up to the actual scoreboard in the arena works, it also highlights just how heavy the Vision Pro is!
Speaking of... the biggest problem Apple actually has with the Vision Pro remains just how much of a pain it is to put on and use. As both a sports and Apple fan, I should have jumped at the opportunity to view this content the second I could. But I waited because I knew I would have to make sure the Vision Pro was charged, and that it was a time when my family wasn't around. Because despite Apple's comically misguided attempts to make using such a device in public less awkward with some extremely awkward-looking eyes, it's a device best used alone.
That said, if Apple does nail this format – and yes, brings the cost of the Vision Pro down eventually – I could see this becoming an interesting way to watch a game with someone. Unlike with a movie, it's natural to talk to friends/family during a game, and wouldn't it be cool if you had two different vantage points and you could tell the other person to "check the action out from here"? Or to watch a replay from another angle they witnessed live? Etc. The fact that all of this could be done without people being in the same room – and it would probably be less awkward that way! – is potentially even more interesting.
My multi-angle idea also opens up some re-watchability options that don't typically exist for sports. Granted, most people still aren't going to rewatch a game they've already seen, but die-hard fans probably would from a different angle than the one they chose the first time if it was a good game. And again, watching replays from different angles of your own choosing would be pretty killer in-game.
Anyway, I come away after watching this Lakers/Bucks game thinking that Apple is actually closer to figuring this out, at least on the content side, than I would have thought going into it. Yes, there are obviously things they need to tweak, but those seem like relatively straightforward fixes – well, as straightforward as it can be to record and stream live from five or six different immersive cameras around a venue. Ben Thompson wants something simple – his static, fully immersive view from one vantage point – but I want something a bit more: options. And I think my desire more closely matches what the masses might want – and that includes the option for this more "packaged" telecast that's more like traditional TV too, BTW. The bigger issue remains the masses ever getting near a Vision Pro with its price point and general inconvenience.
But those things can change too over time if Apple sticks with it. Meta seems to be backing away from the space now, but I wonder if Apple can't sort of do an end run around the Quest here. Because there's no way this type of experience will be available on mere Smart Glasses any time soon. Apple has the money and patience to make this work for the Vision Pro. And so the biggest thing holding them back may be the rights to all of this content.5 But as the leagues make moves to take over more control of such rights themselves, there's a window for Apple here.6
Now they just need to make a version of the Vision Pro where I can actually drink a beer while watching a game while fully immersed. Thank god for long neck beer bottles, I guess?

1 Given the small Vision Pro user base and the restricted live markets here, how many people do we think actually watched this live? A few hundred, max? ↩
2 Ben had similar feedback around the immersive Metallica concert, wanting just to be in the crowd like in real life. But I would much prefer to be next to the band on stage, something not possible in real life! (And yes, the option to see it from the crowd too. And backstage. Etc.) ↩
3 There's no way Giannis isn't 7-feet, right? He's listed as 6'11", but come on, these vantage points show he's much taller than 6'9" LeBron... ↩
4 One of the more interesting elements would be when you'd see, say, a security guard just off to the side using their phone. Or a person sitting near you just drinking a beer. All things you could see at the game itself, but again, it's weird that they can't possibly know you're focused on them in that moment... ↩
5 Just think of the money to be made here though. If you could sell both one-off and season "tickets" to these games. $20 a game? More? Concerts $50? ↩
6 And that's presumably why Apple prefers more encompassing rights in their own sports rights deals. How long until all MLS games are available to stream in Apple Immersive on the Vision Pro? What about F1? ↩
2026-01-14 01:28:32

If the vocal computing category has a boy crying wolf, I may be it. I've been writing about the notion that operating our computers through voice is "right around the corner" for almost two decades. And long before that, I was an early adopter of many a PC microphone in the 1990s and later Bluetooth earpieces in the 2000s in an attempt to run all of my computing through my voice (and ears).1 As an industry, we've made great strides over that time span. But we're still not living aboard the USS Enterprise in Star Trek, blabbing away to our machines. Not yet.
Given my track record here, I feel a bit silly writing this, but I really do believe we're at some sort of inflection point for voice and computing. Why? AI, of course.
Yes, technically "AI" is why I thought we were closing in on this future 15 years ago, when I was a reporter breaking the news that Siri integration would be the marquee feature of iOS 5 and what would become the iPhone 4S (perhaps the 'S' stood for 'Siri'). Pushed by Steve Jobs, Apple was trying to jump ahead to the next paradigm in computing interaction after leveraging multitouch to revolutionize the world with the iPhone (not to mention on the Mac with the mouse, back in the day). Again, voice technology had been around for a long time, but the only place it really worked was in science fiction. Armed with Siri, a startup they had acquired the year before, Apple thought now was the time. "Now" being 2011.
It didn't exactly work out that way. To the point where Apple is actually the boy who cried wolf when it comes to Siri. After the buzzy launch in 2011, 2012 was going to be the year they made Siri work well. Then 2013. Then 2014. Then Amazon launched Alexa and, thanks to a better strategy around vocal computing at the time, started to eat Apple's lunch. Millions of Echo devices later, Google entered the space, and it looked like we were off to the races...
But it was all sort of a head fake. A hands-free way to set timers and play music. Maybe a few trivia games. And not much else. Amazon couldn't figure out how to get people to shop in a real way with voice. Google couldn't figure out the right ads format. Billions were burned.
All the while, Apple kept telling us that 2015 was the year of Siri. Then 2016. Then 2017. 2018. 2019... All the way up until WWDC 2024, when this time, Apple meant it. Thanks to the latest breakthroughs in AI, Siri was finally going to get grandma home from that airport using some simple voice commands. It was coming that Fall. Then the following Spring. Then never. Is never good for you?
Fast forward to today, 2026. That functionality may now actually be coming this Spring. Something I obviously would never in a million years believe given Apple's track record here. Except that they've seemingly outsourced the key parts – the AI – to Google.
So... we'll see!
Regardless, AI was the key missing ingredient. We just didn't realize it because we thought we had that technology covered. Sure, it was early, but it would get better. But as it turns out, what powered Siri, and Alexa, and even Google's Home devices wasn't the right flavor of AI. Depending on the task, it could taste okay. But most tasks left you throwing up... your hands in frustration. By 2017, it was clear that the world was shifting again, as I wrote in an essay entitled "The Voice":
And then there’s Siri. While Apple had the foresight to acquire Siri and make it a marquee feature of the iPhone — in 2011! — before their competitors knew what was happening, Apple has treated Siri like, well, an Apple product. That is, iterate secretly behind the scenes and focus on new, big functionality only when they deem it ready to ship, usually timed with a new version of iOS. That’s great, but I’m not sure it’s the right way forward for this new computing paradigm — things are changing far too quickly.
This is where I insert buzzwords. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning. AI. Machine Learning…
But really: AI. Machine Learning.
In hindsight, all of this was correct. But even then, we didn't realize that "Machine Learning" – the specialty which brought John Giannandrea from Google to Apple – was closer, but still needed to evolve too. Into LLMs.
That revolution, ushered in by OpenAI with ChatGPT and built on the back of insights shockingly discarded by Google, has washed over the entire tech industry and started to seep into the broader population. And it seems the moment may finally be at hand for voice to work, for real this time.
This is what I saw a glimpse of with OpenAI's GPT-4o launch a couple years ago, and wrote about at the time with "OpenAI Changes the Vocal Computing Game!":
Said another way, while this is undoubtedly a series of large breakthroughs in technology, it's just as big of a breakthrough in *presentation*. And this matters because 99% of the world are not technologists. They don't care how impressive and complicated the technology powering all this stuff may be, they just care that Siri can't understand what they're actually looking for and keeps telling them that in the most robotic, cold way possible. Which is perhaps even more infuriating.
Some of this got buried under the hoopla created when Sam Altman directly referenced the movie Her and got everyone up in arms about one of the voices sounding perhaps a bit too much like that of Scarlett Johansson. But part of it was also that while we kept inching closer, we still weren't quite there yet with regard to voice and computing.
The voice modes across all of the different services really are pretty incredible now – certainly when compared to the old school Siri, Alexa, and the like – but it's still not quite enough to make the AI sing, perhaps quite literally. Part of that is the underlying models, which for voice are slightly inferior to the text-based models – something which OpenAI is actively working on addressing – but another part of it is simply a UI one. While all the services keep moving it around to spur usage, voice mode is still very secondary in most of the AI services. Because they're chatbots. The old text-based paradigm is a strength and a weakness. As I wrote:
One side of that equation: the actual "smarts" of these assistants have been getting better by leaps and bounds over the past many months. The rise of LLMs has made the corpus of data that Siri, Alexa, and the like were drawing from feel like my daughter's bookshelf compared to the entirety of the world wide web. But again, that doesn't matter without an interface to match. And ChatGPT gave us that for the first time 18 months ago. But at the end of the day, it's still just a chatbot. Something you interact with via a textbox. That's fine but it's not the end state of this.
The past 18 months have seen a lot of reports about projects trying to break outside of that textbox. While the early attempts quickly – and in some cases spectacularly – failed, undoubtedly because they were trying to be too ambitious, and do too much, a new wave is now coming to tackle the problem. This is led by none other than OpenAI itself, which acquired the hardware startup co-founded by one Jony Ive to clearly go after this space. To make an "anti-iPhone" as it were. A deceptively simple companion device powered by AI and driven by voice.
That's just a guess, of course. But it's undoubtedly a good one. And you can see all of the other startups coalescing around all of this as well. Hardware startups too! Pendants, and clips, and bracelets, and note-taking rings – not one, but two separate, similar projects – oh my. All of them clearly believe that voice is on the cusp of taking off, for real this time.
And right on cue, Alexa is back, after some fits and starts, resurrected as Alexa+ powered by LLMs. Google Home is on the verge of being reborn, powered by Gemini. Siri too! Maybe, hopefully, really for real this time!
2026 feels pretty key for all of this. The models have to be refined and perfected for voice. In some cases, perhaps even shrunken down to perform in real-time on-device. Then we need to figure out the right form-factors for said devices. Sure, the smartphone will remain key, and will probably serve as the connection for most companion tech, but we're going to get a range of purpose-built hardware for AI out in the wild which will be predominantly controlled via voice.
Smart glasses too, of course. Even Apple Watch. And AirPods should continue to morph into the tiny computers that they are in your ears. Voice is the key to fully unlocking all of this.2 And, one day, the true next wave: robots. Are you going to text C-3PO what you want him to do?3 Of course not, you're going to tell him.
1 Yes, I was that guy. ↩
2 With a special shout-out to Meta's wrist-input device (born directly out of our old GV investment in CTRL Labs!) as a wild card here... ↩
3 And with that, I have successfully conflated Star Trek and Star Wars, you're welcome, Gandalf. ↩