2026-04-08 20:00:10

It's a question of commitment. And incentives. And scale.
To me, that's how I'd boil down the current state of AI relative to humans. It's extremely oversimplified, but I'm not sure it's wrong.
I started thinking about these notions when writing about the value of writing in the age of AI. This naturally led to thinking through the value of thinking in the age of AI. But what really drove home the concept was reading all the coverage around Anthropic's latest model, Mythos. You know, the one too dangerous to be released.
You can't help but read all of these stories about all the bugs, vulnerabilities, and exploits that Anthropic's model is finding across basically all computing systems out there in the real world and think "holy shit, we're cooked." While 'Project Glasswing' seems like a valiant effort to get ahead of the issues, come on, we know how this movie ends...
But my main takeaway is that it has less to do with the genius of these AI models – I mean, that's part of it, and clearly Mythos seems to be the smartest yet – but it's more about the breadth. Both of knowledge and time.
Said another way, reading all these security experts and researchers talk about Mythos, it's pretty clear that the model isn't so much finding issues that human beings cannot, but it's finding issues that human beings have not, and most depressingly, finding things that human beings will not.
Why? Again, time and incentives.
If you tasked a capable human with finding every single bug in a certain system, they presumably could do it – if given enough time and resources. These issues don't require superhuman knowledge; in fact, they require human knowledge. But to find them all often requires human knowledge scaled in superhuman ways – spending more time on it than any human reasonably would. Because again, the incentives are simply not there for a human to spend their entire life looking for bugs. Perhaps if the vulnerability were great enough, sure. But that's sort of an unknown until such things are found. No one creates systems with obvious vulnerabilities for others to fix. They're the byproduct of a million little variables – a scale a human isn't suited to deal with.
But AI is. Issues that might take a human years to find and fix can be found and solved almost instantly by such systems. We know this to be true because Mythos is finding issues in systems that are a couple decades old! Despite some level of usage the entire time, humans simply never found the issues.
Luckily, it seems, neither did attackers. That's the thing: the flip side is the real problem here. Historically, many vulnerabilities have been fixed only after someone exploited them in some way. Again, that's because the incentives are in favor of the attacker versus the defender. If and when Mythos-caliber tools are put in the hands of hackers... yeah.
That's obviously exactly why Anthropic isn't releasing Mythos to the public and also why they've set up Glasswing. While the company may be first to such capabilities, they won't be the last. They probably don't even have long to try to get ahead of the situation. While I generally dislike the nuclear weapons analogy for AI, I must admit, this all does feel a bit Manhattan Project-y. The good guys are racing against the clock to implement a new technology before the bad guys catch up. But they will. They always do.
And sadly, there's no real hope of deterrence here as with nukes. Again, incentives. Is the Glasswing gang going to unleash Mythos to take out the would-be hackers? I mean, maybe they could for a big enough evil organization. But most such bad actors will either be lone wolves or operate in tiny teams. Even if you could preemptively attack, you simply won't be able to know where to focus such concerns at all times. I mean, maybe AI would? Maybe? But that's probably overly optimistic.1
Anyway, the point is that Mythos is clearly great at finding exploits, and while the powers-that-be are trying to use it fast to fix such issues, the bad guys will eventually get their hands on it as well. So it will be a cat-and-mouse game: both tracking down those would-be bad guys and, more importantly, tracking down the vulnerabilities – and hoping the good guys can stay one step ahead technologically.
But I go back to the notion of scale. Given the issues Mythos has already found – across every operating system and seemingly every piece of software they've looked into – it's hard to feel anything other than overwhelmed here. And again, that to me is sort of the story of AI right now. It's less about "superintelligence", and more about intelligence scaled in a way that humanity cannot.
There are incredible potential upsides to this idea, such as in drug discovery and disease eradication. Again, these systems can run basically infinite scenarios – possibilities so vast that a human simply cannot even fathom them, let alone execute on them. The only limiting factor is resources – as in compute, not time. Incentives are no longer needed to lead down one path because AI can go down every path (though incentives remain on the human side of the equation, tasking such systems, of course).
This will apply to other scientific discoveries, obviously. In space, in the deep sea, etc. Humans may technically have the capabilities, but not the time.
This same general idea is what is taking coding out of our hands. And that too is being applied to other "white collar" areas of work. Reviewing legal documents is tedious and time consuming. But not for AI. Etc.
Creative endeavors feel more protected. And that's because while AI technically could write the works of Shakespeare – again, time is not an issue, and the possibilities are literally endless – the system wouldn't necessarily know when it had. It would only know which version to pick if compared against the existing works of Shakespeare. But what about future Shakespeares?
Creativity comes from constraints, not the lack thereof.
This is taste. Which has sadly become a buzzword amongst tech bros. But it does matter in the future of our interaction with AI. It's a part of what's going to raise the relative value of human-made work. But the bigger part is the other constraint, the larger one: time. People are going to learn that they're not paying for output, they're paying for input. How much time was spent on something – the most precious resource that a human being has. The variable that doesn't limit AI.
To bring it back to the moment at hand, reading about Mythos paints a clear picture of a future in which problems are both solved and created by the human-centric notions of time and incentives being thrown out the window with AI.
And it seemingly points directly to the next big technological quandary: if and when the comparatively unlimited resources of quantum computers can both make new discoveries by doing computation at a scale that's impossible right now and, at the same time, likely crack traditional cryptography. It's the same general high-level notion. And it's likely to define the next decades of both computing and the world.
In a way, it's the same idea that has defined computers from the get go. But at a scale that can now both break and fix the real world. Perhaps in real time. With an almost casualness that's impossible for the human mind to comprehend. It's both absolutely exhilarating and completely terrifying.





1 And let's not even delve into the Minority Report element of "pre-crime" here – attacking a target before a crime has been committed. What are the lines there? ↩
2026-04-08 02:33:13
10 months after their nearly $15B deal to buy Scale AI and reboot/restart their AI efforts, it seems like we're about to see the first true fruits of labor out of the Meta Superintelligence Lab. The first models are supposedly a bit past deadline and undoubtedly over budget, but wait, is there a wrinkle?
Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions of those models via an open source license, Axios has learned.
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.
Before openly releasing versions of the new models, Meta wants to keep some pieces proprietary and to ensure they don't add new levels of safety risk, according to sources.
I don't know, that reads like "eventually" is doing a lot of work there. And if that's the case, that's not surprising at all. Meta will release some "open" version of their new models... eventually. This is the exact same strategy that Google, and OpenAI, and pretty much everyone else follows. They ship their frontier models and products, then sometime down the road, they release smaller, less capable open source variants. OpenAI was late to this particular party, but finally ran this playbook last year. Google just released their latest version of "Gemma", their "open" version of Gemini.
They all do this to keep their key work under lock-and-key, using the blanket of safety as the (undoubtedly at least partially legitimate) rationale. Meta will be no different here.
I guess the angle is that Meta is still going to do "open" at all after the "Llama" debacle. But it seemed like they always were going to – that's how their teams could say with a somewhat straight face that they weren't abandoning "open", that it would always be a part of the playbook. As I wrote last July:
The open source strengths that Zuckerberg touted, turned out to perhaps be weaknesses when it came to competing at the highest end of the market. It doesn't mean open source (again, open weight) is bad – it just means Meta's strategy here may have been flawed. And I suspect where they'll net out is the same strategy that Anthropic, Google, and soon OpenAI are doing. That is, keep the cutting-edge models closed and open source older and smaller models more selectively.
Bingo. Back to Fried:
The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and ensuring that there is a U.S.-made option that is open for developers.
Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely as possible around the world.
Again, there are already US-made options for developers, the smaller, less performant "open" models. That's different from Meta's previous strategy with the Llama models, where they were "open" from the get-go. That strategy, of course, did not work. Hence Meta needed to spend $15B on Scale and billions more on other, fresh AI talent.
By the way, the US angle here is clearly meant to counter the rise of "open" models coming out of China. But even there, the large companies seem to be moving into a more proprietary posture. Why? To try to make the technology work as an actual business and to win the AI day. As it turns out, there are downsides to "open" – Meta learned this the hard way when some of those Chinese models seemingly used Llama as a base model from which to distill their own. Of course, some of those model makers may have been doing this with OpenAI models as well – and not the "open" ones. We'll see what the next version of DeepSeek looks like, which also seems pretty delayed at this point.
The more interesting element may be the notion of focusing on consumers. This, of course, has been OpenAI's strength. But while they're not abandoning it, they're clearly easing their foot off the consumer product gas to slam on the enterprise pedal, in order to counter Anthropic. So perhaps Meta does have a bit of an opening here...
The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.
Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.
Billions and billions spent to get a model out the door that doesn't compete at the frontier? Of course, Microsoft is in the same boat, promising that they simply got a late start to the frontier game (thanks for nothing, OpenAI) but that they're closing in. But again, it's not like OpenAI and Anthropic – let alone Google – are just going to hit pause and let everyone else catch up.
And while xAI previously seemed to prove the ability to catch up was doable with enough money burned and corners cut, it hasn't really mattered. Will it for Meta?
Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram — free services with global scale that competitors can't easily match.
But that was the same strategy with Llama. Again, it didn't work. So will these new proprietary models change the equation? I doubt it. But we'll see!
One more thing: might Meta keep the 'Llama' naming scheme around for the "open" variants of their models? Unclear. They may just want to move away from the tainted brand, but it was always a cute/clever name. The new fruit codenames – Avocado, Mango, etc. – just seem to be copying OpenAI – though I still wish they would have actually shipped those names!



2026-04-07 20:27:21

Since the dawn of Spyglass, easily one of the most consistent requests I get is for an audio version. That is, posts read aloud. As someone who has listened to nearly everything I read for years and years at this point, that obviously resonated with me. So I'm happy to say that now it's here for you to hear.
I'm launching this today as a feature for members of The Inner Ring. Whenever I publish a 'From Afar' column, paid members will also get access to an audio version – henceforth dubbed 'Spyglass Aloud' – just in case you prefer to consume the content audibly.
Thanks to a new partnership between Ghost and the podcasting service Transistor, this is fairly seamless. With your subscription, you'll get access to a private audio feed tied to your account. This feed will allow you to listen in your podcast player of choice with a couple clicks. If you unsubscribe, you'll lose access to the feed, as this is all synced in the background between the two services.
Note that I'm only enabling this for columns – i.e. longer posts, not the shorter ones – and it will only be for posts going forward at the moment. I may record some older hits retroactively over time, which would show up in your podcast feed.
Speaking of recording, I suspect this will be somewhat controversial, so I won't beat around the bush: I'm using AI. Specifically, I've used the service Eleven Labs to clone my voice.
YOU DID WHAT?
It actually requires quite a bit of work – well, quite a bit of reading aloud – to get the voice up to a quality that I think is solid. And I honestly think it's quite a bit better than me actually trying to read every single post aloud and recording it that way. Believe me, I tried. After countless demo runs, I realized that I sort of suck at reading things out loud. Especially longer works, where you're inevitably going to trip up and have to go back and edit. It's not like a podcast where you just power through; it's fairly tedious – which I don't mind, this is a paid feature, after all – but the end result, if I'm being honest, just isn't as good as the AI version. I did a lot of tests on this. It wasn't even close.
Again, I know that some people won't like this approach philosophically, especially right now because AI is such a hot-button issue. But I don't think this should be controversial: I'm opting in with my own voice. Am I worried that there will be an army of robot M.G. Sieglers in the future walking around talking your ear off in my voice? I mean, that would be sort of fun. As long as they weren't murderous.
Honestly, the technology is fairly incredible (and as such, not exactly cheap at the moment). It's not perfect, of course, but it's actually more perfect than what I was able to achieve the old-fashioned way, talking into a mic like an animal.
Shout out to my friend Casey Newton who had the courage to journey down this controversial path first. After he launched an audio version of Platformer, it immediately became the way I prefer to consume his content. It's good and fun/nice to hear it in Casey's voice. Even if it's RoboCasey.
Anyway, I'll embed the first such recording below so everyone can get a taste of what it sounds like. And if you're interested, you can join The Inner Ring here. And if you're not, well, you don't have to listen! Typed words will remain the primary interface around these parts.
Thank you, as always, for reading – and now perhaps listening.
Once you sign up for The Inner Ring, be on the lookout for a welcome email which will show you how to get access to your podcast feed. But I've also published the guide on the About page, and I'll put it below for good measure...
Paid members will see the following new 'Podcasts' area on their account page, with a note to 'Access your private podcast feed'.


Clicking on 'View' will take you to a Spyglass landing page hosted on Transistor where the feed is housed.

Clicking on 'Spyglass Aloud' here will take you to a page where you can easily subscribe via your podcast player of choice, and also sign up for email alerts when new episodes are published if you want (if you subscribe via a podcast player, they'll automatically appear, of course). This all works on mobile and desktop.

2026-04-07 04:14:52
Happy Easter Monday. Hopefully everyone is rooting for Michigan tonight in the NCAA Championship Game. #GoBlue
Spyglass Dossier is a newsletter featuring links and commentary from M.G. Siegler on timely topics found around the web.
🦸🏻♂️ OpenAI's "Super App" – An interesting side note of this strategy: it shifts us back from a mobile-first and focused world, to a desktop-centric one. That's not surprising given the coding focus of Codex (and Claude Code), but it's decidedly not where the masses are these days, and ChatGPT is at such a scale – 1B actives any day now – that mobile will presumably always end up as the most-used surface for them (even if/when their own devices hit). The strategy – get Codex in front of non-engineers through ChatGPT and morph it into a general purpose agent – isn't a bad one, but there are real risks in execution. And given that OpenAI is full-on chasing Anthropic here, what might they be focusing on – shoring up Claude Code/Cowork or something new? [Sources 🔒]
🔀 OpenAI Shuffles Leadership – Holy Good Friday news dump. COO shifting to "special projects". CMO stepping down (to focus on health). CEO (of AGI Development) taking a leave of absence (to focus on health). This all comes on the heels of the $122B fundraise, a questionable M&A deal, and a mandate to kill off "side quests" and focus on the task at hand: taking on Anthropic. The latter two were spearheaded by the aforementioned CEO of AGI (now on leave) Fidji Simo. Clearly they waited to announce all of this until the Friday before Easter. Will Simo come back as COO (since it seemed like that's what she essentially was anyway)? The company is saying no (that they won't appoint a new COO). Hard to give up a CEO title, I guess – even a secondary one. Never a dull moment at OpenAI. [Bloomberg 🔒]
📺 YouTube Leans Back – While "Stations" are essentially YouTube's answer to FAST channels elsewhere, it has nothing to do with price – since, of course, YouTube is already free (though you definitely should consider YouTube Premium to remove the insane amount of ads if you watch a lot of YouTube). It's all about the experience of just being able to put something on and not worry about it. Yes, like old school TV or cable. Not having to select something else when it ends. Just putting something on in the background and letting it wash over you. Endlessly. For hours. Sometimes, people simply like not feeling alone in their homes. I feel like I've been writing about this notion for a decade. Because I have. [Verge]
"This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon."
– Zach Manson, a software developer who noted that Microsoft had injected an ad into his GitHub pull request. What was the ad for? Copilot, of course. Microsoft quickly removed the "feature" after the backlash. Better than Apple in that regard, I guess...

Below, members of The Inner Ring will find thoughts on:
• OpenAI's Internal IPO Tension
• OpenAI's Industrial Policy
• Cursor's Claude Code & Codex Competitor
• and more...
2026-04-05 06:59:27

Apple crossed the five decade mark this past week. I saved so many retrospectives to read this weekend, but first I wanted to jot down some of my own thoughts. Since it was clearly finally okay for Apple itself to look back, I figured I would too.
I've previously written about how I was a full-blown PC kid growing up. People seem to find this funny since when I was a tech reporter, I in no small way made a name for myself covering Apple. My inbox and the comment sections of TechCrunch and VentureBeat from about 2005 to 2011 were constantly clogged by the term "fanboy". Which truly never bothered me. I was a fan!
Anyway, the backstory is a bit more nuanced. While the first computer our family owned was an IBM PS/2 Model 55 SX (eat your heart out with that branding, Microsoft), the first computer I actually used was at school. My elementary school in Ohio, like most schools back then, was filled with Apple machines. Specifically, the Apple IIe, the third iteration of the Apple II (after the original and the Apple II Plus). That was the first computer I actually used in my life.
Throughout school, the computer labs slowly sprinkled some Macs into the mix as well. And while I always appreciated the UI – the trash can in particular – I was a full-blown Windows aficionado at that point. The Mac just felt foreign to me.
By that point, we were well into the 1990s and Apple had started struggling. Seemingly piecemeal variants of the Mac started showing up in our computer labs. And some Mac clones – the horror – started showing up in the computer stores I would frequent. At the same time, Microsoft stepped on the gas. As a teenager, I lined up at midnight for the launch of two things: Pearl Jam albums and Windows 95.
Apple receded further into my computing background. One of my friends was a Mac loyalist – and we constantly made fun of him for it.1 The cool kids had Gateway 2000 PCs shipped in their big ass cow print boxes. Or, at the very least, you were getting a Dell, dude. Specs were king in the age of Pentium. Much like today, RAM reigned supreme. Apple seemed lost.
And then... the Mac crept back into the computer stores of the world. Rumor grew of a new operating system being worked on in the west. Whispers of a new UI. And Steve Jobs returned. Apple's time had now come.
Still, I went off to college with a new Gateway tower PC and 44-pound, 19-inch monitor (yes, seriously). Paired with the rocket-fast ethernet connection in the dorm rooms, I was in computing heaven. This was also, I should point out, the heyday of Napster, LimeWire, and every other P2P file-sharing variant.
One day during freshman year I recall seeing some banners around campus to come check out the latest wares from Apple. Out of curiosity and boredom, I swung by a computer lab dotted with new iMacs, where some Apple reps were showing off early builds of OS X. It looked amazing – on the surface, not too dissimilar from how it looks today, 25+ years later – but also seemed fairly slow and buggy. I took a "Flower Power" iMac banner – yes, the banner, not the Mac – home with me.
While I didn't go to the midnight launch of Windows XP in 2001, I did buy it on day one. And Best Buy had a launch day deal where you got a free MP3 player with the purchase. It was the Intel Personal Audio Player 3000. Yes, my first MP3 player was not an iPod, but rather was made by Intel. Yes, Intel!
Amazingly, this was just two days after that original iPod was announced on stage by Steve Jobs.2 I remember thinking it was strange that Apple was making such a big bet on a music player.
It was not strange.
Like so many, that ended up being my entry point into the Apple ecosystem. The year was now 2004 and I had just graduated from college. I would be driving all the way across the country by myself. The nearly 2,400 mile, 10-state journey would take about 35 hours spread over a few days. I needed something to kill the time. Like, say, a thousand songs in my pocket.
Technically, by then, it was far more than 1,000, more like 10,000, as I bought the 40GB fourth-generation iPod for $399. And technically those songs weren't in my pocket, but attached to some crappy third-party FM transmitter which required a station change every few hundred miles due to radio interference. But my god was that device glorious. It stored every song I owned!3 It made my old Intel MP3 player – with 64MB of memory – seem like a piece of junk.
If those OS X demos laid the soil, the iPod planted the Apple seed.
I made it to California in no small part thanks to that iPod. And by the time I started working various jobs in Hollywood, it was all Apple, all the time. While the Dell laptop that made the journey with me was far faster, I was slowly learning about Apple software that simply wasn't available on a PC. It wasn't one thing, it was a million little touches that only became clear upon using these Apple products regularly: there was care put into the process. These devices were thoughtfully designed. A joy to use. They were everything a PC was not.
Before too long, I broke down, despite being basically broke, and bought an iBook.
No, not one of the fun candy-colored variety – I was a bit too late for those. Instead, I had a pristine, all-white iBook G4. It was tiny – 12.1" screen, just under 5 lbs – and beautiful. My Dell started to gather dust.
That was the last PC I ever owned. The iBook led to an iMac. And that led to a few dozen other Macs over the years, the most recent being the new MacBook Neo, on which I type this right now.
It seems wild to think now that the iPhone launched only a few years after I bought my first iPod. Perhaps because I was still so new to the Apple ecosystem, I also wasn't sold on the latest and greatest – despite Jobs' masterful presentation. Believe it or not, I was basically in the Steve Ballmer camp; $500? For a phone?!
Then I happened to find myself in an Apple Store on launch day in June 2007. I've never so quickly gone from thinking I would not buy something to buying it. Simply holding it for a few seconds was enough. This sounds hyperbolic or ridiculous. But it really was almost like an out-of-body experience. I just knew I was holding the future.
From time to time I'll pick up my iPhone – I've now bought one every single year since 2007, which seems like a problem, but I justify it with the notion that it's the most important device I own and use it far more than anything else, so always having the fastest version seems like a no-brainer – and I still feel this way. I still remember the hours spent every night on my PC waiting for things to load. And the hours spent trying to connect to the internet via dial-up modems. And the internet before there was even the world wide web. This context still makes the iPhone seem like magic. It's hard to imagine it will ever stop feeling that way.
From that Apple IIe in elementary school to now, that's roughly 40 years of Apple usage out of Apple's 50. But it has been heavily back-weighted, with really the past 20 years being all-Apple, all the time. I undoubtedly give Apple more grief these days, which I view as both warranted and an accurate reading of the broader room as they've grown into what has been the largest and arguably most powerful company in the world for much of the past decade. But I still absolutely love the products.
Yes, even the Vision Pro, which I believe Apple erred in releasing when they did. But I was actually using it last night for the first time in a few weeks, and they continue to refine it both software-wise and content-wise, to the point where it actually is getting more impressive with age. Granted, I still wouldn't recommend anyone spend $3,499 on it right now. But you no longer have to squint to see a world in which a far more svelte variety (in both price and size) is in the future cards.
My only real concern now for the company is that Apple could get lapped if they don't fully control their own AI destiny. I think the Gemini partnership is a good (and necessary) step. But it probably needs to be a stopgap and bridge to buy them time to get to where they need to be internally. I believe they understand that, but I also believe no one yet fully knows how this will all play out. And it's probably the biggest risk for those next fifty years as the robots take over the world and whatnot.
For now, Apple remains positioned well. With the iPhone as the overall most-used device (one that will continue to evolve into the central computing hub as new devices come), the iPad as the main computer for many people, and the Mac as the core machine for "real work" for an incredible – and still growing – number of people. Happy 50th, Apple.
1 Though that same friend did get a Power Mac G4 Cube at one point, which we all agreed looked amazing. That CD slot! But we also agreed it was the most beautiful paperweight ever created. ↩
2 And yes, this was all just a few weeks after the attacks of September 11, 2001. ↩
3 We'll use "owned" loosely here. See also: the aforementioned Napster days... ↩
2026-04-03 19:53:29

"The ability to influence the behaviour of others and obtain desired outcomes through attraction and co-option." I like the framing of the term "Soft Power" by The British Academy, for my purposes here.1 I think it serves as a succinct explanation for OpenAI's maneuver in acquiring TBPN.2
Yes, the company which just made a big to-do about killing off "side quests" and focusing – including, most notably, killing off their video product Sora – immediately followed that up by buying a video podcast. And yes, they announced it the day after April Fools. And no, it was not a joke. And yes, everyone made that joke.
I too had to dish out my initial snark upon hearing the news. Which was surprising to the point that it's the most I've been messaged by people with varying levels of "WTF?" in recent months. But actually, thinking more about it, and reading over some of the reports and statements, I think it makes some level of sense. I'm not sure it will work the way OpenAI hopes. But it's not the worst idea to try.
First and foremost, OpenAI's CEO of... something,3 Fidji Simo actually lays it out pretty plainly in her remarks on the matter:
As I've been thinking about the future of how we communicate at OpenAI, one thing that's become clear is that the standard communications playbook just doesn't apply to us. We're not a typical company. We're driving a really big technological shift. And with the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.
That pretty much says it all right there. OpenAI clearly does not like the way the current narrative around AI is being framed. And they undoubtedly see the trend where it's shifting even more negative as time goes on, certainly in the US. I might argue, as I have, that a lot of this has to do with the actual messengers and messaging around the technology coming out of the AI labs and companies. But that's a hard thing to change. Steve Jobs, sadly, is not walking through that door. So instead, you take a page from the Don Draper playbook and "change the conversation".
Draper's method to do that would be through advertising. And well, OpenAI has been trying that. It doesn't seem to be working. Another way? Own a media company.
That’s exactly what TBPN has built. So rather than trying to recreate that ourselves, it made a lot of sense to bring them in, support what they’re doing, and help them scale—while keeping what makes them special. A core part of this is editorial independence. TBPN will continue to run their programming, choose their guests, and make their own editorial decisions. That’s foundational to their credibility, and it’s something we’re explicitly protecting as part of this agreement.
Again, Simo says it right there. Clearly, OpenAI was thinking about building up their own media entity in-house. But while a lot of companies have tried this to varying degrees of success in the era of "going direct", Simo clearly thought it made sense to buy versus build here.4
It's an expensive buy – "low hundreds of millions," reports George Hammond of The Financial Times – certainly when you're going to throw out the actual business (which, by all accounts, was working quite well for them – part of why OpenAI had to pay such a premium). But it's all relative for a company that just raised, um, $122B.
But the other important distinction here is that OpenAI is saying they're going to leave the TBPN guys alone to do their thing as they have been with the show. Famous last words and all that, but I do believe that's the intention here because again, to me this is about soft power. That is, this isn't about acquiring TBPN to turn it into a propaganda arm of OpenAI. That would be dumb because obviously that would backfire. Instead, Simo realized that TBPN already had alignment in their mandate to cover the tech world, and AI more specifically, more positively than many other outlets are doing at the moment.5 Spreading such gospel will help OpenAI's own interests quite naturally.
Now, there's a very real and fair question of what happens if that mandate changes. What if, say, AI starts to lead to some outcomes which are actually bad in very tangible ways for society (yes, some will say this is already happening, but I mean indisputable here for the sake of the argument)? Presumably, TBPN's independence would allow them to change their coverage and tone to meet the moment. Obviously, neither OpenAI nor TBPN thinks that will happen. And if it did, OpenAI would probably just cut the show loose.
But that sort of points to what's left unsaid here. That while TBPN may stay the same on paper, in practice, the way others interact with it will change. Is, say, Dario Amodei going to come on the show? Probably not! But that's a bit unfair since he wasn't a guest before the deal. A more interesting one is Mark Zuckerberg, who has been on the show. Will he be back? Probably not any time soon.
Other rivals will be in similar camps. Some may suck it up and go on to reach the audience. And if TBPN can indeed keep growing, fueled by OpenAI's resources, perhaps it will force the hand of Zuck and others to keep playing ball. That's undoubtedly the hope of both the TBPN team and OpenAI here. They have to know that, pledge of independence or not, this deal changes the dynamics of the show. They just think they can overcome them. We'll see.
And that ties into the other potential issue here. While TBPN has built up an impressive audience in a relatively short amount of time, the reported 70k regular viewers per episode is obviously tiny compared to any number of other media outlets, let alone endless other shows on YouTube. If TBPN doesn't keep growing, the soft power playbook loses its effectiveness, fast.
Prior to this deal, the TBPN founders had been on the record noting that they didn't need to scale the show to reach a mainstream audience. That they were happy playing in their niche. Which made a lot of sense – it was a good niche that they were clearly monetizing well! And not raising VC money gave them the optionality of staying in that niche. It was big enough.
This deal changes that equation. They no longer have to monetize, but for the soft power to work, they need to expand their reach. Otherwise they're just talking to the base over and over again. That works for Fox News, but that's a far different scale. And while there's clearly soft power at play there, it's also still a business.
This is just a soft power play. But for that to work, you need actual power. OpenAI is going to need to boost TBPN into something much larger than it is, otherwise, what's the point here? Again, TBPN was already doing what they're doing without the acquisition.
Maybe branching into adjacencies helps them escape the echo chamber effect – they mention events as one area of continued interest – but this is all very TBD for TBPN.
All that said, I still think this is an interesting tactic to try. As Simo notes, and as we're all well aware, OpenAI is not a normal company. The implication is that normal strategies won't work for them – or at least work as well. Clearly, the narrative has kept shifting away from them. Part of that is the macro view of AI (again, certainly in the US), but part of it is also their own self-owns. TBPN will try to help with the former, while founders Jordi Hays and John Coogan will work in-house in their apparent spare time to help with the latter. Will it work? We'll all get to watch in real-time!
I'll just end by quoting Don Draper earlier in the same conversation:
"Your concern over public opinion shows a guilty conscience. But what good is that serving you if what is to be done is already underway?"





1 The term, of course, was popularized by American political scientist Joseph Nye (who also happened to be a British Academy Fellow), in his 1990 book, Bound to Lead: The Changing Nature of American Power. Nye passed away just a year ago. ↩
2 Yes, while "TB" technically started as "Technology Brothers" – in an ironic sense – "Technology Business Podcast Network" has a decidedly more professional and ESPN ring to it... ↩
3 Technically now "CEO, AGI Deployment" but it's a title that keeps shifting like Ace Rothstein's in Casino. But it's not CEO of the entire company, at least not yet ;) ↩
4 A lesson from her Meta days? ↩
5 Which throws me back to my days, many moons ago, at TechCrunch, where we were constantly levied with such a charge, but instead made it a strength. It was... a very different time, of course. And I swear this is the last time I'll point out that what we were trying to do with OMG/JK wasn't entirely different than TBPN (just more Pardon the Interruption style). ↩