2026-04-09 19:57:21

Since the dawn of this current Age of AI, there has been an assumption that at the highest level, there are two markets – the markets as old as time, or at least as old as tech: consumer and enterprise. Startups, when they're born, tend to pick one lane. And if they grow into large companies, they tend to stick in that lane.1
AI, to date, has been playing out similarly. While OpenAI may not have set out to be a consumer company, ChatGPT shoved them into that bucket and market. Anthropic, perhaps in part because ChatGPT became the "Kleenex" of consumer AI, went mostly down the enterprise path.
Now, with Anthropic seeing massive success on their path thanks to the rise of "vibe coding" branching into the first truly agentic workflows, OpenAI is scrambling to pivot-to-enterprise. They undoubtedly wouldn't frame it that way, and it is slightly unfair, but it's also not entirely untrue. It's why they keep touting how the enterprise business should match the consumer business this year. In this way, coding may be to 'Big AI' what the cloud was to the last generation of Big Tech. That is, their inroads to enterprise.2
Still, I'm somewhat skeptical of the strategy because OpenAI is aiming to shove Codex inside of ChatGPT itself. Yes, this has worked for Anthropic with Claude and Claude Code (and now Cowork) residing in the same desktop app, but that's also because Claude doesn't have nearly the consumer business that ChatGPT has. And I'm just worried that OpenAI, after spending the past couple of years trying to simplify ChatGPT, is about to complicate things considerably.
That said, they sort of have to try? Anthropic seemingly has such momentum that the only obvious lever OpenAI has to pull in order to jump into the race is to leverage that ChatGPT installed base. To leverage their Kleenex position, as it were. The hope would be that we're early enough in the agentic revolution that ChatGPT – and not Claude, and certainly not OpenClaw – can be the one to introduce the masses to it. You can see the logic, but there's a ton of execution risk.
Thinking through this has led me down another path that's tangential, but related: what if AI plays out similarly to the smartphone? That is, whereas everyone used to have their work computers and home computers, the iPhone changed this dynamic. Because the smartphone took over for many people as their main computer, and most people didn't want to carry around two smartphones, companies had to start adopting 'BYOD' – bring your own device – strategies. There have obviously been trade-offs – namely in the form of security and compliance – but there was no fighting the convenience tide here. Even if they had a work phone, everyone was doing everything on their main device – see: any number of headlines about any number of politicians over the years.
Undoubtedly thanks to the inherent cost savings as well, this movement has since spread to computers/tablets and to schools and other walks of life.
Anyway, what I'm wondering now is if this dynamic plays out in AI too. As we all increasingly have the one AI we use the most, and that builds up a moat in the form of memory, might we start insisting on using that AI in the workplace? Yes, 'BYOAI'.
Once again, security will be the main, obvious issue here. And a subset of that is privacy – which has also been the case with BYOD, of course. But might convenience win the day again? If, say, you have the ways you like to work with your AI and the workflows established in that memory, wouldn't you want to bring that to work as well?
Some people undoubtedly would say "no", that they want to separate work from home in that regard. Maybe it's more similar to email in that way. But even there, the lines have clearly blurred over time. (Again, see: politicians.) So it probably just depends on if workplaces end up implementing more rigid harnesses on top of the AI models they choose. Which is to say, they don't just choose the chatbots out-of-the-box – and soon the "superapps" from OpenAI and the like.
And sure, large enough companies will obviously have tailored AI solutions. Certainly in more highly regulated industries. But if you really believe AI is going to permeate everything – much like the smartphone has – doesn't it stand to reason that a small mom-and-pop store in Ohio is going to simply go with the AI brand they use at home? Especially if they're already paying for it there.
Obviously, I don't know how this will play out, but my instinct is that for many businesses, and certainly smaller ones, there will be this 'BYOAI' policy. Perhaps the AI tools themselves even implement a "work" mode to complement the "incognito" mode that they all borrowed from web browsers. Many web browsers now, of course, also offer multiple profiles to split up work and home.
With all that in mind, of course OpenAI needs to ensure they snuff out the Anthropic threat in enterprise. Because it also stands to reason that a lot of people will get their start using AI at work, especially if there's any level of coding and eventually agentic needs there. And if that's the case, it could almost be the opposite of the BYOD movement – it could flow the other way, from work to home. And that could seriously imperil ChatGPT's position...
1 Yes, much of Big Tech blurs such lines, but that's out of necessity: these companies are so large that in order to grow, they need to go after any and all customers. Still, they're usually bucketed into one of those markets: Apple = consumer, Microsoft = enterprise, Meta = consumer, etc. ↩
2 Google and Amazon are perhaps the most diversified Big Tech companies thanks to the rise of AWS and GCP after they built massive consumer businesses. ↩
2026-04-09 02:30:37
Good news, Wall Street. Meta isn't burning all those billions on nothing. Well, we think. It's all a bit TBD. Quite literally.
But now at least we get the first taste. 'Muse Spark' is an awfully generic name, but the early results seem promising. Of course, that was the case with Llama as well until it wasn't. We're probably going to need to see a bit more than benchmarks shared by Meta here. But really, the proof will be in the usage. As in, is anyone actually going to use these models? And not just because they're shoved into surfaces that billions of people use?
To their credit, Meta is being honest here. Muse Spark isn't really competitive with the truly frontier models from others on a number of fronts – namely, coding. Seemingly the benchmark such companies care about the most right now. Instead, Meta believes they've made a relatively svelte model that at least deserves to sit at the same table as the other labs for a number of tasks. Yes, Meta has made a table stakes AI.
That's harsh, but fair. I mean, no one seems too worried about this model ending the world. On the speed-to-launch, it is impressive – it took them nine months to birth this baby. Also table stakes for a human being, but fast when rebooting your AI lab and starting from scratch! Of course, xAI also previously got to the cutting edge in record time and... it hasn't really mattered. Well, unless the goal is to merge – first with a sub-scale social network, then with an orbital-scale rocket company. That's probably not Meta's game plan here, so the results are going to have to stand on their own far more.
But perhaps not completely, because again, Meta has the unique advantage of having several of the most widely used surfaces on the internet and mobile. If nothing else, it seems like Muse Spark will help to power Facebook and Instagram recommendations – and yes, ads. And don't forget the glasses. That's the really big, future play here. If Meta can sustain the growth of their Ray-Bans, they have a shot to take on Apple. Not the iPhone, but their AI hardware projects. Google too. And, of course, OpenAI.
And no, this model isn't "open". That writing has clearly been on the wall since shortly after Meta bought Scale and kicked off this sprint. Mark Zuckerberg may have spent much of the past few years talking up "open" "open" "open" "open" "open", but well, sometimes "open" backfires, just ask Google.
Yes, yes, there's still some "open" lip service here. Future Muse Spark variations or whatever. But that too is table stakes.
Anyway, that's all down the line. Clearly, 'Spark' is just the first 'Muse' model. This was the one codenamed 'Avocado' and there's a bigger fruit apparently in the works in the form of 'Watermelon'. Hopefully that one gets a better final name.
One more thing: perhaps the most interesting element of the Muse movement is the notion that Meta intends to sell access via APIs. A first step towards a bigger Meta Cloud offering? You don't spend $140B a year for table stakes.





2026-04-08 20:00:10

It's a question of commitment. And incentives. And scale.
To me, that's how I'd boil down the current state of AI relative to humans. It's extremely oversimplified, but I'm not sure it's wrong.
I started thinking about these notions when writing about the value of writing in the age of AI. This naturally led to thinking through the value of thinking in the age of AI. But what really drove home the concept was reading all the coverage around Anthropic's latest model, Mythos. You know, the one too dangerous to be released.
You can't help but read all of these stories about all the bugs, vulnerabilities, and exploits that Anthropic's model is finding across basically all computing systems out there in the real world and think "holy shit, we're cooked." While 'Project Glasswing' seems like a valiant effort to get ahead of the issues, come on, we know how this movie ends...
But my main takeaway is that it has less to do with the genius of these AI models – I mean, that's part of it, and clearly Mythos seems to be the smartest yet – and more to do with the breadth. Both of knowledge and time.
Said another way, reading all these security experts and researchers talk about Mythos, it's pretty clear that the model isn't so much finding issues that human beings cannot, but it's finding issues that human beings have not, and most depressingly, finding things that human beings will not.
Why? Again, time and incentives.
If you tasked a capable human with finding every single bug in a certain system, they presumably could do it – if given enough time and resources. These issues don't require superhuman knowledge; in fact, they require mere human knowledge. But oftentimes, finding them all requires human knowledge scaled in superhuman ways. Again, spending more time on it than any human reasonably would. Because again, the incentives are simply not there for a human to spend their entire life looking for bugs. Perhaps if the vulnerability was great enough, sure. But that's sort of an unknown until such things are found.
No one creates systems to have obvious vulnerabilities for others to fix. They're the byproduct of a million little variables – a scale a human isn't suited to deal with.
But AI is. Issues that might take a human years to find and fix can be found and solved almost instantly by such systems. We know this to be true because Mythos is finding issues in systems that are a couple decades old! Despite some level of usage the entire time, humans simply never found the issues.
Luckily, it seems, neither did attackers. That's the thing, the flip side is the real problem here. Historically, many vulnerabilities have been fixed only after someone exploited them in some way. Again, that's because the incentives are in favor of the attacker versus the defender. If and when Mythos-caliber tools are put in the hands of hackers... yeah.
That's obviously exactly why Anthropic isn't releasing Mythos to the public and also why they've set up Glasswing. While the company may be first to such capabilities, they won't be the last. They probably don't even have long to try to get ahead of the situation. While I generally dislike the nuclear weapons analogy for AI, I must admit, this all does feel a bit Manhattan Project-y. The good guys are racing against the clock to implement a new technology before the bad guys catch up. But they will. They always do.
And sadly, there's no real hope of deterrence here as with nukes. Again, incentives. Is the Glasswing gang going to unleash Mythos to take out the would-be hackers? I mean, maybe they could for a big enough evil organization. But most such bad actors will either be lone wolves or operate in tiny teams. Even if you could preemptively attack, you simply won't be able to know where to focus such concerns at all times. I mean, maybe AI would? Maybe? But that's probably overly optimistic.1
Anyway, point is that Mythos is clearly great at finding exploits, and while the powers-that-be are trying to use it fast to fix such issues, the bad guys will eventually get their hands on it as well. So it will be a cat-and-mouse game: tracking down those would-be bad guys, but more importantly, tracking down the vulnerabilities and hoping the good guys can stay one step ahead technologically.
But I go back to the notion of scale. Given the issues Mythos has already found – across every operating system and seemingly every piece of software they've looked into – it's hard to feel anything other than overwhelmed here. And again, that to me is sort of the story of AI right now. It's less about "superintelligence", and more about intelligence scaled in a way that humanity cannot.
There are incredible potential upsides to this idea, such as in drug discovery and disease eradication. Again, these systems can run basically infinite scenarios – possibility spaces so vast that a human cannot even fathom them, let alone explore them. The only limiting factor is resources – as in compute, not time. Incentives are no longer needed to lead down one path because AI can go down every path (though incentives remain on the human side of the equation, tasking such systems, of course).
This will apply to other scientific discoveries, obviously. In space, in the deep sea, etc. Humans may technically have the capabilities, but not the time.
This same general idea is what is taking coding out of our hands. And that too is being applied to other "white collar" areas of work. Reviewing legal documents is tedious and time-consuming. But not for AI. Etc.
Creative endeavors feel more protected. And that's because while AI technically could write the works of Shakespeare – again, time is not an issue; the possibilities are literally endless – the system wouldn't necessarily know when it had. It would only know which version to pick if compared against the existing works of Shakespeare. But what about future Shakespeares?
Creativity comes from constraints, not the lack thereof.
This is taste. Which has sadly become a buzzword amongst tech bros. But it does matter in the future of our interaction with AI. It's a part of what's going to raise the relative value of human-made work. But the bigger part is the other constraint, the larger one: time. People are going to learn that they're not paying for output, they're paying for input. How much time was spent on something – the most precious resource that a human being has. The variable that doesn't limit AI.
To bring it back to the moment at hand, reading about Mythos paints a clear picture of a future in which problems are both solved and created by the human-centric notions of time and incentives being thrown out the window with AI.
And it seemingly points directly to the next big technological quandary: if and when quantum computers arrive, their comparatively unlimited resources will both enable new discoveries via computation at a scale that's impossible right now and likely crack traditional cryptography. It's the same general high-level notion. And it's likely to define the next decades of both computing and the world.
In a way, it's the same idea that has defined computers from the get-go. But at a scale that can now both break and fix the real world. Perhaps in real time. With an almost casualness that's impossible for the human mind to comprehend. It's both absolutely exhilarating and completely terrifying.





1 And let's not even delve into the Minority Report element of "pre-crime" here – attacking a target before a crime has been committed. What are the lines there? ↩
2026-04-08 02:33:13
10 months after their nearly $15B deal to buy Scale AI and reboot their AI efforts, it seems like we're about to see the first true fruits of labor out of the Meta Superintelligence Lab. The first models are supposedly a bit past deadline and undoubtedly over budget, but wait, is there a wrinkle?
Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions of those models via an open source license, Axios has learned.
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.
Before openly releasing versions of the new models, Meta wants to keep some pieces proprietary and to ensure they don't add new levels of safety risk, according to sources.
I don't know, that reads like "eventually" is doing a lot of work there. And if that's the case, that's not surprising at all. Meta will release some "open" version of their new models... eventually. This is the exact same strategy that Google, and OpenAI, and pretty much everyone else follows. They ship their frontier models and products, then sometime down the road, they release smaller, less capable open source variants. OpenAI was late to this particular party, but finally ran this playbook last year. Google just released their latest version of "Gemma", their "open" version of Gemini.
They all do this to keep their key work under lock-and-key, using the blanket of safety as the (undoubtedly at least partially legitimate) rationale. Meta will be no different here.
I guess the angle is that Meta is still going to do "open" at all after the "Llama" debacle. But it seemed like they were always going to – that's how their teams could say, with a somewhat straight face, that they weren't abandoning "open", that it would always be a part of the playbook. As I wrote last July:
The open source strengths that Zuckerberg touted, turned out to perhaps be weaknesses when it came to competing at the highest end of the market. It doesn't mean open source (again, open weight) is bad – it just means Meta's strategy here may have been flawed. And I suspect where they'll net out is the same strategy that Anthropic, Google, and soon OpenAI are doing. That is, keep the cutting-edge models closed and open source older and smaller models more selectively.
Bingo. Back to Fried:
The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and ensuring that there is a U.S.-made option that is open for developers.
Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely as possible around the world.
Again, there are already US-made options for developers, the smaller, less performant "open" models. That's different from Meta's previous strategy with the Llama models, where they were "open" from the get-go. That strategy, of course, did not work. Hence Meta needed to spend $15B on Scale and billions more on other, fresh AI talent.
By the way, the US angle here is clearly meant to counter the rise of "open" models coming out of China. But even there, the large companies seem to be moving into a more proprietary posture. Why? To try to make the technology work as an actual business and to win the AI day. As it turns out, there are downsides to "open" – Meta learned this the hard way when some of those Chinese models seemingly used Llama as a base model from which to distill their own. Of course, some of those model makers may have been doing this with OpenAI models as well – and not the "open" ones. We'll see what the next version of DeepSeek looks like, which also seems pretty delayed at this point.
The more interesting element may be the notion of focusing on consumers. This, of course, has been OpenAI's strength. But while they're not abandoning it, they're clearly easing off the consumer gas to slam on the enterprise pedal, in order to counter Anthropic. So perhaps Meta does have a bit of an opening here...
The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.
Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.
Billions and billions spent to get a model out the door that doesn't compete at the frontier? Of course, Microsoft is in the same boat, promising that they simply got a late start to the frontier game (thanks for nothing, OpenAI) but that they're closing in. But again, it's not like OpenAI and Anthropic – let alone Google – are just going to hit pause and let everyone else catch up.
And while xAI previously seemed to prove the ability to catch up was doable with enough money burned and corners cut, it hasn't really mattered. Will it for Meta?
Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram — free services with global scale that competitors can't easily match.
But that was the same strategy with Llama. Again, it didn't work. So will these new proprietary models change the equation? I doubt it. But we'll see!
One more thing: might Meta keep the 'Llama' naming scheme around for the "open" variants of their models? Unclear. They may just want to move away from the tainted brand, but it was always a cute/clever name. The new fruit codenames – Avocado, Mango, etc. – just seem to be copying OpenAI – though I still wish they would have actually shipped those names!
Update April 8, 2026: And here the new model is... and, as expected, 'Muse Spark' is not open source. To start, at least...




2026-04-07 20:27:21

Since the dawn of Spyglass, easily one of the most consistent requests I get is for an audio version. That is, posts read aloud. As someone who has listened to nearly everything I read for years and years at this point, that obviously resonated with me. So I'm happy to say that now it's here for you to hear.
I'm launching this today as a feature for members of The Inner Ring. Whenever I publish a 'From Afar' column, paid members will also get access to an audio version – henceforth dubbed 'Spyglass Aloud' – just in case you prefer to consume the content audibly.
Thanks to a new partnership between Ghost and the podcasting service Transistor, this is fairly seamless. With your subscription, you'll get access to a private audio feed tied to your account. This feed will allow you to listen in your podcast player of choice with a couple clicks. If you unsubscribe, you'll lose access to the feed, as this is all synced in the background between the two services.
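For the technically curious, here's roughly how that kind of background sync can work. To be clear, this is just an illustrative sketch of the general pattern, not the actual Ghost/Transistor integration – Ghost can fire webhooks on membership events, and a small relay service could pass those along to Transistor's subscriber API for private podcasts. The endpoint path, header, and field names below are my assumptions; the real wiring is handled by the two services themselves.

```python
# Illustrative sketch only -- NOT the actual Ghost/Transistor integration.
# Assumes Ghost is configured to send "member.added" / "member.deleted"
# webhooks to this service, and that Transistor's private-podcast
# subscriber endpoint looks roughly like this (details are assumptions).
import os

import requests
from flask import Flask, request

app = Flask(__name__)

TRANSISTOR_API = "https://api.transistor.fm/v1/subscribers"  # assumed endpoint
API_KEY = os.environ["TRANSISTOR_API_KEY"]
SHOW_ID = os.environ["TRANSISTOR_SHOW_ID"]  # e.g. the 'Spyglass Aloud' show


@app.post("/ghost/member-added")
def member_added():
    # Ghost webhook payloads nest the member record under member.current
    email = request.json["member"]["current"]["email"]
    # Grant the new paid member their own private feed
    requests.post(
        TRANSISTOR_API,
        headers={"x-api-key": API_KEY},
        json={"email": email, "show_id": SHOW_ID},
    )
    return "", 204


@app.post("/ghost/member-deleted")
def member_deleted():
    # Revoke the private feed when a member unsubscribes
    email = request.json["member"]["previous"]["email"]
    requests.delete(
        TRANSISTOR_API,
        headers={"x-api-key": API_KEY},
        json={"email": email, "show_id": SHOW_ID},
    )
    return "", 204
```

The upshot: each member's feed is tied to their account, so access can be granted and revoked automatically as subscriptions come and go.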
Note that I'm only enabling this for columns – i.e. longer posts, not the shorter ones – and it will only be for posts going forward at the moment. I may record some older hits retroactively over time, which would show up in your podcast feed.
Speaking of recording, I suspect this will be somewhat controversial, so I won't beat around the bush: I'm using AI. Specifically, I've used the service Eleven Labs to clone my voice.
YOU DID WHAT?
It actually requires quite a bit of work – well, quite a bit of reading aloud – to get the voice up to a quality that I think is solid. And I honestly think it's quite a bit better than me actually trying to read every single post aloud and recording it that way. Believe me, I tried. After countless demo runs, I realized that I sort of suck at reading things out loud. Especially longer works, where you're inevitably going to trip up and have to go back and edit. It's not like a podcast where you just power through; it's fairly tedious – which I don't mind, this is a paid feature, after all – but the end result, if I'm being honest, just isn't as good as the AI version. I did a lot of tests on this. It wasn't even close.
Again, I know that some people won't like this approach philosophically, especially right now because AI is such a hot-button issue. But I don't think this should be controversial: I'm opting in with my own voice. Am I worried that there will be an army of robot M.G. Sieglers in the future walking around talking your ear off in my voice? I mean, that would be sort of fun. As long as they weren't murderous.
Honestly, the technology is fairly incredible (and as such, not exactly cheap at the moment). It's not perfect, of course, but it's actually more perfect than what I was able to achieve the old fashioned way, talking into a mic like an animal.
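For anyone curious what that looks like under the hood, the core of it is a single API call: you send the post's text along with the ID of the cloned voice, and audio comes back. A minimal sketch – the voice ID and model choice here are placeholders, not my actual settings:

```python
# Minimal sketch: turning a post into audio with a cloned voice via
# the ElevenLabs text-to-speech API. Voice ID and model are placeholders.
import os

import requests

ELEVEN_API_KEY = os.environ["ELEVEN_API_KEY"]
VOICE_ID = "YOUR_CLONED_VOICE_ID"  # placeholder for the cloned voice's ID


def post_to_audio(post_text: str, out_path: str = "episode.mp3") -> None:
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVEN_API_KEY},
        json={
            "text": post_text,
            "model_id": "eleven_multilingual_v2",  # assumed model choice
        },
    )
    resp.raise_for_status()
    # The endpoint returns raw audio bytes (MP3 by default)
    with open(out_path, "wb") as f:
        f.write(resp.content)


post_to_audio("Since the dawn of Spyglass...")
```

The real work, as noted above, is upstream: recording enough clean samples of your own voice to make the clone worth listening to.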
Shout out to my friend Casey Newton who had the courage to journey down this controversial path first. After he launched an audio version of Platformer, it immediately became the way I prefer to consume his content. It's good and fun/nice to hear it in Casey's voice. Even if it's RoboCasey.
Anyway, I'll embed the first such recording below so everyone can get a taste of what it sounds like. And if you're interested, you can join The Inner Ring here. And if you're not, well, you don't have to listen! Typed words will remain the primary interface around these parts.
Thank you, as always, for reading – and now perhaps listening.
Once you sign up for The Inner Ring, be on the lookout for a welcome email which will show you how to get access to your podcast feed. But I've also published the guide on the About page, and I'll put it below for good measure...
Paid members will see the following new 'Podcasts' area on your account page, with a note to 'Access your private podcast feed'.


Clicking on 'View' will take you to a Spyglass landing page hosted on Transistor where the feed is housed.

Clicking on 'Spyglass Aloud' here will take you to a page where you can easily subscribe via your podcast player of choice, and also sign up for email alerts when new episodes are published if you want (if you subscribe via a podcast player, they'll automatically appear, of course). This all works on mobile and desktop.

2026-04-07 04:14:52
Happy Easter Monday. Hopefully everyone is rooting for Michigan tonight in the NCAA Championship Game. #GoBlue
Spyglass Dossier is a newsletter featuring links and commentary from M.G. Siegler on timely topics found around the web.
🦸🏻♂️ OpenAI's "Super App" – An interesting side note of this strategy: it shifts us back from a mobile-first and focused world, to a desktop-centric one. That's not surprising given the coding focus of Codex (and Claude Code), but it's decidedly not where the masses are these days, and ChatGPT is at such a scale – 1B actives any day now – that mobile will presumably always end up as the most-used surface for them (even if/when their own devices hit). The strategy – get Codex in front of non-engineers through ChatGPT and morph it into a general purpose agent – isn't a bad one, but there are real risks in execution. And given that OpenAI is full-on chasing Anthropic here, what might they be focusing on – shoring up Claude Code/Cowork or something new? [Sources 🔒]
🔀 OpenAI Shuffles Leadership – Holy Good Friday news dump. COO shifting to "special projects". CMO stepping down (to focus on health). CEO (of AGI Development) taking a leave of absence (to focus on health). This all comes on the heels of the $122B fundraise, a questionable M&A deal, and a mandate to kill off "side quests" and focus on the task at hand: taking on Anthropic. The latter two were spearheaded by aforementioned CEO of AGI (now on leave) Fidji Simo. Clearly they waited to announce all of this until the Friday before Easter. Will Simo come back as COO (since it seemed like that's what she essentially was anyway)? The company is saying no (that they won't appoint a new COO). Hard to give up a CEO title, I guess – even a secondary one. Never a dull moment at OpenAI. [Bloomberg 🔒]
📺 YouTube Leans Back – While "Stations" are essentially YouTube's answer to FAST channels elsewhere, it has nothing to do with price – since, of course, YouTube is already free (though you definitely should consider YouTube Premium to remove the insane amount of ads if you watch a lot of YouTube). It's all about the experience of just being able to put something on and not worry about it. Yes, like old school TV or cable. Not having to select something else when it ends. Just putting something on in the background and letting it wash over you. Endlessly. For hours. Sometimes, people simply like not feeling alone in their homes. I feel like I've been writing about this notion for a decade. Because I have. [Verge]
"This is horrific. I knew this kind of bullshit would happen eventually, but I didn't expect it so soon."
– Zach Manson, a software developer who noted that Microsoft had injected an ad into his GitHub pull request. What was the ad for? Copilot, of course. Microsoft quickly removed the "feature" after the backlash. Better than Apple in that regard, I guess...

Below, members of The Inner Ring will find thoughts on:
• OpenAI's Internal IPO Tension
• OpenAI's Industrial Policy
• Cursor's Claude Code & Codex Competitor
• and more...