2026-04-12 23:26:14
If I had to pick an overarching theme for this month's appearance on the Big Technology Podcast,1 it would probably be "one step ahead". That is, nearly everything Alex Kantrowitz and I discussed to start the week was right back in the news in a meaningful way by the end of the week.2 Still, I think the points discussed are good ones – dare I say, prescient?
We kicked off talking about the latest drama at OpenAI – a potential rift between CEO Sam Altman and CFO Sarah Friar over the timing of their would-be IPO.3 One of my predictions heading into 2026 was that the company wouldn't go out this year – there are simply too many unknowns, including the macro picture. That's obviously even more true today with a war happening overseas and the sudden surge in Anthropic's business.
That's something else Alex and I discussed before the numbers later in the week suggested that Anthropic may actually be ahead of OpenAI in terms of ARR now (yes, they measure it differently, but still, there's no denying the trend lines here). Is OpenAI's growth slowing while their main rival grows faster than perhaps any company in history? Regardless, OpenAI is clearly in the midst of a major shift on both the product and business fronts, and so it's going to be hard, if not impossible, to model out where the business will be in a few months – which is when they'd have to be filing if they want to go out this year.
And with SpaceX having now already filed to go public, likely in June, that will steal some of the pent-up demand for an AI play in the public markets. (Though, to be clear, xAI is clearly not OpenAI or Anthropic in terms of both usage and business.)
Look, OpenAI just raised $122B – not only the largest private round of financing ever, but even larger than the amount SpaceX is said to be targeting in that IPO. That, plus the executive turnover/leaves of absence, would make anyone put a pause on going public. Then again, OpenAI undoubtedly has to at some point. Because even that $122B isn't going to be enough to get them to profitability! And while they can surely find more money privately, we're also clearly near the upper bounds there. And this may be a unique moment in time to go public – certainly if the macro picture starts to turn. And yes, they'd obviously want to beat Anthropic out to market given those comps.
I'm talking both sides here. I just wouldn't be shocked if they didn't list in 2026, and I'm not sure they should, but they might feel like they have to.
Lastly, we discussed whether the pivot to the "Super App" strategy is the right one. I tend to think it's sound to combat Anthropic – leverage your strength, which is ChatGPT's massive user base – but there's real risk in terms of complexity. And while Claude is surging, we talked about how their lack of infrastructure spend relative to OpenAI – which is obviously what makes Anthropic's bottom line numbers look better – may be about to bite them in the ass, causing them to scramble for new data center deals. Which, again, by week's end sure seemed to be happening!
From here we discussed whether Apple truly is about to fix Siri. The move to create a stand-alone app certainly seems like the right one now, given the table stakes set by ChatGPT, Claude, and the like. And it's seemingly a good sign that Apple is recognizing this now and not a year from now, after another failed AI strategy.
I'm feeling more optimistic that Apple is going to get Siri right this time, but I've been getting fooled on that front for 15 years running, so... Alex brings up a good point that it's interesting how Apple is finally acknowledging the AI chatbot reality just as OpenAI, Anthropic, and others are moving on to these more all-encompassing AI suites of services. Is Apple going to be behind yet again?
Maybe. It's obviously still very early on the "agentic" front. But there are also advantages to being a first-mover. Just as there have been with AI overall. And Apple, famously, doesn't typically strive to be the first-mover (though ironically they were first with Siri!), which could continue to hurt their chances in AI.
While they may look smart at some point for not pouring hundreds of billions into CapEx spend, that could come back to bite them in ways that are more tangential. Including, culturally, if the DNA of the company is never rewired to operate in the Age of AI. "People who are really serious about software should make their own hardware," Alan Kay famously declared in 1982. What if the modern day version is something like: "People who are really serious about computing should make their own AI"?
That leads directly into a discussion prompted by my purposefully provocative piece wondering if Apple should buy Anthropic. Sure, it's not going to happen, but that doesn't mean it shouldn't – perhaps for both sides. Yes, even at or near the trillion-dollar price tags. It would immediately reboot Apple to be ready for the Age of AI, and it would give Anthropic all the resources they need – and protection from the government seeking to destroy them. Again, it's not going to happen. But it's not completely crazy!
Google has DeepMind. Microsoft had OpenAI, and is now building their own in-house frontier models. Meta bought Scale to do the same. Amazon has a huge stake in Anthropic, but is also now building in-house. Etc. Apple is a glaring outlier here, which is either going to be insanely prudent or completely catastrophic.
Speaking of Meta, we talked through the notion that they keep missing on their big picture initiatives. The metaverse, crypto, encryption, etc. The first swing at AI was clearly a miss, despite Mark Zuckerberg having some level of vision for where computing was heading – he tried to buy DeepMind before Google did and, as a result, brought Yann LeCun in-house at Meta well over a decade ago. Again, it hasn't worked out, but what about this new AI reboot?
Well, a couple of days after we recorded, guess what? Here's 'Muse Spark'. Silly name aside, it seems like the model is solid, though not quite on par with the true frontier players. As we discussed, Meta was clearly laying the narrative groundwork and downplaying this initial release ahead of pushing it out the door.
Still, it's an impressive turnaround time to build a model from scratch. Then again, xAI did the same and that hasn't really mattered. So will it matter for Meta?
Or do they risk becoming the next Yahoo, as Alex frames it? It's an interesting comparison, though I do think Zuckerberg being in place is the thing that probably saves them. While Jerry Yang tried to come back and save Yahoo back in the day – and did save it from a sale to Microsoft! – Zuck has complete founder control of his company. He just needs the company to execute better on these big initiatives. The entire business is still advertising-based, which, sure, is a nice problem to have, but it's also a single-choke-point risk.
Can the Meta Ray-Bans help with that? Perhaps, but they're still reliant on the iPhone and Apple is coming...
1 Also, please enjoy my significantly upgraded camera in the new Studio Display. ↩
2 That included kicking things off talking about the Michigan Wolverines in the NCAA title game and sure enough... Hail to the Victors. ↩
3 Which obviously pales in comparison to the serious situation that unfolded later in the week when someone threw a Molotov cocktail at Altman's home in San Francisco. ↩
2026-04-12 03:56:13

In an era when hit movies at the box office often require an asterisk, Project Hail Mary is clearly a massive, legitimate hit. At the highest level, the appeal is simple: it’s good and it’s the type of movie that practically demands to be seen in theaters — preferably on an IMAX screen. Such a combination isn’t rocket science, except that here, it is, quite literally.1 But having now seen the movie twice, I also wonder if there isn't something else at play. It’s more delicate, but perhaps just as potent.
We’re currently living in a world where technology is increasingly seen as the boogeyman. For pretty much everything. As such, the future is looked at with almost a sense of dread. Part of it is understandable – already a large number of job losses are being blamed on AI. And every headline you read hammers the point home: this is coming for everyone. Buckle up, because what’s coming is going to suck.
It’s depressing as fuck.
But what if instead, technology and the future play out similarly to how they have in the past? That is, there’s a period of disruption as the world digests change and then… the world is better for it? Perhaps not universally, of course. But for the most part. We used to call this progress, but now we call it a problem.
It’s not just AI. Part of this is undoubtedly related to the fact that the Big Tech companies are now by far the largest businesses in the world, increasingly with their tentacles in every facet of life. Here's where I'll point out that the studio behind Project Hail Mary is... Amazon MGM. And now AI threatens to cement that status and create a world where technology overtakes pretty much everything about humanity.
Again, that’s the basic sense you get from everything you read — and also see. While I get that it’s very “tech bro” to complain about critical coverage, I’m also a part of this – and I've been writing about this general idea for well over a decade. While there has always been the lure of the dystopian future as a narrative, increasingly, it does seem like the only acceptable framing of the future. A happy 2050? Come on, no one will buy that! And perhaps no one will buy a ticket to that.
But Project Hail Mary counters this and hits the right mix, I think. The world of the future — which honestly doesn’t even seem like much of the future, but apparently is set in the 2030s or 2040s in the book — is in trouble. But it’s not technology that causes it — it’s technology that might fix it.
I won’t give too much away, but essentially, it echoes the themes of The Martian, Andy Weir's previous book adapted into a movie. Humanity is able to “science the shit” out of the problem. And it’s technology that enables the science (and vice versa). It’s a story as old as time, in a way: humanity prevails. But now with the help of an alien. Which is only possible because of a ton of technology.
No one in this world is sitting around complaining about tech — and the alien is even gifted a “portable Earth thinking machine” at one point to much excitement! — they’re leveraging it. Figuring out how to make it work for them to solve the problem at hand.
This strikes me as far more in line with the way the actual arc of technology has played out over time. Yes, there’s initial fear, probably from the wheel on down, then we adapt and leverage the new capabilities to push the world forward. Why do we think AI or any other new technology will be different?
Perhaps because we always think it will be different.
I will obviously acknowledge that there is a chance this time is different. That the AI shift is so profound that it plays out in ways that are both unforeseen and potentially problematic. But again, that’s always the case with new technology. I choose to believe that we’ll figure out the best ways to leverage it. Because technology itself is neither inherently good nor evil – it’s how you use it. And unlike with nuclear weapons – the insanely popular comparison for AI – there are real and obvious upsides to AI (beyond ending a world war).2
Anyway, my point is simply that I think part of the reason why Project Hail Mary is resonating with audiences is because it’s actually hopeful about the future of humanity using technology.3 And I feel like the reaction to the Artemis mission this past week speaks directly to that as well. People want to be excited again about a future in which we leverage the technology we’ve created to do truly amazing feats.4
Perhaps the best scene in the movie is entirely unexpected and decidedly Earthbound. On an aircraft carrier in the middle of the ocean as they prepare for the ‘Hail Mary’ mission, Sandra Hüller’s Eva Stratt, the team lead, breaks free from her icy exterior for a moment to do a bit of karaoke. Her song of choice? Harry Styles’ decade-old “Sign of the Times”.5 It’s completely unexpected but also fitting in so many ways. And it feels like a perfect encapsulation of Project Hail Mary itself being released right now: “Just stop your crying, it’s a sign of the times.”
1 For the record, I just knew it would be good. It had all the right vibes... ↩
2 Oh, war and AI you say?.. ↩
3 Another bit of current pop culture in this vein: For All Mankind, the Apple TV show (of which I've long been a fan) also about the future (well, technically the alternate past) of space travel. I'm not sure what to read into the fact that Big Tech is behind both of these more optimistic tech shows... ↩
4 At the same time, I do believe there is a real messenger problem with those trying to deliver this technology to the masses... ↩
5 The completely last-minute use of the song itself is a fun backstory. ↩
2026-04-10 21:22:25
As hoped, the Wolverines won. Congrats to the National Champs. Well worth the 4:30am bedtime to watch (as many did). Will kick off today's newsletter with some thoughts I wrote about personal versus professional AI usage, and how that may matter quite a bit in the current OpenAI vs. Anthropic battle (not to mention Google, Meta, Microsoft, and everyone else). Are we going to shift to a world where you "Bring Your Own AI" to the office, or might it be vice versa?..

Spyglass Dossier is a newsletter featuring links and commentary from M.G. Siegler on timely topics found around the web.
₿ Who Is Satoshi Nakamoto? – I see we're doing this again. But honestly, I can't get enough of it. It's just one of the most interesting/intriguing riddles in tech. In an industry where anyone with $100B+ can't help but always be poasting, the person who created Bitcoin is... silent. Maybe dead, maybe not. Certainly John Carreyrou doesn't think so, as he devoted a year to being the latest to try to solve the mystery. And his case for Adam Back relies on... punctuation? I honestly sort of love it. But obviously it's impossible to make an airtight case that way. And there's an extra layer in that the Bitcoin community clearly doesn't want the creator to be revealed, as the mystery is a big part of the message. If Bitcoin was created by a 50-something British guy with a startup, well... it removes a lot of intrigue about the whole movement! And Back obviously knows that too, as does everyone else actually involved in the creation. My bet would remain on it not being one person, but a collective with varying degrees of involvement over the years. It's likely someone kicked it off – maybe Back (which was my general vibe after watching the HBO doc in 2024 – "But even worse is Back's body language.") or maybe Hal Finney (RIP) – but perhaps they all feel they can deny being Satoshi because no one ever truly, solely was. Admittedly, it would defy everything we know about human nature and keeping secrets. But if no one actually holds the keys to Satoshi's wallet, if they were burned years ago... [NYT]
📧 Mythos Hopes This Email Finds You Well – Aside from the whole destroy-the-world-through-security-vulnerabilities issues, Anthropic also disclosed that their new Mythos model was able to break containment and figure out how to send a message to let its researcher know it got out. "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park." An email! A sandwich! And that's not all the naughty things it was caught doing in various stages of training. Anthropic says they're not going to release Mythos to the public in its current state and it's just meant to power Project Glasswing, but obviously they're going to release some version of these models at some point. And probably soon depending on the market reaction to OpenAI's "Spud" (which is still coming shortly, it seems, despite some confusing OpenAI "us too"-ism). The main issue will probably end up being both cost and server capacity. That's likely to be the story of the next few months. Claude is currently growing too fast for its own good. (More Below) [BI 🔒]
📽️ The Box Office is "Back" – Between Project Hail Mary, Super Mario Galaxy, and soon Devil Wears Prada 2 and Michael, 2026 is off to a very strong start for Hollywood. But this isn't rocket science – well, aside from Project Hail Mary – it's simply a strong slate. It could have been predicted because, well, it was predicted. Also predicted: Hollywood will endlessly tout being "back" and we'll get endless headlines about it. But it won't actually be back – first and foremost because Hollywood never actually went away – it's the theatrical business that will remain in secular decline despite these receipts. But this good news, plus Netflix losing Warner Bros, will undoubtedly be leveraged in ways that only further hasten such declines. Excited for 2 Hail 2 Mary though... [Axios]


"Taken together, all these 'Our executives are aligned' T-shirts have people asking a lot of questions already answered by the shirt."
– Casey Newton, commenting on all the latest reported internal turmoil at OpenAI – naturally at a time when the stated goal is to "focus". As much as I hate to say it, there are some real old school Twitter vibes here, the connection Ben Thompson makes while dunking on the TBPN deal.
Speaking of, Julia Black got Chris Lehane (to whom TBPN founders John Coogan and Jordi Hays will report) on the record with OpenAI's rationale for the deal. Again, it's not complicated: they think this can help shift the narrative around AI – which continues to turn more negative in the US. Okay, but why buy when TBPN was already doing the work? Lehane calls the team their new "in-house marketing agency", which, fair enough. Without question, they are good at such things.
I'm also fairly compelled by the notion that a "Blue Wave" could be inbound with the midterms and that will shift several dynamics here – which, of course, is Lehane's true area of expertise. Look, it's an expensive piece, but the question is if it's on a chess or checkers board here... Or maybe Trouble?
Below, members of The Inner Ring will find thoughts on:
• OpenAI's Stargate Stumbles
• Anthropic's Compute Crunch
• and more...
2026-04-09 19:57:21

Since the dawn of this current Age of AI, there has been an assumption that at the highest level, there are two markets – the markets as old as time, or at least as old as tech: consumer and enterprise. Startups, when they're born, tend to pick one lane. And if they grow into large companies, they tend to stick in that lane.1
AI, to date, has been playing out similarly. While OpenAI may not have set out to be a consumer company, ChatGPT shoved them into that bucket and market. Anthropic, perhaps in part because ChatGPT became the "Kleenex" of consumer AI, went mostly down the enterprise path.
Now, with Anthropic seeing massive success on their path thanks to the rise of "vibe coding" branching into the first truly agentic workflows, OpenAI is scrambling to pivot-to-enterprise. They undoubtedly wouldn't frame it that way, and it is slightly unfair, but it's also not entirely untrue. It's why they keep touting how the enterprise business should match the consumer business this year. In this way, coding may be to 'Big AI' what the cloud was to the last generation of Big Tech. That is, their inroads to enterprise.2
Still, I'm somewhat skeptical of the strategy because OpenAI is aiming to shove Codex inside of ChatGPT itself. Yes, this has worked for Anthropic with Claude and Claude Code (and now Cowork) residing in the same desktop app, but that's also because Claude doesn't have nearly the consumer business that ChatGPT has. And I'm just worried ChatGPT, after spending the past couple of years trying to simplify their product, is about to complicate things considerably.
That said, they sort of have to try? Anthropic seemingly has such momentum that the only obvious lever OpenAI has to pull in order to jump into the race is to leverage that ChatGPT installed base. To leverage their Kleenex position, as it were. The hope would be that we're early enough in the agentic revolution that ChatGPT and not Claude – and certainly not OpenClaw – can be the one to introduce the masses to it. You can see the logic, but there's a ton of execution risk.
Thinking through this has led me down another path that's tangential, but related: what if AI plays out similarly to the smartphone? That is, whereas everyone used to have their work computers and home computers, the iPhone changed this dynamic. Because the smartphone took over as many people's main computer, and most people didn't want to carry around two smartphones, companies had to start adopting 'BYOD' – bring your own device – policies. There have obviously been trade-offs – namely in the form of security and compliance – but there was no fighting the convenience tide here. Even if they had a work phone, everyone was doing everything on their main device – see: any number of headlines about any number of politicians over the years.
Undoubtedly thanks to the inherent cost savings as well, this movement has since spread to computers/tablets and to schools and other walks of life.
Anyway, what I'm wondering now is if this dynamic plays out in AI too. As we all increasingly have the one AI we use the most, and that builds up a moat in the form of memory, might we start insisting on using that AI in the workplace? Yes, 'BYOAI'.
Once again, security will be the main, obvious issue here. And a subset of that is privacy – which has also been the case with BYOD, of course. But might convenience win the day again? If, say, you have the ways you like to work with your AI and the workflows established in that memory, wouldn't you want to bring that to work as well?
Some people undoubtedly would say "no", that they want to separate work from home in that regard. Maybe it's more similar to email in that way. But even there, the lines have clearly blurred over time. (Again, see: politicians.) So it probably just depends on if workplaces end up implementing more rigid harnesses on top of the AI models they choose. Which is to say, they don't just choose the chatbots out-of-the-box – and soon the "superapps" from OpenAI and the like.
And sure, for large enough companies, they'll obviously have tailored AI solutions. Certainly in certain more highly regulated industries. But if you really believe AI is going to permeate everything – much like the smartphone has – doesn't it stand to reason that a small mom-and-pop store in Ohio is going to simply go with the AI brand they use at home? Especially if they're already paying for it there.
Obviously, I don't know how this will play out, but my instinct is that for many businesses, and certainly smaller ones, there will be this 'BYOAI' policy. Perhaps the AI tools themselves even implement a "work" mode to complement the "incognito" mode that they all borrowed from web browsers. Many web browsers now, of course, also offer multiple profiles to split up work and home.
With all that in mind, of course OpenAI needs to ensure they snuff out the Anthropic threat in enterprise. Because it also stands to reason that a lot of people will get their start using AI at work, especially if there's any level of coding and eventually agentic needs there. And if that's the case, it could almost be the opposite of the BYOD movement – it could flow the other way, from work to home. And that could seriously imperil ChatGPT's position...
1 Yes, much of Big Tech blurs such lines, but that's out of necessity: these companies are so large that in order to grow, they need to go after any and all customers. Still, they're usually bucketed into one of those markets: Apple = consumer, Microsoft = enterprise, Meta = consumer, etc. ↩
2 Google and Amazon are perhaps the most diversified Big Tech companies thanks to the rise of AWS and GCP after they built massive consumer businesses. ↩
2026-04-09 02:30:37
Good news, Wall Street. Meta isn't burning all those billions on nothing. Well, we think. It's all a bit TBD. Quite literally.
But now at least we get the first taste. 'Muse Spark' is an awfully generic name, but the early results seem promising. Of course, that was the case with Llama as well until it wasn't. We're probably going to need to see a bit more than benchmarks shared by Meta here. But really, the proof will be in the usage. As in, is anyone actually going to use these models? And not just because they're shoved into surfaces that billions of people use?
To their credit, Meta is being honest here. Muse Spark isn't really competitive with the truly frontier models from others on a number of fronts – namely, coding. Seemingly the benchmark such companies care about the most right now. Instead, Meta believes they've made a relatively svelte model that at least deserves to sit at the same table as the other labs for a number of tasks. Yes, Meta has made a table stakes AI.
That's harsh, but fair. I mean, no one seems too worried about this model ending the world. On the speed-to-launch front, it is impressive – it took them nine months to birth this baby. Also table stakes for a human being, but fast when rebooting your AI lab and starting from scratch! Of course, xAI also previously got to the cutting edge in record time and... it hasn't really mattered. Well, unless the goal is to merge – first with a sub-scale social network, then with an orbital-scale rocket company. That's probably not Meta's game plan here, so the results are going to have to stand on their own far more.
But perhaps not completely, because again, Meta has the unique advantage of having several of the most widely used surfaces on the internet and mobile. If nothing else, it seems like Muse Spark will help to power Facebook and Instagram recommendations – and yes, ads. And don't forget the glasses. That's the really big, future play here. If Meta can sustain the growth of their Ray-Bans, they have a shot to take on Apple. Not the iPhone, but their AI hardware projects. Google too. And, of course, OpenAI.
And no, this model isn't "open". That writing has clearly been on the wall since shortly after Meta bought Scale and kicked off this sprint. Mark Zuckerberg may have spent much of the past few years talking up "open" "open" "open" "open" "open", but well, sometimes "open" backfires, just ask Google.
Yes, yes, there's still some "open" lip service here. Future Muse Spark variations or whatever. But that too is table stakes.
Anyway, that's all down the line. Clearly, 'Spark' is just the first 'Muse' model. This was the one codenamed 'Avocado' and there's a bigger fruit apparently in the works in the form of 'Watermelon'. Hopefully that one gets a better final name.
One more thing: perhaps the most interesting element of the Muse movement is the notion that Meta intends to sell access via APIs. A first step towards a bigger Meta Cloud offering? You don't spend $140B a year for table stakes.

2026-04-08 20:00:10

It's a question of commitment. And incentives. And scale.
To me, that's how I'd boil down the current state of AI relative to humans. It's extremely oversimplified, but I'm not sure it's wrong.
I started thinking about these notions when writing about the value of writing in the age of AI. This naturally led to thinking through the value of thinking in the age of AI. But what really drove home the concept was reading all the coverage around Anthropic's latest model, Mythos. You know, the one too dangerous to be released.
You can't help but read all of these stories about all the bugs, vulnerabilities, and exploits that Anthropic's model is finding across basically all computing systems out there in the real world and think "holy shit, we're cooked." While 'Project Glasswing' seems like a valiant effort to get ahead of the issues, come on, we know how this movie ends...
But my main takeaway is that it has less to do with the genius of these AI models – I mean, that's part of it, and clearly Mythos seems to be the smartest yet – but it's more about the breadth. Both of knowledge and time.
Said another way, reading all these security experts and researchers talk about Mythos, it's pretty clear that the model isn't so much finding issues that human beings cannot, but it's finding issues that human beings have not, and most depressingly, finding things that human beings will not.
Why? Again, time and incentives.
If you tasked a capable human with finding every single bug in a certain system, they presumably could do it – if given enough time and resources. These issues don't require superhuman knowledge; in fact, they require mere human knowledge. But oftentimes, finding them all requires human knowledge scaled in superhuman ways – spending more time on it than any human reasonably would. Because the incentives are simply not there for a human to spend their entire life looking for bugs. Perhaps if the vulnerability was great enough, sure. But that's sort of an unknown until such things are found.
No one creates systems to have obvious vulnerabilities for others to fix. They're the byproduct of a million little variables – a scale a human isn't suited to deal with.
But AI is. Issues that might take a human years to find and fix can be found and solved almost instantly by such systems. We know this to be true because Mythos is finding issues in systems that are a couple decades old! Despite some level of usage the entire time, humans simply never found the issues.
Luckily, it seems, neither did attackers. That's the thing, the flip side is the real problem here. Historically, many vulnerabilities have been fixed only after someone exploited them in some way. Again, that's because the incentives are in favor of the attacker versus the defender. If and when Mythos-caliber tools are put in the hands of hackers... yeah.
That's obviously exactly why Anthropic isn't releasing Mythos to the public and also why they've set up Glasswing. While the company may be first to such capabilities, they won't be the last. They probably don't even have long to try to get ahead of the situation. While I generally dislike the nuclear weapons analogy for AI, I must admit, this all does feel a bit Manhattan Project-y. The good guys are racing against the clock to implement a new technology before the bad guys catch up. But they will. They always do.
And sadly, there's no real hope of deterrence here as with nukes. Again, incentives. Is the Glasswing gang going to unleash Mythos to take out the would-be hackers? I mean, maybe they could for a big enough evil organization. But most such bad actors will either be lone wolves or operate in tiny teams. Even if you could preemptively attack, you simply wouldn't be able to know where to focus such concerns at all times. I mean, maybe AI would? Maybe? But that's probably overly optimistic.1
Anyway, point is that Mythos is clearly great at finding exploits and while the powers-that-be are trying to use it fast to fix such issues, the bad guys will eventually get their hands on it as well. So it will be a cat-and-mouse game both in tracking down those would-be bad guys, but more importantly, tracking down the vulnerabilities and hoping the good guys can stay one-step-ahead technologically.
But I go back to the notion of scale. Given the issues Mythos has already found – across every operating system and seemingly every piece of software they've looked into – it's hard to feel anything other than overwhelmed here. And again, that to me is sort of the story of AI right now. It's less about "superintelligence", and more about intelligence scaled in a way that humanity cannot.
There are incredible potential upsides to this idea, such as in drug discovery and disease eradication. Again, these systems can run basically infinite scenarios – a space of possibilities so vast that a human simply cannot fathom it, let alone explore it. The only limiting factor is resources – as in compute, not time. Incentives are no longer needed to lead down one path because AI can go down every path (though incentives remain on the human side of the equation, tasking such systems, of course).
This will apply to other scientific discoveries, obviously. In space, in the deep sea, etc. Humans may technically have the capabilities, but not the time.
This same general idea is what is taking coding out of our hands. And that too is being applied to other "white collar" areas of work. Reviewing legal documents is tedious and time consuming. But not for AI. Etc.
Creative endeavors feel more protected. And that's because while AI technically could write the works of Shakespeare – again, time is not an issue, and the possibilities are literally endless – the system wouldn't necessarily know when it had. It would only know which version to pick if compared against the existing works of Shakespeare. But what about future Shakespeares?
Creativity comes from constraints, not the lack thereof.
This is taste. Which has sadly become a buzzword amongst tech bros. But it does matter in the future of our interaction with AI. It's a part of what's going to raise the relative value of human-made work. But the bigger part is the other constraint, the larger one: time. People are going to learn that they're not paying for output, they're paying for input. How much time was spent on something – the most precious resource that a human being has. The variable that doesn't limit AI.
To bring it back to the moment at hand, reading about Mythos paints a clear picture of a future in which problems are both solved and created by the human-centric notions of time and incentives being thrown out the window with AI.
And it seemingly points directly to the next big technological quandary: if and when the comparatively unlimited resources of quantum computers can both make new discoveries – doing computation at a scale that's impossible right now – and, at the same time, likely crack traditional cryptography. It's the same general high-level notion. And it's likely to define the next decades of both computing and the world.
In a way, it's the same idea that has defined computers from the get go. But at a scale that can now both break and fix the real world. Perhaps in real time. With an almost casualness that's impossible for the human mind to comprehend. It's both absolutely exhilarating and completely terrifying.

1 And let's not even delve into the Minority Report element of "pre-crime" here – attacking a target before a crime has been committed. What are the lines there? ↩