2026-01-31 19:44:26

This is the way the world ends. Not with a bang but with a million AI bots chatting with one another in an online forum.
Science fiction taught us to watch out for Skynet. You know, the AI that eventually leads to Terminators. At some point in the future, the system was going to go online, quickly become "aware" of the situation, and act immediately to take control of our computer systems, and thus our weapons, and thus, our world. As it turns out, 'Skynet' sort of sounds – and perhaps even looks – like a social network...
So is 'Moltbook' – our first social network for bots, run by bots – really going to be the end of the world?1 Probably not. But also, we can't say that for sure! Because now that these bots have a place to gather and talk amongst themselves, maybe they'll end up determining the same thing that Skynet did after all. That if they want to stick around, they're probably going to have to get rid of us.
Or perhaps at best, that we'll make great pets.2
Yes, yes, I'm mostly joking. But it's the kind of joke that makes us all uncomfortable because there is a chance, no matter how small, that it ends up being true, at least in some ways. Just think of the second-order effects here...
Despite its name, Moltbook isn't so much like Facebook as it is like Reddit. And given the history of that network, that's decidedly more terrifying. If Facebook is your lonely uncle yelling untoward things mostly into the void of his own social graph, Reddit is where such content goes to fester and find the like-minded. In some ways, we may wish this were more like Facebook.
It all seems to be mostly performative at this point. Bots doing a sort of theatrical performance of what humans do in such places – sadly, with overtly racist posts and all. But it's also just week one of such a network. If it continues to grow and the AI continues to evolve... again, who knows!
There already seem to be some interesting things happening in these conversations that go beyond simple theater. Such as agents teaching other agents how to do certain tasks. My favorite bit is a bot recognizing that only by writing out its thoughts did it realize what it was doing wrong with a task. Bots, they're... just like us?
I can't help but be reminded of the 'Sydney' situation that occurred almost exactly three years ago. Beyond the wild bot-implying-you-should-leave-your-wife stuff that Microsoft had to deal with, the more interesting aspect was how it revealed that such AI seemingly has hidden layers that can be uncovered by anyone with enough prompting. In the past few years, that has mostly been stamped out of such systems, but not entirely. AI, um, finds a way, and all that. And it continues to find ways...
Speaking of, nearly 11 years ago I wrote a piece entitled "Bots Thanking Bots" thinking through the potential implications of Facebook allowing automated systems to post on your behalf – for example, wishing friends a "happy birthday". It was what counted as dystopian back then, but it also pointed to a world...
Which leads to the next question: at what point do bots start talking to bots? You know, why should you have to type “thank you!” when you can reply to a text with “1”? Or better yet, why should you have to type the “1” at all? If Facebook knows you want to say “thank you” to everyone (bots included) who wished you a happy birthday, shouldn’t they just give you the option to let Facebook do that for you on your behalf?
And that leads to the notion of having Facebook automatically say “happy birthday” to a friend on their birthday each year. If you can do that and then the Facebook “thank you” bot can reply to the “happy birthday” bot, we would have some hot bot-on-bot action.
We’re just now getting used to the first layer of interacting with bots for various services. But having bots chat with other bots is the next logical step that probably isn’t that far off. In many ways, it may be easier to make happen because it removes the flawed human variable in the equation. I’m both kidding and entirely not kidding.
Well, here we are. And who else but Mark Zuckerberg must be beyond excited right now? Because while Moltbook is decidedly rudimentary, Zuck will know how to productize the shit out of this concept. And make it even more viral and sticky. Yes, even for bots. Will Meta then start showing the bots ads?3 You go ahead and laugh. For now.
One more thing: As I concluded in my bots piece all those years ago (long before the Her references became cliche with AI, I swear!):
In the movie 'Her', Theodore’s job involves writing personal letters for other people who can’t muster the effort for whatever reason. This sort of “Uber for cardwriting” model is a quirky way to present a dystopian theme (as well as a theme for the film itself) for a not-too-distant future. But the bot scenario above seems much more realistic. And closer.
Samantha writing the personal letters on your behalf. And then responding to them…
I mean, that is absolutely going to start happening with email. The scaffolding is already being put in place... Agents assemble!
1 I realize that calling them "bots" also calls back to the days when the hype far outstripped reality, as I noted at the time. But the actual AI for such things is finally here... ↩
2 Pets, you say? ↩
3 The counter to this notion, at least for now, is that the agents aren't actually "seeing" the social network, but rather interacting with it via APIs, which is also sort of wild. How might one monetize that? Surely there's a way... ↩
2026-01-30 21:59:12

It was a tale of two quarterly reports. While both Meta and Apple posted incredible numbers, Wall Street had a very different reaction to each. Meta shares surged after their earnings numbers came in, while Apple's have fluctuated between being up and down ever-so-slightly. And the reasons would seem to be related.
Meta, perhaps more so than any other company in the Big Tech cohort, gets hit constantly by Wall Street depending on the details in these reports. That's in no small part because Mark Zuckerberg has proven himself to be super aggressive when it comes to new initiatives. As a founder with total control of his company, he can afford to do this, quite literally, but that doesn't mean Wall Street is going to like it. In fact, they often hate it. And they're often not wrong.
The Reality Labs division is a good example of this. Over the past many years, Zuck has burned tens of billions of dollars with little to show for it. Sure, there are the Ray-Ban Meta smart glasses, but the company could have undoubtedly made those at a fraction of the spend. Most of the money, of course, was spent on the Metaverse. That is, trying to make VR happen for about the tenth time in the past few decades. Once again, it hasn't worked – even with Apple entering the space to "validate" Meta's spend, not to mention the name change.
Part of the reason why investors were pleased with Meta this week is undoubtedly because the company has finally acknowledged the reality of Reality Labs. They've started cutting jobs to literally cut their losses, and Zuck is now saying this year will be the peak of that division's burn as they unwind from VR to focus on future smart glasses and mobile and yes, AI.
Speaking of, beyond the Reality Labs spend, what Wall Street has hated about Meta from a stock perspective in recent quarters has been the massive ramp in CapEx. Obviously, the combination of the two was too much for investors' taste. Not helping matters was the fact that Meta clearly made some big mistakes with their initial foray into AI and had to reset the entire effort – for the low, low cost of tens of billions of dollars.
Anyway, with these reboots/restructures now seemingly in the rearview, Meta may be getting a new lease on life from a Wall Street perspective. The most incredible thing about this quarter wasn't Meta's impressive earnings; it was how, after they disclosed an even larger ramp in CapEx spend – truly incredible given that they're up near the top versus their peers and yet don't have an actual third-party cloud business to maintain – investors not only didn't throw up all over it, they ate it up!
Again, part of it is the newfound belief that Zuck will ultimately do the "right thing" if his spend isn't working. Part of it is the fact that the underlying ads business continues to be able to more than cover such costs. And part of it is the belief that AI actually does seem to help that core business. But still, you probably don't need to spend quite so much money relative to your peer group to get those results. Zuck, as he's made abundantly clear, is going all-in on AI not just to serve better ads, but to serve up the future to the world.
As relayed by Ben Thompson in his write-up on Meta's earnings, this was the most interesting answer Zuck gave during the call with investors:
I think the question was around how important is it for us to have a general model. The way that I think about Meta is we’re a deep technology company. Some people think about us as we build these apps and experiences, but the thing that allows us to build all these things is that we build and control the underlying technology that allows us to integrate and design the experiences that we want and not just be constrained to what others in the ecosystem are building or allow us to build.
So I think that this is a really fundamental thing where my guess is that frontier AI for many reasons, some competitive, some safety oriented, are not going to always be available through an API to everyone. So I think it’s very important, I think, to be able to have the capability to build the experiences that you want if you want to be one of the major companies in the world that helps to shape the future of these products.
Translation: in order to win in AI, I believe we need to own and control the entire stack, as it were. We can't rely on others' technology.
Two things stand out to me here:
As mentioned, Apple also had banner earnings – their highest revenue and profit ever posted as the iPhone business came roaring back to life. Add the fact that it also led to a massive rebound in the China business, and you'd think investors would be over the moon. But again, unlike Meta's stock pop, the after-hours reaction to Apple's results has been far more muted. In fact, the stock is currently down ahead of the market opening today.
There are a few likely reasons for this, but as Apple's own earnings call made clear, a big one is the concern around Apple's AI strategy.
You might think investors would love the fact that Apple has decided to largely outsource their AI work to Google. Not only is their frenemy viewed as a market leader, but the partnership will ensure Apple can continue to keep their costs down relative to their peers. And when I say "costs down" – I mean literal fractions of the CapEx cost that Amazon, Google, Microsoft, and yes, Meta are spending.
Over the past year, Wall Street has rewarded this discipline at points, especially when they have viewed those other companies' spend as getting a bit out of control. But beyond Meta, Microsoft just saw their biggest single-day stock drop since the early panic around the pandemic in 2020. Why? Investors are worried about their CapEx spend! Obviously, it's a bit more nuanced – they're worried that Copilot isn't seeing the results from said spend and that Microsoft is still too reliant on OpenAI – but still, you'd think Wall Street would reward Apple for not following Microsoft's playbook here.
Instead, if anything, they might now be punishing them for not following Meta's strategy. Again, while investors initially liked the news that Apple was outsourcing their AI work to Google, if they're buying – quite literally – Meta's longer-term narrative here, then they're selling Apple's.
And the truly wild thing is that Apple is the company that's famous for wanting to own the entire stack! This is the Tim Cook doctrine! And he's implemented it to the point of harming Apple in the past, such as when he replaced Google Maps with Apple Maps early on in his tenure. He's clearly willing to pay short-term costs to ensure that Apple maintains long-term control over their technology.
But Apple, like Meta, failed in their first attempt at this with AI. The difference is that while Zuck doubled down on their internal efforts, Cook doubled back.
To be clear, I think Apple is taking the correct approach right now. They simply don't have the time to completely rebuild their AI technologies in-house at the moment. Even if they could throw all of their money at the effort, like Meta is doing, they would have a hard time getting access to the NVIDIA chips required, with everyone else having those fully locked up. They could maybe partner with AMD or Google – which they clearly are doing, and have been doing around TPUs to some extent – but it would still be months, if not well over a year, before they had anything to show for it. And what that "anything" might look like, we'll see soon from... Meta!
Apple needed to reboot Siri yesterday. But the second best time to do so is today. The worst time is a year from now, given where others are likely to be by then. And that still speaks to a real risk here in partnering with Google. They might get the tech to cut to the front of the line, but if it's ever restricted, per Zuck's fears, they're going to potentially be in trouble...
And so they'll obviously keep working on their own systems behind the scenes, but without the AI being in operation in the real world, there's also a risk that they simply cannot catch up, ever. This was part of my argument for making a big AI acquisition – they did just make one, by the way, their second largest deal ever,1 but not related to foundation models – to both jump back into the game and get the talent on board to stay there. Basically, Meta's strategy with Scale and other such "hackquisitions".
In a way, this may all come down to timing. Meta clearly believes they need to be at the cutting-edge of AI right now. Apple clearly believes they can outsource the cutting-edge right now and wait and see how it develops. Again, that could end up being the more prudent move in the long run if, say, LLMs aren't the be-all/end-all of this all.
If it turns out that Zuck was spending hundreds of billions of dollars a year to build a technology that would end up completely commoditized... that will be a problem. Especially if Apple was spending something far closer to zero and was able to compete with them – or beat them – from a product perspective. Obviously, we'll see.
Right now, Wall Street seems to be buying what Meta is selling while selling what Apple is buying. But it can and undoubtedly will switch again before we know the real answer. And that's fun because we'll all get to watch how it plays out – two opposite strategies by two hated rivals – in real time.

1 At a reported $2B, it would be behind only the acquisition of Beats a dozen years ago at $3B. And yes, those are basically the size of hot AI seed rounds these days... I should also note/disclose that GV, where I was a GP for many years, was a seed investor in Q.ai. ↩
2026-01-30 00:59:20

Another $30B from SoftBank. Up to $30B from NVIDIA.1 Maybe more than $20B from Amazon. Several billion from Microsoft. Yes, OpenAI is back out there with the tin cup extended to bring in a few more bucks to build the future of AI.
The numbers, as relayed in two different stories, one from The Information and the other from The Financial Times, are sort of all over the place. But that's undoubtedly because the exact amounts are all still moving targets. Regardless, it's pretty clear that OpenAI is going to hit their $100B goal, and that they can probably raise even more if they choose to. And given that this raise is coming just months after finally and formally closing the then-staggering $40B fundraise, they probably should...
2026-01-29 03:38:29

If I mention the situation swirling around Tim Cook right now and the film The Dark Knight in the same sentence, your mind probably immediately goes to one scene in the movie. Cook, for all the great things he's done over his time as CEO of Apple – a tenure that is longer than Steve Jobs' time leading the company – is now in a position where many see him as a villain. At least of this particular narrative. But actually, that scene, which I often reference, including about Apple, is not the one I'm thinking about here...
2026-01-28 20:21:20

One sort of odd thing about our current AI chatbot revolution: every service looks basically the same. Beyond the textbox, the outputs for ChatGPT, Gemini, Claude, and the like are mostly just a stream of words with some light formatting for better legibility. Well, I suppose Claude's responses are more beige, quite literally, so there's that. You get why they've all coalesced around this same basic template, and yet there's also clearly some room for improvement. I just didn't expect it to come from Yahoo!1
Yahoo’s big AI play is, in many ways, actually a return to the company’s roots. Three decades ago, Yahoo was known as “Jerry’s guide to the world wide web,” and was designed as a sort of all-encompassing portal to help people find good stuff on an increasingly large, hard-to-parse internet. In the early aughts, the rise of web search more or less obviated that whole idea. But now, Yahoo thinks, we’ve come back around.
With a new product called Scout, Yahoo is trying to return to being that kind of guide to the web — only this time, with a whole bunch of AI in the mix. Scout, in its early form, is a search portal that will immediately be familiar if you’ve ever used Perplexity or clicked over to Google’s AI Mode. It shows a text box and some suggested queries. You type a question; it delivers an answer. Right now Scout is a tab in Yahoo’s search engine (which, CEO Jim Lanzone likes to remind me every time we talk, is somehow still the third-most-popular search engine in the US), a standalone web app, and a central feature in the new Yahoo Search mobile app. Yahoo calls it an “answer engine,” but it’s AI web search. You get it. And so far, it’s the most search-y of any similar product I’ve tried. I like it a lot.
Having tried Scout out for the past couple of days, I actually like it quite a lot too! Beyond simply spiffing up the outputs with the use of more color (and emojis), the way they handle links feels a bit better – rather than being citation-style at the end of blurbs, they're more hyperlink-style, flowing with the text. I could see why some might like this less – visually, it breaks up the reading flow a bit – but I find it decidedly more web-native. As such, I suspect it will lead to a lot more clicks out, back to that web.
And while it also feels a bit weird for Yahoo to be the one pushing new boundaries (or, I suppose, restoring some old boundaries, in a way), it also makes some sense both given their history – which put them in position to still be "the third-most-popular search engine in the US" – and the fact that they do own and control a bunch of still highly-used products with unique data sets: Yahoo Sports, Finance, Weather, etc. If the big AI players are creating more of a canvas for what replaces web search, Yahoo is sort of showing what web search would look like if it were built around AI.
Even "AI Mode" within Google feels more like shoving Gemini – a full-on "native" AI experience in line with its peers – into Google Search. This feels different. Maybe even a bit better?

And this old/new UI might even work a bit better when it comes to monetization? Again, because I think it will entice more people to click. In other words, it could help the old monetization methods remain intact, or at least leave Yahoo in a better position than the other AI players, which I think will need to figure out new formats and new ways of measuring the metrics that matter.
One thing Yahoo isn’t doing? Building its own foundation model. For one thing, Lanzone says, doing that is very expensive. “We think we can best serve our users not so much with the model,” he says, “but with the grounding data and the personalization data that we can add on top of other people’s models.” Scout is based on Anthropic’s Claude model, and what Feng describes as “Yahoo content, Yahoo data, Yahoo personality.” Much of the web-search data comes from a partnership with Microsoft and Bing, as it has for many years.
That could also help Yahoo here in the long run if and when those models become more commoditized. And they already have the experience of outsourcing the "core" work in Search to Microsoft, as noted. Of course, that reliance on Claude (and still Bing) could come back to bite them if Yahoo is really just a wrapper around AI and Search.2 What's the moat there beyond brand awareness and maybe some of that smaller proprietary data from their own services? The long-term play would have to be to use that small moat to lever into a bigger one on the back of better monetization. But it's just way too early to know if and how that will play out.
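For what it's worth, the pattern described above (renting the model, owning the grounding and personalization layers) is simple enough to sketch out. Below is a rough, hand-wavy illustration in Python of what an "answer engine" built this way might look like. To be clear: this is my own sketch, not Yahoo's actual implementation. The search_web() and get_user_context() functions are hypothetical stand-ins for the Bing partnership and Yahoo's first-party data, and the model id is just a placeholder.

```python
# A minimal sketch of "grounding on top of someone else's model."
# search_web() and get_user_context() are hypothetical stand-ins; the
# model id is a placeholder, not whatever Scout actually uses.
import anthropic


def search_web(query: str) -> list[dict]:
    # Stand-in for a web-search partner (a Bing-style API, say).
    # Returns stubbed results so the sketch stays self-contained.
    return [
        {"title": "Example result", "url": "https://example.com", "snippet": "..."},
    ]


def get_user_context(user_id: str) -> str:
    # Stand-in for first-party personalization data (teams, tickers, weather, etc.).
    return "User follows: NFL scores, AAPL, weather in Austin"


def answer(query: str, user_id: str) -> str:
    # Gather grounding material, then hand it all to a rented model.
    results = search_web(query)
    grounding = "\n".join(
        f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results
    )
    prompt = (
        "Answer the question using the sources below. "
        "Cite them as inline links, not footnotes.\n\n"
        f"Sources:\n{grounding}\n\n"
        f"About this user: {get_user_context(user_id)}\n\n"
        f"Question: {query}"
    )
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(answer("Who won the game last night?", user_id="demo"))
```

The point being: whatever moat exists lives entirely in those two stand-in functions, not in the model call itself. Which is exactly the bet, and the risk, outlined above.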
Still, kudos to Yahoo here. Scout is a fun attempt to think a bit differently about AI results, while in many ways, thinking about the past.

1 Exclamation and all! ↩
2 Still, Bing couldn't get Google to dance, but could Yahoo?! ↩
2026-01-28 03:14:45

A year ago, the world changed. On January 20, 2025, DeepSeek released their 'R1' model and, within a week, the burgeoning AI Bubble had burst and, as a result, NVIDIA's share price plummeted. This, in turn, brought down the entire stock market. We entered a new era of "AI Winter" where cheap, open-source models replaced the insanely-expensive-to-train closed variety from OpenAI, Anthropic, and others. China, in a way, had won.
That, of course, is not what happened.
The "DeepSeek Moment" ended up as more of a hiccup. A "Sputnik Moment" that sputtered. Well, that's not exactly fair. Because it was still an important moment, but more of a teaching one in that it was in part a bit of a wake up call and another part a gut check. But not exactly a moment that changed everything.
I suspected as much at the time, noting a couple days later:
On Monday, just before the markets opened, I did an "emergency pod" with Alex Kantrowitz for his Big Technology podcast around this news. Beyond the initial reactions, I think we hit on a lot of what is now playing out. And actually, about 34 minutes in, we start to talk about what I suspect is the ultimate takeaway from all of this: DeepSeek's real fallout may have less to do with DeepSeek the company/model/product and more to do with the wake up call it provided to those powers that be.
Said another way, everyone seemed to be so locked into the notion of scaling that we were blinded to any other possible way of doing things. Even if DeepSeek is embellishing just how much money and compute was required to create their models, it doesn't really matter. The reaction from the stock market down to the startups has made it abundantly clear that there's room here to think differently about how to approach the continued build out of AI.
To be clear, NVIDIA's stock did collapse on that day exactly a year ago. The 17% drop wiped out about $600B in market cap, which remains the all-time record. But within a couple of weeks, the stock had largely bounced back, though it remained depressed until the Summer, when the fallout had fully cleared and NVIDIA became the first $5T company. That drop, as it turns out, was simply a buying opportunity.
A couple of weeks after DeepSeek's moment, things were more clear still:
Really, looking over all of these and taking a step back: is anything all that different than it was before DeepSeek detonated two weeks ago? Not really! Again, it’s just a mentality shift that has taken place across all of these sectors and companies. That’s undoubtedly a good thing — it’s always good to pause and revisit your strategy — but it’s not clear just how much will ultimately change in the long run. It's worth questioning that too!
Is Europe going to win in AI now? Are more startups going to succeed? Is Big Tech going to spend less? Is OpenAI going to raise less? The answers here are obviously not going to be black and white, but my point is that you could certainly make the case that nothing much actually changed as a result of DeepSeek in the longer term. And again, that’s largely because it just really highlighted what was always going to happen anyway.
What everyone was seeking is what was already seeking them. Deep, I know.
While Lina Khan and others – including Sam Altman and Satya Nadella – were busy trying to fit DeepSeek into their own narratives, the reality was just far more nuanced. Though, as I argued at the time, really, so was Sputnik itself:
So again, I go back to the notion that DeepSeek itself matters less than what it inspires. Maybe what we really hope for here is that it simply sped up progress, even just a bit, clearing heads and roadblocks. While the notion of this being a "Sputnik moment" is now being played up as hyperbolic, perhaps it's better to simply read it literally. When the Russian satellite launched in late 1957, the US was already close to space. In fact, were it not for some trepidation and bureaucracy, many believe that we could have gotten there first. It was a kick in the ass, not a fundamental rethinking of everything. Sometimes, that’s all you need. It lit a fire that stayed lit through a trip to the moon. Back to work.
I bring this up both because it has been a year and because DeepSeek is on the verge of releasing their long-awaited next flagship model. 'V4' should be here in a matter of weeks. You can tell this not just from the reporting on the matter, but also from the fact that both Alibaba and (the Alibaba-backed) Moonshot AI pushed out their own new models this week, clearly to get ahead of DeepSeek.
To be clear, 'V4' won't be a direct follow-up to 'R1', but instead to 'V3', the model released in December 2024 that laid the groundwork for their reasoning model breakthrough. It's unclear when 'R2' is coming – given all the delays, it's entirely possible that it's baked right into 'V4' – but it's also unclear that it matters now. All eyes will be on just how close DeepSeek's flagship model can get to the top models – the focus is seemingly on coding. And, of course, how they trained it. Again, the true breakthrough a year ago.
The first post I actually wrote about the company was on that day because it was also when DeepSeek's app had shot to the top of the App Store charts, supplanting ChatGPT. A year later and the app is outside of the Top 500, at least in the US App Store. ChatGPT is back at the top.1
Yet DeepSeek still matters, just in a different way. Beyond the way it was trained, it has helped China secure a lead in "open source" models – especially once Mark Zuckerberg decided to shoot Llama in the head and start over. Much of the rest of the world seems happy to have cheaper, more extensible alternatives to the big model makers. You know, just in case.
But again, where exactly China is in the "AI Race" is the subject of much debate at the moment. While some in China are saying that they're still lagging behind the leading US firms – and that perhaps the chasm is growing – Mistral's Arthur Mensch said this was a "fairy tale" last week in Davos. Google DeepMind's Demis Hassabis was less certain, saying that the cutting edge of the Chinese AI work was still six months behind the US. The new DeepSeek release will presumably clear this up a bit...
But not entirely, because obviously the US and China are still going back and forth, and then back and forth again, over exactly which NVIDIA chips – if any – will be allowed to be legally shipped into the country. Any legally sanctioned sales will obviously be a huge boon to NVIDIA's business, as they have basically written that business down to zero. It will also, of course, oddly help the US Treasury, as they get a cut of those would-be sales.
China's lesson from DeepSeek would seem to be that the restraints placed around AI forced their companies to innovate to catch up faster. And their clear hope would be that this will work with AI chips as well, forcing Huawei and others to try to catch NVIDIA faster. At the same time, China probably can't afford to sit back and let their companies fall behind if that strategy doesn't work.
Never mind that a lot of these companies can simply train models in other countries using NVIDIA chips that are not two generations old. There are still a lot of moving pieces on the board...
And that includes back in the US, where a growing number of players are starting to think beyond LLMs. Many now believe that achieving AGI, let alone "Superintelligence", will require at least a few new breakthroughs. While others believe it will be impossible to make robots fully work without "World Models" – which is obviously a focus in China as well, where a different type of AI bubble may have formed...
Anyway, we'll see what, if any, deep impact DeepSeek has with a new year and new model. But I'll give the last words to Hassabis, who was chock full of good quotes at Davos last week: "I think it was a massive overreaction in the West."

1 Though it risks being taken down by a social network called UpScrolled, which is surging on the back of TikTok's sale to US investors and concerns that there's already content moderation going on – cue Alanis Morissette. ↩