2026-03-21 22:48:40

Anyone complaining about the latest coat of paint on top of macOS should stop and read this post by Ed Bott. Or this one. Or this one. Or any of these. But Bott in particular eviscerates Microsoft this week for their (mis)management of Windows.
Granted, it has been a long time since I used Windows on a regular basis. While it dominated the first 20 years of my computing life (post-MS DOS), the last 20 years have been almost entirely on macOS (post-OS X). So I guess I didn't realize just how bad Windows 11 apparently is. Maybe not a Vista-level disaster,1 but judging from Microsoft's own post about their (apparently newfound) "commitment to quality", it sure reads like basically everything in Windows 11, well, sucks.
Here's Bott commenting on that post:
What's most remarkable about this post is what it doesn't contain. Here's how Davuluri kicked things off:
"Every day, we hear from the community about how you experience Windows. And over the past several months, the team and I have spent a great deal of time analyzing your feedback. What came through was the voice of people who care deeply about Windows and want it to be better."
That paragraph belongs in the non-apology Hall of Fame, with a cross-reference to "Friday news dump" – a classic PR technique that aims to minimize media coverage of the awkward news being released.
When I read that paragraph, I was gobsmacked. They "spent months analyzing feedback"? Seriously? They needed charts and graphs to figure out that people just want Windows to work?
The laundry list of changes really does read almost as if it would be easier to throw the entire OS out and just start over – by doing exactly the opposite of what has been done to date.
These are complaints that have been going on for years, and have seemingly grown in step as Microsoft slowly forced Windows 11 upon the majority of their user base. That is, the only reason everyone wasn't complaining on day one is simply because most of them chose to stay on Windows 10. Once they upgrade, the complaints start.
Again, this sounds somewhat similar to macOS Tahoe on the surface. But again, it's just that: the surface. Most of the complaints there are superficial, ranging from icon design to transparency settings. Yes, most of it stems from the "Liquid Glass" updates, which were clearly conceived for iOS, but also brought to the Mac in an attempt to unify the design language. But this seems quite different from the situation with Windows, where some pretty fundamental elements of the OS just seem broken or buggy or worse.
Sure, we're in the post-Windows era of Microsoft, but still, there are billions of users of that OS on a daily basis. Microsoft may no longer really care about that user base as it's no longer the primary driver for the company, but still... it's wild that they would let Windows degrade in such a way. This list of changes almost reads as if no one inside of Microsoft actually uses Windows anymore.
But really, the complaints read mostly like: just give us the old Windows back. And sure, people generally hate change, we know that. But they seem to especially hate it when you're shoving constant updates and notifications (or worse) in their face to try to upsell them on whatever initiative Microsoft does care about right now. This is clearly a "milking the base" situation. And when you're spending well over a hundred billion dollars a year to build out your future in AI, that Windows user base must look like the world's largest herd of cows. Back to Bott:
As I noted earlier this year, Microsoft has been relentlessly shoehorning AI features into places where they absolutely don't belong. I follow feedback in forums carefully, and I would estimate that roughly 99% of the comments about AI features boil down to a simple request: Please stop.
In a blog post welcoming 2026, CEO Satya Nadella argued that "we need to get beyond the arguments of slop vs. sophistication." In response, the internet made "Microslop" the most popular meme of the new year.
Bowing to that feedback, Microsoft now says it is backing off. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted," Davuluri says. Specifically: "We are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets, and Notepad."
Copilot is clearly a mess for Microsoft on a few fronts, which is why they're loading up the cockpit with even more copilots (while perhaps shoving the captain out with the most golden of parachutes). But Microsoft obviously thought that AI would be the future of Windows. And they seemingly had an opening there given their prescient early bet on OpenAI. Then it all went to shit. Now they have to rip that shit out of Windows, lest that user base start jumping out...
Speaking of, the timing here does seem more interesting than a typical Friday news dump. Apple, of course, just unveiled the MacBook Neo. Their $599 computer is priced to move. And a lot of that movement will likely be first-time Mac buyers – as is clearly already the case. Those users probably aren't coming from Linux. And a large subset of them probably aren't coming from anywhere – but they are undoubtedly would-be PC (or Chromebook) buyers for school. That, of course, is not good news for Microsoft. But it's perhaps especially not good news when you have an OS that your current user base hates.
Is $599 a pricey fix for Windows 11? No doubt. But it's almost a permanent one.
One that goes far beyond Steve Jobs' famous "glass of ice water in hell". This isn't one good piece of software installed in a burning sea of lava, this is extinguishing the fiery hell pit with the arctic ocean that is macOS. Well, provided you can live with Liquid Glass. Which pretty much any Windows user would gladly take over this situation at this point, one suspects.
Microsoft took Windows to 11. Perhaps they should have stayed at 10.



1 Looking back, it seems almost as if Microsoft is locked into a situation where nearly every other version of Windows is a dud. It's sort of like the thing where only every other Star Trek movie was good. With Microsoft, it feels like every time they try to branch out a bit, they mess up Windows to the point where they have to course correct, mainly by backtracking on the bigger changes. ↩
2026-03-20 20:33:05
The year is 2014. Amazon, realizing that the world has almost entirely shifted to smartphones for their computing needs, felt left out. They were early to the newfangled gadget race with the Kindle. And dipped their toes in the tablet waters with the Kindle Fire in 2011.1 But now, directed by Jeff Bezos himself, it was time for Amazon to go for it. The Fire Phone was a big swing. And miss. In part because no one would tell Bezos "no". But in part because Amazon was just far too late.
The year is 2026...
The latest effort, known internally as “Transformer,” is being developed within its devices and services unit, according to four people familiar with the matter. The phone is seen as a potential mobile personalization device that can sync with home voice assistant Alexa and serve as a conduit to Amazon customers throughout the day, the people said.
Bezos may be gone,2 but history repeats...
As envisioned, the new phone’s personalization features would make buying from Amazon.com, watching Prime Video, listening to Prime Music or ordering food from partners like Grubhub easier than ever, the people said. They asked for anonymity because they were not authorized to discuss internal matters.
Come on, there is no way anyone needs that device. These needs were covered 12 years ago. They're more covered now – still by Apple and Google. BUT...
A key focus of the Transformer project has been integrating artificial intelligence capabilities into the device, the people said. That could eliminate the need for traditional app stores, which require downloading and registering for applications before they can be used.
Alexa would likely be a core feature but not necessarily the primary operating system of the phone, the people said.
I'm going to zag a bit here in thinking that this may not be totally insane. Silly 3D screen gimmicks may not have been enough to get people to switch phones, but there is a world in which AI can. We're not quite in that world yet. But we're perhaps closing in on it. And this Amazon project sounds pretty early.
A month ago, I made the case that despite the endless consternation about Apple's place in the Age of AI, they may actually end up positioned well thanks to one thing: the iPhone. First and foremost, there's certainly a case to be made that the best device for AI is the one you use most often. And that's the iPhone. Second, all the newfangled devices in the works being built around AI – including by Apple – are undoubtedly going to rely upon the iPhone as their main hub for the foreseeable future. Why? That's where the connection is.
And again, it's the device you already have on you. Even OpenAI knows that you're unlikely to give that up, no matter how good any new AI device may be – even if it's one made by the guy who designed the iPhone! Sure, everyone learned the lesson Humane (a startup in which Sam Altman was the largest investor, mind you) learned the hard way. But they also learned from said Fire Phone.
But again, what if there is an opening here? While everyone else, including Meta, is conceding to Apple's position at the moment, what if it is a time to try to re-enter the phone business? With a device completely rebuilt for AI? I'm not saying it will work. In fact, there's a very good chance it won't – not least because, aside from the product itself, competing in the phone business is insanely hard for a number of other, more logistical reasons. But it may be worth a shot? Especially if you're in the midst of spending, say, $200B this year on AI infrastructure? Why not throw a few billion at an AI phone project?
And Amazon has a potentially interesting guy to do it. Last May, I noted what seemed to be Amazon's attempt to jump into the new AI device race, with the 'ZeroOne' project. And that was being led by J Allard, a name which long ago dropped off the radar, but one those of us around long enough will remember well from the original Xbox days and later the Zune at Microsoft. The ZunePhone never happened – nor did the ill-fated "Courier" project – but might Allard get to take his swing under the guidance of his old Microsoft compatriot Panos Panay?
Again, it all sounds pretty early. To this point:
Three people who have worked on the Transformer project said the phone is still under development. The company has explored both a traditional smartphone and a so‑called “dumbphone” with more limited features that could help counter screen addiction. Amazon has not yet sought wireless carrier partners for the device, these people said.
One inspiration for the new phone has been the Light Phone, two of the people said, a $700 minimalist smartphone with a camera, map, calendar and not much else, such as an app store or web browser.
A dumbphone or feature phone could also help Amazon market it as a potential second handset to accompany iPhones and Samsung Galaxies already in customers’ pockets, the people said. Such handsets, like the Light Phone and flip phones, accounted for 15% of global handset sales in 2025, according to Counterpoint Research.
Certainly the positioning as a "second device" would be the safer path for Amazon to attempt here. But I'm not certain they shouldn't try to go for the big dogs. Especially given Apple's clear vulnerability in AI. To that end, and given Zuck's intense hatred of being beholden to Apple, I'm sort of shocked Meta hasn't been working on a new Facebook Phone!3
I would also just note that Amazon not only has their huge investment in Anthropic, but is pulling OpenAI a lot closer. What if OpenAI's new device didn't need to pair with the iPhone, but with an Amazon phone instead? And what if some combination of Claude, ChatGPT, and Alexa could power Amazon's device? A true Anyone-But-Google AI Alliance!
I will end by noting what I closed with just about a year ago:
Beyond the AI moment, all of this movement also seems tied to the fact that the iPhone – and smartphones in general – are starting to feel a bit long in the tooth. People are perhaps primed for that "what's next". Which is a billion times easier said than done, of course. But Allard has a good pedigree – even the Zune, which we all make fun of now, was a pretty good device and ahead of its time in ways. It just tried to fast-follow the iPod without a lot of Apple's built-in strengths in consumer. And it came in brown.
Could be worse. Could have been the Fire Phone. Let's not do that again, Amazon.
Or maybe let's?



1 A story I broke way back when! ↩
2 Well, from day-to-day operations, but he's still chairman of the board, of course. Not to mention the largest shareholder... ↩
3 Yes, another failed attempt to compete back in the day. It would obviously need a new name now... MetaPhone? ↩
2026-03-20 01:59:43

"We cannot miss this moment because we are distracted by side quests." Begun, the Fidji Simo era of OpenAI truly has?
Ever since she was hired last May, my read – and I believe the read of many – was that there was a reason they gave her that secondary CEO title. I mean, it may have taken that to convince her to take the role, but it also perhaps pointed to what the actual role would be. That is: the CEO in all but name. But actually, with the name too.
As with all things OpenAI, it's complicated...
2026-03-19 05:49:00

The MacBook Neo should not be a device for me. I have a Mac in my office with an M3 Max chip. And I have an M4 MacBook Air maxed out with 32GB of RAM. You know, just in case. When the new Macs hit a couple weeks back, I assumed I would be tempted by one of the M5 variants. But just like any good consumer, I'm compelled by the idea of "new". Despite its iPhone-class chip and meager 8GB of RAM, I had to try out the MacBook Neo.
And I sort of love it.
I keep trying to find a breaking point in my own workflows, but I honestly can't. Yes, it's ever-so-slightly slower at certain tasks. And yes, my workflows are admittedly pretty light, dominated by web-usage with a few native Mac apps sprinkled in here and there. But I gave it a solid week of daily usage. There's nothing I would consider a deal-breaker here or even close to it. So I'm keeping it.
I'm honest-to-god surprised by this. I had assumed I might return it and opt for an M5 machine. Instead, I think I'm gonna trade in one of my other Macs.
I realize this reads like one of those gimmicky posts where someone forces themselves to do something, perhaps in order to write a post about it. Like when I quit email many years ago. I promise this is not that!
I just really like the MacBook Neo. Yes, I like the color, but I also like the footprint, which is ever-so-slightly more compact than the MacBook Air. I also just like the way it feels versus the Air. I was someone who still sort of preferred the "teardrop" shape of the OG Airs, so I don't mind the ever-so-slightly chunkier form factor. It's the exact same weight as the Air, and that smaller footprint makes it fit a bit better in my bag.
I also like the screen even though its 13" is ever-so-slightly smaller than the 13" of the Air. One thing I notice: because there's no "lip" here – the cut out area for the camera found on both MacBook Air and Pro screens in recent years – when you expand an app full-screen, which I often do, the screen real estate is actually even more similar to the Air because that "full screen" pushes everything below the lip. On the Neo, with no lip, it goes right to the bezels.
I even sort of like the speakers right there at the front of the machine.





I thought the actually-clickable trackpad would annoy me. It's slightly louder, but I like the feel of the click. And it clicks all over. No corner has been left un-clicked. The only downside I've found is the lack of the "deeper" click which the fake-click-trackpad enabled. Here, there's an option to do that with a three-finger tap instead. I honestly even like the sound it makes on-click! It reminds me of a more old-school machine. Ditto with the sound the keyboard keys make.
Speaking of, I love the slight color-matching those keys take on. I find black keys always end up looking greasy, whereas white keys always end up looking dirty. Because these are yellow-ish – almost like a "cream" color – I'm hopeful they'll age better than those. Time will tell, I suppose.
On the color front, I obviously opted for "Citrus" and in my mind it's the clear winner of the colors. I checked out the others at an Apple Store, and while I was tempted by "Indigo", it was a little too dark for my taste. That made it appear more gray or dark silver, or even one of the million variants of black that Apple rotates through. I did like those off-blue keys though! "Blush" was way too subtle. And "Silver" way too boring. I implore Apple to have more fun here.
Taking the Citrus out-and-about garners a lot of looks. People clearly know it's a MacBook but perhaps haven't seen the Neo before and might think it's a custom paint job. Add to that the Apple logo blending in far more, as it's color-matched and not metallic. Like many of Apple's machines, the color shifts depending on the lighting situation. In daylight, the "Citrus" Neo almost glows.1 It's fun to look at while using it!
When I think about my appreciation for all of the above and then I remember that this machine is $599 (well, $699, as I got the Touch ID version, which I'd obviously recommend), I'm sort of astounded. Apple is a company that famously doesn't do cheap. This machine in no way feels cheap and yet is cheap. Certainly relative to other Macs. But even relative to other machines by other manufacturers, I have to imagine this is the best value you can possibly find. Sure, there will be machines which are less expensive, but those are undoubtedly junky in some ways. The MacBook Neo is truly, honest-to-goodness, great.
Not great for its price. Just great.
Which is undoubtedly why those other manufacturers are freaking the fuck out at the moment. Even if they could somehow match the build quality here, they'd still have to run Windows. Ouch.
That's why I truly believe this machine is the smartest move Apple has made in years. It is going to vastly expand their Mac base. That base has famously been growing over many years thanks to the halo effects of first the iPod and then the iPhone and iPad, but I suspect this machine is going to boost the business with no halo required. It's just a really great machine at a really great price.
One more thing: I will admit that one reason why I think I'm so gung-ho about keeping (and using) the Neo is that part of me wonders if it's the last time I might be able to before AI workflows require beefier specs again. Right now, with all services in the cloud, it mostly doesn't matter. But if AI moves more local – and not just with Apple, but with tools like OpenClaw, and whatever comes after – not only will 8GB of RAM not cut it, you're going to want the most RAM and best chips possible. And so again, this may be perfect timing for Apple to do the Neo.



1 I swear at points that it really does glow almost a neon green. It's pretty wild and striking to behold. I tried to capture it below... ↩


Seeing green? But it's even more neon-y in person!
2026-03-18 18:26:08

Oh look, another new copilot for Microsoft Copilot. Here's Jordan Novet for CNBC:
Microsoft said Tuesday that it’s bringing together the engineering groups for its commercial and consumer Copilot assistants, which have yet to gain broad adoption.
Jacob Andreou, a former Snap executive who works in Microsoft’s artificial intelligence unit, will become an executive vice president in charge of the consumer and commercial Copilot experience, CEO Satya Nadella wrote in a memo to employees.
Andreou will report to Nadella. Executives Ryan Roslansky, Perry Clarke and Charles Lamanna, who will also report to Nadella, will lead Microsoft 365 applications and the Copilot platform, Nadella wrote.
While it undoubtedly makes sense to try to unify these disparate Copilot groups, which are a branding, if not organizational, nightmare, the fact that all of these people will now report directly to Satya Nadella is sort of wild. Yes, there are now four copilots of Copilot – well, actually, five:
The Copilot moves will free up executive Mustafa Suleyman, a former co-founder of AI lab DeepMind that Google bought in 2014, to focus more on building new models.
“The next phase of this plan is to restructure our organization to enable me to focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years,” Suleyman wrote in a memo. “These models will enable us to build enterprise tuned lineages that help improve all our products across the company.”
"Free up" seems like just about the most generous way possible to frame this shift. As it's a pretty clear acknowledgement that Suleyman's endless efforts to create AI products for Microsoft to compete with ChatGPT and Gemini have failed. It now even looks like Microsoft is quickly falling behind Claude...
Microsoft’s Copilot app had 6 million daily active users in February, while OpenAI’s ChatGPT had 440 million and Google’s Gemini had 82 million, according to data from app analytics company Sensor Tower.
Sensor Tower said that so far in March, Anthropic’s Claude, which has gotten extensive media attention because of Anthropic’s standoff with the U.S. Department of Defense, has reached 9 million daily users, while Copilot still stands at 6 million.
None of this is hugely surprising, in fact, it was one of my main predictions for 2026:
Meta and Microsoft reboot their AI efforts – again – When Microsoft pulled off the first "hackquisition" in the form of their Inflection deal, it was clear Satya Nadella felt it was a needed gamble to hedge against their OpenAI bet, which became problematic after "The Blip". In March, we'll hit the two year anniversary of that deal and Mustafa Suleyman being charged with getting Microsoft some sort of traction in AI. With the new OpenAI arrangement giving Microsoft more flexibility, Nadella – now in "founder mode" – is clearly going to want to see some results, fast. I don't see that happening so... do they make another big hackquisition? Do something else to try to vault into the race? Meanwhile, Meta obviously just made one such deal – a second one – to try to catch up in AI after dropping the ball with Llama. I remain skeptical that they can just buy their way back in. And if it's not working, how fast does Mark Zuckerberg act yet again? Another deal?...
Well, it's March. Almost two years to the day of the Inflection buy. Maybe a coincidence, maybe not. Regardless, a natural point to look back to see what is working and what is not.1 And seemingly very little is working, certainly when it comes to consumer and commercial AI.
2026-03-18 01:40:32

I honestly didn't mean to watch the entire NVIDIA GTC Keynote. I have another post I'm working on where a small sliver is relevant, but before I knew it, there I was sitting through an entire two and a half hour presentation. Thank god for 1.5x speed. But honestly, I didn't stop because I didn't want to stop. It was compelling! Because Jensen Huang is so compelling as a showman. Others are good at such presentations – notably Snap's Evan Spiegel – but no one has been in command like this since you-know-who. And arguably Jensen's command is more impressive, because it's far more technical with products far less tangible. And despite such technicalities, he's able to weave a narrative that is actually captivating. Perhaps not quite to the mainstream, but certainly to the masses. The attendance and interest in this event has grown in such a way that he can't help but poke fun at it:
"I just want to remind you, this is a tech conference."
The first hour was more or less a history lesson as to how NVIDIA, and the AI industry itself, got here. And a love letter to CUDA, NVIDIA's software layer celebrating 20 years of being an all-important moat for the company. GeForce to pixel shaders to RTX. DLSS 5 – the horribly generic name for a new AI upscaling technology for video games – looks amazing and naturally is already quite controversial. Why? AI, of course.
Speaking of, we finally got into the meat of the presentation from here. A massive slide behind Jensen highlighted a simple message: three key moments over the past two years. ChatGPT in 2023 ushered in the generative AI era. OpenAI's o1 model (and really, o3 model) brought us into the reasoning era in 2024. And last year, Claude Code gave us the agentic era. With that, AI is now able to do productive work, and that means a new era for NVIDIA and AI in general: "The inflection point of inference has arrived."
To hear Jensen tell it, they've been planning on this all along. And certainly there's some truth there in that NVIDIA has been talking up inference for a few years now. But that's mainly been in a defensive manner, to insist to everyone that inference wasn't a potential vulnerability for the company but rather a strength.
It felt like you could tell how much this criticism annoyed Jensen in the part of his presentation where he pretended to hold up a wrestling-style championship belt (virtually made for him by SemiAnalysis) on stage. NVIDIA has body slammed the competition! From the top rope! It wasn't even a close call for the "Token King", you see.

Except, NVIDIA's actual actions the past year suggest that it might have been getting a bit too close for comfort as agents and coding came fully into view. Or at the very least, an area of concern.
It was just a few short months ago that NVIDIA made their move to "hackquire" Groq, the AI chip startup focused on, what else? Inference. Why did a company with no concerns about their position in the inference market feel the need to make a $20B deal/no-deal – on Christmas Eve, no less?
Jensen actually addressed this on stage while showing off their (insanely fast) integration of Groq technology within NVIDIA's new stack. Basically, he acknowledged that the use of SRAM in Groq's purpose-built LPUs (Language Processing Units) gave a level of inference speed which NVIDIA couldn't hope to match with their HBM-based (high-bandwidth memory) systems.
But he was also quick to note that Groq couldn't go it alone with only that approach, that they needed the parameter sizes and context that NVIDIA's systems allowed for (because the Groq systems operate with much smaller amounts of their memory). In other words, the systems needed to be paired together, and NVIDIA figured out a way to do that with Dynamo (their "operating system for AI factories"). They can route the inference workflows depending on the situation and need.
Two more interesting elements of the deal: 1) doing this as a "hackquire" clearly allowed them to do it fast, otherwise there's no chance they'd be ready to have Groq technology integrated so quickly. 2) Groq's chips are actually made by Samsung, giving NVIDIA some diversification away from TSMC (and HBM)...
All of this is in line with the various reports about why NVIDIA did the deal – and why Groq felt the need to do the deal. But it also points to Jensen's narratives which seem to shift with the benefit of hindsight. In a way, this has seemingly been the story of NVIDIA from the get-go. He built the company to be perfectly oriented to ride waves that he couldn't possibly foresee coming, only to later note that it was all so obvious. To me, the key to NVIDIA is malleability, not necessarily seeing the exact future. And this Groq situation is no different.
And that's no less impressive, by the way!
Hopefully that helps Jensen calm down a bit when it comes to the competition from AMD and more recently Google with their TPUs. Then again, that fire even when they're crushing – sorry, body slamming – the competition is undoubtedly part of what makes Jensen, Jensen. And what makes NVIDIA the most valuable company on Earth.
Speaking of, did you hear that Jensen now believes they're going to do $1T in sales (well, purchase orders) through next year? Of course, you did, because he made that a focal point of the presentation and that, in turn, made the headlines easy. "DO YOU HEAR THAT WALL STREET? ONE TRILLION." Love, Jensen.
Forget him holding up the wrestling belt, right now I'm envisioning Jensen as Maximus in Gladiator – two literal men in two literal arenas – "ARE YOU NOT ENTERTAINED?" We are Jensen! We are! We bow to thee.
Anyway, Jensen believes these past two years have seen computing demand increase one million times over. That's a pretty precise number for something so imprecise, but he has his own math to back it up: the amount of token generation has increased by ("roughly") 10,000x and the amount of usage has increased by ("probably") 100x. There you go, 1,000,000x.
He then reiterated the most important equation in all of AI at the moment: how they're getting the math to work for the data center build-outs. Basically: if companies get more AI generation capacity, they can generate more tokens, that means revenues are going up, and also more people are using it, which in turn makes the AI smarter. The "positive flywheel system" as Jensen calls it.
Again, basically all of AI is built around this notion at the moment. OpenAI most overtly, but other less forward-facing companies such as private credit players doing the debt financings for many of these data center deals need this system to keep going. No one wants to see what will happen if a "negative flywheel system" starts. Including NVIDIA.
Still, while there are a number of macro risks lingering out there that could puncture The AI Bubble, NVIDIA shows no sign of slowing. In fact, their revenue growth has been increasing again in recent quarters, which is just astonishing at their current size. They also just posted the most profitable quarter ever. Well, for any company aside from one year when an oil crisis fueled Saudi Aramco to new heights. An oil crisis you say...
Never mind all that, at least for now. Jensen's message with all the above was loud and clear: all NVIDIA partners can make their infrastructure commitments with complete confidence. Please and thank you.
To that end, a large portion of the keynote was devoted to a Vera Rubin show-and-tell. Not only are the systems on track to ship in the second half of 2026 (yes, including the new Groq component) – "probably in the Q3 timeframe" – Microsoft has already installed one such rack in Azure. And what used to take two days to install, now takes two hours.
As for the rest of the roadmap, 'Rubin Ultra' – the 'tock' to the 'Rubin' 'tick' – is already taping out according to Jensen and set for 2027. Then comes 'Feynman' in 2028 (after American physicist Richard Feynman). It will be paired with the 'Rosa' (after American physicist Rosalyn Sussman Yalow) CPU and an all-new LPU made with Groq technology, the 'LP40' (no fun scientific naming scheme here, it seems).
From here, Jensen hit on the whole data-centers-in-space thing. But the most interesting aspect may have been how quickly he moved through it – even noting as such. To hear him tell it, there's still a lot of work to do here, most notably with cooling, given the lack of conduction and convection in the vacuum of space. He says they have great engineers working on how to do it but it's "very complicated". One suspects it will be pitched as decidedly less complicated during the SpaceX IPO roadshow...
Far more time was devoted to OpenClaw. Calling it the most successful open source project in history, and just as big of a deal as HTML and Linux, Jensen framed it as the operating system for the "agentic computer". But then he got to the catch: security. Which, naturally, is what NVIDIA viewed as their opportunity here – in particular within enterprise. And while 'NemoClaw' is dropping the 'Open' from the name, Jensen says it will still be open, and the fact that they worked with creator Peter Steinberger – who himself, of course, now works at OpenAI – would seem to give that credence. Still, we're clearly about to see every large company come in with their own solution here – Perplexity already has, and Meta just did with Manus (as predicted).
As an aside, this section also brought one of my favorite moments of the keynote as Jensen really had to stretch to turn the 'SaaS' acronym into 'AgaaS'. It took me a minute to figure out what he was saying with "A Gas" – but it's 'Agent-as-a-Service', which of course would more naturally brand itself as 'AaaS'. Butt, well...
We stayed on the topic of "open" as Jensen talked up NVIDIA's 'Nemotron' open models, with version 4 in the works, partnering with Black Forest Labs, Cursor, Mistral, Perplexity, Thinking Machines, and others. Honestly, the most interesting bit of this may have been the notion of future companies using token access as a recruiting tool. (Similar to part of Meta's recent MSL pitch for compute access.)
Jensen closed out the keynote with robots. Noting that NVIDIA had been working on self-driving technology for a long time – well before the current AI movement, of course – and now they seem to be right-place/right-time again, with much of the industry lining up with them in time for their own "ChatGPT moment".
Oh yes, and Olaf! A clear highlight for either anyone under 10, or with kids under 10, to see a walking/talking character from Frozen. For as good as Jensen is on stage, Olaf clearly needs some more reps as the interaction was a bit awkward. Still, of all the things to critique, a working, interactive robot on stage... I'll let it go.



