2026-02-18 02:36:41
The most obvious reference would be to The Godfather. So much so that I almost can't believe neither Netflix nor Warner Bros made it in their comments around the re-opening of discussions with Paramount. There are just so many to choose from! Even beyond any number of Corleone family quotes, there's the whole horses-head-in-the-bed bit. And so many others which are especially top of mind right now with the passing of Robert Duvall. Yes, yes, an activist, Ancora Holdings, already sort of played that hand. But come on. I mean, Paramount now literally has to make an offer that WBD cannot refuse!
Still, for some reason my mind is drawn towards another Paramount release: Willy Wonka & the Chocolate Factory. At the end of that film – 55-year spoiler alert – Wonka's demeanor suddenly switches from one of seeming disinterest in Charlie and Grandpa Joe as they're leaving to one of anger. As Grandpa Joe pushes for the free chocolate Charlie was supposed to receive, Wonka points out the contract they signed. "It's all there, black and white, clear as crystal! You stole fizzy lifting drinks! You bumped into the ceiling which now has to be washed and sterilized, so you get nothing! You lose! Good day, sir!"
As Paramount has relentlessly pushed from all angles to suggest that they actually should have been the winners of the Warner Bros sweepstakes, Netflix has just sort of kept going. Making the case to Washington (and Hollywood) about the deal, but largely ignoring the "loser". Until today.
“Throughout the robust and highly competitive strategic review process, Netflix has consistently taken a constructive, responsive approach with WBD, in stark contrast to Paramount Skydance (PSKY),” Netflix said in a statement. “While we are confident that our transaction provides superior value and certainty, we recognize the ongoing distraction for WBD stockholders and the broader entertainment industry caused by PSKY’s antics.”
"Distraction" and "antics" are just the start.
“Accordingly, we granted WBD a narrow seven-day waiver of certain obligations under our merger agreement to allow them to engage with PSKY to fully and finally resolve this matter,” the streamer continued. “This does not change the fact that we have the only signed, board-recommended agreement with WBD, and ours is the only certain path to delivering value to WBD’s stockholders.”
This is Wonka getting out the magnifying glass to read the fine print...
Netflix reiterated that its deal with Warner Bros. would “deliver more choice and greater value to audiences worldwide with expanded access to exceptional films and series – both at home and in theaters.”
It also said the deal “is centered on growth, opportunity, and a reinforced commitment to creating world-class films and television – not consolidation and layoffs” and would expand production capacity, increase its investment in original content and create jobs.
In other words, this is Netflix calling Paramount's bluff. First and foremost, the fact that they keep floating the notion of raising their bid, without actually doing so. But also, the notion that they keep trying to spin a narrative that their deal, even without the price change, is better. Netflix has a pretty clear counter argument to that. It's a bit more than one word, but to boil it down: bullshit.
At the same time, Netflix blasted Paramount, arguing it has “repeatedly mischaracterized the regulatory review process by suggesting its proposal will sail through, misleading WBD stockholders about the real risk of their regulatory challenges around the world.” For example, the company noted that it received clearance from foreign investment authorities in Germany on Jan. 27 — the same day as Paramount.
It also said that the foreign funding backing Paramount-Skydance’s bid is “already raising serious national security concerns” and that it expects the Committee on Foreign Investment in the United States (CFIUS), Team Telecom in the U.S. and European authorities to scrutinize Paramount’s backing from Middle Eastern investors.
“In reality, PSKY is far from obtaining all of the regulatory clearances required,” the company said. “Enforcers will focus on the impact of PSKY’s proposal on competition, job losses, reduced output, and downward pressure on wages for film and television workers.”
Netflix also warned that the Paramount offer would create “significant horizontal overlaps” that will concern antitrust enforcers, including combining two of the five major Hollywood studios, two major theatrical distribution channels, two of the major TV studios, two major news networks and two major sports distributors.
Again, this reads like Wonka unloading on poor Grandpa Joe! And it keeps going:
Additionally, the streamer argued that Ellison’s “aggressive financing package, rapid deleveraging plans, and performance track record pose tremendous risks to both the completion of their proposed deal and the industry” and that Paramount would be over-leveraged with approximately $84 billion in debt and a roughly 7 times leverage ratio.
To hit the midpoint of its deleveraging targets, Netflix said, Paramount would need to realize roughly $16 billion of cost savings — far in excess of its previously disclosed $6 billion synergy figure — through “greater, even deeper job cuts that would irreparably harm the entertainment industry.” It added that Paramount is undershooting its guidance for 2026 adjusted operating income by 15%, which could mean even more cost cuts.
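For what it's worth, here's a minimal back-of-the-envelope sketch of where a number like that can come from – assuming "leverage ratio" means total debt over EBITDA, that the debt load stays constant, and plugging in a purely hypothetical ~3x target ratio (Netflix doesn't spell out what Paramount's actual deleveraging targets are). Illustrative only, not anyone's disclosed model:

```python
# Illustrative deleveraging arithmetic (assumptions mine, not from either company's filings).
# Assumes leverage ratio = total debt / EBITDA and that debt is held constant,
# so the ratio can only fall if EBITDA rises -- here, via cost savings.

debt = 84e9                # ~$84B of debt, per Netflix's statement
current_leverage = 7.0     # ~7x leverage ratio, per Netflix's statement
target_leverage = 3.0      # hypothetical target -- the actual targets aren't public

implied_ebitda = debt / current_leverage            # ~$12B
required_ebitda = debt / target_leverage            # ~$28B at the assumed 3x
savings_needed = required_ebitda - implied_ebitda   # ~$16B, the figure Netflix cites

disclosed_synergies = 6e9
gap = savings_needed - disclosed_synergies          # ~$10B beyond the disclosed synergies

print(f"Implied EBITDA today:    ${implied_ebitda / 1e9:.0f}B")
print(f"EBITDA needed at ~3x:    ${required_ebitda / 1e9:.0f}B")
print(f"Cost savings required:   ${savings_needed / 1e9:.0f}B")
print(f"Gap vs. disclosed plan:  ${gap / 1e9:.0f}B")
```

Under those (again, assumed) numbers, you land right around the $16 billion Netflix is throwing out – which is presumably the flavor of math behind the claim.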
“This extraordinary execution risk and track record of operational underperformance could impact PSKY’s ability to fund and close a transaction,” the company concluded. “A business plan that is dependent upon $16 billion in cost savings should be an unmistakable red flag for regulators, policymakers, union leaders and creatives.”
In other words, Netflix to Paramount: put up (more money) or shut up. Netflix to WBD shareholders: even if they happen to put up a couple more bucks per share, you'd be crazy to go with their offer. We're so confident in this that we're giving them a week to come back with something. After that, no more bullshit, let's get this deal done.
The problem for Netflix, of course, is that investors – at least those who own huge chunks of WBD shares – undoubtedly only care about the bottom line of this deal. If Paramount moves to $31 or $32 or $33/share, Netflix is likely going to have to counter with something beyond words – no matter how compelling those words may be! It's just math, and investors are, for the most part, stupid. They need the math to be done for them.
But, to a point sort of tangential to Netflix's own points, there might be a reason why Paramount hasn't yet raised their offer: because they're already insanely levered here. This would be a relatively small company buying a much larger one, versus the Netflix situation, which is the opposite. Both are using debt, of course. But only one needs a personal backstop from one of the richest men in the world – and the father of the CEO of the acquiring company – on that debt. It's, um, strange. And it doesn't really speak to an easy path forward here for the newly combined companies. Netflix and Warner Bros though? It may or may not be a success, but the path is far more straightforward.
Perhaps Paramount's play was to push this to the breaking point before raising their bid and trying to run away with the bag. Or maybe Netflix has one more trick up their sleeve as well. Either way, the next week will be fun! Will someone wake up with a horse head in the bed? Or are we about to go for a ride in the glass elevator? All options are back on the table thanks to a clearly pissed off Netflix, who got sick of WBD's meek attempts to tell Paramount to piss off.
Thanks for reading, if you enjoyed this, perhaps:
🍺 Buy Me a Pint
🍺🍺 Buy Me 2 Pints (a month)
🍻 Buy Me 20 Pints (a year)





2026-02-17 03:17:34

As the world awaits an actual second DeepSeek situation – with the company's 'V4' model seemingly on the verge of launching any day now – we may have just gotten such a "watershed" moment from a different Chinese player instead: ByteDance. No, this isn't about TikTok – well, at least not directly – but their new 'Seedance' video model is clearly causing chaos. Certainly in Hollywood, but there are far larger tech ramifications as well.
Perhaps most interesting is that while 'Seedance 1.0' launched without much fanfare – only just about 8 months ago – it's this new 2.0 version which is exploding.1 And it's easy to see why, quite literally. With the shortest of prompts, the model can create scenes that look like they're from Hollywood movies. Often, at least amongst the ones being shared, because they are from Hollywood movies, but with elements tweaked a bit. Or a lot! The whole Brad Pitt vs. Tom Cruise martial arts fight is getting all the press at the moment. But just judging from my own social feeds, there are hundreds and undoubtedly thousands of such scenes.
And well, you have to sort of see them to believe them. I'm sure those actually in the industry look at them with some level of disdain – "come on, that type of punch would never be thrown" – but I'm also sure that many more in the industry are currently shitting their pants. Why? Because they won't shut up about it.
"I hate to say it. It’s likely over for us."
So wrote Rhett Reese, a writer of the Deadpool movies, on Xitter.
This is yet another end-of-Hollywood moment, which sounds suspiciously like another reason for various unions to go on strike again soon, but I digress. It's a big deal. But also probably not as big of a deal as Hollywood would have you believe.
Yes, Mr. Pour Cold Water On It strikes again!
Look, first and foremost, there are obviously multiple levels of infringement going on here. I'm no lawyer, but I imagine you can't just take a scene from Attack of the Clones, enhance the dialog and anatomy, and be okay. Sure, there are parody precedents and rights, but presumably not ones that cover using the actual footage? This becomes even more gray if you believe the models in question were trained on the footage in question, ingesting it alongside myriad other copyrighted work. Was Seedance trained on TikTok videos? ByteDance isn't saying, but surely it was, at least in part? Maybe the model-makers never admit to this, or maybe it should be "fair use" to some degree, but come on.
That stuff is all fairly straightforward and should be sorted out relatively quickly. Lawsuits will do that. Disney. Paramount. Etc. ByteDance is already saying that they're complying with take-down requests. China may not be the US when it comes to such laws, but it's all just too blatant for it to be tenable. Just as it was with Sora, early on. OpenAI moved fast to lock things down and... the app clearly got far less popular and was far less viral as a result. Funny that.
And it points to a secondary issue here. People love this content when it features Hollywood talent they know and love. When it doesn't? It's going to be decidedly less viral. Honestly, it won't be viral at all. You can create stunning, amazing visual scenes with AI and maybe aside from some technical folks being impressed, the masses will not care. That's just reality.
Hollywood, for all its bullshit, works. It's a great marketing engine and flywheel for great talent. But without that talent...
Though that also points to the bigger issue and potential outcome here. What if that talent pool suddenly broadens – exponentially – thanks to the technology we're now seeing in Seedance?
Without question, these videos are technically impressive and do point to a world in which Hollywood itself can create such scenes on the cheap. Let's be clear: that's the real fear here. That Hollywood will start using such tech to cut out many of the people currently needed in the film production chain. And it's a legitimate concern! Probably not this year or next, but costs have a way of getting cut eventually. Especially when conglomerates control the means of production. This is the way, sadly.
Said another way: Hollywood shouldn't be concerned about a kid in their basement using AI to make a rogue version of Star Wars, they should be worried about Disney using AI to make a version of Star Wars without much of the headcount currently needed to make a Star Wars. This is the real disruption here.
And yet they are currently worried about the kid in the basement making the viral Star Wars clip. Because, hey, they can't do that! And yes, as discussed, they technically can't. Well, technically they technically can, but they legally can't.
But can they, if, say, they make it for their own purposes using their own, locally-run models? Again, I'll leave that for the lawyers, but the edges start to get grayer still. I'm reminded here more of the OpenAI/Studio Ghibli debates last year. What if you're an excellent artist and can draw your own art that looks exactly like Studio Ghibli work for your own amusement? Is that illegal? No? So why would it be to use AI in such a way for your own purposes? Because you're technically not drawing it? Why is a prompt not a type of art? Because of how the AI was trained? Because of something else?
Video is more divisive because it's more visceral. But also, when real actors are involved, new lines are drawn. You can't just put Tom Cruise into your video, right? But what if you put the guy who looks a lot like Tom Cruise into that video? Actually, let's cut to the extreme: What if, say, Tom Cruise had a twin brother, Dom Cruise, who obviously wasn't Tom Cruise but looked exactly like Tom Cruise? Can you not create footage with Dom because he looks exactly like Tom?
Obviously not without Dom's permission, but what if you had it? Could you make a Mission: Impossible-like scene with Dom Cruise? This is, of course, theoretical. But also perhaps instructive for future legal fights. If Tom Cruise never filmed any of the scenes in your AI usage, was it still Tom Cruise in them? Where does his likeness end? Again, perhaps back to the training, but what if the model just trained on that guy who looked a lot like Tom Cruise? Not even his hypothetical twin?
Further, what if an actor allows for their likeness to be used this way? Here I'm thinking about people like Bruce Willis, who was busy making seemingly endless B-level direct-to-DVD movies before anyone really knew about the horrible health ailments that were slowly making it impossible for him to work anymore. What if such a situation led an actor to sign over their rights to be reproduced, as it were? Obviously, there would be boundaries to that, but what if they signed them over to an AI player, as Disney did with some (decidedly non-human) rights with OpenAI?
That's probably the best-case scenario for all of this because it draws firmer guardrails around such usage. And that's undoubtedly why we're seeing actors like Matthew McConaughey cut deals for their AI usage. We're clearly going to see more and more of this for all types of likeness rights.
But that's obviously going to take a while to work out. And one suspects there will be dozens or hundreds or even thousands of legal battles between now and then. But at the end of the day, it seems unlikely that the technology gets put back into the proverbial box, so it's a question of how Hollywood figures out how to leverage it. And whether that's simply the (unfortunate) situation of fewer people working on movies. Or whether it allows more movies to be made, with that scale leading to more (but different) jobs...
And what if – as I've long been harping on – all of this leads to a world in which human-made creations are more highly valued than those created by AI? Because we recognize that it's the input – time – that matters, and not just the output. The reality is probably somewhere in between, because a lot of movies will use AI to some degree. But those that are more "human-first" may end up doing better as a result. Not necessarily because the end result is better, but simply because humans tend to like and appreciate work made by other humans.
With all that in mind, I'm not sure how much of a 'Sputnik Moment' this will also be. (I mean, can there even be multiple "Sputnik Moments" — "Sputnii Moments"?) I do appreciate that while we were busy worrying about Sora, a Chinese company walks into the bar full-on "hold my beer" style. But I also suspect that a half dozen other models with similar video capabilities will launch shortly. That's just how this tends to work. So the bigger question remains...
Is it "likely over" for Hollywood? No, not likely. But it's yet another wake-up call. The gates are being thrown open and gatekeepers tend not to like that… But they should, because it may be the key to all of this working in the end.
I will just quote Rhett Reese again:
"In next to no time, one person is going to be able to sit at a computer and create a movie indistinguishable from what Hollywood now releases. True, if that person is no good, it will suck. But if that person possesses Christopher Nolan’s talent and taste (and someone like that will rapidly come along), it will be tremendous."
Yes. This. The copyright stuff will get sorted. Sure, there may be a period of pain just as there was with piracy, but we'll figure that out. I don't mean to fully downplay this, I just feel the need to counter the doom-and-gloom somewhat. And I can't help but wonder if we're focused on the wrong things here...
Thanks for reading, if you enjoyed this, perhaps:
🍺 Buy Me a Pint
🍺🍺 Buy Me 2 Pints (a month)
🍻 Buy Me 20 Pints (a year)



1 Certainly shades of when DeepSeek 'V3' launched to relatively little fanfare before 'R1' launched... ↩
2026-02-16 02:01:36

Remember what a stir Google Glass caused a decade-plus ago when it launched far too early into the world and gave us the "Glasshole"? Also remember the stir Meta causes when it does... well, basically anything? Well, here's a new report from Kashmir Hill, Kalley Huang, and Mike Isaac for The New York Times:
Five years ago, Facebook shut down the facial recognition system for tagging people in photos on its social network, saying it wanted to find “the right balance” for a technology that raises privacy and legal concerns.
Now it wants to bring facial recognition back.
Meta, Facebook’s parent company, plans to add the feature to its smart glasses, which it makes with the owner of Ray-Ban and Oakley, as soon as this year, according to four people involved with the plans who were not authorized to speak publicly about confidential discussions. The feature, internally called “Name Tag,” would let wearers of smart glasses identify people and get information about them via Meta’s artificial intelligence assistant.
Look, technology aside, maybe – just maybe – read the room here, Meta? People generally seem to, at best, distrust AI and, at worst, dislike it. Certainly in your core market at the moment. We can debate if this is warranted and how much of it has to do with messaging but... it is what it is right now. And right now, you've managed to get around this issue with the Ray-Ban Meta smart glasses. I think that's largely been the case because these glasses are not, um, framed around AI, but rather as decidedly regular-looking glasses (thanks, EssilorLuxottica) meshed with seemingly straightforward, fun technology. You know what will change that perception fast? Turning on facial recognition capabilities. You know how I know that? Because third parties have already done the helpful field work for you here ahead of time. If and when you enable this, it's going to be a total shitshow.
Naturally, they seem to know this too...
Meta’s plans could change. The Silicon Valley company has been conferring since early last year about how to release a feature that carries “safety and privacy risks,” according to an internal document viewed by The New York Times. The document, from May, described plans to first release Name Tag to attendees of a conference for the blind, which the company did not do last year, before making it available to the general public.
Meta’s internal memo said the political tumult in the United States was good timing for the feature’s release.
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.
Jesus Fucking Christ. This is like the ultimate crescendo of cluelessness. You tell me, is "the blind leading the literal blind" too on the nose? No? How about putting in writing your plan to try to put this out there when the rest of the world is distracted? I mean, even if there's validity in that strategy, you don't say that out loud. Actually, you do say that out loud – what you don't do is put it in email!
I don't really understand how Meta is so bad at this, but I'd be lying if I said it wasn't fun to watch. They're seemingly on the verge of taking a pretty popular product and weaponizing it, turning it into the most polarizing form of AI yet.
What insanely controversial project will Meta think of next?
Thanks for reading, if you enjoyed this, perhaps:
🍺 Buy Me a Pint
🍺🍺 Buy Me 2 Pints (a month)
🍻 Buy Me 20 Pints (a year)




2026-02-12 22:12:06

Stop me if you've heard this before: Apple may have to delay the roll-out of some features for the new version of Siri. Oh, you have heard this before? A half-dozen times just in the past few years? Weird. It's almost like Apple is having some major issues with their AI implementation and strategy. They should probably look into that. Perhaps before Mark Gurman does for Bloomberg?
Apple Inc.’s long-planned upgrade to the Siri virtual assistant has run into snags during testing in recent weeks, potentially pushing back the release of several highly anticipated functions.
After planning to include the new capabilities in iOS 26.4 — an operating system update slated for March — Apple is now working to spread them out over future versions, according to people familiar with the matter. That would mean possibly postponing at least some features until at least iOS 26.5, due in May, and iOS 27, which comes out in September.
Fool me once, shame on you, fool me twice, shame on me, fool me a dozen times... just shame. Shame. Shame. Shame.
As I see it, Apple has two problems. The first one is the bigger one: they need to fix Siri. But the second one is tangentially related to that quest: they need to plug the leak of detailed information about the Siri roadmap and timeline.
Officially, Apple has only said that they planned to launch a new version of Siri in 2026. This is embarrassing in and of itself considering that it is functionality they promised at WWDC two years ago – and infamously ran commercials about – even though it was effectively vaporware at the time. Many of those features got pushed into 2025. And then they had to reshuffle the entire effort – including the team overseeing AI – which pushed everything into 2026. We're only two months into 2026, so normally Apple would have some time here. The problem is that the incredible number of leaks, almost all of which result in Mark Gurman scoops for Bloomberg, has seemingly given the public far more granular details about the timing and, yes, the issues.
Apple is a famously secretive company, and yet these highly specific leaks have been going on for years at this point. It's honestly pretty weird. I used to do such reporting for a living with my share of Apple scoops back in the day. When those would happen, Apple would move fast and decisively to try to plug any holes, and they often would. I'm honestly not sure what to make of the fact that they haven't here – again, over years and years. And this is a perfect example of how damaging it can be to the company. Again, without these reports, no one outside the company would likely know that there are ongoing issues. Instead:
In the spring of last year, Apple delayed the rollout, saying the new Siri would instead arrive in 2026. It never announced more specific timing. Internally, though, Apple settled on the March 2026 target — tying it to iOS 26.4 — a goal that remained in place as recently as last month.
But testing uncovered fresh problems with the software, prompting the latest postponements, said the people, who asked not to be identified because the deliberations are private. Siri doesn’t always properly process queries or can take too long to handle requests, they said.
Thanks to Gurman's reporting, the entire market has been guided to expect a new version of Siri coming in that iOS 26.4 update. And given that iOS 26.3 just launched yesterday, the first beta builds of this new Siri update are undoubtedly imminent. In fact, we know the exact targeted week: February 23. How? Yet another Gurman report. On something as granular as the timing for a beta roll-out.
Anyway, everyone – including myself – was starting to get excited. Apple was finally going to fix Siri! Of course, we all had no reason to believe Apple here after 15 years of such promises, except that this time they made the hard (and correct) decision to outsource the AI work, going so far as to announce the partnership with Google to use Gemini. That's how dire the situation had become. The super secretive Apple, a company which famously aims to own and control their entire stack, had to publicly announce this partnership – with a major rival, no less – despite it being a de facto admission that they messed up Siri badly enough, and their AI strategy more broadly, that they couldn't fix it themselves.
And they had to announce this, in part, because of the reporting on the matter. Everyone knew they were weighing partnering to fix Siri. And that a bake-off was going on. And between whom. So Apple had little choice but to comment on it when it was over and Google was picked as the winner – never mind the fact that Apple may have wanted to pick Anthropic, but the AI startup wanted too much money for a customized Claude, per who else... With the announcement, it looked like Apple was finally putting the AI shitshow behind them. The market cheered.
And again, thanks to these Gurman reports, we knew to expect to see some progress shortly with these beta builds. Not so fast...
In recent days, Apple instructed engineers to use the upcoming iOS 26.5 in order to test new Siri features, implying that the functionality may have been moved back by at least one release. Internal versions of that update now include a notice describing the addition of some Siri enhancements.
One feature is especially likely to slip: the expanded ability for Siri to tap into personal data. That technology would let users ask the assistant to, say, search old text messages to locate a podcast shared by a friend and immediately play it.
That ability to tap into personal data was literally the marquee selling point of Siri in that WWDC 2024 keynote. We're clearly going to go a full two years without Apple being able to ship it, even with the Google partnership.
To be clear, I found it a little odd that the first Siri fixes were slated to arrive so soon after Apple announced the partnership. While there are undoubtedly some things that the two sides could plug-and-play, it seems like there might be a million little edge cases for things that would break when swapping models. And well, they're breaking...
Testers have also reported accuracy issues, as well as a bug that causes Siri to cut users off when they’re speaking too quickly. And there are problems handling complex queries that require longer processing times.
Another challenge: The new Siri sometimes falls back on its existing integration with OpenAI’s ChatGPT instead of using Apple’s own technology. That can happen even when Siri should be capable of handling the request.
Again, these details are beyond embarrassing. And they make it seem ridiculous that Apple would tout the partnership publicly – certainly without giving an updated time frame as to when to expect the fruits of such labor. Instead, we have all been going off of Gurman's reporting. And so when timetables inevitably slip...
It's more egg on the face for Apple!
And while you might think the public doesn't care about such things, the "Siri sucks" narrative has clearly gone mainstream. As has the "Apple is behind in AI" talk, which had been problematic for their stock price. Yes, it has bounced back a bit, thanks in part to being a hedge against the Big Tech CapEx situation, but more so thanks to the resurgence of iPhone sales. But still, Apple spent years and years as the most valuable company in the world; now they're battling to stay in second or even third place on most days. Why? AI, of course. Whether or not you believe in its value right now, the long-term prospects have boosted all of their competitors. And it has fueled this narrative that Apple is behind, and continuing to slide.
You know what doesn't help that narrative? Reports that Siri continues to slip...
Anyway, my point is that it's wild how Apple cannot plug these leaks when it's so clearly hurting the company in very tangible ways. But yes, the bigger issue seems to remain that they cannot fix Siri.
And while I was excited for this Google partnership and for Apple to put this sad chapter behind them, I'm no longer sure they'll actually be able to. And so Siri may continue to suck for the 15th year in a row...
Update February 13, 2026: After the report of the Siri delay (and a report about the regulators looking into Apple News bias) Apple had its worst day on the stock market in nearly a year, closing down 5%. This prompted Apple to tell CNBC that the new Siri remains on track for 2026 – which is exactly the framing I described above and again points to why the leaks are such a problem for the company.
Thanks for reading, if you enjoyed this, perhaps:
🍺 Buy Me a Pint
🍺🍺 Buy Me 2 Pints (a month)
🍻 Buy Me 20 Pints (a year)



2026-02-11 23:56:13

Seemingly my entire social feed is filled right now with people sharing "Something Big Is Happening", an essay by entrepreneur Matt Shumer. I think it's a pretty good overview of the current situation in AI meant to be read by the layperson so that they can share the thoughts for discussion around the proverbial dinner table. And actually, so they can get prepared for what's coming and take action. To that end, he kicks off by framing it as a pandemic-like situation, where the world is about to change. Time to stockpile that toilet paper and hold on to your butts.
It's mainly good because it's digestible – unlike, say, Anthropic CEO Dario Amodei's latest essay, "The Adolescence of Technology", which Shumer cites, and which is about 20,000 words. Shumer's post clocks in closer to 5,000 words. So "short" that he actually published it on Xitter – which remains just a truly awful place to read anything longer than 140 characters, but I digress...
My read on this read is that it's a bit too alarmist, but still a useful thought exercise for most people. I mainly say that because, for as fast as all of this is moving – and to be clear, it is moving insanely fast – I suspect the ramifications will still take far longer to play out. I make this "cold water" prediction quite often these days, but it's nothing profound or particularly insightful; it's simply the way most things play out, certainly with technology.
Yes, the pandemic swept in and changed things faster. But that's part of why it's not a great analogy here. Is the world about to change in a couple weeks? No. At the same time, is the world about to change in more long-term ways than it did from that pandemic? Yes.
After reading the post, I actually followed Shumer's advice and had AI do my work for me. My "work" here being to respond to this essay – how meta! I asked Claude – and specifically, Opus 4.6, one of the breakthrough models that Shumer cites – to study some of my previous writing and write a response in my style. That response, which I'll paste below these words that I actually did write – I swear! – is pretty good! I don't think it nails my exact style, but it has its moments – and honestly, others can probably judge that better than I can, as I'm maxxxx biased in this case. It makes some good counterpoints, including about the pandemic analogy, which may have shaped my own paragraph above! I'd like to think I would have said that regardless, and perhaps the AI just was able to predict that, but... how far down this rabbit hole do we want to go here?!
Anyway, I laughed at points. ("The virus didn't need a Substack.") And was generally impressed by the output.1 I often find this to be the case with Claude, which I've recently been working into my daily routine alongside ChatGPT and Gemini (yes, I pay for each, to constantly test them, which is also Shumer's advice, which I definitely agree with, though it's certainly fine for most people to just pick, and pay for, one to test).
Wait. I should back up.
As I'm suggesting above, as strange as it may seem, I actually haven't used AI to write something for me in my own style before. I mean, I think I did in the very early days of ChatGPT to see what might happen, but it was pretty bad and rudimentary at that point. In the intervening years, I've never had the urge to do this, nor felt the need. It's not that I'm afraid of doing it, and not even really that I think it's below me (though yes, I do); I just really don't see the point. Because to me, as yes, I've written about before, the point of writing is just as much about the process of doing it as about the output. Actually, I think it's far more about that process, which is my big takeaway from our current AI revolution and this latest experiment.
Yes, AI could write a rebuttal for me. And yes, it can be quite good! But what is the point of that? Just to put something out there? To what end? I guess maybe if I wanted a quick and easy way to "thought lead". But even then, to what end? I wouldn't actually be "thought leading" because they wouldn't actually be my thoughts! They may look and/or sound like them, but that doesn't change the fact that they're not mine, in that I didn't actually think them. That might not matter to others, but it matters to me!
Because again, what I get out of writing this is from the very process of writing it! The thinking about it! Forming thoughts and letting my mind wander. Expressing my actual opinion on a matter, not outsourcing that thinking to technology.
Sure, I guess if I wanted to make a quick buck by monetizing those thoughts in some way, I could do that. But we have a word for that: spam.
All of this points to some thoughts I had around the whole "Moltbook" situation. We're in a world where bots can talk to other bots, and I think that's interesting and eventually useful for all of the "agentic" stuff we want and need AI to do for us. But there remain a lot of things that you're going to want to do for yourself. Not because an AI can't do them, but because you actually derive value from doing them. To me, writing is the best example of that. But there are many others. And we're going to increasingly discover them in this new world we're entering.
Said another way, and to harp on the point I keep making, the inputs matter just as much as the outputs, and in the Age of AI, they're probably going to matter even more!
The clear impetus for Shumer's post is that he's a developer and his "holy shit" moment was realizing that OpenAI's latest GPT-5.3 Codex model could fully do the coding work he needed, from writing, to testing, to deploying. This is clearly where AI is going to have the biggest and most immediate impact on our world. It's already happening: as Shumer notes, AI is being used to write the AI applications themselves. You don't have to extrapolate out too far to see a world in which AI starts improving itself, and this is the "breakaway" moment that Amodei and others have been talking about and warning us about. It will happen and it is something we need to watch closely, obviously.
But the disruption of day-to-day code writing seems unlikely to play out as seamlessly across other industries. As many have noted in the past, AI is uniquely suited to writing code because of the way LLMs work. Other jobs – and other jobs-to-be-done – will likely require other variants of AI that perhaps aren't as probabilistic.
Even still, I might argue that if there's no value that developers get out of the input of coding – actually doing it – perhaps it's better if it is automated away? I suspect some developers do derive value from coding though. So they might want to do it anyway? Or it might be a hybrid situation where they do the parts they want – perhaps the creative parts – and they let AI do the rest. This has already been happening, of course. And if there is a coding job to be done that can simply be automated away with a few commands, that's probably for the best for everyone aside from maybe the entry-level coders who just spent a lot of time learning a specific programming language. That sucks for them, but I also suspect they'll go on to find other and better uses of their time!
In my own world, I think about email. I've always hated it and would love not to have to do it. So I will gladly outsource that task to AI if and when the technology is up for it. But even then, there will be parts that I want to and/or need to do so that the knowledge from some of that work is in my own brain.
The above probably applies to some legal work as well (another example Shumer cites and is obviously hot right now in the world of AI). There's the tedious document reviews that a human probably doesn't need to or want to be doing. But there will undoubtedly be other legal work that humans actually want to be doing. Maybe AI could do some of it, but if the lawyers actually derive value from it, maybe it's worth the cost. The cost being time and perhaps a lack of cost savings for the law firm (which, admittedly, is another complicated matter).
In general, we will need to find new business models for many jobs. But that has been the case throughout much of history. And I suspect that human-created work will actually rise in value in this new age. I know this is hard to see now, but again, look to the inputs. A human being – your fellow human being – took time out of their lives to do this. I just hope this doesn't break down into yet another class battle, where the wealthy pay for human-made creations while those less fortunate "settle" for the AI work.
We'll see...
Here's where I'll note that I think Shumer's strongest point, and one that resonates with me the most, is about how best to situate our children for this new world. The answer, of course, is that nobody knows. But to that end, I think his point about making sure that kids are malleable enough to adapt, both in their education and eventually in their work, will be critical. Because it's not like things are just going to change and that will be it. The situation – the world – will keep evolving. Again, it's not a pandemic-like situation. It's perhaps more like climate change.
This is all a long-winded – though less long-winded than Shumer's post, which itself was less long-winded than Amodei's post – way of saying I think we're going to make it. AI is going to disrupt a lot of tasks and industries, but not in two weeks and not in two months, and probably not even in two years for many jobs. For coding? Probably. Legal document review? Ideally. Email? Hopefully! But things like writing? I mean, if you want to task AI with that, then sure! But you might not want to, at least not for everything.
That's the point. We're going to figure out the right balance here. Just because AI may eventually be able to do everything doesn't mean that it should. Or that it will. Because we will choose for it not to for certain things we wish to do and/or that we get value out of. Longer-term, our mindset around work will have to shift from the tedious tasks we have to do toward recognizing and focusing on the work we want to do. This sounds great! But I suspect that it's going to be no less of a hard shift to make, mentally, emotionally, and even physically. Because it will just be different. As the focal point shifts from the output to the input...







1 For the record, I would have relied less heavily on crypto and VR as the counter-examples, as they're simply not strong enough points. Sorry, Claude. ↩
There's a blog post making the rounds this week by Matt Shumer — an AI startup founder and investor — called "Something Big Is Happening." It's essentially a letter to friends and family telling them to wake up because AI is about to change everything, like, now. It uses the COVID pandemic as its framing device. It predicts the imminent end of most white-collar jobs. And it has gone quite viral on the strength of its urgency and its "I'm just telling you what happened to me" sincerity.
I think the post is well-intentioned. I also think it's almost perfectly engineered to be wrong in the ways that matter most.
Let's start with the COVID comparison, because it tells you a lot about the rest of the piece. Shumer writes that we're in the "this seems overblown" phase of something much bigger than COVID. The implication being that the skeptics are the ones who weren't paying attention in February 2020, and that he — an AI insider — is the one stockpiling toilet paper.
The problem is that COVID worked as a shock precisely because it was fast and binary. One week you were going to restaurants, the next week restaurants didn't exist. AI is not that. AI is a technology that has been improving — sometimes rapidly, sometimes not — for over a decade. We've had approximately 47 "everything just changed" moments in the past three years alone. Some of them were real! Most of them were vibes.
COVID didn't need evangelists to write blog posts telling you it was real. It just was real, and it hit you whether you believed in it or not. If AI were truly at the "two weeks before lockdown" stage, the last thing you'd need is a 5,000-word essay begging people to pay attention. The virus didn't need a Substack.
Now, to be fair, there's a kernel of truth in the piece. AI has gotten significantly better. The models released in early February — Opus 4.6, GPT-5.3 Codex — are genuinely impressive. If you haven't used the best available models recently, you probably should. On this point, Shumer and I agree completely.
But there's a vast chasm between "these tools are really good and getting better" and "50% of entry-level white-collar jobs will be eliminated within one to five years." And Shumer leaps across that chasm with the confidence of someone who has never watched a technology adoption cycle play out in the real world.
I have. I spent years covering the tech industry as a reporter and then over a decade as a venture investor. And the single most reliable pattern I've observed is this: the people building a technology are constitutionally incapable of accurately predicting how fast society will absorb it. They always, always think it will be faster than it is. Not because they're lying, but because they're extrapolating from their own experience — and their own experience is not representative of anything.
Shumer's big revelation is that he can now describe an app to AI and have it built without much intervention. I believe him! That's genuinely cool. But the leap from "AI can write code for an AI startup founder who has been using these tools for six years" to "AI will replace your lawyer, your doctor, and your accountant within a couple of years" is... well, it's a leap. It's the kind of leap you make when you've been too deep inside the bubble for too long.
Let me address the specific claims, because they deserve scrutiny.
Shumer cites Dario Amodei's prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years, and then says "many people in the industry think he's being conservative." He presents this as though it were a sober assessment from a credible authority. And Amodei is credible — probably the most thoughtful CEO in AI. But it's also worth noting that Amodei runs a company whose valuation is directly tied to the belief that AI will become extraordinarily powerful extraordinarily quickly. Every AI CEO in the world has an incentive to hype the timeline. That doesn't make them wrong. But it does mean you should apply a discount rate to their predictions, the same way you would to any CEO talking about the future of their own industry.
The METR benchmarks get cited — AI completing tasks that would take a human expert "nearly five hours," with the number doubling every seven months. This sounds terrifying until you think about what "tasks" means in the context of a benchmark. Benchmarks measure what benchmarks measure. They're useful indicators of progress, but the history of AI is littered with benchmarks that were "solved" long before the real-world equivalent of the benchmark was anywhere close to solved. Passing the bar exam on a multiple-choice test is not the same thing as practicing law. Completing a coding task end-to-end in a controlled environment is not the same thing as shipping production software at a Fortune 500 company with legacy systems, compliance requirements, and a VP who keeps changing the spec.
Then there's the "AI helped build itself" moment, which Shumer presents as the most important and least understood development. OpenAI said GPT-5.3 Codex was "instrumental in creating itself" — used to debug training, manage deployment, and diagnose evaluations. This is interesting! It is also... not the singularity. Software has been used to build software for as long as software has existed. Compilers compile themselves. The question isn't whether AI can contribute to AI development — of course it can — but whether this creates the runaway recursive loop that Shumer implies. And on that, the evidence is far from clear. The gains from AI-assisted AI development could plateau. They could be large but linear. The "intelligence explosion" framing assumes a specific exponential dynamic that is not guaranteed by the underlying mechanics.
The most revealing part of the piece, honestly, is the advice section. Because the advice is... fine? "Start using AI seriously." Yes. "Lean into what's hardest to replace." Sure. "Get your financial house in order." Always good advice, AI or no AI. "Spend an hour a day experimenting." Not bad!
But here's what's funny about this: if AI were truly about to do to white-collar work what COVID did to in-person dining — which is the explicit comparison Shumer makes — then "spend an hour a day experimenting with AI" would be hilariously inadequate advice. You don't tell someone to spend an hour a day experimenting with pandemic preparedness when the pandemic is two weeks away. The modesty of the advice quietly undermines the extremity of the prediction.
What Shumer is actually describing, if you strip away the apocalyptic framing, is a technology that is very useful, improving quickly, and that will probably change a lot of jobs over the next five to ten years. Which is... correct. And also not a novel observation. And also not COVID. It's closer to the internet, which did in fact transform virtually every industry — but over the course of decades, not months, and in ways that were far more nuanced and surprising than anyone predicted in 1995.
I should be clear about something: I am not an AI skeptic. I use these tools daily. I've tracked the capital expenditure numbers — the hundreds of billions that Amazon, Google, Microsoft, and Meta are committing to AI infrastructure. I take this seriously. I've invested in companies in this space. The technology is real and it is going to matter enormously.
But I've also been around long enough to recognize the genre of what Shumer wrote. It's the "I need you to understand what I understand" post. The "I've been holding back but now I owe you the truth" post. It's a genre that recurs with every major technology wave, and it always has the same structure: personal revelation, exponential trend extrapolation, dire warning, call to action. I've seen this post written about the internet, about mobile, about crypto, about the metaverse. Sometimes the underlying technology really is transformative. It just never transforms on the timeline or in the way that the "something big is happening" post predicts.
The people who wrote the equivalent post in 1995 about the internet were right about the big picture and wrong about almost every specific prediction. The people who wrote it about crypto were... mostly just wrong. The question with AI isn't whether it matters — it obviously does — but whether the specific doom-and-urgency framing is warranted by the current evidence.
I don't think it is. Not yet, anyway. And I say that as someone who has felt the ground shake — perhaps a bit earlier than most, given where I sit. But earthquakes and volcanos are different things. The ground shaking doesn't necessarily mean everything is about to be buried in lava.
What I'd actually tell the friends and family that Shumer is addressing: use these tools. They're remarkable. Pay the $20 a month. Experiment. Be curious, not panicked. And when someone who runs an AI startup tells you that AI is about to change everything in the next one to five years, remember that someone who ran a crypto exchange told you the same thing in 2021, and someone who ran a VR company told you the same thing in 2016. The technology may be different this time. The incentive structure for the predictions is exactly the same.
Thanks for reading all the way to the bottom! If you enjoyed this, perhaps:
🍺 Buy Me a Pint
🍺🍺 Buy Me 2 Pints (a month)
🍻 Buy Me 20 Pints (a year)