2026-02-11 23:56:13

Seemingly my entire social feed is filled right now with people sharing "Something Big Is Happening", an essay by entrepreneur Matt Shumer. I think it's a pretty good overview of the current situation in AI meant to be read by the layperson so that they can share the thoughts for discussion around the proverbial dinner table. And actually, so they can get prepared for what's coming and take action. To that end, he kicks off by framing it as a pandemic-like situation, where the world is about to change. Time to stockpile that toilet paper and hold on to your butts.
It's mainly good because it's digestible – unlike, say, Anthropic CEO Dario Amodei's latest essay, "The Adolescence of Technology", which Shumer cites, and which is about 20,000 words. Shumer's post clocks in closer to 5,000 words. So "short" that he actually published it on Xitter – which remains just a truly awful place to read anything longer than 140 characters, but I digress...
My read on this read is that it's a bit too alarmist, but still a useful thought exercise for most people. I mainly say that because, for as fast as all of this is moving – and to be clear, it is moving insanely fast – I suspect the ramifications will still take far longer to play out. I make this "cold water" prediction quite often these days, but it's nothing profound or particularly insightful; it's simply the way most things play out, certainly with technology.
Yes, the pandemic swept in and changed things faster. But that's part of why it's not a great analogy here. Is the world about to change in a couple weeks? No. At the same time, is the world about to change in more long-term ways than it did from that pandemic? Yes.
After reading the post, I actually followed Shumer's advice and had AI do my work for me. My "work" here being to respond to this essay – how meta! I asked Claude – and specifically, Opus 4.6, one of the breakthrough models that Shumer cites – to study some of my previous writing and write a response in my style. That response, which I'll paste below these words that I actually did write – I swear! – is pretty good! I don't think it nails my exact style, but it has its moments – and honestly, others can probably judge that better than I can, as I'm maxxxx biased in this case. It makes some good counterpoints, including about the pandemic analogy, which may have shaped my own paragraph above! I'd like to think I would have said that regardless, and perhaps the AI was just able to predict that, but... how far down this rabbit hole do we want to go here?!
Anyway, I laughed at points. ("The virus didn't need a Substack.") And was generally impressed by the output.1 I often find this to be the case with Claude, which I've recently been working into my daily routine alongside ChatGPT and Gemini. (Yes, I pay for each to constantly test them – also Shumer's advice, and advice I definitely agree with – though it's certainly fine for most people to just pick, and pay for, one.)
Wait. I should back up.
As I'm suggesting above, as strange as it may seem, I actually haven't used AI to write something for me in my own style before. I mean, I think I did in the very early days of ChatGPT to see what might happen, but it was pretty bad and rudimentary at that point. In the intervening years, I've never had the urge to do this, nor felt the need. It's not that I'm afraid of doing it, and not even really that I think it's below me (though yes, I do); I just really don't see the point. Because to me, as yes, I've written about before, the point of writing is just as much about the process of doing it as the output. Actually, I think it's far more about that process, which is my big takeaway from our current AI revolution and this latest experiment.
Yes, AI could write a rebuttal for me. And yes, it can be quite good! But what is the point of that? Just to put something out there? To what end? I guess maybe if I wanted a quick and easy way to "thought lead". But even then, to what end? I wouldn't actually be "thought leading" because they wouldn't actually be my thoughts! They may look and/or sound like mine, but that doesn't change the fact that I didn't actually think them. That might not matter to others, but it matters to me!
Because again, what I get out of writing this comes from the very process of writing it! The thinking about it! Forming thoughts and letting my mind wander. Expressing my actual opinion on a matter, not outsourcing that thinking to technology.
Sure, I guess if I wanted to make a quick buck by monetizing those thoughts in some way, I could do that. But we have a word for that: spam.
All of this points to some thoughts I had around the whole "Moltbook" situation. We're in a world where bots can talk to other bots, and I think that's interesting and eventually useful for all of the "agentic" stuff we want and need AI to do for us. But there remain a lot of things that you're going to want to do for yourself. Not because an AI can't do them, but because you actually derive value from doing them. To me, writing is the best example of that. But there are many others. And we're going to increasingly discover them in this new world we're entering.
Said another way, and to harp on the point I keep making, the inputs matter just as much as the outputs, and in the Age of AI, they're probably going to matter even more!
The clear impetus for Shumer's post is that he's a developer and his "holy shit" moment was realizing that OpenAI's latest GPT-5.3 Codex model could fully do the coding work he needed, from writing, to testing, to deploying. This is clearly where AI is going to have the biggest and most immediate impact on our world. It's already happening: as Shumer notes, AI is being used to write the AI applications themselves. You don't have to extrapolate out too far to see a world in which AI starts improving itself, and this is the "breakaway" moment that Amodei and others have been talking about and warning us about. It will happen and it is something we need to watch closely, obviously.
But the disruption of day-to-day code writing seems unlikely to play out as seamlessly across other industries. As many have noted in the past, AI is uniquely suited to write code because of the way LLMs work – code output can be run and verified immediately in a way that most other work product can't be. Other jobs – and jobs-to-be-done more generally – will likely require other variants of AI that perhaps aren't as probabilistic.
Even still, I might argue that if there's no value that developers get out of the input of coding – actually doing it – perhaps it's better if it is automated away? I suspect some developers do derive value from coding though. So they might want to do it anyway? Or it might be a hybrid situation where they do the parts they want – perhaps the creative parts – and they let AI do the rest. This has already been happening, of course. And if there is a coding job to be done that can simply be automated away with a few commands, that's probably for the best for everyone aside from maybe the entry-level coders who just spent a lot of time learning a specific programming language. That sucks for them, but I also suspect they'll go on to find other and better uses of their time!
In my own world, I think about email. I've always hated it and would love not to have to do it. So I will gladly outsource that task to AI if and when the technology is up for it. But even then, there will be parts that I want to and/or need to do so that the knowledge from some of that work is in my own brain.
The above probably applies to some legal work as well (another example Shumer cites, and one that is obviously hot right now in the world of AI). There are the tedious document reviews that a human probably doesn't need to or want to be doing. But there will undoubtedly be other legal work that humans actually want to be doing. Maybe AI could do some of it, but if the lawyers actually derive value from it, maybe it's worth the cost. The cost being time and perhaps a lack of cost savings for the law firm (which, admittedly, is another complicated matter).
In general, we will need to find new business models for many jobs. But that has been the case throughout much of history. And actually, I suspect that human-created work will rise in value in this new age. I know this is hard to see now, but again, look to the inputs. A human being – your fellow human being – took time out of their life to do this. I just hope this doesn't break down into yet another class battle, where the wealthy pay for human-made creations while those less fortunate "settle" for the AI work. We'll see...
Here's where I'll note that I think Shumer's strongest point, and one that resonates with me the most, is about how best to situate our children for this new world. The answer, of course, is that nobody knows. But to that end, I think his point about making sure that kids are malleable enough to adapt, both in their education and eventually in their work, will be critical. Because it's not like things are just going to change and that will be it. The situation – the world – will keep evolving. Again, it's not a pandemic-like situation. It's perhaps more like climate change.
This is all a long-winded – though less long-winded than Shumer's post, which itself was less long-winded than Amodei's post – way of saying I think we're going to make it. AI is going to disrupt a lot of tasks and industries, but not in two weeks and not in two months, and probably not even in two years for many jobs. For coding? Probably. Legal document review? Ideally. Email? Hopefully! But things like writing? I mean, if you want to task AI with that, then sure! But you might not want to, at least not for everything.
That's the point. We're going to figure out the right balance here. Just because AI may eventually be able to do everything doesn't mean that it should. Or that it will. Because we will choose for it not to do certain things – things we wish to do ourselves and/or that we get value out of. Longer-term, our mindset around work will have to shift from the tedious tasks we have to do to recognizing, and focusing on, the work we want to do. This sounds great! But I suspect it's going to be no less hard a shift to make, mentally, emotionally, and even physically. Because it will just be different. As the focal point shifts from the output to the input...







1 For the record, I would have relied less heavily on crypto and VR as the counter-examples, as they're simply not strong enough points. Sorry, Claude.
There's a blog post making the rounds this week by Matt Shumer — an AI startup founder and investor — called "Something Big Is Happening." It's essentially a letter to friends and family telling them to wake up because AI is about to change everything, like, now. It uses the COVID pandemic as its framing device. It predicts the imminent end of most white-collar jobs. And it has gone quite viral on the strength of its urgency and its "I'm just telling you what happened to me" sincerity.
I think the post is well-intentioned. I also think it's almost perfectly engineered to be wrong in the ways that matter most.
Let's start with the COVID comparison, because it tells you a lot about the rest of the piece. Shumer writes that we're in the "this seems overblown" phase of something much bigger than COVID. The implication being that the skeptics are the ones who weren't paying attention in February 2020, and that he — an AI insider — is the one stockpiling toilet paper.
The problem is that COVID worked as a shock precisely because it was fast and binary. One week you were going to restaurants, the next week restaurants didn't exist. AI is not that. AI is a technology that has been improving — sometimes rapidly, sometimes not — for over a decade. We've had approximately 47 "everything just changed" moments in the past three years alone. Some of them were real! Most of them were vibes.
COVID didn't need evangelists to write blog posts telling you it was real. It just was real, and it hit you whether you believed in it or not. If AI were truly at the "two weeks before lockdown" stage, the last thing you'd need is a 5,000-word essay begging people to pay attention. The virus didn't need a Substack.
Now, to be fair, there's a kernel of truth in the piece. AI has gotten significantly better. The models released in early February — Opus 4.6, GPT-5.3 Codex — are genuinely impressive. If you haven't used the best available models recently, you probably should. On this point, Shumer and I agree completely.
But there's a vast chasm between "these tools are really good and getting better" and "50% of entry-level white-collar jobs will be eliminated within one to five years." And Shumer leaps across that chasm with the confidence of someone who has never watched a technology adoption cycle play out in the real world.
I have. I spent years covering the tech industry as a reporter and then over a decade as a venture investor. And the single most reliable pattern I've observed is this: the people building a technology are constitutionally incapable of accurately predicting how fast society will absorb it. They always, always think it will be faster than it is. Not because they're lying, but because they're extrapolating from their own experience — and their own experience is not representative of anything.
Shumer's big revelation is that he can now describe an app to AI and have it built without much intervention. I believe him! That's genuinely cool. But the leap from "AI can write code for an AI startup founder who has been using these tools for six years" to "AI will replace your lawyer, your doctor, and your accountant within a couple of years" is... well, it's a leap. It's the kind of leap you make when you've been too deep inside the bubble for too long.
Let me address the specific claims, because they deserve scrutiny.
Shumer cites Dario Amodei's prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years, and then says "many people in the industry think he's being conservative." He presents this as though it were a sober assessment from a credible authority. And Amodei is credible — probably the most thoughtful CEO in AI. But it's also worth noting that Amodei runs a company whose valuation is directly tied to the belief that AI will become extraordinarily powerful extraordinarily quickly. Every AI CEO in the world has an incentive to hype the timeline. That doesn't make them wrong. But it does mean you should apply a discount rate to their predictions, the same way you would to any CEO talking about the future of their own industry.
The METR benchmarks get cited — AI completing tasks that would take a human expert "nearly five hours," with the number doubling every seven months. This sounds terrifying until you think about what "tasks" means in the context of a benchmark. Benchmarks measure what benchmarks measure. They're useful indicators of progress, but the history of AI is littered with benchmarks that were "solved" long before the real-world equivalent of the benchmark was anywhere close to solved. Passing the bar exam on a multiple-choice test is not the same thing as practicing law. Completing a coding task end-to-end in a controlled environment is not the same thing as shipping production software at a Fortune 500 company with legacy systems, compliance requirements, and a VP who keeps changing the spec.
Then there's the "AI helped build itself" moment, which Shumer presents as the most important and least understood development. OpenAI said GPT-5.3 Codex was "instrumental in creating itself" — used to debug training, manage deployment, and diagnose evaluations. This is interesting! It is also... not the singularity. Software has been used to build software for as long as software has existed. Compilers compile themselves. The question isn't whether AI can contribute to AI development — of course it can — but whether this creates the runaway recursive loop that Shumer implies. And on that, the evidence is far from clear. The gains from AI-assisted AI development could plateau. They could be large but linear. The "intelligence explosion" framing assumes a specific exponential dynamic that is not guaranteed by the underlying mechanics.
The most revealing part of the piece, honestly, is the advice section. Because the advice is... fine? "Start using AI seriously." Yes. "Lean into what's hardest to replace." Sure. "Get your financial house in order." Always good advice, AI or no AI. "Spend an hour a day experimenting." Not bad!
But here's what's funny about this: if AI were truly about to do to white-collar work what COVID did to in-person dining — which is the explicit comparison Shumer makes — then "spend an hour a day experimenting with AI" would be hilariously inadequate advice. You don't tell someone to spend an hour a day experimenting with pandemic preparedness when the pandemic is two weeks away. The modesty of the advice quietly undermines the extremity of the prediction.
What Shumer is actually describing, if you strip away the apocalyptic framing, is a technology that is very useful, improving quickly, and that will probably change a lot of jobs over the next five to ten years. Which is... correct. And also not a novel observation. And also not COVID. It's closer to the internet, which did in fact transform virtually every industry — but over the course of decades, not months, and in ways that were far more nuanced and surprising than anyone predicted in 1995.
I should be clear about something: I am not an AI skeptic. I use these tools daily. I've tracked the capital expenditure numbers — the hundreds of billions that Amazon, Google, Microsoft, and Meta are committing to AI infrastructure. I take this seriously. I've invested in companies in this space. The technology is real and it is going to matter enormously.
But I've also been around long enough to recognize the genre of what Shumer wrote. It's the "I need you to understand what I understand" post. The "I've been holding back but now I owe you the truth" post. It's a genre that recurs with every major technology wave, and it always has the same structure: personal revelation, exponential trend extrapolation, dire warning, call to action. I've seen this post written about the internet, about mobile, about crypto, about the metaverse. Sometimes the underlying technology really is transformative. It just never transforms on the timeline or in the way that the "something big is happening" post predicts.
The people who wrote the equivalent post in 1995 about the internet were right about the big picture and wrong about almost every specific prediction. The people who wrote it about crypto were... mostly just wrong. The question with AI isn't whether it matters — it obviously does — but whether the specific doom-and-urgency framing is warranted by the current evidence.
I don't think it is. Not yet, anyway. And I say that as someone who has felt the ground shake — perhaps a bit earlier than most, given where I sit. But earthquakes and volcanoes are different things. The ground shaking doesn't necessarily mean everything is about to be buried in lava.
What I'd actually tell the friends and family that Shumer is addressing: use these tools. They're remarkable. Pay the $20 a month. Experiment. Be curious, not panicked. And when someone who runs an AI startup tells you that AI is about to change everything in the next one to five years, remember that someone who ran a crypto exchange told you the same thing in 2021, and someone who ran a VR company told you the same thing in 2016. The technology may be different this time. The incentive structure for the predictions is exactly the same.
2026-02-10 07:15:08

One thing that I think about a lot is not thinking a lot about what I don't think about a lot. That's cute, but also true. All I'm saying is that I wish I had more time to think. And I really do think about not having enough time to think. And it's something I've been thinking about – and writing about – for years.
I think about this because it's only in the moments where I do find time to think that I think about such things, and realize what a shame it is to not have that time in our modern world. At best, we all have a sliver of the time we used to have.
And while I suspect every generation talks about – and yes, thinks about – this issue with their own waves of technology entering lives and changing such dynamics, I do also believe that the past 25 to 30 years have accelerated the change. Basically, the internet was step one, the search engine was step two, the smartphone was step three, and now, of course, AI is step four.
To be clear, this isn't some sort of "TV is rotting your brain" argument – well, it might be, in a way – but it's not really about the content here. Some of it is good, some of it is bad (the same, of course, could be said for television), but it's the fact that it's just everywhere, every second of every day. Basically all of human knowledge and output is right there in your pocket and so it's going to be right there in your hand when a free moment of time opens up. It's not necessarily good or bad – again, it can be both good and bad – it's just reality. And such reality has rendered thinking, the process of actually getting lost in thought, all but extinct.
People talk about this in the context of "boredom" and how it has been more or less eliminated from lives thanks to the above technological steps. Boredom obviously has a natural negative connotation, but in recent years everyone seems to have woken up to some upsides to it. One, beyond perhaps learning patience, is thinking. Letting your mind wander. Because there's nothing else to do.
Again, I can only speak for myself, but with the benefit of hindsight, I actually think it was pretty critical to my childhood and growing up. Thinking obviously isn't something you learn how to do, but at the same time, I believe that if you don't actually practice it, you naturally won't be as good at it as someone else who devotes more time to the lost art.
And that alone may be the best argument against kids and smartphones. Ensuring they have that time to think without the endless feeds and streams. And to learn how to think without Google and now ChatGPT.
This is one of the things I love most about writing. It's a forcing function to make you sit down and think. Sure, there are still distractions, but at least for myself, if I can actually sit down to do it, I can focus and thoughts naturally form. This is happening right now. Even though I've thought briefly about this topic most of the day, only to allow myself to be distracted by something else so as not to fully think about it. But it's the input that matters just as much as the output. And I suspect the entire world is going to wake up to that notion in the Age of AI.
Listening to the people building this new technology and setting forth visions of what change it will bring, you hear time and time again the notion of freeing up time. I have a hard time believing this will be the case, because history is full of examples where time freed up by technology is simply filled in by other things. The positive side of that equation is that it often naturally means at least some level of productivity growth. And that comes with a potential silver lining: while there are all the fears of job displacement, it's very likely that people find new jobs to do (while, yes, recognizing that there will likely be a transition period that is rough for many). The negative side is that we end up in a state where we're somehow busier than ever.
Again, that would be my guess as to one way AI plays out at the highest level. The new technology starts being deployed to do more things that we currently do, but instead of going to the proverbial or literal beach, we just find new things to do. This is not profound, it's prosaic. It's simply what always happens.
And per my argument above, that's a bad situation, because that beach would at least theoretically have given us all more time to think. And I think that could be a really good thing for humanity. But instead, I suspect, we'll continue to think even less.
Those building AI also like to paint a picture of the technology achieving breakthroughs that humans simply cannot. Or, perhaps we could, but it's more a matter of happenstance when we do. Like the ideas around scientific discovery, where they're basically using AI to "brute force" every possible scenario in a way that no human could do in a lifetime. That's undoubtedly a good thing – and likely a very good thing – for areas like drug discovery. But it still points back to the world where humans have stopped achieving such breakthroughs because we've stopped thinking. We've essentially outsourced the thinking to these thinking machines that are far more capable of thinking at scale than we can ever hope to be. So why bother?
Yet we're undoubtedly also too heavily discounting some intangibles there, like "eureka" moments. Perhaps they are simple matters of chance where a human stumbles into something by simply thinking. But you do have to wonder what is lost if those go away... Especially if you believe that machine thinking is not the same thing as human thinking, and that there are pluses and minuses to each, but that the human brain may, for example, introduce chemical variables that lead to an output – a thought – that a machine can only replicate after the fact but can't come up with out of the blue.
And what if it's the mere process of thinking that matters to humans, not the actual thoughts? Again, the inputs, not the outputs.
And yes, you could say that humans should work with the machines to augment such breakthroughs. And it's a nice thought and I hope that it happens. But it also seems just as likely that we'll continue to offload more and more to such systems while we go and do something else. It's like how no one knows phone numbers anymore, or directions, or math, or spelling, or many, many other things where we've essentially outsourced such knowledge to different types of technology over the past few decades. Again, this is convenient and good in many ways, undoubtedly even most ways, but there are also downsides to offloading all of this.
Over the past 20 years or so, we've all learned to just Google something when we don't know an answer. And AI is the evolution of that, with even less human hunting – and yes, thinking – needed. "I've given that to my AI to think through and I'll be alerted when it's done." This is happening. Right now.
We used to have to take time to think about things. To remember things. And that naturally led our brains in myriad directions. To wander in our minds. But we don't really do that anymore. And I fear we're on the edge of doing that even less!
Anyway, just a thought I've been thinking about and yes, avoiding thinking about.



2026-02-10 00:44:56
Remember the Apple Car? You know, the project that was all anyone wanted to talk about before AI took the world – and Apple – by storm? Honestly, given the current state of the car market, Apple was probably wise to kill it off, with manufacturers rushing to pivot from their all-in-on-electric strategies amidst a shift in politics, incentives, and perhaps taste. Oh yes, and tariffs.
Anyway, we can still dream about what could have been. And that dream sure seems more tangible now with the Ferrari "Luce". Their first all-electric vehicle, which I wrote about back in 2024 before it had a name. Why? Because they had contracted LoveFrom to work on the design. Yes, the company led by Jony Ive (the part that didn't go over to OpenAI with the io acquisition).
If you're familiar with the designs that Apple produced under Ive's tenure, particularly in the era beginning with the iPhone 4, you'll feel right at home here. The overall aesthetic is one dominated by squircles and circles, all with absolute, minute perfection and symmetry.
At first blush, it's a bit clinical, but dig deeper, start poking and prodding, and you'll see there's a real sense of charm here. Fun little details and genuinely satisfying tactility begin to reveal themselves. The key, for example, has a yellow panel with an E Ink background. Push the key into the magnetized receiver in the center console, and the yellow on the key dims, moving across to glow through the top of the glass shifter. It’s meant to symbolize a sort of transference of life.
Tim Stevens does a nice job conveying the touches here, but you should really watch the video shared by Mike Matas, the designer you may know from projects back in the day such as Facebook Paper and various work within the Apple ecosystem – including at Apple, and yes, more recently at LoveFrom.
The entire project looks like a beautiful, um, hybrid of digital and tactile elements. That's notable, as one of the knocks against Ive towards the end of his Apple tenure was that form was winning out over function, with buttons cast away in favor of whittling down the device to just the essence – which mainly meant, of course, the screen. But despite Tesla's best – IMO, sort of tacky – attempts, that doesn't really fly in vehicles. So it's great to hear LoveFrom is inspired by both worlds:
The shifter isn't the only thing that's glass. There are 40-odd pieces of Corning Gorilla Glass scattered throughout the cockpit, everything from the shifter surround to the slightly convex lenses in the gauge cluster. What isn't glass is aluminum, much of it anodized in your choice of three colors: gray, dark gray and rose gold.
What does that sound like?...
The center display is a 10.12-inch OLED perforated with plenty of holes to allow some pleasingly chunky toggle switches through, plus a glass volume knob. The little clock in the upper-right can turn into a stopwatch or a compass, with its needles swinging about depending on the mode. The whole central control panel pivots and swivels. Just grab the big handle below and drag it where you want it.
I bet that swivel feels just great...
Ive was on hand to unveil the interior, clearly a little nervous about showing all this for the first time. After five years of working confidentially on this topic, Ive said he was "enormously excited" and "completely terrified" to provide our first real glimpse at the Luce.
Marc Newson, who founded LoveFrom with Ive, said: "Jony and I share a really, really deep interest in automotive things and vehicles. Actually, I'd go so far as to say that that is probably a hobby of both of ours."
And it was almost much more than a hobby, as Newson was, of course, at Apple as well! While obviously the interior of a Ferrari – no word on pricing yet – would have undoubtedly been different from whatever an Apple Car may have looked like (it certainly wouldn't have been cheap, though probably not in the Ferrari range – or maybe the "Pro" model would have been), there are probably a lot of ideas that transferred over...
One more thing: remember who is on the board of Ferrari? One Eddy Cue...


Update February 10, 2026: This Wallpaper profile goes far more in-depth, and with more images to boot. Beyond the overall clear emphasis on physical toggles versus touchscreens – which "doesn't belong in cars", according to Ive – the "LAUNCH" component is probably my favorite (in the car's own custom typeface, naturally).
They have yet to show off the actual exterior design of the car. That will happen in May. Interesting to note that this is not being targeted at usual Ferrari buyers...




When the key is depressed, starting the engine...
2026-02-09 07:18:00

I mean, he just sort of says it. "You can mark my words. In 36 months – but probably closer to 30 months – the most economically compelling place to put AI will be space." It's about 4 whole minutes into a nearly 3-hour-long podcast when Elon Musk makes his proclamation. Of course, it's one he's made before, in his own post announcing the merger of SpaceX and xAI. But the whole "mark my words" bit feels pretty definitive...
2026-02-06 06:48:10

Stocks! As it turns out, they go down too. How easily we forget, but the past few days have been a good reminder and/or wake-up call depending on your positions. In some ways, there are parallels to the "DeepSeek Moment" a year ago. In other ways, this drop is more nuanced. And perhaps more natural...