

Sincerity Wins The War

2025-06-16 22:50:13

Hello Where’s Your Ed At Subscribers! I’ve started a premium version of this newsletter with a weekly Friday column where I go over the most meaningful news and give my views, which I guess is what you’d expect. Anyway, it’s $7 a month or $70 a year, and helps support the newsletter. I will continue to do my big free column too! Thanks.


What wins the war is sincerity.

What wins the war is accountability.

And we do not have to buy into the inevitability of this movement.

Nor do we have to cover it in the way it has always been covered. Why not mix emotion and honesty with business reporting? Why not pry apart the narrative as you tell the story rather than hoping the audience works it out? Forget “hanging them with their own rope” — describe what’s happening and hold these people accountable in the way you would be held accountable at your job. 

Your job is not to report “the facts” and let the readers work it out. To quote my buddy Kasey, if you're not reporting the context, you're not reporting the story. Facts without context aren’t really facts. Blandly repeating what an executive or politician says and thinking that appending it with “...said [person]” is sufficient to communicate their biases or intentions isn’t just irresponsible, it’s actively rejecting your position as a journalist.

You don’t even have to say somebody is lying when they say they’re going to do something — but the word “allegedly” is powerful, reasonable and honest, and is an objective way of calling into question a narrative. 

Let me give you a few examples.

A few weeks ago, multiple outlets reported that Meta would partner with Anduril, the military contractor founded by Palmer Luckey, who founded VR company Oculus, which Meta acquired in 2014, only to oust Luckey four years later for donating $10,000 to an anti-Hillary Clinton group. In 2024, Meta CTO Andrew “Boz” Bosworth, famous for saying that Facebook’s growth is necessary and good, even if it leads to bad things like cyberbullying and terror attacks, publicly apologized to Luckey.

Now the circle is completing, with Luckey sort-of-returning to Meta to work with the company on some sort of helmet called “Eagle Eye.” 

One might think at this point the media would be a little more hesitant in how they cover anything Zuckerberg-related after he completely lied to them about the metaverse, and one would be wrong.

The Washington Post reported that, and I quote:

To aid the collaboration, Meta will draw on its hefty investments in AI models known as Llama and its virtual reality division, Reality Labs. The company has built several iterations of immersive headsets aimed at blending the physical and virtual worlds — a concept known as the metaverse.

Are you fucking kidding me?

The metaverse was a joke! It never existed! Meta bought a company that made VR headsets — a technology so old, they featured in an episode of Murder, She Wrote — and an online game that could best be described as “Second Life, but sadder.” Here’s a piece from the Washington Post agreeing with me! The metaverse never really had a product of any kind, and lost tens of billions of dollars for no reason! Here’s a whole thing I wrote about it years ago! To still bring up the metaverse in the year of our lord 2025 is ridiculous!

But even putting that aside… wait, Meta’s going to put its AI inside of this headset? Palmer Luckey claims, according to the Post, that this headset will be “combining an AI assistant with communications and other functions.” Llama? That assistant?

You mean the one Meta had to rig to cheat on LLM benchmarking tests? The one that will, as reported by the Wall Street Journal, participate in vivid and gratuitous sexual fantasies with children? The one using generative AI models that hallucinate, like every other LLM? That’s the one you’re gonna put in the helmet for the military? How is the helmet going to do that, exactly? What will an LLM — an inconsistent and unreliable generative AI system — do in a combat situation, and will a soldier trust it again after its first fuckup?

Just to be clear, and I quote Palmer Luckey, the helmet will feature an “ever-present companion who can operate systems, who can communicate with others, who you can off-load tasks onto … that is looking out for you with more eyes than you could ever look out for yourself right there right there in your helmet.” This is all going to be powered by Llama?

Really? Are we all really going to accept that? Does nobody actually think about the words they’re writing down?

Here’s the thing about military tech: the US DOD tends to be fairly conservative when it comes to the software it uses, and has high requirements for reliability and safety. I could talk about these for hours — from coding guidelines to the Ada programming language, which was designed to be highly crash-resistant and powers everything from guided missiles to F-15 fighter jets — but suffice it to say that it’s highly doubtful that the military is going to rely on an LLM that hallucinates a significant portion of the time.

To be clear, I’m not saying we have to reject every single announcement that comes along, but can we just, for one second, think critically about what it is we are writing down?

We do not have to buy into every narrative, nor do we have to report it as if we do so. We do not have to accept anything based on the fact someone says it emphatically, or because they throw a number at us to make it sound respectable. 

Here’s another example. A few weeks ago, Axios had a miniature shitfit after Anthropic CEO Dario Amodei said that “AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years.”

What data did Mr. Amodei use to make this point? Who knows! Axios simply accepted that he said something and wrote it down, because why think when you could write.

This is extremely stupid! This is so unbelievably stupid that it makes me question the intelligence of literally anybody that quotes it! Dario Amodei provided no sourcing, no data, nothing other than a vibes-based fib specifically engineered to alarm hapless journalists. Amodei hasn’t done any kind of study or research. He’s just saying stuff, and that’s all it takes to get a headline when you’re the CEO of one of the top two big AI companies.

It is, by the way, easy to cover this ethically, as proven by Allison Morrow of CNN, who, engaging her critical thinking, correctly stated that “Amodei didn’t cite any research or evidence for that 50% estimate,” that “Amodei is a salesman, and it’s in his interest to make his product appear inevitable and so powerful it’s scary,” and that “little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work.”

Morrow’s work is compelling because it’s sincere, and is proof that there is absolutely nothing stopping mainstream press from covering this industry honestly. Instead, Business Insider (which just laid off a ton of people and lazily recommended their workers read books that don’t exist because they can’t even write their own emails without AI), Fortune, Mashable and many other outlets blandly covered a man’s completely made up figure as if it was fact. 

This isn’t a story. It is “guy said thing,” and “guy” happens to be “billionaire behind multi-billion dollar Large Language Model company,” and said company has made exactly jack shit as far as software that can actually replace workers. 

While there are absolutely some jobs being taken by AI, there is, to this point, little or no research that suggests it’s happening at scale, mostly because Large Language Models don’t really do the things you need them to do to take someone’s job at scale. Nor is it clear whether those jobs were lost because AI — specifically genAI — can actually do them as well as, or better than, a person, or because an imbecile CEO bought into the hype and decided to fire up the pink slip printer. When those LLMs inevitably shit the bed, those people will be hired back.

You know, like Klarna literally just had to. 

These scare tactics exist to do one thing: increase the value of companies like Anthropic, OpenAI, Microsoft, Salesforce, and anybody else outright lying about how “agents” will do our jobs, and to make it easier for the startups making these models to raise funds, kind of like how a pump-and-dump scammer will hype up a doomed penny stock by saying it’s going to the moon, without disclosing that they themselves own a stake in the business.

Let’s look at another example. A recent report from Oxford Economics talked about how entry-level workers were facing a job crisis, and vaguely mentioned in the preview of the report that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” 

One might think the report says much more than that, and one would be wrong. On the very first page, it says that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” On page 3, it claims that the “high adoption rate by information companies along with the sheer employment declines in [some roles] since 2022 suggested some displacement effect from AI…[and] digging deeper, the largest displacement seems to be entry-level jobs normally filled by recent graduates.” 

In fact, fuck it, take a look.

That’s it! That’s the entire extent of its proof! The argument is that because companies are getting AI software and there’s employment declines, it must be AI. There you go! Case closed. 

This report has now been quoted as gospel. Axios claimed that Oxford Economics’ report provided “hard evidence” that “AI is displacing white-collar workers.” USA Today said that “positions in computer and mathematical sciences have been the first affected as companies increasingly adopt artificial intelligence systems.”

And Anthropic marketing intern/New York Times columnist Kevin Roose claimed that this was only the tip of the iceberg, because, and I shit you not, he had talked to some guys who said some stuff.

No, really.

In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that A.I. companies are racing to build “virtual workers” that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become “A.I.-first,” testing whether a given task can be done by A.I. before hiring a human to do it.

One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by A.I. coding tools. Another told me that his start-up now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.

Yet Roose’s most egregious bullshit came after he admitted that these don’t prove anything:

Anecdotes like these don’t add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Trump’s economic policies.

But among people who pay close attention to what’s happening in A.I., alarms are starting to go off.

That’s right, anecdotes don’t prove his point, but what if other anecdotes proved his point? Because Roose goes on to quote Amodei’s 50% claim, and to note that Anthropic now says its Claude Opus 4 model can “code for several hours without stopping,” a statement Roose calls “a tantalizing possibility if you’re a company accustomed to paying six-figure engineer salaries for that kind of productivity,” without ever asking “does that mean the code is good?” or “what does it do for those hours?”

Roose spends the rest of the article clearing his throat, adding that “even if AI doesn’t take all entry-level jobs right away” that “two trends concern [him],” namely that he worries companies are “turning to AI too early, before the tools are robust enough to handle full entry-level workloads,” and that executives believing that entry-level jobs are short-lived will “underinvest in job training, mentorship and other programs aimed at entry-level workers.” 

Kevin, have you ever considered checking whether that actually happens?

Nah! Why would he? Kevin’s job is to be a greasy pawn of the AI industry and the markets at large. An interesting — and sincere! — version of this piece would’ve intelligently humoured the idea then attempted to actually prove it, and then failed because there is no proof that this is actually happening other than that which the media drums up.

It’s the same craven, insincere crap we saw with the return to office “debate,” which was far more about bosses pretending that the office was good than it was about productivity or any kind of work. I wrote about this almost every week for several years, and every single media outlet participated, on some level, in pushing a completely fictitious world where in-office work was “better” due to “serendipity,” that the boss was right, and that we all had to come back to the office.

Did they check with the boss about how often they were in the office? Nope! Did they give equal weight to those who disagreed with management — namely those doing the actual work? No. But they did get really concerned about quiet quitting for some reason, even though it wasn’t real, because the bosses that don’t seem to actually do any work had demanded that it was.

Anyway, Kevin Roose was super ahead of the curve on that one. He wrote that “working from home is overrated” and that “home-cooked lunches and no commuting…can’t compensate for what’s lost in creativity” in March 2020. My favourite quote is when he says “...research also shows that what remote workers gain in productivity, they often miss in harder-to-measure benefits like creativity and innovative thinking,” before mentioning some studies about “team cohesion,” linking to a 2017 article from The Atlantic that does not appear to include any study other than the Nicholas Bloom study Roose himself linked (which showed remote work was productive) and another about “proximity boosting productivity” that it does not link to at all, adding that “the data tend to talk past each other.”

I swear to god I am not trying to personally vilify Kevin Roose — it’s just that he appears to have backed up every single boss-coddling market-driven hype cycle with a big smile, every single time. If he starts writing about Quantum Computing, it’s tits up for AI.

This is the same thing that happened when corporations were raising prices and the media steadfastly claimed that inflation had nothing to do with corporate greed (once again, CNN’s Allison Morrow was one of the few mainstream media reporters willing to just say “yeah corporations actually are raising prices and blaming it on inflation”), desperately clinging to whatever flimsy data might prove that corporations weren’t price gouging even as corporations talked about doing so publicly.

It’s all so deeply insincere, and all so deeply ugly — a view from nowhere, one that seeks not to tell anyone anything other than that whatever the rich and powerful are worried or excited about is true, and that the evidence, no matter how flimsy, always points in the way they want it to.

It’s lazy, brainless, and suggests either a complete rot at the top of editorial across the entire business and tech media or a consistent failure by writers to do basic journalism, and as forgiving as I want to be, there are enough of these egregious issues that I have to begin asking if anybody is actually fucking trying.

It’s the same thing every time the powerful have an idea — remote work is bad for companies and we must return to the office, the metaverse is here and we’re all gonna work in it, prices are higher and it’s due to inflation rather than anything else, AI is so powerful and strong and will take all of our jobs, or whatever it is — and that idea immediately becomes the media’s talking points. Real people in the real world, experiencing a different reality, watch as the media repeatedly tells them that their own experiences are wrong. Companies can raise their prices specifically to raise their profits, Meta can literally not make a metaverse, AI can do very little to actually automate your real job, and the media will still tell you to shut the fuck up and eat their truth-slop.

You want an actual conspiracy theory? How about a real one: that the media works together with the rich and powerful to directly craft “the truth,” even if it runs contrary to reality. The Business Idiots that rule our economy — work-shy executives and investors with no real connection to any kind of actual production — are the true architects of what’s “real” in our world, and their demands are simple: “make the news read like we want it to.”

Yet when I say “works together,” I don’t even mean that they get together in a big room and agree on what’s going to be said. Editors — and writers — eagerly await the chance to write something following a trend or a concept that their bosses (or other writers’ bosses) come up with and are ready to go. I don’t want to pillory too many people here, but go and look at who covered the metaverse, cryptocurrency, remote work, NFTs and now generative AI in gushing terms.

Okay, but seriously, how is it every time with Casey and Kevin?

The Illuminati doesn’t need to exist. We don’t need to talk about the Bilderberg Group, or Skull and Bones, or reptilians, or wheel out David Icke and his turquoise shellsuit. The media has become more than willing to follow whatever it needs to once everybody agrees on the latest fad or campaign, to the point that they’ll repeat nonsensical claim after nonsensical claim.

The cycle repeats because our society — and yes, our editorial class too — is controlled by people who don’t actually interact with it. They have beliefs that they want affirmed, ideas that they want spread, and they don’t even need to work that hard to do so, because the editorial rails are already in place to accept whatever the next big idea is. They’ve created editorial class structures to make sure writers will only write what’s assigned, pushing back on anything that steps too far out of everybody’s agreed-upon comfort zone.

The “AI is going to eliminate half of white collar jobs” story is one that’s taken hold because it gets clicks and appeals to a fear that everyone, particularly those in the knowledge economy who have long enjoyed protection from automation, has. Nobody wants to be destitute. Nobody with six figures of college debt wants to be stood in a dole queue.  

It’s a sexy headline, one that scares the reader into clicking, and when you’re doing a half-assed job at covering a study, you can very easily just say “there’s evidence this is happening.” It’s scary. People are scared, and want to know more about the scary subject, so reporters keep covering it again and again, repeating a blatant lie sourced using flimsy data, pandering to those fears rather than addressing them with reality.

It feels like the easiest way to push back on these stories is fairly simple: ask reporters to show the companies that have actually done this.

No, I don’t mean “show me a company that did layoffs and claims they’re bringing in new efficiencies with AI.” I mean actually show me a company that has laid off, say, 10 people, and how those people have been replaced by AI. What does the AI do? How does it work? How do you quantify the work it’s replaced? How does it compare in quality? Surely with all these headlines there’s got to be one company that can show you, right?

No, no, I really don’t mean “we’re saying this is the reason,” I mean show me the actual job replacement happening and how it works. We’re three years in and we’ve got headlines talking about AI replacing jobs. Where? Christopher Mims of the Wall Street Journal had a story from June 2024 that talked about freelance copy editors and concept artists being replaced by generative AI, but I can find no stories about companies replacing employees. 

To be clear, I am not advocating for this to happen. I am simply asking that the media, which seems obsessed with — even excited by — the prospect of imminent large-scale job loss, goes out and finds a business (not a freelancer who has lost work, not a company that has laid people off with a statement about AI) that has replaced workers with generative AI. 

They can’t, because it isn’t happening at scale, because generative AI does not have the capabilities that people like Dario Amodei and Sam Altman repeatedly act like it does, yet the media continues to prop up the story because they don’t have the basic fucking curiosity to learn about what they’re talking about.

Hell, I’ll make it easier for you. Why don’t you find me the product, the actual thing, that can do someone’s job? Can you replace an accountant? No. A doctor? No. A writer? Not if you want good writing. An artist? Not if you want to actually copyright the artwork, and that’s before you get to how weird and soulless the art itself feels. Walk into your place of work tomorrow and look around you and start telling me how you would replace each and every person in there with the technology that exists today, not the imaginary stuff that Dario Amodei and Sam Altman want you to think about.

Outside of coding — which, by the way, is not the majority of a software engineer’s fucking job, if you’d take the god damn time to actually talk to one! — what are the actual capabilities of a Large Language Model today? What can it actually do? 

You’re gonna say “it can do deep research,” by which you mean a product that doesn’t really work. What else? Generate videos that sometimes look okay? “Vibe code”? Bet you’re gonna say something about AI being used in the sciences to “discover new materials,” the study that supposedly proved AI’s productivity benefits. Well, MIT announced that it has “no confidence in the provenance, reliability or validity of the data, and [has] no confidence in the validity of the research contained in the paper.”

I’m not even being facetious: show me something! Show me something that actually matters. Show me the thing that will replace white collar workers — or even, honestly, “reduce the need for them.” Find me someone who said “with a tool like this I won’t need this many people” who actually fired them and then replaced them with the tool and the business keeps functioning. Then find me two or three more. Actually, make it ten, because this is apparently replacing half the white collar workforce.

There are some answers, by the way. Generative AI has sped up transcription and translation, which are useful for quick references but can cause genuine legal risk. Generative AI-based video editing tools are gaining in popularity, though it’s unclear by how much. Seemingly every app that connects to generative AI can summarise a message. Software engineers using LLM tools — as I talked about on a recent episode of Better Offline — are finding some advantages, but LLMs are far from a panacea. Generative AI chatbots are driving people insane by providing them an endlessly-configurable pseudo-conversation too, though that’s less of a “use case” and more of a “text-based video game launched at scale without anybody thinking about what might happen.” 

Let’s be real: none of this is transformative. None of this is futuristic. It’s stuff we already do, done faster, though “faster” doesn’t mean better, or even that the task is done properly, and obviously, it doesn’t mean removing the human from the picture. Generative AI is best at, it seems, doing very specific things in a very generic way, none of which are truly life-changing. Yet that’s how the media discusses it. 

An aside about software engineering: I actually believe LLMs have some value here. LLMs can generate and evaluate code, as well as handle distinct functions within a software engineering environment. It’s pretty exciting for some software engineers (they’re able to get a lot of things done much faster!), though they’d never trust it with things launched in production. These LLMs also have “agents,” but for the sake of argument, I’d like to call them “bots,” because the term “agent” is bullshit, used to make things sound like they can do more than they can. Anyway, bots can, to quote Thomas Ptacek, “poke around your codebase on their own…author files directly…run tools…compile code…run tests…and iterate on the results,” to name a few things. These are all things, under the watchful eye of an actual person, that can speed up some software engineers’ work.
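To make “bots” a little more concrete, here’s a rough sketch of the loop these tools run: propose a change, run the tests, feed the failures back in, repeat. To be clear, this is my own illustration, not any vendor’s product; `llm_complete` and `apply_patch` are hypothetical stand-ins for whatever model API and editing tooling you’d actually use.

```python
# A toy sketch of an LLM coding "bot" loop. llm_complete() and apply_patch()
# are hypothetical stand-ins, not any real vendor's API.
import subprocess

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for a model API call")

def apply_patch(patch: str) -> None:
    raise NotImplementedError("stand-in for writing the suggested edit to disk")

def run_tests() -> tuple[bool, str]:
    # Run the project's test suite, capturing output to show the model.
    result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def bot_iterate(task: str, max_rounds: int = 5) -> bool:
    feedback = "(no test output yet)"
    for _ in range(max_rounds):
        patch = llm_complete(f"Task: {task}\n\nLatest test output:\n{feedback}")
        apply_patch(patch)
        passed, feedback = run_tests()
        if passed:
            return True  # the tests passed -- that's not the same as "the code is good"
    return False  # either way, a person reviews before anything ships
```

Note what that loop actually guarantees: the tests pass. It says nothing about whether the code is any good, which is why the “watchful eye of an actual person” part is doing so much work.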

(A note from my editor, Matt Hughes, who has been a software engineer for a long time: I’m not sure how persuasive this stuff is. Coders have been automating things like tests, code compilation, and the general mechanics of software engineering long before AI and LLMs were the hot thing du jour. You can do so many of the things that Ptacek mentioned with cronjobs and shell scripts — and, undoubtedly, with greater consistency and reliability.)

Ptacek also adds that “if truly mediocre code is all we ever get from LLM, that’s still huge, [as] it’s that much less mediocre code humans have to write.”

Back to Ed: In a conversation with veteran software engineer Carl Brown of The Internet of Bugs as I was writing this newsletter, he recommended I exercise caution with how I discussed LLMs and software engineering, saying that “...there are situations at the moment (unusual problems, or little-used programming languages or frameworks) where the stuff is absolutely useless, and is likely to be for a long time.”

In a previous draft, I’d written that mediocre code was “fine if you knew what to look for,” but even then, Brown added that “...the idea that a human can ‘know what code is supposed to look like’ is truly problematic. A lot of programmers believe that they can spot bugs by visual inspection, but I know I can't, and I'd bet large sums of money they can't either — and I have a ton of evidence I would win that bet.”

Brown continued: “In an offline environment, mediocre code may be fine when you know what good code looks like, but if the code might be exposed to hackers, or you don't know what to look for, you're gonna cause bugs, and there are more bugs than ever in today's software, and that is making everyone on the Internet less secure.”

He also told me the story of the famed Heartbleed bug, a massive vulnerability in a common encryption library that countless smart, professional security experts and developers looked at for over two years before someone spotted the error: one single statement that nobody checked, which led to a massive, internet-wide panic and left hundreds of millions of websites vulnerable.
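If you want a sense of how small an error like that can be, here’s a toy simulation of that class of bug in Python. Heartbleed itself was C code in OpenSSL’s heartbeat handler, so treat this as an analogy for the shape of the mistake rather than the real thing: the reply trusts an attacker-supplied length field and never checks it against the actual payload.

```python
# A toy simulation of a Heartbleed-class bug: one attacker-supplied length
# that nobody checks. MEMORY stands in for a process's memory, where the
# payload sits next to data that was never meant to leave the server.
MEMORY = bytearray(b"bird" + b"[SECRET KEYS][PASSWORDS][OTHER USERS' DATA]")

def heartbeat_reply(payload: bytes, claimed_len: int) -> bytes:
    start = MEMORY.find(payload)
    # BUG: claimed_len is never compared to len(payload), so the reply can
    # run past the payload into whatever happens to sit next to it.
    return bytes(MEMORY[start:start + claimed_len])

def heartbeat_reply_fixed(payload: bytes, claimed_len: int) -> bytes:
    if claimed_len > len(payload):  # the single check that was missing
        raise ValueError("claimed length exceeds actual payload")
    start = MEMORY.find(payload)
    return bytes(MEMORY[start:start + claimed_len])

print(heartbeat_reply(b"bird", 4))   # b'bird' -- an honest request
print(heartbeat_reply(b"bird", 40))  # leaks the "secrets" sitting next door
```

The fix was one comparison, and it still took over two years for anyone to notice it was missing.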

So, yeah, I dunno man. On one hand, there are clearly software developers that benefit from using LLMs, but it’s complicated, much like software engineering itself. You cannot just “replace a coder,” because “coder” isn’t really the job, and while this might affect entry-level software engineers at some point, there’s yet to be proof it’s actually happening, or that AI’s taking these jobs and not, say, outsourcing.

Perhaps there’s a simpler way to put it: software engineering is not just writing code, and if you think that’s the case, you do not write software or talk to software engineers about what it is they do. 

Seriously, put aside the money, the hype, the pressure, the media campaigns, the emotions you have, everything, and just focus on the product as it is today. What is it that generative AI does, today, for you? Don’t say “AI could” or “AI will,” tell me what “AI does.” Tell me what has changed about your life, your job, your friends’ jobs, or the world around you, other than that you heard a bunch of people got rich.

Yet the media continually calls it “powerful AI.” Powerful how? Explain the power! What is the power? The word “powerful” is a marketing term that the media has adopted to describe something it doesn’t understand, along with the word “agent,” which means “autonomous AI that can do things for you” but is used, at this point, to describe any Large Language Model doing anything. 

But the intention is to frame these models as “powerful” and to use the term “agents” to make this technology seem bigger than it is, and the people that control those terms are the AI companies themselves.

It’s at best lazy and at worst actively deceitful, a failure of modern journalism to successfully describe the moment outside of what they’re told to, or the “industry standards” they accept, such as “a Large Language Model is powerful and whatever Anthropic or OpenAI tells me is true.”

It’s a disgrace, and I believe it either creates distrust in the media or drives people insane as they look at reality, where generative AI doesn’t really seem to be doing much, and get told something entirely different by the media.


When I read a lot of modern journalism, I genuinely wonder what it is the reporter wants to convey. A thought? A narrative? A story? Some sort of regurgitated version of “the truth” as justified by what everybody else is writing and how your editor feels, or what the markets are currently interested in? What is it that writers want readers to come away with, exactly?

It reminds me a lot of a term that Defector’s David Roth once used to describe CNN’s Chris Cillizza — “politics, noticed”:

This feels, from one frothy burble to the next, like a very specific type of fashion writing, not of the kind that an astute critic or academic or even competent industry-facing journalist might write, but of the kind that you find on social media in the threaded comments attached to photos of Rihanna. Cillizza does not really appear to follow any policy issue at all, and evinces no real insight into electoral trends or political tactics. He just sort of notices whatever is happening and cheerfully announces that it is very exciting and that he is here for it. The slugline for his blog at CNN—it is, in a typical moment of uncanny poker-faced maybe-trolling, called The Point—is “Politics, Explained.” That is definitely not accurate, but it does look better than the more accurate “Politics, Noticed.”

Whether Roth would agree or not, I believe that this paragraph applies to a great deal of modern journalism. Oh! Anthropic launched a new model! Delightful. What does it do? Oh they told me, great, I can write it down. It’s even better at coding now! Wow! Also, Anthropic’s CEO said something, which I will also write down. The end!

I’ll be blunt: making no attempt to give actual context or scale or consideration to the larger meaning of the things said makes the purpose of journalism moot. Business and tech journalism has become “technology, noticed.” While there are forays out of this cul-de-sac of credulity — and exceptions at many mainstream outlets — there are so many more people who will simply hear that there’s a guy who said a thing, and that guy is rich and runs a company people respect, and thus that statement is now news to be reported without commentary or consideration.

Much of this can be blamed on the editorial upper crust that continually refuses to let writers critique their subject matter, and wants to “play it safe” by basically doing what everybody else does. What’s crazy to me is that many of the problems with the AI bubble — as with the metaverse, as with the return to office, as with inflation and price gouging — are obvious if you actually use the things or participate in reality, but such things do not always fit with the editorial message.

But honestly, there are plenty of writers who just don’t give a shit. They don’t really care to find out what AI can (or can’t) do. They’ve come to their conclusion (it’s powerful, inevitable, and already doing amazing things) and thus will write from that perspective. It’s actually pretty nefarious to continually refer to this stuff as “powerful,” because you know their public justification is how this stuff uses a bunch of GPUs, and you know their private justification is that they have never checked and don’t really care to. It’s much easier to follow the pack, because everybody “needs to cover AI” and AI stories, I assume, get clicks.

That, and their bosses, who don’t really know anything other than that “AI will be big,” don’t want to see anything else. Why argue with the powerful? They have all the money.

But even then…can you try using it? Or talking to people that use it? Not “AI experts” or “AI scientists,” but real people in the real world? Talk to some of those software engineers! Or I dunno, learn about LLMs yourself and try them out? 

Ultimately, a business or tech reporter should ask themselves: what is your job? Who do you serve? It’s perfectly fine to write relatively straightforward and positive stuff, but you have to be clear that that’s what you’re doing and why you’re doing it. 

And you know what, if all you want to do is report what a company does, fine! I have no problem with that, but at least report it truthfully. If you’re going to do an opinion piece suggesting that AI will take our jobs, at least live in reality, and put even the smallest amount of thought into what you’re saying and what it actually means. 

This isn’t even about opinion or ideology, this is basic fucking work. 

And it is fundamentally insincere. Is any of this what you truly believe? Do you know what you believe? I don’t mean this as a judgment or an attack — many people go through their whole lives with relatively flimsy reasons for the things they believe, especially in the case of commonly-held beliefs like “AI is going to be big” or “Meta is a successful company.” 

If I’m honest, I really don’t mind if you don’t agree with something I say, as long as you have a fundamentally-sound reason for doing so. My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock. Its success does not say much about the AI bubble other than it continues, and even if I am wrong, somehow, long term, at least I was wrong for reasons I could argue versus the general purpose sense that “AI is the biggest thing ever.” 

I understand formats can be constraining — many outlets demand an objective tone — but this is where words like “allegedly” come in. For example, The Wall Street Journal recently said that Sam Altman had claimed, in a leaked recording, that buying Jony Ive’s pre-product hardware startup would add “$1 trillion in market value” to OpenAI. As it stands, a reader — especially a Business Idiot — could be forgiven for thinking that OpenAI was now worth, or could be worth, over a trillion dollars, which is an egregious editorial failure.

One could easily add that “...to this date, there have been no consumer hardware launches at this scale outside of major manufacturers like Apple and Google, and these companies had significantly larger research and development budgets and already-existent infrastructure relationships that OpenAI lacks.”

Nothing about what I just said is opinion. Nothing about what I just said is an attack, or a slight, and if you think it’s “undermining” the story, you yourself are not thinking objectively. These are all true statements, and are necessary to give the full context of the story.

That, to me, is sincerity. Constrained by an entirely objective format, a reporter makes the effort to get across the context in which a story is happening, rather than just reporting exactly the story and what the company has said about it. By not including the context, you are, on some level, not being objective: you are saying that everything that’s happening here isn’t just possible, but rational, despite the ridiculous nature of Altman’s comment. 

Note that conclusions like “unlikely” are subjective. They are also the implication of simply stating that Sam Altman believes acquiring Jony Ive’s company will add $1 trillion in value to OpenAI. By not saying how unlikely it is — again, without even saying the word “unlikely,” but allowing the audience to come to that conclusion by having the whole story — you give the audience the truth.

It really is that simple.


The problem, ultimately, is that everybody is aware that they’re being constantly conned, but they can’t always see where and why. Their news oscillates from aggressively dogmatic to a kind of sludge-like objectivity, and oftentimes feels entirely disconnected from their own experiences other than in the most tangential sense, giving them the feeling that their actual lives don’t really matter to the world at large. 

On top of that, the basic experience of interacting with technology, if not the world at large, kind of fucking sucks now. We go on Instagram or Facebook to see our friends and battle through a few ads and recommended content, we see things from days ago until we click stories, and we hammer past a few more ads to get a few glimpses of our friends. We log onto Microsoft Teams, it takes a few seconds to go through after each click, and then it asks why we’re not logged in, a thing we shouldn’t need to do just to make a video call.

Our email accounts are clogged with legal spam — marketing missives, newsletters, summaries from news outlets, notifications from UPS that require us to log in, notifications that our data has been leaked, payment reminders, receipts, and even occasionally emails from real people. Google Search is broken, but then again, so is searching on basically any platform, be it our emails, workspaces or social networks. 

At scale, we as human beings are continually reminded that we do not matter, that any experiences of ours outside of what the news says make us “different” or a “cynic,” that our pain points are only as relevant as those that match recent studies or reports, and that the people who actually matter are either the powerful or those considered worthy of attention. News rarely feels like it appeals to the listener, reader or viewer, just an amorphous, generalized “thing” of a person imagined in the mind of a Business Idiot. The news doesn’t feel the need to explain why AI is powerful, just that it is, in the same way that “we all knew” that being back in the office was better, even if there were far more people who disagreed than didn’t.

As a result of all of these things, people are desperate for sincerity. They’re desperate to be talked to as human beings, their struggles validated, their pain points confronted and taken seriously. They’re desperate to have things explained to them with clarity, and to have it done by somebody who doesn’t feel chained by an outlet. 

This is something that right wing media caught onto and exploited, leading to the rise of Donald Trump and the obsession with creating the “Joe Rogan of the Left,” an inherently ridiculous position based on Rogan’s popularity with young men (which is questionable based on recent reports) and a total misunderstanding of what actually makes his kind of media popular.

However you may feel about Rogan, what his show sells is that he’s a kind of sincere, pliant and amiable oaf. He does not seem condescending or judgmental to his audience, because he himself sits, slack-jawed, saying “yeah I knew a guy who did that,” and genuinely seems to like his guests. While you (as I do) may deeply dislike everything on that show, you can’t deny that the people on it seem to at least enjoy themselves, or feel engaged and accepted.

The same goes for Theo Von (real name: Theodor Capitani von Kurnatowski III, and no, really!), whose whole affable doofus motif disarms guests and listeners. 

It works! And he’s got a whole machine that supports him, just like Rogan: money, real promotion, and real production value. They are given the bankroll and the resources to make a high-end production, a studio space and infrastructural support, and then they get a bunch of marketing and social push too. There are entire operations behind them, beyond the literal stuff they do on set, because, shocker, the audience actually wants to see them, not a boxed lunch with “THE THINGS TO BELIEVE” written on it by a management consultant.

This is in no way a political statement, because my answer to this entire vacuous debate is to give a diverse group of people whose beliefs you agree with the actual promotional and financial backing, and then let them create something with their honest-to-god friendships. Bearing witness to actual love and solidarity is what will change the hearts of young people, not endless McKinsey gargoyles with multi-million-dollar budgets for “data.”

I should be clear that this isn’t to say every single podcast should be in the format I suggest, but that if you want whatever “The Joe Rogan Of The Left” is, the answer is “a podcast with a big audience where the people like the person speaking and as a result are compelled by their message.” 

It isn’t even about politics, it’s that when you cram a bunch of fucking money into something it tends to get big, and if that thing you create is a big boring piece of shit that’s clearly built to be — and even signposted in the news as built to be — manipulative, it is in and of itself sickening.

I’m gonna continue clearing my throat: the trick here is not to lean right, nor has it ever been. Find a group of people who are compelling, diverse and genuinely enjoy being around each other and shove a whole bunch of advertising dollars into it and give it good production values to make it big, and then watch in awe as suddenly lots of people see it and your message spreads. Put a fucking trans person in there — give Western Kabuki real money, for example — and watch as people suddenly get used to seeing a trans person because you intentionally chose to do so, but didn’t make it weird or get upset when they don’t immediately vote your way. 

Because guess what — what people are hurting for right now is actual, real sincerity. Everybody feels like something is wrong. The products they use every day are increasingly broken, pumped full of generative AI features that literally get in the way of what they’re trying to do, which was already made more difficult because companies like Meta and Google intentionally make their products harder to use as a means of making more money. And, let’s be clear, people are well aware of the billions in profits these companies make at the customer’s expense.

They feel talked down to, tricked, conned, abused and abandoned, both parties’ representatives operating in terms almost as selfish as the markets they also profit from. They read articles that blandly report illegal or fantastical things as permissible and rational and think, for a second, “am I wrong? Is this really the case? This doesn’t feel right?” while somebody tells them that despite the fact that they have less money, and said money doesn’t go as far, they’re actually experiencing the highest standard of living in history.

Ultimately, regular people are repeatedly made to feel like they don’t matter. Their products are overstuffed with confusing menus and random microtransactions, the websites they read are full of advertisements disguised as stories and actual advertisements built to trick them, and their social networks intentionally separate them from the things they want to see.

And when you feel like you don’t matter, you look to other human beings, and other human beings are terrified of sincerity. They’re terrified of saying they’re scared, they’re angry, they’re sad, they’re lonely, they’re hurting, they’re constantly on a fucking tightrope. Every day feels like something weird or bad is going to happen on the news (which, for no reason other than that it helps rich people, constantly tries to scare them that AI will take their jobs), and they just want someone to talk to, but everybody else is fucking unwilling to let their guard down after a decade-plus of media that valorized snark and sarcasm, because the lesson they learned about being emotionally honest was that it’s weird, or they’re too much, or it’s feminine for guys or it’s too feminine for women.

Of course people feel like shit, so of course they’re going to turn to media that feels like real people made it, and they’ll turn to the media they see easiest, such as that given to them by the algorithm, or that which they are made to see by advertisement, or, of course, word of mouth. And if someone sends you something to listen to, describing it in terms that sound like hanging out with a friend, you’d probably give it a shot.

Outside of podcasting, people’s options for mainstream (and an alarming amount of industry) news are somewhere between “I’m smarter than you,” “something happened!” “sneering contempt,” “a trip to the principal’s office,” or “here’s who you should be mad at,” which I realize also describes the majority of the New York Times opinion page. 

While “normies” of whatever political alignment might want exactly the slop they get on TV, that slop is only slop because the people behind it believe that regular people will only accept the exact median person’s version of the world, even if they can’t really articulate it beyond “whatever is the least-threatening opinion” (or the opposite in Fox News’ case).

Really, I don’t have a panacea for what ails media, but what I do know is that in my own life I have found great joy in sincerity and love. In the last year I have made — and will continue to make, as it’s my honour to — tremendous effort to get to know the people closest to me, to be there for them if I can, to try and understand them better and to be my authentic and honest self around them, and accept and encourage them doing the same. Doing so has improved my life significantly, made me a better, more confident and more loving person, and I can only hope I provide the same level of love and acceptance to them as they do to me.

Even writing that paragraph I felt the urge to pare it back, for fear that someone would accuse me of being insincere, for “speaking in therapy language,” for “trying to sound like a hero,” not that I am doing so, but because there are far more people concerned with moderating how emotional and sincere others are than there are people willing to stop actual societal harms.

I think it’s partly because people see emotions as weakness. I don’t agree. I have never felt stronger and more emboldened than I have as I feel more love and solidarity with my friends, a group that I try to expand at any time I can. I am bolder, stronger (both physically and mentally), and far happier, as these friendships have given me the confidence to be who I am, and I offer the same aggressive advocacy to my friends in being who they are as they do to me. 

None of what I am saying is a one-size-fits-all solution. There is so much room for smaller, more niche projects, and I both encourage and delight in them. There is also so much more attention that can be given to these niche projects, and things are only “niche” until they are given the time in the light to become otherwise. There is also so much more that can be done within the mainstream power structures, if only there is the boldness to do so.

Objective reporting is necessary — crucial, in fact! — to democracy, but said objectivity cannot come at the cost of context, and every time it does so, the reader is failed and the truth is suffocated. And I don’t believe objective reporting should be separated from actual commentary. In fact, if someone is a reporter on a particular beat, their opinion is likely significantly more-informed than that of someone “objective” and “outside of the coverage,” based on stuff like “domain expertise.” 

The true solution, perhaps, is more solidarity and more sincerity. It’s media outlets that back up their workers, with editorial missions that aggressively fight those who would con their readers or abuse their writers, focusing on the incentives and power of those they’re discussing rather than whether or not “the markets” agree with their sentiment.

In any case, the last 15+ years of media have led to a flattening of journalism, constantly swerving toward whatever the next big trend is — the pivot to video, contorting content to “go viral” on social media, SEO, or whatever big coverage area (AI, for example) everybody is chasing instead of focusing on making good shit people love. Years later, social networks have effectively given up on sending traffic to news, and now Google’s AI summaries are ripping away large chunks of the traffic of major media outlets that decided the smartest way to do their jobs was “make content for machines to promote,” never thinking for a second that those who owned the machines were never to be trusted.

Worse still, outlets have drained the voices from their reporters, punishing them for having opinions, ripping out anything that might resemble a personality from their writing to meet some sort of vague “editorial voice,” despite readers and viewers showing again and again that they want to read the news from a human being, not an outlet.

I maintain that things can change for the better, and it starts with a fundamental acceptance that those running the vast majority of media outlets aren’t doing so for their readers’ benefit. Once that happens, we can rebuild around distinct voices, meaningful coverage and a sense of sincerity that the mainstream media seems to consider the enemy. 


Never Forget What They've Done

2025-06-10 02:07:01

Soundtrack: Queens of the Stone Age - Villains of Circumstance 

Listen to my podcast Better Offline if you haven't already.


I want my fucking tech industry back. 

Maybe you think I sound insane, but technology means a lot to me. It’s the way that I speak to most of my friends. It’s my lifeline when I’m hurting or when those close to me hurt, and it’s the way I am able to make a living and be a creative — something I only was able to become because of technology. Social networks have been a huge part of me being able to become a functional human being, and you can judge me for that all you want, but you are a coward and a hypocrite for doing so, and you’re going to read to the end of this blog anyway.

Really, seriously, honestly — the Ed Zitron you know was and is only possible because of my deep connection to technology. This was how I made friends. This was how I got the confidence to meet real people. This was how I started my company. This was how I met the people closest to me, people I love with all my heart. I was only able to do any of this because I was able to get on the computer. 

I am bombastic and frankly a little much today, and was the literal opposite less than 5 years ago, and I was even more reserved 10 years before that. Technology allowed me to find a way to be human on my terms, in ways that I don’t think are possible anymore because most of the interconnecting fabric that I used has been interfered with by bad actors and the rest with slop and SEO.

I think there are far more people out there like me than will admit to it. I think more people miss the past, or at least realize now what they lost.

There was a time this didn’t suck, when it wasn’t a struggle to do basic things, when my world was not a constant war with my god damn apps, when things weren’t necessarily turn-key but my phone wasn’t randomly burning through half of its battery life in an hour and a half because one app on the App Store is poorly configured. I swear to god, back in like, 2019, Zoom just fucking connected. I remember things being better, and on top of that, I see how much better things could be.

But that’s not the tech industry we’re allowed to have, because the people that run the tech industry do not give a shit.

It’s not enough to have your data, your work, your art, your posts, your friends, the things you’ve taken photos of, and the things you’ve searched for. The industry must have that of your children, and their children, as early as possible, even if it means helping them cheat on their homework so that they too can live a life where they’ve skipped having any responsibility or learning anything about the world other than how one can extract as much as possible without having to give anything in return. 

Big tech is sociopathic and directionless, swinging wildly to try and find new ways to drag any kind of interaction out of a customer they’ve grown to loathe for their unwillingness to be more profitable. Decades of powerful Big Tech Business Idiots have chased out true value-creation in Silicon Valley in favour of growth economics, sending edict after edict down to the markets and the media about what’s going to be “hot” next, inventing business trends rather than actual solutions to problems. After all, that might involve — eugh! — experiencing the real world rather than authoring a new version of it every few years.

Apple barely escapes the void because its principal value proposition has, on some level, always been “our stuff works.” The problem is that Apple needs to grow, and thus its devices are slowly but surely becoming mired in sludge. The App Store is an abomination, your iPhone settings look like a fucking Escher painting, and in its desperation to follow the pack it shoved Apple Intelligence out the door: one of the most invasive and annoying pieces of software to ever grace a computer.

Apple’s willingness to do this shows that it’s rotten just like the rest of them — it's just better at hiding it. After all, look at the way in which it flouted court orders telling it to open up third-party payments as a means of squeezing every penny out of the App Store. Loathsome. And it still ended up losing.

I adore tech. Tech made me who I am today. I use and love technology for hours a day, yet that experience is constantly mangled by the warring intentions of almost every product I use. I’m forced to log into the newspaper website and back into Google Calendar multiple times a week, my phone randomly resets — as every single iPhone has for multiple years — at least twice a week, my Apple Watch stops being willing to read my heart rate, websites I want to read sometimes simply do not load, and sometimes when I load websites on an iPad they just won’t scroll. 

Everything feels like a fucking chore, but I love the actual things that technology does for me, like letting me take notes with ease, like building and maintaining my fitness through a series of connected products like Tonal and Fight Camp, like using Signal to talk to friends hundreds or thousands of miles away, like posting dumb stuff on Bluesky and interacting with my followers, like recording a podcast wherever I am in the world because USB-C mics are cheap and easy to use and sound great. 

There are so many great things about technology, things I fucking love, and Large Language Models do not resemble their form or intention. There is nothing about an LLM that feels like it’s built to provide a real service, other than some sort-of fraudulent copy of something else lacking its soul or utility. Those that actually use them in their daily work talk about them as exciting tools that help them improve workflows, not like they’re the next big thing.

The original iPhone, even in its initial form, promised a world where two or three devices became one, where your music and a camera were always on you, and where you could do your banking and grocery shopping while sitting in the back of a taxi. It promised access to the world’s knowledge from a slab of glass in your pocket. If I’m honest, the smartphone has absolutely delivered on those promises — and more.

Where do we extrapolate from LLMs? What am I meant to be seeing in ChatGPT? 

The “iPhone moment” wasn’t a result of one thing, but a collection of different bits that formed an obvious whole — one device that did a bunch of things really, really well.

LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy, but their inconsistency, with even ardent AI cultists warning people not to trust their output. What am I meant to see from here? They’re not autonomous, and have shown no proof that they can be, and in fact kind of feel antithetical to autonomy itself, which requires consistency, reliability and replicability, more things that LLMs cannot do.

And that, ultimately, was what made the smartphone amazing too. Within a few years, phones were competent web browsers. The mobile web took a minute to catch up, sure, but you could see it taking form immediately, as you could with the App Store. They immediately made sense as a way to listen to music, because they were effectively an iPod, a beloved MP3 player, and the iPhone’s camera was good enough for most people at the time, and quickly became better than most of the point-and-shoots that people used to take on vacations and to parties. Now, most people are pretty happy with their phone cameras regardless of who makes them. 

All of this made total sense the moment you picked one up. What if the camera was better? It happened. What if the screen was bigger? It happened. There were immediate signs the iPhone would improve. It wasn’t fantastical to believe that in 10-to-20 years you’d have a bigger, faster and thinner iPhone with a camera that produced shots alarmingly close to what you’d capture with a DSLR. 

It makes sense that Google freaked out the second it picked one up. It was fucking wild what it could do, even in its first form. Each iteration and improvement — as with other smartphones — offers a new twist on a formula you already know works, and sometimes “better” means something different. For example, I don’t use Android, but I think the foldable Motorola phones are cool as shit. Palm’s WebOS was a stroke of UI genius, and it’s criminal to see how HP mishandled the company after its acquisition, ultimately killing one of the earliest and most iconic mobile brands.

Sidenote: In anticipation of a “well, akchually” from the peanut gallery, different can also mean bad. 3D phones were portable migraine-causers. The BlackBerry Storm’s weird SurePress technology — where the touchscreen kind-of ‘clicked’ through haptic feedback whenever you pressed something — was an abomination that put RIM on a terminal trajectory. And Samsung’s decision to include a built-in firelighter in the Samsung Galaxy Note 7 will remain one of the most expensive errors in mobile hardware history. It really blew up, but not in the way they wanted it to.

What does the “better” version of ChatGPT look like, exactly? What’s cool about ChatGPT? Where’s that “oooh” moment? Are you going to tell me you’re that impressed by the pictures and the words? Is it in the resemblance of its outputs to human beings? Because the actual answer is “a ChatGPT that actually works.” One that you can just ask to do some shit and know it’ll do it, and it’d also be very obvious what it could actually do, which is not the case right now. A better ChatGPT would quite literally be a different product. 

What’s particularly horrifying about the AI bubble is that it’s shown that when they decide to, big tech can put hundreds of billions behind whatever the fuck they want. They are able to mobilize incredible amounts of capital and the industrial might of multiple companies with multi-trillion dollar market capitalizations to build entire infrastructure dedicated to one thing, and the one thing they are choosing is generative AI.

They’re all fully capable of uniting around an ideal — it’s just that said ideal exists entirely to automate human beings out of the picture, and even more offensively, it doesn’t seem to be able to do so, and the more obvious that becomes, the more obvious the powerful’s hunger becomes for a world where they never see or talk to us, and they get all of our money and attention. 

And it’s not just their greed — it’s how obviously they love the idea of automating human beings away, and creating a world where we’re increasingly disconnected and beholden to technology that they entirely control. No creators, no connections, and best of all, no customers — just people cranking a giant, energy-guzzling slot machine and maybe getting the thing they wanted at the end.

Except it doesn’t work. It obviously doesn’t work. It hasn’t ever worked, and there’s never really been a sign of it working other than people very confidently saying “this will eventually work.” 

They now need this to be several echelons BIGGER than the iPhone to be worth it. Hundreds of billions of capital expenditures and endless media attention are begging for an actual payoff — something truly amazing and societally relevant other than the amount of investment and attention it’s getting. They need this to be the single biggest consumer tech phenomenon ever while also being the panacea to the dwindling growth of the Software as a Service and enterprise IT markets, and it needs to start doing that within the next 12 months, without fail, if it even has that long.

You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future. 

Imagine if they’d done something else.

Imagine if they’d done anything else.

Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.

Imagine, because right now that’s the closest you’re going to fucking get.


Mid-break Soundtrack: Spinnerette - A Prescription For Mankind

We all feel like we’re at war right now. Every person I know, on some level, feels like they’re in their own battle, their own march toward something, or against something, or away from something. It’s constant, a drumbeat, a war song, a funeral dirge, and so rarely an anthem. 

All of us feel like we’re individually suffering. We echo with conflict and we reverberate with our own doubts, even the most confident and successful of us. Even our devices are wars within themselves — wars within software that is built to interfere with its own purpose, our ability to connect with others, or our ability to find the things we need. This suffering is often an unfortunate byproduct of an advertising channel that makes Sundar Pichai or Mark Zuckerberg a hundred million dollars or more. 

We struggle to do the things we need to do, as we do with the things we want to do, because there are so many warring incentives: our mobile browsers literally slow down because every site wants to shove a fucking cookie into our phones, or a page has to phone home to a hundred different tracking services. And we fail to see the big picture, how this is literally robbing us of the one thing we know to be finite — time. 

We tell ourselves these problems are minor, because if we accept how frustrating they are, we must accept how frustrating all of them are, and how many of them there are, and that we’re surrounded by digital ants biting us with little or no rhyme or reason other than their thirst for their queen’s growth. 

While we may feel increasingly divided, these problems unite us. Everybody faces them in mostly equal measure, though the poorer you are, the more likely you’re burdened by a cheap, shitty laptop like the Acer Aspire 1 I used last year, which took over an hour to set up and forever to do anything in its advertisement-filled litterbox of an operating system. The more likely you’re unable to afford the subscriptions that afford you a bit of dignity in the digital world, like YouTube Premium, which saves you from having to see five minutes of advertising for every 10 minutes of video you watch.

We all use social networks that actively experiment on us to see how much advertising we’ll take, what content we might engage with — not like, enjoy — and we all have the same fucking awful version of Google Search. Even expensive iPhones are plagued with the cursed Apple Intelligence software, and even if you turn it off, you still deal with Apple’s actively evil App Store and a mobile internet full of websites that are effectively impossible to browse on a mobile.

We ache not so much for the old world of the computer, but for the world we know is possible if these fucking bastards wouldn’t keep ruining it. It’s magical that we can have a video chat with someone halfway across the world, play a fast-paced videogame with them, watch the same movies that we both stream, casually look something up on a search engine, or look at the photos a friend posted on a social network. Even if it’s for work, it’s kind of amazing that we can take big files and send them across the internet. The cameras in our phones are truly incredible. Connected fitness has changed my entire life. Handheld gaming PCs are cool as shit. 

We live in the future, and the future is cool.

Or it would be cool, if it wasn’t for all these fucking bastards.

Even those of us too young to remember a less-algorithmic internet can see the potential. We see what technology can do. We see what the remarkable advances in smaller chips and batteries and processors have allowed us to do. We know what’s possible, but we see — whether we acknowledge it or just feel its sheer force shearing off bits of our fucking soul — what these companies are choosing to do to us. 

There is nothing making Mark Zuckerberg force algorithmic Instagram and Facebook feeds upon people by default other than sheer, unadulterated greed and the growth-at-all-costs rot economics that have made him a multi-billionaire. We know what we want from his network, he knows what value we get out of it, but unlike Mark Zuckerberg, we have no voice in the conversation other than choosing to accept whatever punishment he offers. We know exactly what it is we want to do, and for some reason we rarely talk about the man responsible for getting in our way. 

I don’t know, maybe you think I’m being dramatic, but I feel like shit about this, because I know it doesn’t have to be this way. I have spent the last year of my life cataloguing why companies like Google (Prabhakar Raghavan) and Facebook (Zuckerberg, Gleit, Mosseri, Backstrom, Sandberg, Bosworth) make their products worse, and I don’t know why more people don’t talk about the scale of these harms, and the unbelievable, despicable intentionality behind their decision making. Sundar Pichai and Mark Zuckerberg have personally overseen the destruction of society’s access to readily-available information. You can dance around it all you want, you can claim these things aren’t a big deal, but you’re fucking wrong.

Google and Facebook were, on some level, truly societal marvels, and they have been poisoned and twisted into a form of advertising parasite that you choose to let feed on you so that you can speak to your friends or find something out. 

Let me put it in simpler terms: isn’t it fucking weird how hard it is to do anything? Don’t you remember when it was easier? It’s harder now because of Mark Zuckerberg and Sundar Pichai, and the information you look for is worse because of Sam Altman and Satya Nadella, whose deranged attachment to Large Language Models has pumped our internet full of bullshit at a time when Google had actively abandoned any duty to the web or its users.

This isn’t a situation with grey areas, especially when it comes to Mark Zuckerberg, a man who cannot be fired. He chose to make things bad, and he chooses to keep them this bad every day. Sundar Pichai is responsible for the destruction of Google Search along with the now-deposed Prabhakar Raghavan. 

Sam Altman is a con artist that worked studiously for over a decade to accumulate power and connections until he found a technology and a time when the tech industry was out of ideas, and from everything I’ve read, it feels like he fell ass-backwards into ChatGPT and was surprised by how much everybody else liked it. 

In any case, he is a great salesman to a legion of Business Idiots that had run out of growth ideas — the Rot-Com Bubble I discussed a year ago — and would take something, anything, even if it was horrifyingly expensive, even if it wasn’t clear if it would work, because Sam Altman could spin a fucking yarn, and he’d spent a long time investing in media relationships to make sure that he’d have their buy-in.

And honestly, the tech media was ready for a fun new story. I heard people saying in 2022 that it was “nice to get excited about something again,” and in many ways Altman gave hope to an industry that felt fucking bleak after getting hoodwinked twice by crypto and the metaverse. He offered a far more convincing story with an actual product to look at, sold by a guy the media already liked who had convinced everybody he was very smart.

Then Satya Nadella, a management-consultant cultist of the growth mindset, lost and realizing there were no more growth markets, decided that he must have ChatGPT in Bing, and then Sundar Pichai chose to follow. At any point these men could’ve looked ahead and seen exactly what would happen, but they chose not to, because there was nowhere else to shove their money, and both the markets and the media yearned for good news.

Notice how none of this — from the media to the executive sect — is about you or me. None of this is about products, or the future, or even the present, just whatever “the next big thing” might be that will keep the Rot Economy’s growth-at-all-costs party going. 

Nowhere along the line did anyone actually see an opportunity to sell people something they wanted or needed. Large Language Models were able to generate a lot of text or pictures, and that barely approximated anything society wanted or needed, other than being something that people used to be willing to pay more for — and businesses had long been interested in doing these things cheaper, usually by offshoring or underpaying contractors, and this allowed them to potentially reduce costs further. 

The fact that three years later we still have trouble describing why these things exist is enough of a sign that the tech industry has no real interest in building artificial intelligence at all — because AI is, at least based on the time before ChatGPT, meant to be about doing stuff for us, which Large Language Models are pretty fucking poor at, because the idea of getting something “done for you” is that you’re outsourcing both the production and the quality control. 

In any case, it’s enough to make anyone feel crazy. Over the last decade we’ve watched — and while I’m talking about the tech industry, I think we can all say it’s been everywhere else too — the things we love get distanced from us so that somebody else can get unbelievably rich, the things we used to do easily made more difficult, confusing and/or expensive, and the ways we used to connect with people become increasingly abstracted and exploitative. 

I don’t know what to tell you about these people other than that they are responsible for the world around you feeling like it’s in fucking ruins. I cannot give you a plan for the future, I cannot tell you what will fix things, but however things get fixed, it starts with people knowing who these people are and what they have done. 

I can give you their names. Mark Zuckerberg. Sam Altman. Sundar Pichai. Satya Nadella. Tim Cook. Sheryl Sandberg. Adam Mosseri. Prabhakar Raghavan. There are others, many others, and they are fully responsible for how broken everything feels.

And some of the guilty aren’t tech CEOs, or fabulously wealthy, but rather their collaborators in the tech media that have carried water for the sociopaths ruining our digital — and, often, physical — world. 

The reason I am so hard on my peers in the media is that it has never been more urgent that we hold these people accountable. Their ability to act both unburdened by regulation and true criticism has emboldened them to cause harm to billions of people so that they may continue to make billions of dollars, in part because the media continually congratulates them for doing so. 

And let’s be honest, what they’re doing is horribly, awfully wrong. 

Fighting back starts with the truth, said regularly, said boldly and clearly with emotion and sincerity. I don’t have other answers. I don’t have bold plans. I don’t know what to do other than explain how I feel and, if you feel the same, at the very least make you feel less afraid. 

If you ever need to talk, email me at [email protected]. I don’t care. I have cracked myself open and spilled myself onto my podcast and newsletter for no reason other than the fact that I feel more alive doing so, and I have become a stronger and happier person for it. 

All this is possible thanks to technology, and while I have no plan, I know I feel more free and alive when I write and speak about this stuff. I write this knowing that some will say speaking in this way is “too much,” or find some other way of attacking me for experiencing emotion, and if you’re feeling that way reading this, look deep within yourself and see if you’re simply uncomfortable with somebody capable of feeling things.

We die alone, but we choose whether we live that way. Remember that billions of us are suffering in the same way, and remember who to fucking thank for doing it to us.

Desperate Times, Desperate Measures

2025-05-28 01:02:28

Next year is going to be big. Well, I personally don't think it'll be big, but if you ask the AI industry, here are the things that will happen by the end of 2026:

How much of this actually sounds plausible to you?

Jony Ive and "The Device"

I thought I couldn't be more disappointed in parts of the tech media, then OpenAI went and bought former Apple Chief Design Officer Jony Ive's "Io," a hardware startup that it initially invested in to create some sort of consumer tech device. As part of the ridiculous $6.5 billion all-stock deal to acquire Io, Jony Ive will take over all design at OpenAI, and also build a device of some sort.

At this point, no real information exists. Analyst Ming-Chi Kuo says it might have a "form factor as compact and elegant as an iPod shuffle," yet when you look at the tweet everybody is citing Kuo's quotes from, most of the "analysis" is guesswork outside of a statement about what the prototype might be like. 

Let's Talk About Ming-Chi Kuo!

It feels like everybody is quoting analyst Ming-Chi Kuo as a source as to what this device might be as a means of justifying writing endless fluff about Jony Ive and Sam Altman's at-this-point-theoretical device.

Kuo is respectable, but only when it comes to the minutiae of Apple — changes in strategy around components and near-term launches. He has a solid reputation when it comes to finding out what’s selling, what isn’t, and what the company plans to launch. That’s because analysts work by speaking to people — often people working at companies in the less glamorous elements of the iPhone and Mac supply chain, like those that manufacture specific components — and asking what orders they’ve received, for what, and when. If a company massively cuts down on production for, say, iPhone screens, you can infer that Apple’s struggling to shift the latest version of the iPhone. Similarly, if a company is having to work around the clock to manufacture an integrated circuit that goes into the newest MacBook, you can assume that sales are pretty brisk. 

Outside of that, Kuo is fucking guessing, and assuming he’s anything more than that allows reporters to make ridiculous and fantastical guesses based on nothing other than vibes. If you are writing that Kuo "revealed details" about the device, you have failed your readers, first by putting Kuo on a mythological pedestal (one he already occupies, to some extent), and secondly by failing to put into context what an analyst does, and what an analyst can’t do. 

And yeah, Kuo is guessing. Jony Ive may have worked at Apple, but he is not Apple. Ive was not a hardware guy — at least when it came to the realm beyond industrial and interface design — nor did he handle operations at Apple. While Kuo's sources may indeed have some insight, it's highly doubtful he magically got his sources to talk after the announcement, meaning that he's guessing.

Kuo also predicted in 2021 that Apple would release 15-20 million foldable iPhones in 2023, and predicted Apple would launch some sort of AR headset almost every year, claiming it would arrive in 2020, 2022 (with glasses in 2025!), second quarter 2022, "late 2022" (where he also said that Apple would somehow also launch a second-generation version in 2024 with a lighter design), or 2023, but then in mid-2022 decided the headset would be announced in January 2023, and become available "2-4 weeks after the event," and predicted that, in fact, Apple would ship 1.5 million units of said headset in 2023. Sadly, by the end of 2022, Kuo said that the headset would be delayed until the second half of 2023, before nearly getting it right, saying that the device would be announced at WWDC 2023 (correct!), but that it would ship "the second or third quarter of 2023."

Not content with being wrong this many times, Kuo doubled down (or quadrupled down, I’ve lost count) in February 2023, saying that Apple would launch "high-end and low-end versions of second-generation headset in 2025," at a point in time when Apple had yet to announce or ship the first generation. Then, finally, literally a day before the announcement of the Vision Pro, Kuo predicted it "could launch as late as 2024," the kind of thing you could've learned from a single source at Apple telling you what would be announced in 24 hours, or, I dunno, the press embargo.

On December 25 2023, Kuo successfully predicted that the Vision Pro would launch "in late January or Early February 2024." It launched in the US on February 2 2024. Mark Gurman of Bloomberg reported that Apple planned to launch the device "by February 2024" five days earlier, on December 20 2023.

Kuo then went on to predict Apple would only produce "up to 80,000 Vision Pro headsets for launch" on January 11 2024, only to say that Apple had sold "up to 180,000" of them 11 days later. On February 28 2024, after predicting no less than twice that Apple would make multiple models, Kuo said that Apple had not started working on a second-generation or lower-priced Vision Pro.

This was a very long-winded way to say that anybody taking tweets by Ming-Chi Kuo as even clues as to what Jony Ive and Sam Altman are making is taking the piss. He has a 72.5% track record for getting things right, according to Apple Track, which is decent, but far from perfect. Any journalist that regurgitates a Ming-Chi Kuo prediction without mentioning that is committing criminal levels of journalistic malpractice. 

So, now that we've got that out the way, here's what we actually know — and that’s a very load-bearing “know” — about this device, according to the Wall Street Journal:

OpenAI Chief Executive Sam Altman gave his staff a preview Wednesday of the devices he is developing to build with the former Apple designer Jony Ive, laying out plans to ship 100 million AI “companions” that he hopes will become a part of everyday life.

...

Altman and Ive offered a few hints at the secret project they have been working on. The product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk, and will be a third core device a person would put on a desk after a MacBook Pro and an iPhone.

The Journal earlier reported that the device won’t be a phone, and that Ive and Altman’s intent is to help wean users from screens. Altman said that the device isn’t a pair of glasses, and that Ive had been skeptical about building something to wear on the body.

Let's break down what this actually means:

  • It will be "...capable of being fully aware of a user’s surroundings and life": Multimodal generative AI that can accept both visual and audio inputs are already a feature in basically every major Large Language Model.
  • "It will be unobtrusive, able to rest in one’s pocket or on one’s desk" (and it won't have a screen?): I cannot express how bad it is that this device, which will allegedly ship in a year, is so vague about how big it is. How big are your pockets? Is it smartphone sized? Smaller? If it's able to be "aware" that suggests that it'll have a bunch of sensors and maybe a camera inside it? If that’s the case, wouldn’t putting it in your pocket defeat the point? 
  • "...will be a third core device a person would put on a desk after a MacBook Pro and an iPhone": This means absolutely nothing. It's a statement made to the journalist or from marketing material intentionally shared with the journalist. A "third core device" has never really taken shape — Apple has sold, at best, a hundred million Apple Watches, and sales have begun to tumble. Products like Google’s Glass similarly failed — partially because it was expensive, partially because they became fatally uncool overnight, and partially because the battery life was dismal. The only "third core device" that's stuck is...tablets. And that's a computer! 
    • Also, calling a tablet a “core device” is, at best, a push. According to Canalys — a fairly reliable analyst firm that does the kind of supply-chain investigations I mentioned earlier — fewer than 40 million tablets were shipped worldwide in Q4 last year. That’s talking about shipments, not sales, and it also takes into account demand from educational and business customers, who likely represent a large proportion of global tablet demand. 

The Journal's story also has one of the most ludicrous things I've read in the newspaper: that "...Altman suggested the $6.5 billion acquisition has the potential to add $1 trillion in value to OpenAI," which would mean that OpenAI acquiring a washed former Apple designer who has designed basically nothing since 2019 to create a consumer AI device — a category that has categorically failed to catch on — would somehow nearly quadruple its valuation. Printing that statement is journalistic malpractice without a series of sentences about how silly it is.

But something about Jony Ive gives reporters, analysts and influencers a particular kind of madness. Reporters frame this acquisition as "the big bet that Jony Ive can make AI hardware work," that this is OpenAI "crashing Apple's party," that this is "a wake up call" for Apple, that this is OpenAI "breaking away from the pack" by making "a whole range of devices from the ground up for AI."

Based on this coverage, one might think that Jony Ive has been, I dunno, building something since he left Apple in 2019, which CNBC called "the end of the hardware era at Apple" about six months before Apple launched its M1 series processors and markedly improved its hardware as a result. Hell, much of Apple’s hardware improvement has been because it walked away from Ive’s dubious design choices. Ive’s obsession with thinness led to the creation of the butterfly keyboard — a keyboard design that was deeply unpleasant to type on, with very little travel (the distance a key moves when pressed), and a propensity to break at the first sight of a speck of dust.

Millions of angry customers — including famed film director Taika Waititi — and a class-action lawsuit later, Apple ditched it and returned to the original design. Similarly, since Ive’s exit, Apple has added HDMI ports, SD card readers, and MagSafe charging back to its laptops. Y’know, the things that people — especially creatives — wanted and liked, but had to be eliminated because they added negligible levels of heft to a laptop. 

What Has Jony Ive Been Up To?

On leaving Apple in 2019 — where he'd been part time since 2015 (though the Wall Street Journal says he returned as a day-to-day executive in 2017, just in time to promise and then never ship the AirPower charging pad) — Ive formed LoveFrom, a design studio whose first (and primary) client was Apple, under a contract valued at more than $100 million, according to the New York Times, which reported the collapse of the relationship in 2022:

In recent weeks, with the contract coming up for renewal, the parties agreed not to extend it. Some Apple executives had questioned how much the company was paying Mr. Ive and had grown frustrated after several of its designers left to join Mr. Ive’s firm. And Mr. Ive wanted the freedom to take on clients without needing Apple’s clearance, these people said.

In 2020, LoveFrom signed a non-specific multi-year relationship to “design the future of Airbnb.” LoveFrom also worked on some sort of seal for King Charles to give away during the coronation to — and I quote — “recognize private sector companies that are leading the way in creating sustainable markets.” It also designed an entirely new font for the multi-million dollar event, which does not matter to me in the slightest but led to some reporters writing entire stories about it. The project involves King Charles encouraging space companies. I don’t know, man.

I cannot find a single thing that Jony Ive has done since leaving Apple other than "signing deals." He hasn't designed or released a tech product of any kind. He was a consultant at Apple until 2022, though it's not exactly obvious what it is he did there since the death of Steve Jobs. People lovingly ascribe Apple's every success to Ive, forgetting that (as mentioned) Ive oversaw the truly abominable butterfly keyboard, as well as numerous other wonky designs, including the trashcan-shaped Mac Pro, the Power Mac G4 Cube (a machine aimed at professionals, with a price to match, but limited upgradability thanks to its weird design), and the notorious “hockey puck” mouse.

In fact, since leaving Apple, all I can confirm is that Jony Ive redesigned Airbnb in a non-specific way, made a new font, made a new system for putting on clothing, made a medal for the King of England to give companies that recycle, and made some non-specific contribution to creating an electric car that has yet to be shown to the public.

Are You Kidding Me?

Anyway, this is the guy who's going to be building a product that will ship 100 million units "faster than any company has ever shipped 100 million of something new before."

It took 3.6 years for Apple to sell 100 million iPhones, and nearly six years for it to sell 100 million Apple Watches. It took four years for Amazon to sell 100 million Echo devices. Former NFT scam Rabbit claims to have sold over 130,000 units of its "barely reviewable" "AI-powered" R1 device, but told FastCompany last year that the product had barely 5,000 daily active users. The Humane Pin was so bad that returns outpaced sales, with 10,000 devices shipped but many returned due to, well, it sucking. I cannot find another comparison point, because absolutely nobody has succeeded in making the next smartphone or "third device."

To give you another data point, Gartner — another reliable analyst firm, at least when it comes to historical sales trends, although its future-looking predictions about AI and the metaverse can be more ‘miss’ than ‘hit’ — says that the number of worldwide PC shipments (which includes desktops and laptops) hit 64.4 million in Q4 2024. OpenAI thinks that it’ll sell nearly twice as many devices in one year as PCs were sold during the 2024 holiday quarter. That’s insane. And that’s without mentioning things like… uh, I don’t know, who’ll actually build them? Where will you get your parts, Sam? Where will you get your chips? Most semiconductor manufacturers book orders months — if not years — in advance. And I doubt Qualcomm has a spare 100 million chipsets lying around that it’ll let you have for cheap. 

Yet people seem super ready to believe — much like they were with the Rabbit R1 — except they're asking even less of Jony Ive and Sam Altman, the Abbott and Costello of bullshit merchants. It's hard to tell exactly what it is that Ive did at Apple, but what we do know is that Ive designed the Apple Watch, a product that flopped until it refocused on fitness over fashion. According to the Wall Street Journal, Ive wanted the watch to be a "high-end fashion accessory" rather than the "extension of the iPhone" that Apple executives wanted, heavily suggesting that Ive was more the reason the Apple Watch flopped than the great mind that made Apple a success.

Anyway, this is the guy who's going to build the first true successor to the smartphone, something Jony Ive already failed to do with the full backing of the entire executive team at Apple, a company he worked at for decades, and one that has literally tens of billions of dollars in cash sitting in its bank accounts.

Jony Ive hasn't overseen the design or launch of a consumer electronics product in — at my most charitable guess — three years, though I'd be very surprised if his two-or-three-year-long consultancy deal with Apple involved him leading design on any product; otherwise, Apple would have extended it.

If I was feeling especially uncharitable — and I am — I’d guess that Ive’s relationship with Apple ended up looking like that between Alicia Keys and Research in Motion, which in 2013 appointed the singer its “Global Creative Director,” a nebulous job title that gives Prabhakar Raghavan’s “Chief Technologist” a run for its money. Ive acted as a thread of continuity between the Jobs and Cook eras of Apple, while also adding a degree of celebrity to the company that Apple’s other execs — like Phil Schiller and Craig Federighi — otherwise lacked. 

He's teamed up with Sam Altman, a guy who has categorically failed to build any new consumer-facing product outside of the launch of ChatGPT, a product that loses OpenAI billions of dollars a year, to do the only other thing that loses a bunch of money — building hardware.

No, really, hardware is hard. You don't just design something and then send it to a guy in China: you have to go through multiple prototypes, then find one that actually does something useful, then work out how to mass-produce it, then actually build the industrial rails to do so, then build the infrastructure to run it, then ship it. At that point, even if the device is really good (it won't be, if it ever launches), you have to sell one hundred million of them, somehow.

I repeat myself: hardware is hard, to the point where even Apple and Microsoft can cock up in disastrous (and expensive) ways. Pretty much every 2011 MacBook Pro — at least, those with their own discrete GPUs — is now e-waste, in part because the combination of shoddy cooling and lead-free solder led these machines to become expensive bricks. The same was true of the Xbox 360. Even if the design and manufacturing processes go swimmingly, there’s no guarantee that problems won’t creep up later down the line. 

I beg, plead, scream and yell to the tech media to take one fucking second to consider how ludicrous this is. Io raised $225 million in total funding (and OpenAI already owned 23% of the company from those rounds), a far cry from the billion dollars that The Information was claiming it wanted to raise in April 2024, heavily suggesting that whatever big, secret, sexy product was sitting there wasn't compelling enough to attract anyone other than Sutter Hill Ventures (which famously burned hundreds of millions of dollars investing in Lacework, a company that sold for $200 million and once gave away $30,000 of Lululemon gift cards in one night to anyone that would meet with the company’s sales representatives), Thrive (which has participated in or led multiple OpenAI funding rounds), Emerson Collective (run by Laurene Powell Jobs, a close friend of Jony Ive and Altman according to The Information) and, of course, OpenAI itself, which bought the company in its own stock after already owning 23% of its shares.

This deal reeks of desperation, and is, at best, a way for venture capitalists that feel bad about investing in Jony Ive's lack of productivity to get stock in OpenAI, a company that also doesn't build much product.

While OpenAI has succeeded in making multiple different models, what actual products have come out of GPT, Gemini or other Large Language Models? We're three joyless years into this crap, and there isn't a single consumer product of note other than ChatGPT, a product that gained its momentum through a hype campaign driven by press and markets that barely understood what they were hyping.

Despite all that media and investor attention — despite effectively the entirety of the tech industry focusing on this one specific thing — we're still yet to get any real consumer product. Somehow Sam Altman and Jony Ive are going to succeed where Google, Amazon, Meta, Apple, Samsung, LG, Huawei, Xiaomi, and every single other consumer electronics company has failed, and they're going to do so in less than a year, and said device is going to sell 100 million units.

OpenAI didn't acquire Jony Ive's company to build anything — it did so that it could increase the valuation of OpenAI in the hopes that it can raise larger rounds of funding. It’s the equivalent of adding an extension to a decrepit, rotting house. 

OpenAI, as a company, is lost. It has no moat, its models are hitting the point of diminishing returns and have been for some time, and as popular as ChatGPT may be, it isn't a business and constantly loses money.

On top of that, it requires more money than has ever been invested in a startup. SoftBank had to take out a $15 billion bridge loan from 21 different banks just to fund the first $7.5 billion of the $30 billion it’s promised OpenAI in its last funding round.

At this point, it isn't obvious how SoftBank affords the next part of that funding, and OpenAI using stock rather than cash to buy Jony Ive's company suggests that it doesn’t have much to spare. OpenAI is allegedly also buying AI coding company Windsurf for $3 billion. The deal was announced on May 6 2025 by Bloomberg, but it's not clear if it closed, or whether the deal would be in cash or stock, or really anything, and I have to ask: how much money does OpenAI really have?

And how much can it afford to burn? OpenAI’s operating costs are insane, and the company has already committed to several grand projects, while also pushing deeper and deeper into the red. And if — when? — its funding rounds convert into loans, because it failed to convert into a for-profit, OpenAI will have even less money to splash on nebulous vanity projects. Then again, asking questions like that isn't really how the media is doing business with OpenAI — or, for that matter, has done with the likes of Mark Zuckerberg, Satya Nadella, or Sundar Pichai. Everything has to be blindly accepted, written down and published, for fear of...what, exactly? Not getting the embargo to a product launch everybody else got? Missing out on the chance to blindly cover the next big thing, even if it won't be big, and might not even be a thing?

Into The Bullshitverse

So, I kicked off this newsletter with a bunch of links tied to the year 2026, and I did so because I want — no, need — you to understand how silly all of this is.

Sam Altman's OpenAI is going to, in the next year, according to reports:

  • Design, prototype, manufacture and ship the next big consumer tech device, shipping 100 million units — or, as mentioned, nearly twice the number of PCs shipped in Q4 2024 — faster than any other company in history.
  • Bring both the barely-started Stargate Texas and the still-theoretical Stargate UAE data centers online.
    • As a note, SoftBank, which will have full financial responsibility for the project, is having trouble raising the supposed $100 billion needed to build it.
    • This project is also dependent on Crusoe, a company that has never built an AI data center, and it is being, to quote CEO Chase Lochmiller, forced to "...deliver on the fastest schedule that a 100-megawatt-or-greater data center has ever been built."
    • In fact, the entire Texas project is contingent on debt. Bloomberg reports that both OpenAI and SoftBank will put in $19 billion each "to start" (with what money?), and Abu Dhabi-based investment firm MGX and Oracle are putting in $7 billion each. Oracle has also signed a 15-year-long lease with Crusoe, and Stargate has one customer — OpenAI.
  • Launch a non-specific AI-specific chip with Broadcom.
    • I cannot express how unlikely it is that this happens. Silicon is even harder than hardware!

Even one of these projects would be considered a stretch. A few weeks ago, Bloomberg Businessweek put out a story called "Inside the First Stargate AI Data Center." Just to be clear, this will be "fully operational" (or "constructed," depending on who you ask!) by the middle of 2026. The real title should've been "Outside the First Stargate AI Data Center," in part because Bloomberg didn't seem to be allowed into anything, and in part because it doesn't seem like there's an inside to visit.

Again, if I’m being uncharitable — which I am — this whole thing reminds me of that model town that North Korea built alongside the demilitarized zone to convince South Koreans about the beauty of the Juche system and the wisdom of the Dear Leader — except the beautiful, ornate houses are, in fact, empty shells. A modern-day Potemkin village. Bloomberg got to visit a Potemkin data center. 

Data centers do not just pop out of the ground like weeds. They require masses of permits, endless construction, physical server architecture, massive amounts of power, and even if you somehow get all of that together you still have to make everything inside it work. While analysts believe that NVIDIA has overcome the overheating issues with its Blackwell chips, Crusoe is brand fucking spanking new at this, and The Information described Stargate as "new terrain for Oracle...relying on scrappy but unproven startups...[and] more broadly, [Oracle] has less experience than its larger rivals in dealing with utilities to secure power and working with powerful and demanding customers whose plans change frequently."

In simpler terms, you have a company (Oracle) building something at a scale it’s never built at before, using a partner (Crusoe) which has never done this, for a company (OpenAI) that regularly underestimates the demands it puts on its servers. The project being built is also the largest of its kind, and is being built during the reign of an administration that births and kills a new tariff seemingly every day.

Anyway, all of this needs to happen while OpenAI also funds its consumer electronics product, as well as its main operations, which will lose it $14 billion in 2026, according to The Information.

It also needs to become a for-profit by the end of 2025 or lose $10 billion of SoftBank's funding, a plan that SoftBank accepted but Microsoft is yet to approve, in part (according to The Information) because OpenAI wants to both give it a smaller cut of profits and stop Microsoft from accessing its technology past 2030.

This is an insane negotiation strategy — leaking to the press that you want to short-change your biggest investor both literally and figuratively — and however it resolves will be a big tell as to how stupid the C-suite at Microsoft really is. Microsoft shouldn't budge a fucking inch. OpenAI is a loser of a company run by a career liar that cannot ship product, only further iterations of an increasingly-commoditized series of Large Language Models.


At this point, things are so ridiculous that I feel like I'm huffing paint fumes every time I read Techmeme.

If you're a member of the media reading this, I implore you to look more critically at what's going on, to learn about the industries in question, and to begin asking yourselves why you continually and blandly write up whatever it is they say. If you think you're "not a financial journalist" or "not a data center journalist" and thus "can't understand this stuff," you're wrong. It isn't that complex, otherwise a part-time blogger and podcaster wouldn't be able to pry it apart.

That being said, there's no excuse for how everybody covered this Jony Ive fiasco. Even if you think this device ships, it took very little time and energy to establish how little Jony Ive has done since leaving Apple, and only a little more time to work out exactly how ridiculous everything about it is. I know you need stories about stuff — I know you have to cover an announcement like this — but god, would it fucking hurt to write something even a little critical? Is it too much to ask that you sit down and find out what Jony Ive actually does and then think about what that might mean for the future?

This story is ridiculous. The facts, the figures, the people involved, everything is stupid, and every time you write a story without acknowledging how unstable and untenable it is, you further misinform your readers. Even if I’m wrong — even if they somehow pull off all of this stuff — you still left out a valuable part of the story, refused to critique the powerful, and ultimately decided that marketing material and ephemera were more valuable than honest analysis. 

There is no reason to fill in the gaps or “give the benefit of the doubt” to billionaires, and every single time you do, you fail your audience. If that hurts to read, perhaps ask yourself why. 

Holding these people accountable isn’t just about asking tough questions, but questioning their narratives and actions and plans, and being willing to write that something is ridiculous, fantastical, or outlandish. Doing so — even if you end up being proven wrong — is how you actually write history, rather than simply existing as a vessel for Sam Altman or Jony Ive or Dario Amodei or any number of the world’s Sloppenheimers. 

Look, I am nobody special. I am not supernaturally intelligent, nor am I connected to vast swaths of data or suppliers that allow me to write this. I am a guy with a search engine who remembers when people said stuff, and the only thing you lack is my ability to write 5000 or more words in the space of five hours. If you need help, I am here to help you. If you need encouragement, I am here to provide it. If you need critiques, well, scroll up. Either way, I want to see a better tech media, because that’s what the world deserves.

You can do better.

The Era Of The Business Idiot

2025-05-22 00:34:18

Fair warning: this is the longest thing I've written on this newsletter. I do apologize.

Soundtrack: EL-P - $4 Vic

Listen to my podcast Better Offline. We have merch.


Last week, Bloomberg profiled Microsoft CEO Satya Nadella, revealing that he's either a liar or a specific kind of idiot.

The article revealed that — assuming we believe him, and this wasn’t merely a thinly-veiled advert for Microsoft’s AI tech — Copilot consumes Nadella’s life outside the office as well as at work.

He likes podcasts, but instead of listening to them, he loads transcripts into the Copilot app on his iPhone so he can chat with the voice assistant about the content of an episode in the car on his commute to Redmond. At the office, he relies on Copilot to deliver summaries of messages he receives in Outlook and Teams and toggles among at least 10 custom agents from Copilot Studio. He views them as his AI chiefs of staff, delegating meeting prep, research and other tasks to the bots. “I’m an email typist,” Nadella jokes of his job, noting that Copilot is thankfully very good at triaging his messages. 

None of these tasks are things that require you to use AI. You can read your messages on Outlook and Teams without having them summarized — and I’d argue that a well-written email is one that doesn’t require a summary. Podcasts are not there "to be chatted about" with an AI. Preparing for meetings isn't something that requires AI, nor is research, unless, of course, you don't really give a shit about the actual content of what you're reading or the message of what you're saying, just that you are "saying the right thing."

To be clear, I am deeply unconvinced that Nadella actually runs his life in this way, but if he does, Microsoft’s board should fire him immediately.

In any case, the article is rambling, cloying, and ignores Microsoft AI CEO Mustafa Suleyman's documented history of abusing his workers. Ten custom agents that do what? What do you mean by "other tasks"? Why are these questions never asked? Is it because the reporters know they won't get an answer? Is it because the reporters are too polite to ask more probing questions, knowing that these anecdotes are likely entirely made up as a means to promote a flagging AI ecosystem that cost billions to construct but doesn’t really seem to do anything, and because the reporter in question doesn’t want to force Satya to build a bigger house of cards than he needs to?

Or is it because we, as a society, do not want to look too closely at the powerful? Is it because we've handed our economy to men that get paid $79 million a year to do a job they can't seem to describe, and even that, they would sooner offload to a bunch of unreliable AI models than actually do?

We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what's useful" is dictated not by outputs or metrics that one can measure but rather by the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don't participate in it and our tech companies are directed by people that don't experience the problems they claim to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

I, however, believe the problem runs a little deeper than the economy, which is a symptom of a bigger, virulent, and treatment-resistant plague that has infected the minds of those currently tugging at the levers of power — and really, the only levers that actually matter. 

The incentives behind effectively everything we do have been broken by decades of neoliberal thinking, where the idea of a company — an entity created to do a thing in exchange for money — has been drained of all meaning beyond the continued domination and extraction of everything around it, focusing heavily on short-term gains and growth at all costs. In doing so, the definition of a “good business” has changed from one that makes good products at a fair price to a sustainable and loyal market, to one that can display the most stock price growth from quarter to quarter. 

This is the Rot Economy, which is a useful description for how tech companies have voluntarily degraded their core products in order to placate shareholders, transforming useful — and sometimes beloved — services into hollow shells of their former selves as a means of expressing growth. But it’s worth noting that this transformation isn’t constrained to the tech industry, nor was it a phenomenon that occurred when the tech industry entered its current VC-fuelled, publicly-traded incarnation. 

In The Shareholder Supremacy, I drew a line from an early 20th-century court ruling, to former General Electric CEO Jack Welch, to the current tech industry, but there’s one figure I didn’t pay as much attention to, and I regrettably now have to do so.

Famed Chicago School economist (and dweller of Hell) Milton Friedman once argued in his 1970 doctrine that those who didn’t focus on shareholder value were “unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades,” and that any social responsibility — say, treating workers well, doing anything other than focus on shareholder value — is tantamount to an executive taxing his shareholders by "spending their money" on their own personal beliefs.

Friedman was a fundamentalist when it came to unrestricted, unfettered capitalism, and this zealotry surpassed any sense of basic human morality — if he had any — at times. For example, in his book Capitalism and Freedom, he argued that companies should be allowed to discriminate on racial grounds because the owner might suffer should they be required to hire an equally or better-qualified Black person. 

Bear in mind, this was written at the height of the civil rights movement, just six years before the assassination of Martin Luther King, and when America was rapidly waking up to the evils of racism and segregation (a process, I might add, that is ongoing and sadly not complete). This is a direct quote: 

“...consider a situation in which there are grocery stores serving a neighborhood inhabited by people who have a strong aversion to being waited on by Negro clerks. Suppose one of the grocery stores has a vacancy for a clerk and the first applicant qualified in other respects happens to be a Negro. Let us suppose that as a result of the law the store is required to hire him. The effect of this action will be to reduce the business done by this store and to impose losses on the owner. If the preference of the community is strong enough, it may even cause the store to close. When the owner of the store hires white clerks in preference to Negroes in the absence of the law, he may not be expressing any preference or prejudice, or taste of his own. He may simply be transmitting the tastes of the community. He is, as it were, producing the services for the consumers that the consumers are willing to pay for. Nonetheless, he is harmed, and indeed may be the only one harmed appreciably, by a law which prohibits him from engaging in this activity, that is, prohibits him from pandering to the tastes of the community for having a white rather than a Negro clerk. The consumers, whose preferences the law is intended to curb, will be affected substantially only to the extent that the number of stores is limited and hence they must pay higher prices because one store has gone out of business.”

Friedman was grotesque. I am not religious, but I hope Hell exists if only for him. 

The broader point I’m trying to make is that neoliberalism is inherently selfish, believing that the free market should reign supreme, bereft of government intervention, regulation or interference, thinking that somehow these terms will enable "freedom" rather than a kind of market-dominated quasi-dictatorship where our entire lives are dominated by the whims of the affluent, and that there is no institution that can possibly push back against them. 

Friedman himself makes the facile argument that economic freedom — which, he says, is synonymous with unfettered capitalism — is a necessary condition of unfettered political freedom. Obviously, that’s bollocks, although it’s an argument that’s proven persuasive with a certain class of people that are either intellectually or morally hollow (or both).

Neoliberalism also represents a kind of modern-day feudalism, dividing society based on whether someone is a shareholder or not, with the former taking precedence and the latter seen as irrelevant at best, or disposable at worst. It’s curious that Friedman saw economic freedom — a state that is non-interventionist in economic matters — as essential for political freedom, while also failing to see equality as the same. 

I realize this is all very big and clunky, but I want you to understand how these incentives have fundamentally changed everything, and why they are responsible for the rot we see in our society and our workplaces. When your only incentive is shareholder value, and you raise shareholder value as a platonic ideal, everything else is secondary, including the customer you are selling something to. Friedman himself makes a moral case for discrimination, because shareholder value — in his example, the store owner — matters more than racial equality at its most basic level. 

When you care only about shareholder value, the only job you have is to promote further exploitation and dominance — not to have happy customers, not to make your company "a good place to work," not to make a good product, not to make a difference or contribute to anything other than further growth.

While this is, to anyone with a vapor of an intellectual or moral dimension, absolutely fucking stupid, it’s an idea that’s proven depressingly endemic among the managerial elite, in part because it has entered the culture, and because it is hammered across in MBA classes and corporate training seminars. 

In simpler terms, modern business theory trains executives not to be good at something, or to make a company based on their particular skills, but to "find a market opportunity" and exploit it. The Chief Executive — who makes over 300 times more than their average worker — is no longer a leadership position, but a kind of figurehead measured on their ability to continually grow the market capitalization of their company. It is a position inherently defined by its lack of labor, the amorphousness of its purpose and its lack of any clear responsibility. 

While CEOs do get fired when things go badly, it's often after a prolonged period of decline and stagnancy, and almost always comes with some sort of payoff — and when I say "badly," I mean that growth has slowed to the point that even firing masses of people doesn't make things better. 

Sidebar: I also note that “fired” means something different when it comes to top execs. Excluding those fired due to criminal levels of malfeasance — like Robert Moffat, the man once tipped to be the next CEO of IBM, had he not been convicted of securities fraud and jailed for six months, thus losing nearly $85m in benefits — most ousted corporate leaders enjoy generous severance packages, far beyond the usual “two weeks of pay and COBRA.” WeWork founder Adam Neumann’s $200m in cash and $225m in (now-worthless) stock is perhaps the most egregious example of this.

We have, as a society, reframed all business leadership — which is increasingly broad, consisting of all management from the C-suite down — to be the equivalent of a mall cop: a person that exists to make sure people are working, without having any real accountability for the work themselves or even understanding the work itself.

When the leader of a company doesn't participate in or respect the production of the goods that enrich them, it creates a culture that enables similarly vacuous leaders on all levels. Management as a concept no longer means doing "work," but establishing cultures of dominance and value extraction. A CEO isn't measured on happy customers or even how good their revenue is today, but how good revenue might be tomorrow and whether those customers are paying them more. A "manager," much like a CEO, is no longer a position with any real responsibility — they're there to make sure you're working, to know enough about your work that they can sort of tell you what to do, but somehow the job of "telling you what to do" doesn't involve any actual work, and the instructions don't need to be useful or even meaningful.

Decades of direct erosion of the very concept of leadership means that the people running companies have been selected not based on their actual efficacy — especially as the position became defined by its lack of actual production — but on whether they resemble what a manager or executive is meant to look like based on the work that somebody else did.

That’s how someone like David Zaslav, a lawyer by trade and arguably the worst CEO in the entertainment industry, managed to become the head of Warner Brothers (that, and kissing up to Jack Welch, who he called a “big brother” that “picked him up like a friend”). It’s how Carly Fiorina — an MBA by trade — went on to become the head of HP, only to drive the company into a ditch where it stopped innovating, and largely missed the biggest opportunities of the early Internet era. The three CEOs that followed her similarly came from non-tech backgrounds, and similarly did a shitty job, in part because they didn’t understand the company, the products, or the customers: Mark Hurd (who was ousted after fudging expense reports to send money to a love interest, and who still got tens of millions of dollars in severance), Leo Apotheker (who the New York Times suggests may have been worse than Fiorina), and Meg Whitman (famous for being a terrible CEO at HP and co-founding doomed video startup Quibi).

Management has, over the course of the past few decades, eroded the very fabric of corporate America, and I'd argue it’s done the same in multiple other western economies, too.

I’d also argue that this kind of dumb management thinking also infected the highest echelons of politics across the world, and especially in the UK, my country of birth and where I lived until 2011, delivering the same kind of disastrous effects but at a macro level, as they impacted not a single corporate entity but the very institutions of the state. I’m not naive. I don’t think that the average politician is a salt-of-the-earth type, someone who did a normal job and then decided to enter politics. Especially not in the UK, where the trappings of class permeate everything, and we’re yet to shake off the noxious influence of the aristocracy and constitutionally-mandated hereditary privilege. Our political elite often comes from one of two universities (Oxford and Cambridge, the alma mater of 20% of current UK Members of Parliament) and a handful of fee-paying schools (like Eton, which is a hellmouth for the worst people to ever exist, and educated 20 of the UK’s 55 prime ministers). 

The UK has never been an egalitarian society. And yet, things have changed markedly in the past few decades. The difference between now and then is that the silver-spooned elite was, whether because they believed it or because it was politically expedient, not totally contemptuous of those at the bottom of the economic ladder.

I was born in the midst of the Thatcher government, and my formative years were spent as British society tried to restructure itself after her reforms. Thatcher, famously, was an acolyte of the Friedman school of thought, and spent her nearly twelve years in office dismantling the state and pushing the culture towards an American-style individualism, once quipping that there was “no such thing as society.”

She didn’t understand how things worked, but was nonetheless completely convinced of the power of the market to handle what were the functions of the state — from housing to energy and water. The end result of this political and cultural shift was, in the long run, disastrous.

The UK has the smallest houses in the OECD, the smallest housing stock of any developed country, and some of the worst affordability. The privatization of the UK’s water infrastructure meant that money that would previously go towards infrastructure upgrades was, instead, funnelled to shareholders in the form of dividends. As a result, Britain is literally unable to process human waste and is actively dumping millions of liters of human sewage into its waterways and coastline. When Britain privatized its energy companies, the new management sold or closed the vast majority of its gas storage infrastructure. As a result, when the Ukraine War began and natural gas prices surged, Britain had some of the smallest reserves of any country in Europe, and was forced to buy gas at market prices — which were several times higher than their pre-war levels.

I’m no fan of Thatcher, and like Friedman, I too hope Hell exists, if only for the both of them. I wrote the above to emphasize the consequences of this clueless managerial thinking at a macro level — where the impacts aren’t just declining tech products or white-collar layoffs, but the emergence of generational crises in housing, energy, and the environment. These crises were obvious consequences of decisions made by someone whose belief in the free market was almost absolute, and whose fundamentalist convictions overrode the actual informed understanding of those working in energy, or housing, or water.

As the legendary advertiser Stanley Pollitt once said, “bullshit baffles brains.” The sweeping changes we’ve seen, both in our economy and in our society, have led to an unprecedented, gilded age of bullshit where nothing matters, and things — things of actual substance — matter nothing.

We live in a symbolic economy where we apply for jobs, writing CVs and cover letters to resemble a certain kind of hire, with our resume read by someone who doesn't do or understand our job, yet is responsible for determining whether we’re worthy of going to the next step of the hiring process. All this so that we might get an interview with a manager or executive who will decide whether they think we can do it. We are managed by people whose job is implicitly not to do work, but to oversee it. We are, as children (and young adults), encouraged to aspire to become a manager or executive, to "own our own business," to "have people that work for us," and the terms of our society are, by default, that management is not a role you work at, so much as a position you hold — a figurehead that passes the buck and makes far more of them than you do.

This problem, I believe, has poisoned the fabric of almost every part of modern business, elevating people that don't do work to oversee companies that make things they don't understand, creating strata of management that do nothing but create further distance from actually doing a job.

While some of you might automatically think I'm talking about Graeber's concept of Bullshit Jobs, this is far, far bigger. The system as it stands selects people at all levels of management specifically because they resemble the kind of specious, work-averse dullard that runs seemingly every company — a person built to go from meeting to meeting with the vague consternation that suggests they're "busy."

As a result, the higher you get up in an organization, the further you get from the customer, the problem you're solving, and any of the actual work; and the higher up you get, the more power you have to change the conditions of the business. On some level, modern corporate power structures are a giant game of telephone where vibes beget further vibes, where managers only kind-of-sort-of understand what's going on, and the more vague one's understanding is, the more likely you are to lean toward what's good, or easy, or makes you feel warm and fuzzy inside.

The system selects for people comfortable in these roles, creating org charts full of people that become harder and harder to justify other than "they've been here a while." They do not do "work" on the "product," and their answer as to why would be "what, am I meant to go down on the line and use a machine?" or "am I meant to call a customer and make a sale?" and the answer is yes, you lazy fucking piece of shit, you should do that once in a while, or at the very least go and watch or listen to somebody else do so, and do so regularly.

But that's not what a manager does, right? Management isn't work, it's about thinking really hard and telling people what to do. It's about making the calls. It's about "managing people," and that can mean just about anything, but often means "who do I take credit from or pass blame to," because modern management has been stripped of all meaning other than continually reinforcing power structures for the next manager up.

This system creates products for these people, because these people are more often than not the ones in power — they are your boss, your boss' boss, and their boss too. Big companies build products sold by specious executives or managers to other specious executives, and thus the products themselves stop resembling things that solve problems so much as they resemble a solution. After all, the person buying it — at least at the scale of a public company — isn’t necessarily the recipient of the final product, so they too are trained (and selected) to make calls based on vibes.

I believe the scale of this problem is society-wide, and it is, at its core, a destruction of what it means to be a leader, and a valorization of selfishness and isolationist thinking that turns labor into a faceless resource, which naturally leads to seeing customers in an equally faceless way, their problems generalized, their pain points viewed as slides in a PowerPoint rather than anything that your company earnestly tries to solve or even really thinks about. And that assumes that said pain points are even considered to begin with, or not ignored in favor of a fictitious and purely hypothetical pain point.

People — be they the ones you're paying or paying you — become numbers. We have created and elevated an entirely new class of person, the nebulous "manager," and told decades-worth of children that's what they should aspire to, that the next step from doing a job is for us to tell other people to do a job, until we're able to one day tell those people how to do their job, with each rung on the corporate ladder further distancing ourselves from anything that interacts with reality.

The real breaking point is fairly simple: the higher up you go at a company, the further you are from problems or purpose. Everything is abstract — the people that work for you, the people you work for, and even the tasks you do. 

We train people — from a young age! — to generalize and distance themselves from actual tasks, to aspire to managerial work, because managers are well-paid and "know what's going on," even if they haven't actually known what was going on for years, if they ever did. This phenomenon has led to the stigmatization of blue-collar work (and the subsequent evisceration of practical trade and technical education across most of the developed world) in favor of universities. Society respects an MBA more than a plumber, even though the latter benefits society more — though I concede that both roles involve, on some level, shit, with the plumber unblocking it and the MBA spewing it.

Sidebar: Hey, have you noticed how most of the calls for people to return to the office come not from people who actually do the jobs, but occupy managerial roles? More on that later. 

I believe this process has created a symbolic society — one where people are elevated not by any actual ability to do something or knowledge they may have, but by their ability to make the right noises and look the right way to get ahead. The power structures of modern society are run by Business Idiots — people that have learned enough to impress the people above them, because the Business Idiots have had power for decades. They have bred out true meritocracy, achievement and value-creation in favor of symbolic growth and superficial intelligence, because real work is hard, and there are so many of them in power that they've all found a way to work together.

I need you to understand how widespread this problem is, because it is why everything feels fucking wrong.


Think of the Business Idiot as a kind of con artist, except the con has become the standard way of doing business for an alarmingly large part of society. 

The Business Idiot is the manager that doesn't seem to do anything but keeps being promoted, and the chief executive officer of a public company that says boring, specious nonsense about AI. They're the tenured professor that you wish would die, the administrator whose only job appears to be opening and closing their laptop, the consultant that can come up with a million reasons to charge you more money yet not one metric to judge their success by, the marketing executive that's worked exactly three years at every major cloud player but does not appear to have done anything, and the investor that invests "based on founders," but really means "guys that look and sound exactly like them."

These people are present throughout the private and public sector, and our governments too, and they paradoxically do nothing of substance, but somehow damage everything they touch. This isn’t to say our public and private sectors are entirely useless — just that these people have poisoned so many parts of our power structures that avoiding them is impossible.

Our economy is oriented around them — made easier and more illogical for their benefit — because their literal only goal in life has been to take and use power. The Business Idiot is also an authoritarian, and will do whatever they need to — including harming the institution they work for, or those closest to them, like their co-workers or their community — as a means of avoiding true accountability or responsibility.

Decades of neoliberalism have incentivized their rise, because when you incentivize society to become management — to "manage or run a company" rather than do something for a reason or purpose — you are incentivizing a kind of corporate narcissism, one that bleeds into whatever field the person goes into, be it public or private. We go to college as a means of getting a job after college using the grades we got in college, rendering many students desperate to get the best grades they can versus "learn" anything, because our economy is riddled with power structures controlled by people that don't know stuff and find it offensive when you remind them.

Our society is in the thrall of dumb management, and functions as such. Every government, and the top quarter of every org chart, features little Neros who, instead of battling the fire engulfing Rome, are sat in their palaces strumming an off-key version of “Wonderwall” on the lyre and grumbling about how the firefighters need to work harder, and maybe we could replace them with an LLM and a smart sprinkler system.

Every institution keeps its core constituents and labor forces at arm's length, and effectively anything built at scale quickly becomes distanced from both the customer and the laborer. This disconnection — or alienation — sits at the center of almost every problem I've ever talked about. Why would companies push generative AI in seemingly every part of their service, even though customers don't like it and it doesn't really work?

It's simple: they neither know nor care what the customer wants, barely know how their businesses function, barely know what their products do, and barely understand what their workers are doing, meaning that generative AI feels magical, because it does an impression of somebody doing a job, which is an accurate way of describing how most executives and middle managers operate.


Let me get a little more specific.

An IBM study based on conversations with 2,000 global CEOs recently found that only 25% of AI initiatives have delivered their expected ROI over the last few years, and, worse still, "64% of CEOs surveyed acknowledge that the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring to the organization." 50% of respondents also found that "the pace of recent investments has left their organization with disconnected, piecemeal technology," almost as if they don't know what they're doing and are just putting AI in stuff for no reason.

Johnson & Johnson recently decided to "shift from broad generative AI experimentation to a focused approach on high-value use cases" according to the Wall Street Journal, adding that "only 10 to 15% of use cases were driving about 80% of the value." Its last two CEOs (Alex Gorsky and current CEO Joaquin Duato) both have MBAs, with Duato spending his previous ten years at Johnson & Johnson as "some sort of Chairman or Vice President," while Gorsky and his predecessor William Weldon were both pharmaceutical sales and marketing people.

Fun fact about Alex Gorsky! During his first tenure at Johnson & Johnson, he led marketing that deliberately underplayed some drugs' side effects, and paid off the largest nursing home pharmacy in America to sell more drugs to old people.

The term "executive" loosely refers to a person who moves around numbers and hopes for the best. The modern executive does not "lead" so much as prod, their managers acting as hall monitors for organizations run predominantly by people that, by design, are entirely removed from the business itself, even in roles like marketing and sales, where CMOs and VPs bark orders without really participating in the process.

We talk eagerly about how young people in entry level jobs should "earn their stripes" by doing "grunt work," and that too is the neoliberal poison in the veins of our society, because, by definition, your very first experience of the workforce is working hard enough so that you don't have to work as hard.

And anyway, the managerial types who bitch about the entitlement and unrealistic expectations of young people are the same ones that eviscerated the bottom rung of the career ladder — typically by offshoring many of these roles, or consolidating them into the responsibilities of their increasingly burned-out senior workers — and that see AI as a way to eliminate what they consider an optional cost center, rather than the future of their workforce.

Society berated people for "quiet quitting," a ghastly euphemism for “doing the job as specified in your employment contract,” in 2022 because journalism is enthralled by the management class, and because the management class has so thoroughly rewritten the concept of what "labor" means that people got called lazy for literally doing their jobs. The middle manager brain doesn't see a worker as somebody hired and paid for a job, but as an asset that must provide a return. As a result, if another asset comes along that could potentially provide a bigger return — like an offshore worker, or an AI agent — that middle manager won’t hesitate to drop them. 

Artificial intelligence is the ultimate panacea for the Business Idiot — a tool that gives an impression of productivity, which is still more production than the Business Idiot themselves ever manages. The Information reported recently that ServiceNow CEO Bill McDermott — the chief executive of a company with a market capitalization of over $200 billion, despite the fact that, like Salesforce, nobody really knows what it does — chose to push AI across his whole organization (both in product and in practice) based on the mental consideration I'd usually associate with a raven finding a shiny object:

When ChatGPT debuted in November 2022, McDermott joined his executives around a boardroom table and they played with the chatbot together. From there, he made a quick decision. “Bill’s like, ‘Let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI,’” recalled Tzitzon, the ServiceNow vice chair.

To begin a customer meeting on AI, McDermott has asked his salespeople to do what amounts to their best impression of him: Present AI not as a matter of bots or databases but in grand-sounding terms, like “business transformation.”

During the push to grow AI, McDermott has insisted his managers improve efficiency across their teams. He is laser-focused on a sales team’s participation rate. “Let’s assume you’re a manager, and you have 12 direct reports,” he said. “Now let’s assume out of those 12, two people did good, which was so good that the manager was 110% of plan. I don’t think that’s good. I tell the manager: ‘What did the other 10 do?’”

You'll notice that all of this is complete nonsense. What do you mean "efficiency"? What does that quote even mean? 110% of plan? What're you on about? Did you hit your head on something, Bill?

I'd wager Bill is concussion-free — and an example of a true Business Idiot — a person with incredible power and wealth that makes decisions not based on knowing stuff or caring about his customers, but on the latest shiny thing that makes him think "line go up." No, really, that's Bill McDermott's thing. Back in 2022, he told Yahoo Finance that the metaverse was "real" and that ServiceNow could help someone "create an e-mall in the metaverse" and have a futuristic store of some sort. One might wonder how ServiceNow provided that, and the answer is it didn't. I cannot find a single product it's offered that includes it.

Bill, like any of these CEOs, doesn't really know stuff, or even do stuff. He just is: the corporate equivalent of a stain on a carpet whose origin nobody knows, but that nobody has removed. The modern executive is symbolic, and the media has — due to the large amount of Business Idiots running these outlets and the middle managers stuffed into the editorial class — been trained to never ask difficult questions, such as "what the fuck are you talking about, Bill?" or even the humble "what does that mean?" or "how would you do that?" or saying "I'm not sure I understand, would you mind explaining?"

Perhaps the last part is the symptom of the overall problem. So many layers of editorial and managerial power are filled with people that don't know anything, and there's never anyone crueler about ignorance than somebody that's ignorant themselves.

Worse still, in many fields — journalism included — we are rarely rewarded for knowing things or being "right," but for being right in the way that keeps the people with the keys from scraping them across our cars. We are, however, rewarded for saying the right thing at the right time, which more often than not means resembling our (white, male) superiors, speaking like our peers, and delivering results in the way that makes everybody feel happiest.


A great example of our vibes-based society came back in October 2021, when a Washington Post article written by two Harvard professors railed against remote work by citing a Microsoft-funded anti-remote study and quoting economist Alfred Marshall, writing 130 years earlier, about how "workers gather in dense clusters," ignoring the fact that Marshall was so racist they've had to write papers about it, how excited he was about eugenics, or the fact he was writing about fucking factories.

Remote work terrifies the Business Idiot, because it removes the performative layer that allowed them to stomp around and feel important, reducing their work to, well...work. Office culture is inherently heteronormative and white, and black women are less likely to be promoted by their managers, and continuing the existence of "The Office" is all about making sure the Business Idiot reigns supreme. Looking at you and trying to work out what you're doing without ever really helping is a big part of being a manager, and remote work removes that ability — and if you're a manager reading this and saying you don't do this, I challenge you to talk to another person that doesn't confirm your biases.

The Business Idiot reigns supreme. Their existence holds up almost every public company, and remote work was the first time they willingly raised their heads. Google demanded employees return to the office in 2021 — but let one executive work remotely from New Zealand, because absolutely none of the decision-making was done with people that actually do work. While we can (well, you can, I'm not interested) debate whether exclusively working remotely is as productive, the Return To Office push was almost entirely driven in two ways:

  1. Executives demanding people return to the office.
  2. Journalists asking executives if remote work was good or not, entirely ignoring the people actually doing the work.

The New York Times, The Washington Post, The Wall Street Journal, and many, many other outlets all fell for this crap because the Business Idiots have captured our media too, training even talented journalists to defer to power at every turn. When every power structure is stuffed full of do-nothing management types that have learned exactly as little as they need to get by, it's inevitable that journalism caters to them — specious, thoughtless reproductions of the powerful's ideas.

Look at the coverage of AI, or the metaverse, or cryptocurrency, or Clubhouse. Look at how willingly reporters will accept narratives not based on practical experience or what the technology can do, but what the powerful (and the popular) are suddenly interested in. Every single tech bubble followed the same path, and that path was paved with flawed, deferential and specious journalism, from small blogs to the biggest mastheads.

Look at how reporters talk to executives — not just the way they ask things (like Nilay Patel's 100+ word questions to Sundar Pichai in his abominable interview), but the things they accept them saying, and the willingness reporters have to just accept what they're told. Satya Nadella is the CEO of a company with a market capitalization of over $3 trillion. I have no idea how you, as a reporter, do not say "Satya, what the fuck? You're outsourcing most of your life to generative AI? That's insane!" or even "do you really do that?" and then asking further questions.

But that would get you in trouble. The editorial class is the managerial class now, and has spent decades mentoring young reporters to not ask questions, to not push back, to believe that a big, strong, powerful company CEO would never mislead them. Kara Swisher's half-assed interviews are considered "daring" and "critical" because journalism has, at large, lost its teeth, breeding reporters rewarded for knowing a little bit about a few things and punishing those who ask too many questions or refuse to fall in line.

The reason they don't want you to ask these questions is that the Business Idiot isn't big on answers. Editors that tell you not to push too hard are doing so because they know the executive likely won't have an answer. It isn't just about the PR person that trained them, but the fact that these men more often than not have only a glancing understanding of their underlying business.

Yet in the same way that Business Idiots penetrated every other part of society, they eventually found their way to journalism. While we can (and should) scream at the disconnected idiots that ran Vice into the ground, the problem is everywhere, because the Business Idiots aren't just at the top, but infecting the power structures underlying every newsroom.

While there are many really great editors, there are plenty more that barely understand the articles they edit, the stories they commission, or that make reporters pull punches for fear of advertiser blowback.

That, and mentorship is dead across effectively all parts of society, meaning that most reporters (as with many jobs) learn by watching each other, which means they all make sure to not ask the rough questions, and not push too hard against the party/market/company messaging until everybody else does it.

And under these conditions, Business Idiots thrive.


The Business Idiot's reign is one of speciousness and shortcuts, of acquisition, of dominance and of theft. Mentoring people is something you do to pass on knowledge — it may make them grateful to you, but it ultimately, in the mind of a Business Idiot, creates a competitor or rival. 

Investing in talent, or worker conditions, or even really work itself would require you to know what you're talking about, or actually do work, which doesn't make sense to the Business Idiot. The workers are the ones who're meant to work! You're there to manage them! Yet they keep talking back — asking questions about the work you want them to do, asking you to step in and help on something — and all of that's so annoying. Just know the stuff already! Get it done! I have to go to lunch and then go back out to another lunch!

I believe this is the predominant mindset across most of the powerful, to the point that everything in the world is constructed to reaffirm their beliefs rather than follow any logical path. Our stock market is inherently illogical, driven not by whether a company is good or bad, but whether it can show growth, even if said growth is horrifically unprofitable, and I'd argue it's because the market has no idea how to make intelligent decisions, just complex ones, meaning you don't really need to understand the business so much as the associated vibes of the industry.

Friedman's influence and Reagan's policies have allowed our markets to be dominated by Business Idiocy, where a bad company can be a good stock because everybody (i.e., other traders and the business press) likes how it looks, which allows the Business Idiots to continue making profit through illogical and partially-rigged market-making, with the business press helpfully pushing up their narratives.

This also keeps regular people from accumulating too much wealth — if regular people could set the tone for the markets, rewarding "a company that makes something people like, gets paid for it, and makes more money than it spends," that might make things a little too even.

It doesn't matter that CoreWeave quite literally does not have enough money for its capital expenditures and lost over $300m in the last quarter, because its year-over-year growth was 420%. It doesn't matter that it has October loan payments that will crush the life out of the company, either. These narratives are fed to the media knowing that the media will print them, because thinking too hard about a stock would mean the Business Idiot had to think also, and that is not why they are in this business.

The "AI trade" is the Business Idiot's nirvana — a fascination for a managerial class that has long since given up any kind of meaningful contribution to the bottom line, as moving away from the fundamental creation of value naturally leads to the same kind of specious value one finds in generative AI.

I'm not even saying that there are no returns, or that LLMs don't do anything, or even that there's no possible commercial use for generative AI. They just don't do enough, almost by design, and we're watching companies desperately try and contort them into something, anything that works, pretending so fucking hard they'll stake their entire futures on the idea. Just fucking work, will you? Agentforce doesn't make any money, it sucks, but god damn is Marc Benioff going to make you bear witness.

Does it matter that Agentforce doesn't make Salesforce any money? No! Because Benioff and Salesforce have got rich selling to fellow Business Idiots who then shove Salesforce into their organization without thinking about who would use it or how they'd use it other than in the most general ways. Agentforce was — and is — a fundamentally boring and insane product, charging $2 a conversation for a chatbot that, to quote The Information, provides customers with "...incorrect answers — AI hallucinations — while testing how the software handles customer service queries."

But this shit is catnip to the Business Idiot, because the Business Idiot ideally never has to deal with work, workers or customers. Generative AI doesn't do enough to actually help us be better at our jobs, but it gives a good enough impression of something useful to convince someone really stupid, someone who doesn't understand what you do, that they don't need you, sometimes.

A generative output is a kind of generic, soulless version of production, one that resembles exactly how a know-nothing executive or manager would summarise your work. OpenAI's "Deep Research" wows professional Business Idiot Ezra Klein because he doesn't seem to realize that part of research is the research itself, not just the output, as you learn about stuff as you research a topic, allowing you to come to a conclusion. The concept of an "agent" is the erotic dream of the managerial sect — a worker that they can personally command to generate product that they can say is their own, all without ever having to know or do anything other than the bare minimum of keeping up appearances, which is the entirety of the Business Idiot's resume.

And because the Business Idiot's career has been built on only knowing exactly enough to get by, they don't dig into Large Language Models any further than hammering away at ChatGPT and saying "we must put AI in everything now." Yet the real problem is that for every Business Idiot selling a product, there are many more that will buy it, which has worked in the past for Software as a Service (SaaS) companies that grew fat and happy hawking giant annual contracts and continual upsells, because CIOs and CTOs work for Business Idiot CEOs that demand that they "put AI in everything now," a nonsensical and desperate remit that's part growth-lust and part ignorance, borne of the fear that one gets when they're out of their depth.

Look at every single institution installing some sort of ChatGPT integration, and then look for the Business Idiot. Perhaps it's California State University Chancellor Mildred Garcia, who claimed that giving everybody a ChatGPT subscription would "elevate...students' educational experience across all fields of study, empower [its] faculty's teaching and research, and help provide the highly educated workforce that will drive California's future AI-driven economy," a nonsensical series of words to justify a $16.9 million-a-year single-vendor no-bid contract to a product that is best known as either a shitty search engine or a way to cheat at college.

In some ways, Sam Altman is the Business Idiot's antichrist, taking advantage of a society where the powerful rarely know much other than what they want to control or dominate. ChatGPT and other AI tools are, for the most part, sold based on what they might do in the future to people that will never really use them, and Altman has done well to manipulate, pester and terrify those in power with the idea that they might miss out on something. Does anyone know what it is? No, they don't, because the powerful are Business Idiots too, willing to accept anything that somebody brings along that makes them feel good, or bad in a way that they can make headlines with.

Hey, whatever happened to Gavin Newsom's Blockchain executive order? Did that do anything?

In any case, Altman's whole Sloppenheimer motif has worked wonders on the Business Idiots in the markets and global governments that fear what artificial intelligence could do, even if they can't really define artificial intelligence, or what it could do, or what they're scared of. The fear of China's "rise in AI" is one partially based on sinophobia, and partially based on the fact that China has their own Business Idiots willing to shove hundreds of millions of dollars into data centers.

Generative AI has created a reckoning between the Business Idiot and the rest of society, its forced adoption and proliferation providing a meager return on a massive investment of capital, and causing revulsion in many people, not just at the Business Idiot's excitement about replacing them, but at how wrong the Business Idiot is.

While there are many people that dick around with ChatGPT, years after it launched we still can't find a clean way to say what it does or why it matters, other than the fact that everybody agreed it did. The media, now piloted by Business Idiots, has found itself declawed, its reporters unprepared, unwilling and unsupported, the backbone torn out of most newsrooms for fear that being too critical is somehow "not being objective," despite the fact that what you choose to cover objectively is still subjective.

Reporters still, to this day, as these companies burn billions of dollars to make an industry the size of the free-to-play gaming industry, refuse to say things that bluntly because "the cost of inference is coming down" and "these companies have some of the smartest people in the world." They ignore the truth as it sits in front of them — that the combined annual recurring revenue of The Information's comprehensive database of every generative AI company is less than $10 billion, or $4 billion if you remove Anthropic and OpenAI.

ChatGPT's popularity is the ultimate Business Idiot success story — the "fastest growing product in Silicon Valley history" that didn't grow because it was useful, or good, or able to do anything in particular, but because a media controlled by Business Idiots decided it was "the next big thing" and started talking about it nonstop since November 2022, guaranteeing that everybody would try it, even if, to this day, the company can't really explain what it is you're meant to use it for.

Much like the Business Idiot themselves, ChatGPT doesn't need to do anything specific. It just needs to make the right sounds at the right times to impress people that barely care what it does other than make them feel futuristic.

Real people — regular people, not Business Idiots, not middle managers, not executive coaches, not MBAs, not CEOs — have seen this for what it was early and often, but real people are seldom the ones with the keys, and the media — even the people writing good stuff — regularly fails to directly and clearly say what's going on. 

The media is scared of doing the wrong thing — of "getting in trouble" with someone for "misquoting them" or "misreading what they said" — and in a society where in-depth knowledge is subordinate to knowing enough catchphrases, the fight often doesn't feel worth it even with an editor's blessing.

I also want to be clear that this goes far beyond money. Editors aren't just scared of advertisers being upset. They know that if narratives have to shift toward more critical, thoughtful coverage, they too will have to be more thoughtful and knowledgeable, which is rough when you are a Business Idiot and got there by editing the right people in a way that didn't help them in the slightest.


Nothing about what I'm saying should suggest the Business Idiot is weak. In fact, Business Idiots are fully in control — we have too many managers, and our most powerful positions are filled by people valorized for not knowing stuff, for having a general view that "sees the big picture," not realizing that a big picture is usually made up of lots of little brush strokes.

Yet there are, eventually, consequences for everything being controlled by Business Idiots.

Our current society — an unfair, unjust one dominated by half-broken tech products that make their owners billions — is the real punishment wrought by growth: a brain drain in corporate society that leads it to do illogical things and somehow make money. It doesn't make any fucking sense that generative AI got this big. The returns aren't there, the outcomes aren't there, and any sensible society would've put a gun to a ChatGPT and aggressively pulled the trigger.

Instead it's the symbolic future of capitalism — one that celebrates mediocrity and costs billions of dollars, every piece of human work it can consume, and the destruction of our planet, all because everybody has kind of agreed that this is what they're all doing, with nobody able to give a convincing explanation of what that even is. Generative AI is revolting both in how overstated its abilities are and in how it continually tests how low a standard someone will accept from a product, both in its outputs and in the desperate companies trying to integrate it into everything, and its proliferation throughout society and organizations is already fundamentally harmful.

We’re not just drowning in a sea of slop — we’re in a constant state of corporate AI beta tests, new “features” sprouting out of our products like new limbs that sometimes function normally but often attempt to strangle us. It’s unclear if the companies forcing these products on us have contempt for us or simply don’t know what good looks like. Or perhaps it's both, with the Business Idiot resenting us for not scarfing down whatever they serve us, as that's what's worked before.

They don't really understand their customers — they understand what a customer pays for and how a purchase is made, you know, like how the leaders of banks and asset managers during the subprime mortgage crisis didn't really think about whether people could pay those mortgages, just that they needed a lot of them to put in a CDO.

The Business Idiot's economy is one built for other Business Idiots. They can only make things that sell to companies that must always be in flux — which is the preferred environment of the Business Idiot, because if they're not perpetually starting new initiatives and jumping on new "innovations," they'd actually have to interact with the underlying production of the company. 

Does the software work? Sometimes! Do successful companies exist that sell like this? Sure! But look at today's software and tell me with a straight face that things feel good to use.

And something like generative AI was inevitable: an industry claiming to change the world that never really does so, full of businesses that don’t function as businesses, full of flimflam and half-truths used to impress people who will likely never interact with it, or do so in only a passing way. By chasing out the people that actually build things in favour of the people that sell them, we’ve built an economy on production puppetry — just like generative AI, and especially like ChatGPT.

These people are antithetical to what’s good in the world, and their power deprives us of happiness, the ability to thrive, and honestly any true innovation. The Business Idiot thrives on alienation — on distancing themselves from the customer and the thing they consume, and in many ways from society itself. Mark Zuckerberg wants us to have fake friends, Sam Altman wants us to have fake colleagues, and an increasingly loud group of executives salivate at the idea of replacing us with a fake version of us that will make a shittier version of what we make for a customer that said executive doesn’t fucking care about. 

They’re building products for other people that don’t interact with the real world. We are no longer their customers, and so we’re worth even less than before — which, in a world dominated by shareholder supremacy, was never all that much to begin with.

They do not exist to make us better — the Business Idiot doesn’t really care about the real world, or what you do, or who you are, or anything other than your contribution to their power and wealth. This is why so many squealing little middle managers look up to the Musks and Altmans of the world, because they see in them the same kind of specious corporate authoritarian, someone above work, and thinking, and knowledge. 


One of the most remarkable things about the Business Idiot is their near-invulnerability.

Modern management is resource control: shifting blame away from the manager (who should hold responsibility; after all, if they don’t, why do they have that job?) and onto the laborer, knowing that the organization and the media will back it up.

While you may think I’m making a generalization, the 2021-2023 anti-remote work push in the media was grotesque proof of where the media’s true allegiance lies — the media happily manufactured consent for return-to-office mandates from large companies by framing remote work as some sort of destructive force, doing all it could to disguise the fact that modern management has no fucking idea how the workplace actually works.

These articles were effectively fan fiction for managers and bosses demanding we return to the office — ridiculous statements about how remote work “failed young people” (it didn’t) or how employees needed the office more than their employers did because “the chitchat, lunches and happy hours” are so important. Had any of those reporters spoken to an actual worker, they’d have heard that workers value more time with their families over the grind of a daily commute softened with the promise of an occasional company pizza party — which usually happens outside of typical working hours, anyway.

These articles rarely (if ever) cared about whether remote work was more productive, or about the fact that the disconnect appeared to be between managers and workers. It was, from the very beginning, about crushing the life out of a movement that gave workers more flexibility and mobility while suppressing managers’ ability to hide how little work they did. I give credit to CNBC in 2023 for saying the quiet part out loud — that “...the biggest disadvantage of remote work that employers cite is how difficult it is to observe and monitor employees” — because when you can’t do that, you have to (eugh!) actually know what they’re doing and understand their work.

Yet higher up the chain, the invulnerability continues. 

CEOs may get fired — and more are getting fired than ever, although sadly not the ones we want — but always receive some sort of golden parachute payoff at the end before walking into another role at another organization doing exactly the same level of nothing.  

Yet before that happens, a CEO is allowed to pull basically every lever before taking a single ounce of accountability — laying people off, freezing pay, moving from salaried to contracted workers, closing down sites, cutting certain products, or even spending more fucking money. If you or I misallocated billions of dollars on stupid ideas, we’d be fired. CEOs, somehow, get paid more.

Let me give you an example. Microsoft CEO Satya Nadella said that the “ultimate computer…is the mixed reality world” and that Microsoft would be “inventing new computers and new computing” in 2016, pushing his senior executives to tell reporters that HoloLens was Microsoft’s next wave of computing in 2017, selling hundreds of millions of dollars’ worth of headsets to the military in 2019, then debuting HoloLens 2 at BUILD 2019 only for the on-stage demo to break in real time, calling for a referendum on capitalism in 2020, then saying he couldn’t overstate the breakthrough of the metaverse in 2021. Let’s see what he said about it (props to Preston Gralla of ComputerWorld for finding this):

Nadella, in that 2021 keynote, made big promises: “When we talk about the metaverse, we’re describing both a new platform and a new application type, similar to how we talked about the web and websites in the early ’90s…. In a sense, the metaverse enables us to embed computing into the real world and to embed the real world into computing, bringing real presence to any digital space. For years, we’ve talked about creating this digital representation of the world, but now, we actually have the opportunity to go into that world and participate in it.”

As Gralla notes, Nadella said Microsoft would be “…beefing up development in projects such as its Mixed Reality Tool Kit MRTK, the virtual reality workspace project AltspaceVR (which it had bought back in 2017), its HoloLens virtual reality headset, and its industrial metaverse unit, among others,” before firing 100% of its industrial metaverse core team along with those behind MRTK, shutting down AltspaceVR in 2023, and discontinuing HoloLens 2 entirely in 2024.

Nadella was transparently copying Meta and Mark Zuckerberg’s ridiculous “metaverse” play, and absolutely nothing happened to him as a result. The media — outlets like The Verge and independents like Ben Thompson — happily boosted the metaverse idea when it was announced and conveniently forgot it the second that Microsoft and Meta wanted to talk about AI (no, really, both The Verge and Ben Thompson were ready and waiting) without a second’s consideration about what was previously said. 

A true Business Idiot never admits wrongdoing, and the more powerful the Business Idiot is, the more likely there are power structures that exist to avoid them having to do so. The media, captured by other Business Idiots, has become poisoned by power, deferring to its whims and ideals, treating CEOs with more respect and dignity, and crediting them with more intelligence, than anyone that works for them. When a big company decides it wants to “do AI,” the natural reaction is to ask “how?” and write down the answer, rather than think about whether it’s possible, or whether the company might profit (say, through a boosted share price) by having whatever it says printed verbatim.

These people aren’t challenged by the media, or their employees, because their employees are vulnerable all the time, and often encouraged to buy into whatever the bullshit du jour is, like hostages held by a terrorist group who eventually fall victim to Stockholm syndrome. They’re only challenged by shareholders, who are agnostic about idiocy because it’s not core to value in any meaningful sense, as we’ve seen with crypto, the metaverse and AI, and shareholders will tolerate infinite levels of idiocy if it boosts the value of their holdings.

It goes further, too. 2021 saw the largest amount of venture capital invested in the last decade, a record-breaking $643 billion, with a remarkable $329.5 billion of that invested in the US alone. Some of the biggest deals included Amazon reseller aggregator Thrasio, which raised $1 billion in October 2021 and filed for bankruptcy in February 2025; cloud security company Lacework, which raised $525 million in January 2021, then $1.3 billion in October 2021, and was rumoured to be up for sale to Wiz, only for the deal to collapse; and autonomous car company Cruise, which raised $2.75 billion in 2021 and was killed off in December 2024.

The people who lose their livelihoods — those who took stock in lieu of cash compensation, and those who end up getting laid off at the end — are always workers, while people like Lacework co-CEO Jay Parikh (who oversaw “reckless spending” and “management dysfunction,” according to The Information) can walk into highly-paid positions at companies like Microsoft, as he did in October 2024, a few months after a fire sale to cybersecurity firm Fortinet for around $200 million, according to analysts.

It doesn’t matter if you’re wrong, or if you run your company badly, because the Business Idiot is infallible, and judged too by fellow disconnected Business Idiots. In a just society, nobody would want to touch any of the C-suite that oversaw a company that handed out Nintendo Switches to literally anyone who booked a meeting (as with Lacework). Instead, the stank remains on the employees alone. 

One point about this: Meta’s most recent layoffs were explicitly said to target low-performers, needlessly harming the future job prospects of those handed a pink slip in an already fucked tech job market. It was cruel and pointless and — I’m certain — a big fat lie.

Meta is spending big on AI, has spent big on the metaverse (which went nowhere), and owns two dying platforms (Instagram and Facebook) and one that’s hard to monetize (WhatsApp). It needs to get costs down and improve margins. Layoffs are one way to do that. And things are getting bad enough that Meta is now, according to The Information, walking around Silicon Valley begging other big tech companies for money to train its open source “Llama” LLM.

The “low-performer” jibe is an unnecessary twist of the knife, demonstrating that Meta would happily throw its workers under the bus if it serves its interests — because the optics of firing low-performers are different to, say, firing a bunch of people because you keep spunking money on dead-end vanity projects and me-too products that nobody wants or uses.

Mark Zuckerberg, I add, owns an island in Hawaii. The idea that he even thinks this much about Meta is disgraceful. Go outside, you fucking freak.

It’s so easy, and perhaps inevitable, to feel a sense of nihilism about it all. Nothing matters. It’s all symbolic. Our world is filled with companies run by people who don’t interact with the business, and that raise money from venture capitalists who neither run businesses nor have any real experience doing so. And despite the fact that these people exist several abstractions away from reality, the things that they do and the decisions they make impact us all. And it’s hard to imagine how to fix it.


We live in a system of iniquity, dominated by people that do not interact with the real world, who have created an entire system run by their fellow Business Idiots. The Rot Economy’s growth-at-all-costs mania is a symptom of the grander problem of shareholder supremacy, and the single-minded economic focus on shareholder value inevitably ends in an economy run by and for Business Idiots. There is a line, and it ends here — with layoffs, the destruction of our planet and our economy and our society, and a rising tide of human misery whose source nobody really knows, and so we don’t know who to blame, or for what.

If our economy actually worked as a true meritocracy — where we didn’t have companies run by people who don’t use their products or understand how they’re made, and who hire similarly-specious people — these people would collapse under the pressure of having to know their ass from their earhole.

Yet none of this would be possible without the enabling layers, and those layers are teeming with both Business Idiots and those unfortunate enough to have learned from them. The tech media has enabled every single bubble, without exception, accepting every single narrative fed to them by VCs and startups, with even critical reporters still accepting the lunacy of a company like OpenAI just because everybody else does too.

Let’s be honest, when you remove all the money, our current tech industry is a disgrace.

Our economy is held up by NVIDIA, a company that makes most of its money selling GPUs to other companies, primarily so that they can start losing money selling software that might eventually make them money, just not today. NVIDIA is defined by massive peaks and valleys, as it jumps on trends and bandwagons at the right time, even though these bandwagons always come to an abrupt halt.

The other companies propping up our economy include Tesla, a meme stock car company with a deteriorating brand and a chief executive famous for his divorces from both reality and multiple women, along with a flagrant racism that may cost the company its life. A company that we are watching die in real time, with a stagnant line-up and actual fucking competition from companies that are spending big on innovation.

In Europe and elsewhere, BYD is eating Tesla’s lunch, offering better products for half the price — and far less stigma. And this is just the first big Chinese automotive brand to go global. Others — like Chery — are enjoying rapid growth outside of China, because these cars are actually quite good and affordable, even when you factor in the impact of things like tariffs. 

Hey, remember when Tesla fired all the people in its charging network — despite that being one of the most profitable and valuable parts of the business? And then hired them back because it turns out they were actually useful?

This is a good example of managerial alienation — decisions made by non-workers who don’t understand their customers, their businesses, or the work their employees do. And let’s not forget the Cybertruck, a monstrosity in both how it looks and how it’s sold, one that’s illegal in the majority of developed countries because it’s a death-trap for drivers and pedestrians alike. Oh, and one that nobody actually wants, with Tesla sitting on a quarter’s worth of inventory that it can’t sell.

Elsewhere is Meta, a collapsing social network with 99% of its revenue based on advertising to an increasingly aged population, and a monopoly so flagrantly abusive in its contempt for its customers that it’s at times difficult to call Instagram or Facebook social networks.

Mark Zuckerberg had to admit to the Senate Judiciary Committee that people don’t use Facebook as a social network anymore. That’s because the platform is so fucking rotten, run by a company alienated from its user base, its decrepit product actively hostile to anybody trying to use it.

And, more fundamentally, what’s the point of posting on Facebook if your friends won’t see it, because Meta’s algorithm decided it wouldn’t drive engagement? 

Meta is a monument to disconnection, a company that runs counter to its own mission to connect people, run by Mark Zuckerberg, a man who hasn’t had a good idea since he stole one from the Winklevoss brothers. The solution to all that ails him? Adding generative AI to every part of Meta, which…uh…was meant to do something other than burn $72 billion in capital expenditures in 2025, right? It isn’t clear what was meant to happen, but the Wall Street Journal reports that Meta’s AI chatbots are, and I quote, “empowered to engage in ‘romantic role-play’ that can turn explicit” — even with children. In a civil society, Zuckerberg would be ousted immediately for creating a pedophilic chatbot. Instead, four days after the story ran, everyone cheered Meta’s better-than-expected quarterly earnings.

In Redmond, Microsoft sits atop multiple monopolies, using tariffs as an excuse to juice flailing Xbox revenue as it invests billions of dollars in OpenAI so that OpenAI can spend billions of dollars on cloud compute, losing billions more in the process, requiring Microsoft to invest further money to keep it alive. All because Microsoft wanted generative AI in Bing. What a fucking waste!

All while raising the cost of its office suite — a monopoly it’s only able to hold because it acted so underhandedly in the 1990s.

Amazon lumbers listlessly through life, its giant labor-abuse machine shipping things overnight at whatever cost necessary to crush the life out of any other source of commerce, its cloud services and storage arm unsure who to copy next. Is it Microsoft? Is it Google? Who knows! But one analyst believes it’s making $5 billion in revenue from AI in 2025 — and spending $105 billion in capital expenditures. There are slot machines with a better ROI than this shit.

Again, it’s a company that’s totally exploitative of its customers, no longer acting as a platform that helps people find the shit they need, but as one that directs them to the products that pay the most for prime advertising real estate, no matter whether they’re good or safe.

Let’s be clear: Amazon’s recklessness will kill someone, if it hasn’t already.   

Then there’s the worst of them — Google. Most famous for its namesake, a search engine that it has juiced as hard as possible, and will continue to juice until the inevitable antitrust remedies rob it of its power and sever its advertising monopoly. But don’t worry, Google also has a generative AI thing, for some reason, and no, you don’t have a choice about using it, because it’s now stapled onto Google Search and Google Assistant.

At no point do any of these companies seem to be focused on making our lives better, or selling us any kind of real future. They exist to maintain the status quo, where cloud computing allows them to retain their various fiefdoms.

They’re alienated from people.

They’re alienated from workers.

They’re alienated from their customers.

They’re alienated from the world.

They’re deeply antisocial and misanthropic — as demonstrated by Zuck’s moronic AI social network comments.

And AI is a symptom of this stupidity and hubris, and a sign of the reckoning to come.

They cut, and cut, and stagnated. Their hope is a product that will be adopted by billions of imaginary customers and companies, and will allow them to cut further without becoming just a PO Box and a domain name.

We have to recognize that what we’re seeing now with generative AI isn’t a fluke or a bug, but a feature of a system that’s rapacious and short-term by its very nature, and that doesn’t define value as we do, because “value” gets defined by a faceless shareholder as “growth.” And this system can only exist with the contribution of the Business Idiot. These are the vanguard — the foot soldiers — of this system, and a key reason why everything is so terrible all the time, and why nothing seems to be getting better.

Breaking from that status quo would require a level of bravery that they don’t have — and that perhaps isn’t possible in the current economic system.

These people are powerful, and have big platforms. They’re people like Derek Thompson, famed co-author of the “abundance” agenda, who celebrates the idea of a fictitious version of ChatGPT that can entirely plan and execute a 5-year-old’s birthday party, or his co-author Ezra Klein, who, while recording a podcast his own researchers likely listened to, talked proudly about replacing their work with OpenAI’s broken Deep Research product, because anything that can be outsourced must be, and all research is “looking at stuff that is relevant.”

And really, that’s the most grotesque part about Business Idiots. They see every part of our lives as a series of inputs and outputs. They boast about how many books they’ve read rather than the content of said books, about how many hours they work (even though they never, ever work that many), about how high a level they’ve reached in a video game they clearly don’t play, about the money they’ve raised and the scale they’ve raised it at, and about how expensive and fancy their kitchen gadgets are. Everything is dominance, acquisition, growth and possession over any lived experience, because their world is one where the journey doesn’t matter; their journeys are riddled with privilege and the persecution of others in the pursuit of success.

These people don’t want to automate work, they want to automate existence. They fantasize about hitting a button and something happening, because experiencing — living! — is beneath them, or at least your lives and your wants and your joy are. They don’t want to plan their kids’ birthday parties. They don’t want to research things. They don’t value culture or art or beauty. They want to skip to the end, hit fast-forward on anything, because human struggle is for the poor or unworthy. 

When you are steeped in privilege and/or have earned everything through a mixture of stolen labor and office pantomime, the idea of “effort” is always negative. The process of creation — of affection, of love, of kindness, of using time not just for an action or output — is disgusting to the Business Idiot, because those are times they could be focused on themselves, or on some nebulous self-serving “vision” that, when stripped back to its fundamental truth, is either moronic or malevolent. They don’t realise that you hire a worker for the person doing the work rather than just the work itself, which is why they don’t see why it’s so insulting to outsource their interactions with human beings.

You’ll notice these people never bring up examples of automating actual work — the mind-numbing grunt work that we all face in the workplace — because they neither know nor care what that is. Their “problems” are the things that frustrate them, like dealing with other people, or existing outside of the gilded circles of socialite fucks or plutocrats, or just things that are an inevitable facet of working life, like reading an email. Your son’s birthday party or a conflict with a friend can, indeed, be stressful, but these are not problems to be automated away. They are the struggles that make us human, the things that make us grow, the things that make us who we are, which isn’t a problem for anybody other than somebody who doesn’t believe they need to change in any way. It’s a worldview that’s both powerful and powerless at the same time — a nihilistic way of seeing our lives as a collection of events we accept or dismiss like a system prompt, the desperate pursuit of such efficient living that you barely feel a thing until you die.

I’ve spent years writing about these people without giving them a name, because categorizing anything is difficult. I can’t tell you how long it took for me to synthesize the Rot Economy from the broader trends I saw in tech and elsewhere, how long it took for me to thread that particular needle, to identify the various threads that unified events that are otherwise separate and distinct.

I am but one person. Everything you’ve read in my newsletter to this point has been something I’ve had to learn. Building an argument and turning it into words that other people will read — often at the same time — doesn’t come naturally to anyone. It’s something you have to deliberately work at. It’s imperfect. There are typos. These newsletters increase in length and breadth and have so many links, and I will never, ever change my process, because part of said process is learning, relearning, processing, getting pissed off, writing, rewriting, and so on and so forth.

This process makes what I do possible, and the idea of having someone automate it disgusts me, not because I’m special or important, but because my work is not the result of me reading a bunch of links or writing a bunch of words. This piece is not just 13,000 words long — it’s the result of the 800,000 or more words I wrote before it, the hundreds of stories I’ve read in the past, the hours of conversations with friends and editors, years of accumulating knowledge and, yes, growing with the work itself. 

This is not something that you create through a summation of content vomited by an AI, but the chaotic histories of a human being mashed against the challenge of trying to process it. Anyone who believes otherwise is a fucking moron — or, better put, just another Business Idiot.